From 0f2eddf5e8b100549331258159b16133a49d8bd1 Mon Sep 17 00:00:00 2001
From: Brandon Castellano
Date: Sat, 7 Sep 2024 21:22:41 -0400
Subject: [PATCH] Migrate to main branch development (#419)

* [cli] Fix SyntaxWarning due to incorrect escaping #400

* [cli] Fix exception when detect-hash is set as default detector

* [cli] Fix new detectors not working with default-detector

* [cli] Fix outstanding CodeQL lint warnings.

* [cli] Unify type hints and clean up imports

* add detect-hash and detect-hist as options for default-detector (#403)

* Bump jinja2 from 3.1.3 to 3.1.4 in /website (#397)

Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.3 to 3.1.4.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.1.3...3.1.4)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* [dist] Fix Github license detection.

* [dist] Use Github license template. Fixes #365.

* [docs] Add CITATION.cff #399

* [dist] Prepare for v0.6.4 release.

* [build] Auto-generate .version_info and verify installer version.

* [build] Add missing pre-release script invocation for Windows build on Github.

* [build] Fix incorrect path to pre_release script.

* [build] Omit unnecessary files in distributed docs.

* [dist] Update Windows installer for v0.6.4. Bump OpenCV to 4.10.

* [build] Use specific OpenCV version for Windows build.

* [dist] Release v0.6.4.

* [docs] Update changelog and image URI.

* add detect-hash and detect-hist as options for default-detector

---------

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Breakthrough

* [dist] Prepare changelog for next release.
* [project] Switch from yapf to ruff for formatting

* [project] Use ruff for linting project

Now passes `ruff check` with some fixes suppressed.

* [project] Enable more lint rules.

* [docs] Change single quotes to double quotes.

* Transition from yapf to ruff (#418)

* [project] Enable more lint rules.

* Bump actions/download-artifact from 3 to 4.1.7 in /.github/workflows in the github_actions group across 1 directory (#417)

Bump actions/download-artifact

Bumps the github_actions group with 1 update in the /.github/workflows directory: [actions/download-artifact](https://github.com/actions/download-artifact).

Updates `actions/download-artifact` from 3 to 4.1.7
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v3...v4.1.7)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  dependency-group: github_actions
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* [build] Fix incorrect version conversion for Pyinstaller build

* [build] Update workflow actions.

* [build] Update workflow actions.

* Revert "[build] Update workflow actions."

Mistaken merge commit. This reverts commit c23eee83b17d0b51e4e55dad852abcf0441d4b91.
---------

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

---------

Signed-off-by: dependabot[bot]
Co-authored-by: moritzbrantner <31051084+moritzbrantner@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
---
 .style.yapf                                 |    6 -
 dist/pre_release.py                         |   29 +-
 docs/api.rst                                |   10 +-
 docs/conf.py                                |  113 +-
 docs/generate_cli_docs.py                   |  152 +--
 pyproject.toml                              |   51 +-
 scenedetect/__init__.py                     |   44 +-
 scenedetect/__main__.py                     |   25 +-
 scenedetect/_cli/__init__.py                | 1028 ++++++++++---------
 scenedetect/_cli/config.py                  |  187 ++--
 scenedetect/_cli/context.py                 |  677 ++++++------
 scenedetect/_cli/controller.py              |  196 ++--
 scenedetect/_thirdparty/__init__.py         |    1 -
 scenedetect/_thirdparty/simpletable.py      |   46 +-
 scenedetect/backends/__init__.py            |   17 +-
 scenedetect/backends/moviepy.py             |   28 +-
 scenedetect/backends/opencv.py              |   99 +-
 scenedetect/backends/pyav.py                |   59 +-
 scenedetect/detectors/__init__.py           |    3 +-
 scenedetect/detectors/adaptive_detector.py  |   27 +-
 scenedetect/detectors/content_detector.py   |   32 +-
 scenedetect/detectors/hash_detector.py      |   14 +-
 scenedetect/detectors/histogram_detector.py |   22 +-
 scenedetect/detectors/threshold_detector.py |   71 +-
 scenedetect/frame_timecode.py               |  133 +--
 scenedetect/platform.py                     |  112 +-
 scenedetect/scene_detector.py               |   30 +-
 scenedetect/scene_manager.py                |  420 ++++----
 scenedetect/stats_manager.py                |   75 +-
 scenedetect/video_manager.py                |  218 ++--
 scenedetect/video_splitter.py               |  147 +-
 scenedetect/video_stream.py                 |   16 +-
 setup.py                                    |    3 +-
 tests/__init__.py                           |    3 +-
 tests/conftest.py                           |   22 +-
 tests/test_api.py                           |   44 +-
 tests/test_backend_opencv.py                |    5 +-
 tests/test_backend_pyav.py                  |   15 +-
 tests/test_backwards_compat.py              |   44 +-
 tests/test_cli.py                           |  367 ++++---
 tests/test_detectors.py                     |   69 +-
 tests/test_frame_timecode.py                |  194 ++--
 tests/test_platform.py                      |   20 +-
 tests/test_scene_manager.py                 |   63 +-
 tests/test_stats_manager.py                 |   79 +-
 tests/test_video_splitter.py                |   16 +-
 tests/test_video_stream.py                  |   72 +-
 website/pages/changelog.md                  |   10 +
 website/pages/contributing.md               |    4 +-
 49 files changed, 2835 insertions(+), 2283 deletions(-)
 delete mode 100644 .style.yapf

diff --git a/.style.yapf b/.style.yapf
deleted file mode 100644
index 9f089c5b..00000000
--- a/.style.yapf
+++ /dev/null
@@ -1,6 +0,0 @@
-[style]
-based_on_style = yapf
-spaces_before_comment = 15, 20
-indent_width = 4
-split_before_logical_operator = true
-column_limit = 100
diff --git a/dist/pre_release.py b/dist/pre_release.py
index c40751e1..11d00154 100644
--- a/dist/pre_release.py
+++ b/dist/pre_release.py
@@ -4,9 +4,13 @@
 sys.path.append(os.path.abspath("."))
 
 import scenedetect
+
+
 VERSION = scenedetect.__version__
 
-if len(sys.argv) <= 2 or not ("--ignore-installer" in sys.argv):
+run_version_check = ("--ignore-installer" not in sys.argv)
+
+if run_version_check:
     installer_aip = ''
     with open("dist/installer/PySceneDetect.aip", "r") as f:
         installer_aip = f.read()
@@ -16,12 +20,19 @@
     with open("dist/.version_info", "wb") as f:
         v = VERSION.split(".")
         assert 2 <= len(v) <= 3, f"Unrecognized version format: {VERSION}"
-
-        if len(v) == 3:
-            (maj, min, pat) = int(v[0]), int(v[1]), int(v[2])
-        else:
-            (maj, min, pat) = int(v[0]), int(v[1]), 0
-
+        if len(v) < 3:
+            v.append("0")
+        (maj, min, pat, bld) = v[0], v[1], v[2], 0
+        # If either major or minor have suffixes, assume it's a dev/beta build and set
+        # the final component to 999.
+        if not min.isdigit():
+            assert "-" in min
+            min = min[:min.find("-")]
+            bld = 999
+        if not pat.isdigit():
+            assert "-" in pat
+            pat = pat[:pat.find("-")]
+            bld = 999
         f.write(f"""# UTF-8
 #
 # For more details about fixed file info 'ffi' see:
@@ -30,8 +41,8 @@
 ffi=FixedFileInfo(
 # filevers and prodvers should be always a tuple with four items: (1, 2, 3, 4)
 # Set not needed items to zero 0.
-filevers=(0, {maj}, {min}, {pat}),
-prodvers=(0, {maj}, {min}, {pat}),
+filevers=({maj}, {min}, {pat}, {bld}),
+prodvers=({maj}, {min}, {pat}, {bld}),
 # Contains a bitmask that specifies the valid bits 'flags'r
 mask=0x3f,
 # Contains a bitmask that specifies the Boolean attributes of the file.
diff --git a/docs/api.rst b/docs/api.rst
index 7271b42c..ab34f97a 100644
--- a/docs/api.rst
+++ b/docs/api.rst
@@ -61,7 +61,7 @@ To get started, the :func:`scenedetect.detect` function takes a path to a video
 .. code:: python
 
     from scenedetect import detect, ContentDetector
-    scene_list = detect('my_video.mp4', ContentDetector())
+    scene_list = detect("my_video.mp4", ContentDetector())
 
 ``scene_list`` is now a list of :class:`FrameTimecode ` pairs representing the start/end of each scene (try calling ``print(scene_list)``). Note that you can set ``show_progress=True`` when calling :func:`detect ` to display a progress bar with estimated time remaining.
 
@@ -70,7 +70,7 @@ Next, let's print the scene list in a more readable format by iterating over it:
 .. code:: python
 
     for i, scene in enumerate(scene_list):
-        print('Scene %2d: Start %s / Frame %d, End %s / Frame %d' % (
+        print("Scene %2d: Start %s / Frame %d, End %s / Frame %d" % (
             i+1,
             scene[0].get_timecode(), scene[0].get_frames(),
             scene[1].get_timecode(), scene[1].get_frames(),))
 
@@ -80,8 +80,8 @@ Now that we know where each scene is, we can also :ref:`split the input video `
 
-PySceneDetect outputs messages to a logger named ``pyscenedetect`` which does not have any default handlers. You can use :func:`scenedetect.init_logger ` with ``show_stdout=True`` or specify a log file (verbosity can also be specified) to attach some common handlers, or use ``logging.getLogger('pyscenedetect')`` and attach log handlers manually.
+PySceneDetect outputs messages to a logger named ``pyscenedetect`` which does not have any default handlers. You can use :func:`scenedetect.init_logger ` with ``show_stdout=True`` or specify a log file (verbosity can also be specified) to attach some common handlers, or use ``logging.getLogger("pyscenedetect")`` and attach log handlers manually.
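The four-component version parsing that this patch adds to dist/pre_release.py can be sketched as a standalone helper. This is a hedged illustration, not the patch's code: `parse_version_info` is a hypothetical name (the patch inlines the logic), and it returns the tuple rather than writing the PyInstaller version file.

```python
# Hedged sketch of the version parsing added in dist/pre_release.py.
# `parse_version_info` is a hypothetical name; the patch inlines this logic.
def parse_version_info(version: str) -> tuple:
    v = version.split(".")
    assert 2 <= len(v) <= 3, f"Unrecognized version format: {version}"
    if len(v) < 3:
        v.append("0")
    maj, minor, pat, bld = v[0], v[1], v[2], 0
    # A suffix such as "-dev1" marks a dev/beta build: strip it and set the
    # fourth (build) component to 999, as the patch does.
    if not minor.isdigit():
        assert "-" in minor
        minor = minor[: minor.find("-")]
        bld = 999
    if not pat.isdigit():
        assert "-" in pat
        pat = pat[: pat.find("-")]
        bld = 999
    return (int(maj), int(minor), int(pat), bld)
```

Under this sketch, the new package version "0.6.5-dev1" maps to installer file version (0, 6, 5, 999), while "0.6.4" maps to (0, 6, 4, 0).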
 =======================================================================
diff --git a/docs/conf.py b/docs/conf.py
index 0cb4f243..60ad6188 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # Configuration file for the Sphinx documentation builder.
 #
@@ -15,15 +14,15 @@
 import os
 import sys
 
-sys.path.insert(0, os.path.abspath('..'))
+sys.path.insert(0, os.path.abspath(".."))
 
 from scenedetect import __version__ as scenedetect_version
 
 # -- Project information -----------------------------------------------------
 
-project = 'PySceneDetect'
-copyright = '2014-2024, Brandon Castellano'
-author = 'Brandon Castellano'
+project = "PySceneDetect"
+copyright = "2014-2024, Brandon Castellano"
+author = "Brandon Castellano"
 
 # The short X.Y version
 version = scenedetect_version
@@ -36,49 +35,49 @@
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
 extensions = [
-    'sphinx.ext.napoleon',
-    'sphinx.ext.autodoc',
+    "sphinx.ext.napoleon",
+    "sphinx.ext.autodoc",
 ]
 
 autoclass_content = "both"
 autodoc_member_order = "groupwise"
-autodoc_typehints = 'description'
-autodoc_typehints_format = 'short'
+autodoc_typehints = "description"
+autodoc_typehints_format = "short"
 
 # Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
+templates_path = ["_templates"]
 
 # The suffix(es) of source filenames.
 # You can specify multiple suffix as a list of string:
 #
 # source_suffix = ['.rst', '.md']
-source_suffix = '.rst'
+source_suffix = ".rst"
 
 # The root toctree document.
-root_doc = 'index'
+root_doc = "index"
 
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.
 #
 # This is also used if you do content translation via gettext catalogs.
 # Usually you set "language" from the command line for these cases.
-language = 'en'
+language = "en"
 
 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
 # This pattern also affects html_static_path and html_extra_path .
-exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
+exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
 
 # The name of the Pygments (syntax highlighting) style to use.
-pygments_style = 'sphinx'
+pygments_style = "sphinx"
 
 # -- Options for HTML output -------------------------------------------------
 
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
 # so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
-html_css_files = ['pyscenedetect.css']
+html_static_path = ["_static"]
+html_css_files = ["pyscenedetect.css"]
 
 # Custom sidebar templates, must be a dictionary that maps document names
 # to template names.
@@ -93,40 +92,37 @@
 # -- Options for HTMLHelp output ---------------------------------------------
 
 # Output file base name for HTML help builder.
-htmlhelp_basename = 'PySceneDetectdoc'
+htmlhelp_basename = "PySceneDetectdoc"
 
 # -- Options for LaTeX output ------------------------------------------------
 
 latex_elements = {
-    # The paper size ('letterpaper' or 'a4paper').
-    #
-    # 'papersize': 'letterpaper',
-
-    # The font size ('10pt', '11pt' or '12pt').
-    #
-    # 'pointsize': '10pt',
-
-    # Additional stuff for the LaTeX preamble.
-    #
-    # 'preamble': '',
-
-    # Latex figure (float) alignment
-    #
-    # 'figure_align': 'htbp',
+    # The paper size ('letterpaper' or 'a4paper').
+    #
+    # 'papersize': 'letterpaper',
+    # The font size ('10pt', '11pt' or '12pt').
+    #
+    # 'pointsize': '10pt',
+    # Additional stuff for the LaTeX preamble.
+    #
+    # 'preamble': '',
+    # Latex figure (float) alignment
+    #
+    # 'figure_align': 'htbp',
 }
 
 # Grouping the document tree into LaTeX files. List of tuples
 # (source start file, target name, title,
 #  author, documentclass [howto, manual, or own class]).
 latex_documents = [
-    (root_doc, 'PySceneDetect.tex', 'PySceneDetect Documentation', 'Brandon Castellano', 'manual'),
+    (root_doc, "PySceneDetect.tex", "PySceneDetect Documentation", "Brandon Castellano", "manual"),
 ]
 
 # -- Options for manual page output ------------------------------------------
 
 # One entry per manual page. List of tuples
 # (source start file, name, description, authors, manual section).
-man_pages = [(root_doc, 'pyscenedetect', 'PySceneDetect Documentation', [author], 1)]
+man_pages = [(root_doc, "pyscenedetect", "PySceneDetect Documentation", [author], 1)]
 
 # -- Options for Texinfo output ----------------------------------------------
 
@@ -134,31 +130,38 @@
 # (source start file, target name, title, author,
 #  dir menu entry, description, category)
 texinfo_documents = [
-    (root_doc, 'PySceneDetect', 'PySceneDetect Documentation', author, 'PySceneDetect',
-     'Python API and `scenedetect` command reference.', 'Miscellaneous'),
+    (
+        root_doc,
+        "PySceneDetect",
+        "PySceneDetect Documentation",
+        author,
+        "PySceneDetect",
+        "Python API and `scenedetect` command reference.",
+        "Miscellaneous",
+    ),
 ]
 
 # -- Theme -------------------------------------------------
 
 # TODO: Consider switching to sphinx_material.
-html_theme = 'alabaster'
+html_theme = "alabaster"
 html_theme_options = {
-    'sidebar_width': '235px',
-    'description': 'Version: [%s]' % (release),
-    'show_relbar_bottom': True,
-    'show_relbar_top': False,
-    'github_user': 'Breakthrough',
-    'github_repo': 'PySceneDetect',
-    'github_type': 'star',
-    'tip_bg': '#f0f6fa',
-    'tip_border': '#c2dcf2',
-    'hint_bg': '#f0faf0',
-    'hint_border': '#d3ebdc',
-    'warn_bg': '#f5ebd0',
-    'warn_border': '#f2caa2',
-    'attention_bg': '#f5dcdc',
-    'attention_border': '#ffaaaa',
-    'logo': 'pyscenedetect_logo.png',
-    'logo_name': False,
+    "sidebar_width": "235px",
+    "description": "Version: [%s]" % (release),
+    "show_relbar_bottom": True,
+    "show_relbar_top": False,
+    "github_user": "Breakthrough",
+    "github_repo": "PySceneDetect",
+    "github_type": "star",
+    "tip_bg": "#f0f6fa",
+    "tip_border": "#c2dcf2",
+    "hint_bg": "#f0faf0",
+    "hint_border": "#d3ebdc",
+    "warn_bg": "#f5ebd0",
+    "warn_border": "#f2caa2",
+    "attention_bg": "#f5dcdc",
+    "attention_border": "#ffaaaa",
+    "logo": "pyscenedetect_logo.png",
+    "logo_name": False,
 }
diff --git a/docs/generate_cli_docs.py b/docs/generate_cli_docs.py
index cd5c6f6f..f2c85c5d 100644
--- a/docs/generate_cli_docs.py
+++ b/docs/generate_cli_docs.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 # Generate formatted CLI documentation for PySceneDetect.
 #
 # Inspired by sphinx-click: https://github.com/click-contrib/sphinx-click
@@ -10,40 +9,39 @@
 
 Run from main repo folder as working directory."""
 
+import inspect
 import os
+import re
 import sys
-import inspect
 import typing as ty
-import re
 from dataclasses import dataclass
 
 # Add parent folder to path so we can resolve `scenedetect` imports.
 currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
 parentdir = os.path.dirname(currentdir)
 sys.path.insert(0, parentdir)
 
-from scenedetect._cli import scenedetect
-
 # Third-party imports
 import click
 
+from scenedetect._cli import scenedetect
+
 StrGenerator = ty.Generator[str, None, None]
 
-INDENT = ' ' * 4
+INDENT = " " * 4
 
-PAGE_SEP = '*' * 72
-TITLE_SEP = '=' * 72
-HEADING_SEP = '-' * 72
+PAGE_SEP = "*" * 72
+TITLE_SEP = "=" * 72
+HEADING_SEP = "-" * 72
 
 OPTION_HELP_OVERRIDES = {
-    'scenedetect': {
-        'config':
-            'Path to config file. See :ref:`config file reference ` for details.'
+    "scenedetect": {
+        "config": "Path to config file. See :ref:`config file reference ` for details."
     },
 }
 
-TITLE_LEVELS = ['*', '=', '-']
+TITLE_LEVELS = ["*", "=", "-"]
 
-INFO_COMMANDS = ['help', 'about', 'version']
+INFO_COMMANDS = ["help", "about", "version"]
 
 INFO_COMMAND_OVERRIDE = """
 .. _command-help:
@@ -73,26 +71,28 @@ def patch_help(s: str, commands: ty.List[str]) -> str:
     # Patch some TODOs still not handled correctly below.
     pos = 0
     while True:
-        pos = s.find('global option :option:', pos)
+        pos = s.find("global option :option:", pos)
         if pos < 0:
             break
-        pos = s.find('<-', pos)
+        pos = s.find("<-", pos)
         assert pos > 0
-        s = s[:pos + 1] + 'scenedetect ' + s[pos + 1:]
+        s = s[: pos + 1] + "scenedetect " + s[pos + 1 :]
+
+    for command in [command for command in commands if command not in INFO_COMMANDS]:
 
-    for command in [command for command in commands if not command in INFO_COMMANDS]:
         def add_link(_match: re.Match) -> str:
-            return ':ref:`%s <command-%s>`' % (command, command)
-        s = re.sub('``%s``(?!\\n)' % command, add_link, s)
+            return ":ref:`%s <command-%s>`" % (command, command)
+
+        s = re.sub("``%s``(?!\\n)" % command, add_link, s)
     return s
 
 
 def generate_title(s: str, level: int = 0, len: int = 72) -> StrGenerator:
-    yield '\n'
+    yield "\n"
     if level == 0:
-        yield TITLE_LEVELS[level] * len + '\n'
-    yield s + '\n'
-    yield TITLE_LEVELS[level] * len + '\n\n'
+        yield TITLE_LEVELS[level] * len + "\n"
+    yield s + "\n"
+    yield TITLE_LEVELS[level] * len + "\n\n"
 
 
 @dataclass
@@ -103,11 +103,11 @@ class ReplaceWithReference:
 
 
 def transform_backquotes(s: str) -> str:
-    return s.replace('``', '`').replace('`', '``')
+    return s.replace("``", "`").replace("`", "``")
 
 
 def add_backquotes(match: re.Match) -> str:
-    return '``%s``' % match.string[match.start():match.end()]
+    return "``%s``" % match.string[match.start() : match.end()]
 
 
 def add_backquotes_with_refs(refs: ty.Set[str]) -> ty.Callable[[str], str]:
@@ -115,13 +115,13 @@ def add_backquotes_with_refs(refs: ty.Set[str]) -> ty.Callable[[str], str]:
     references to any found options."""
 
     def _add_backquotes(s: re.Match) -> str:
-        to_add: str = s.string[s.start():s.end()]
-        flag = re.search('-+[\w-]+[^\.\=\s\/]*', to_add)
-        if flag is not None and flag.string[flag.start():flag.end()] in refs:
+        to_add: str = s.string[s.start() : s.end()]
+        flag = re.search("-+[\w-]+[^\.\=\s\/]*", to_add)
+        if flag is not None and flag.string[flag.start() : flag.end()] in refs:
             # add cross reference
-            cross_ref = flag.string[flag.start():flag.end()]
-            option = s.string[s.start():s.end()]
-            return ':option:`%s <%s>`' % (option, cross_ref)
+            cross_ref = flag.string[flag.start() : flag.end()]
+            option = s.string[s.start() : s.end()]
+            return ":option:`%s <%s>`" % (option, cross_ref)
         else:
             return add_backquotes(s)
 
@@ -129,13 +129,13 @@ def _add_backquotes(s: re.Match) -> str:
 
 
 def extract_default_value(s: str) -> ty.Tuple[str, ty.Optional[str]]:
-    default = re.search('\[default: .*\]', s)
+    default = re.search("\[default: .*\]", s)
     if default is not None:
         span = default.span()
         assert span[1] == len(s)
-        s, default = s[:span[0]].strip(), s[span[0]:span[1]][len('[default: '):-1]
+        s, default = s[: span[0]].strip(), s[span[0] : span[1]][len("[default: ") : -1]
         # Double-quote any default values that contain spaces.
-        if ' ' in default and not '"' in default and not ',' in default:
+        if " " in default and '"' not in default and "," not in default:
             default = '"%s"' % default
     return (s, default)
 
@@ -145,57 +145,63 @@ def transform_add_option_refs(s: str, refs: ty.List[str]) -> str:
     # TODO: Match prefix of `global option` and add ref to parent `scenedetect` command option.
     # Replace patch to complete this.
     # -c/--command
-    s = re.sub('-\w/--\w[\w-]*', transform, s)
+    s = re.sub("-\w/--\w[\w-]*", transform, s)
     # --arg=value, --arg=1.2.3, --arg=1,2,3
     s = re.sub('-+[\w-]+=[^"\s\)]+(?<!\.)', transform, s)
 
 
 def format_option(command: click.Command, opt, flags) -> StrGenerator:
     if isinstance(opt, click.Argument):
-        yield '\n.. option:: %s\n' % opt.name
+        yield "\n.. option:: %s\n" % opt.name
         return
-    yield '\n.. option:: %s\n' % ', '.join(arg if opt.metavar is None else '%s %s' %
-                                           (arg, opt.metavar)
-                                           for arg in sorted(opt.opts, reverse=True))
+    yield "\n.. option:: %s\n" % ", ".join(
+        arg if opt.metavar is None else "%s %s" % (arg, opt.metavar)
+        for arg in sorted(opt.opts, reverse=True)
+    )
 
-    help = OPTION_HELP_OVERRIDES[command.name][
-        opt.name] if command.name in OPTION_HELP_OVERRIDES and opt.name in OPTION_HELP_OVERRIDES[
-            command.name] else opt.help.strip()
+    help = (
+        OPTION_HELP_OVERRIDES[command.name][opt.name]
+        if command.name in OPTION_HELP_OVERRIDES and opt.name in OPTION_HELP_OVERRIDES[command.name]
+        else opt.help.strip()
+    )
 
     # TODO: Make metavars link to the option as well.
     help, default = extract_default_value(help)
     help = transform_add_option_refs(help, flags)
-    yield '\n    %s\n' % help
+    yield "\n    %s\n" % help
     if default is not None:
-        yield '\n    Default: ``%s``\n' % default
+        yield "\n    Default: ``%s``\n" % default
 
 
-def generate_command_help(ctx: click.Context,
-                          command: click.Command,
-                          parent_name: ty.Optional[str] = None) -> StrGenerator:
+def generate_command_help(
+    ctx: click.Context, command: click.Command, parent_name: ty.Optional[str] = None
+) -> StrGenerator:
     # TODO: Add references to long options. Requires splitting out examples.
     # TODO: Add references to subcommands. Need to add actual refs, since programs can't be ref'd.
    # TODO: Handle dollar signs in examples by having both escaped and unescaped versions
-    yield '\n.. _command-%s:\n' % command.name
-    yield '\n.. program:: %s\n\n' % (
-        command.name if parent_name is None else '%s %s' % (parent_name, command.name))
+    yield "\n.. _command-%s:\n" % command.name
+    yield "\n.. program:: %s\n\n" % (
+        command.name if parent_name is None else "%s %s" % (parent_name, command.name)
+    )
     if parent_name:
-        yield from generate_title('``%s``' % command.name, 1)
+        yield from generate_title("``%s``" % command.name, 1)
 
     replacements = [
-        opt for opts in [param.opts for param in command.params if hasattr(param, 'opts')]
+        opt
+        for opts in [param.opts for param in command.params if hasattr(param, "opts")]
         for opt in opts
     ]
 
     help = command.help
-    help = help.replace('Examples:\n',
-                        ''.join(generate_title('Examples', 0 if not parent_name else 2)))
-    help = help.replace('\b\n', '')
-    help = help.format(scenedetect='scenedetect', scenedetect_with_video='scenedetect -i video.mp4')
+    help = help.replace(
+        "Examples:\n", "".join(generate_title("Examples", 0 if not parent_name else 2))
+    )
+    help = help.replace("\b\n", "")
+    help = help.format(scenedetect="scenedetect", scenedetect_with_video="scenedetect -i video.mp4")
     help = transform_backquotes(help)
     help = transform_add_option_refs(help, replacements)
 
@@ -203,20 +209,19 @@ def generate_command_help(ctx: click.Context,
         if line.startswith(INDENT):
             indent = line.count(INDENT)
             line = line.strip()
-            yield '%s``%s``\n' % (indent * INDENT, line) if line else '\n'
+            yield "%s``%s``\n" % (indent * INDENT, line) if line else "\n"
         else:
-            yield '%s\n' % line
+            yield "%s\n" % line
 
     if command.params:
-        yield '\n'
-        yield from generate_title('Options', 0 if not parent_name else 2)
+        yield "\n"
+        yield from generate_title("Options", 0 if not parent_name else 2)
         for param in command.params:
             yield from format_option(command, param, replacements)
-    yield '\n'
+    yield "\n"
 
 
 def generate_subcommands(ctx: click.Context, commands: ty.List[str]) -> StrGenerator:
-
     processed = set()
 
     for info_command in INFO_COMMANDS:
@@ -224,16 +229,17 @@ def generate_subcommands(ctx: click.Context, commands: ty.List[str]) -> StrGener
         processed.add(info_command)
         yield INFO_COMMAND_OVERRIDE
 
-    yield from generate_title('Detectors', 0)
-    detectors = [command for command in commands if command.startswith('detect-')]
+    yield from generate_title("Detectors", 0)
+    detectors = [command for command in commands if command.startswith("detect-")]
     for detector in detectors:
         yield from generate_command_help(ctx, ctx.command.get_command(ctx, detector), ctx.info_name)
         processed.add(detector)
 
-    yield from generate_title('Commands', 0)
+    yield from generate_title("Commands", 0)
     output_commands = [
-        command for command in commands
-        if (not command.startswith('detect-') and not command in INFO_COMMANDS)
+        command
+        for command in commands
+        if (not command.startswith("detect-") and command not in INFO_COMMANDS)
     ]
     for command in output_commands:
         yield from generate_command_help(ctx, ctx.command.get_command(ctx, command), ctx.info_name)
@@ -246,22 +252,22 @@ def create_help() -> ty.Tuple[str, ty.List[str]]:
     ctx = click.Context(scenedetect, info_name=scenedetect.name)
     commands: ty.List[str] = ctx.command.list_commands(ctx)
-    #ctx.to_info_dict lacks metavar so we have to use the context directly.
+    # ctx.to_info_dict lacks metavar so we have to use the context directly.
     actions = [
-        generate_title('``scenedetect`` 🎬 Command', level=0),
+        generate_title("``scenedetect`` 🎬 Command", level=0),
         generate_command_help(ctx, ctx.command),
         generate_subcommands(ctx, commands),
     ]
     lines = []
     for action in actions:
         lines.extend(action)
-    return ''.join(lines), commands
+    return "".join(lines), commands
 
 
 def main():
     help, commands = create_help()
     help = patch_help(help, commands)
-    with open('docs/cli.rst', 'wb') as f:
+    with open("docs/cli.rst", "wb") as f:
         f.write(help.encode())
diff --git a/pyproject.toml b/pyproject.toml
index ff343399..8186012b 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,53 @@
+#
+# PySceneDetect: Python-Based Video Scene Detector
+# ---------------------------------------------------------------
+# [ Site: http://www.bcastell.com/projects/PySceneDetect/ ]
+# [ Github: https://github.com/Breakthrough/PySceneDetect/ ]
+# [ Documentation: http://www.scenedetect.com/docs/ ]
+#
+# Copyright (C) 2014-2024 Brandon Castellano .
+#
-# TODO: Switch to poetry to try and fix dependents graph. Example:
-# https://github.com/Textualize/rich/blob/master/pyproject.toml
 [build-system]
 requires = ["setuptools"]
 build-backend = "setuptools.build_meta"
+
+[tool.ruff]
+exclude = [
+    "docs"
+]
+line-length = 100
+indent-width = 4
+
+[tool.ruff.format]
+quote-style = "double"
+indent-style = "space"
+skip-magic-trailing-comma = false
+docstring-code-format = true
+
+[tool.ruff.lint]
+select = [
+    # flake8-bugbear
+    "B",
+    # pycodestyle
+    "E",
+    # Pyflakes
+    "F",
+    # isort
+    "I",
+    # TODO - Add additional rule sets (https://docs.astral.sh/ruff/rules/):
+    # pyupgrade
+    #"UP",
+    # flake8-simplify
+    #"SIM",
+]
+ignore = [
+    # TODO: Determine if we should use __all__, a redundant alias, or keep this suppressed.
+    "F401",
+    # TODO: Line too long
+    "E501",
+    # TODO: Do not assign a `lambda` expression, use a `def`
+    "E731",
+]
+fixable = ["ALL"]
+unfixable = []
diff --git a/scenedetect/__init__.py b/scenedetect/__init__.py
index 160bee61..544be977 100644
--- a/scenedetect/__init__.py
+++ b/scenedetect/__init__.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -27,36 +26,45 @@
 except ModuleNotFoundError as ex:
     raise ModuleNotFoundError(
         "OpenCV could not be found, try installing opencv-python:\n\npip install opencv-python",
-        name='cv2',
+        name="cv2",
     ) from ex
 
 # Commonly used classes/functions exported under the `scenedetect` namespace for brevity.
-from scenedetect.platform import init_logger
+from scenedetect.platform import init_logger  # noqa: I001
 from scenedetect.frame_timecode import FrameTimecode
 from scenedetect.video_stream import VideoStream, VideoOpenFailure
 from scenedetect.video_splitter import split_video_ffmpeg, split_video_mkvmerge
 from scenedetect.scene_detector import SceneDetector
-from scenedetect.detectors import ContentDetector, AdaptiveDetector, ThresholdDetector, HistogramDetector, HashDetector
-from scenedetect.backends import (AVAILABLE_BACKENDS, VideoStreamCv2, VideoStreamAv,
-                                  VideoStreamMoviePy, VideoCaptureAdapter)
+from scenedetect.detectors import (
+    ContentDetector,
+    AdaptiveDetector,
+    ThresholdDetector,
+    HistogramDetector,
+    HashDetector,
+)
+from scenedetect.backends import (
+    AVAILABLE_BACKENDS,
+    VideoStreamCv2,
+    VideoStreamAv,
+    VideoStreamMoviePy,
+    VideoCaptureAdapter,
+)
 from scenedetect.stats_manager import StatsManager, StatsFileCorrupt
 from scenedetect.scene_manager import SceneManager, save_images
-
-# [DEPRECATED] DO NOT USE.
-from scenedetect.video_manager import VideoManager
+from scenedetect.video_manager import VideoManager  # [DEPRECATED] DO NOT USE.
 # Used for module identification and when printing version & about info
 # (e.g. calling `scenedetect version` or `scenedetect about`).
-__version__ = '0.6.4'
+__version__ = "0.6.5-dev1"
 
 init_logger()
-logger = getLogger('pyscenedetect')
+logger = getLogger("pyscenedetect")
 
 
 def open_video(
     path: str,
     framerate: Optional[float] = None,
-    backend: str = 'opencv',
+    backend: str = "opencv",
     **kwargs,
 ) -> VideoStream:
     """Open a video at the given path. If `backend` is specified but not available on the current
@@ -83,22 +91,22 @@ def open_video(
     if backend in AVAILABLE_BACKENDS:
         backend_type = AVAILABLE_BACKENDS[backend]
         try:
-            logger.debug('Opening video with %s...', backend_type.BACKEND_NAME)
+            logger.debug("Opening video with %s...", backend_type.BACKEND_NAME)
             return backend_type(path, framerate, **kwargs)
         except VideoOpenFailure as ex:
-            logger.warning('Failed to open video with %s: %s', backend_type.BACKEND_NAME, str(ex))
+            logger.warning("Failed to open video with %s: %s", backend_type.BACKEND_NAME, str(ex))
             if backend == VideoStreamCv2.BACKEND_NAME:
                 raise
             last_error = ex
     else:
-        logger.warning('Backend %s not available.', backend)
+        logger.warning("Backend %s not available.", backend)
     # Fallback to OpenCV if `backend` is unavailable, or specified backend failed to open `path`.
     backend_type = VideoStreamCv2
-    logger.warning('Trying another backend: %s', backend_type.BACKEND_NAME)
+    logger.warning("Trying another backend: %s", backend_type.BACKEND_NAME)
     try:
         return backend_type(path, framerate)
     except VideoOpenFailure as ex:
-        logger.debug('Failed to open video: %s', str(ex))
+        logger.debug("Failed to open video: %s", str(ex))
         if last_error is None:
            last_error = ex
     # Propagate any exceptions raised from specified backend, instead of errors from the fallback.
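The `open_video` hunk above only changes quoting, but the fallback pattern around it is worth seeing in isolation. This is a hedged, self-contained sketch, not the library code: `open_with_fallback` is a hypothetical helper, and `OSError` stands in for scenedetect's `VideoOpenFailure`.

```python
# Hedged sketch of open_video's backend-fallback pattern. Hypothetical helper;
# OSError stands in for scenedetect's VideoOpenFailure exception type.
def open_with_fallback(path, backend, backends, fallback):
    last_error = None
    if backend in backends:
        try:
            # Try the requested backend first.
            return backends[backend](path)
        except OSError as ex:
            last_error = ex
    try:
        # Fall back (open_video falls back to the OpenCV backend).
        return fallback(path)
    except OSError as ex:
        if last_error is None:
            last_error = ex
    # Propagate the requested backend's error, not the fallback's.
    raise last_error
```

The design point mirrored here is the last line: when both the requested backend and the fallback fail, the error surfaced to the caller is the one from the backend the user actually asked for.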
@@ -158,6 +166,6 @@ def detect( show_progress=show_progress, end_time=end_time, ) - if not scene_manager.stats_manager is None: + if scene_manager.stats_manager is not None: scene_manager.stats_manager.save_to_csv(csv_file=stats_file_path) return scene_manager.get_scene_list(start_in_scene=start_in_scene) diff --git a/scenedetect/__main__.py b/scenedetect/__main__.py index 7a8cfb9a..7c9ec1b9 100755 --- a/scenedetect/__main__.py +++ b/scenedetect/__main__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- @@ -12,14 +11,13 @@ # """Entry point for PySceneDetect's command-line interface.""" -from logging import getLogger import sys +from logging import getLogger from scenedetect._cli import scenedetect from scenedetect._cli.context import CliContext from scenedetect._cli.controller import run_scenedetect - -from scenedetect.platform import logging_redirect_tqdm, FakeTqdmLoggingRedirect +from scenedetect.platform import FakeTqdmLoggingRedirect, logging_redirect_tqdm def main(): @@ -27,35 +25,36 @@ def main(): cli_ctx = CliContext() try: # Process command line arguments and subcommands to initialize the context. - scenedetect.main(obj=cli_ctx) # Parse CLI arguments with registered callbacks. + scenedetect.main(obj=cli_ctx) # Parse CLI arguments with registered callbacks. except SystemExit as exit: - help_command = any(arg in sys.argv for arg in ['-h', '--help']) + help_command = any(arg in sys.argv for arg in ["-h", "--help"]) if help_command or exit.code != 0: raise # If we get here, processing the command line and loading the context worked. Let's run # the controller if we didn't process any help requests. - logger = getLogger('pyscenedetect') + logger = getLogger("pyscenedetect") # Ensure log messages don't conflict with any progress bars. If we're in quiet mode, where # no progress bars get created, we instead create a fake context manager. 
This is done here # to avoid needing a separate context manager at each point a progress bar is created. - log_redirect = FakeTqdmLoggingRedirect() if cli_ctx.quiet_mode else logging_redirect_tqdm( - loggers=[logger]) + log_redirect = ( + FakeTqdmLoggingRedirect() if cli_ctx.quiet_mode else logging_redirect_tqdm(loggers=[logger]) + ) with log_redirect: try: run_scenedetect(cli_ctx) except KeyboardInterrupt: - logger.info('Stopped.') + logger.info("Stopped.") if __debug__: raise except BaseException as ex: if __debug__: raise else: - logger.critical('Unhandled exception:', exc_info=ex) - raise SystemExit(1) + logger.critical("Unhandled exception:", exc_info=ex) + raise SystemExit(1) from None -if __name__ == '__main__': +if __name__ == "__main__": main() diff --git a/scenedetect/_cli/__init__.py b/scenedetect/_cli/__init__.py index 1890b9b5..18047181 100644 --- a/scenedetect/_cli/__init__.py +++ b/scenedetect/_cli/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- @@ -18,29 +17,32 @@ """ # Some parts of this file need word wrap to be displayed. 
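The reformatted conditional above picks between a real tqdm log redirect and a no-op stand-in when running in quiet mode. The same idea can be sketched with `contextlib.nullcontext` playing the role of `FakeTqdmLoggingRedirect` (a generic sketch, not the project's actual code):

```python
import contextlib

def make_log_redirect(quiet_mode, redirect_factory):
    """Return a real redirect context manager, or a no-op one in quiet mode.

    `redirect_factory` stands in for tqdm's `logging_redirect_tqdm`;
    `contextlib.nullcontext` stands in for FakeTqdmLoggingRedirect, so
    callers can always write `with make_log_redirect(...):` regardless of
    whether any progress bars will be created.
    """
    if quiet_mode:
        return contextlib.nullcontext()
    return redirect_factory()
```

Keeping both branches context managers means the `with` block downstream needs no special-casing.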
-# pylint: disable=line-too-long import inspect import logging -from typing import AnyStr, Optional, Tuple +import typing as ty import click import scenedetect -from scenedetect.detectors import (AdaptiveDetector, ContentDetector, HashDetector, - HistogramDetector, ThresholdDetector) +from scenedetect._cli.config import CHOICE_MAP, CONFIG_FILE_PATH, CONFIG_MAP +from scenedetect._cli.context import USER_CONFIG, CliContext from scenedetect.backends import AVAILABLE_BACKENDS +from scenedetect.detectors import ( + AdaptiveDetector, + ContentDetector, + HashDetector, + HistogramDetector, + ThresholdDetector, +) from scenedetect.platform import get_system_version_info -from scenedetect._cli.config import CHOICE_MAP, CONFIG_FILE_PATH, CONFIG_MAP -from scenedetect._cli.context import CliContext, USER_CONFIG - _PROGRAM_VERSION = scenedetect.__version__ """Used to avoid name conflict with named `scenedetect` command below.""" -logger = logging.getLogger('pyscenedetect') +logger = logging.getLogger("pyscenedetect") -_LINE_SEPARATOR = '-' * 72 +_LINE_SEPARATOR = "-" * 72 # About & copyright message string shown for the 'about' CLI command (scenedetect about). 
_ABOUT_STRING = """ @@ -83,16 +85,16 @@ class _Command(click.Command): def format_help(self, ctx: click.Context, formatter: click.HelpFormatter) -> None: """Writes the help into the formatter if it exists.""" if ctx.parent: - formatter.write(click.style('`%s` Command' % ctx.command.name, fg='cyan')) + formatter.write(click.style("`%s` Command" % ctx.command.name, fg="cyan")) formatter.write_paragraph() - formatter.write(click.style(_LINE_SEPARATOR, fg='cyan')) + formatter.write(click.style(_LINE_SEPARATOR, fg="cyan")) formatter.write_paragraph() else: - formatter.write(click.style(_LINE_SEPARATOR, fg='yellow')) + formatter.write(click.style(_LINE_SEPARATOR, fg="yellow")) formatter.write_paragraph() - formatter.write(click.style('PySceneDetect Help', fg='yellow')) + formatter.write(click.style("PySceneDetect Help", fg="yellow")) formatter.write_paragraph() - formatter.write(click.style(_LINE_SEPARATOR, fg='yellow')) + formatter.write(click.style(_LINE_SEPARATOR, fg="yellow")) formatter.write_paragraph() self.format_usage(ctx, formatter) @@ -103,9 +105,10 @@ def format_help(self, ctx: click.Context, formatter: click.HelpFormatter) -> Non def format_help_text(self, ctx: click.Context, formatter: click.HelpFormatter) -> None: """Writes the help text to the formatter if it exists.""" if self.help: - base_command = (ctx.parent.info_name if ctx.parent is not None else ctx.info_name) + base_command = ctx.parent.info_name if ctx.parent is not None else ctx.info_name formatted_help = self.help.format( - scenedetect=base_command, scenedetect_with_video='%s -i video.mp4' % base_command) + scenedetect=base_command, scenedetect_with_video="%s -i video.mp4" % base_command + ) text = inspect.cleandoc(formatted_help).partition("\f")[0] formatter.write_paragraph() formatter.write_text(text) @@ -120,6 +123,7 @@ def format_epilog(self, ctx: click.Context, formatter: click.HelpFormatter) -> N class _CommandGroup(_Command, click.Group): """Custom formatting for command groups.""" + pass 
@@ -127,179 +131,180 @@ def _print_command_help(ctx: click.Context, command: click.Command): """Print help/usage for a given command. Modifies `ctx` in-place.""" ctx.info_name = command.name ctx.command = command - click.echo('') + click.echo("") click.echo(command.get_help(ctx)) @click.group( cls=_CommandGroup, chain=True, - context_settings=dict(help_option_names=['-h', '--help']), + context_settings=dict(help_option_names=["-h", "--help"]), invoke_without_command=True, - epilog="""Type "scenedetect [command] --help" for command usage. See https://scenedetect.com/docs/ for online docs.""" + epilog="""Type "scenedetect [command] --help" for command usage. See https://scenedetect.com/docs/ for online docs.""", ) # *NOTE*: Although input is required, we cannot mark it as `required=True`, otherwise we will reject # commands of the form `scenedetect detect-content --help`. @click.option( - '--input', - '-i', + "--input", + "-i", multiple=False, required=False, - metavar='VIDEO', + metavar="VIDEO", type=click.STRING, - help='[REQUIRED] Input video file. Image sequences and URLs are supported.', + help="[REQUIRED] Input video file. Image sequences and URLs are supported.", ) @click.option( - '--output', - '-o', + "--output", + "-o", multiple=False, required=False, - metavar='DIR', + metavar="DIR", type=click.Path(exists=False, dir_okay=True, writable=True, resolve_path=True), - help='Output directory for created files. If unset, working directory will be used. May be overridden by command options.%s' + help="Output directory for created files. If unset, working directory will be used. May be overridden by command options.%s" % (USER_CONFIG.get_help_string("global", "output", show_default=False)), ) @click.option( - '--config', - '-c', - metavar='FILE', + "--config", + "-c", + metavar="FILE", type=click.Path(exists=True, file_okay=True, readable=True, resolve_path=False), - help='Path to config file. 
If unset, tries to load config from %s' % (CONFIG_FILE_PATH), + help="Path to config file. If unset, tries to load config from %s" % (CONFIG_FILE_PATH), ) @click.option( - '--stats', - '-s', - metavar='CSV', + "--stats", + "-s", + metavar="CSV", type=click.Path(exists=False, file_okay=True, writable=True, resolve_path=False), - help='Stats file (.csv) to write frame metrics. Existing files will be overwritten. Used for tuning detection parameters and data analysis.', + help="Stats file (.csv) to write frame metrics. Existing files will be overwritten. Used for tuning detection parameters and data analysis.", ) @click.option( - '--framerate', - '-f', - metavar='FPS', + "--framerate", + "-f", + metavar="FPS", type=click.FLOAT, default=None, - help='Override framerate with value as frames/sec.', + help="Override framerate with value as frames/sec.", ) @click.option( - '--min-scene-len', - '-m', - metavar='TIMECODE', + "--min-scene-len", + "-m", + metavar="TIMECODE", type=click.STRING, default=None, - help='Minimum length of any scene. TIMECODE can be specified as number of frames (-m=10), time in seconds (-m=2.5), or timecode (-m=00:02:53.633).%s' + help="Minimum length of any scene. 
TIMECODE can be specified as number of frames (-m=10), time in seconds (-m=2.5), or timecode (-m=00:02:53.633).%s" % USER_CONFIG.get_help_string("global", "min-scene-len"), ) @click.option( - '--drop-short-scenes', + "--drop-short-scenes", is_flag=True, flag_value=True, - help='Drop scenes shorter than -m/--min-scene-len, instead of combining with neighbors.%s' % - (USER_CONFIG.get_help_string('global', 'drop-short-scenes')), + help="Drop scenes shorter than -m/--min-scene-len, instead of combining with neighbors.%s" + % (USER_CONFIG.get_help_string("global", "drop-short-scenes")), ) @click.option( - '--merge-last-scene', + "--merge-last-scene", is_flag=True, flag_value=True, - help='Merge last scene with previous if shorter than -m/--min-scene-len.%s' % - (USER_CONFIG.get_help_string('global', 'merge-last-scene')), + help="Merge last scene with previous if shorter than -m/--min-scene-len.%s" + % (USER_CONFIG.get_help_string("global", "merge-last-scene")), ) @click.option( - '--backend', - '-b', - metavar='BACKEND', + "--backend", + "-b", + metavar="BACKEND", type=click.Choice(CHOICE_MAP["global"]["backend"]), default=None, - help='Backend to use for video input. Backend options can be set using a config file (-c/--config). [available: %s]%s' - % (', '.join(AVAILABLE_BACKENDS.keys()), USER_CONFIG.get_help_string("global", "backend")), + help="Backend to use for video input. Backend options can be set using a config file (-c/--config). [available: %s]%s" + % (", ".join(AVAILABLE_BACKENDS.keys()), USER_CONFIG.get_help_string("global", "backend")), ) @click.option( - '--downscale', - '-d', - metavar='N', + "--downscale", + "-d", + metavar="N", type=click.INT, default=None, - help='Integer factor to downscale video by before processing. If unset, value is selected based on resolution. Set -d=1 to disable downscaling.%s' + help="Integer factor to downscale video by before processing. If unset, value is selected based on resolution. 
Set -d=1 to disable downscaling.%s" % (USER_CONFIG.get_help_string("global", "downscale", show_default=False)), ) @click.option( - '--frame-skip', - '-fs', - metavar='N', + "--frame-skip", + "-fs", + metavar="N", type=click.INT, default=None, - help='Skip N frames during processing. Reduces processing speed at expense of accuracy. -fs=1 skips every other frame processing 50%% of the video, -fs=2 processes 33%% of the video frames, -fs=3 processes 25%%, etc... %s' + help="Skip N frames during processing. Reduces processing speed at expense of accuracy. -fs=1 skips every other frame processing 50%% of the video, -fs=2 processes 33%% of the video frames, -fs=3 processes 25%%, etc... %s" % USER_CONFIG.get_help_string("global", "frame-skip"), ) @click.option( - '--verbosity', - '-v', - metavar='LEVEL', - type=click.Choice(CHOICE_MAP['global']['verbosity'], False), + "--verbosity", + "-v", + metavar="LEVEL", + type=click.Choice(CHOICE_MAP["global"]["verbosity"], False), default=None, - help='Amount of information to show. LEVEL must be one of: %s. Overrides -q/--quiet.%s' % - (', '.join(CHOICE_MAP["global"]["verbosity"]), USER_CONFIG.get_help_string( - "global", "verbosity")), + help="Amount of information to show. LEVEL must be one of: %s. Overrides -q/--quiet.%s" + % ( + ", ".join(CHOICE_MAP["global"]["verbosity"]), + USER_CONFIG.get_help_string("global", "verbosity"), + ), ) @click.option( - '--logfile', - '-l', - metavar='FILE', + "--logfile", + "-l", + metavar="FILE", type=click.Path(exists=False, file_okay=True, writable=True, resolve_path=False), - help='Save debug log to FILE. Appends to existing file if present.', + help="Save debug log to FILE. Appends to existing file if present.", ) @click.option( - '--quiet', - '-q', + "--quiet", + "-q", is_flag=True, flag_value=True, - help='Suppress output to terminal/stdout. Equivalent to setting --verbosity=none.', + help="Suppress output to terminal/stdout. 
Equivalent to setting --verbosity=none.", ) @click.pass_context -# pylint: disable=redefined-builtin def scenedetect( ctx: click.Context, - input: Optional[AnyStr], - output: Optional[AnyStr], - stats: Optional[AnyStr], - config: Optional[AnyStr], - framerate: Optional[float], - min_scene_len: Optional[str], + input: ty.Optional[ty.AnyStr], + output: ty.Optional[ty.AnyStr], + stats: ty.Optional[ty.AnyStr], + config: ty.Optional[ty.AnyStr], + framerate: ty.Optional[float], + min_scene_len: ty.Optional[str], drop_short_scenes: bool, merge_last_scene: bool, - backend: Optional[str], - downscale: Optional[int], - frame_skip: Optional[int], - verbosity: Optional[str], - logfile: Optional[AnyStr], + backend: ty.Optional[str], + downscale: ty.Optional[int], + frame_skip: ty.Optional[int], + verbosity: ty.Optional[str], + logfile: ty.Optional[ty.AnyStr], quiet: bool, ): """PySceneDetect is a scene cut/transition detection program. PySceneDetect takes an input video, runs detection on it, and uses the resulting scene information to generate output. The syntax for using PySceneDetect is: - {scenedetect_with_video} [detector] [commands] + {scenedetect_with_video} [detector] [commands] -For [detector] use `detect-adaptive` or `detect-content` to find fast cuts, and `detect-threshold` for fades in/out. If [detector] is not specified, a default detector will be used. + For [detector] use `detect-adaptive` or `detect-content` to find fast cuts, and `detect-threshold` for fades in/out. If [detector] is not specified, a default detector will be used. 
-Examples: + Examples: -Split video wherever a new scene is detected: + Split video wherever a new scene is detected: - {scenedetect_with_video} split-video + {scenedetect_with_video} split-video -Save scene list in CSV format with images at the start, middle, and end of each scene: + Save scene list in CSV format with images at the start, middle, and end of each scene: - {scenedetect_with_video} list-scenes save-images + {scenedetect_with_video} list-scenes save-images -Skip the first 10 seconds of the input video: + Skip the first 10 seconds of the input video: - {scenedetect_with_video} time --start 10s detect-content + {scenedetect_with_video} time --start 10s detect-content -Show summary of all options and commands: + Show summary of all options and commands: - {scenedetect} --help + {scenedetect} --help -Global options (e.g. -i/--input, -c/--config) must be specified before any commands and their options. The order of commands is not strict, but each command must only be specified once. -""" + Global options (e.g. -i/--input, -c/--config) must be specified before any commands and their options. The order of commands is not strict, but each command must only be specified once. + """ assert isinstance(ctx.obj, CliContext) ctx.obj.handle_options( input_path=input, @@ -320,12 +325,9 @@ def scenedetect( ) -# pylint: enable=redefined-builtin - - -@click.command('help', cls=_Command) +@click.command("help", cls=_Command) @click.argument( - 'command_name', + "command_name", required=False, type=click.STRING, ) @@ -337,13 +339,13 @@ def help_command(ctx: click.Context, command_name: str): parent_command = ctx.parent.command all_commands = set(parent_command.list_commands(ctx)) if command_name is not None: - if not command_name in all_commands: + if command_name not in all_commands: error_strs = [ - 'unknown command. List of valid commands:', - ' %s' % ', '.join(sorted(all_commands)) + "unknown command. 
List of valid commands:", + " %s" % ", ".join(sorted(all_commands)), ] - raise click.BadParameter('\n'.join(error_strs), param_hint='command') - click.echo('') + raise click.BadParameter("\n".join(error_strs), param_hint="command") + click.echo("") _print_command_help(ctx, parent_command.get_command(ctx, command_name)) else: click.echo(ctx.parent.get_help()) @@ -352,73 +354,73 @@ def help_command(ctx: click.Context, command_name: str): ctx.exit() -@click.command('about', cls=_Command, add_help_option=False) +@click.command("about", cls=_Command, add_help_option=False) @click.pass_context def about_command(ctx: click.Context): """Print license/copyright info.""" assert isinstance(ctx.obj, CliContext) - click.echo('') - click.echo(click.style(_LINE_SEPARATOR, fg='cyan')) - click.echo(click.style(' About PySceneDetect %s' % _PROGRAM_VERSION, fg='yellow')) - click.echo(click.style(_LINE_SEPARATOR, fg='cyan')) + click.echo("") + click.echo(click.style(_LINE_SEPARATOR, fg="cyan")) + click.echo(click.style(" About PySceneDetect %s" % _PROGRAM_VERSION, fg="yellow")) + click.echo(click.style(_LINE_SEPARATOR, fg="cyan")) click.echo(_ABOUT_STRING) ctx.exit() -@click.command('version', cls=_Command, add_help_option=False) +@click.command("version", cls=_Command, add_help_option=False) @click.pass_context def version_command(ctx: click.Context): """Print PySceneDetect version.""" assert isinstance(ctx.obj, CliContext) - click.echo('') + click.echo("") click.echo(get_system_version_info()) ctx.exit() -@click.command('time', cls=_Command) +@click.command("time", cls=_Command) @click.option( - '--start', - '-s', - metavar='TIMECODE', + "--start", + "-s", + metavar="TIMECODE", type=click.STRING, default=None, - help='Time in video to start detection. TIMECODE can be specified as seconds (--start=100.0), frames (--start=100), or timecode (--start=00:01:40.000).', + help="Time in video to start detection. 
TIMECODE can be specified as seconds (--start=100.0), frames (--start=100), or timecode (--start=00:01:40.000).", ) @click.option( - '--duration', - '-d', - metavar='TIMECODE', + "--duration", + "-d", + metavar="TIMECODE", type=click.STRING, default=None, - help='Maximum time in video to process. TIMECODE format is the same as other arguments. Mutually exclusive with -e/--end.', + help="Maximum time in video to process. TIMECODE format is the same as other arguments. Mutually exclusive with -e/--end.", ) @click.option( - '--end', - '-e', - metavar='TIMECODE', + "--end", + "-e", + metavar="TIMECODE", type=click.STRING, default=None, - help='Time in video to end detecting scenes. TIMECODE format is the same as other arguments. Mutually exclusive with -d/--duration', + help="Time in video to end detecting scenes. TIMECODE format is the same as other arguments. Mutually exclusive with -d/--duration", ) @click.pass_context def time_command( ctx: click.Context, - start: Optional[str], - duration: Optional[str], - end: Optional[str], + start: ty.Optional[str], + duration: ty.Optional[str], + end: ty.Optional[str], ): """Set start/end/duration of input video. -Values can be specified as seconds (SSSS.nn), frames (NNNN), or timecode (HH:MM:SS.nnn). For example, to process only the first minute of a video: + Values can be specified as seconds (SSSS.nn), frames (NNNN), or timecode (HH:MM:SS.nnn). For example, to process only the first minute of a video: - {scenedetect_with_video} time --end 00:01:00 + {scenedetect_with_video} time --end 00:01:00 - {scenedetect_with_video} time --duration 60.0 + {scenedetect_with_video} time --duration 60.0 -Note that --end and --duration are mutually exclusive (i.e. only one of the two can be set). Lastly, the following is an example using absolute frame numbers to process frames 0 through 1000: + Note that --end and --duration are mutually exclusive (i.e. only one of the two can be set). 
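The TIMECODE forms the `time` command accepts (frames, seconds with optional `s` suffix, or `HH:MM:SS[.nnn]`) can be illustrated with a minimal parser. PySceneDetect itself handles this via its `FrameTimecode` class; `parse_timecode` below is a hypothetical helper for illustration only:

```python
def parse_timecode(value, fps):
    """Convert a CLI timecode string to an absolute frame number.

    Accepts frames ("100"), seconds ("100.0" or "3.5s"), or
    "HH:MM:SS[.nnn]" -- the three forms documented for the `time`
    command. Illustrative only; not PySceneDetect's implementation.
    """
    if ":" in value:
        hh, mm, ss = value.split(":")
        seconds = int(hh) * 3600 + int(mm) * 60 + float(ss)
        return round(seconds * fps)
    if value.endswith("s"):
        return round(float(value[:-1]) * fps)
    if "." in value:
        return round(float(value) * fps)
    return int(value)  # Bare integers are absolute frame numbers.
```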
Lastly, the following is an example using absolute frame numbers to process frames 0 through 1000: - {scenedetect_with_video} time --start 0 --end 1000 -""" + {scenedetect_with_video} time --start 0 --end 1000 + """ assert isinstance(ctx.obj, CliContext) ctx.obj.handle_time( start=start, @@ -432,10 +434,12 @@ def time_command( "--threshold", "-t", metavar="VAL", - type=click.FloatRange(CONFIG_MAP["detect-content"]["threshold"].min_val, - CONFIG_MAP["detect-content"]["threshold"].max_val), + type=click.FloatRange( + CONFIG_MAP["detect-content"]["threshold"].min_val, + CONFIG_MAP["detect-content"]["threshold"].max_val, + ), default=None, - help="The max difference (0.0 to 255.0) that adjacent frames score must exceed to trigger a cut. Lower values are more sensitive to shot changes. Refers to \"content_val\" in stats file.%s" + help='The max difference (0.0 to 255.0) that adjacent frames score must exceed to trigger a cut. Lower values are more sensitive to shot changes. Refers to "content_val" in stats file.%s' % (USER_CONFIG.get_help_string("detect-content", "threshold")), ) @click.option( @@ -452,7 +456,7 @@ def time_command( "-l", is_flag=True, flag_value=True, - help="Only use luma (brightness) channel. Useful for greyscale videos. Equivalent to setting -w=\"0 0 1 0\".%s" + help='Only use luma (brightness) channel. Useful for greyscale videos. Equivalent to setting -w="0 0 1 0".%s' % (USER_CONFIG.get_help_string("detect-content", "luma-only")), ) @click.option( @@ -470,9 +474,12 @@ def time_command( metavar="TIMECODE", type=click.STRING, default=None, - help="Minimum length of any scene. Overrides global option -m/--min-scene-len. %s" % - ("" if USER_CONFIG.is_default("detect-content", "min-scene-len") else - USER_CONFIG.get_help_string("detect-content", "min-scene-len")), + help="Minimum length of any scene. Overrides global option -m/--min-scene-len. 
%s" + % ( + "" + if USER_CONFIG.is_default("detect-content", "min-scene-len") + else USER_CONFIG.get_help_string("detect-content", "min-scene-len") + ), ) @click.option( "--filter-mode", @@ -480,42 +487,44 @@ def time_command( metavar="MODE", type=click.Choice(CHOICE_MAP["detect-content"]["filter-mode"], False), default=None, - help="Mode used to enforce -m/--min-scene-len option. Can be one of: %s. %s" % - (", ".join(CHOICE_MAP["detect-content"]["filter-mode"]), - USER_CONFIG.get_help_string("detect-content", "filter-mode")), + help="Mode used to enforce -m/--min-scene-len option. Can be one of: %s. %s" + % ( + ", ".join(CHOICE_MAP["detect-content"]["filter-mode"]), + USER_CONFIG.get_help_string("detect-content", "filter-mode"), + ), ) @click.pass_context def detect_content_command( ctx: click.Context, - threshold: Optional[float], - weights: Optional[Tuple[float, float, float, float]], + threshold: ty.Optional[float], + weights: ty.Optional[ty.Tuple[float, float, float, float]], luma_only: bool, - kernel_size: Optional[int], - min_scene_len: Optional[str], - filter_mode: Optional[str], + kernel_size: ty.Optional[int], + min_scene_len: ty.Optional[str], + filter_mode: ty.Optional[str], ): """Find fast cuts using differences in HSL (filtered). -For each frame, a score from 0 to 255.0 is calculated which represents the difference in content between the current and previous frame (higher = more different). A cut is generated when a frame score exceeds -t/--threshold. Frame scores are saved under the "content_val" column in a statsfile. + For each frame, a score from 0 to 255.0 is calculated which represents the difference in content between the current and previous frame (higher = more different). A cut is generated when a frame score exceeds -t/--threshold. Frame scores are saved under the "content_val" column in a statsfile. 
-Scores are calculated from several components which are also recorded in the statsfile: + Scores are calculated from several components which are also recorded in the statsfile: - - *delta_hue*: Difference between pixel hue values of adjacent frames. + - *delta_hue*: Difference between pixel hue values of adjacent frames. - - *delta_sat*: Difference between pixel saturation values of adjacent frames. + - *delta_sat*: Difference between pixel saturation values of adjacent frames. - - *delta_lum*: Difference between pixel luma (brightness) values of adjacent frames. + - *delta_lum*: Difference between pixel luma (brightness) values of adjacent frames. - - *delta_edges*: Difference between calculated edges of adjacent frames. Typically larger than other components, so threshold may need to be increased to compensate. + - *delta_edges*: Difference between calculated edges of adjacent frames. Typically larger than other components, so threshold may need to be increased to compensate. -Once calculated, these components are multiplied by the specified -w/--weights to calculate the final frame score ("content_val"). Weights are set as a set of 4 numbers in the form (*delta_hue*, *delta_sat*, *delta_lum*, *delta_edges*). For example, "--weights 1.0 0.5 1.0 0.2 --threshold 32" is a good starting point for trying edge detection. The final sum is normalized by the weight of all components, so they need not equal 100%. Edge detection is disabled by default to improve performance. + Once calculated, these components are multiplied by the specified -w/--weights to calculate the final frame score ("content_val"). Weights are set as a set of 4 numbers in the form (*delta_hue*, *delta_sat*, *delta_lum*, *delta_edges*). For example, "--weights 1.0 0.5 1.0 0.2 --threshold 32" is a good starting point for trying edge detection. The final sum is normalized by the weight of all components, so they need not equal 100%. Edge detection is disabled by default to improve performance. 
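The normalized weighted sum described in the docstring above (each component scaled by its -w/--weights entry, divided by the total weight) reduces to a few lines. This is a sketch of the documented formula, not the detector's internal implementation:

```python
def content_val(components, weights):
    """Combine per-frame difference components into a single score.

    `components` and `weights` are (delta_hue, delta_sat, delta_lum,
    delta_edges) tuples. The sum is normalized by the total weight, so
    the weights need not add up to 1.0 (or 100%), as the help text notes.
    """
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(c * w for c, w in zip(components, weights)) / total
```

With the default edge weight of 0, `delta_edges` contributes nothing, which is why enabling edge detection usually calls for re-tuning -t/--threshold.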
-Examples: + Examples: - {scenedetect_with_video} detect-content + {scenedetect_with_video} detect-content - {scenedetect_with_video} detect-content --threshold 27.5 -""" + {scenedetect_with_video} detect-content --threshold 27.5 + """ assert isinstance(ctx.obj, CliContext) detector_args = ctx.obj.get_detect_content_params( threshold=threshold, @@ -523,106 +532,110 @@ def detect_content_command( min_scene_len=min_scene_len, weights=weights, kernel_size=kernel_size, - filter_mode=filter_mode) - logger.debug('Adding detector: ContentDetector(%s)', detector_args) + filter_mode=filter_mode, + ) + logger.debug("Adding detector: ContentDetector(%s)", detector_args) ctx.obj.add_detector(ContentDetector(**detector_args)) -@click.command('detect-adaptive', cls=_Command) +@click.command("detect-adaptive", cls=_Command) @click.option( - '--threshold', - '-t', - metavar='VAL', + "--threshold", + "-t", + metavar="VAL", type=click.FLOAT, default=None, help='Threshold (float) that frame score must exceed to trigger a cut. 
Refers to "adaptive_ratio" in stats file.%s' - % (USER_CONFIG.get_help_string('detect-adaptive', 'threshold')), + % (USER_CONFIG.get_help_string("detect-adaptive", "threshold")), ) @click.option( - '--min-content-val', - '-c', - metavar='VAL', + "--min-content-val", + "-c", + metavar="VAL", type=click.FLOAT, default=None, - help='Minimum threshold (float) that "content_val" must exceed to trigger a cut.%s' % - (USER_CONFIG.get_help_string('detect-adaptive', 'min-content-val')), + help='Minimum threshold (float) that "content_val" must exceed to trigger a cut.%s' + % (USER_CONFIG.get_help_string("detect-adaptive", "min-content-val")), ) @click.option( - '--min-delta-hsv', - '-d', - metavar='VAL', + "--min-delta-hsv", + "-d", + metavar="VAL", type=click.FLOAT, default=None, - help='[DEPRECATED] Use -c/--min-content-val instead.%s' % - (USER_CONFIG.get_help_string('detect-adaptive', 'min-delta-hsv')), + help="[DEPRECATED] Use -c/--min-content-val instead.%s" + % (USER_CONFIG.get_help_string("detect-adaptive", "min-delta-hsv")), hidden=True, ) @click.option( - '--frame-window', - '-f', - metavar='VAL', + "--frame-window", + "-f", + metavar="VAL", type=click.INT, default=None, - help='Size of window to detect deviations from mean. Represents how many frames before/after the current one to use for mean.%s' - % (USER_CONFIG.get_help_string('detect-adaptive', 'frame-window')), + help="Size of window to detect deviations from mean. 
Represents how many frames before/after the current one to use for mean.%s" + % (USER_CONFIG.get_help_string("detect-adaptive", "frame-window")), ) @click.option( - '--weights', - '-w', + "--weights", + "-w", type=(float, float, float, float), default=None, help='Weights of 4 components ("delta_hue", "delta_sat", "delta_lum", "delta_edges") used to calculate "content_val".%s' % (USER_CONFIG.get_help_string("detect-content", "weights")), ) @click.option( - '--luma-only', - '-l', + "--luma-only", + "-l", is_flag=True, flag_value=True, help='Only use luma (brightness) channel. Useful for greyscale videos. Equivalent to "--weights 0 0 1 0".%s' % (USER_CONFIG.get_help_string("detect-content", "luma-only")), ) @click.option( - '--kernel-size', - '-k', - metavar='N', + "--kernel-size", + "-k", + metavar="N", type=click.INT, default=None, - help='Size of kernel for expanding detected edges. Must be odd number >= 3. If unset, size is estimated using video resolution.%s' + help="Size of kernel for expanding detected edges. Must be odd number >= 3. If unset, size is estimated using video resolution.%s" % (USER_CONFIG.get_help_string("detect-content", "kernel-size")), ) @click.option( - '--min-scene-len', - '-m', - metavar='TIMECODE', + "--min-scene-len", + "-m", + metavar="TIMECODE", type=click.STRING, default=None, - help='Minimum length of any scene. Overrides global option -m/--min-scene-len. TIMECODE can be specified in frames (-m=100), in seconds with `s` suffix (-m=3.5s), or timecode (-m=00:01:52.778).%s' - % ('' if USER_CONFIG.is_default('detect-adaptive', 'min-scene-len') else - USER_CONFIG.get_help_string('detect-adaptive', 'min-scene-len')), + help="Minimum length of any scene. Overrides global option -m/--min-scene-len. 
TIMECODE can be specified in frames (-m=100), in seconds with `s` suffix (-m=3.5s), or timecode (-m=00:01:52.778).%s" + % ( + "" + if USER_CONFIG.is_default("detect-adaptive", "min-scene-len") + else USER_CONFIG.get_help_string("detect-adaptive", "min-scene-len") + ), ) @click.pass_context def detect_adaptive_command( ctx: click.Context, - threshold: Optional[float], - min_content_val: Optional[float], - min_delta_hsv: Optional[float], - frame_window: Optional[int], - weights: Optional[Tuple[float, float, float, float]], + threshold: ty.Optional[float], + min_content_val: ty.Optional[float], + min_delta_hsv: ty.Optional[float], + frame_window: ty.Optional[int], + weights: ty.Optional[ty.Tuple[float, float, float, float]], luma_only: bool, - kernel_size: Optional[int], - min_scene_len: Optional[str], + kernel_size: ty.Optional[int], + min_scene_len: ty.Optional[str], ): """Find fast cuts using diffs in HSL colorspace (rolling average). -Two-pass algorithm that first calculates frame scores with `detect-content`, and then applies a rolling average when processing the result. This can help mitigate false detections in situations such as camera movement. + Two-pass algorithm that first calculates frame scores with `detect-content`, and then applies a rolling average when processing the result. This can help mitigate false detections in situations such as camera movement. 
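The rolling-average second pass described in the docstring above can be sketched as follows: each frame score is divided by the mean of its neighbors within -f/--frame-window frames on either side, producing the "adaptive_ratio" the threshold is compared against. Illustrative only; the real `AdaptiveDetector` also applies the --min-content-val floor, among other details:

```python
def adaptive_ratios(scores, frame_window):
    """Rolling-average pass of adaptive detection, sketched.

    For each frame score, divide it by the average of the scores within
    `frame_window` frames before and after it (excluding the frame
    itself). A large ratio marks a frame that stands out from its
    neighborhood -- e.g. a cut amid steady camera motion.
    """
    ratios = []
    for i, score in enumerate(scores):
        lo = max(0, i - frame_window)
        hi = min(len(scores), i + frame_window + 1)
        neighbors = scores[lo:i] + scores[i + 1:hi]
        mean = sum(neighbors) / len(neighbors) if neighbors else 0.0
        ratios.append(score / mean if mean > 0 else 0.0)
    return ratios
```

Because scores are compared to a local mean rather than a fixed threshold, gradual global motion inflates the denominator and suppresses false cuts.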
-Examples: + Examples: - {scenedetect_with_video} detect-adaptive + {scenedetect_with_video} detect-adaptive - {scenedetect_with_video} detect-adaptive --threshold 3.2 -""" + {scenedetect_with_video} detect-adaptive --threshold 3.2 + """ assert isinstance(ctx.obj, CliContext) detector_args = ctx.obj.get_detect_adaptive_params( threshold=threshold, @@ -634,67 +647,74 @@ def detect_adaptive_command( weights=weights, kernel_size=kernel_size, ) - logger.debug('Adding detector: AdaptiveDetector(%s)', detector_args) + logger.debug("Adding detector: AdaptiveDetector(%s)", detector_args) ctx.obj.add_detector(AdaptiveDetector(**detector_args)) -@click.command('detect-threshold', cls=_Command) +@click.command("detect-threshold", cls=_Command) @click.option( - '--threshold', - '-t', - metavar='VAL', - type=click.FloatRange(CONFIG_MAP['detect-threshold']['threshold'].min_val, - CONFIG_MAP['detect-threshold']['threshold'].max_val), + "--threshold", + "-t", + metavar="VAL", + type=click.FloatRange( + CONFIG_MAP["detect-threshold"]["threshold"].min_val, + CONFIG_MAP["detect-threshold"]["threshold"].max_val, + ), default=None, help='Threshold (integer) that frame score must exceed to start a new scene. Refers to "delta_rgb" in stats file.%s' - % (USER_CONFIG.get_help_string('detect-threshold', 'threshold')), + % (USER_CONFIG.get_help_string("detect-threshold", "threshold")), ) @click.option( - '--fade-bias', - '-f', - metavar='PERCENT', - type=click.FloatRange(CONFIG_MAP['detect-threshold']['fade-bias'].min_val, - CONFIG_MAP['detect-threshold']['fade-bias'].max_val), + "--fade-bias", + "-f", + metavar="PERCENT", + type=click.FloatRange( + CONFIG_MAP["detect-threshold"]["fade-bias"].min_val, + CONFIG_MAP["detect-threshold"]["fade-bias"].max_val, + ), default=None, - help='Percent (%%) from -100 to 100 of timecode skew of cut placement. 
-100 indicates the start frame, +100 indicates the end frame, and 0 is the middle of both.%s' - % (USER_CONFIG.get_help_string('detect-threshold', 'fade-bias')), + help="Percent (%%) from -100 to 100 of timecode skew of cut placement. -100 indicates the start frame, +100 indicates the end frame, and 0 is the middle of both.%s" + % (USER_CONFIG.get_help_string("detect-threshold", "fade-bias")), ) @click.option( - '--add-last-scene', - '-l', + "--add-last-scene", + "-l", is_flag=True, flag_value=True, - help='If set and video ends after a fade-out event, generate a final cut at the last fade-out position.%s' - % (USER_CONFIG.get_help_string('detect-threshold', 'add-last-scene')), + help="If set and video ends after a fade-out event, generate a final cut at the last fade-out position.%s" + % (USER_CONFIG.get_help_string("detect-threshold", "add-last-scene")), ) @click.option( - '--min-scene-len', - '-m', - metavar='TIMECODE', + "--min-scene-len", + "-m", + metavar="TIMECODE", type=click.STRING, default=None, - help='Minimum length of any scene. Overrides global option -m/--min-scene-len. TIMECODE can be specified in frames (-m=100), in seconds with `s` suffix (-m=3.5s), or timecode (-m=00:01:52.778).%s' - % ('' if USER_CONFIG.is_default('detect-threshold', 'min-scene-len') else - USER_CONFIG.get_help_string('detect-threshold', 'min-scene-len')), + help="Minimum length of any scene. Overrides global option -m/--min-scene-len. 
TIMECODE can be specified in frames (-m=100), in seconds with `s` suffix (-m=3.5s), or timecode (-m=00:01:52.778).%s" + % ( + "" + if USER_CONFIG.is_default("detect-threshold", "min-scene-len") + else USER_CONFIG.get_help_string("detect-threshold", "min-scene-len") + ), ) @click.pass_context def detect_threshold_command( ctx: click.Context, - threshold: Optional[float], - fade_bias: Optional[float], + threshold: ty.Optional[float], + fade_bias: ty.Optional[float], add_last_scene: bool, - min_scene_len: Optional[str], + min_scene_len: ty.Optional[str], ): """Find fade in/out using averaging. -Detects fade-in and fade-out events using average pixel values. Resulting cuts are placed between adjacent fade-out and fade-in events. + Detects fade-in and fade-out events using average pixel values. Resulting cuts are placed between adjacent fade-out and fade-in events. -Examples: + Examples: - {scenedetect_with_video} detect-threshold + {scenedetect_with_video} detect-threshold - {scenedetect_with_video} detect-threshold --threshold 15 -""" + {scenedetect_with_video} detect-threshold --threshold 15 + """ assert isinstance(ctx.obj, CliContext) detector_args = ctx.obj.get_detect_threshold_params( threshold=threshold, @@ -702,7 +722,7 @@ def detect_threshold_command( add_last_scene=add_last_scene, min_scene_len=min_scene_len, ) - logger.debug('Adding detector: ThresholdDetector(%s)', detector_args) + logger.debug("Adding detector: ThresholdDetector(%s)", detector_args) ctx.obj.add_detector(ThresholdDetector(**detector_args)) @@ -711,21 +731,26 @@ def detect_threshold_command( "--threshold", "-t", metavar="VAL", - type=click.FloatRange(CONFIG_MAP["detect-hist"]["threshold"].min_val, - CONFIG_MAP["detect-hist"]["threshold"].max_val), + type=click.FloatRange( + CONFIG_MAP["detect-hist"]["threshold"].min_val, + CONFIG_MAP["detect-hist"]["threshold"].max_val, + ), default=None, help="Max difference (0.0 to 1.0) between histograms of adjacent frames. 
Lower " - "values are more sensitive to changes.%s" % - (USER_CONFIG.get_help_string("detect-hist", "threshold"))) + "values are more sensitive to changes.%s" + % (USER_CONFIG.get_help_string("detect-hist", "threshold")), +) @click.option( "--bins", "-b", metavar="NUM", - type=click.IntRange(CONFIG_MAP["detect-hist"]["bins"].min_val, - CONFIG_MAP["detect-hist"]["bins"].max_val), + type=click.IntRange( + CONFIG_MAP["detect-hist"]["bins"].min_val, CONFIG_MAP["detect-hist"]["bins"].max_val + ), default=None, - help="The number of bins to use for the histogram calculation.%s" % - (USER_CONFIG.get_help_string("detect-hist", "bins"))) + help="The number of bins to use for the histogram calculation.%s" + % (USER_CONFIG.get_help_string("detect-hist", "bins")), +) @click.option( "--min-scene-len", "-m", @@ -734,29 +759,38 @@ def detect_threshold_command( default=None, help="Minimum length of any scene. Overrides global min-scene-len (-m) setting." " TIMECODE can be specified as exact number of frames, a time in seconds followed by s," - " or a timecode in the format HH:MM:SS or HH:MM:SS.nnn.%s" % - ("" if USER_CONFIG.is_default("detect-hist", "min-scene-len") else USER_CONFIG.get_help_string( - "detect-hist", "min-scene-len"))) + " or a timecode in the format HH:MM:SS or HH:MM:SS.nnn.%s" + % ( + "" + if USER_CONFIG.is_default("detect-hist", "min-scene-len") + else USER_CONFIG.get_help_string("detect-hist", "min-scene-len") + ), +) @click.pass_context -def detect_hist_command(ctx: click.Context, threshold: Optional[float], bins: Optional[int], - min_scene_len: Optional[str]): +def detect_hist_command( + ctx: click.Context, + threshold: ty.Optional[float], + bins: ty.Optional[int], + min_scene_len: ty.Optional[str], +): """Find fast cuts by differencing YUV histograms. -Uses Y channel after converting each frame to YUV to create a histogram of each frame. Histograms between frames are compared to determine a score for how similar they are. 
+ Uses Y channel after converting each frame to YUV to create a histogram of each frame. Histograms between frames are compared to determine a score for how similar they are. -Saved as the `hist_diff` metric in a statsfile. + Saved as the `hist_diff` metric in a statsfile. -Examples: + Examples: - {scenedetect_with_video} detect-hist + {scenedetect_with_video} detect-hist - {scenedetect_with_video} detect-hist --threshold 0.1 --bins 240 + {scenedetect_with_video} detect-hist --threshold 0.1 --bins 240 """ assert isinstance(ctx.obj, CliContext) assert isinstance(ctx.obj, CliContext) detector_args = ctx.obj.get_detect_hist_params( - threshold=threshold, bins=bins, min_scene_len=min_scene_len) + threshold=threshold, bins=bins, min_scene_len=min_scene_len + ) logger.debug("Adding detector: HistogramDetector(%s)", detector_args) ctx.obj.add_detector(HistogramDetector(**detector_args)) @@ -766,31 +800,41 @@ def detect_hist_command(ctx: click.Context, threshold: Optional[float], bins: Op "--threshold", "-t", metavar="VAL", - type=click.FloatRange(CONFIG_MAP["detect-hash"]["threshold"].min_val, - CONFIG_MAP["detect-hash"]["threshold"].max_val), + type=click.FloatRange( + CONFIG_MAP["detect-hash"]["threshold"].min_val, + CONFIG_MAP["detect-hash"]["threshold"].max_val, + ), default=None, - help=("Max distance between hash values (0.0 to 1.0) of adjacent frames. Lower values are " - "more sensitive to changes.%s" % - (USER_CONFIG.get_help_string("detect-hash", "threshold")))) + help=( + "Max distance between hash values (0.0 to 1.0) of adjacent frames. 
Lower values are " + "more sensitive to changes.%s" % (USER_CONFIG.get_help_string("detect-hash", "threshold")) + ), +) @click.option( "--size", "-s", metavar="SIZE", - type=click.IntRange(CONFIG_MAP["detect-hash"]["size"].min_val, - CONFIG_MAP["detect-hash"]["size"].max_val), + type=click.IntRange( + CONFIG_MAP["detect-hash"]["size"].min_val, CONFIG_MAP["detect-hash"]["size"].max_val + ), default=None, - help="Size of square of low frequency data to include from the discrete cosine transform.%s" % - (USER_CONFIG.get_help_string("detect-hash", "size"))) + help="Size of square of low frequency data to include from the discrete cosine transform.%s" + % (USER_CONFIG.get_help_string("detect-hash", "size")), +) @click.option( "--lowpass", "-l", metavar="FRAC", - type=click.IntRange(CONFIG_MAP["detect-hash"]["lowpass"].min_val, - CONFIG_MAP["detect-hash"]["lowpass"].max_val), + type=click.IntRange( + CONFIG_MAP["detect-hash"]["lowpass"].min_val, CONFIG_MAP["detect-hash"]["lowpass"].max_val + ), default=None, - help=("How much high frequency information to filter from the DCT. 2 means keep lower 1/2 of " - "the frequency data, 4 means only keep 1/4, etc...%s" % - (USER_CONFIG.get_help_string("detect-hash", "lowpass")))) + help=( + "How much high frequency information to filter from the DCT. 2 means keep lower 1/2 of " + "the frequency data, 4 means only keep 1/4, etc...%s" + % (USER_CONFIG.get_help_string("detect-hash", "lowpass")) + ), +) @click.option( "--min-scene-len", "-m", @@ -799,105 +843,119 @@ def detect_hist_command(ctx: click.Context, threshold: Optional[float], bins: Op default=None, help="Minimum length of any scene. Overrides global min-scene-len (-m) setting." 
" TIMECODE can be specified as exact number of frames, a time in seconds followed by s," - " or a timecode in the format HH:MM:SS or HH:MM:SS.nnn.%s" % - ("" if USER_CONFIG.is_default("detect-hash", "min-scene-len") else USER_CONFIG.get_help_string( - "detect-hash", "min-scene-len"))) + " or a timecode in the format HH:MM:SS or HH:MM:SS.nnn.%s" + % ( + "" + if USER_CONFIG.is_default("detect-hash", "min-scene-len") + else USER_CONFIG.get_help_string("detect-hash", "min-scene-len") + ), +) @click.pass_context -def detect_hash_command(ctx: click.Context, threshold: Optional[float], size: Optional[int], - lowpass: Optional[int], min_scene_len: Optional[str]): +def detect_hash_command( + ctx: click.Context, + threshold: ty.Optional[float], + size: ty.Optional[int], + lowpass: ty.Optional[int], + min_scene_len: ty.Optional[str], +): """Find fast cuts using perceptual hashing. -The perceptual hash is taken of adjacent frames, and used to calculate the hamming distance between them. The distance is then normalized by the squared size of the hash, and compared to the threshold. + The perceptual hash is taken of adjacent frames, and used to calculate the hamming distance between them. The distance is then normalized by the squared size of the hash, and compared to the threshold. -Saved as the `hash_dist` metric in a statsfile. + Saved as the `hash_dist` metric in a statsfile. 
-Examples: + Examples: - {scenedetect_with_video} detect-hash + {scenedetect_with_video} detect-hash - {scenedetect_with_video} detect-hash --size 32 --lowpass 3 + {scenedetect_with_video} detect-hash --size 32 --lowpass 3 """ assert isinstance(ctx.obj, CliContext) assert isinstance(ctx.obj, CliContext) detector_args = ctx.obj.get_detect_hash_params( - threshold=threshold, size=size, lowpass=lowpass, min_scene_len=min_scene_len) + threshold=threshold, size=size, lowpass=lowpass, min_scene_len=min_scene_len + ) logger.debug("Adding detector: HashDetector(%s)", detector_args) ctx.obj.add_detector(HashDetector(**detector_args)) -@click.command('load-scenes', cls=_Command) +@click.command("load-scenes", cls=_Command) @click.option( - '--input', - '-i', + "--input", + "-i", multiple=False, - metavar='FILE', + metavar="FILE", required=True, type=click.Path(exists=True, file_okay=True, readable=True, resolve_path=True), - help='Scene list to read cut information from.') + help="Scene list to read cut information from.", +) @click.option( - '--start-col-name', - '-c', - metavar='STRING', + "--start-col-name", + "-c", + metavar="STRING", type=click.STRING, default=None, - help='Name of column used to mark scene cuts.%s' % - (USER_CONFIG.get_help_string('load-scenes', 'start-col-name'))) + help="Name of column used to mark scene cuts.%s" + % (USER_CONFIG.get_help_string("load-scenes", "start-col-name")), +) @click.pass_context -def load_scenes_command(ctx: click.Context, input: Optional[str], start_col_name: Optional[str]): +def load_scenes_command( + ctx: click.Context, input: ty.Optional[str], start_col_name: ty.Optional[str] +): """Load scenes from CSV instead of detecting. Can be used with CSV generated by `list-scenes`. Scenes are loaded using the specified column as cut locations (frame number or timecode). 
-Examples: + Examples: - {scenedetect_with_video} load-scenes -i scenes.csv + {scenedetect_with_video} load-scenes -i scenes.csv - {scenedetect_with_video} load-scenes -i scenes.csv --start-col-name "Start Timecode" -""" + {scenedetect_with_video} load-scenes -i scenes.csv --start-col-name "Start Timecode" + """ assert isinstance(ctx.obj, CliContext) - logger.debug('Loading scenes from %s (start_col_name = %s)', input, start_col_name) + logger.debug("Loading scenes from %s (start_col_name = %s)", input, start_col_name) ctx.obj.handle_load_scenes(input=input, start_col_name=start_col_name) -@click.command('export-html', cls=_Command) +@click.command("export-html", cls=_Command) @click.option( - '--filename', - '-f', - metavar='NAME', - default='$VIDEO_NAME-Scenes.html', + "--filename", + "-f", + metavar="NAME", + default="$VIDEO_NAME-Scenes.html", type=click.STRING, - help='Filename format to use for the scene list HTML file. You can use the $VIDEO_NAME macro in the file name. Note that you may have to wrap the format name using single quotes.%s' - % (USER_CONFIG.get_help_string('export-html', 'filename')), + help="Filename format to use for the scene list HTML file. You can use the $VIDEO_NAME macro in the file name. 
Note that you may have to wrap the format name using single quotes.%s"
+    % (USER_CONFIG.get_help_string("export-html", "filename")),
 )
 @click.option(
-    '--no-images',
+    "--no-images",
     is_flag=True,
     flag_value=True,
-    help='Export the scene list including or excluding the saved images.%s' %
-    (USER_CONFIG.get_help_string('export-html', 'no-images')),
+    help="Do not include saved images in the exported scene list.%s"
+    % (USER_CONFIG.get_help_string("export-html", "no-images")),
 )
 @click.option(
-    '--image-width',
-    '-w',
-    metavar='pixels',
+    "--image-width",
+    "-w",
+    metavar="pixels",
     type=click.INT,
-    help='Width in pixels of the images in the resulting HTML table.%s' %
-    (USER_CONFIG.get_help_string('export-html', 'image-width', show_default=False)),
+    help="Width in pixels of the images in the resulting HTML table.%s"
+    % (USER_CONFIG.get_help_string("export-html", "image-width", show_default=False)),
 )
 @click.option(
-    '--image-height',
-    '-h',
-    metavar='pixels',
+    "--image-height",
+    "-h",
+    metavar="pixels",
     type=click.INT,
-    help='Height in pixels of the images in the resulting HTML table.%s' %
-    (USER_CONFIG.get_help_string('export-html', 'image-height', show_default=False)),
+    help="Height in pixels of the images in the resulting HTML table.%s"
+    % (USER_CONFIG.get_help_string("export-html", "image-height", show_default=False)),
 )
 @click.pass_context
 def export_html_command(
     ctx: click.Context,
-    filename: Optional[AnyStr],
+    filename: ty.Optional[ty.AnyStr],
     no_images: bool,
-    image_width: Optional[int],
-    image_height: Optional[int],
+    image_width: ty.Optional[int],
+    image_height: ty.Optional[int],
 ):
     """Export scene list to HTML file. 
Requires save-images unless --no-images is specified.""" assert isinstance(ctx.obj, CliContext) @@ -909,52 +967,52 @@ def export_html_command( ) -@click.command('list-scenes', cls=_Command) +@click.command("list-scenes", cls=_Command) @click.option( - '--output', - '-o', - metavar='DIR', + "--output", + "-o", + metavar="DIR", type=click.Path(exists=False, dir_okay=True, writable=True, resolve_path=False), - help='Output directory to save videos to. Overrides global option -o/--output if set.%s' % - (USER_CONFIG.get_help_string('list-scenes', 'output', show_default=False)), + help="Output directory to save videos to. Overrides global option -o/--output if set.%s" + % (USER_CONFIG.get_help_string("list-scenes", "output", show_default=False)), ) @click.option( - '--filename', - '-f', - metavar='NAME', - default='$VIDEO_NAME-Scenes.csv', + "--filename", + "-f", + metavar="NAME", + default="$VIDEO_NAME-Scenes.csv", type=click.STRING, - help='Filename format to use for the scene list CSV file. You can use the $VIDEO_NAME macro in the file name. Note that you may have to wrap the name using single quotes or use escape characters (e.g. -f=\$VIDEO_NAME-Scenes.csv).%s' - % (USER_CONFIG.get_help_string('list-scenes', 'filename')), + help="Filename format to use for the scene list CSV file. You can use the $VIDEO_NAME macro in the file name. Note that you may have to wrap the name using single quotes or use escape characters (e.g. 
-f=\\$VIDEO_NAME-Scenes.csv).%s" + % (USER_CONFIG.get_help_string("list-scenes", "filename")), ) @click.option( - '--no-output-file', - '-n', + "--no-output-file", + "-n", is_flag=True, flag_value=True, - help='Only print scene list.%s' % - (USER_CONFIG.get_help_string('list-scenes', 'no-output-file')), + help="Only print scene list.%s" + % (USER_CONFIG.get_help_string("list-scenes", "no-output-file")), ) @click.option( - '--quiet', - '-q', + "--quiet", + "-q", is_flag=True, flag_value=True, - help='Suppress printing scene list.%s' % (USER_CONFIG.get_help_string('list-scenes', 'quiet')), + help="Suppress printing scene list.%s" % (USER_CONFIG.get_help_string("list-scenes", "quiet")), ) @click.option( - '--skip-cuts', - '-s', + "--skip-cuts", + "-s", is_flag=True, flag_value=True, - help='Skip cutting list as first row in the CSV file. Set for RFC 4180 compliant output.%s' % - (USER_CONFIG.get_help_string('list-scenes', 'skip-cuts')), + help="Skip cutting list as first row in the CSV file. Set for RFC 4180 compliant output.%s" + % (USER_CONFIG.get_help_string("list-scenes", "skip-cuts")), ) @click.pass_context def list_scenes_command( ctx: click.Context, - output: Optional[AnyStr], - filename: Optional[AnyStr], + output: ty.Optional[ty.AnyStr], + filename: ty.Optional[ty.AnyStr], no_output_file: bool, quiet: bool, skip_cuts: bool, @@ -970,108 +1028,112 @@ def list_scenes_command( ) -@click.command('split-video', cls=_Command) +@click.command("split-video", cls=_Command) @click.option( - '--output', - '-o', - metavar='DIR', + "--output", + "-o", + metavar="DIR", type=click.Path(exists=False, dir_okay=True, writable=True, resolve_path=False), - help='Output directory to save videos to. Overrides global option -o/--output if set.%s' % - (USER_CONFIG.get_help_string('split-video', 'output', show_default=False)), + help="Output directory to save videos to. 
Overrides global option -o/--output if set.%s" + % (USER_CONFIG.get_help_string("split-video", "output", show_default=False)), ) @click.option( - '--filename', - '-f', - metavar='NAME', + "--filename", + "-f", + metavar="NAME", default=None, type=click.STRING, - help='File name format to use when saving videos, with or without extension. You can use $VIDEO_NAME and $SCENE_NUMBER macros in the filename. You may have to wrap the format in single quotes or use escape characters to avoid variable expansion (e.g. -f=\\$VIDEO_NAME-Scene-\\$SCENE_NUMBER).%s' - % (USER_CONFIG.get_help_string('split-video', 'filename')), + help="File name format to use when saving videos, with or without extension. You can use $VIDEO_NAME and $SCENE_NUMBER macros in the filename. You may have to wrap the format in single quotes or use escape characters to avoid variable expansion (e.g. -f=\\$VIDEO_NAME-Scene-\\$SCENE_NUMBER).%s" + % (USER_CONFIG.get_help_string("split-video", "filename")), ) @click.option( - '--quiet', - '-q', + "--quiet", + "-q", is_flag=True, flag_value=True, - help='Hide output from external video splitting tool.%s' % - (USER_CONFIG.get_help_string('split-video', 'quiet')), + help="Hide output from external video splitting tool.%s" + % (USER_CONFIG.get_help_string("split-video", "quiet")), ) @click.option( - '--copy', - '-c', + "--copy", + "-c", is_flag=True, flag_value=True, - help="Copy instead of re-encode. Faster but less precise.%s" % - (USER_CONFIG.get_help_string('split-video', 'copy')), + help="Copy instead of re-encode. Faster but less precise.%s" + % (USER_CONFIG.get_help_string("split-video", "copy")), ) @click.option( - '--high-quality', - '-hq', + "--high-quality", + "-hq", is_flag=True, flag_value=True, - help='Encode video with higher quality, overrides -f option if present. 
Equivalent to: --rate-factor=17 --preset=slow%s' - % (USER_CONFIG.get_help_string('split-video', 'high-quality')), + help="Encode video with higher quality, overrides -f option if present. Equivalent to: --rate-factor=17 --preset=slow%s" + % (USER_CONFIG.get_help_string("split-video", "high-quality")), ) @click.option( - '--rate-factor', - '-crf', - metavar='RATE', + "--rate-factor", + "-crf", + metavar="RATE", default=None, - type=click.IntRange(CONFIG_MAP['split-video']['rate-factor'].min_val, - CONFIG_MAP['split-video']['rate-factor'].max_val), - help='Video encoding quality (x264 constant rate factor), from 0-100, where lower is higher quality (larger output). 0 indicates lossless.%s' - % (USER_CONFIG.get_help_string('split-video', 'rate-factor')), + type=click.IntRange( + CONFIG_MAP["split-video"]["rate-factor"].min_val, + CONFIG_MAP["split-video"]["rate-factor"].max_val, + ), + help="Video encoding quality (x264 constant rate factor), from 0-100, where lower is higher quality (larger output). 0 indicates lossless.%s" + % (USER_CONFIG.get_help_string("split-video", "rate-factor")), ) @click.option( - '--preset', - '-p', - metavar='LEVEL', + "--preset", + "-p", + metavar="LEVEL", default=None, - type=click.Choice(CHOICE_MAP['split-video']['preset']), - help='Video compression quality (x264 preset). Can be one of: %s. Faster modes take less time but output may be larger.%s' - % (', '.join( - CHOICE_MAP['split-video']['preset']), USER_CONFIG.get_help_string('split-video', 'preset')), + type=click.Choice(CHOICE_MAP["split-video"]["preset"]), + help="Video compression quality (x264 preset). Can be one of: %s. 
Faster modes take less time but output may be larger.%s" + % ( + ", ".join(CHOICE_MAP["split-video"]["preset"]), + USER_CONFIG.get_help_string("split-video", "preset"), + ), ) @click.option( - '--args', - '-a', - metavar='ARGS', + "--args", + "-a", + metavar="ARGS", type=click.STRING, default=None, help='Override codec arguments passed to FFmpeg when splitting scenes. Use double quotes (") around arguments. Must specify at least audio/video codec.%s' - % (USER_CONFIG.get_help_string('split-video', 'args')), + % (USER_CONFIG.get_help_string("split-video", "args")), ) @click.option( - '--mkvmerge', - '-m', + "--mkvmerge", + "-m", is_flag=True, flag_value=True, - help='Split video using mkvmerge. Faster than re-encoding, but less precise. If set, options other than -f/--filename, -q/--quiet and -o/--output will be ignored. Note that mkvmerge automatically appends the $SCENE_NUMBER suffix.%s' - % (USER_CONFIG.get_help_string('split-video', 'mkvmerge')), + help="Split video using mkvmerge. Faster than re-encoding, but less precise. If set, options other than -f/--filename, -q/--quiet and -o/--output will be ignored. Note that mkvmerge automatically appends the $SCENE_NUMBER suffix.%s" + % (USER_CONFIG.get_help_string("split-video", "mkvmerge")), ) @click.pass_context def split_video_command( ctx: click.Context, - output: Optional[AnyStr], - filename: Optional[AnyStr], + output: ty.Optional[ty.AnyStr], + filename: ty.Optional[ty.AnyStr], quiet: bool, copy: bool, high_quality: bool, - rate_factor: Optional[int], - preset: Optional[str], - args: Optional[str], + rate_factor: ty.Optional[int], + preset: ty.Optional[str], + args: ty.Optional[str], mkvmerge: bool, ): """Split input video using ffmpeg or mkvmerge. 
-Examples: + Examples: - {scenedetect_with_video} split-video + {scenedetect_with_video} split-video - {scenedetect_with_video} split-video --copy + {scenedetect_with_video} split-video --copy - {scenedetect_with_video} split-video --filename \$VIDEO_NAME-Clip-\$SCENE_NUMBER -""" + {scenedetect_with_video} split-video --filename \\$VIDEO_NAME-Clip-\\$SCENE_NUMBER + """ assert isinstance(ctx.obj, CliContext) ctx.obj.handle_split_video( output=output, @@ -1086,137 +1148,137 @@ def split_video_command( ) -@click.command('save-images', cls=_Command) +@click.command("save-images", cls=_Command) @click.option( - '--output', - '-o', - metavar='DIR', + "--output", + "-o", + metavar="DIR", type=click.Path(exists=False, dir_okay=True, writable=True, resolve_path=False), - help='Output directory for images. Overrides global option -o/--output if set.%s' % - (USER_CONFIG.get_help_string('save-images', 'output', show_default=False)), + help="Output directory for images. Overrides global option -o/--output if set.%s" + % (USER_CONFIG.get_help_string("save-images", "output", show_default=False)), ) @click.option( - '--filename', - '-f', - metavar='NAME', + "--filename", + "-f", + metavar="NAME", default=None, type=click.STRING, - help='Filename format *without* extension to use when saving images. You can use the $VIDEO_NAME, $SCENE_NUMBER, $IMAGE_NUMBER, and $FRAME_NUMBER macros in the file name. You may have to use escape characters (e.g. -f=\\$SCENE_NUMBER-Image-\\$IMAGE_NUMBER) or single quotes.%s' - % (USER_CONFIG.get_help_string('save-images', 'filename')), + help="Filename format *without* extension to use when saving images. You can use the $VIDEO_NAME, $SCENE_NUMBER, $IMAGE_NUMBER, and $FRAME_NUMBER macros in the file name. You may have to use escape characters (e.g. 
-f=\\$SCENE_NUMBER-Image-\\$IMAGE_NUMBER) or single quotes.%s" + % (USER_CONFIG.get_help_string("save-images", "filename")), ) @click.option( - '--num-images', - '-n', - metavar='N', + "--num-images", + "-n", + metavar="N", default=None, type=click.INT, - help='Number of images to generate per scene. Will always include start/end frame, unless -n=1, in which case the image will be the frame at the mid-point of the scene.%s' - % (USER_CONFIG.get_help_string('save-images', 'num-images')), + help="Number of images to generate per scene. Will always include start/end frame, unless -n=1, in which case the image will be the frame at the mid-point of the scene.%s" + % (USER_CONFIG.get_help_string("save-images", "num-images")), ) @click.option( - '--jpeg', - '-j', + "--jpeg", + "-j", is_flag=True, flag_value=True, - help='Set output format to JPEG (default).%s' % - (USER_CONFIG.get_help_string('save-images', 'format', show_default=False)), + help="Set output format to JPEG (default).%s" + % (USER_CONFIG.get_help_string("save-images", "format", show_default=False)), ) @click.option( - '--webp', - '-w', + "--webp", + "-w", is_flag=True, flag_value=True, - help='Set output format to WebP', + help="Set output format to WebP", ) @click.option( - '--quality', - '-q', - metavar='Q', + "--quality", + "-q", + metavar="Q", default=None, type=click.IntRange(0, 100), - help='JPEG/WebP encoding quality, from 0-100 (higher indicates better quality). For WebP, 100 indicates lossless. [default: JPEG: 95, WebP: 100]%s' - % (USER_CONFIG.get_help_string('save-images', 'quality', show_default=False)), + help="JPEG/WebP encoding quality, from 0-100 (higher indicates better quality). For WebP, 100 indicates lossless. 
[default: JPEG: 95, WebP: 100]%s"
+    % (USER_CONFIG.get_help_string("save-images", "quality", show_default=False)),
 )
 @click.option(
-    '--png',
-    '-p',
+    "--png",
+    "-p",
     is_flag=True,
     flag_value=True,
-    help='Set output format to PNG.',
+    help="Set output format to PNG.",
 )
 @click.option(
-    '--compression',
-    '-c',
-    metavar='C',
+    "--compression",
+    "-c",
+    metavar="C",
     default=None,
     type=click.IntRange(0, 9),
-    help='PNG compression rate, from 0-9. Higher values produce smaller files but result in longer compression time. This setting does not affect image quality, only file size.%s'
-    % (USER_CONFIG.get_help_string('save-images', 'compression')),
+    help="PNG compression rate, from 0-9. Higher values produce smaller files but result in longer compression time. This setting does not affect image quality, only file size.%s"
+    % (USER_CONFIG.get_help_string("save-images", "compression")),
 )
 @click.option(
-    '-m',
-    '--frame-margin',
-    metavar='N',
+    "-m",
+    "--frame-margin",
+    metavar="N",
     default=None,
     type=click.INT,
-    help='Number of frames to ignore at beginning/end of scenes when saving images. Controls temporal padding on scene boundaries.%s'
-    % (USER_CONFIG.get_help_string('save-images', 'num-images')),
+    help="Number of frames to ignore at beginning/end of scenes when saving images. Controls temporal padding on scene boundaries.%s"
+    % (USER_CONFIG.get_help_string("save-images", "frame-margin")),
 )
 @click.option(
-    '--scale',
-    '-s',
-    metavar='S',
+    "--scale",
+    "-s",
+    metavar="S",
     default=None,
     type=click.FLOAT,
-    help='Factor to scale images by. Ignored if -W/--width or -H/--height is set.%s' %
-    (USER_CONFIG.get_help_string('save-images', 'scale', show_default=False)),
+    help="Factor to scale images by. 
Ignored if -W/--width or -H/--height is set.%s" + % (USER_CONFIG.get_help_string("save-images", "scale", show_default=False)), ) @click.option( - '--height', - '-H', - metavar='H', + "--height", + "-H", + metavar="H", default=None, type=click.INT, - help='Height (pixels) of images.%s' % - (USER_CONFIG.get_help_string('save-images', 'height', show_default=False)), + help="Height (pixels) of images.%s" + % (USER_CONFIG.get_help_string("save-images", "height", show_default=False)), ) @click.option( - '--width', - '-W', - metavar='W', + "--width", + "-W", + metavar="W", default=None, type=click.INT, - help='Width (pixels) of images.%s' % - (USER_CONFIG.get_help_string('save-images', 'width', show_default=False)), + help="Width (pixels) of images.%s" + % (USER_CONFIG.get_help_string("save-images", "width", show_default=False)), ) @click.pass_context def save_images_command( ctx: click.Context, - output: Optional[AnyStr], - filename: Optional[AnyStr], - num_images: Optional[int], + output: ty.Optional[ty.AnyStr], + filename: ty.Optional[ty.AnyStr], + num_images: ty.Optional[int], jpeg: bool, webp: bool, - quality: Optional[int], + quality: ty.Optional[int], png: bool, - compression: Optional[int], - frame_margin: Optional[int], - scale: Optional[float], - height: Optional[int], - width: Optional[int], + compression: ty.Optional[int], + frame_margin: ty.Optional[int], + scale: ty.Optional[float], + height: ty.Optional[int], + width: ty.Optional[int], ): """Create images for each detected scene. 
-Images can be resized + Images can be resized -Examples: + Examples: - {scenedetect_with_video} save-images + {scenedetect_with_video} save-images - {scenedetect_with_video} save-images --width 1024 + {scenedetect_with_video} save-images --width 1024 - {scenedetect_with_video} save-images --filename \$SCENE_NUMBER-img\$IMAGE_NUMBER -""" + {scenedetect_with_video} save-images --filename \\$SCENE_NUMBER-img\\$IMAGE_NUMBER + """ assert isinstance(ctx.obj, CliContext) ctx.obj.handle_save_images( num_images=num_images, diff --git a/scenedetect/_cli/config.py b/scenedetect/_cli/config.py index 3407b2a5..929587ad 100644 --- a/scenedetect/_cli/config.py +++ b/scenedetect/_cli/config.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- @@ -15,12 +14,12 @@ possible and re-used by the CLI so that there is one source of truth. """ -from abc import ABC, abstractmethod -from enum import Enum import logging import os import os.path +from abc import ABC, abstractmethod from configparser import ConfigParser, ParsingError +from enum import Enum from typing import Any, AnyStr, Dict, List, Optional, Tuple, Union from platformdirs import user_config_dir @@ -31,7 +30,7 @@ from scenedetect.scene_manager import Interpolation from scenedetect.video_splitter import DEFAULT_FFMPEG_ARGS -VALID_PYAV_THREAD_MODES = ['NONE', 'SLICE', 'FRAME', 'AUTO'] +VALID_PYAV_THREAD_MODES = ["NONE", "SLICE", "FRAME", "AUTO"] class OptionParseFailure(Exception): @@ -53,7 +52,7 @@ def value(self) -> Any: @staticmethod @abstractmethod - def from_config(config_value: str, default: 'ValidatedValue') -> 'ValidatedValue': + def from_config(config_value: str, default: "ValidatedValue") -> "ValidatedValue": """Validate and get the user-specified configuration option. 
Raises: @@ -83,12 +82,13 @@ def __str__(self) -> str: return str(self.value) @staticmethod - def from_config(config_value: str, default: 'TimecodeValue') -> 'TimecodeValue': + def from_config(config_value: str, default: "TimecodeValue") -> "TimecodeValue": try: return TimecodeValue(config_value) except ValueError as ex: raise OptionParseFailure( - 'Timecodes must be in seconds (100.0), frames (100), or HH:MM:SS.') from ex + "Timecodes must be in seconds (100.0), frames (100), or HH:MM:SS." + ) from ex class RangeValue(ValidatedValue): @@ -128,7 +128,7 @@ def __str__(self) -> str: return str(self.value) @staticmethod - def from_config(config_value: str, default: 'RangeValue') -> 'RangeValue': + def from_config(config_value: str, default: "RangeValue") -> "RangeValue": try: return RangeValue( value=int(config_value) if isinstance(default.value, int) else float(config_value), @@ -136,14 +136,15 @@ def from_config(config_value: str, default: 'RangeValue') -> 'RangeValue': max_val=default.max_val, ) except ValueError as ex: - raise OptionParseFailure('Value must be between %s and %s.' % - (default.min_val, default.max_val)) from ex + raise OptionParseFailure( + "Value must be between %s and %s." 
% (default.min_val, default.max_val) + ) from ex class ScoreWeightsValue(ValidatedValue): """Validator for score weight values (currently a tuple of four numbers).""" - _IGNORE_CHARS = [',', '/', '(', ')'] + _IGNORE_CHARS = [",", "/", "(", ")"] """Characters to ignore.""" def __init__(self, value: Union[str, ContentDetector.Components]): @@ -151,7 +152,8 @@ def __init__(self, value: Union[str, ContentDetector.Components]): self._value = value else: translation_table = str.maketrans( - {char: ' ' for char in ScoreWeightsValue._IGNORE_CHARS}) + {char: " " for char in ScoreWeightsValue._IGNORE_CHARS} + ) values = value.translate(translation_table).split() if not len(values) == 4: raise ValueError("Score weights must be specified as four numbers!") @@ -165,16 +167,17 @@ def __repr__(self) -> str: return str(self.value) def __str__(self) -> str: - return '%.3f, %.3f, %.3f, %.3f' % self.value + return "%.3f, %.3f, %.3f, %.3f" % self.value @staticmethod - def from_config(config_value: str, default: 'ScoreWeightsValue') -> 'ScoreWeightsValue': + def from_config(config_value: str, default: "ScoreWeightsValue") -> "ScoreWeightsValue": try: return ScoreWeightsValue(config_value) except ValueError as ex: raise OptionParseFailure( - 'Score weights must be specified as four numbers in the form (H,S,L,E),' - ' e.g. (0.9, 0.2, 2.0, 0.5). Commas/brackets/slashes are ignored.') from ex + "Score weights must be specified as four numbers in the form (H,S,L,E)," + " e.g. (0.9, 0.2, 2.0, 0.5). Commas/brackets/slashes are ignored." 
+            ) from ex


 class KernelSizeValue(ValidatedValue):
@@ -201,21 +204,22 @@ def __repr__(self) -> str:

     def __str__(self) -> str:
         if self.value is None:
-            return 'auto'
+            return "auto"
         return str(self.value)

     @staticmethod
-    def from_config(config_value: str, default: 'KernelSizeValue') -> 'KernelSizeValue':
+    def from_config(config_value: str, default: "KernelSizeValue") -> "KernelSizeValue":
         try:
             return KernelSizeValue(int(config_value))
         except ValueError as ex:
             raise OptionParseFailure(
-                'Value must be an odd integer greater than 1, or set to -1 for auto kernel size.'
+                "Value must be an odd integer greater than 1, or set to -1 for auto kernel size."
             ) from ex


 class TimecodeFormat(Enum):
     """Format to display timecodes."""
+
     FRAMES = 0
     """Print timecodes as exact frame number."""
     TIMECODE = 1
@@ -229,16 +233,16 @@ def format(self, timecode: FrameTimecode) -> str:
         if self == TimecodeFormat.TIMECODE:
             return timecode.get_timecode()
         if self == TimecodeFormat.SECONDS:
-            return '%.3f' % timecode.get_seconds()
-        assert False
+            return "%.3f" % timecode.get_seconds()
+        raise RuntimeError("Unhandled format specifier.")


 ConfigValue = Union[bool, int, float, str]
 ConfigDict = Dict[str, Dict[str, ConfigValue]]

-_CONFIG_FILE_NAME: AnyStr = 'scenedetect.cfg'
+_CONFIG_FILE_NAME: AnyStr = "scenedetect.cfg"
 _CONFIG_FILE_DIR: AnyStr = user_config_dir("PySceneDetect", False)
-_PLACEHOLDER = 0 # Placeholder for image quality default, as the value depends on output format
+_PLACEHOLDER = 0  # Placeholder for image quality default, as the value depends on output format

 CONFIG_FILE_PATH: AnyStr = os.path.join(_CONFIG_FILE_DIR, _CONFIG_FILE_NAME)
 DEFAULT_JPG_QUALITY = 95
@@ -349,29 +353,42 @@ def format(self, timecode: FrameTimecode) -> str:
     certain string options are stored in `CHOICE_MAP`."""

 CHOICE_MAP: Dict[str, Dict[str, List[str]]] = {
-    'backend-pyav': {
-        'threading_mode': [mode.lower() for mode in VALID_PYAV_THREAD_MODES],
+    "backend-pyav": {
+        "threading_mode": [mode.lower() for mode in VALID_PYAV_THREAD_MODES],
     },
-    'detect-content': {
-        'filter-mode': [mode.name.lower() for mode in FlashFilter.Mode],
+    "detect-content": {
+        "filter-mode": [mode.name.lower() for mode in FlashFilter.Mode],
     },
-    'global': {
-        'backend': ['opencv', 'pyav', 'moviepy'],
-        'default-detector': ['detect-adaptive', 'detect-content', 'detect-threshold'],
-        'downscale-method': [value.name.lower() for value in Interpolation],
-        'verbosity': ['debug', 'info', 'warning', 'error', 'none'],
+    "global": {
+        "backend": ["opencv", "pyav", "moviepy"],
+        "default-detector": [
+            "detect-adaptive",
+            "detect-content",
+            "detect-threshold",
+            "detect-hash",
+            "detect-hist",
+        ],
+        "downscale-method": [value.name.lower() for value in Interpolation],
+        "verbosity": ["debug", "info", "warning", "error", "none"],
     },
-    'list-scenes': {
-        'cut-format': [value.name.lower() for value in TimecodeFormat],
+    "list-scenes": {
+        "cut-format": [value.name.lower() for value in TimecodeFormat],
     },
-    'save-images': {
-        'format': ['jpeg', 'png', 'webp'],
-        'scale-method': [value.name.lower() for value in Interpolation],
+    "save-images": {
+        "format": ["jpeg", "png", "webp"],
+        "scale-method": [value.name.lower() for value in Interpolation],
     },
-    'split-video': {
-        'preset': [
-            'ultrafast', 'superfast', 'veryfast', 'faster', 'fast', 'medium', 'slow', 'slower',
-            'veryslow'
+    "split-video": {
+        "preset": [
+            "ultrafast",
+            "superfast",
+            "veryfast",
+            "faster",
+            "fast",
+            "medium",
+            "slow",
+            "slower",
+            "veryslow",
         ],
     },
 }
@@ -390,12 +407,12 @@ def _validate_structure(config: ConfigParser) -> List[str]:
     """
     errors: List[str] = []
     for section in config.sections():
-        if not section in CONFIG_MAP.keys():
-            errors.append('Unsupported config section: [%s]' % (section))
+        if section not in CONFIG_MAP.keys():
+            errors.append("Unsupported config section: [%s]" % (section))
             continue
-        for (option_name, _) in config.items(section):
-            if not option_name in CONFIG_MAP[section].keys():
-                errors.append('Unsupported config option in [%s]: %s' % (section, option_name))
+        for option_name, _ in config.items(section):
+            if option_name not in CONFIG_MAP[section].keys():
+                errors.append("Unsupported config option in [%s]: %s" % (section, option_name))
     return errors

@@ -414,20 +431,22 @@ def _parse_config(config: ConfigParser) -> Tuple[ConfigDict, List[str]]:
             try:
                 value_type = None
                 if isinstance(CONFIG_MAP[command][option], bool):
-                    value_type = 'yes/no value'
+                    value_type = "yes/no value"
                     out_map[command][option] = config.getboolean(command, option)
                     continue
                 elif isinstance(CONFIG_MAP[command][option], int):
-                    value_type = 'integer'
+                    value_type = "integer"
                     out_map[command][option] = config.getint(command, option)
                     continue
                 elif isinstance(CONFIG_MAP[command][option], float):
-                    value_type = 'number'
+                    value_type = "number"
                     out_map[command][option] = config.getfloat(command, option)
                     continue
             except ValueError as _:
-                errors.append('Invalid [%s] value for %s: %s is not a valid %s.' %
-                              (command, option, config.get(command, option), value_type))
+                errors.append(
+                    "Invalid [%s] value for %s: %s is not a valid %s."
+                    % (command, option, config.get(command, option), value_type)
+                )
                 continue

             # Handle custom validation types.
@@ -437,21 +456,30 @@ def _parse_config(config: ConfigParser) -> Tuple[ConfigDict, List[str]]:
             if issubclass(option_type, ValidatedValue):
                 try:
                     out_map[command][option] = option_type.from_config(
-                        config_value=config_value, default=default)
+                        config_value=config_value, default=default
+                    )
                 except OptionParseFailure as ex:
-                    errors.append('Invalid [%s] value for %s:\n  %s\n%s' %
-                                  (command, option, config_value, ex.error))
+                    errors.append(
+                        "Invalid [%s] value for %s:\n  %s\n%s"
+                        % (command, option, config_value, ex.error)
+                    )
                 continue

             # If we didn't process the value as a given type, handle it as a string. We also
             # replace newlines with spaces, and strip any remaining leading/trailing whitespace.
             if value_type is None:
-                config_value = config.get(command, option).replace('\n', ' ').strip()
+                config_value = config.get(command, option).replace("\n", " ").strip()
                 if command in CHOICE_MAP and option in CHOICE_MAP[command]:
                     if config_value.lower() not in CHOICE_MAP[command][option]:
-                        errors.append('Invalid [%s] value for %s: %s. Must be one of: %s.' %
-                                      (command, option, config.get(command, option), ', '.join(
-                                          choice for choice in CHOICE_MAP[command][option])))
+                        errors.append(
+                            "Invalid [%s] value for %s: %s. Must be one of: %s."
+                            % (
+                                command,
+                                option,
+                                config.get(command, option),
+                                ", ".join(choice for choice in CHOICE_MAP[command][option]),
+                            )
+                        )
                         continue
                 out_map[command][option] = config_value
                 continue
@@ -469,9 +497,8 @@ def __init__(self, init_log: Tuple[int, str], reason: Optional[Exception] = None


 class ConfigRegistry:
-
     def __init__(self, path: Optional[str] = None, throw_exception: bool = True):
-        self._config: ConfigDict = {} # Options set in the loaded config file.
+        self._config: ConfigDict = {}  # Options set in the loaded config file.
         self._init_log: List[Tuple[int, str]] = []
         self._initialized = False
@@ -487,7 +514,7 @@ def __init__(self, path: Optional[str] = None, throw_exception: bool = True):
             self._init_log = ex.init_log
             if ex.reason is not None:
                 self._init_log += [
-                    (logging.ERROR, 'Error: %s' % str(ex.reason).replace('\t', ' ')),
+                    (logging.ERROR, "Error: %s" % str(ex.reason).replace("\t", " ")),
                 ]
                 self._initialized = False
@@ -527,13 +554,13 @@ def _load_from_disk(self, path=None):
         # Try to load and parse the config file at `path`.
         config = ConfigParser()
         try:
-            with open(path, 'r') as config_file:
+            with open(path) as config_file:
                 config_file_contents = config_file.read()
             config.read_string(config_file_contents, source=path)
         except ParsingError as ex:
-            raise ConfigLoadFailure(self._init_log, reason=ex)
+            raise ConfigLoadFailure(self._init_log, reason=ex) from None
         except OSError as ex:
-            raise ConfigLoadFailure(self._init_log, reason=ex)
+            raise ConfigLoadFailure(self._init_log, reason=ex) from None
         # At this point the config file syntax is correct, but we need to still validate
         # the parsed options (i.e. that the options have valid values).
         errors = _validate_structure(config)
@@ -548,11 +575,13 @@ def is_default(self, command: str, option: str) -> bool:
         """True if specified config option is unset (i.e. the default), False otherwise."""
         return not (command in self._config and option in self._config[command])

-    def get_value(self,
-                  command: str,
-                  option: str,
-                  override: Optional[ConfigValue] = None,
-                  ignore_default: bool = False) -> ConfigValue:
+    def get_value(
+        self,
+        command: str,
+        option: str,
+        override: Optional[ConfigValue] = None,
+        ignore_default: bool = False,
+    ) -> ConfigValue:
         """Get the current setting or default value of the specified command option."""
         assert command in CONFIG_MAP and option in CONFIG_MAP[command]
         if override is not None:
@@ -567,10 +596,9 @@ def get_value(self,
             return value.value
         return value

-    def get_help_string(self,
-                        command: str,
-                        option: str,
-                        show_default: Optional[bool] = None) -> str:
+    def get_help_string(
+        self, command: str, option: str, show_default: Optional[bool] = None
+    ) -> str:
         """Get a string to specify for the help text indicating the current command option value,
         if set, or the default.
@@ -584,11 +612,12 @@ def get_help_string(self,
         is_flag = isinstance(CONFIG_MAP[command][option], bool)
         if command in self._config and option in self._config[command]:
             if is_flag:
-                value_str = 'on' if self._config[command][option] else 'off'
+                value_str = "on" if self._config[command][option] else "off"
             else:
                 value_str = str(self._config[command][option])
-            return ' [setting: %s]' % (value_str)
-        if show_default is False or (show_default is None and is_flag
-                                     and CONFIG_MAP[command][option] is False):
-            return ''
-        return ' [default: %s]' % (str(CONFIG_MAP[command][option]))
+            return " [setting: %s]" % (value_str)
+        if show_default is False or (
+            show_default is None and is_flag and CONFIG_MAP[command][option] is False
+        ):
+            return ""
+        return " [default: %s]" % (str(CONFIG_MAP[command][option]))
diff --git a/scenedetect/_cli/context.py b/scenedetect/_cli/context.py
index ee583727..de0e95a0 100644
--- a/scenedetect/_cli/context.py
+++ b/scenedetect/_cli/context.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 #         PySceneDetect: Python-Based Video Scene Detector
 #   -------------------------------------------------------------------
@@ -15,41 +14,49 @@
 import logging
 import os
 import typing as ty
-from typing import Any, AnyStr, Dict, Optional, Tuple, Type

 import click

-import scenedetect
-
-from scenedetect import open_video, AVAILABLE_BACKENDS
-
-from scenedetect.scene_detector import SceneDetector, FlashFilter
-from scenedetect.platform import get_and_create_path, get_cv2_imwrite_params, init_logger
-from scenedetect.frame_timecode import FrameTimecode, MAX_FPS_DELTA
-from scenedetect.video_stream import VideoStream, VideoOpenFailure, FrameRateUnavailable
-from scenedetect.video_splitter import is_mkvmerge_available, is_ffmpeg_available
-from scenedetect.detectors import AdaptiveDetector, ContentDetector, ThresholdDetector, HistogramDetector
+import scenedetect  # Required to access __version__
+from scenedetect import AVAILABLE_BACKENDS, open_video
+from scenedetect._cli.config import (
+    CHOICE_MAP,
+    DEFAULT_JPG_QUALITY,
+    DEFAULT_WEBP_QUALITY,
+    ConfigLoadFailure,
+    ConfigRegistry,
+    TimecodeFormat,
+)
+from scenedetect.detectors import (
+    AdaptiveDetector,
+    ContentDetector,
+    HashDetector,
+    HistogramDetector,
+    ThresholdDetector,
+)
+from scenedetect.frame_timecode import MAX_FPS_DELTA, FrameTimecode
+from scenedetect.platform import get_cv2_imwrite_params, init_logger
+from scenedetect.scene_detector import FlashFilter, SceneDetector
+from scenedetect.scene_manager import Interpolation, SceneManager
 from scenedetect.stats_manager import StatsManager
-from scenedetect.scene_manager import SceneManager, Interpolation
-
-from scenedetect._cli.config import (ConfigRegistry, ConfigLoadFailure, TimecodeFormat, CHOICE_MAP,
-                                     DEFAULT_JPG_QUALITY, DEFAULT_WEBP_QUALITY)
+from scenedetect.video_splitter import is_ffmpeg_available, is_mkvmerge_available
+from scenedetect.video_stream import FrameRateUnavailable, VideoOpenFailure, VideoStream

-logger = logging.getLogger('pyscenedetect')
+logger = logging.getLogger("pyscenedetect")

 USER_CONFIG = ConfigRegistry(throw_exception=False)


-def parse_timecode(value: ty.Optional[str],
-                   frame_rate: float,
-                   correct_pts: bool = False) -> FrameTimecode:
+def parse_timecode(
+    value: ty.Optional[str], frame_rate: float, correct_pts: bool = False
+) -> FrameTimecode:
     """Parses a user input string into a FrameTimecode assuming the given framerate.

     If value is None, None will be returned instead of processing the value.

     Raises:
         click.BadParameter
-    """
+    """
     if value is None:
         return None
     try:
@@ -60,16 +67,17 @@ def parse_timecode(value: ty.Optional[str],
         return FrameTimecode(timecode=value, fps=frame_rate)
     except ValueError as ex:
         raise click.BadParameter(
-            'timecode must be in seconds (100.0), frames (100), or HH:MM:SS') from ex
+            "timecode must be in seconds (100.0), frames (100), or HH:MM:SS"
+        ) from ex


 def contains_sequence_or_url(video_path: str) -> bool:
     """Checks if the video path is a URL or image sequence."""
-    return '%' in video_path or '://' in video_path
+    return "%" in video_path or "://" in video_path


 def check_split_video_requirements(use_mkvmerge: bool) -> None:
-    """ Validates that the proper tool is available on the system to perform the
+    """Validates that the proper tool is available on the system to perform the
     `split-video` command.

     Arguments:
@@ -81,19 +89,19 @@ def check_split_video_requirements(use_mkvmerge: bool) -> None:
     if (use_mkvmerge and not is_mkvmerge_available()) or not is_ffmpeg_available():
         error_strs = [
             "{EXTERN_TOOL} is required for split-video{EXTRA_ARGS}.".format(
-                EXTERN_TOOL='mkvmerge' if use_mkvmerge else 'ffmpeg',
-                EXTRA_ARGS=' when mkvmerge (-m) is set' if use_mkvmerge else '')
+                EXTERN_TOOL="mkvmerge" if use_mkvmerge else "ffmpeg",
+                EXTRA_ARGS=" when mkvmerge (-m) is set" if use_mkvmerge else "",
+            )
         ]
-        error_strs += ['Ensure the program is available on your system and try again.']
+        error_strs += ["Ensure the program is available on your system and try again."]
         if not use_mkvmerge and is_mkvmerge_available():
-            error_strs += ['You can specify mkvmerge (-m) to use mkvmerge for splitting.']
+            error_strs += ["You can specify mkvmerge (-m) to use mkvmerge for splitting."]
         elif use_mkvmerge and is_ffmpeg_available():
-            error_strs += ['You can specify copy (-c) to use ffmpeg stream copying.']
-        error_str = '\n'.join(error_strs)
-        raise click.BadParameter(error_str, param_hint='split-video')
+            error_strs += ["You can specify copy (-c) to use ffmpeg stream copying."]
+        error_str = "\n".join(error_strs)
+        raise click.BadParameter(error_str, param_hint="split-video")


-# pylint: disable=too-many-instance-attributes,too-many-arguments,too-many-locals
 class CliContext:
     """Context of the command-line interface and config file parameters passed between
     sub-commands.
@@ -113,65 +121,66 @@ def __init__(self):
         self.added_detector: bool = False

         # Global `scenedetect` Options
-        self.output_dir: str = None # -o/--output
-        self.quiet_mode: bool = None # -q/--quiet or -v/--verbosity quiet
-        self.stats_file_path: str = None # -s/--stats
-        self.drop_short_scenes: bool = None # --drop-short-scenes
-        self.merge_last_scene: bool = None # --merge-last-scene
-        self.min_scene_len: FrameTimecode = None # -m/--min-scene-len
-        self.frame_skip: int = None # -fs/--frame-skip
-        self.default_detector: Tuple[Type[SceneDetector],
-                                     Dict[str, Any]] = None # [global] default-detector
+        self.output_dir: str = None  # -o/--output
+        self.quiet_mode: bool = None  # -q/--quiet or -v/--verbosity quiet
+        self.stats_file_path: str = None  # -s/--stats
+        self.drop_short_scenes: bool = None  # --drop-short-scenes
+        self.merge_last_scene: bool = None  # --merge-last-scene
+        self.min_scene_len: FrameTimecode = None  # -m/--min-scene-len
+        self.frame_skip: int = None  # -fs/--frame-skip
+        self.default_detector: ty.Tuple[ty.Type[SceneDetector], ty.Dict[str, ty.Any]] = (
+            None  # [global] default-detector
+        )

         # `time` Command Options
         self.time: bool = False
-        self.start_time: FrameTimecode = None # time -s/--start
-        self.end_time: FrameTimecode = None # time -e/--end
-        self.duration: FrameTimecode = None # time -d/--duration
+        self.start_time: FrameTimecode = None  # time -s/--start
+        self.end_time: FrameTimecode = None  # time -e/--end
+        self.duration: FrameTimecode = None  # time -d/--duration

         # `save-images` Command Options
         self.save_images: bool = False
-        self.image_extension: str = None # save-images -j/--jpeg, -w/--webp, -p/--png
-        self.image_dir: str = None # save-images -o/--output
-        self.image_param: int = None # save-images -q/--quality if -j/-w,
-        # otherwise -c/--compression if -p
-        self.image_name_format: str = None # save-images -f/--name-format
-        self.num_images: int = None # save-images -n/--num-images
-        self.frame_margin: int = 1 # save-images -m/--frame-margin
-        self.scale: float = None # save-images -s/--scale
-        self.height: int = None # save-images -h/--height
-        self.width: int = None # save-images -w/--width
-        self.scale_method: Interpolation = None # [save-images] scale-method
+        self.image_extension: str = None  # save-images -j/--jpeg, -w/--webp, -p/--png
+        self.image_dir: str = None  # save-images -o/--output
+        self.image_param: int = None  # save-images -q/--quality if -j/-w,
+        # otherwise -c/--compression if -p
+        self.image_name_format: str = None  # save-images -f/--name-format
+        self.num_images: int = None  # save-images -n/--num-images
+        self.frame_margin: int = 1  # save-images -m/--frame-margin
+        self.scale: float = None  # save-images -s/--scale
+        self.height: int = None  # save-images -h/--height
+        self.width: int = None  # save-images -w/--width
+        self.scale_method: Interpolation = None  # [save-images] scale-method

         # `split-video` Command Options
         self.split_video: bool = False
-        self.split_mkvmerge: bool = None # split-video -m/--mkvmerge
-        self.split_args: str = None # split-video -a/--args, -c/--copy
-        self.split_dir: str = None # split-video -o/--output
-        self.split_name_format: str = None # split-video -f/--filename
-        self.split_quiet: bool = None # split-video -q/--quiet
+        self.split_mkvmerge: bool = None  # split-video -m/--mkvmerge
+        self.split_args: str = None  # split-video -a/--args, -c/--copy
+        self.split_dir: str = None  # split-video -o/--output
+        self.split_name_format: str = None  # split-video -f/--filename
+        self.split_quiet: bool = None  # split-video -q/--quiet

         # `list-scenes` Command Options
         self.list_scenes: bool = False
-        self.list_scenes_quiet: bool = None # list-scenes -q/--quiet
-        self.scene_list_dir: str = None # list-scenes -o/--output
-        self.scene_list_name_format: str = None # list-scenes -f/--filename
-        self.scene_list_output: bool = None # list-scenes -n/--no-output-file
-        self.skip_cuts: bool = None # list-scenes -s/--skip-cuts
-        self.display_cuts: bool = True # [list-scenes] display-cuts
-        self.display_scenes: bool = True # [list-scenes] display-scenes
-        self.cut_format: TimecodeFormat = TimecodeFormat.TIMECODE # [list-scenes] cut-format
+        self.list_scenes_quiet: bool = None  # list-scenes -q/--quiet
+        self.scene_list_dir: str = None  # list-scenes -o/--output
+        self.scene_list_name_format: str = None  # list-scenes -f/--filename
+        self.scene_list_output: bool = None  # list-scenes -n/--no-output-file
+        self.skip_cuts: bool = None  # list-scenes -s/--skip-cuts
+        self.display_cuts: bool = True  # [list-scenes] display-cuts
+        self.display_scenes: bool = True  # [list-scenes] display-scenes
+        self.cut_format: TimecodeFormat = TimecodeFormat.TIMECODE  # [list-scenes] cut-format

         # `export-html` Command Options
         self.export_html: bool = False
-        self.html_name_format: str = None # export-html -f/--filename
-        self.html_include_images: bool = None # export-html --no-images
-        self.image_width: int = None # export-html -w/--image-width
-        self.image_height: int = None # export-html -h/--image-height
+        self.html_name_format: str = None  # export-html -f/--filename
+        self.html_include_images: bool = None  # export-html --no-images
+        self.image_width: int = None  # export-html -w/--image-width
+        self.image_height: int = None  # export-html -h/--image-height

         # `load-scenes` Command Options
-        self.load_scenes_input: str = None # load-scenes -i/--input
-        self.load_scenes_column_name: str = None # load-scenes -c/--start-col-name
+        self.load_scenes_input: str = None  # load-scenes -i/--input
+        self.load_scenes_column_name: str = None  # load-scenes -c/--start-col-name

     #
     # Command Handlers
@@ -179,21 +188,21 @@ def __init__(self):

     def handle_options(
         self,
-        input_path: AnyStr,
-        output: Optional[AnyStr],
+        input_path: ty.AnyStr,
+        output: ty.Optional[ty.AnyStr],
         framerate: float,
-        stats_file: Optional[AnyStr],
-        downscale: Optional[int],
+        stats_file: ty.Optional[ty.AnyStr],
+        downscale: ty.Optional[int],
         frame_skip: int,
         min_scene_len: str,
         drop_short_scenes: bool,
         merge_last_scene: bool,
-        backend: Optional[str],
+        backend: ty.Optional[str],
         quiet: bool,
-        logfile: Optional[AnyStr],
-        config: Optional[AnyStr],
-        stats: Optional[AnyStr],
-        verbosity: Optional[str],
+        logfile: ty.Optional[ty.AnyStr],
+        config: ty.Optional[ty.AnyStr],
+        stats: ty.Optional[ty.AnyStr],
+        verbosity: ty.Optional[str],
     ):
         """Parse all global options/arguments passed to the main scenedetect command,
         before other sub-commands (e.g. this function processes the [options] when calling
@@ -218,9 +227,9 @@ def handle_options(
             self.config = ConfigRegistry(config)
             init_log += self.config.get_init_log()
             # Re-initialize logger with the correct verbosity.
-            if verbosity is None and not self.config.is_default('global', 'verbosity'):
-                verbosity_str = self.config.get_value('global', 'verbosity')
-                assert verbosity_str in CHOICE_MAP['global']['verbosity']
+            if verbosity is None and not self.config.is_default("global", "verbosity"):
+                verbosity_str = self.config.get_value("global", "verbosity")
+                assert verbosity_str in CHOICE_MAP["global"]["verbosity"]
                 self.quiet_mode = False
                 self._initialize_logging(verbosity=verbosity_str, logfile=logfile)

@@ -228,11 +237,11 @@ def handle_options(
             init_failure = True
             init_log += ex.init_log
             if ex.reason is not None:
-                init_log += [(logging.ERROR, 'Error: %s' % str(ex.reason).replace('\t', ' '))]
+                init_log += [(logging.ERROR, "Error: %s" % str(ex.reason).replace("\t", " "))]
         finally:
             # Make sure we print the version number even on any kind of init failure.
-            logger.info('PySceneDetect %s', scenedetect.__version__)
-            for (log_level, log_str) in init_log:
+            logger.info("PySceneDetect %s", scenedetect.__version__)
+            for log_level, log_str in init_log:
                 logger.log(log_level, log_str)
             if init_failure:
                 logger.critical("Error processing configuration file.")
@@ -241,16 +250,17 @@ def handle_options(
         if self.config.config_dict:
             logger.debug("Current configuration:\n%s", str(self.config.config_dict))

-        logger.debug('Parsing program options.')
+        logger.debug("Parsing program options.")
         if stats is not None and frame_skip:
             error_strs = [
-                'Unable to detect scenes with stats file if frame skip is not 0.',
-                ' Either remove the -fs/--frame-skip option, or the -s/--stats file.\n'
+                "Unable to detect scenes with stats file if frame skip is not 0.",
+                " Either remove the -fs/--frame-skip option, or the -s/--stats file.\n",
             ]
-            logger.error('\n'.join(error_strs))
+            logger.error("\n".join(error_strs))
             raise click.BadParameter(
-                'Combining the -s/--stats and -fs/--frame-skip options is not supported.',
-                param_hint='frame skip + stats file')
+                "Combining the -s/--stats and -fs/--frame-skip options is not supported.",
+                param_hint="frame skip + stats file",
+            )

         # Handle the case where -i/--input was not specified (e.g. for the `help` command).
         if input_path is None:
@@ -260,19 +270,25 @@ def handle_options(
         self._open_video_stream(
             input_path=input_path,
             framerate=framerate,
-            backend=self.config.get_value("global", "backend", backend, ignore_default=True))
+            backend=self.config.get_value("global", "backend", backend, ignore_default=True),
+        )

         self.output_dir = output if output else self.config.get_value("global", "output")
         if self.output_dir:
-            logger.info('Output directory set:\n %s', self.output_dir)
+            logger.info("Output directory set:\n %s", self.output_dir)

         self.min_scene_len = parse_timecode(
-            min_scene_len if min_scene_len is not None else self.config.get_value(
-                "global", "min-scene-len"), self.video_stream.frame_rate)
+            min_scene_len
+            if min_scene_len is not None
+            else self.config.get_value("global", "min-scene-len"),
+            self.video_stream.frame_rate,
+        )
         self.drop_short_scenes = drop_short_scenes or self.config.get_value(
-            "global", "drop-short-scenes")
+            "global", "drop-short-scenes"
+        )
         self.merge_last_scene = merge_last_scene or self.config.get_value(
-            "global", "merge-last-scene")
+            "global", "merge-last-scene"
+        )
         self.frame_skip = self.config.get_value("global", "frame-skip", frame_skip)

         # Create StatsManager if --stats is specified.
@@ -282,20 +298,20 @@ def handle_options(

         # Initialize default detector with values in the config file.
         default_detector = self.config.get_value("global", "default-detector")
-        if default_detector == 'detect-adaptive':
+        if default_detector == "detect-adaptive":
             self.default_detector = (AdaptiveDetector, self.get_detect_adaptive_params())
-        elif default_detector == 'detect-content':
+        elif default_detector == "detect-content":
             self.default_detector = (ContentDetector, self.get_detect_content_params())
-        elif default_detector == 'detect-hash':
+        elif default_detector == "detect-hash":
             self.default_detector = (HashDetector, self.get_detect_hash_params())
-        elif default_detector == 'detect-hist':
+        elif default_detector == "detect-hist":
             self.default_detector = (HistogramDetector, self.get_detect_hist_params())
-        elif default_detector == 'detect-threshold':
+        elif default_detector == "detect-threshold":
             self.default_detector = (ThresholdDetector, self.get_detect_threshold_params())
         else:
-            raise click.BadParameter("Unknown detector type!", param_hint='default-detector')
+            raise click.BadParameter("Unknown detector type!", param_hint="default-detector")

-        logger.debug('Initializing SceneManager.')
+        logger.debug("Initializing SceneManager.")
         scene_manager = SceneManager(self.stats_manager)

         if downscale is None and self.config.is_default("global", "downscale"):
@@ -307,20 +323,21 @@ def handle_options(
                 scene_manager.downscale = downscale
             except ValueError as ex:
                 logger.debug(str(ex))
-                raise click.BadParameter(str(ex), param_hint='downscale factor')
-        scene_manager.interpolation = Interpolation[self.config.get_value(
-            'global', 'downscale-method').upper()]
+                raise click.BadParameter(str(ex), param_hint="downscale factor") from None
+        scene_manager.interpolation = Interpolation[
+            self.config.get_value("global", "downscale-method").upper()
+        ]
         self.scene_manager = scene_manager

     def get_detect_content_params(
         self,
-        threshold: Optional[float] = None,
+        threshold: ty.Optional[float] = None,
         luma_only: bool = None,
-        min_scene_len: Optional[str] = None,
-        weights: Optional[Tuple[float, float, float, float]] = None,
-        kernel_size: Optional[int] = None,
-        filter_mode: Optional[str] = None,
-    ) -> Dict[str, Any]:
+        min_scene_len: ty.Optional[str] = None,
+        weights: ty.Optional[ty.Tuple[float, float, float, float]] = None,
+        kernel_size: ty.Optional[int] = None,
+        filter_mode: ty.Optional[str] = None,
+    ) -> ty.Dict[str, ty.Any]:
         """Handle detect-content command options and return args to construct one with."""
         self._ensure_input_open()

@@ -328,10 +345,10 @@ def get_detect_content_params(
             min_scene_len = 0
         else:
             if min_scene_len is None:
-                if self.config.is_default('detect-content', 'min-scene-len'):
+                if self.config.is_default("detect-content", "min-scene-len"):
                     min_scene_len = self.min_scene_len.frame_num
                 else:
-                    min_scene_len = self.config.get_value('detect-content', 'min-scene-len')
+                    min_scene_len = self.config.get_value("detect-content", "min-scene-len")
             min_scene_len = parse_timecode(min_scene_len, self.video_stream.frame_rate).frame_num

         if weights is not None:
@@ -339,50 +356,48 @@ def get_detect_content_params(
                 weights = ContentDetector.Components(*weights)
             except ValueError as ex:
                 logger.debug(str(ex))
-                raise click.BadParameter(str(ex), param_hint='weights')
+                raise click.BadParameter(str(ex), param_hint="weights") from None
         return {
-            'weights':
-                self.config.get_value('detect-content', 'weights', weights),
-            'kernel_size':
-                self.config.get_value('detect-content', 'kernel-size', kernel_size),
-            'luma_only':
-                luma_only or self.config.get_value('detect-content', 'luma-only'),
-            'min_scene_len':
-                min_scene_len,
-            'threshold':
-                self.config.get_value('detect-content', 'threshold', threshold),
-            'filter_mode':
-                FlashFilter.Mode[self.config.get_value("detect-content", "filter-mode",
                                                       filter_mode).upper()],
+            "weights": self.config.get_value("detect-content", "weights", weights),
+            "kernel_size": self.config.get_value("detect-content", "kernel-size", kernel_size),
+            "luma_only": luma_only or self.config.get_value("detect-content", "luma-only"),
+            "min_scene_len": min_scene_len,
+            "threshold": self.config.get_value("detect-content", "threshold", threshold),
+            "filter_mode": FlashFilter.Mode[
+                self.config.get_value("detect-content", "filter-mode", filter_mode).upper()
+            ],
         }

     def get_detect_adaptive_params(
         self,
-        threshold: Optional[float] = None,
-        min_content_val: Optional[float] = None,
-        frame_window: Optional[int] = None,
+        threshold: ty.Optional[float] = None,
+        min_content_val: ty.Optional[float] = None,
+        frame_window: ty.Optional[int] = None,
         luma_only: bool = None,
-        min_scene_len: Optional[str] = None,
-        weights: Optional[Tuple[float, float, float, float]] = None,
-        kernel_size: Optional[int] = None,
-        min_delta_hsv: Optional[float] = None,
-    ) -> Dict[str, Any]:
+        min_scene_len: ty.Optional[str] = None,
+        weights: ty.Optional[ty.Tuple[float, float, float, float]] = None,
+        kernel_size: ty.Optional[int] = None,
+        min_delta_hsv: ty.Optional[float] = None,
+    ) -> ty.Dict[str, ty.Any]:
         """Handle detect-adaptive command options and return args to construct one with."""
         self._ensure_input_open()

         # TODO(v0.7): Remove these branches when removing -d/--min-delta-hsv.
         if min_delta_hsv is not None:
-            logger.error('-d/--min-delta-hsv is deprecated, use -c/--min-content-val instead.')
+            logger.error("-d/--min-delta-hsv is deprecated, use -c/--min-content-val instead.")
             if min_content_val is None:
                 min_content_val = min_delta_hsv
         # Handle case where deprecated min-delta-hsv is set, and use it to set min-content-val.
         if not self.config.is_default("detect-adaptive", "min-delta-hsv"):
-            logger.error('[detect-adaptive] config file option `min-delta-hsv` is deprecated'
-                         ', use `min-delta-hsv` instead.')
+            logger.error(
+                "[detect-adaptive] config file option `min-delta-hsv` is deprecated"
+                ", use `min-delta-hsv` instead."
+            )
             if self.config.is_default("detect-adaptive", "min-content-val"):
                 self.config.config_dict["detect-adaptive"]["min-content-val"] = (
-                    self.config.config_dict["detect-adaptive"]["min-deleta-hsv"])
+                    self.config.config_dict["detect-adaptive"]["min-deleta-hsv"]
+                )

         if self.drop_short_scenes:
             min_scene_len = 0
@@ -399,31 +414,26 @@ def get_detect_adaptive_params(
                 weights = ContentDetector.Components(*weights)
             except ValueError as ex:
                 logger.debug(str(ex))
-                raise click.BadParameter(str(ex), param_hint='weights')
+                raise click.BadParameter(str(ex), param_hint="weights") from None
         return {
-            'adaptive_threshold':
-                self.config.get_value("detect-adaptive", "threshold", threshold),
-            'weights':
-                self.config.get_value("detect-adaptive", "weights", weights),
-            'kernel_size':
-                self.config.get_value("detect-adaptive", "kernel-size", kernel_size),
-            'luma_only':
-                luma_only or self.config.get_value("detect-adaptive", "luma-only"),
-            'min_content_val':
-                self.config.get_value("detect-adaptive", "min-content-val", min_content_val),
-            'min_scene_len':
-                min_scene_len,
-            'window_width':
-                self.config.get_value("detect-adaptive", "frame-window", frame_window),
+            "adaptive_threshold": self.config.get_value("detect-adaptive", "threshold", threshold),
+            "weights": self.config.get_value("detect-adaptive", "weights", weights),
+            "kernel_size": self.config.get_value("detect-adaptive", "kernel-size", kernel_size),
+            "luma_only": luma_only or self.config.get_value("detect-adaptive", "luma-only"),
+            "min_content_val": self.config.get_value(
+                "detect-adaptive", "min-content-val", min_content_val
+            ),
+            "min_scene_len": min_scene_len,
+            "window_width": self.config.get_value("detect-adaptive", "frame-window", frame_window),
         }

     def get_detect_threshold_params(
         self,
-        threshold: Optional[float] = None,
-        fade_bias: Optional[float] = None,
+        threshold: ty.Optional[float] = None,
+        fade_bias: ty.Optional[float] = None,
         add_last_scene: bool = None,
-        min_scene_len: Optional[str] = None,
-    ) -> Dict[str, Any]:
+        min_scene_len: ty.Optional[str] = None,
+    ) -> ty.Dict[str, ty.Any]:
         """Handle detect-threshold command options and return args to construct one with."""
         self._ensure_input_open()

@@ -438,17 +448,14 @@ def get_detect_threshold_params(
             min_scene_len = parse_timecode(min_scene_len, self.video_stream.frame_rate).frame_num
         # TODO(v1.0): add_last_scene cannot be disabled right now.
         return {
-            'add_final_scene':
-                add_last_scene or self.config.get_value("detect-threshold", "add-last-scene"),
-            'fade_bias':
-                self.config.get_value("detect-threshold", "fade-bias", fade_bias),
-            'min_scene_len':
-                min_scene_len,
-            'threshold':
-                self.config.get_value("detect-threshold", "threshold", threshold),
+            "add_final_scene": add_last_scene
+            or self.config.get_value("detect-threshold", "add-last-scene"),
+            "fade_bias": self.config.get_value("detect-threshold", "fade-bias", fade_bias),
+            "min_scene_len": min_scene_len,
+            "threshold": self.config.get_value("detect-threshold", "threshold", threshold),
         }

-    def handle_load_scenes(self, input: AnyStr, start_col_name: Optional[str]):
+    def handle_load_scenes(self, input: ty.AnyStr, start_col_name: ty.Optional[str]):
         """Handle `load-scenes` command options."""
         self._ensure_input_open()
         if self.added_detector:
@@ -458,13 +465,19 @@ def handle_load_scenes(self, input: AnyStr, start_col_name: Optional[str]):
         input = os.path.abspath(input)
         if not os.path.exists(input):
             raise click.BadParameter(
-                f'Could not load scenes, file does not exist: {input}', param_hint='-i/--input')
+                f"Could not load scenes, file does not exist: {input}", param_hint="-i/--input"
+            )
         self.load_scenes_input = input
-        self.load_scenes_column_name = self.config.get_value("load-scenes", "start-col-name",
                                                             start_col_name)
+        self.load_scenes_column_name = self.config.get_value(
+            "load-scenes", "start-col-name", start_col_name
+        )

-    def get_detect_hist_params(self, threshold: Optional[float], bins: Optional[int],
-                               min_scene_len: Optional[str]) -> Dict[str, Any]:
+    def get_detect_hist_params(
+        self,
+        threshold: ty.Optional[float] = None,
+        bins: ty.Optional[int] = None,
+        min_scene_len: ty.Optional[str] = None,
+    ) -> ty.Dict[str, ty.Any]:
         """Handle detect-hist command options and return args to construct one with."""
         self._ensure_input_open()
         if self.drop_short_scenes:
@@ -477,14 +490,18 @@ def get_detect_hist_params(self, threshold: Optional[float], bins: Optional[int]
                 min_scene_len = self.config.get_value("detect-hist", "min-scene-len")
             min_scene_len = parse_timecode(min_scene_len, self.video_stream.frame_rate).frame_num
         return {
-            'bins': self.config.get_value("detect-hist", "bins", bins),
-            'min_scene_len': min_scene_len,
-            'threshold': self.config.get_value("detect-hist", "threshold", threshold),
+            "bins": self.config.get_value("detect-hist", "bins", bins),
+            "min_scene_len": min_scene_len,
+            "threshold": self.config.get_value("detect-hist", "threshold", threshold),
         }

-    def get_detect_hash_params(self, threshold: Optional[float], size: Optional[int],
-                               lowpass: Optional[int],
-                               min_scene_len: Optional[str]) -> Dict[str, Any]:
+    def get_detect_hash_params(
+        self,
+        threshold: ty.Optional[float] = None,
+        size: ty.Optional[int] = None,
+        lowpass: ty.Optional[int] = None,
+        min_scene_len: ty.Optional[str] = None,
+    ) -> ty.Dict[str, ty.Any]:
         """Handle detect-hash command options and return args to construct one with."""
         self._ensure_input_open()
         if self.drop_short_scenes:
@@ -505,35 +522,36 @@ def get_detect_hash_params(self, threshold: Optional[float], size: Optional[int]

     def handle_export_html(
         self,
-        filename: Optional[AnyStr],
+        filename: ty.Optional[ty.AnyStr],
         no_images: bool,
-        image_width: Optional[int],
-        image_height: Optional[int],
+        image_width: ty.Optional[int],
+        image_height: ty.Optional[int],
     ):
         """Handle `export-html` command options."""
         self._ensure_input_open()
         if self.export_html:
-            self._on_duplicate_command('export_html')
+            self._on_duplicate_command("export_html")

-        no_images = 
no_images or self.config.get_value('export-html', 'no-images') + no_images = no_images or self.config.get_value("export-html", "no-images") self.html_include_images = not no_images - self.html_name_format = self.config.get_value('export-html', 'filename', filename) - self.image_width = self.config.get_value('export-html', 'image-width', image_width) - self.image_height = self.config.get_value('export-html', 'image-height', image_height) + self.html_name_format = self.config.get_value("export-html", "filename", filename) + self.image_width = self.config.get_value("export-html", "image-width", image_width) + self.image_height = self.config.get_value("export-html", "image-height", image_height) if not self.save_images and not no_images: raise click.BadArgumentUsage( - 'The export-html command requires that the save-images command\n' - 'is specified before it, unless --no-images is specified.') - logger.info('HTML file name format:\n %s', filename) + "The export-html command requires that the save-images command\n" + "is specified before it, unless --no-images is specified." 
+ ) + logger.info("HTML file name format:\n %s", filename) self.export_html = True def handle_list_scenes( self, - output: Optional[AnyStr], - filename: Optional[AnyStr], + output: ty.Optional[ty.AnyStr], + filename: ty.Optional[ty.AnyStr], no_output_file: bool, quiet: bool, skip_cuts: bool, @@ -551,7 +569,8 @@ def handle_list_scenes( no_output_file = no_output_file or self.config.get_value("list-scenes", "no-output-file") self.scene_list_dir = self.config.get_value( - "list-scenes", "output", output, ignore_default=True) + "list-scenes", "output", output, ignore_default=True + ) self.scene_list_name_format = self.config.get_value("list-scenes", "filename", filename) if self.scene_list_name_format is not None and not no_output_file: logger.info("Scene list filename format:\n %s", self.scene_list_name_format) @@ -563,75 +582,78 @@ def handle_list_scenes( def handle_split_video( self, - output: Optional[AnyStr], - filename: Optional[AnyStr], + output: ty.Optional[ty.AnyStr], + filename: ty.Optional[ty.AnyStr], quiet: bool, copy: bool, high_quality: bool, - rate_factor: Optional[int], - preset: Optional[str], - args: Optional[str], + rate_factor: ty.Optional[int], + preset: ty.Optional[str], + args: ty.Optional[str], mkvmerge: bool, ): """Handle `split-video` command options.""" self._ensure_input_open() if self.split_video: - self._on_duplicate_command('split-video') + self._on_duplicate_command("split-video") check_split_video_requirements(use_mkvmerge=mkvmerge) if contains_sequence_or_url(self.video_stream.path): - error_str = 'The split-video command is incompatible with image sequences/URLs.' - raise click.BadParameter(error_str, param_hint='split-video') + error_str = "The split-video command is incompatible with image sequences/URLs." 
+ raise click.BadParameter(error_str, param_hint="split-video") ## ## Common Arguments/Options ## self.split_video = True - self.split_quiet = quiet or self.config.get_value('split-video', 'quiet') - self.split_dir = self.config.get_value('split-video', 'output', output, ignore_default=True) + self.split_quiet = quiet or self.config.get_value("split-video", "quiet") + self.split_dir = self.config.get_value("split-video", "output", output, ignore_default=True) if self.split_dir is not None: - logger.info('Video output path set: \n%s', self.split_dir) - self.split_name_format = self.config.get_value('split-video', 'filename', filename) + logger.info("Video output path set: \n%s", self.split_dir) + self.split_name_format = self.config.get_value("split-video", "filename", filename) # We only load the config values for these flags/options if none of the other # encoder flags/options were set via the CLI to avoid any conflicting options # (e.g. if the config file sets `high-quality = yes` but `--copy` is specified). if not (mkvmerge or copy or high_quality or args or rate_factor or preset): - mkvmerge = self.config.get_value('split-video', 'mkvmerge') - copy = self.config.get_value('split-video', 'copy') - high_quality = self.config.get_value('split-video', 'high-quality') - rate_factor = self.config.get_value('split-video', 'rate-factor') - preset = self.config.get_value('split-video', 'preset') - args = self.config.get_value('split-video', 'args') + mkvmerge = self.config.get_value("split-video", "mkvmerge") + copy = self.config.get_value("split-video", "copy") + high_quality = self.config.get_value("split-video", "high-quality") + rate_factor = self.config.get_value("split-video", "rate-factor") + preset = self.config.get_value("split-video", "preset") + args = self.config.get_value("split-video", "args") # Disallow certain combinations of flags/options. 
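The hunk above keeps the comment explaining that config-file values for the split-video encoder options are only consulted when no CLI encoder flag was given, so the two sources can never conflict. That precedence rule can be sketched in isolation; the function name and plain-dict shapes below are hypothetical illustrations, not PySceneDetect's actual API:

```python
def resolve_encoder_options(cli: dict, config: dict) -> dict:
    """Resolve encoder options with CLI-over-config precedence.

    If *any* encoder flag was set on the command line, the config file is
    ignored for this whole option group (mirroring the comment in the hunk
    above); otherwise every option falls back to the config file.
    """
    keys = ("mkvmerge", "copy", "high_quality", "args", "rate_factor", "preset")
    # Any truthy CLI flag suppresses the config group wholesale.
    if any(cli.get(k) for k in keys):
        return dict(cli)
    return {k: config.get(k) for k in keys}
```

Resolving the whole group at once, rather than option-by-option, is what prevents a config like `high-quality = yes` from colliding with an explicit `--copy` on the command line.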
if mkvmerge or copy: - command = 'mkvmerge (-m)' if mkvmerge else 'copy (-c)' + command = "mkvmerge (-m)" if mkvmerge else "copy (-c)" if high_quality: raise click.BadParameter( - 'high-quality (-hq) cannot be used with %s' % (command), - param_hint='split-video') + "high-quality (-hq) cannot be used with %s" % (command), + param_hint="split-video", + ) if args: raise click.BadParameter( - 'args (-a) cannot be used with %s' % (command), param_hint='split-video') + "args (-a) cannot be used with %s" % (command), param_hint="split-video" + ) if rate_factor: raise click.BadParameter( - 'rate-factor (crf) cannot be used with %s' % (command), - param_hint='split-video') + "rate-factor (crf) cannot be used with %s" % (command), param_hint="split-video" + ) if preset: raise click.BadParameter( - 'preset (-p) cannot be used with %s' % (command), param_hint='split-video') + "preset (-p) cannot be used with %s" % (command), param_hint="split-video" + ) ## ## mkvmerge-Specific Arguments/Options ## if mkvmerge: if copy: - logger.warning('copy mode (-c) ignored due to mkvmerge mode (-m).') + logger.warning("copy mode (-c) ignored due to mkvmerge mode (-m).") self.split_mkvmerge = True - logger.info('Using mkvmerge for video splitting.') + logger.info("Using mkvmerge for video splitting.") return ## @@ -644,96 +666,102 @@ def handle_split_video( rate_factor = 22 if not high_quality else 17 if preset is None: preset = "veryfast" if not high_quality else "slow" - args = ("-map 0:v:0 -map 0:a? -map 0:s? " - f"-c:v libx264 -preset {preset} -crf {rate_factor} -c:a aac") + args = ( + "-map 0:v:0 -map 0:a? -map 0:s? 
" + f"-c:v libx264 -preset {preset} -crf {rate_factor} -c:a aac" + ) - logger.info('ffmpeg arguments: %s', args) + logger.info("ffmpeg arguments: %s", args) self.split_args = args if filename: - logger.info('Output file name format: %s', filename) + logger.info("Output file name format: %s", filename) def handle_save_images( self, - num_images: Optional[int], - output: Optional[AnyStr], - filename: Optional[AnyStr], + num_images: ty.Optional[int], + output: ty.Optional[ty.AnyStr], + filename: ty.Optional[ty.AnyStr], jpeg: bool, webp: bool, - quality: Optional[int], + quality: ty.Optional[int], png: bool, - compression: Optional[int], - frame_margin: Optional[int], - scale: Optional[float], - height: Optional[int], - width: Optional[int], + compression: ty.Optional[int], + frame_margin: ty.Optional[int], + scale: ty.Optional[float], + height: ty.Optional[int], + width: ty.Optional[int], ): """Handle `save-images` command options.""" self._ensure_input_open() if self.save_images: - self._on_duplicate_command('save-images') + self._on_duplicate_command("save-images") - if '://' in self.video_stream.path: - error_str = '\nThe save-images command is incompatible with URLs.' + if "://" in self.video_stream.path: + error_str = "\nThe save-images command is incompatible with URLs." logger.error(error_str) - raise click.BadParameter(error_str, param_hint='save-images') + raise click.BadParameter(error_str, param_hint="save-images") num_flags = sum([1 if flag else 0 for flag in [jpeg, webp, png]]) if num_flags > 1: - logger.error('Multiple image type flags set for save-images command.') + logger.error("Multiple image type flags set for save-images command.") raise click.BadParameter( - 'Only one image type (JPG/PNG/WEBP) can be specified.', param_hint='save-images') + "Only one image type (JPG/PNG/WEBP) can be specified.", param_hint="save-images" + ) # Only use config params for image format if one wasn't specified. 
elif num_flags == 0: - image_format = self.config.get_value('save-images', 'format').lower() - jpeg = image_format == 'jpeg' - webp = image_format == 'webp' - png = image_format == 'png' + image_format = self.config.get_value("save-images", "format").lower() + jpeg = image_format == "jpeg" + webp = image_format == "webp" + png = image_format == "png" # Only use config params for scale/height/width if none of them are specified explicitly. if scale is None and height is None and width is None: - self.scale = self.config.get_value('save-images', 'scale') - self.height = self.config.get_value('save-images', 'height') - self.width = self.config.get_value('save-images', 'width') + self.scale = self.config.get_value("save-images", "scale") + self.height = self.config.get_value("save-images", "height") + self.width = self.config.get_value("save-images", "width") else: self.scale = scale self.height = height self.width = width - self.scale_method = Interpolation[self.config.get_value('save-images', - 'scale-method').upper()] + self.scale_method = Interpolation[ + self.config.get_value("save-images", "scale-method").upper() + ] default_quality = DEFAULT_WEBP_QUALITY if webp else DEFAULT_JPG_QUALITY quality = ( - default_quality if self.config.is_default('save-images', 'quality') else - self.config.get_value('save-images', 'quality')) + default_quality + if self.config.is_default("save-images", "quality") + else self.config.get_value("save-images", "quality") + ) - compression = self.config.get_value('save-images', 'compression', compression) + compression = self.config.get_value("save-images", "compression", compression) self.image_param = compression if png else quality - self.image_extension = 'jpg' if jpeg else 'png' if png else 'webp' + self.image_extension = "jpg" if jpeg else "png" if png else "webp" valid_params = get_cv2_imwrite_params() - if not self.image_extension in valid_params or valid_params[self.image_extension] is None: + if self.image_extension not in 
valid_params or valid_params[self.image_extension] is None: error_strs = [ - 'Image encoder type `%s` not supported.' % self.image_extension.upper(), - 'The specified encoder type could not be found in the current OpenCV module.', - 'To enable this output format, please update the installed version of OpenCV.', - 'If you build OpenCV, ensure the the proper dependencies are enabled. ' + "Image encoder type `%s` not supported." % self.image_extension.upper(), + "The specified encoder type could not be found in the current OpenCV module.", + "To enable this output format, please update the installed version of OpenCV.", + "If you build OpenCV, ensure the proper dependencies are enabled. ", ] - logger.debug('\n'.join(error_strs)) - raise click.BadParameter('\n'.join(error_strs), param_hint='save-images') + logger.debug("\n".join(error_strs)) + raise click.BadParameter("\n".join(error_strs), param_hint="save-images") - self.image_dir = self.config.get_value('save-images', 'output', output, ignore_default=True) + self.image_dir = self.config.get_value("save-images", "output", output, ignore_default=True) - self.image_name_format = self.config.get_value('save-images', 'filename', filename) - self.num_images = self.config.get_value('save-images', 'num-images', num_images) - self.frame_margin = self.config.get_value('save-images', 'frame-margin', frame_margin) + self.image_name_format = self.config.get_value("save-images", "filename", filename) + self.num_images = self.config.get_value("save-images", "num-images", num_images) + self.frame_margin = self.config.get_value("save-images", "frame-margin", frame_margin) - image_type = ('jpeg' if jpeg else self.image_extension).upper() - image_param_type = 'Compression' if png else 'Quality' - image_param_type = ' [%s: %d]' % (image_param_type, self.image_param) - logger.info('Image output format set: %s%s', image_type, image_param_type) + image_type = ("jpeg" if jpeg else self.image_extension).upper() + image_param_type =
"Compression" if png else "Quality" + image_param_type = " [%s: %d]" % (image_param_type, self.image_param) + logger.info("Image output format set: %s%s", image_type, image_param_type) if self.image_dir is not None: - logger.info('Image output directory set:\n %s', os.path.abspath(self.image_dir)) + logger.info("Image output directory set:\n %s", os.path.abspath(self.image_dir)) self.save_images = True @@ -741,13 +769,15 @@ def handle_time(self, start, duration, end): """Handle `time` command options.""" self._ensure_input_open() if self.time: - self._on_duplicate_command('time') + self._on_duplicate_command("time") if duration is not None and end is not None: raise click.BadParameter( - 'Only one of --duration/-d or --end/-e can be specified, not both.', - param_hint='time') - logger.debug('Setting video time:\n start: %s, duration: %s, end: %s', start, duration, - end) + "Only one of --duration/-d or --end/-e can be specified, not both.", + param_hint="time", + ) + logger.debug( + "Setting video time:\n start: %s, duration: %s, end: %s", start, duration, end + ) # *NOTE*: The Python API uses 0-based frame indices, but the CLI uses 1-based indices to # match the default start number used by `ffmpeg` when saving frames as images. As such, # we must correct start time if set as frames. See the test_cli_time* tests for for details. @@ -764,9 +794,9 @@ def handle_time(self, start, duration, end): def _initialize_logging( self, - quiet: Optional[bool] = None, - verbosity: Optional[str] = None, - logfile: Optional[AnyStr] = None, + quiet: ty.Optional[bool] = None, + verbosity: ty.Optional[str] = None, + logfile: ty.Optional[ty.AnyStr] = None, ): """Setup logging based on CLI args and user configuration settings.""" if quiet is not None: @@ -774,29 +804,29 @@ def _initialize_logging( curr_verbosity = logging.INFO # Convert verbosity into it's log level enum, and override quiet mode if set. 
if verbosity is not None: - assert verbosity in CHOICE_MAP['global']['verbosity'] - if verbosity.lower() == 'none': + assert verbosity in CHOICE_MAP["global"]["verbosity"] + if verbosity.lower() == "none": self.quiet_mode = True - verbosity = 'info' + verbosity = "info" else: # Override quiet mode if verbosity is set. self.quiet_mode = False curr_verbosity = getattr(logging, verbosity.upper()) else: - verbosity_str = USER_CONFIG.get_value('global', 'verbosity') - assert verbosity_str in CHOICE_MAP['global']['verbosity'] - if verbosity_str.lower() == 'none': + verbosity_str = USER_CONFIG.get_value("global", "verbosity") + assert verbosity_str in CHOICE_MAP["global"]["verbosity"] + if verbosity_str.lower() == "none": self.quiet_mode = True else: curr_verbosity = getattr(logging, verbosity_str.upper()) # Override quiet mode if verbosity is set. - if not USER_CONFIG.is_default('global', 'verbosity'): + if not USER_CONFIG.is_default("global", "verbosity"): self.quiet_mode = False # Initialize logger with the set CLI args / user configuration. init_logger(log_level=curr_verbosity, show_stdout=not self.quiet_mode, log_file=logfile) def add_detector(self, detector): - """ Add Detector: Adds a detection algorithm to the CliContext's SceneManager. """ + """Add Detector: Adds a detection algorithm to the CliContext's SceneManager.""" if self.load_scenes_input: raise click.ClickException("The load-scenes command cannot be used with detectors.") self._ensure_input_open() @@ -812,40 +842,44 @@ def _ensure_input_open(self) -> None: click.BadParameter: self.video_stream was not initialized. 
""" if self.video_stream is None: - raise click.ClickException('No input video (-i/--input) was specified.') + raise click.ClickException("No input video (-i/--input) was specified.") - def _open_video_stream(self, input_path: AnyStr, framerate: Optional[float], - backend: Optional[str]): - if '%' in input_path and backend != 'opencv': + def _open_video_stream( + self, input_path: ty.AnyStr, framerate: ty.Optional[float], backend: ty.Optional[str] + ): + if "%" in input_path and backend != "opencv": raise click.BadParameter( - 'The OpenCV backend (`--backend opencv`) must be used to process image sequences.', - param_hint='-i/--input') + "The OpenCV backend (`--backend opencv`) must be used to process image sequences.", + param_hint="-i/--input", + ) if framerate is not None and framerate < MAX_FPS_DELTA: - raise click.BadParameter('Invalid framerate specified!', param_hint='-f/--framerate') + raise click.BadParameter("Invalid framerate specified!", param_hint="-f/--framerate") try: if backend is None: - backend = self.config.get_value('global', 'backend') + backend = self.config.get_value("global", "backend") else: - if not backend in AVAILABLE_BACKENDS: + if backend not in AVAILABLE_BACKENDS: raise click.BadParameter( - 'Specified backend %s is not available on this system!' % backend, - param_hint='-b/--backend') + "Specified backend %s is not available on this system!" % backend, + param_hint="-b/--backend", + ) # Open the video with the specified backend, loading any required config settings. 
- if backend == 'pyav': + if backend == "pyav": self.video_stream = open_video( path=input_path, framerate=framerate, backend=backend, - threading_mode=self.config.get_value('backend-pyav', 'threading-mode'), - suppress_output=self.config.get_value('backend-pyav', 'suppress-output'), + threading_mode=self.config.get_value("backend-pyav", "threading-mode"), + suppress_output=self.config.get_value("backend-pyav", "suppress-output"), ) - elif backend == 'opencv': + elif backend == "opencv": self.video_stream = open_video( path=input_path, framerate=framerate, backend=backend, - max_decode_attempts=self.config.get_value('backend-opencv', - 'max-decode-attempts'), + max_decode_attempts=self.config.get_value( + "backend-opencv", "max-decode-attempts" + ), ) # Handle backends without any config options. else: @@ -854,19 +888,23 @@ def _open_video_stream(self, input_path: AnyStr, framerate: Optional[float], framerate=framerate, backend=backend, ) - logger.debug('Video opened using backend %s', type(self.video_stream).__name__) + logger.debug("Video opened using backend %s", type(self.video_stream).__name__) except FrameRateUnavailable as ex: raise click.BadParameter( - 'Failed to obtain framerate for input video. Manually specify framerate with the' - ' -f/--framerate option, or try re-encoding the file.', - param_hint='-i/--input') from ex + "Failed to obtain framerate for input video. 
Manually specify framerate with the" + " -f/--framerate option, or try re-encoding the file.", + param_hint="-i/--input", + ) from ex except VideoOpenFailure as ex: raise click.BadParameter( - 'Failed to open input video%s: %s' % - (' using %s backend' % backend if backend else '', str(ex)), - param_hint='-i/--input') from ex + "Failed to open input video%s: %s" + % (" using %s backend" % backend if backend else "", str(ex)), + param_hint="-i/--input", + ) from ex except OSError as ex: - raise click.BadParameter('Input error:\n\n\t%s\n' % str(ex), param_hint='-i/--input') + raise click.BadParameter( + "Input error:\n\n\t%s\n" % str(ex), param_hint="-i/--input" + ) from None def _on_duplicate_command(self, command: str) -> None: """Called when a command is duplicated to stop parsing and raise an error. @@ -878,10 +916,11 @@ def _on_duplicate_command(self, command: str) -> None: click.BadParameter """ error_strs = [] - error_strs.append('Error: Command %s specified multiple times.' % command) - error_strs.append('The %s command may appear only one time.') + error_strs.append("Error: Command %s specified multiple times." % command) + error_strs.append("The %s command may appear only one time." % command) - logger.error('\n'.join(error_strs)) + logger.error("\n".join(error_strs)) raise click.BadParameter( - '\n Command %s may only be specified once.'
% command, + param_hint="%s command" % command, + ) diff --git a/scenedetect/_cli/controller.py b/scenedetect/_cli/controller.py index d7180542..eae039d4 100644 --- a/scenedetect/_cli/controller.py +++ b/scenedetect/_cli/controller.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- @@ -15,21 +14,27 @@ import csv import logging import os -from string import Template import time import typing as ty -from typing import Dict, List, Tuple, Optional from string import Template +from scenedetect._cli.context import CliContext, check_split_video_requirements from scenedetect.frame_timecode import FrameTimecode from scenedetect.platform import get_and_create_path -from scenedetect.scene_manager import get_scenes_from_cuts, save_images, write_scene_list, write_scene_list_html -from scenedetect.video_splitter import split_video_mkvmerge, split_video_ffmpeg +from scenedetect.scene_manager import ( + get_scenes_from_cuts, + save_images, + write_scene_list, + write_scene_list_html, +) +from scenedetect.video_splitter import split_video_ffmpeg, split_video_mkvmerge from scenedetect.video_stream import SeekError -from scenedetect._cli.context import CliContext, check_split_video_requirements +logger = logging.getLogger("pyscenedetect") + +SceneList = ty.List[ty.Tuple[FrameTimecode, FrameTimecode]] -logger = logging.getLogger('pyscenedetect') +CutList = ty.List[FrameTimecode] def run_scenedetect(context: CliContext): @@ -61,11 +66,13 @@ def run_scenedetect(context: CliContext): _save_stats(context) if scene_list: logger.info( - 'Detected %d scenes, average shot length %.1f seconds.', len(scene_list), + "Detected %d scenes, average shot length %.1f seconds.", + len(scene_list), sum([(end_time - start_time).get_seconds() for start_time, end_time in scene_list]) - / float(len(scene_list))) + / float(len(scene_list)), + ) else: - logger.info('No scenes detected.') + 
logger.info("No scenes detected.") # Handle list-scenes command. _list_scenes(context, scene_list, cut_list) @@ -80,48 +87,56 @@ def run_scenedetect(context: CliContext): _split_video(context, scene_list) -def _detect(context: CliContext): +def _detect(context: CliContext) -> ty.Optional[ty.Tuple[SceneList, CutList]]: # Use default detector if one was not specified. if context.scene_manager.get_num_detectors() == 0: detector_type, detector_args = context.default_detector - logger.debug('Using default detector: %s(%s)' % (detector_type.__name__, detector_args)) + logger.debug("Using default detector: %s(%s)" % (detector_type.__name__, detector_args)) context.scene_manager.add_detector(detector_type(**detector_args)) perf_start_time = time.time() if context.start_time is not None: - logger.debug('Seeking to start time...') + logger.debug("Seeking to start time...") try: context.video_stream.seek(target=context.start_time) except SeekError as ex: - logger.critical('Failed to seek to %s / frame %d: %s', - context.start_time.get_timecode(), context.start_time.get_frames(), - str(ex)) - return + logger.critical( + "Failed to seek to %s / frame %d: %s", + context.start_time.get_timecode(), + context.start_time.get_frames(), + str(ex), + ) + return None num_frames = context.scene_manager.detect_scenes( video=context.video_stream, duration=context.duration, end_time=context.end_time, frame_skip=context.frame_skip, - show_progress=not context.quiet_mode) + show_progress=not context.quiet_mode, + ) # Handle case where video failure is most likely due to multiple audio tracks (#179). # TODO(#380): Ensure this does not erroneusly fire. - if num_frames <= 0 and context.video_stream.BACKEND_NAME == 'opencv': + if num_frames <= 0 and context.video_stream.BACKEND_NAME == "opencv": logger.critical( - 'Failed to read any frames from video file. This could be caused by the video' - ' having multiple audio tracks. 
If so, try installing the PyAV backend:\n' - ' pip install av\n' - 'Or remove the audio tracks by running either:\n' - ' ffmpeg -i input.mp4 -c copy -an output.mp4\n' - ' mkvmerge -o output.mkv input.mp4\n' - 'For details, see https://scenedetect.com/faq/') - return + "Failed to read any frames from video file. This could be caused by the video" + " having multiple audio tracks. If so, try installing the PyAV backend:\n" + " pip install av\n" + "Or remove the audio tracks by running either:\n" + " ffmpeg -i input.mp4 -c copy -an output.mp4\n" + " mkvmerge -o output.mkv input.mp4\n" + "For details, see https://scenedetect.com/faq/" + ) + return None perf_duration = time.time() - perf_start_time - logger.info('Processed %d frames in %.1f seconds (average %.2f FPS).', num_frames, - perf_duration, - float(num_frames) / perf_duration) + logger.info( + "Processed %d frames in %.1f seconds (average %.2f FPS).", + num_frames, + perf_duration, + float(num_frames) / perf_duration, + ) # Get list of detected cuts/scenes from the SceneManager to generate the required output # files, based on the given commands (list-scenes, split-video, save-images, etc...). 
@@ -137,34 +152,36 @@ def _save_stats(context: CliContext) -> None: return if context.stats_manager.is_save_required(): path = get_and_create_path(context.stats_file_path, context.output_dir) - logger.info('Saving frame metrics to stats file: %s', path) + logger.info("Saving frame metrics to stats file: %s", path) with open(path, mode="w") as file: context.stats_manager.save_to_csv(csv_file=file) else: - logger.debug('No frame metrics updated, skipping update of the stats file.') + logger.debug("No frame metrics updated, skipping update of the stats file.") -def _list_scenes(context: CliContext, scene_list: List[Tuple[FrameTimecode, FrameTimecode]], - cut_list: List[FrameTimecode]) -> None: +def _list_scenes(context: CliContext, scene_list: SceneList, cut_list: CutList) -> None: """Handles the `list-scenes` command.""" if not context.list_scenes: return # Write scene list CSV to if required. if context.scene_list_output: - scene_list_filename = Template( - context.scene_list_name_format).safe_substitute(VIDEO_NAME=context.video_stream.name) - if not scene_list_filename.lower().endswith('.csv'): - scene_list_filename += '.csv' + scene_list_filename = Template(context.scene_list_name_format).safe_substitute( + VIDEO_NAME=context.video_stream.name + ) + if not scene_list_filename.lower().endswith(".csv"): + scene_list_filename += ".csv" scene_list_path = get_and_create_path( scene_list_filename, - context.scene_list_dir if context.scene_list_dir is not None else context.output_dir) - logger.info('Writing scene list to CSV file:\n %s', scene_list_path) - with open(scene_list_path, 'wt') as scene_list_file: + context.scene_list_dir if context.scene_list_dir is not None else context.output_dir, + ) + logger.info("Writing scene list to CSV file:\n %s", scene_list_path) + with open(scene_list_path, "w") as scene_list_file: write_scene_list( output_csv_file=scene_list_file, scene_list=scene_list, include_cut_list=not context.skip_cuts, - cut_list=cut_list) + 
cut_list=cut_list, + ) # Suppress output if requested. if context.list_scenes_quiet: return @@ -176,26 +193,37 @@ def _list_scenes(context: CliContext, scene_list: List[Tuple[FrameTimecode, Fram | Scene # | Start Frame | Start Time | End Frame | End Time | ----------------------------------------------------------------------- %s ------------------------------------------------------------------------""", '\n'.join([ - " | %5d | %11d | %s | %11d | %s |" % - (i + 1, start_time.get_frames() + 1, start_time.get_timecode(), - end_time.get_frames(), end_time.get_timecode()) - for i, (start_time, end_time) in enumerate(scene_list) - ])) +-----------------------------------------------------------------------""", + "\n".join( + [ + " | %5d | %11d | %s | %11d | %s |" + % ( + i + 1, + start_time.get_frames() + 1, + start_time.get_timecode(), + end_time.get_frames(), + end_time.get_timecode(), + ) + for i, (start_time, end_time) in enumerate(scene_list) + ] + ), + ) # Print cut list. if cut_list and context.display_cuts: - logger.info("Comma-separated timecode list:\n %s", - ",".join([context.cut_format.format(cut) for cut in cut_list])) + logger.info( + "Comma-separated timecode list:\n %s", + ",".join([context.cut_format.format(cut) for cut in cut_list]), + ) def _save_images( - context: CliContext, - scene_list: List[Tuple[FrameTimecode, FrameTimecode]]) -> Optional[Dict[int, List[str]]]: + context: CliContext, scene_list: SceneList +) -> ty.Optional[ty.Dict[int, ty.List[str]]]: """Handles the `save-images` command.""" if not context.save_images: return None # Command can override global output directory setting. 
- output_dir = (context.output_dir if context.image_dir is None else context.image_dir) + output_dir = context.output_dir if context.image_dir is None else context.image_dir return save_images( scene_list=scene_list, video=context.video_stream, @@ -209,23 +237,28 @@ def _save_images( scale=context.scale, height=context.height, width=context.width, - interpolation=context.scale_method) + interpolation=context.scale_method, + ) -def _export_html(context: CliContext, scene_list: List[Tuple[FrameTimecode, FrameTimecode]], - cut_list: List[FrameTimecode], image_filenames: Optional[Dict[int, - List[str]]]) -> None: +def _export_html( + context: CliContext, + scene_list: SceneList, + cut_list: CutList, + image_filenames: ty.Optional[ty.Dict[int, ty.List[str]]], +) -> None: """Handles the `export-html` command.""" if not context.export_html: return # Command can override global output directory setting. - output_dir = (context.output_dir if context.image_dir is None else context.image_dir) - html_filename = Template( - context.html_name_format).safe_substitute(VIDEO_NAME=context.video_stream.name) - if not html_filename.lower().endswith('.html'): - html_filename += '.html' + output_dir = context.output_dir if context.image_dir is None else context.image_dir + html_filename = Template(context.html_name_format).safe_substitute( + VIDEO_NAME=context.video_stream.name + ) + if not html_filename.lower().endswith(".html"): + html_filename += ".html" html_path = get_and_create_path(html_filename, output_dir) - logger.info('Exporting to html file:\n %s:', html_path) + logger.info("Exporting to html file:\n %s:", html_path) if not context.html_include_images: image_filenames = None write_scene_list_html( @@ -234,24 +267,24 @@ def _export_html(context: CliContext, scene_list: List[Tuple[FrameTimecode, Fram cut_list, image_filenames=image_filenames, image_width=context.image_width, - image_height=context.image_height) + image_height=context.image_height, + ) -def 
_split_video(context: CliContext, scene_list: List[Tuple[FrameTimecode, - FrameTimecode]]) -> None: +def _split_video(context: CliContext, scene_list: SceneList) -> None: """Handles the `split-video` command.""" if not context.split_video: return output_path_template = context.split_name_format # Add proper extension to filename template if required. - dot_pos = output_path_template.rfind('.') + dot_pos = output_path_template.rfind(".") extension_length = 0 if dot_pos < 0 else len(output_path_template) - (dot_pos + 1) # If using mkvmerge, force extension to .mkv. - if context.split_mkvmerge and not output_path_template.endswith('.mkv'): - output_path_template += '.mkv' + if context.split_mkvmerge and not output_path_template.endswith(".mkv"): + output_path_template += ".mkv" # Otherwise, if using ffmpeg, only add an extension if one doesn't exist. elif not 2 <= extension_length <= 4: - output_path_template += '.mp4' + output_path_template += ".mp4" # Ensure the appropriate tool is available before handling split-video. check_split_video_requirements(context.split_mkvmerge) # Command can override global output directory setting. 
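The extension handling in the `_split_video` hunk above can be isolated as follows. `ensure_extension` is a hypothetical helper mirroring the hunk's logic: force `.mkv` when using mkvmerge, otherwise append `.mp4` only when the template lacks a plausible 2-4 character extension.

```python
def ensure_extension(template: str, use_mkvmerge: bool) -> str:
    """Mirror of the split-video extension heuristic (illustrative helper)."""
    dot_pos = template.rfind(".")
    # Length of the text after the last dot; 0 if there is no dot at all.
    extension_length = 0 if dot_pos < 0 else len(template) - (dot_pos + 1)
    if use_mkvmerge and not template.endswith(".mkv"):
        template += ".mkv"
    # Otherwise only add an extension if one doesn't already exist.
    elif not 2 <= extension_length <= 4:
        template += ".mp4"
    return template


print(ensure_extension("output", True))  # output.mkv
```

Note the heuristic treats `$SCENE_NUMBER`-style templates with no dot as having no extension, so they get `.mp4` appended under the ffmpeg path.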
@@ -275,29 +308,28 @@ def _split_video(context: CliContext, scene_list: List[Tuple[FrameTimecode, show_output=not (context.quiet_mode or context.split_quiet), ) if scene_list: - logger.info('Video splitting completed, scenes written to disk.') + logger.info("Video splitting completed, scenes written to disk.") -def _load_scenes( - context: CliContext -) -> ty.Tuple[ty.Iterable[ty.Tuple[FrameTimecode, FrameTimecode]], ty.Iterable[FrameTimecode]]: +def _load_scenes(context: CliContext) -> ty.Tuple[SceneList, CutList]: assert context.load_scenes_input assert os.path.exists(context.load_scenes_input) - with open(context.load_scenes_input, 'r') as input_file: + with open(context.load_scenes_input) as input_file: file_reader = csv.reader(input_file) csv_headers = next(file_reader) - if not context.load_scenes_column_name in csv_headers: + if context.load_scenes_column_name not in csv_headers: csv_headers = next(file_reader) # Check to make sure column headers are present if context.load_scenes_column_name not in csv_headers: - raise ValueError('specified column header for scene start is not present') + raise ValueError("specified column header for scene start is not present") col_idx = csv_headers.index(context.load_scenes_column_name) cut_list = sorted( FrameTimecode(row[col_idx], fps=context.video_stream.frame_rate) - 1 - for row in file_reader) + for row in file_reader + ) # `SceneDetector` works on cuts, so we have to skip the first scene and use the first frame # of the next scene as the cut point. This can be fixed if we used `SparseSceneDetector` # but this part of the API is being reworked and hasn't been used by any detectors yet. 
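`_load_scenes` above turns a sorted cut list into scene spans via `get_scenes_from_cuts`. A simplified sketch of that conversion, using plain ints in place of `FrameTimecode` (the function name here is hypothetical):

```python
def scenes_from_cuts(cuts, start, end):
    """Pair consecutive boundaries into (start, end) scene spans."""
    # Keep only cuts strictly inside the video bounds, then bracket with start/end.
    boundaries = [start, *sorted(c for c in cuts if start < c < end), end]
    return list(zip(boundaries[:-1], boundaries[1:]))


print(scenes_from_cuts([30, 10], start=0, end=50))  # [(0, 10), (10, 30), (30, 50)]
```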
@@ -319,13 +351,11 @@ def _load_scenes( cut_list = [cut for cut in cut_list if cut < end_time] return get_scenes_from_cuts( - cut_list=cut_list, start_pos=start_time, end_pos=end_time), cut_list - + cut_list=cut_list, start_pos=start_time, end_pos=end_time + ), cut_list -def _postprocess_scene_list( - context: CliContext, scene_list: ty.List[ty.Tuple[FrameTimecode, FrameTimecode]] -) -> ty.List[ty.Tuple[FrameTimecode, FrameTimecode]]: +def _postprocess_scene_list(context: CliContext, scene_list: SceneList) -> SceneList: # Handle --merge-last-scene. If set, when the last scene is shorter than --min-scene-len, # it will be merged with the previous one. if context.merge_last_scene and context.min_scene_len is not None and context.min_scene_len > 0: diff --git a/scenedetect/_thirdparty/__init__.py b/scenedetect/_thirdparty/__init__.py index 83987ca8..5442893f 100644 --- a/scenedetect/_thirdparty/__init__.py +++ b/scenedetect/_thirdparty/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- diff --git a/scenedetect/_thirdparty/simpletable.py b/scenedetect/_thirdparty/simpletable.py index e940a432..df01519d 100644 --- a/scenedetect/_thirdparty/simpletable.py +++ b/scenedetect/_thirdparty/simpletable.py @@ -1,5 +1,4 @@ #!/usr/bin/python -# -*- coding: utf-8 -*- # The MIT License (MIT) # @@ -56,13 +55,15 @@ def quote(string): try: from urllib.parse import quote + return quote(string) except ModuleNotFoundError: from urllib import pathname2url + return pathname2url(string) -class SimpleTableCell(object): +class SimpleTableCell: """A table class to create table cells. 
Example: @@ -82,12 +83,12 @@ def __init__(self, text, header=False): def __str__(self): """Return the HTML code for the table cell.""" if self.header: - return '<th>%s</th>' % (self.text) + return "<th>%s</th>" % (self.text) else: - return '<td>%s</td>' % (self.text) + return "<td>%s</td>" % (self.text) -class SimpleTableImage(object): +class SimpleTableImage: """A table class to create table cells with an image. Example: @@ -121,12 +122,12 @@ def __str__(self): output += ' height="%s"' % (self.height) if self.width: output += ' width="%s"' % (self.width) - output += '>' + output += ">" return output -class SimpleTableRow(object): +class SimpleTableRow: """A table class to create table rows, populated by table cells. Example: @@ -161,14 +162,14 @@ def __str__(self): """Return the HTML code for the table row and its cells as a string.""" row = [] - row.append('<tr>') + row.append("<tr>") for cell in self.cells: row.append(str(cell)) - row.append('</tr>') + row.append("</tr>") - return '\n'.join(row) + return "\n".join(row) def __iter__(self): """Iterate through row cells""" @@ -185,7 +186,7 @@ def add_cells(self, cells): self.cells.append(cell) -class SimpleTable(object): +class SimpleTable: """A table class to create HTML tables, populated by HTML table rows. Example: @@ -232,9 +233,9 @@ def __str__(self): table = [] if self.css_class: - table.append('<table class=%s>' % self.css_class) + table.append("<table class=%s>" % self.css_class) else: - table.append('<table>') + table.append("<table>") if self.header_row: table.append(str(self.header_row)) @@ -242,9 +243,9 @@ for row in self.rows: table.append(str(row)) - table.append('</table>') + table.append("</table>") - return '\n'.join(table) + return "\n".join(table) def __iter__(self): """Iterate through table rows""" @@ -261,7 +262,7 @@ def add_rows(self, rows): self.rows.append(row) -class HTMLPage(object): +class HTMLPage: """A class to create HTML pages containing CSS and tables.""" def __init__(self, tables=None, css=None, encoding="utf-8"): @@ -285,14 +286,15 @@ def __str__(self): page.append('<style type="text/css">%s</style>' % self.css) # Set encoding - page.append('<meta http-equiv="Content-Type" content="text/html; charset=%s">' % self.encoding) + page.append( + '<meta http-equiv="Content-Type" content="text/html; charset=%s">' % self.encoding + ) for table in self.tables: page.append(str(table)) - page.append('</body></html>') + page.append("</body></html>") - return '\n'.join(page) + return "\n".join(page) def __iter__(self): """Iterate through tables""" @@ -301,7 +303,7 @@ def __iter__(self): def save(self, filename): """Save HTML page to a file using the proper encoding""" - with codecs.open(filename, 'w', self.encoding) as outfile: + with codecs.open(filename, "w", self.encoding) as outfile: for line in str(self): outfile.write(line) @@ -324,4 +326,4 @@ def fit_data_to_columns(data, num_cols): if len(data) % num_cols != 0: num_iterations += 1 - return [data[num_cols * i:num_cols * i + num_cols] for i in range(num_iterations)] + return [data[num_cols * i : num_cols * i + num_cols] for i in range(num_iterations)] diff --git a/scenedetect/backends/__init__.py b/scenedetect/backends/__init__.py index 6296bd31..a8bd763a 100644 --- a/scenedetect/backends/__init__.py +++ b/scenedetect/backends/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- @@ -87,7 +86,7 @@ from typing import Dict, Type # OpenCV must be available at minimum. -from scenedetect.backends.opencv import VideoStreamCv2, VideoCaptureAdapter +from scenedetect.backends.opencv import VideoCaptureAdapter, VideoStreamCv2 try: from scenedetect.backends.pyav import VideoStreamAv @@ -102,11 +101,15 @@ # TODO: Lazy-loading backends would improve startup performance. However, this requires removing # some of the re-exported types above from the public API. AVAILABLE_BACKENDS: Dict[str, Type] = { - backend.BACKEND_NAME: backend for backend in filter(None, [ - VideoStreamCv2, - VideoStreamAv, - VideoStreamMoviePy, - ]) + backend.BACKEND_NAME: backend + for backend in filter( + None, + [ + VideoStreamCv2, + VideoStreamAv, + VideoStreamMoviePy, + ], + ) } """All available backends that :func:`scenedetect.open_video` can consider for the `backend` parameter.
These backends must support construction with the following signature: diff --git a/scenedetect/backends/moviepy.py b/scenedetect/backends/moviepy.py index e0c4a92b..e85f37c4 100644 --- a/scenedetect/backends/moviepy.py +++ b/scenedetect/backends/moviepy.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- @@ -18,18 +17,18 @@ """ from logging import getLogger -from typing import AnyStr, Tuple, Union, Optional +from typing import AnyStr, Optional, Tuple, Union import cv2 -from moviepy.video.io.ffmpeg_reader import FFMPEG_VideoReader import numpy as np +from moviepy.video.io.ffmpeg_reader import FFMPEG_VideoReader +from scenedetect.backends.opencv import VideoStreamCv2 from scenedetect.frame_timecode import FrameTimecode from scenedetect.platform import get_file_name -from scenedetect.video_stream import VideoStream, SeekError, VideoOpenFailure -from scenedetect.backends.opencv import VideoStreamCv2 +from scenedetect.video_stream import SeekError, VideoOpenFailure, VideoStream -logger = getLogger('pyscenedetect') +logger = getLogger("pyscenedetect") class VideoStreamMoviePy(VideoStream): @@ -53,7 +52,8 @@ def __init__(self, path: AnyStr, framerate: Optional[float] = None, print_infos: # TODO: Add framerate override. if framerate is not None: raise NotImplementedError( - "VideoStreamMoviePy does not support the `framerate` argument yet.") + "VideoStreamMoviePy does not support the `framerate` argument yet." 
+ ) self._path = path # TODO: Need to map errors based on the strings, since several failure @@ -77,7 +77,7 @@ def __init__(self, path: AnyStr, framerate: Optional[float] = None, print_infos: # VideoStream Methods/Properties # - BACKEND_NAME = 'moviepy' + BACKEND_NAME = "moviepy" """Unique name used to identify this backend.""" @property @@ -103,13 +103,13 @@ def is_seekable(self) -> bool: @property def frame_size(self) -> Tuple[int, int]: """Size of each video frame in pixels as a tuple of (width, height).""" - return tuple(self._reader.infos['video_size']) + return tuple(self._reader.infos["video_size"]) @property def duration(self) -> Optional[FrameTimecode]: """Duration of the stream as a FrameTimecode, or None if non terminating.""" - assert isinstance(self._reader.infos['duration'], float) - return self.base_timecode + self._reader.infos['duration'] + assert isinstance(self._reader.infos["duration"], float) + return self.base_timecode + self._reader.infos["duration"] @property def aspect_ratio(self) -> float: @@ -178,7 +178,7 @@ def seek(self, target: Union[FrameTimecode, float, int]): target = FrameTimecode(target, self.frame_rate) try: self._reader.get_frame(target.get_seconds()) - except IOError as ex: + except OSError as ex: # Leave the object in a valid state. self.reset() # TODO(#380): Other backends do not currently throw an exception if attempting to seek @@ -192,7 +192,7 @@ def seek(self, target: Union[FrameTimecode, float, int]): self._frame_number = target.frame_num def reset(self): - """ Close and re-open the VideoStream (should be equivalent to calling `seek(0)`). 
""" + """Close and re-open the VideoStream (should be equivalent to calling `seek(0)`).""" self._reader.initialize() self._last_frame = self._reader.read_frame() self._frame_number = 0 @@ -213,7 +213,7 @@ def read(self, decode: bool = True, advance: bool = True) -> Union[np.ndarray, b if self._last_frame_rgb is None: self._last_frame_rgb = cv2.cvtColor(self._last_frame, cv2.COLOR_BGR2RGB) return self._last_frame_rgb - if not hasattr(self._reader, 'lastread'): + if not hasattr(self._reader, "lastread"): return False self._last_frame = self._reader.lastread self._reader.read_frame() diff --git a/scenedetect/backends/opencv.py b/scenedetect/backends/opencv.py index 4ab9a897..862a19e2 100644 --- a/scenedetect/backends/opencv.py +++ b/scenedetect/backends/opencv.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- @@ -18,33 +17,33 @@ which do not support seeking. """ -from logging import getLogger import math -from typing import AnyStr, Tuple, Union, Optional import os.path +from logging import getLogger +from typing import AnyStr, Optional, Tuple, Union import cv2 import numpy as np -from scenedetect.frame_timecode import FrameTimecode, MAX_FPS_DELTA +from scenedetect.frame_timecode import MAX_FPS_DELTA, FrameTimecode from scenedetect.platform import get_file_name -from scenedetect.video_stream import VideoStream, SeekError, VideoOpenFailure, FrameRateUnavailable +from scenedetect.video_stream import FrameRateUnavailable, SeekError, VideoOpenFailure, VideoStream -logger = getLogger('pyscenedetect') +logger = getLogger("pyscenedetect") -IMAGE_SEQUENCE_IDENTIFIER = '%' +IMAGE_SEQUENCE_IDENTIFIER = "%" NON_VIDEO_FILE_INPUT_IDENTIFIERS = ( - IMAGE_SEQUENCE_IDENTIFIER, # image sequence - '://', # URL/network stream - ' ! ', # gstreamer pipe + IMAGE_SEQUENCE_IDENTIFIER, # image sequence + "://", # URL/network stream + " ! 
", # gstreamer pipe ) def _get_aspect_ratio(cap: cv2.VideoCapture, epsilon: float = 0.0001) -> float: """Display/pixel aspect ratio of the VideoCapture as a float (1.0 represents square pixels).""" # Versions of OpenCV < 3.4.1 do not support this, so we fall back to 1.0. - if not 'CAP_PROP_SAR_NUM' in dir(cv2): + if "CAP_PROP_SAR_NUM" not in dir(cv2): return 1.0 num: float = cap.get(cv2.CAP_PROP_SAR_NUM) den: float = cap.get(cv2.CAP_PROP_SAR_DEN) @@ -86,21 +85,22 @@ def __init__( super().__init__() # TODO(v0.7): Replace with DeprecationWarning that `path_or_device` will be removed in v0.8. if path_or_device is not None: - logger.error('path_or_device is deprecated, use path or VideoCaptureAdapter instead.') + logger.error("path_or_device is deprecated, use path or VideoCaptureAdapter instead.") path = path_or_device if path is None: - raise ValueError('Path must be specified!') + raise ValueError("Path must be specified!") if framerate is not None and framerate < MAX_FPS_DELTA: - raise ValueError('Specified framerate (%f) is invalid!' % framerate) + raise ValueError("Specified framerate (%f) is invalid!" % framerate) if max_decode_attempts < 0: - raise ValueError('Maximum decode attempts must be >= 0!') + raise ValueError("Maximum decode attempts must be >= 0!") self._path_or_device = path self._is_device = isinstance(self._path_or_device, int) # Initialized in _open_capture: - self._cap: Optional[ - cv2.VideoCapture] = None # Reference to underlying cv2.VideoCapture object. + self._cap: Optional[cv2.VideoCapture] = ( + None # Reference to underlying cv2.VideoCapture object. 
+ ) self._frame_rate: Optional[float] = None # VideoCapture state @@ -130,7 +130,7 @@ def capture(self) -> cv2.VideoCapture: # VideoStream Methods/Properties # - BACKEND_NAME = 'opencv' + BACKEND_NAME = "opencv" """Unique name used to identify this backend.""" @property @@ -157,7 +157,7 @@ def name(self) -> str: if IMAGE_SEQUENCE_IDENTIFIER in file_name: # file_name is an image sequence, trim everything including/after the %. # TODO: This excludes any suffix after the sequence identifier. - file_name = file_name[:file_name.rfind(IMAGE_SEQUENCE_IDENTIFIER)] + file_name = file_name[: file_name.rfind(IMAGE_SEQUENCE_IDENTIFIER)] return file_name @property @@ -170,8 +170,10 @@ def is_seekable(self) -> bool: @property def frame_size(self) -> Tuple[int, int]: """Size of each video frame in pixels as a tuple of (width, height).""" - return (math.trunc(self._cap.get(cv2.CAP_PROP_FRAME_WIDTH)), - math.trunc(self._cap.get(cv2.CAP_PROP_FRAME_HEIGHT))) + return ( + math.trunc(self._cap.get(cv2.CAP_PROP_FRAME_WIDTH)), + math.trunc(self._cap.get(cv2.CAP_PROP_FRAME_HEIGHT)), + ) @property def duration(self) -> Optional[FrameTimecode]: @@ -258,7 +260,7 @@ def seek(self, target: Union[FrameTimecode, float, int]): self._has_grabbed = self._cap.grab() def reset(self): - """ Close and re-open the VideoStream (should be equivalent to calling `seek(0)`). """ + """Close and re-open the VideoStream (should be equivalent to calling `seek(0)`).""" self._cap.release() self._open_capture(self._frame_rate) @@ -289,9 +291,9 @@ def read(self, decode: bool = True, advance: bool = True) -> Union[np.ndarray, b # Report previous failure in debug mode. 
if has_grabbed: self._decode_failures += 1 - logger.debug('Frame failed to decode.') + logger.debug("Frame failed to decode.") if not self._warning_displayed and self._decode_failures > 1: - logger.warning('Failed to decode some frames, results may be inaccurate.') + logger.warning("Failed to decode some frames, results may be inaccurate.") # We didn't manage to grab a frame even after retrying, so just return. if not has_grabbed: return False @@ -309,30 +311,33 @@ def read(self, decode: bool = True, advance: bool = True) -> Union[np.ndarray, b def _open_capture(self, framerate: Optional[float] = None): """Opens capture referenced by this object and resets internal state.""" if self._is_device and self._path_or_device < 0: - raise ValueError('Invalid/negative device ID specified.') + raise ValueError("Invalid/negative device ID specified.") input_is_video_file = not self._is_device and not any( - identifier in self._path_or_device for identifier in NON_VIDEO_FILE_INPUT_IDENTIFIERS) + identifier in self._path_or_device for identifier in NON_VIDEO_FILE_INPUT_IDENTIFIERS + ) # We don't have a way of querying why opening a video fails (errors are logged at least), # so provide a better error message if we try to open a file that doesn't exist. - if input_is_video_file: - if not os.path.exists(self._path_or_device): - raise OSError('Video file not found.') + if input_is_video_file and not os.path.exists(self._path_or_device): + raise OSError("Video file not found.") cap = cv2.VideoCapture(self._path_or_device) if not cap.isOpened(): raise VideoOpenFailure( - 'Ensure file is valid video and system dependencies are up to date.\n') + "Ensure file is valid video and system dependencies are up to date.\n" + ) # Display an error if the video codec type seems unsupported (#86) as this indicates # potential video corruption, or may explain missing frames. We only perform this check # for video files on-disk (skipped for devices, image sequences, streams, etc...). 
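The codec check above treats a FOURCC of 0 as a detection failure. For reference, a non-zero `CAP_PROP_FOURCC` value packs four ASCII bytes little-endian into a float; a small decoding helper (name hypothetical, pure Python so no OpenCV dependency):

```python
def fourcc_to_str(fourcc: float) -> str:
    """Decode OpenCV's CAP_PROP_FOURCC float into a four-character codec tag."""
    value = int(abs(fourcc))
    if value == 0:
        return ""  # codec detection failed: the case the backend warns about
    # Bytes are packed little-endian, one ASCII character per byte.
    return "".join(chr((value >> (8 * i)) & 0xFF) for i in range(4))


print(fourcc_to_str(828601953.0))  # avc1
```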
- codec_unsupported: bool = (int(abs(cap.get(cv2.CAP_PROP_FOURCC))) == 0) + codec_unsupported: bool = int(abs(cap.get(cv2.CAP_PROP_FOURCC))) == 0 if codec_unsupported and input_is_video_file: - logger.error('Video codec detection failed. If output is incorrect:\n' - ' - Re-encode the input video with ffmpeg\n' - ' - Update OpenCV (pip install --upgrade opencv-python)\n' - ' - Use the PyAV backend (--backend pyav)\n' - 'For details, see https://github.com/Breakthrough/PySceneDetect/issues/86') + logger.error( + "Video codec detection failed. If output is incorrect:\n" + " - Re-encode the input video with ffmpeg\n" + " - Update OpenCV (pip install --upgrade opencv-python)\n" + " - Use the PyAV backend (--backend pyav)\n" + "For details, see https://github.com/Breakthrough/PySceneDetect/issues/86" + ) # Ensure the framerate is correct to avoid potential divide by zero errors. This can be # addressed in the PyAV backend if required since it supports integer timebases. @@ -380,11 +385,11 @@ def __init__( super().__init__() if framerate is not None and framerate < MAX_FPS_DELTA: - raise ValueError('Specified framerate (%f) is invalid!' % framerate) + raise ValueError("Specified framerate (%f) is invalid!" 
% framerate) if max_read_attempts < 0: - raise ValueError('Maximum decode attempts must be >= 0!') + raise ValueError("Maximum decode attempts must be >= 0!") if not cap.isOpened(): - raise ValueError('Specified VideoCapture must already be opened!') + raise ValueError("Specified VideoCapture must already be opened!") if framerate is None: framerate = cap.get(cv2.CAP_PROP_FPS) if framerate < MAX_FPS_DELTA: @@ -417,7 +422,7 @@ def capture(self) -> cv2.VideoCapture: # VideoStream Methods/Properties # - BACKEND_NAME = 'opencv_adapter' + BACKEND_NAME = "opencv_adapter" """Unique name used to identify this backend.""" @property @@ -429,12 +434,12 @@ def frame_rate(self) -> float: @property def path(self) -> str: """Always 'CAP_ADAPTER'.""" - return 'CAP_ADAPTER' + return "CAP_ADAPTER" @property def name(self) -> str: """Always 'CAP_ADAPTER'.""" - return 'CAP_ADAPTER' + return "CAP_ADAPTER" @property def is_seekable(self) -> bool: @@ -444,8 +449,10 @@ def is_seekable(self) -> bool: @property def frame_size(self) -> Tuple[int, int]: """Reported size of each video frame in pixels as a tuple of (width, height).""" - return (math.trunc(self._cap.get(cv2.CAP_PROP_FRAME_WIDTH)), - math.trunc(self._cap.get(cv2.CAP_PROP_FRAME_HEIGHT))) + return ( + math.trunc(self._cap.get(cv2.CAP_PROP_FRAME_WIDTH)), + math.trunc(self._cap.get(cv2.CAP_PROP_FRAME_HEIGHT)), + ) @property def duration(self) -> Optional[FrameTimecode]: @@ -526,9 +533,9 @@ def read(self, decode: bool = True, advance: bool = True) -> Union[np.ndarray, b # Report previous failure in debug mode. if has_grabbed: self._decode_failures += 1 - logger.debug('Frame failed to decode.') + logger.debug("Frame failed to decode.") if not self._warning_displayed and self._decode_failures > 1: - logger.warning('Failed to decode some frames, results may be inaccurate.') + logger.warning("Failed to decode some frames, results may be inaccurate.") # We didn't manage to grab a frame even after retrying, so just return. 
if not has_grabbed: return False diff --git a/scenedetect/backends/pyav.py b/scenedetect/backends/pyav.py index 07647818..cba203c7 100644 --- a/scenedetect/backends/pyav.py +++ b/scenedetect/backends/pyav.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- @@ -15,15 +14,14 @@ from logging import getLogger from typing import AnyStr, BinaryIO, Optional, Tuple, Union -# pylint: disable=c-extension-no-member import av import numpy as np -from scenedetect.frame_timecode import FrameTimecode, MAX_FPS_DELTA +from scenedetect.frame_timecode import MAX_FPS_DELTA, FrameTimecode from scenedetect.platform import get_file_name -from scenedetect.video_stream import VideoStream, VideoOpenFailure, FrameRateUnavailable +from scenedetect.video_stream import FrameRateUnavailable, VideoOpenFailure, VideoStream -logger = getLogger('pyscenedetect') +logger = getLogger("pyscenedetect") VALID_THREAD_MODES = [ av.codec.context.ThreadType.NONE, @@ -82,26 +80,26 @@ def __init__( # Ensure specified framerate is valid if set. if framerate is not None and framerate < MAX_FPS_DELTA: - raise ValueError('Specified framerate (%f) is invalid!' % framerate) + raise ValueError("Specified framerate (%f) is invalid!" % framerate) - self._name = '' if name is None else name - self._path = '' + self._name = "" if name is None else name + self._path = "" self._frame = None self._reopened = True if threading_mode: threading_mode = threading_mode.upper() - if not threading_mode in VALID_THREAD_MODES: - raise ValueError('Invalid threading mode! Must be one of: %s' % VALID_THREAD_MODES) + if threading_mode not in VALID_THREAD_MODES: + raise ValueError("Invalid threading mode! 
Must be one of: %s" % VALID_THREAD_MODES) if not suppress_output: - logger.debug('Restoring default ffmpeg log callbacks.') + logger.debug("Restoring default ffmpeg log callbacks.") av.logging.restore_default_callback() try: if isinstance(path_or_io, (str, bytes)): self._path = path_or_io - self._io = open(path_or_io, 'rb') + self._io = open(path_or_io, "rb") if not self._name: self._name = get_file_name(self.path, include_extension=False) else: @@ -111,7 +109,7 @@ def __init__( if threading_mode is not None: self._video_stream.thread_type = threading_mode self._reopened = False - logger.debug('Threading mode set: %s', threading_mode) + logger.debug("Threading mode set: %s", threading_mode) except OSError: raise except Exception as ex: @@ -119,8 +117,11 @@ def __init__( if framerate is None: # Calculate framerate from video container. `guessed_rate` below appears in PyAV 9. - frame_rate = self._video_stream.guessed_rate if hasattr( - self._video_stream, 'guessed_rate') else self._codec_context.framerate + frame_rate = ( + self._video_stream.guessed_rate + if hasattr(self._video_stream, "guessed_rate") + else self._codec_context.framerate + ) if frame_rate is None or frame_rate == 0: raise FrameRateUnavailable() # TODO: Refactor FrameTimecode to support raw timing rather than framerate based calculations. 
@@ -144,7 +145,7 @@ def __del__(self): # VideoStream Methods/Properties # - BACKEND_NAME = 'pyav' + BACKEND_NAME = "pyav" """Unique name used to identify this backend.""" @property @@ -207,8 +208,10 @@ def frame_number(self) -> int: @property def aspect_ratio(self) -> float: """Pixel aspect ratio as a float (1.0 represents square pixels).""" - if not hasattr(self._codec_context, - "display_aspect_ratio") or self._codec_context.display_aspect_ratio is None: + if ( + not hasattr(self._codec_context, "display_aspect_ratio") + or self._codec_context.display_aspect_ratio is None + ): return 1.0 ar_denom = self._codec_context.display_aspect_ratio.denominator if ar_denom <= 0: @@ -238,12 +241,13 @@ def seek(self, target: Union[FrameTimecode, float, int]) -> None: """ if target < 0: raise ValueError("Target cannot be negative!") - beginning = (target == 0) - target = (self.base_timecode + target) + beginning = target == 0 + target = self.base_timecode + target if target >= 1: target = target - 1 target_pts = self._video_stream.start_time + int( - (self.base_timecode + target).get_seconds() / self._video_stream.time_base) + (self.base_timecode + target).get_seconds() / self._video_stream.time_base + ) self._frame = None self._container.seek(target_pts, stream=self._video_stream) if not beginning: @@ -253,7 +257,7 @@ def seek(self, target: Union[FrameTimecode, float, int]) -> None: break def reset(self): - """ Close and re-open the VideoStream (should be equivalent to calling `seek(0)`). 
""" + """Close and re-open the VideoStream (should be equivalent to calling `seek(0)`).""" self._container.close() self._frame = None try: @@ -286,7 +290,7 @@ def read(self, decode: bool = True, advance: bool = True) -> Union[np.ndarray, b return False has_advanced = True if decode: - return self._frame.to_ndarray(format='bgr24') + return self._frame.to_ndarray(format="bgr24") return has_advanced # @@ -320,14 +324,15 @@ def _get_duration(self) -> int: # Lastly, if that calculation fails, try to calculate it based on the stream duration. if duration_sec is None or duration_sec < MAX_FPS_DELTA: if self._video_stream.duration is None: - logger.warning('Video duration unavailable.') + logger.warning("Video duration unavailable.") return 0 # Streams use stream `time_base` as the time base. time_base = self._video_stream.time_base if time_base.denominator == 0: logger.warning( - 'Unable to calculate video duration: time_base (%s) has zero denominator!', - str(time_base)) + "Unable to calculate video duration: time_base (%s) has zero denominator!", + str(time_base), + ) return 0 duration_sec = float(self._video_stream.duration / time_base) return round(duration_sec * self.frame_rate) @@ -341,7 +346,7 @@ def _handle_eof(self): return False self._reopened = True # Don't re-open the video if we can't seek or aren't in AUTO/FRAME thread_type mode. 
- if not self.is_seekable or not self._video_stream.thread_type in ('AUTO', 'FRAME'): + if not self.is_seekable or self._video_stream.thread_type not in ("AUTO", "FRAME"): return False last_frame = self.frame_number orig_pos = self._io.tell() diff --git a/scenedetect/detectors/__init__.py b/scenedetect/detectors/__init__.py index c7a0833c..a87a5689 100644 --- a/scenedetect/detectors/__init__.py +++ b/scenedetect/detectors/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- @@ -36,7 +35,7 @@ processing videos, however they can also be used to process frames directly. """ -from scenedetect.detectors.content_detector import ContentDetector +from scenedetect.detectors.content_detector import ContentDetector # noqa: I001 from scenedetect.detectors.threshold_detector import ThresholdDetector from scenedetect.detectors.adaptive_detector import AdaptiveDetector from scenedetect.detectors.hash_detector import HashDetector diff --git a/scenedetect/detectors/adaptive_detector.py b/scenedetect/detectors/adaptive_detector.py index 064255f5..0cbb4895 100644 --- a/scenedetect/detectors/adaptive_detector.py +++ b/scenedetect/detectors/adaptive_detector.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- @@ -24,7 +23,7 @@ from scenedetect.detectors import ContentDetector -logger = getLogger('pyscenedetect') +logger = getLogger("pyscenedetect") class AdaptiveDetector(ContentDetector): @@ -71,12 +70,12 @@ def __init__( # TODO(v0.7): Replace with DeprecationWarning that `video_manager` and `min_delta_hsv` will # be removed in v0.8. 
         if video_manager is not None:
-            logger.error('video_manager is deprecated, use video instead.')
+            logger.error("video_manager is deprecated, use video instead.")
         if min_delta_hsv is not None:
-            logger.error('min_delta_hsv is deprecated, use min_content_val instead.')
+            logger.error("min_delta_hsv is deprecated, use min_content_val instead.")
             min_content_val = min_delta_hsv
         if window_width < 1:
-            raise ValueError('window_width must be at least 1.')
+            raise ValueError("window_width must be at least 1.")
 
         super().__init__(
             threshold=255.0,
@@ -93,7 +92,8 @@ def __init__(
         self.window_width = window_width
 
         self._adaptive_ratio_key = AdaptiveDetector.ADAPTIVE_RATIO_KEY_TEMPLATE.format(
-            window_width=window_width, luma_only='' if not luma_only else '_lum')
+            window_width=window_width, luma_only="" if not luma_only else "_lum"
+        )
         self._first_frame_num = None
 
         # NOTE: This must be different than `self._last_scene_cut` which is used by the base class.
@@ -141,9 +141,9 @@ def process_frame(self, frame_num: int, frame_img: Optional[np.ndarray]) -> List
             return []
         self._buffer = self._buffer[-required_frames:]
         (target_frame, target_score) = self._buffer[self.window_width]
-        average_window_score = (
-            sum(score for i, (_frame, score) in enumerate(self._buffer) if i != self.window_width) /
-            (2.0 * self.window_width))
+        average_window_score = sum(
+            score for i, (_frame, score) in enumerate(self._buffer) if i != self.window_width
+        ) / (2.0 * self.window_width)
 
         average_is_zero = abs(average_window_score) < 0.00001
@@ -159,7 +159,8 @@ def process_frame(self, frame_num: int, frame_img: Optional[np.ndarray]) -> List
         # Check to see if adaptive_ratio exceeds the adaptive_threshold as well as there
         # being a large enough content_val to trigger a cut
         threshold_met: bool = (
-            adaptive_ratio >= self.adaptive_threshold and target_score >= self.min_content_val)
+            adaptive_ratio >= self.adaptive_threshold and target_score >= self.min_content_val
+        )
         min_length_met: bool = (frame_num - self._last_cut) >= self.min_scene_len
         if threshold_met and min_length_met:
             self._last_cut = target_frame
@@ -169,8 +170,10 @@ def process_frame(self, frame_num: int, frame_img: Optional[np.ndarray]) -> List
     def get_content_val(self, frame_num: int) -> Optional[float]:
         """Returns the average content change for a frame."""
         # TODO(v0.7): Add DeprecationWarning that `get_content_val` will be removed in v0.7.
-        logger.error("get_content_val is deprecated and will be removed. Lookup the value"
-                     " using a StatsManager with ContentDetector.FRAME_SCORE_KEY.")
+        logger.error(
+            "get_content_val is deprecated and will be removed. Lookup the value"
+            " using a StatsManager with ContentDetector.FRAME_SCORE_KEY."
+        )
         if self.stats_manager is not None:
             return self.stats_manager.get_metrics(frame_num, [ContentDetector.FRAME_SCORE_KEY])[0]
         return 0.0
diff --git a/scenedetect/detectors/content_detector.py b/scenedetect/detectors/content_detector.py
index 954a91d7..bfa99ac4 100644
--- a/scenedetect/detectors/content_detector.py
+++ b/scenedetect/detectors/content_detector.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -15,14 +14,15 @@
 This detector is available from the command-line as the `detect-content` command.
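The `AdaptiveDetector` hunks above divide the target frame's content score by the average score of the frames around it in a sliding window, cutting when that ratio and the raw score both exceed their thresholds. A minimal sketch of that arithmetic (the name `adaptive_ratio` and the zero-average fallback values are illustrative, not the library API):

```python
# Illustrative sketch of the windowed ratio used by AdaptiveDetector.
# `scores` holds 2 * window_width + 1 per-frame content scores; the
# center entry is the target frame, the rest form the comparison window.

def adaptive_ratio(scores, window_width):
    """Ratio of the center frame's score to the mean of its neighbors."""
    assert len(scores) == 2 * window_width + 1
    target = scores[window_width]
    average = sum(
        score for i, score in enumerate(scores) if i != window_width
    ) / (2.0 * window_width)
    if abs(average) < 1e-5:  # Avoid division by zero on static scenes.
        return 255.0 if target >= 1e-5 else 0.0
    return target / average

# A score spike in the middle of an otherwise quiet window yields a high
# ratio, which is what flags a likely cut.
print(adaptive_ratio([1.0, 1.0, 30.0, 1.0, 1.0], window_width=2))  # 30.0
```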
""" -from dataclasses import dataclass + import math +from dataclasses import dataclass from typing import List, NamedTuple, Optional -import numpy import cv2 +import numpy -from scenedetect.scene_detector import SceneDetector, FlashFilter +from scenedetect.scene_detector import FlashFilter, SceneDetector def _mean_pixel_distance(left: numpy.ndarray, right: numpy.ndarray) -> float: @@ -32,7 +32,7 @@ def _mean_pixel_distance(left: numpy.ndarray, right: numpy.ndarray) -> float: assert len(left.shape) == 2 and len(right.shape) == 2 assert left.shape == right.shape num_pixels: float = float(left.shape[0] * left.shape[1]) - return (numpy.sum(numpy.abs(left.astype(numpy.int32) - right.astype(numpy.int32))) / num_pixels) + return numpy.sum(numpy.abs(left.astype(numpy.int32) - right.astype(numpy.int32))) / num_pixels def _estimated_kernel_size(frame_width: int, frame_height: int) -> int: @@ -56,6 +56,7 @@ class ContentDetector(SceneDetector): # a wider variety of test cases. class Components(NamedTuple): """Components that make up a frame's score, and their default values.""" + delta_hue: float = 1.0 """Difference between pixel hue values of adjacent frames.""" delta_sat: float = 1.0 @@ -80,7 +81,7 @@ class Components(NamedTuple): ) """Component weights to use if `luma_only` is set.""" - FRAME_SCORE_KEY = 'content_val' + FRAME_SCORE_KEY = "content_val" """Key in statsfile representing the final frame score after weighed by specified components.""" METRIC_KEYS = [FRAME_SCORE_KEY, *Components._fields] @@ -89,6 +90,7 @@ class Components(NamedTuple): @dataclass class _FrameData: """Data calculated for a given frame.""" + hue: numpy.ndarray """Frame hue map [2D 8-bit].""" sat: numpy.ndarray @@ -102,7 +104,7 @@ def __init__( self, threshold: float = 27.0, min_scene_len: int = 15, - weights: 'ContentDetector.Components' = DEFAULT_COMPONENT_WEIGHTS, + weights: "ContentDetector.Components" = DEFAULT_COMPONENT_WEIGHTS, luma_only: bool = False, kernel_size: Optional[int] = None, 
         filter_mode: FlashFilter.Mode = FlashFilter.Mode.MERGE,
@@ -133,7 +135,7 @@ def __init__(
         if kernel_size is not None:
             print(kernel_size)
             if kernel_size < 3 or kernel_size % 2 == 0:
-                raise ValueError('kernel_size must be odd integer >= 3')
+                raise ValueError("kernel_size must be odd integer >= 3")
             self._kernel = numpy.ones((kernel_size, kernel_size), numpy.uint8)
         self._frame_score: Optional[float] = None
         self._flash_filter = FlashFilter(mode=filter_mode, length=min_scene_len)
@@ -155,8 +157,7 @@ def _calculate_frame_score(self, frame_num: int, frame_img: numpy.ndarray) -> fl
         hue, sat, lum = cv2.split(cv2.cvtColor(frame_img, cv2.COLOR_BGR2HSV))
 
         # Performance: Only calculate edges if we have to.
-        calculate_edges: bool = ((self._weights.delta_edges > 0.0)
-                                 or self.stats_manager is not None)
+        calculate_edges: bool = (self._weights.delta_edges > 0.0) or self.stats_manager is not None
         edges = self._detect_edges(lum) if calculate_edges else None
 
         if self._last_frame is None:
@@ -168,13 +169,14 @@ def _calculate_frame_score(self, frame_num: int, frame_img: numpy.ndarray) -> fl
             delta_hue=_mean_pixel_distance(hue, self._last_frame.hue),
             delta_sat=_mean_pixel_distance(sat, self._last_frame.sat),
             delta_lum=_mean_pixel_distance(lum, self._last_frame.lum),
-            delta_edges=(0.0 if edges is None else _mean_pixel_distance(
-                edges, self._last_frame.edges)),
+            delta_edges=(
+                0.0 if edges is None else _mean_pixel_distance(edges, self._last_frame.edges)
+            ),
         )
-        frame_score: float = (
-            sum(component * weight for (component, weight) in zip(score_components, self._weights))
-            / sum(abs(weight) for weight in self._weights))
+        frame_score: float = sum(
+            component * weight for (component, weight) in zip(score_components, self._weights)
+        ) / sum(abs(weight) for weight in self._weights)
 
         # Record components and frame score if needed for analysis.
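The `frame_score` expression in the hunk above is a weighted average: each score component (hue, saturation, luma, and edge deltas) is multiplied by its weight, and the sum is normalized by the total absolute weight. A self-contained sketch of just that arithmetic (the function name and sample values are illustrative; the weights mirror `ContentDetector`'s defaults of 1.0/1.0/1.0/0.0):

```python
# Sketch of ContentDetector's weighted frame score: components are
# (delta_hue, delta_sat, delta_lum, delta_edges) mean pixel distances.

def weighted_frame_score(components, weights):
    """Normalized weighted sum of per-channel frame differences."""
    return sum(c * w for c, w in zip(components, weights)) / sum(
        abs(w) for w in weights
    )

# Default-style weights ignore the edge component entirely.
score = weighted_frame_score((12.0, 8.0, 20.0, 5.0), (1.0, 1.0, 1.0, 0.0))
print(round(score, 3))  # (12 + 8 + 20) / 3 = 13.333
```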
         if self.stats_manager is not None:
diff --git a/scenedetect/detectors/hash_detector.py b/scenedetect/detectors/hash_detector.py
index 1ec508a7..36f7e1b5 100644
--- a/scenedetect/detectors/hash_detector.py
+++ b/scenedetect/detectors/hash_detector.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # ---------------------------------------------------------------
@@ -35,8 +34,8 @@
 """
 
 # Third-Party Library Imports
-import numpy
 import cv2
+import numpy
 
 # PySceneDetect Library Imports
 from scenedetect.scene_detector import SceneDetector
@@ -112,14 +111,16 @@ def process_frame(self, frame_num, frame_img):
         if self._last_frame is not None:
             # We obtain the change in hash value between subsequent frames.
             curr_hash = self.hash_frame(
-                frame_img=frame_img, hash_size=self._size, factor=self._factor)
+                frame_img=frame_img, hash_size=self._size, factor=self._factor
+            )
 
             last_hash = self._last_hash
 
             if last_hash.size == 0:
                 # Calculate hash of last frame
                 last_hash = self.hash_frame(
-                    frame_img=self._last_frame, hash_size=self._size, factor=self._factor)
+                    frame_img=self._last_frame, hash_size=self._size, factor=self._factor
+                )
 
             # Hamming distance is calculated to compare to last frame
             hash_dist = numpy.count_nonzero(curr_hash.flatten() != last_hash.flatten())
@@ -134,8 +135,9 @@ def process_frame(self, frame_num, frame_img):
             # We consider any frame over the threshold a new scene, but only if
             # the minimum scene length has been reached (otherwise it is ignored).
-            if hash_dist_norm >= self._threshold and ((frame_num - self._last_scene_cut)
-                                                      >= self._min_scene_len):
+            if hash_dist_norm >= self._threshold and (
+                (frame_num - self._last_scene_cut) >= self._min_scene_len
+            ):
                 cut_list.append(frame_num)
                 self._last_scene_cut = frame_num
diff --git a/scenedetect/detectors/histogram_detector.py b/scenedetect/detectors/histogram_detector.py
index ad469489..9e37df09 100644
--- a/scenedetect/detectors/histogram_detector.py
+++ b/scenedetect/detectors/histogram_detector.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # ---------------------------------------------------------------
@@ -29,7 +28,7 @@ class HistogramDetector(SceneDetector):
     """Compares the difference in the Y channel of YUV histograms for adjacent
     frames. When the difference exceeds a given threshold, a cut is detected."""
 
-    METRIC_KEYS = ['hist_diff']
+    METRIC_KEYS = ["hist_diff"]
 
     def __init__(self, threshold: float = 0.05, bins: int = 256, min_scene_len: int = 15):
         """
@@ -71,10 +70,10 @@ def process_frame(self, frame_num: int, frame_img: numpy.ndarray) -> List[int]:
         np_data_type = frame_img.dtype
 
         if np_data_type != numpy.uint8:
-            raise ValueError('Image must be 8-bit rgb for HistogramDetector')
+            raise ValueError("Image must be 8-bit rgb for HistogramDetector")
 
         if frame_img.shape[2] != 3:
-            raise ValueError('Image must have three color channels for HistogramDetector')
+            raise ValueError("Image must have three color channels for HistogramDetector")
 
         # Initialize last scene cut point at the beginning of the frames of interest.
         if not self._last_scene_cut:
@@ -84,7 +83,7 @@ def process_frame(self, frame_num: int, frame_img: numpy.ndarray) -> List[int]:
 
         # We can only start detecting once we have a frame to compare with.
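The `HashDetector` comparison reformatted above boils down to a Hamming distance between two boolean perceptual hashes, normalized by the hash size, compared against a threshold. A minimal sketch of that step (the function name is illustrative, not the library API):

```python
# Sketch of HashDetector's frame comparison: count differing bits between
# two equal-sized boolean hashes and normalize to a [0, 1] distance.
import numpy


def normalized_hash_distance(curr_hash, last_hash):
    """Fraction of bits that differ between two perceptual hashes."""
    hash_dist = numpy.count_nonzero(curr_hash.flatten() != last_hash.flatten())
    return hash_dist / float(curr_hash.size)


a = numpy.array([[True, False], [True, True]])
b = numpy.array([[True, True], [True, False]])
print(normalized_hash_distance(a, b))  # 2 of 4 bits differ -> 0.5
```

A cut would then be reported when this distance meets the detector's threshold and the minimum scene length has elapsed.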
         if self._last_hist is not None:
-            #TODO: We can have EMA of histograms to make it more robust
+            # TODO: We can have EMA of histograms to make it more robust
             # ema_hist = alpha * hist + (1 - alpha) * ema_hist
 
             # Compute histogram difference between frames
@@ -97,8 +96,9 @@ def process_frame(self, frame_num: int, frame_img: numpy.ndarray) -> List[int]:
             # Values close to 1 indicate very similar frames, while lower values suggest changes.
             # Example: If `_threshold` is set to 0.8, it implies that only changes resulting in a correlation
             # less than 0.8 between histograms will be considered significant enough to denote a scene change.
-            if hist_diff <= self._threshold and ((frame_num - self._last_scene_cut)
-                                                 >= self._min_scene_len):
+            if hist_diff <= self._threshold and (
+                (frame_num - self._last_scene_cut) >= self._min_scene_len
+            ):
                 cut_list.append(frame_num)
                 self._last_scene_cut = frame_num
@@ -111,9 +111,9 @@ def process_frame(self, frame_num: int, frame_img: numpy.ndarray) -> List[int]:
         return cut_list
 
     @staticmethod
-    def calculate_histogram(frame_img: numpy.ndarray,
-                            bins: int = 256,
-                            normalize: bool = True) -> numpy.ndarray:
+    def calculate_histogram(
+        frame_img: numpy.ndarray, bins: int = 256, normalize: bool = True
+    ) -> numpy.ndarray:
         """
         Calculates and optionally normalizes the histogram of the luma (Y)
         channel of an image converted from BGR to YUV color space.
@@ -142,7 +142,7 @@ def calculate_histogram(frame_img: numpy.ndarray,
         Examples:
         ---------
-        >>> img = cv2.imread('path_to_image.jpg')
+        >>> img = cv2.imread("path_to_image.jpg")
         >>> hist = calculate_histogram(img, bins=256, normalize=True)
         >>> print(hist.shape)
         (256,)
diff --git a/scenedetect/detectors/threshold_detector.py b/scenedetect/detectors/threshold_detector.py
index 784bd1f9..f14d1882 100644
--- a/scenedetect/detectors/threshold_detector.py
+++ b/scenedetect/detectors/threshold_detector.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -16,15 +15,15 @@
 This detector is available from the command-line as the `detect-threshold` command.
 """
 
+import typing as ty
 from enum import Enum
 from logging import getLogger
-from typing import List, Optional
 
 import numpy
 
 from scenedetect.scene_detector import SceneDetector
 
-logger = getLogger('pyscenedetect')
+logger = getLogger("pyscenedetect")
 
 ##
 ## ThresholdDetector Helper Functions
@@ -62,12 +61,13 @@ class ThresholdDetector(SceneDetector):
 
     class Method(Enum):
         """Method for ThresholdDetector to use when comparing frame brightness to the threshold."""
+
        FLOOR = 0
         """Fade out happens when frame brightness falls below threshold."""
         CEILING = 1
         """Fade out happens when frame brightness rises above threshold."""
 
-    THRESHOLD_VALUE_KEY = 'average_rgb'
+    THRESHOLD_VALUE_KEY = "average_rgb"
 
     def __init__(
         self,
@@ -95,7 +95,7 @@ def __init__(
         """
         # TODO(v0.7): Replace with DeprecationWarning that `block_size` will be removed in v0.8.
         if block_size is not None:
-            logger.error('block_size is deprecated.')
+            logger.error("block_size is deprecated.")
 
         super().__init__()
         self.threshold = int(threshold)
@@ -109,15 +109,15 @@ def __init__(
         self.add_final_scene = add_final_scene
         # Where the last fade (threshold crossing) was detected.
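The `HistogramDetector` hunks above build a normalized luma histogram per frame and score adjacent frames by correlation, where values near 1 mean near-identical frames. A self-contained sketch of that idea (the library compares histograms via OpenCV; `numpy.corrcoef` stands in here so the example has no cv2 dependency, and both function names are illustrative):

```python
# Sketch of histogram-based frame comparison: normalized luma histograms
# for adjacent frames are scored by correlation; a low score suggests a cut.
import numpy


def luma_histogram(luma, bins=256):
    """Histogram of an 8-bit luma plane, normalized so bins sum to 1."""
    hist, _ = numpy.histogram(luma, bins=bins, range=(0, 256))
    return hist / float(luma.size)


def histogram_similarity(hist_a, hist_b):
    """Correlation between two histograms; 1.0 means identical."""
    return float(numpy.corrcoef(hist_a, hist_b)[0, 1])


frame_a = numpy.full((4, 4), 10, dtype=numpy.uint8)
frame_b = frame_a.copy()
print(histogram_similarity(luma_histogram(frame_a), luma_histogram(frame_b)))
```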
         self.last_fade = {
-            'frame': 0,  # frame number where the last detected fade is
-            'type': None  # type of fade, can be either 'in' or 'out'
+            "frame": 0,  # frame number where the last detected fade is
+            "type": None,  # type of fade, can be either 'in' or 'out'
         }
         self._metric_keys = [ThresholdDetector.THRESHOLD_VALUE_KEY]
 
-    def get_metrics(self) -> List[str]:
+    def get_metrics(self) -> ty.List[str]:
         return self._metric_keys
 
-    def process_frame(self, frame_num: int, frame_img: numpy.ndarray) -> List[int]:
+    def process_frame(self, frame_num: int, frame_img: numpy.ndarray) -> ty.List[int]:
         """Process the next frame. `frame_num` is assumed to be sequential.
 
         Args:
@@ -126,7 +126,7 @@ def process_frame(self, frame_num: int, frame_img: numpy.ndarray) -> List[int]:
             frame_img (numpy.ndarray or None): Video frame corresponding to `frame_img`.
 
         Returns:
-            List[int]: List of frames where scene cuts have been detected. There may be 0
+            ty.List[int]: List of frames where scene cuts have been detected. There may be 0
             or more frames in the list, and not necessarily the same as frame_num.
""" @@ -145,8 +145,9 @@ def process_frame(self, frame_num: int, frame_img: numpy.ndarray) -> List[int]: # less than or equal to the threshold; however, since this differs on # user-supplied values, we supply the average pixel intensity as this # frame metric instead (to assist with manually selecting a threshold) - if (self.stats_manager is not None) and (self.stats_manager.metrics_exist( - frame_num, self._metric_keys)): + if (self.stats_manager is not None) and ( + self.stats_manager.metrics_exist(frame_num, self._metric_keys) + ): frame_avg = self.stats_manager.get_metrics(frame_num, self._metric_keys)[0] else: frame_avg = _compute_frame_average(frame_img) @@ -154,33 +155,36 @@ def process_frame(self, frame_num: int, frame_img: numpy.ndarray) -> List[int]: self.stats_manager.set_metrics(frame_num, {self._metric_keys[0]: frame_avg}) if self.processed_frame: - if self.last_fade['type'] == 'in' and (( - (self.method == ThresholdDetector.Method.FLOOR and frame_avg < self.threshold) or - (self.method == ThresholdDetector.Method.CEILING and frame_avg >= self.threshold))): + if self.last_fade["type"] == "in" and ( + (self.method == ThresholdDetector.Method.FLOOR and frame_avg < self.threshold) + or (self.method == ThresholdDetector.Method.CEILING and frame_avg >= self.threshold) + ): # Just faded out of a scene, wait for next fade in. - self.last_fade['type'] = 'out' - self.last_fade['frame'] = frame_num + self.last_fade["type"] = "out" + self.last_fade["frame"] = frame_num - elif self.last_fade['type'] == 'out' and ( - (self.method == ThresholdDetector.Method.FLOOR and frame_avg >= self.threshold) or - (self.method == ThresholdDetector.Method.CEILING and frame_avg < self.threshold)): + elif self.last_fade["type"] == "out" and ( + (self.method == ThresholdDetector.Method.FLOOR and frame_avg >= self.threshold) + or (self.method == ThresholdDetector.Method.CEILING and frame_avg < self.threshold) + ): # Only add the scene if min_scene_len frames have passed. 
                 if (frame_num - self.last_scene_cut) >= self.min_scene_len:
                     # Just faded into a new scene, compute timecode for the scene
                     # split based on the fade bias.
-                    f_out = self.last_fade['frame']
+                    f_out = self.last_fade["frame"]
                     f_split = int(
-                        (frame_num + f_out + int(self.fade_bias * (frame_num - f_out))) / 2)
+                        (frame_num + f_out + int(self.fade_bias * (frame_num - f_out))) / 2
+                    )
                     cut_list.append(f_split)
                     self.last_scene_cut = frame_num
-                self.last_fade['type'] = 'in'
-                self.last_fade['frame'] = frame_num
+                self.last_fade["type"] = "in"
+                self.last_fade["frame"] = frame_num
         else:
-            self.last_fade['frame'] = 0
+            self.last_fade["frame"] = 0
             if frame_avg < self.threshold:
-                self.last_fade['type'] = 'out'
+                self.last_fade["type"] = "out"
             else:
-                self.last_fade['type'] = 'in'
+                self.last_fade["type"] = "in"
         self.processed_frame = True
         return cut_list
@@ -197,8 +201,13 @@ def post_process(self, frame_num: int):
         # scene break to indicate the end of the scene. This is only done for
         # fade-outs, as a scene cut is already added when a fade-in is found.
         cut_times = []
-        if self.last_fade['type'] == 'out' and self.add_final_scene and (
-            (self.last_scene_cut is None and frame_num >= self.min_scene_len) or
-            (frame_num - self.last_scene_cut) >= self.min_scene_len):
-            cut_times.append(self.last_fade['frame'])
+        if (
+            self.last_fade["type"] == "out"
+            and self.add_final_scene
+            and (
+                (self.last_scene_cut is None and frame_num >= self.min_scene_len)
+                or (frame_num - self.last_scene_cut) >= self.min_scene_len
+            )
+        ):
+            cut_times.append(self.last_fade["frame"])
         return cut_times
diff --git a/scenedetect/frame_timecode.py b/scenedetect/frame_timecode.py
index 5c009f52..ffb836b4 100644
--- a/scenedetect/frame_timecode.py
+++ b/scenedetect/frame_timecode.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -88,9 +87,11 @@ class FrameTimecode:
     3. Exact number of frames as `int`, or `str` in form NNNNN (`456` or `"456"`)
     """
 
-    def __init__(self,
-                 timecode: Union[int, float, str, 'FrameTimecode'] = None,
-                 fps: Union[int, float, str, 'FrameTimecode'] = None):
+    def __init__(
+        self,
+        timecode: Union[int, float, str, "FrameTimecode"] = None,
+        fps: Union[int, float, str, "FrameTimecode"] = None,
+    ):
         """
         Arguments:
             timecode: A frame number (int), number of seconds (float), or timecode (str in
@@ -112,20 +113,21 @@ def __init__(self,
             self.framerate = timecode.framerate
             self.frame_num = timecode.frame_num
             if fps is not None:
-                raise TypeError('Framerate cannot be overwritten when copying a FrameTimecode.')
+                raise TypeError("Framerate cannot be overwritten when copying a FrameTimecode.")
         else:
             # Ensure other arguments are consistent with API.
             if fps is None:
-                raise TypeError('Framerate (fps) is a required argument.')
+                raise TypeError("Framerate (fps) is a required argument.")
             if isinstance(fps, FrameTimecode):
                 fps = fps.framerate
 
             # Process the given framerate, if it was not already set.
             if not isinstance(fps, (int, float)):
-                raise TypeError('Framerate must be of type int/float.')
-            if (isinstance(fps, int) and not fps > 0) or (isinstance(fps, float)
-                                                          and not fps >= MAX_FPS_DELTA):
-                raise ValueError('Framerate must be positive and greater than zero.')
+                raise TypeError("Framerate must be of type int/float.")
+            if (isinstance(fps, int) and not fps > 0) or (
+                isinstance(fps, float) and not fps >= MAX_FPS_DELTA
+            ):
+                raise ValueError("Framerate must be positive and greater than zero.")
             self.framerate = float(fps)
 
         # Process the timecode value, storing it as an exact number of frames.
@@ -197,7 +199,7 @@ def get_timecode(self, precision: int = 3, use_rounding: bool = True) -> str:
         # Compute hours and minutes based off of seconds, and update seconds.
         secs = self.get_seconds()
         hrs = int(secs / _SECONDS_PER_HOUR)
-        secs -= (hrs * _SECONDS_PER_HOUR)
+        secs -= hrs * _SECONDS_PER_HOUR
         mins = int(secs / _SECONDS_PER_MINUTE)
         secs = max(0.0, secs - (mins * _SECONDS_PER_MINUTE))
         if use_rounding:
@@ -211,15 +213,15 @@ def get_timecode(self, precision: int = 3, use_rounding: bool = True) -> str:
             mins = 0
             hrs += 1
         # We have to extend the precision by 1 here, since `format` will round up.
-        msec = format(secs, '.%df' % (precision + 1)) if precision else ''
+        msec = format(secs, ".%df" % (precision + 1)) if precision else ""
         # Need to include decimal place in `msec_str`.
-        msec_str = msec[-(2 + precision):-1]
+        msec_str = msec[-(2 + precision) : -1]
         secs_str = f"{int(secs):02d}{msec_str}"
         # Return hours, minutes, and seconds as a formatted timecode string.
-        return '%02d:%02d:%s' % (hrs, mins, secs_str)
+        return "%02d:%02d:%s" % (hrs, mins, secs_str)
 
     # TODO(v1.0): Add a `previous` property to replace the existing one and deprecate this getter.
-    def previous_frame(self) -> 'FrameTimecode':
+    def previous_frame(self) -> "FrameTimecode":
         """Return a new FrameTimecode for the previous frame (or 0 if on frame 0)."""
         new_timecode = FrameTimecode(self)
         new_timecode.frame_num = max(0, new_timecode.frame_num - 1)
@@ -236,7 +238,7 @@ def _seconds_to_frames(self, seconds: float) -> int:
         return round(seconds * self.framerate)
 
     def _parse_timecode_number(self, timecode: Union[int, float]) -> int:
-        """ Parse a timecode number, storing it as the exact number of frames.
+        """Parse a timecode number, storing it as the exact number of frames.
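The `get_timecode` hunks above decompose a frame's time in seconds into hours, minutes, and fractional seconds, carrying overflow when rounding pushes seconds to 60. A simplified stand-in for that arithmetic (not the real `FrameTimecode` API; names and the fixed-precision format are illustrative):

```python
# Sketch of frame number -> "HH:MM:SS.nnn" conversion, mirroring the
# hour/minute/second decomposition in the hunk above.

def to_timecode(frame_num, fps, precision=3):
    secs = frame_num / fps
    hrs = int(secs / 3600)
    secs -= hrs * 3600
    mins = int(secs / 60)
    secs = max(0.0, secs - mins * 60)
    secs = round(secs, precision)
    # Rounding can push seconds up to 60; carry into minutes/hours if so.
    if int(secs) == 60:
        secs = 0.0
        mins += 1
        if mins == 60:
            mins = 0
            hrs += 1
    return "%02d:%02d:%06.3f" % (hrs, mins, secs)

print(to_timecode(frame_num=3600, fps=24.0))  # 150 seconds -> 00:02:30.000
```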
         Can be passed as frame number (int), seconds (float)
 
         Raises:
@@ -246,20 +248,20 @@ def _parse_timecode_number(self, timecode: Union[int, float]) -> int:
         # Exact number of frames N
         if isinstance(timecode, int):
             if timecode < 0:
-                raise ValueError('Timecode frame number must be positive and greater than zero.')
+                raise ValueError("Timecode frame number must be positive and greater than zero.")
             return timecode
         # Number of seconds S
         elif isinstance(timecode, float):
             if timecode < 0.0:
-                raise ValueError('Timecode value must be positive and greater than zero.')
+                raise ValueError("Timecode value must be positive and greater than zero.")
             return self._seconds_to_frames(timecode)
         # FrameTimecode
         elif isinstance(timecode, FrameTimecode):
             return timecode.frame_num
         elif timecode is None:
-            raise TypeError('Timecode/frame number must be specified!')
+            raise TypeError("Timecode/frame number must be specified!")
         else:
-            raise TypeError('Timecode format/type unrecognized.')
+            raise TypeError("Timecode format/type unrecognized.")
 
     def _parse_timecode_string(self, input: str) -> int:
         """Parses a string based on the three possible forms (in timecode format,
@@ -273,83 +275,84 @@ def _parse_timecode_string(self, input: str) -> int:
         Raises:
             ValueError: Value could not be parsed correctly.
         """
-        assert not self.framerate is None
+        assert self.framerate is not None
         input = input.strip()
         # Exact number of frames N
         if input.isdigit():
             timecode = int(input)
             if timecode < 0:
-                raise ValueError('Timecode frame number must be positive.')
+                raise ValueError("Timecode frame number must be positive.")
             return timecode
         # Timecode in string format 'HH:MM:SS[.nnn]'
         elif input.find(":") >= 0:
             values = input.split(":")
             hrs, mins = int(values[0]), int(values[1])
-            secs = float(values[2]) if '.' in values[2] else int(values[2])
+            secs = float(values[2]) if "." in values[2] else int(values[2])
             if not (hrs >= 0 and mins >= 0 and secs >= 0 and mins < 60 and secs < 60):
-                raise ValueError('Invalid timecode range (values outside allowed range).')
+                raise ValueError("Invalid timecode range (values outside allowed range).")
             secs += (hrs * 60 * 60) + (mins * 60)
             return self._seconds_to_frames(secs)
         # Try to parse the number as seconds in the format 1234.5 or 1234s
-        if input.endswith('s'):
+        if input.endswith("s"):
             input = input[:-1]
-        if not input.replace('.', '').isdigit():
-            raise ValueError('All characters in timecode seconds string must be digits.')
+        if not input.replace(".", "").isdigit():
+            raise ValueError("All characters in timecode seconds string must be digits.")
         as_float = float(input)
         if as_float < 0.0:
-            raise ValueError('Timecode seconds value must be positive.')
+            raise ValueError("Timecode seconds value must be positive.")
         return self._seconds_to_frames(as_float)
 
-    def __iadd__(self, other: Union[int, float, str, 'FrameTimecode']) -> 'FrameTimecode':
+    def __iadd__(self, other: Union[int, float, str, "FrameTimecode"]) -> "FrameTimecode":
         if isinstance(other, int):
             self.frame_num += other
         elif isinstance(other, FrameTimecode):
             if self.equal_framerate(other.framerate):
                 self.frame_num += other.frame_num
             else:
-                raise ValueError('FrameTimecode instances require equal framerate for addition.')
+                raise ValueError("FrameTimecode instances require equal framerate for addition.")
         # Check if value to add is in number of seconds.
         elif isinstance(other, float):
             self.frame_num += self._seconds_to_frames(other)
         elif isinstance(other, str):
             self.frame_num += self._parse_timecode_string(other)
         else:
-            raise TypeError('Unsupported type for performing addition with FrameTimecode.')
+            raise TypeError("Unsupported type for performing addition with FrameTimecode.")
         if self.frame_num < 0:  # Required to allow adding negative seconds/frames.
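`_parse_timecode_string` above accepts three forms: `"HH:MM:SS[.nnn]"`, seconds such as `"123.4"` or `"123s"`, and an exact frame count such as `"456"`. A simplified illustration of that dispatch (the real method performs stricter range validation; `parse_timecode` is an assumed name, not the library API):

```python
# Sketch of the three timecode string forms parsed by FrameTimecode.

def parse_timecode(text, fps):
    """Return the frame number a timecode string represents at `fps`."""
    text = text.strip()
    if text.isdigit():  # Exact number of frames, e.g. "456".
        return int(text)
    if ":" in text:  # Timecode form "HH:MM:SS[.nnn]".
        hrs, mins, secs = text.split(":")
        total_secs = int(hrs) * 3600 + int(mins) * 60 + float(secs)
        return round(total_secs * fps)
    if text.endswith("s"):  # Seconds with explicit suffix, e.g. "123.4s".
        text = text[:-1]
    return round(float(text) * fps)

print(parse_timecode("00:01:00", fps=10.0))  # 60 seconds at 10 fps -> 600
```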
             self.frame_num = 0
         return self
 
-    def __add__(self, other: Union[int, float, str, 'FrameTimecode']) -> 'FrameTimecode':
+    def __add__(self, other: Union[int, float, str, "FrameTimecode"]) -> "FrameTimecode":
         to_return = FrameTimecode(timecode=self)
         to_return += other
         return to_return
 
-    def __isub__(self, other: Union[int, float, str, 'FrameTimecode']) -> 'FrameTimecode':
+    def __isub__(self, other: Union[int, float, str, "FrameTimecode"]) -> "FrameTimecode":
         if isinstance(other, int):
             self.frame_num -= other
         elif isinstance(other, FrameTimecode):
             if self.equal_framerate(other.framerate):
                 self.frame_num -= other.frame_num
             else:
-                raise ValueError('FrameTimecode instances require equal framerate for subtraction.')
+                raise ValueError("FrameTimecode instances require equal framerate for subtraction.")
         # Check if value to add is in number of seconds.
         elif isinstance(other, float):
             self.frame_num -= self._seconds_to_frames(other)
         elif isinstance(other, str):
             self.frame_num -= self._parse_timecode_string(other)
         else:
-            raise TypeError('Unsupported type for performing subtraction with FrameTimecode: %s' %
-                            type(other))
+            raise TypeError(
+                "Unsupported type for performing subtraction with FrameTimecode: %s" % type(other)
+            )
         if self.frame_num < 0:
             self.frame_num = 0
         return self
 
-    def __sub__(self, other: Union[int, float, str, 'FrameTimecode']) -> 'FrameTimecode':
+    def __sub__(self, other: Union[int, float, str, "FrameTimecode"]) -> "FrameTimecode":
         to_return = FrameTimecode(timecode=self)
         to_return -= other
         return to_return
 
-    def __eq__(self, other: Union[int, float, str, 'FrameTimecode']) -> 'FrameTimecode':
+    def __eq__(self, other: Union[int, float, str, "FrameTimecode"]) -> "FrameTimecode":
         if isinstance(other, int):
             return self.frame_num == other
         elif isinstance(other, float):
@@ -361,17 +364,19 @@ def __eq__(self, other: Union[int, float, str, 'FrameTimecode']) -> 'FrameTimeco
                 return self.frame_num == other.frame_num
             else:
                 raise TypeError(
-                    'FrameTimecode objects must have the same framerate to be compared.')
+                    "FrameTimecode objects must have the same framerate to be compared."
+                )
         elif other is None:
             return False
         else:
-            raise TypeError('Unsupported type for performing == with FrameTimecode: %s' %
-                            type(other))
+            raise TypeError(
+                "Unsupported type for performing == with FrameTimecode: %s" % type(other)
+            )
 
-    def __ne__(self, other: Union[int, float, str, 'FrameTimecode']) -> bool:
+    def __ne__(self, other: Union[int, float, str, "FrameTimecode"]) -> bool:
         return not self == other
 
-    def __lt__(self, other: Union[int, float, str, 'FrameTimecode']) -> bool:
+    def __lt__(self, other: Union[int, float, str, "FrameTimecode"]) -> bool:
         if isinstance(other, int):
             return self.frame_num < other
         elif isinstance(other, float):
@@ -383,12 +388,14 @@ def __lt__(self, other: Union[int, float, str, 'FrameTimecode']) -> bool:
                 return self.frame_num < other.frame_num
             else:
                 raise TypeError(
-                    'FrameTimecode objects must have the same framerate to be compared.')
+                    "FrameTimecode objects must have the same framerate to be compared."
+                )
         else:
-            raise TypeError('Unsupported type for performing < with FrameTimecode: %s' %
-                            type(other))
+            raise TypeError(
+                "Unsupported type for performing < with FrameTimecode: %s" % type(other)
+            )
 
-    def __le__(self, other: Union[int, float, str, 'FrameTimecode']) -> bool:
+    def __le__(self, other: Union[int, float, str, "FrameTimecode"]) -> bool:
         if isinstance(other, int):
             return self.frame_num <= other
         elif isinstance(other, float):
@@ -400,12 +407,14 @@ def __le__(self, other: Union[int, float, str, 'FrameTimecode']) -> bool:
                 return self.frame_num <= other.frame_num
             else:
                 raise TypeError(
-                    'FrameTimecode objects must have the same framerate to be compared.')
+                    "FrameTimecode objects must have the same framerate to be compared."
+                )
         else:
-            raise TypeError('Unsupported type for performing <= with FrameTimecode: %s' %
-                            type(other))
+            raise TypeError(
+                "Unsupported type for performing <= with FrameTimecode: %s" % type(other)
+            )
 
-    def __gt__(self, other: Union[int, float, str, 'FrameTimecode']) -> bool:
+    def __gt__(self, other: Union[int, float, str, "FrameTimecode"]) -> bool:
         if isinstance(other, int):
             return self.frame_num > other
         elif isinstance(other, float):
@@ -417,12 +426,14 @@ def __gt__(self, other: Union[int, float, str, 'FrameTimecode']) -> bool:
                 return self.frame_num > other.frame_num
             else:
                 raise TypeError(
-                    'FrameTimecode objects must have the same framerate to be compared.')
+                    "FrameTimecode objects must have the same framerate to be compared."
+                )
         else:
-            raise TypeError('Unsupported type for performing > with FrameTimecode: %s' %
-                            type(other))
+            raise TypeError(
+                "Unsupported type for performing > with FrameTimecode: %s" % type(other)
+            )
 
-    def __ge__(self, other: Union[int, float, str, 'FrameTimecode']) -> bool:
+    def __ge__(self, other: Union[int, float, str, "FrameTimecode"]) -> bool:
         if isinstance(other, int):
             return self.frame_num >= other
         elif isinstance(other, float):
@@ -434,10 +445,12 @@ def __ge__(self, other: Union[int, float, str, 'FrameTimecode']) -> bool:
                 return self.frame_num >= other.frame_num
             else:
                 raise TypeError(
-                    'FrameTimecode objects must have the same framerate to be compared.')
+                    "FrameTimecode objects must have the same framerate to be compared."
+                )
         else:
-            raise TypeError('Unsupported type for performing >= with FrameTimecode: %s' %
-                            type(other))
+            raise TypeError(
+                "Unsupported type for performing >= with FrameTimecode: %s" % type(other)
+            )
 
     # TODO(v1.0): __int__ and __float__ should be removed. Mark as deprecated, and indicate
     # need to use relevant property instead.
@@ -452,7 +465,7 @@ def __str__(self) -> str:
         return self.get_timecode()
 
     def __repr__(self) -> str:
-        return '%s [frame=%d, fps=%.3f]' % (self.get_timecode(), self.frame_num, self.framerate)
+        return "%s [frame=%d, fps=%.3f]" % (self.get_timecode(), self.frame_num, self.framerate)
 
     def __hash__(self) -> int:
         return self.frame_num
diff --git a/scenedetect/platform.py b/scenedetect/platform.py
index 38c86bf3..65aa7f80 100644
--- a/scenedetect/platform.py
+++ b/scenedetect/platform.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -37,7 +36,6 @@
 class FakeTqdmObject:
     """Provides a no-op tqdm-like object."""
 
-    # pylint: disable=unused-argument
     def __init__(self, **kawrgs):
         """No-op."""
 
@@ -50,13 +48,10 @@ def close(self):
     def set_description(self, desc=None, refresh=True):
         """No-op."""
 
-    # pylint: enable=unused-argument
-
 
 class FakeTqdmLoggingRedirect:
     """Provides a no-op tqdm context manager for redirecting log messages."""
 
-    # pylint: disable=redefined-builtin,unused-argument
     def __init__(self, **kawrgs):
         """No-op."""
 
@@ -66,20 +61,14 @@ def __enter__(self):
     def __exit__(self, type, value, traceback):
         """No-op."""
 
-    # pylint: enable=redefined-builtin,unused-argument
-
 
 # Try to import tqdm and the logging redirect, otherwise provide fake implementations..
 try:
-    # pylint: disable=unused-import
     from tqdm import tqdm
     from tqdm.contrib.logging import logging_redirect_tqdm
-    # pylint: enable=unused-import
 except ModuleNotFoundError:
-    # pylint: disable=invalid-name
    tqdm = FakeTqdmObject
     logging_redirect_tqdm = FakeTqdmLoggingRedirect
-    # pylint: enable=invalid-name
 
 ##
 ## OpenCV imwrite Supported Image Types & Quality/Compression Parameters
@@ -88,7 +77,7 @@ def __exit__(self, type, value, traceback):
 
 # TODO: Move this into scene_manager.
 def get_cv2_imwrite_params() -> Dict[str, Union[int, None]]:
-    """ Get OpenCV imwrite Params: Returns a dict of supported image formats and
+    """Get OpenCV imwrite Params: Returns a dict of supported image formats and
     their associated quality/compression parameter index, or None if that format
     is not supported.
@@ -100,7 +89,7 @@ def get_cv2_imwrite_params() -> Dict[str, Union[int, None]]:
     """
 
     def _get_cv2_param(param_name: str) -> Union[int, None]:
-        if param_name.startswith('CV_'):
+        if param_name.startswith("CV_"):
             param_name = param_name[3:]
         try:
             return getattr(cv2, param_name)
@@ -108,9 +97,9 @@ def _get_cv2_param(param_name: str) -> Union[int, None]:
             return None
 
     return {
-        'jpg': _get_cv2_param('IMWRITE_JPEG_QUALITY'),
-        'png': _get_cv2_param('IMWRITE_PNG_COMPRESSION'),
-        'webp': _get_cv2_param('IMWRITE_WEBP_QUALITY')
+        "jpg": _get_cv2_param("IMWRITE_JPEG_QUALITY"),
+        "png": _get_cv2_param("IMWRITE_PNG_COMPRESSION"),
+        "webp": _get_cv2_param("IMWRITE_WEBP_QUALITY"),
     }
 
@@ -128,14 +117,14 @@ def get_file_name(file_path: AnyStr, include_extension=True) -> AnyStr:
     file_name = os.path.basename(file_path)
     if not include_extension:
         file_name = str(file_name)
-        last_dot_pos = file_name.rfind('.')
+        last_dot_pos = file_name.rfind(".")
         if last_dot_pos >= 0:
             file_name = file_name[:last_dot_pos]
     return file_name
 
 
 def get_and_create_path(file_path: AnyStr, output_directory: Optional[AnyStr] = None) -> AnyStr:
-    """ Get & Create Path: Gets and returns the full/absolute path to file_path
+    """Get & Create Path: Gets and returns the full/absolute path to file_path
     in the specified output_directory if set, creating any required directories
     along the way.
@@ -167,9 +156,9 @@ def get_and_create_path(file_path: AnyStr, output_directory: Optional[AnyStr] = ## -def init_logger(log_level: int = logging.INFO, - show_stdout: bool = False, - log_file: Optional[str] = None): +def init_logger( + log_level: int = logging.INFO, show_stdout: bool = False, log_file: Optional[str] = None +): """Initializes logging for PySceneDetect. The logger instance used is named 'pyscenedetect'. By default the logger has no handlers to suppress output. All existing log handlers are replaced every time this function is invoked. @@ -181,10 +170,10 @@ def init_logger(log_level: int = logging.INFO, log_file: If set, add handler to dump debug log messages to given file path. """ # Format of log messages depends on verbosity. - INFO_TEMPLATE = '[PySceneDetect] %(message)s' - DEBUG_TEMPLATE = '%(levelname)s: %(module)s.%(funcName)s(): %(message)s' + INFO_TEMPLATE = "[PySceneDetect] %(message)s" + DEBUG_TEMPLATE = "%(levelname)s: %(module)s.%(funcName)s(): %(message)s" # Get the named logger and remove any existing handlers. - logger_instance = logging.getLogger('pyscenedetect') + logger_instance = logging.getLogger("pyscenedetect") logger_instance.handlers = [] logger_instance.setLevel(log_level) # Add stdout handler if required. @@ -192,7 +181,8 @@ def init_logger(log_level: int = logging.INFO, handler = logging.StreamHandler(stream=sys.stdout) handler.setLevel(log_level) handler.setFormatter( - logging.Formatter(fmt=DEBUG_TEMPLATE if log_level == logging.DEBUG else INFO_TEMPLATE)) + logging.Formatter(fmt=DEBUG_TEMPLATE if log_level == logging.DEBUG else INFO_TEMPLATE) + ) logger_instance.addHandler(handler) # Add debug log handler if required. 
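The `init_logger` signature change above is purely cosmetic, but the handler-replacement behavior its docstring describes (all existing handlers replaced on every call) is easy to sanity-check with the stdlib alone. `demo_init_logger` is a hypothetical stand-in sketching the stdout portion only; the file-handler branch is omitted:

```python
import logging
import sys

INFO_TEMPLATE = "[PySceneDetect] %(message)s"
DEBUG_TEMPLATE = "%(levelname)s: %(module)s.%(funcName)s(): %(message)s"


def demo_init_logger(log_level: int = logging.INFO, show_stdout: bool = False):
    """Sketch of init_logger's stdout handling (file handler omitted)."""
    logger_instance = logging.getLogger("pyscenedetect-demo")
    logger_instance.handlers = []  # replace any existing handlers on every call
    logger_instance.setLevel(log_level)
    if show_stdout:
        handler = logging.StreamHandler(stream=sys.stdout)
        handler.setLevel(log_level)
        # Verbose template only when debugging, terse template otherwise.
        handler.setFormatter(
            logging.Formatter(fmt=DEBUG_TEMPLATE if log_level == logging.DEBUG else INFO_TEMPLATE)
        )
        logger_instance.addHandler(handler)
    return logger_instance
```

Calling it repeatedly never accumulates handlers, which is the property the docstring promises.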
if log_file: @@ -230,12 +220,12 @@ def invoke_command(args: List[str]) -> int: try: return subprocess.call(args) except OSError as err: - if os.name != 'nt': + if os.name != "nt": raise exception_string = str(err) # Error 206: The filename or extension is too long # Error 87: The parameter is incorrect - to_match = ('206', '87') + to_match = ("206", "87") if any([x in exception_string for x in to_match]): raise CommandTooLong() from err raise @@ -247,17 +237,16 @@ def get_ffmpeg_path() -> Optional[str]: """ # Try invoking ffmpeg with the current environment. try: - subprocess.call(['ffmpeg', '-v', 'quiet']) - return 'ffmpeg' + subprocess.call(["ffmpeg", "-v", "quiet"]) + return "ffmpeg" except OSError: pass # Failed to invoke ffmpeg with current environment, try another possibility. # Try invoking ffmpeg using the one from `imageio_ffmpeg` if available. try: - # pylint: disable=import-outside-toplevel from imageio_ffmpeg import get_ffmpeg_exe - # pylint: enable=import-outside-toplevel - subprocess.call([get_ffmpeg_exe(), '-v', 'quiet']) + + subprocess.call([get_ffmpeg_exe(), "-v", "quiet"]) return get_ffmpeg_exe() # Gracefully handle case where imageio_ffmpeg is not available. except ModuleNotFoundError: @@ -278,9 +267,9 @@ def get_ffmpeg_version() -> Optional[str]: if ffmpeg_path is None: return None # If get_ffmpeg_path() returns a value, the path it returns should be invocable. - output = subprocess.check_output(args=[ffmpeg_path, '-version'], text=True) + output = subprocess.check_output(args=[ffmpeg_path, "-version"], text=True) output_split = output.split() - if len(output_split) >= 3 and output_split[1] == 'version': + if len(output_split) >= 3 and output_split[1] == "version": return output_split[2] # If parsing the version fails, return the entire first line of output. 
return output.splitlines()[0] @@ -288,15 +277,15 @@ def get_ffmpeg_version() -> Optional[str]: def get_mkvmerge_version() -> Optional[str]: """Get mkvmerge version identifier, or None if mkvmerge is not found in PATH.""" - tool_name = 'mkvmerge' + tool_name = "mkvmerge" try: - output = subprocess.check_output(args=[tool_name, '--version'], text=True) + output = subprocess.check_output(args=[tool_name, "--version"], text=True) except FileNotFoundError: # mkvmerge doesn't exist on the system return None output_split = output.split() if len(output_split) >= 1 and output_split[0] == tool_name: - return ' '.join(output_split[1:]) + return " ".join(output_split[1:]) # If parsing the version fails, return the entire first line of output. return output.splitlines()[0] @@ -307,31 +296,32 @@ def get_system_version_info() -> str: Used for the `scenedetect version -a` command. """ - output_template = '{:<12} {}' - line_separator = '-' * 60 - not_found_str = 'Not Installed' + output_template = "{:<12} {}" + line_separator = "-" * 60 + not_found_str = "Not Installed" out_lines = [] # System (Python, OS) - out_lines += ['System Info', line_separator] + out_lines += ["System Info", line_separator] out_lines += [ - output_template.format(name, version) for name, version in ( - ('OS', '%s' % platform.platform()), - ('Python', '%d.%d.%d' % sys.version_info[0:3]), + output_template.format(name, version) + for name, version in ( + ("OS", "%s" % platform.platform()), + ("Python", "%d.%d.%d" % sys.version_info[0:3]), ) ] # Third-Party Packages - out_lines += ['', 'Packages', line_separator] + out_lines += ["", "Packages", line_separator] third_party_packages = ( - 'av', - 'click', - 'cv2', - 'moviepy', - 'numpy', - 'platformdirs', - 'scenedetect', - 'tqdm', + "av", + "click", + "cv2", + "moviepy", + "numpy", + "platformdirs", + "scenedetect", + "tqdm", ) for module_name in third_party_packages: try: @@ -341,21 +331,23 @@ def get_system_version_info() -> str: 
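Both version helpers above share the same parse-or-fall-back shape. A self-contained sketch of just the parsing step (hypothetical helper names, operating on captured output rather than invoking the tools):

```python
def parse_ffmpeg_version(output: str) -> str:
    """`ffmpeg -version` output starts with "ffmpeg version X"; else fall back to line 1."""
    output_split = output.split()
    if len(output_split) >= 3 and output_split[1] == "version":
        return output_split[2]
    return output.splitlines()[0]


def parse_mkvmerge_version(output: str, tool_name: str = "mkvmerge") -> str:
    """`mkvmerge --version` output starts with the tool name; else fall back to line 1."""
    output_split = output.split()
    if len(output_split) >= 1 and output_split[0] == tool_name:
        return " ".join(output_split[1:])
    return output.splitlines()[0]
```

The fallback returns the whole first line so callers still get something printable when the banner format changes.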
out_lines.append(output_template.format(module_name, not_found_str)) # External Tools - out_lines += ['', 'Tools', line_separator] + out_lines += ["", "Tools", line_separator] tool_version_info = ( - ('ffmpeg', get_ffmpeg_version()), - ('mkvmerge', get_mkvmerge_version()), + ("ffmpeg", get_ffmpeg_version()), + ("mkvmerge", get_mkvmerge_version()), ) - for (tool_name, tool_version) in tool_version_info: + for tool_name, tool_version in tool_version_info: out_lines.append( - output_template.format(tool_name, tool_version if tool_version else not_found_str)) + output_template.format(tool_name, tool_version if tool_version else not_found_str) + ) - return '\n'.join(out_lines) + return "\n".join(out_lines) class Template(string.Template): """Template matcher used to replace instances of $TEMPLATES in filenames.""" - idpattern = '[A-Z0-9_]+' + + idpattern = "[A-Z0-9_]+" flags = re.ASCII diff --git a/scenedetect/scene_detector.py b/scenedetect/scene_detector.py index ded5d35d..6ce50993 100644 --- a/scenedetect/scene_detector.py +++ b/scenedetect/scene_detector.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- @@ -25,17 +24,16 @@ event (in, out, cut, etc...). """ -from enum import Enum import typing as ty +from enum import Enum import numpy from scenedetect.stats_manager import StatsManager -# pylint: disable=unused-argument, no-self-use class SceneDetector: - """ Base class to inherit from when implementing a scene detection algorithm. + """Base class to inherit from when implementing a scene detection algorithm. This API is not yet stable and subject to change. @@ -45,6 +43,7 @@ class SceneDetector: Also see the implemented scene detectors in the scenedetect.detectors module to get an idea of how a particular detector can be created. """ + # TODO(v0.7): Make this a proper abstract base class. 
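The `Template` subclass reformatted above restricts substitution to upper-case ASCII placeholders via `idpattern`. A quick sketch of what that restriction buys (class renamed `FilenameTemplate` here to avoid clashing with the library's own `Template`):

```python
import re
import string


class FilenameTemplate(string.Template):
    """Only $UPPER_CASE placeholders substitute; lowercase $text is left alone."""

    idpattern = "[A-Z0-9_]+"
    flags = re.ASCII


# Upper-case placeholders are replaced as expected.
name = FilenameTemplate("$VIDEO_NAME-Scene-$SCENE_NUMBER-$IMAGE_NUMBER").safe_substitute(
    VIDEO_NAME="clip", SCENE_NUMBER="001", IMAGE_NUMBER="01"
)
```

Because `flags = re.ASCII` drops the default `IGNORECASE` behavior, lowercase identifiers fall through `safe_substitute` untouched, which keeps ordinary `$`-containing filenames safe.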
stats_manager: ty.Optional[StatsManager] = None @@ -67,8 +66,10 @@ def is_processing_required(self, frame_num: int) -> bool: to be passed to process_frame for the given frame_num). """ metric_keys = self.get_metrics() - return not metric_keys or not (self.stats_manager is not None - and self.stats_manager.metrics_exist(frame_num, metric_keys)) + return not metric_keys or not ( + self.stats_manager is not None + and self.stats_manager.metrics_exist(frame_num, metric_keys) + ) def stats_manager_required(self) -> bool: """Stats Manager Required: Prototype indicating if detector requires stats. @@ -133,8 +134,9 @@ class SparseSceneDetector(SceneDetector): An example of a SparseSceneDetector is the MotionDetector. """ - def process_frame(self, frame_num: int, - frame_img: numpy.ndarray) -> ty.List[ty.Tuple[int, int]]: + def process_frame( + self, frame_num: int, frame_img: numpy.ndarray + ) -> ty.List[ty.Tuple[int, int]]: """Process Frame: Computes/stores metrics and detects any scene changes. Prototype method, no actual detection. @@ -158,7 +160,6 @@ def post_process(self, frame_num: int) -> ty.List[ty.Tuple[int, int]]: class FlashFilter: - class Mode(Enum): MERGE = 0 """Merge consecutive cuts shorter than filter length.""" @@ -168,10 +169,10 @@ class Mode(Enum): def __init__(self, mode: Mode, length: int): self._mode = mode self._filter_length = length # Number of frames to use for activating the filter. - self._last_above = None # Last frame above threshold. - self._merge_enabled = False # Used to disable merging until at least one cut was found. - self._merge_triggered = False # True when the merge filter is active. - self._merge_start = None # Frame number where we started the merge filte. + self._last_above = None # Last frame above threshold. + self._merge_enabled = False # Used to disable merging until at least one cut was found. + self._merge_triggered = False # True when the merge filter is active. 
+ self._merge_start = None  # Frame number where we started the merge filter. def filter(self, frame_num: int, above_threshold: bool) -> ty.List[int]: if not self._filter_length > 0: @@ -180,8 +181,9 @@ def filter(self, frame_num: int, above_threshold: bool) -> ty.List[int]: self._last_above = frame_num if self._mode == FlashFilter.Mode.MERGE: return self._filter_merge(frame_num=frame_num, above_threshold=above_threshold) - if self._mode == FlashFilter.Mode.SUPPRESS: + elif self._mode == FlashFilter.Mode.SUPPRESS: return self._filter_suppress(frame_num=frame_num, above_threshold=above_threshold) + raise RuntimeError("Unhandled FlashFilter mode.") def _filter_suppress(self, frame_num: int, above_threshold: bool) -> ty.List[int]: min_length_met: bool = (frame_num - self._last_above) >= self._filter_length diff --git a/scenedetect/scene_manager.py b/scenedetect/scene_manager.py index bbada707..dc3bba04 100644 --- a/scenedetect/scene_manager.py +++ b/scenedetect/scene_manager.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- @@ -81,26 +80,31 @@ def on_new_scene(frame_img: numpy.ndarray, frame_num: int): """ import csv -from enum import Enum -from typing import Iterable, List, Tuple, Optional, Dict, Callable, Union, TextIO -import threading -import queue import logging import math +import queue import sys +import threading +from enum import Enum +from typing import Callable, Dict, Iterable, List, Optional, TextIO, Tuple, Union import cv2 import numpy as np -from scenedetect._thirdparty.simpletable import (SimpleTableCell, SimpleTableImage, SimpleTableRow, - SimpleTable, HTMLPage) -from scenedetect.platform import (tqdm, get_and_create_path, get_cv2_imwrite_params, Template) +from scenedetect._thirdparty.simpletable import ( + HTMLPage, + SimpleTable, + SimpleTableCell, + SimpleTableImage, + SimpleTableRow, +) from scenedetect.frame_timecode import
FrameTimecode -from scenedetect.video_stream import VideoStream +from scenedetect.platform import Template, get_and_create_path, get_cv2_imwrite_params, tqdm from scenedetect.scene_detector import SceneDetector, SparseSceneDetector -from scenedetect.stats_manager import StatsManager, FrameMetricRegistered +from scenedetect.stats_manager import StatsManager +from scenedetect.video_stream import VideoStream -logger = logging.getLogger('pyscenedetect') +logger = logging.getLogger("pyscenedetect") # TODO: This value can and should be tuned for performance improvements as much as possible, # until accuracy falls, on a large enough dataset. This has yet to be done, but the current @@ -114,12 +118,13 @@ def on_new_scene(frame_img: numpy.ndarray, frame_num: int): MAX_FRAME_SIZE_ERRORS: int = 16 """Maximum number of frame size error messages that can be logged.""" -PROGRESS_BAR_DESCRIPTION = ' Detected: %d | Progress' +PROGRESS_BAR_DESCRIPTION = " Detected: %d | Progress" """Template to use for progress bar.""" class Interpolation(Enum): """Interpolation method used for image resizing. Based on constants defined in OpenCV.""" + NEAREST = cv2.INTER_NEAREST """Nearest neighbor interpolation.""" LINEAR = cv2.INTER_LINEAR @@ -181,7 +186,7 @@ def get_scenes_from_cuts( """ # TODO(v0.7): Use the warnings module to turn this into a warning. if base_timecode is not None: - logger.error('`base_timecode` argument is deprecated has no effect.') + logger.error("`base_timecode` argument is deprecated has no effect.") # Scene list, where scenes are tuples of (Start FrameTimecode, End FrameTimecode). 
scene_list = [] @@ -200,10 +205,12 @@ def get_scenes_from_cuts( return scene_list -def write_scene_list(output_csv_file: TextIO, - scene_list: Iterable[Tuple[FrameTimecode, FrameTimecode]], - include_cut_list: bool = True, - cut_list: Optional[Iterable[FrameTimecode]] = None) -> None: +def write_scene_list( + output_csv_file: TextIO, + scene_list: Iterable[Tuple[FrameTimecode, FrameTimecode]], + include_cut_list: bool = True, + cut_list: Optional[Iterable[FrameTimecode]] = None, +) -> None: """Writes the given list of scenes to an output file handle in CSV format. Arguments: @@ -215,41 +222,56 @@ def write_scene_list(output_csv_file: TextIO, in the video that need to be split to generate individual scenes). If not specified, the cut list is generated using the start times of each scene following the first one. """ - csv_writer = csv.writer(output_csv_file, lineterminator='\n') + csv_writer = csv.writer(output_csv_file, lineterminator="\n") # If required, output the cutting list as the first row (i.e. before the header row). 
if include_cut_list: csv_writer.writerow( - ["Timecode List:"] + - cut_list if cut_list else [start.get_timecode() for start, _ in scene_list[1:]]) - csv_writer.writerow([ - "Scene Number", "Start Frame", "Start Timecode", "Start Time (seconds)", "End Frame", - "End Timecode", "End Time (seconds)", "Length (frames)", "Length (timecode)", - "Length (seconds)" - ]) + ["Timecode List:"] + cut_list + if cut_list + else [start.get_timecode() for start, _ in scene_list[1:]] + ) + csv_writer.writerow( + [ + "Scene Number", + "Start Frame", + "Start Timecode", + "Start Time (seconds)", + "End Frame", + "End Timecode", + "End Time (seconds)", + "Length (frames)", + "Length (timecode)", + "Length (seconds)", + ] + ) for i, (start, end) in enumerate(scene_list): duration = end - start - csv_writer.writerow([ - '%d' % (i + 1), - '%d' % (start.get_frames() + 1), - start.get_timecode(), - '%.3f' % start.get_seconds(), - '%d' % end.get_frames(), - end.get_timecode(), - '%.3f' % end.get_seconds(), - '%d' % duration.get_frames(), - duration.get_timecode(), - '%.3f' % duration.get_seconds() - ]) - - -def write_scene_list_html(output_html_filename, - scene_list, - cut_list=None, - css=None, - css_class='mytable', - image_filenames=None, - image_width=None, - image_height=None): + csv_writer.writerow( + [ + "%d" % (i + 1), + "%d" % (start.get_frames() + 1), + start.get_timecode(), + "%.3f" % start.get_seconds(), + "%d" % end.get_frames(), + end.get_timecode(), + "%.3f" % end.get_seconds(), + "%d" % duration.get_frames(), + duration.get_timecode(), + "%.3f" % duration.get_seconds(), + ] + ) + + +def write_scene_list_html( + output_html_filename, + scene_list, + cut_list=None, + css=None, + css_class="mytable", + image_filenames=None, + image_width=None, + image_height=None, +): """Writes the given list of scenes to an output file handle in html format. 
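One thing worth flagging in the `write_scene_list` hunk above: ruff's reformat makes visible that the conditional expression binds more loosely than list concatenation, so when `cut_list` is empty the `"Timecode List:"` label is dropped and the row falls back to bare scene start times. That is pre-existing behavior the formatter preserves, not a change. A minimal illustration with placeholder timecode strings:

```python
scene_starts = ["00:00:05.000", "00:00:12.000"]


def first_row(cut_list):
    # Same expression shape as write_scene_list: `+` binds tighter than `if/else`,
    # so the label is only present when cut_list is non-empty.
    return ["Timecode List:"] + cut_list if cut_list else scene_starts
```

If the label were intended unconditionally, the `+` operand would need parentheses around the conditional instead.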
Arguments: @@ -306,37 +328,49 @@ def write_scene_list_html(output_html_filename, # Output Timecode list timecode_table = SimpleTable( - [["Timecode List:"] + - (cut_list if cut_list else [start.get_timecode() for start, _ in scene_list[1:]])], - css_class=css_class) + [ + ["Timecode List:"] + + (cut_list if cut_list else [start.get_timecode() for start, _ in scene_list[1:]]) + ], + css_class=css_class, + ) # Output list of scenes header_row = [ - "Scene Number", "Start Frame", "Start Timecode", "Start Time (seconds)", "End Frame", - "End Timecode", "End Time (seconds)", "Length (frames)", "Length (timecode)", - "Length (seconds)" + "Scene Number", + "Start Frame", + "Start Timecode", + "Start Time (seconds)", + "End Frame", + "End Timecode", + "End Time (seconds)", + "Length (frames)", + "Length (timecode)", + "Length (seconds)", ] for i, (start, end) in enumerate(scene_list): duration = end - start - row = SimpleTableRow([ - '%d' % (i + 1), - '%d' % (start.get_frames() + 1), - start.get_timecode(), - '%.3f' % start.get_seconds(), - '%d' % end.get_frames(), - end.get_timecode(), - '%.3f' % end.get_seconds(), - '%d' % duration.get_frames(), - duration.get_timecode(), - '%.3f' % duration.get_seconds() - ]) + row = SimpleTableRow( + [ + "%d" % (i + 1), + "%d" % (start.get_frames() + 1), + start.get_timecode(), + "%.3f" % start.get_seconds(), + "%d" % end.get_frames(), + end.get_timecode(), + "%.3f" % end.get_seconds(), + "%d" % duration.get_frames(), + duration.get_timecode(), + "%.3f" % duration.get_seconds(), + ] + ) if image_filenames: for image in image_filenames[i]: row.add_cell( - SimpleTableCell( - SimpleTableImage(image, width=image_width, height=image_height))) + SimpleTableCell(SimpleTableImage(image, width=image_width, height=image_height)) + ) if i == 0: scene_table = SimpleTable(rows=[row], header_row=header_row, css_class=css_class) @@ -355,20 +389,22 @@ def write_scene_list_html(output_html_filename, # TODO(v1.0): Refactor to take a SceneList object; 
consider moving this and save scene list # to a better spot, or just move them to scene_list.py. # -def save_images(scene_list: List[Tuple[FrameTimecode, FrameTimecode]], - video: VideoStream, - num_images: int = 3, - frame_margin: int = 1, - image_extension: str = 'jpg', - encoder_param: int = 95, - image_name_template: str = '$VIDEO_NAME-Scene-$SCENE_NUMBER-$IMAGE_NUMBER', - output_dir: Optional[str] = None, - show_progress: Optional[bool] = False, - scale: Optional[float] = None, - height: Optional[int] = None, - width: Optional[int] = None, - interpolation: Interpolation = Interpolation.CUBIC, - video_manager=None) -> Dict[int, List[str]]: +def save_images( + scene_list: List[Tuple[FrameTimecode, FrameTimecode]], + video: VideoStream, + num_images: int = 3, + frame_margin: int = 1, + image_extension: str = "jpg", + encoder_param: int = 95, + image_name_template: str = "$VIDEO_NAME-Scene-$SCENE_NUMBER-$IMAGE_NUMBER", + output_dir: Optional[str] = None, + show_progress: Optional[bool] = False, + scale: Optional[float] = None, + height: Optional[int] = None, + width: Optional[int] = None, + interpolation: Interpolation = Interpolation.CUBIC, + video_manager=None, +) -> Dict[int, List[str]]: """Save a set number of images from each scene, given a list of scenes and the associated video/frame source. @@ -418,7 +454,7 @@ def save_images(scene_list: List[Tuple[FrameTimecode, FrameTimecode]], """ # TODO(v0.7): Add DeprecationWarning that `video_manager` will be removed in v0.8. if video_manager is not None: - logger.error('`video_manager` argument is deprecated, use `video` instead.') + logger.error("`video_manager` argument is deprecated, use `video` instead.") video = video_manager if not scene_list: @@ -428,56 +464,66 @@ def save_images(scene_list: List[Tuple[FrameTimecode, FrameTimecode]], # TODO: Validate that encoder_param is within the proper range. # Should be between 0 and 100 (inclusive) for jpg/webp, and 1-9 for png. 
- imwrite_param = [get_cv2_imwrite_params()[image_extension], encoder_param - ] if encoder_param is not None else [] + imwrite_param = ( + [get_cv2_imwrite_params()[image_extension], encoder_param] + if encoder_param is not None + else [] + ) video.reset() # Setup flags and init progress bar if available. completed = True - logger.info('Generating output images (%d per scene)...', num_images) + logger.info("Generating output images (%d per scene)...", num_images) progress_bar = None if show_progress: - progress_bar = tqdm(total=len(scene_list) * num_images, unit='images', dynamic_ncols=True) + progress_bar = tqdm(total=len(scene_list) * num_images, unit="images", dynamic_ncols=True) filename_template = Template(image_name_template) - scene_num_format = '%0' - scene_num_format += str(max(3, math.floor(math.log(len(scene_list), 10)) + 1)) + 'd' - image_num_format = '%0' - image_num_format += str(math.floor(math.log(num_images, 10)) + 2) + 'd' + scene_num_format = "%0" + scene_num_format += str(max(3, math.floor(math.log(len(scene_list), 10)) + 1)) + "d" + image_num_format = "%0" + image_num_format += str(math.floor(math.log(num_images, 10)) + 2) + "d" framerate = scene_list[0][0].framerate # TODO(v1.0): Split up into multiple sub-expressions so auto-formatter works correctly. 
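The `scene_num_format` construction above builds a zero-padded printf-style format from the scene count. As a standalone sketch (hypothetical helper name):

```python
import math


def scene_number_format(num_scenes: int) -> str:
    """At least 3 digits of zero padding, widening once the scene count needs it."""
    return "%0" + str(max(3, math.floor(math.log(num_scenes, 10)) + 1)) + "d"
```

This keeps filenames lexicographically sortable: `Scene-002` sorts before `Scene-010`, and the width only grows past three digits for 1000+ scenes.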
timecode_list = [ [ - FrameTimecode(int(f), fps=framerate) for f in [ - # middle frames - a[len(a) // 2] if (0 < j < num_images - 1) or num_images == 1 - - # first frame - else min(a[0] + frame_margin, a[-1]) if j == 0 - - # last frame + FrameTimecode(int(f), fps=framerate) + for f in [ + # middle frames + a[len(a) // 2] + if (0 < j < num_images - 1) or num_images == 1 + # first frame + else min(a[0] + frame_margin, a[-1]) + if j == 0 + # last frame else max(a[-1] - frame_margin, a[0]) - - # for each evenly-split array of frames in the scene list + # for each evenly-split array of frames in the scene list for j, a in enumerate(np.array_split(r, num_images)) ] - ] for i, r in enumerate([ - # pad ranges to number of images - r if 1 + r[-1] - r[0] >= num_images else list(r) + [r[-1]] * (num_images - len(r)) - # create range of frames in scene - for r in ( - range( - start.get_frames(), - start.get_frames() + max( - 1, # guard against zero length scenes - end.get_frames() - start.get_frames())) - # for each scene in scene list - for start, end in scene_list) - ]) + ] + for i, r in enumerate( + [ + # pad ranges to number of images + r if 1 + r[-1] - r[0] >= num_images else list(r) + [r[-1]] * (num_images - len(r)) + # create range of frames in scene + for r in ( + range( + start.get_frames(), + start.get_frames() + + max( + 1, # guard against zero length scenes + end.get_frames() - start.get_frames(), + ), + ) + # for each scene in scene list + for start, end in scene_list + ) + ] + ) ] image_filenames = {i: [] for i in range(len(timecode_list))} @@ -485,31 +531,30 @@ def save_images(scene_list: List[Tuple[FrameTimecode, FrameTimecode]], if abs(aspect_ratio - 1.0) < 0.01: aspect_ratio = None - logger.debug('Writing images with template %s', filename_template.template) + logger.debug("Writing images with template %s", filename_template.template) for i, scene_timecodes in enumerate(timecode_list): for j, image_timecode in enumerate(scene_timecodes): 
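The deeply nested `timecode_list` comprehension that ruff re-indents above selects, per scene, the first frame (pushed forward by `frame_margin`), evenly spaced middle frames, and the last frame (pulled back by `frame_margin`). A stdlib-only sketch of that selection for a single scene, where `split_even` stands in for `np.array_split` and all names are illustrative:

```python
def split_even(seq, parts):
    """Stand-in for np.array_split: roughly equal chunks, larger chunks first."""
    k, m = divmod(len(seq), parts)
    chunks, pos = [], 0
    for j in range(parts):
        size = k + (1 if j < m else 0)
        chunks.append(seq[pos : pos + size])
        pos += size
    return chunks


def select_frame_numbers(start, end, num_images, frame_margin):
    frames = list(range(start, start + max(1, end - start)))  # guard zero-length scenes
    if len(frames) < num_images:  # pad short scenes so every image slot has a frame
        frames += [frames[-1]] * (num_images - len(frames))
    out = []
    for j, a in enumerate(split_even(frames, num_images)):
        if (0 < j < num_images - 1) or num_images == 1:
            out.append(a[len(a) // 2])  # middle frames
        elif j == 0:
            out.append(min(a[0] + frame_margin, a[-1]))  # first frame
        else:
            out.append(max(a[-1] - frame_margin, a[0]))  # last frame
    return out
```

The margin keeps the first/last images away from the cut boundary, where transition artifacts are most likely.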
video.seek(image_timecode) frame_im = video.read() if frame_im is not None: # TODO: Allow NUM to be a valid suffix in addition to NUMBER. - file_path = '%s.%s' % ( + file_path = "%s.%s" % ( filename_template.safe_substitute( VIDEO_NAME=video.name, SCENE_NUMBER=scene_num_format % (i + 1), IMAGE_NUMBER=image_num_format % (j + 1), FRAME_NUMBER=image_timecode.get_frames(), TIMESTAMP_MS=int(image_timecode.get_seconds() * 1000), - TIMECODE=image_timecode.get_timecode().replace(":", ";")), + TIMECODE=image_timecode.get_timecode().replace(":", ";"), + ), image_extension, ) image_filenames[i].append(file_path) # TODO: Combine this resize with the ones below. if aspect_ratio is not None: frame_im = cv2.resize( - frame_im, (0, 0), - fx=aspect_ratio, - fy=1.0, - interpolation=interpolation.value) + frame_im, (0, 0), fx=aspect_ratio, fy=1.0, interpolation=interpolation.value + ) frame_height = frame_im.shape[0] frame_width = frame_im.shape[1] @@ -523,10 +568,12 @@ def save_images(scene_list: List[Tuple[FrameTimecode, FrameTimecode]], height = int(factor * frame_height) assert height > 0 and width > 0 frame_im = cv2.resize( - frame_im, (width, height), interpolation=interpolation.value) + frame_im, (width, height), interpolation=interpolation.value + ) elif scale: frame_im = cv2.resize( - frame_im, (0, 0), fx=scale, fy=scale, interpolation=interpolation.value) + frame_im, (0, 0), fx=scale, fy=scale, interpolation=interpolation.value + ) cv2.imwrite(get_and_create_path(file_path, output_dir), frame_im, imwrite_param) else: @@ -539,7 +586,7 @@ def save_images(scene_list: List[Tuple[FrameTimecode, FrameTimecode]], progress_bar.close() if not completed: - logger.error('Could not generate all output images.') + logger.error("Could not generate all output images.") return image_filenames @@ -668,7 +715,7 @@ def add_detector(self, detector: SceneDetector) -> None: self._frame_buffer_size = max(detector.event_buffer_length, self._frame_buffer_size) def get_num_detectors(self) -> int: - 
"""Get number of registered scene detectors added via add_detector. """ + """Get number of registered scene detectors added via add_detector.""" return len(self._detector_list) def clear(self) -> None: @@ -687,13 +734,13 @@ def clear(self) -> None: self.clear_detectors() def clear_detectors(self) -> None: - """Remove all scene detectors added to the SceneManager via add_detector(). """ + """Remove all scene detectors added to the SceneManager via add_detector().""" self._detector_list.clear() self._sparse_detector_list.clear() - def get_scene_list(self, - base_timecode: Optional[FrameTimecode] = None, - start_in_scene: bool = False) -> List[Tuple[FrameTimecode, FrameTimecode]]: + def get_scene_list( + self, base_timecode: Optional[FrameTimecode] = None, start_in_scene: bool = False + ) -> List[Tuple[FrameTimecode, FrameTimecode]]: """Return a list of tuples of start/end FrameTimecodes for each detected scene. Arguments: @@ -711,12 +758,13 @@ def get_scene_list(self, """ # TODO(v0.7): Replace with DeprecationWarning that `base_timecode` will be removed in v0.8. if base_timecode is not None: - logger.error('`base_timecode` argument is deprecated and has no effect.') + logger.error("`base_timecode` argument is deprecated and has no effect.") if self._base_timecode is None: return [] cut_list = self._get_cutting_list() scene_list = get_scenes_from_cuts( - cut_list=cut_list, start_pos=self._start_pos, end_pos=self._last_pos + 1) + cut_list=cut_list, start_pos=self._start_pos, end_pos=self._last_pos + 1 + ) # If we didn't actually detect any cuts, make sure the resulting scene_list is empty # unless start_in_scene is True. 
if not cut_list and not start_in_scene: @@ -735,13 +783,17 @@ def _get_event_list(self) -> List[Tuple[FrameTimecode, FrameTimecode]]: if not self._event_list: return [] assert self._base_timecode is not None - return [(self._base_timecode + start, self._base_timecode + end) - for start, end in self._event_list] + return [ + (self._base_timecode + start, self._base_timecode + end) + for start, end in self._event_list + ] - def _process_frame(self, - frame_num: int, - frame_im: np.ndarray, - callback: Optional[Callable[[np.ndarray, int], None]] = None) -> bool: + def _process_frame( + self, + frame_num: int, + frame_im: np.ndarray, + callback: Optional[Callable[[np.ndarray, int], None]] = None, + ) -> bool: """Add any cuts detected with the current frame to the cutting list. Returns True if any new cuts were detected, False otherwise.""" new_cuts = False @@ -751,7 +803,7 @@ def _process_frame(self, self._frame_buffer.append(frame_im) # frame_buffer[-1] is current frame, -2 is one behind, etc # so index based on cut frame should be [event_frame - (frame_num + 1)] - self._frame_buffer = self._frame_buffer[-(self._frame_buffer_size + 1):] + self._frame_buffer = self._frame_buffer[-(self._frame_buffer_size + 1) :] for detector in self._detector_list: cuts = detector.process_frame(frame_num, frame_im) self._cutting_list += cuts @@ -778,14 +830,16 @@ def stop(self) -> None: """Stop the current :meth:`detect_scenes` call, if any. 
Thread-safe.""" self._stop.set() - def detect_scenes(self, - video: VideoStream = None, - duration: Optional[FrameTimecode] = None, - end_time: Optional[FrameTimecode] = None, - frame_skip: int = 0, - show_progress: bool = False, - callback: Optional[Callable[[np.ndarray, int], None]] = None, - frame_source: Optional[VideoStream] = None) -> int: + def detect_scenes( + self, + video: VideoStream = None, + duration: Optional[FrameTimecode] = None, + end_time: Optional[FrameTimecode] = None, + frame_skip: int = 0, + show_progress: bool = False, + callback: Optional[Callable[[np.ndarray, int], None]] = None, + frame_source: Optional[VideoStream] = None, + ) -> int: """Perform scene detection on the given video using the added SceneDetectors, returning the number of frames processed. Results can be obtained by calling :meth:`get_scene_list` or :meth:`get_cut_list`. @@ -823,14 +877,14 @@ def detect_scenes(self, if video is None: raise TypeError("detect_scenes() missing 1 required positional argument: 'video'") if frame_skip > 0 and self.stats_manager is not None: - raise ValueError('frame_skip must be 0 when using a StatsManager.') + raise ValueError("frame_skip must be 0 when using a StatsManager.") if duration is not None and end_time is not None: - raise ValueError('duration and end_time cannot be set at the same time!') + raise ValueError("duration and end_time cannot be set at the same time!") # TODO: These checks should be handled by the FrameTimecode constructor. 
if duration is not None and isinstance(duration, (int, float)) and duration < 0: - raise ValueError('duration must be greater than or equal to 0!') + raise ValueError("duration must be greater than or equal to 0!") if end_time is not None and isinstance(end_time, (int, float)) and end_time < 0: - raise ValueError('end_time must be greater than or equal to 0!') + raise ValueError("end_time must be greater than or equal to 0!") self._base_timecode = video.base_timecode @@ -847,9 +901,9 @@ def detect_scenes(self, total_frames = 0 if video.duration is not None: if end_time is not None and end_time < video.duration: - total_frames = (end_time - start_frame_num) + total_frames = end_time - start_frame_num else: - total_frames = (video.duration.get_frames() - start_frame_num) + total_frames = video.duration.get_frames() - start_frame_num # Calculate the desired downscale factor and log the effective resolution. if self.auto_downscale: @@ -857,15 +911,18 @@ def detect_scenes(self, else: downscale_factor = self.downscale if downscale_factor > 1: - logger.info('Downscale factor set to %d, effective resolution: %d x %d', - downscale_factor, video.frame_size[0] // downscale_factor, - video.frame_size[1] // downscale_factor) + logger.info( + "Downscale factor set to %d, effective resolution: %d x %d", + downscale_factor, + video.frame_size[0] // downscale_factor, + video.frame_size[1] // downscale_factor, + ) progress_bar = None if show_progress: progress_bar = tqdm( total=int(total_frames), - unit='frames', + unit="frames", desc=PROGRESS_BAR_DESCRIPTION % 0, dynamic_ncols=True, ) @@ -875,27 +932,30 @@ def detect_scenes(self, decode_thread = threading.Thread( target=SceneManager._decode_thread, args=(self, video, frame_skip, downscale_factor, end_time, frame_queue), - daemon=True) + daemon=True, + ) decode_thread.start() frame_im = None - logger.info('Detecting scenes...') + logger.info("Detecting scenes...") while not self._stop.is_set(): next_frame, position = 
frame_queue.get() if next_frame is None and position is None: break - if not next_frame is None: + if next_frame is not None: frame_im = next_frame new_cuts = self._process_frame(position.frame_num, frame_im, callback) if progress_bar is not None: if new_cuts: progress_bar.set_description( - PROGRESS_BAR_DESCRIPTION % len(self._cutting_list), refresh=False) + PROGRESS_BAR_DESCRIPTION % len(self._cutting_list), refresh=False + ) progress_bar.update(1 + frame_skip) if progress_bar is not None: progress_bar.set_description( - PROGRESS_BAR_DESCRIPTION % len(self._cutting_list), refresh=True) + PROGRESS_BAR_DESCRIPTION % len(self._cutting_list), refresh=True + ) progress_bar.close() # Unblock any puts in the decode thread before joining. This can happen if the main # processing thread stops before the decode thread. @@ -938,25 +998,32 @@ def _decode_thread( if video.frame_size != decoded_size: logger.warn( f"WARNING: Decoded frame size ({decoded_size}) does not match " - f" video resolution {video.frame_size}, possible corrupt input.") + f" video resolution {video.frame_size}, possible corrupt input." + ) elif self._frame_size != decoded_size: self._frame_size_errors += 1 if self._frame_size_errors <= MAX_FRAME_SIZE_ERRORS: logger.error( f"ERROR: Frame at {str(video.position)} has incorrect size and " f"cannot be processed: decoded size = {decoded_size}, " - f"expected = {self._frame_size}. Video may be corrupt.") + f"expected = {self._frame_size}. Video may be corrupt." + ) if self._frame_size_errors == MAX_FRAME_SIZE_ERRORS: logger.warn( - f"WARNING: Too many errors emitted, skipping future messages.") + "WARNING: Too many errors emitted, skipping future messages." + ) # Skip processing frames that have an incorrect size. 
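The decode loop reformatted above (including the `not next_frame is None` → `next_frame is not None` lint fix) follows a standard producer/consumer shape: the decode thread pushes `(frame, position)` pairs and a final `(None, None)` sentinel unblocks the consumer. A stripped-down stdlib sketch with illustrative names and strings standing in for frames:

```python
import queue
import threading


def decode_thread(frames, out_queue):
    """Producer: emit (frame, position) pairs, then a sentinel to stop the consumer."""
    for position, frame in enumerate(frames):
        out_queue.put((frame, position))
    out_queue.put((None, None))  # sentinel: no more frames


def process_frames(out_queue):
    """Consumer: drain the queue until the sentinel arrives."""
    positions = []
    while True:
        next_frame, position = out_queue.get()
        if next_frame is None and position is None:
            break
        if next_frame is not None:  # same identity check as the lint fix above
            positions.append(position)
    return positions
```

Using a sentinel rather than polling a flag means the consumer never blocks forever on an exhausted producer.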
                        continue
                if downscale_factor > 1:
                    frame_im = cv2.resize(
-                        frame_im, (round(frame_im.shape[1] / downscale_factor),
-                                   round(frame_im.shape[0] / downscale_factor)),
-                        interpolation=self._interpolation.value)
+                        frame_im,
+                        (
+                            round(frame_im.shape[1] / downscale_factor),
+                            round(frame_im.shape[0] / downscale_factor),
+                        ),
+                        interpolation=self._interpolation.value,
+                    )
                else:
                    if video.read(decode=False) is False:
                        break
@@ -982,7 +1049,7 @@ def _decode_thread(
            logger.debug("Received KeyboardInterrupt.")
            self._stop.set()
        except BaseException:
-            logger.critical('Fatal error: Exception raised in decode thread.')
+            logger.critical("Fatal error: Exception raised in decode thread.")
            self._exception_info = sys.exc_info()
            self._stop.set()
@@ -993,17 +1060,13 @@ def _decode_thread(
            # Make sure main thread stops processing loop.
            out_queue.put((None, None))

-    # pylint: enable=bare-except
-
    #
    # Deprecated Methods
    #

-    # pylint: disable=unused-argument
-
-    def get_cut_list(self,
-                     base_timecode: Optional[FrameTimecode] = None,
-                     show_warning: bool = True) -> List[FrameTimecode]:
+    def get_cut_list(
+        self, base_timecode: Optional[FrameTimecode] = None, show_warning: bool = True
+    ) -> List[FrameTimecode]:
        """[DEPRECATED] Return a list of FrameTimecodes of the detected scene changes/cuts.

        Unlike get_scene_list, the cutting list returns a list of FrameTimecodes representing
@@ -1026,12 +1089,11 @@ def get_cut_list(self,
        """
        # TODO(v0.7): Use the warnings module to turn this into a warning.
        if show_warning:
-            logger.error('`get_cut_list()` is deprecated and will be removed in a future release.')
+            logger.error("`get_cut_list()` is deprecated and will be removed in a future release.")
        return self._get_cutting_list()

    def get_event_list(
-        self,
-        base_timecode: Optional[FrameTimecode] = None
+        self, base_timecode: Optional[FrameTimecode] = None
    ) -> List[Tuple[FrameTimecode, FrameTimecode]]:
        """[DEPRECATED] DO NOT USE.
@@ -1048,11 +1110,9 @@ def get_event_list(
            List of pairs of FrameTimecode objects denoting the detected scenes.
        """
        # TODO(v0.7): Use the warnings module to turn this into a warning.
-        logger.error('`get_event_list()` is deprecated and will be removed in a future release.')
+        logger.error("`get_event_list()` is deprecated and will be removed in a future release.")
        return self._get_event_list()

-    # pylint: enable=unused-argument
-
    def _is_processing_required(self, frame_num: int) -> bool:
        """True if frame metrics not in StatsManager, False otherwise."""
        if self.stats_manager is None:
diff --git a/scenedetect/stats_manager.py b/scenedetect/stats_manager.py
index 8bb8b9ec..b028e244 100644
--- a/scenedetect/stats_manager.py
+++ b/scenedetect/stats_manager.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 #         PySceneDetect: Python-Based Video Scene Detector
 #   -------------------------------------------------------------------
@@ -23,15 +22,16 @@
 """

 import csv
-from logging import getLogger
+import os.path
 import typing as ty
+from logging import getLogger
+
 # TODO: Replace below imports with `ty.` prefix.
 from typing import Any, Dict, Iterable, List, Optional, Set, TextIO, Union
-import os.path

 from scenedetect.frame_timecode import FrameTimecode

-logger = getLogger('pyscenedetect')
+logger = getLogger("pyscenedetect")

 ##
 ## StatsManager CSV File Column Names (Header Row)
@@ -50,19 +50,22 @@

 class FrameMetricRegistered(Exception):
     """[DEPRECATED - DO NOT USE] No longer used."""
+
     pass


 class FrameMetricNotRegistered(Exception):
     """[DEPRECATED - DO NOT USE] No longer used."""
+
     pass


 class StatsFileCorrupt(Exception):
     """Raised when frame metrics/stats could not be loaded from a provided CSV file."""

-    def __init__(self,
-                 message: str = "Could not load frame metric data data from passed CSV file."):
+    def __init__(
+        self, message: str = "Could not load frame metric data from passed CSV file."
+    ):
         super().__init__(message)
@@ -98,8 +101,10 @@ def __init__(self, base_timecode: FrameTimecode = None):
         # of each frame metric key and the value it represents (usually float).
         self._frame_metrics: Dict[FrameTimecode, Dict[str, float]] = dict()
         self._metric_keys: Set[str] = set()
-        self._metrics_updated: bool = False # Flag indicating if metrics require saving.
-        self._base_timecode: Optional[FrameTimecode] = base_timecode # Used for timing calculations.
+        self._metrics_updated: bool = False  # Flag indicating if metrics require saving.
+        self._base_timecode: Optional[FrameTimecode] = (
+            base_timecode  # Used for timing calculations.
+        )

     @property
     def metric_keys(self) -> ty.Iterable[str]:
@@ -127,7 +132,7 @@ def get_metrics(self, frame_number: int, metric_keys: Iterable[str]) -> List[Any
         return [self._get_metric(frame_number, metric_key) for metric_key in metric_keys]

     def set_metrics(self, frame_number: int, metric_kv_dict: Dict[str, Any]) -> None:
-        """ Set Metrics: Sets the provided statistics/metrics for a given frame.
+        """Set Metrics: Sets the provided statistics/metrics for a given frame.

         Arguments:
             frame_number: Frame number to retrieve metrics for.
@@ -138,7 +143,7 @@ def set_metrics(self, frame_number: int, metric_kv_dict: Dict[str, Any]) -> None
             self._set_metric(frame_number, metric_key, metric_kv_dict[metric_key])

     def metrics_exist(self, frame_number: int, metric_keys: Iterable[str]) -> bool:
-        """ Metrics Exist: Checks if the given metrics/stats exist for the given frame.
+        """Metrics Exist: Checks if the given metrics/stats exist for the given frame.

         Returns:
             bool: True if the given metric keys exist for the frame, False otherwise.
@@ -146,7 +151,7 @@ def metrics_exist(self, frame_number: int, metric_keys: Iterable[str]) -> bool:
         return all([self._metric_exists(frame_number, metric_key) for metric_key in metric_keys])

     def is_save_required(self) -> bool:
-        """ Is Save Required: Checks if the stats have been updated since loading.
+ """Is Save Required: Checks if the stats have been updated since loading. Returns: bool: True if there are frame metrics/statistics not yet written to disk, @@ -154,11 +159,13 @@ def is_save_required(self) -> bool: """ return self._metrics_updated - def save_to_csv(self, - csv_file: Union[str, bytes, TextIO], - base_timecode: Optional[FrameTimecode] = None, - force_save=True) -> None: - """ Save To CSV: Saves all frame metrics stored in the StatsManager to a CSV file. + def save_to_csv( + self, + csv_file: Union[str, bytes, TextIO], + base_timecode: Optional[FrameTimecode] = None, + force_save=True, + ) -> None: + """Save To CSV: Saves all frame metrics stored in the StatsManager to a CSV file. Arguments: csv_file: A file handle opened in write mode (e.g. open('...', 'w')) or a path as str. @@ -170,7 +177,7 @@ def save_to_csv(self, """ # TODO(v0.7): Replace with DeprecationWarning that `base_timecode` will be removed in v0.8. if base_timecode is not None: - logger.error('base_timecode is deprecated and has no effect.') + logger.error("base_timecode is deprecated and has no effect.") if not (force_save or self.is_save_required()): logger.info("No metrics to write.") @@ -179,11 +186,11 @@ def save_to_csv(self, # If we get a path instead of an open file handle, recursively call ourselves # again but with file handle instead of path. 
         if isinstance(csv_file, (str, bytes)):
-            with open(csv_file, 'w') as file:
+            with open(csv_file, "w") as file:
                 self.save_to_csv(csv_file=file, force_save=force_save)
             return

-        csv_writer = csv.writer(csv_file, lineterminator='\n')
+        csv_writer = csv.writer(csv_file, lineterminator="\n")
         metric_keys = sorted(list(self._metric_keys))
         csv_writer.writerow([COLUMN_NAME_FRAME_NUMBER, COLUMN_NAME_TIMECODE] + metric_keys)
         frame_keys = sorted(self._frame_metrics.keys())
@@ -191,9 +198,9 @@ def save_to_csv(self,
         for frame_key in frame_keys:
             frame_timecode = self._base_timecode + frame_key
             csv_writer.writerow(
-                [frame_timecode.get_frames() +
-                 1, frame_timecode.get_timecode()] +
-                [str(metric) for metric in self.get_metrics(frame_key, metric_keys)])
+                [frame_timecode.get_frames() + 1, frame_timecode.get_timecode()]
+                + [str(metric) for metric in self.get_metrics(frame_key, metric_keys)]
+            )

     @staticmethod
     def valid_header(row: List[str]) -> bool:
@@ -237,13 +244,13 @@ def load_from_csv(self, csv_file: Union[str, bytes, TextIO]) -> Optional[int]:
         # recursively call ourselves again but with file set instead of path.
         if isinstance(csv_file, (str, bytes)):
             if os.path.exists(csv_file):
-                with open(csv_file, 'r') as file:
+                with open(csv_file) as file:
                     return self.load_from_csv(csv_file=file)
             # Path doesn't exist.
             return None
         # If we get here, file is a valid file handle in read-only text mode.
-        csv_reader = csv.reader(csv_file, lineterminator='\n')
+        csv_reader = csv.reader(csv_file, lineterminator="\n")
         num_cols = None
         num_metrics = None
         num_frames = None
@@ -262,28 +269,29 @@ def load_from_csv(self, csv_file: Union[str, bytes, TextIO]) -> Optional[int]:
                 num_cols = len(row)
                 num_metrics = num_cols - 2
                 if not num_metrics > 0:
-                    raise StatsFileCorrupt('No metrics defined in CSV file.')
+                    raise StatsFileCorrupt("No metrics defined in CSV file.")
                 loaded_metrics = list(row[2:])
                 num_frames = 0
         for row in csv_reader:
             metric_dict = {}
             if not len(row) == num_cols:
-                raise StatsFileCorrupt('Wrong number of columns detected in stats file row.')
+                raise StatsFileCorrupt("Wrong number of columns detected in stats file row.")
             frame_number = int(row[0])
             # Switch from 1-based to 0-based frame numbers.
             if frame_number > 0:
                 frame_number -= 1
             self.set_metrics(frame_number, metric_dict)
             for i, metric in enumerate(row[2:]):
-                if metric and metric != 'None':
+                if metric and metric != "None":
                     try:
                         self._set_metric(frame_number, loaded_metrics[i], float(metric))
                     except ValueError:
-                        raise StatsFileCorrupt('Corrupted value in stats file: %s' %
-                                               metric) from ValueError
+                        raise StatsFileCorrupt(
+                            "Corrupted value in stats file: %s" % metric
+                        ) from ValueError
             num_frames += 1
         self._metric_keys = self._metric_keys.union(set(loaded_metrics))
-        logger.info('Loaded %d metrics for %d frames.', num_metrics, num_frames)
+        logger.info("Loaded %d metrics for %d frames.", num_metrics, num_frames)
         self._metrics_updated = False
         return num_frames
@@ -296,10 +304,11 @@ def _get_metric(self, frame_number: int, metric_key: str) -> Optional[Any]:

     def _set_metric(self, frame_number: int, metric_key: str, metric_value: Any) -> None:
         self._metrics_updated = True
-        if not frame_number in self._frame_metrics:
+        if frame_number not in self._frame_metrics:
             self._frame_metrics[frame_number] = dict()
         self._frame_metrics[frame_number][metric_key] = metric_value

     def _metric_exists(self, frame_number: int, metric_key: str) -> bool:
-        return (frame_number in self._frame_metrics
-                and metric_key in self._frame_metrics[frame_number])
+        return (
+            frame_number in self._frame_metrics and metric_key in self._frame_metrics[frame_number]
+        )
diff --git a/scenedetect/video_manager.py b/scenedetect/video_manager.py
index a927bc95..ab09c8a5 100644
--- a/scenedetect/video_manager.py
+++ b/scenedetect/video_manager.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 #         PySceneDetect: Python-Based Video Scene Detector
 #   -------------------------------------------------------------------
@@ -19,18 +18,18 @@
 in a future release.
 """

-import os
 import math
+import os
 from logging import getLogger
-
 from typing import Iterable, List, Optional, Tuple, Union
-import numpy as np
+
 import cv2
+import numpy as np

-from scenedetect.platform import get_file_name
-from scenedetect.frame_timecode import FrameTimecode, MAX_FPS_DELTA
-from scenedetect.video_stream import VideoStream, VideoOpenFailure, FrameRateUnavailable
 from scenedetect.backends.opencv import _get_aspect_ratio
+from scenedetect.frame_timecode import MAX_FPS_DELTA, FrameTimecode
+from scenedetect.platform import get_file_name
+from scenedetect.video_stream import FrameRateUnavailable, VideoOpenFailure, VideoStream

 ##
 ## VideoManager Exceptions
@@ -38,12 +37,12 @@


 class VideoParameterMismatch(Exception):
-    """ VideoParameterMismatch: Raised when opening multiple videos with a VideoManager, and some
-    of the video parameters (frame height, frame width, and framerate/FPS) do not match. """
+    """VideoParameterMismatch: Raised when opening multiple videos with a VideoManager, and some
+    of the video parameters (frame height, frame width, and framerate/FPS) do not match."""

-    def __init__(self,
-                 file_list=None,
-                 message="OpenCV VideoCapture object parameters do not match."):
+    def __init__(
+        self, file_list=None, message="OpenCV VideoCapture object parameters do not match."
+    ):
         # type: (Iterable[Tuple[int, float, float, str, str]], str) -> None
         # Pass message string to base Exception class.
         super(VideoParameterMismatch, self).__init__(message)
@@ -54,13 +53,13 @@ def __init__(self,


 class VideoDecodingInProgress(RuntimeError):
-    """ VideoDecodingInProgress: Raised when attempting to call certain VideoManager methods that
-    must be called *before* start() has been called. """
+    """VideoDecodingInProgress: Raised when attempting to call certain VideoManager methods that
+    must be called *before* start() has been called."""


 class InvalidDownscaleFactor(ValueError):
-    """ InvalidDownscaleFactor: Raised when trying to set invalid downscale factor,
-    i.e. the supplied downscale factor was not a positive integer greater than zero. """
+    """InvalidDownscaleFactor: Raised when trying to set invalid downscale factor,
+    i.e. the supplied downscale factor was not a positive integer greater than zero."""


 ##
@@ -75,12 +74,12 @@ def get_video_name(video_file: str) -> Tuple[str, str]:
         Tuple of the form [name, video_file].
     """
     if isinstance(video_file, int):
-        return ('Device %d' % video_file, video_file)
+        return ("Device %d" % video_file, video_file)
     return (os.path.split(video_file)[1], video_file)


 def get_num_frames(cap_list: Iterable[cv2.VideoCapture]) -> int:
-    """ Get Number of Frames: Returns total number of frames in the cap_list.
+    """Get Number of Frames: Returns total number of frames in the cap_list.

     Calls get(CAP_PROP_FRAME_COUNT) and returns the sum for all VideoCaptures.
""" @@ -92,7 +91,7 @@ def open_captures( framerate: Optional[float] = None, validate_parameters: bool = True, ) -> Tuple[List[cv2.VideoCapture], float, Tuple[int, int]]: - """ Open Captures - helper function to open all capture objects, set the framerate, + """Open Captures - helper function to open all capture objects, set the framerate, and ensure that all open captures have been opened and the framerates match on a list of video file paths, or a list containing a single device ID. @@ -139,12 +138,14 @@ def open_captures( raise TypeError("Expected type float for parameter framerate.") # Check if files exist if passed video file is not an image sequence # (checked with presence of % in filename) or not a URL (://). - if not is_device and any([ + if not is_device and any( + [ not os.path.exists(video_file) for video_file in video_files - if not ('%' in video_file or '://' in video_file) - ]): - raise IOError("Video file(s) not found.") + if not ("%" in video_file or "://" in video_file) + ] + ): + raise OSError("Video file(s) not found.") cap_list = [] try: @@ -155,11 +156,17 @@ def open_captures( raise VideoOpenFailure(str(closed_caps)) cap_framerates = [cap.get(cv2.CAP_PROP_FPS) for cap in cap_list] - cap_framerate, check_framerate = validate_capture_framerate(video_names, cap_framerates, - framerate) + cap_framerate, check_framerate = validate_capture_framerate( + video_names, cap_framerates, framerate + ) # Store frame sizes as integers (VideoCapture.get() returns float). 
-        cap_frame_sizes = [(math.trunc(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
-                            math.trunc(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))) for cap in cap_list]
+        cap_frame_sizes = [
+            (
+                math.trunc(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
+                math.trunc(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)),
+            )
+            for cap in cap_list
+        ]
         cap_frame_size = cap_frame_sizes[0]

         # If we need to validate the parameters, we check that the FPS and width/height
@@ -169,7 +176,8 @@ def open_captures(
                 video_names=video_names,
                 cap_frame_sizes=cap_frame_sizes,
                 check_framerate=check_framerate,
-                cap_framerates=cap_framerates)
+                cap_framerates=cap_framerates,
+            )

     except:
         for cap in cap_list:
@@ -203,9 +211,11 @@ def validate_capture_framerate(
         else:
             raise TypeError("Expected float for framerate, got %s." % type(framerate).__name__)
     else:
-        unavailable_framerates = [(video_names[i][0], video_names[i][1])
-                                  for i, fps in enumerate(cap_framerates)
-                                  if fps < MAX_FPS_DELTA]
+        unavailable_framerates = [
+            (video_names[i][0], video_names[i][1])
+            for i, fps in enumerate(cap_framerates)
+            if fps < MAX_FPS_DELTA
+        ]
         if unavailable_framerates:
             raise FrameRateUnavailable()
     return (cap_framerate, check_framerate)
@@ -217,7 +227,7 @@ def validate_capture_parameters(
     check_framerate: bool = False,
     cap_framerates: Optional[List[float]] = None,
 ) -> None:
-    """ Validate Capture Parameters: Ensures that all passed capture frame sizes and (optionally)
+    """Validate Capture Parameters: Ensures that all passed capture frame sizes and (optionally)
     framerates are equal. Raises VideoParameterMismatch if there is a mismatch.

     Raises:
@@ -226,20 +236,35 @@ def validate_capture_parameters(
     bad_params = []
     max_framerate_delta = MAX_FPS_DELTA
     # Check heights/widths match.
-    bad_params += [(cv2.CAP_PROP_FRAME_WIDTH, frame_size[0], cap_frame_sizes[0][0],
-                    video_names[i][0], video_names[i][1])
-                   for i, frame_size in enumerate(cap_frame_sizes)
-                   if abs(frame_size[0] - cap_frame_sizes[0][0]) > 0]
-    bad_params += [(cv2.CAP_PROP_FRAME_HEIGHT, frame_size[1], cap_frame_sizes[0][1],
-                    video_names[i][0], video_names[i][1])
-                   for i, frame_size in enumerate(cap_frame_sizes)
-                   if abs(frame_size[1] - cap_frame_sizes[0][1]) > 0]
+    bad_params += [
+        (
+            cv2.CAP_PROP_FRAME_WIDTH,
+            frame_size[0],
+            cap_frame_sizes[0][0],
+            video_names[i][0],
+            video_names[i][1],
+        )
+        for i, frame_size in enumerate(cap_frame_sizes)
+        if abs(frame_size[0] - cap_frame_sizes[0][0]) > 0
+    ]
+    bad_params += [
+        (
+            cv2.CAP_PROP_FRAME_HEIGHT,
+            frame_size[1],
+            cap_frame_sizes[0][1],
+            video_names[i][0],
+            video_names[i][1],
+        )
+        for i, frame_size in enumerate(cap_frame_sizes)
+        if abs(frame_size[1] - cap_frame_sizes[0][1]) > 0
+    ]
     # Check framerates if required.
     if check_framerate:
-        bad_params += [(cv2.CAP_PROP_FPS, fps, cap_framerates[0], video_names[i][0],
-                        video_names[i][1])
-                       for i, fps in enumerate(cap_framerates)
-                       if math.fabs(fps - cap_framerates[0]) > max_framerate_delta]
+        bad_params += [
+            (cv2.CAP_PROP_FPS, fps, cap_framerates[0], video_names[i][0], video_names[i][1])
+            for i, fps in enumerate(cap_framerates)
+            if math.fabs(fps - cap_framerates[0]) > max_framerate_delta
+        ]

     if bad_params:
         raise VideoParameterMismatch(bad_params)
@@ -256,12 +281,14 @@ class VideoManager(VideoStream):
     Provides a cv2.VideoCapture-like interface to a set of one or more video files,
     or a single device ID.
     Supports seeking and setting end time/duration."""

-    BACKEND_NAME = 'video_manager_do_not_use'
+    BACKEND_NAME = "video_manager_do_not_use"

-    def __init__(self,
-                 video_files: List[str],
-                 framerate: Optional[float] = None,
-                 logger=getLogger('pyscenedetect')):
+    def __init__(
+        self,
+        video_files: List[str],
+        framerate: Optional[float] = None,
+        logger=None,
+    ):
         """[DEPRECATED] DO NOT USE.

         Arguments:
@@ -283,6 +310,8 @@ def __init__(self,
         """
         # TODO(v0.7): Add DeprecationWarning that this class will be removed in v0.8: 'VideoManager
         # will be removed in PySceneDetect v0.8. Use VideoStreamCv2 or VideoCaptureAdapter instead.'
+        if logger is None:
+            logger = getLogger("pyscenedetect")
         logger.error("VideoManager is deprecated and will be removed.")
         if not video_files:
             raise ValueError("At least one string/integer must be passed in the video_files list.")
@@ -292,7 +321,8 @@ def __init__(self,
         # These VideoCaptures are only open in this process.
         self._is_device = isinstance(video_files[0], int)
         self._cap_list, self._cap_framerate, self._cap_framesize = open_captures(
-            video_files=video_files, framerate=framerate)
+            video_files=video_files, framerate=framerate
+        )
         self._path = video_files[0] if not self._is_device else video_files
         self._end_of_video = False
         self._start_time = self.get_base_timecode()
@@ -303,9 +333,13 @@ def __init__(self,
         self._video_file_paths = video_files
         self._logger = logger
         if self._logger is not None:
-            self._logger.info('Loaded %d video%s, framerate: %.3f FPS, resolution: %d x %d',
-                              len(self._cap_list), 's' if len(self._cap_list) > 1 else '',
-                              self.get_framerate(), *self.get_framesize())
+            self._logger.info(
+                "Loaded %d video%s, framerate: %.3f FPS, resolution: %d x %d",
+                len(self._cap_list),
+                "s" if len(self._cap_list) > 1 else "",
+                self.get_framerate(),
+                *self.get_framesize(),
+            )
         self._started = False
         self._frame_length = self.get_base_timecode() + get_num_frames(self._cap_list)
         self._first_cap_len = self.get_base_timecode() + get_num_frames([self._cap_list[0]])
@@ -340,10 +374,10 @@ def get_video_name(self) -> str:
         """
         video_paths = self.get_video_paths()
         if not video_paths:
-            return ''
+            return ""
         video_name = os.path.basename(video_paths[0])
-        if video_name.rfind('.') >= 0:
-            video_name = video_name[:video_name.rfind('.')]
+        if video_name.rfind(".") >= 0:
+            video_name = video_name[: video_name.rfind(".")]
         return video_name

     def get_framerate(self) -> float:
@@ -380,7 +414,7 @@ def get_base_timecode(self) -> FrameTimecode:
         return FrameTimecode(timecode=0, fps=self._cap_framerate)

     def get_current_timecode(self) -> FrameTimecode:
-        """ Get Current Timecode - returns a FrameTimecode object at current VideoManager position.
+        """Get Current Timecode - returns a FrameTimecode object at current VideoManager position.

         Returns:
             Timecode at the current VideoManager position.
@@ -396,7 +430,7 @@ def get_framesize(self) -> Tuple[int, int]:
         return self._cap_framesize

     def get_framesize_effective(self) -> Tuple[int, int]:
-        """ Get Frame Size - returns the frame size of the video(s) open in the
+        """Get Frame Size - returns the frame size of the video(s) open in the
         VideoManager's capture objects.

         Returns:
@@ -404,11 +438,13 @@ def get_framesize_effective(self) -> Tuple[int, int]:
         """
         return self._cap_framesize

-    def set_duration(self,
-                     duration: Optional[FrameTimecode] = None,
-                     start_time: Optional[FrameTimecode] = None,
-                     end_time: Optional[FrameTimecode] = None) -> None:
-        """ Set Duration - sets the duration/length of the video(s) to decode, as well as
+    def set_duration(
+        self,
+        duration: Optional[FrameTimecode] = None,
+        start_time: Optional[FrameTimecode] = None,
+        end_time: Optional[FrameTimecode] = None,
+    ) -> None:
+        """Set Duration - sets the duration/length of the video(s) to decode, as well as
         the start/end times.  Must be called before :meth:`start()` is called, otherwise
         a VideoDecodingInProgress exception will be thrown.  May be called after
         :meth:`reset()` as well.
@@ -432,9 +468,11 @@ def set_duration(self,
             raise VideoDecodingInProgress()

         # Ensure any passed timecodes have the proper framerate.
-        if ((duration is not None and not duration.equal_framerate(self._cap_framerate))
-                or (start_time is not None and not start_time.equal_framerate(self._cap_framerate))
-                or (end_time is not None and not end_time.equal_framerate(self._cap_framerate))):
+        if (
+            (duration is not None and not duration.equal_framerate(self._cap_framerate))
+            or (start_time is not None and not start_time.equal_framerate(self._cap_framerate))
+            or (end_time is not None and not end_time.equal_framerate(self._cap_framerate))
+        ):
             raise ValueError("FrameTimecode framerate does not match.")

         if duration is not None and end_time is not None:
@@ -455,13 +493,15 @@ def set_duration(self,
             self._frame_length -= self._start_time

         if self._logger is not None:
-            self._logger.info('Duration set, start: %s, duration: %s, end: %s.',
-                              start_time.get_timecode() if start_time is not None else start_time,
-                              duration.get_timecode() if duration is not None else duration,
-                              end_time.get_timecode() if end_time is not None else end_time)
+            self._logger.info(
+                "Duration set, start: %s, duration: %s, end: %s.",
+                start_time.get_timecode() if start_time is not None else start_time,
+                duration.get_timecode() if duration is not None else duration,
+                end_time.get_timecode() if end_time is not None else end_time,
+            )

     def get_duration(self) -> FrameTimecode:
-        """ Get Duration - gets the duration/length of the video(s) to decode,
+        """Get Duration - gets the duration/length of the video(s) to decode,
         as well as the start/end times.

         If the end time was not set by :meth:`set_duration()`, the end timecode
@@ -477,7 +517,7 @@ def get_duration(self) -> FrameTimecode:
         return (self._frame_length, self._start_time, end_time)

     def start(self) -> None:
-        """ Start - starts video decoding and seeks to start time. Raises
+        """Start - starts video decoding and seeks to start time. Raises
         exception VideoDecodingInProgress if the method is called after the
         decoder process has already been started.

@@ -497,7 +537,6 @@ def start(self) -> None:
     # This overrides the seek method from the VideoStream interface, but the name was changed
     # from `timecode` to `target`. For compatibility, we allow calling seek with the form
     # seek(0), seek(timecode=0), and seek(target=0). Specifying both arguments is an error.
-    # pylint: disable=arguments-differ
     def seek(self, timecode: FrameTimecode = None, target: FrameTimecode = None) -> bool:
         """Seek forwards to the passed timecode.

@@ -516,9 +555,9 @@ def seek(self, timecode: FrameTimecode = None, target: FrameTimecode = None) ->
             ValueError: Either none or both `timecode` and `target` were set.
         """
         if timecode is None and target is None:
-            raise ValueError('`target` must be set.')
+            raise ValueError("`target` must be set.")
         if timecode is not None and target is not None:
-            raise ValueError('Only one of `timecode` or `target` can be set.')
+            raise ValueError("Only one of `timecode` or `target` can be set.")
         if target is not None:
             timecode = target
         assert timecode is not None
@@ -539,8 +578,8 @@ def seek(self, timecode: FrameTimecode = None, target: FrameTimecode = None) ->
             # TODO: This should throw an exception instead of potentially failing silently
             # if no logger was provided.
             if self._logger is not None:
-                self._logger.error('Seeking past the first input video is not currently supported.')
-                self._logger.warning('Seeking to end of first input.')
+                self._logger.error("Seeking past the first input video is not currently supported.")
+                self._logger.warning("Seeking to end of first input.")
             timecode = self._first_cap_len
         if self._curr_cap is not None and self._end_of_video is not True:
             self._curr_cap.set(cv2.CAP_PROP_POS_FRAMES, timecode.get_frames() - 1)
@@ -551,17 +590,15 @@ def seek(self, timecode: FrameTimecode = None, target: FrameTimecode = None) ->
                 return False
         return True

-    # pylint: enable=arguments-differ
-
     def release(self) -> None:
-        """ Release (cv2.VideoCapture method), releases all open capture(s). """
+        """Release (cv2.VideoCapture method), releases all open capture(s)."""
         for cap in self._cap_list:
             cap.release()
         self._cap_list = []
         self._started = False

     def reset(self) -> None:
-        """ Reset - Reopens captures passed to the constructor of the VideoManager.
+        """Reset - Reopens captures passed to the constructor of the VideoManager.

         Can only be called after the :meth:`release()` method has been called.

@@ -575,11 +612,12 @@ def reset(self) -> None:
         self._end_of_video = False
         self._curr_time = self.get_base_timecode()
         self._cap_list, self._cap_framerate, self._cap_framesize = open_captures(
-            video_files=self._video_file_paths, framerate=self._curr_time.get_framerate())
+            video_files=self._video_file_paths, framerate=self._curr_time.get_framerate()
+        )
         self._curr_cap, self._curr_cap_idx = None, None

     def get(self, capture_prop: int, index: Optional[int] = None) -> Union[float, int]:
-        """ Get (cv2.VideoCapture method) - obtains capture properties from the current
+        """Get (cv2.VideoCapture method) - obtains capture properties from the current
         VideoCapture object in use.  Index represents the same index as the original
         video_files list passed to the constructor.
        Getting/setting the position (POS) properties has no effect;
        seeking is implemented using VideoDecoder methods.
@@ -607,7 +645,7 @@ def get(self, capture_prop: int, index: Optional[int] = None) -> Union[float, in
         return self._cap_list[index].get(capture_prop)

     def grab(self) -> bool:
-        """ Grab (cv2.VideoCapture method) - retrieves a frame but does not return it.
+        """Grab (cv2.VideoCapture method) - retrieves a frame but does not return it.

         Returns:
             bool: True if a frame was grabbed, False otherwise.
@@ -631,7 +669,7 @@ def grab(self) -> bool:
         return grabbed

     def retrieve(self) -> Tuple[bool, Optional[np.ndarray]]:
-        """ Retrieve (cv2.VideoCapture method) - retrieves and returns a frame.
+        """Retrieve (cv2.VideoCapture method) - retrieves and returns a frame.

         Frame returned corresponds to last call to :meth:`grab()`.

@@ -654,7 +692,7 @@ def retrieve(self) -> Tuple[bool, Optional[np.ndarray]]:
         return (retrieved, self._last_frame)

     def read(self, decode: bool = True, advance: bool = True) -> Union[np.ndarray, bool]:
-        """ Return next frame (or current if advance = False), or False if end of video.
+        """Return next frame (or current if advance = False), or False if end of video.

         Arguments:
             decode: Decode and return the frame.
@@ -690,7 +728,7 @@ def _get_next_cap(self) -> bool:
             return True

     def _correct_frame_length(self) -> None:
-        """ Checks if the current frame position exceeds that originally calculated,
+        """Checks if the current frame position exceeds that originally calculated,
         and adjusts the internally calculated frame length accordingly.  Called after
         exhausting all input frames from the video source(s).
 """
@@ -749,8 +787,10 @@ def frame_rate(self) -> float:
     @property
     def frame_size(self) -> Tuple[int, int]:
         """Size of each video frame in pixels as a tuple of (width, height)."""
-        return (math.trunc(self._cap_list[0].get(cv2.CAP_PROP_FRAME_WIDTH)),
-                math.trunc(self._cap_list[0].get(cv2.CAP_PROP_FRAME_HEIGHT)))
+        return (
+            math.trunc(self._cap_list[0].get(cv2.CAP_PROP_FRAME_WIDTH)),
+            math.trunc(self._cap_list[0].get(cv2.CAP_PROP_FRAME_HEIGHT)),
+        )

     @property
     def is_seekable(self) -> bool:
diff --git a/scenedetect/video_splitter.py b/scenedetect/video_splitter.py
index a4bce715..8b41834d 100644
--- a/scenedetect/video_splitter.py
+++ b/scenedetect/video_splitter.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -33,18 +32,18 @@
 available on the computer, depending on the specified command-line options.
 """

-from dataclasses import dataclass
 import logging
 import math
-from pathlib import Path
 import subprocess
 import time
 import typing as ty
+from dataclasses import dataclass
+from pathlib import Path

-from scenedetect.platform import (tqdm, invoke_command, CommandTooLong, get_ffmpeg_path, Template)
 from scenedetect.frame_timecode import FrameTimecode
+from scenedetect.platform import CommandTooLong, Template, get_ffmpeg_path, invoke_command, tqdm

-logger = logging.getLogger('pyscenedetect')
+logger = logging.getLogger("pyscenedetect")

 TimecodePair = ty.Tuple[FrameTimecode, FrameTimecode]
 """Named type for pairs of timecodes, which typically represents the start/end of a scene."""
@@ -62,7 +61,8 @@
 """Relative path to the ffmpeg binary on this system, if any (will be None if not available)."""

 DEFAULT_FFMPEG_ARGS = (
-    "-map 0:v:0 -map 0:a? -map 0:s? -c:v libx264 -preset veryfast -crf 22 -c:a aac")
+    "-map 0:v:0 -map 0:a? -map 0:s? -c:v libx264 -preset veryfast -crf 22 -c:a aac"
+)
 """Default arguments passed to ffmpeg when invoking the `split_video_ffmpeg` function."""

 ##
@@ -71,14 +71,14 @@


 def is_mkvmerge_available() -> bool:
-    """ Is mkvmerge Available: Gracefully checks if mkvmerge command is available.
+    """Is mkvmerge Available: Gracefully checks if mkvmerge command is available.

     Returns:
         True if `mkvmerge` can be invoked, False otherwise.
     """
     ret_val = None
     try:
-        ret_val = subprocess.call(['mkvmerge', '--quiet'])
+        ret_val = subprocess.call(["mkvmerge", "--quiet"])
     except OSError:
         return False
     if ret_val is not None and ret_val != 2:
@@ -87,7 +87,7 @@ def is_mkvmerge_available() -> bool:


 def is_ffmpeg_available() -> bool:
-    """ Is ffmpeg Available: Gracefully checks if ffmpeg command is available.
+    """Is ffmpeg Available: Gracefully checks if ffmpeg command is available.

     Returns:
         True if `ffmpeg` can be invoked, False otherwise.
@@ -103,6 +103,7 @@ def is_ffmpeg_available() -> bool:
 @dataclass
 class VideoMetadata:
     """Information about the video being split."""
+
     name: str
     """Expected name of the video. May differ from `path`."""
     path: Path
@@ -114,6 +115,7 @@ class VideoMetadata:
 @dataclass
 class SceneMetadata:
     """Information about the scene being extracted."""
+
     index: int
     """0-based index of this scene."""
     start: FrameTimecode
@@ -128,20 +130,21 @@ class SceneMetadata:
 def default_formatter(template: str) -> PathFormatter:
     """Formats filenames using a template string which allows the following variables:
-        `$VIDEO_NAME`, `$SCENE_NUMBER`, `$START_TIME`, `$END_TIME`, `$START_FRAME`, `$END_FRAME`
+    `$VIDEO_NAME`, `$SCENE_NUMBER`, `$START_TIME`, `$END_TIME`, `$START_FRAME`, `$END_FRAME`
     """
     MIN_DIGITS = 3
     format_scene_number: PathFormatter = lambda video, scene: (
-        ('%0' + str(max(MIN_DIGITS,
-                        math.floor(math.log(video.total_scenes, 10)) + 1)) + 'd') %
-        (scene.index + 1))
+        ("%0" + str(max(MIN_DIGITS, math.floor(math.log(video.total_scenes, 10)) + 1)) + "d")
+        % (scene.index + 1)
+    )
     formatter: PathFormatter = lambda video, scene: Template(template).safe_substitute(
         VIDEO_NAME=video.name,
         SCENE_NUMBER=format_scene_number(video, scene),
         START_TIME=str(scene.start.get_timecode().replace(":", ";")),
         END_TIME=str(scene.end.get_timecode().replace(":", ";")),
         START_FRAME=str(scene.start.get_frames()),
-        END_FRAME=str(scene.end.get_frames()))
+        END_FRAME=str(scene.end.get_frames()),
+    )
     return formatter
@@ -154,12 +157,12 @@ def split_video_mkvmerge(
     input_video_path: str,
     scene_list: ty.Iterable[TimecodePair],
     output_dir: ty.Optional[Path] = None,
-    output_file_template: str = '$VIDEO_NAME.mkv',
+    output_file_template: str = "$VIDEO_NAME.mkv",
     video_name: ty.Optional[str] = None,
     show_output: bool = False,
     suppress_output=None,
 ) -> int:
-    """ Calls the mkvmerge command on the input video, splitting it at the
+    """Calls the mkvmerge command on the input video, splitting it at the
     passed timecodes, where each scene is written in sequence from 001.

     Arguments:
@@ -179,19 +182,20 @@ def split_video_mkvmerge(
     """
     # Handle backwards compatibility with v0.5 API.
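[Editorial aside: the filename-template behaviour exercised by `default_formatter` above can be sketched with the stdlib `string.Template`; the library's `scenedetect.platform.Template` behaves the same for this case, and the values below are hypothetical stand-ins for the metadata fields.]

```python
from string import Template

# Hypothetical values standing in for VideoMetadata.name and the padded scene number.
template = Template("$VIDEO_NAME-Scene-$SCENE_NUMBER.mp4")
filename = template.safe_substitute(VIDEO_NAME="goldeneye", SCENE_NUMBER="001")
print(filename)
```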
     if isinstance(input_video_path, list):
-        logger.error('Using a list of paths is deprecated. Pass a single path instead.')
+        logger.error("Using a list of paths is deprecated. Pass a single path instead.")
         if len(input_video_path) > 1:
-            raise ValueError('Concatenating multiple input videos is not supported.')
+            raise ValueError("Concatenating multiple input videos is not supported.")
         input_video_path = input_video_path[0]
     if suppress_output is not None:
-        logger.error('suppress_output is deprecated, use show_output instead.')
+        logger.error("suppress_output is deprecated, use show_output instead.")
         show_output = not suppress_output

     if not scene_list:
         return 0

-    logger.info('Splitting input video using mkvmerge, output path template:\n %s',
-                output_file_template)
+    logger.info(
+        "Splitting input video using mkvmerge, output path template:\n %s", output_file_template
+    )

     if video_name is None:
         video_name = Path(input_video_path).stem
@@ -207,31 +211,40 @@ def split_video_mkvmerge(
     output_path.parent.mkdir(parents=True, exist_ok=True)

     try:
-        call_list = ['mkvmerge']
+        call_list = ["mkvmerge"]
         if not show_output:
-            call_list.append('--quiet')
+            call_list.append("--quiet")
         call_list += [
-            '-o',
-            str(output_path), '--split',
-            'parts:%s' % ','.join([
-                '%s-%s' % (start_time.get_timecode(), end_time.get_timecode())
-                for start_time, end_time in scene_list
-            ]), input_video_path
+            "-o",
+            str(output_path),
+            "--split",
+            "parts:%s"
+            % ",".join(
+                [
+                    "%s-%s" % (start_time.get_timecode(), end_time.get_timecode())
+                    for start_time, end_time in scene_list
+                ]
+            ),
+            input_video_path,
         ]
         total_frames = scene_list[-1][1].get_frames() - scene_list[0][0].get_frames()
         processing_start_time = time.time()
         # TODO: Capture stdout/stderr and show that if the command fails.
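[Editorial aside: the mkvmerge `--split parts:` argument assembled in the hunk above can be sketched with plain strings; the real code calls `FrameTimecode.get_timecode()` on each pair, and the timecodes below are made up for illustration.]

```python
# (start, end) timecode pairs for two hypothetical scenes.
scene_list = [("00:00:00.000", "00:00:05.000"), ("00:00:05.000", "00:00:12.500")]
# Each scene becomes "start-end"; scenes are comma-separated after the "parts:" prefix.
split_arg = "parts:%s" % ",".join("%s-%s" % (start, end) for start, end in scene_list)
print(split_arg)
```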
         ret_val = invoke_command(call_list)
         if show_output:
-            logger.info('Average processing speed %.2f frames/sec.',
-                        float(total_frames) / (time.time() - processing_start_time))
+            logger.info(
+                "Average processing speed %.2f frames/sec.",
+                float(total_frames) / (time.time() - processing_start_time),
+            )
     except CommandTooLong:
         logger.error(COMMAND_TOO_LONG_STRING)
     except OSError:
-        logger.error('mkvmerge could not be found on the system.'
-                     ' Please install mkvmerge to enable video output support.')
+        logger.error(
+            "mkvmerge could not be found on the system."
+            " Please install mkvmerge to enable video output support."
+        )
     if ret_val != 0:
-        logger.error('Error splitting video (mkvmerge returned %d).', ret_val)
+        logger.error("Error splitting video (mkvmerge returned %d).", ret_val)
     return ret_val
@@ -239,7 +252,7 @@ def split_video_ffmpeg(
     input_video_path: str,
     scene_list: ty.Iterable[TimecodePair],
     output_dir: ty.Optional[Path] = None,
-    output_file_template: str = '$VIDEO_NAME-Scene-$SCENE_NUMBER.mp4',
+    output_file_template: str = "$VIDEO_NAME-Scene-$SCENE_NUMBER.mp4",
     video_name: ty.Optional[str] = None,
     arg_override: str = DEFAULT_FFMPEG_ARGS,
     show_progress: bool = False,
@@ -248,7 +261,7 @@ def split_video_ffmpeg(
     hide_progress=None,
     formatter: ty.Optional[PathFormatter] = None,
 ) -> int:
-    """ Calls the ffmpeg command on the input video, generating a new video for
+    """Calls the ffmpeg command on the input video, generating a new video for
     each scene based on the start/end timecodes.

     Arguments:
@@ -274,22 +287,23 @@ def split_video_ffmpeg(
     """
     # Handle backwards compatibility with v0.5 API.
     if isinstance(input_video_path, list):
-        logger.error('Using a list of paths is deprecated. Pass a single path instead.')
+        logger.error("Using a list of paths is deprecated. Pass a single path instead.")
         if len(input_video_path) > 1:
-            raise ValueError('Concatenating multiple input videos is not supported.')
+            raise ValueError("Concatenating multiple input videos is not supported.")
         input_video_path = input_video_path[0]
     if suppress_output is not None:
-        logger.error('suppress_output is deprecated, use show_output instead.')
+        logger.error("suppress_output is deprecated, use show_output instead.")
         show_output = not suppress_output
     if hide_progress is not None:
-        logger.error('hide_progress is deprecated, use show_progress instead.')
+        logger.error("hide_progress is deprecated, use show_progress instead.")
         show_progress = not hide_progress

     if not scene_list:
         return 0

-    logger.info('Splitting input video using ffmpeg, output path template:\n %s',
-                output_file_template)
+    logger.info(
+        "Splitting input video using ffmpeg, output path template:\n %s", output_file_template
+    )

     if video_name is None:
         video_name = Path(input_video_path).stem
@@ -297,23 +311,24 @@ def split_video_ffmpeg(
     arg_override = arg_override.replace('\\"', '"')

     ret_val = 0
-    arg_override = arg_override.split(' ')
-    scene_num_format = '%0'
-    scene_num_format += str(max(3, math.floor(math.log(len(scene_list), 10)) + 1)) + 'd'
+    arg_override = arg_override.split(" ")
+    scene_num_format = "%0"
+    scene_num_format += str(max(3, math.floor(math.log(len(scene_list), 10)) + 1)) + "d"

     if formatter is None:
         formatter = default_formatter(output_file_template)
     video_metadata = VideoMetadata(
-        name=video_name, path=input_video_path, total_scenes=len(scene_list))
+        name=video_name, path=input_video_path, total_scenes=len(scene_list)
+    )

     try:
         progress_bar = None
         total_frames = scene_list[-1][1].get_frames() - scene_list[0][0].get_frames()
         if show_progress:
-            progress_bar = tqdm(total=total_frames, unit='frame', miniters=1, dynamic_ncols=True)
+            progress_bar = tqdm(total=total_frames, unit="frame", miniters=1, dynamic_ncols=True)
         processing_start_time = time.time()
         for i, (start_time, end_time) in enumerate(scene_list):
-            duration = (end_time - start_time)
+            duration = end_time - start_time
             scene_metadata = SceneMetadata(index=i, start=start_time, end=end_time)
             output_path = Path(formatter(scene=scene_metadata, video=video_metadata))
             if output_dir:
@@ -321,29 +336,35 @@ def split_video_ffmpeg(
             output_path.parent.mkdir(parents=True, exist_ok=True)

             # Gracefully handle case where FFMPEG_PATH might be unset.
-            call_list = [FFMPEG_PATH if FFMPEG_PATH is not None else 'ffmpeg']
+            call_list = [FFMPEG_PATH if FFMPEG_PATH is not None else "ffmpeg"]
             if not show_output:
-                call_list += ['-v', 'quiet']
+                call_list += ["-v", "quiet"]
             elif i > 0:
                 # Only show ffmpeg output for the first call, which will display any
                 # errors if it fails, and then break the loop. We only show error messages
                 # for the remaining calls.
-                call_list += ['-v', 'error']
+                call_list += ["-v", "error"]
             call_list += [
-                '-nostdin', '-y', '-ss',
-                str(start_time.get_seconds()), '-i', input_video_path, '-t',
-                str(duration.get_seconds())
+                "-nostdin",
+                "-y",
+                "-ss",
+                str(start_time.get_seconds()),
+                "-i",
+                input_video_path,
+                "-t",
+                str(duration.get_seconds()),
             ]
             call_list += arg_override
-            call_list += ['-sn']
+            call_list += ["-sn"]
             call_list += [str(output_path)]
             ret_val = invoke_command(call_list)
             if show_output and i == 0 and len(scene_list) > 1:
                 logger.info(
-                    'Output from ffmpeg for Scene 1 shown above, splitting remaining scenes...')
+                    "Output from ffmpeg for Scene 1 shown above, splitting remaining scenes..."
+                )
             if ret_val != 0:
                 # TODO: Capture stdout/stderr and display it on any failed calls.
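[Editorial aside: the `scene_num_format` computation reformatted above pads scene numbers to at least three digits, growing by one digit per power of ten once a video has 1000+ scenes; a minimal standalone sketch of that width calculation, assuming the same `max`/`log` expression as the diff.]

```python
import math

def scene_number_format(total_scenes: int) -> str:
    # At least 3 digits; one extra digit per power of ten above 999 scenes.
    return "%0" + str(max(3, math.floor(math.log(total_scenes, 10)) + 1)) + "d"

print(scene_number_format(42) % 7)    # pads to 3 digits
print(scene_number_format(2000) % 7)  # pads to 4 digits
```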
- logger.error('Error splitting video (ffmpeg returned %d).', ret_val) + logger.error("Error splitting video (ffmpeg returned %d).", ret_val) break if progress_bar: progress_bar.update(duration.get_frames()) @@ -351,12 +372,16 @@ def split_video_ffmpeg( if progress_bar: progress_bar.close() if show_output: - logger.info('Average processing speed %.2f frames/sec.', - float(total_frames) / (time.time() - processing_start_time)) + logger.info( + "Average processing speed %.2f frames/sec.", + float(total_frames) / (time.time() - processing_start_time), + ) except CommandTooLong: logger.error(COMMAND_TOO_LONG_STRING) except OSError: - logger.error('ffmpeg could not be found on the system.' - ' Please install ffmpeg to enable video output support.') + logger.error( + "ffmpeg could not be found on the system." + " Please install ffmpeg to enable video output support." + ) return ret_val diff --git a/scenedetect/video_stream.py b/scenedetect/video_stream.py index bfdcbbf0..8d188daf 100644 --- a/scenedetect/video_stream.py +++ b/scenedetect/video_stream.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- @@ -33,7 +32,7 @@ """ from abc import ABC, abstractmethod -from typing import Tuple, Optional, Union +from typing import Optional, Tuple, Union import numpy as np @@ -54,7 +53,6 @@ class SeekError(Exception): class VideoOpenFailure(Exception): """Raised by a backend if opening a video fails.""" - # pylint: disable=useless-super-delegation def __init__(self, message: str = "Unknown backend error."): """ Arguments: @@ -62,16 +60,16 @@ def __init__(self, message: str = "Unknown backend error."): """ super().__init__(message) - # pylint: enable=useless-super-delegation - class FrameRateUnavailable(VideoOpenFailure): """Exception instance to provide consistent error messaging across backends when the video frame rate is unavailable or cannot be calculated. 
Subclass of VideoOpenFailure.""" def __init__(self): - super().__init__('Unable to obtain video framerate! Specify `framerate` manually, or' - ' re-encode/re-mux the video and try again.') + super().__init__( + "Unable to obtain video framerate! Specify `framerate` manually, or" + " re-encode/re-mux the video and try again." + ) ## @@ -80,7 +78,7 @@ def __init__(self): class VideoStream(ABC): - """ Interface which all video backends must implement. """ + """Interface which all video backends must implement.""" # # Default Implementations @@ -192,7 +190,7 @@ def read(self, decode: bool = True, advance: bool = True) -> Union[np.ndarray, b @abstractmethod def reset(self) -> None: - """ Close and re-open the VideoStream (equivalent to seeking back to beginning). """ + """Close and re-open the VideoStream (equivalent to seeking back to beginning).""" raise NotImplementedError @abstractmethod diff --git a/setup.py b/setup.py index 2d8b2415..ec281380 100644 --- a/setup.py +++ b/setup.py @@ -1,5 +1,4 @@ #!/usr/bin/env python -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # --------------------------------------------------------------- @@ -9,7 +8,7 @@ # # Copyright (C) 2014-2024 Brandon Castellano . # -""" PySceneDetect setup.py - DEPRECATED. +"""PySceneDetect setup.py - DEPRECATED. Build using `python -m build` and installing the resulting .whl using `pip`. """ diff --git a/tests/__init__.py b/tests/__init__.py index 5a618310..981ec4b7 100644 --- a/tests/__init__.py +++ b/tests/__init__.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- @@ -10,7 +9,7 @@ # PySceneDetect is licensed under the BSD 3-Clause License; see the # included LICENSE file, or visit one of the above pages for details. 
 #
-""" PySceneDetect Unit Test Suite
+"""PySceneDetect Unit Test Suite

 To run all available tests run `pytest -v` from the parent
 directory (i.e. the root project folder of PySceneDetect containing
 the scenedetect/
diff --git a/tests/conftest.py b/tests/conftest.py
index f7e8a25a..6034456c 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -10,7 +9,7 @@
 # PySceneDetect is licensed under the BSD 3-Clause License; see the
 # included LICENSE file, or visit one of the above pages for details.
 #
-""" PySceneDetect Test Configuration
+"""PySceneDetect Test Configuration

 This file includes all pytest configuration for running PySceneDetect's tests.
@@ -27,9 +26,9 @@

 # TODO: Properly cleanup temporary files.

-from typing import AnyStr
 import logging
 import os
+from typing import AnyStr

 import pytest

@@ -39,19 +38,22 @@


 def check_exists(path: AnyStr) -> AnyStr:
-    """ Returns the absolute path to a (relative) path of a file that
+    """Returns the absolute path to a (relative) path of a file that
     should exist within the tests/ directory.

     Throws FileNotFoundError if the file could not be found.
     """
     if not os.path.exists(path):
-        raise FileNotFoundError("""
+        raise FileNotFoundError(
+            """
 Test video file (%s) must be present to run test case. This file can
 be obtained by running the following commands from the root of the repository:

 git fetch --depth=1 https://github.com/Breakthrough/PySceneDetect.git refs/heads/resources:refs/remotes/origin/resources
 git checkout refs/remotes/origin/resources -- tests/resources/
 git reset
-""" % path)
+"""
+            % path
+        )
     return path
@@ -73,6 +75,7 @@ def pytest_assertrepr_compare(op, left, right):
             "",
             *right.splitlines(),
         ]
+    return []


 #
@@ -84,11 +87,12 @@ def pytest_assertrepr_compare(op, left, right):
 def no_logs_gte_error(caplog):
     """Ensure no log messages with error severity or higher were reported during test execution."""
     # TODO: Remove exclusion for VideoManager module when removed from codebase.
-    EXCLUDED_MODULES = {'video_manager'}
+    EXCLUDED_MODULES = {"video_manager"}
     yield
     errors = [
-        record for record in caplog.get_records('call')
-        if record.levelno >= logging.ERROR and not record.module in EXCLUDED_MODULES
+        record
+        for record in caplog.get_records("call")
+        if record.levelno >= logging.ERROR and record.module not in EXCLUDED_MODULES
     ]
     assert not errors, "Test failed due to presence of one or more logs with ERROR severity."
diff --git a/tests/test_api.py b/tests/test_api.py
index 1ddb5596..07559253 100644
--- a/tests/test_api.py
+++ b/tests/test_api.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -18,65 +17,69 @@
 when calling `detect()` or `detect_scenes()`.
 """

-# pylint: disable=import-outside-toplevel, redefined-outer-name, unused-argument
-

 def test_api_detect(test_video_file: str):
     """Demonstrate usage of the `detect()` function to process a complete video."""
-    from scenedetect import detect, ContentDetector
+    from scenedetect import ContentDetector, detect
+
     scene_list = detect(test_video_file, ContentDetector())
     for i, scene in enumerate(scene_list):
-        print('Scene %d: %s - %s' % (i + 1, scene[0].get_timecode(), scene[1].get_timecode()))
+        print("Scene %d: %s - %s" % (i + 1, scene[0].get_timecode(), scene[1].get_timecode()))


 def test_api_detect_start_end_time(test_video_file: str):
     """Demonstrate usage of the `detect()` function to process a subset of a video."""
-    from scenedetect import detect, ContentDetector
+    from scenedetect import ContentDetector, detect
+
     # Times can be seconds (float), frames (int), or timecode 'HH:MM:SSS.nnn' (str).
     # See test_api_timecode_types() for examples of each format.
     scene_list = detect(test_video_file, ContentDetector(), start_time=10.5, end_time=15.9)
     for i, scene in enumerate(scene_list):
-        print('Scene %d: %s - %s' % (i + 1, scene[0].get_timecode(), scene[1].get_timecode()))
+        print("Scene %d: %s - %s" % (i + 1, scene[0].get_timecode(), scene[1].get_timecode()))


 def test_api_detect_stats(test_video_file: str):
     """Demonstrate usage of the `detect()` function to generate a statsfile."""
-    from scenedetect import detect, ContentDetector
+    from scenedetect import ContentDetector, detect
+
     detect(test_video_file, ContentDetector(), stats_file_path="frame_metrics.csv")


 def test_api_scene_manager(test_video_file: str):
     """Demonstrate how to use a SceneManager to implement a function similar to `detect()`."""
-    from scenedetect import SceneManager, ContentDetector, open_video
+    from scenedetect import ContentDetector, SceneManager, open_video
+
     video = open_video(test_video_file)
     scene_manager = SceneManager()
     scene_manager.add_detector(ContentDetector())
     scene_manager.detect_scenes(video=video)
     scene_list = scene_manager.get_scene_list()
     for i, scene in enumerate(scene_list):
-        print('Scene %d: %s - %s' % (i + 1, scene[0].get_timecode(), scene[1].get_timecode()))
+        print("Scene %d: %s - %s" % (i + 1, scene[0].get_timecode(), scene[1].get_timecode()))


 def test_api_scene_manager_start_end_time(test_video_file: str):
     """Demonstrate how to use a SceneManager to process a subset of the input video."""
-    from scenedetect import SceneManager, ContentDetector, open_video
+    from scenedetect import ContentDetector, SceneManager, open_video
+
     video = open_video(test_video_file)
     scene_manager = SceneManager()
     scene_manager.add_detector(ContentDetector())
     # Times can be seconds (float), frames (int), or timecode 'HH:MM:SSS.nnn' (str).
     # See test_api_timecode_types() for examples of each format.
-    start_time = 200 # Start at frame (int) 200
+    start_time = 200  # Start at frame (int) 200
     end_time = 15.0  # End at 15 seconds (float)
     video.seek(start_time)
     scene_manager.detect_scenes(video=video, end_time=end_time)
     scene_list = scene_manager.get_scene_list()
     for i, scene in enumerate(scene_list):
-        print('Scene %d: %s - %s' % (i + 1, scene[0].get_timecode(), scene[1].get_timecode()))
+        print("Scene %d: %s - %s" % (i + 1, scene[0].get_timecode(), scene[1].get_timecode()))


 def test_api_timecode_types():
     """Demonstrate all different types of timecodes that can be used."""
     from scenedetect import FrameTimecode
+
     base_timecode = FrameTimecode(timecode=0, fps=10.0)
     # Frames (int)
     timecode = base_timecode + 1
@@ -85,29 +88,31 @@ def test_api_timecode_types():
     timecode = base_timecode + 1.0
     assert timecode.get_frames() == 10
     # Timecode (str, 'HH:MM:SS' or 'HH:MM:SSS.nnn')
-    timecode = base_timecode + '00:00:01.500'
+    timecode = base_timecode + "00:00:01.500"
     assert timecode.get_frames() == 15
     # Seconds (str, 'SSSs' or 'SSSS.SSSs')
-    timecode = base_timecode + '1.5s'
+    timecode = base_timecode + "1.5s"
     assert timecode.get_frames() == 15


 def test_api_stats_manager(test_video_file: str):
     """Demonstrate using a StatsManager to save per-frame statistics to disk."""
-    from scenedetect import SceneManager, StatsManager, ContentDetector, open_video
+    from scenedetect import ContentDetector, SceneManager, StatsManager, open_video
+
     video = open_video(test_video_file)
     scene_manager = SceneManager(stats_manager=StatsManager())
     scene_manager.add_detector(ContentDetector())
     scene_manager.detect_scenes(video=video)
     # Save per-frame statistics to disk.
-    filename = '%s.stats.csv' % test_video_file
+    filename = "%s.stats.csv" % test_video_file
     scene_manager.stats_manager.save_to_csv(csv_file=filename)


 def test_api_scene_manager_callback(test_video_file: str):
     """Demonstrate how to use a callback with the SceneManager detect_scenes method."""
     import numpy
-    from scenedetect import SceneManager, ContentDetector, open_video
+
+    from scenedetect import ContentDetector, SceneManager, open_video

     # Callback to invoke on the first frame of every new scene detection.
     def on_new_scene(frame_img: numpy.ndarray, frame_num: int):
@@ -125,7 +130,8 @@ def test_api_device_callback(test_video_file: str):
     wrapping it with a `VideoCaptureAdapter.`"""
     import cv2
     import numpy
-    from scenedetect import SceneManager, ContentDetector, VideoCaptureAdapter
+
+    from scenedetect import ContentDetector, SceneManager, VideoCaptureAdapter

     # Callback to invoke on the first frame of every new scene detection.
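[Editorial aside: the timecode formats exercised by `test_api_timecode_types` above can be summarized without the library; this sketch assumes a fixed 10 fps clock and simple rounding, whereas the real `FrameTimecode` supports more formats and its own rounding rules.]

```python
def to_frames(value, fps: float = 10.0) -> int:
    """Convert a frame count (int), seconds (float), 'SSSs' seconds string,
    or 'HH:MM:SS.nnn' timecode into a frame number at the given fps."""
    if isinstance(value, int):  # already a frame count
        return value
    if isinstance(value, float):  # seconds
        return round(value * fps)
    if value.endswith("s"):  # e.g. "1.5s"
        return round(float(value[:-1]) * fps)
    hours, minutes, seconds = value.split(":")  # "HH:MM:SS.nnn"
    return round((int(hours) * 3600 + int(minutes) * 60 + float(seconds)) * fps)

print(to_frames("00:00:01.500"))
```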
     def on_new_scene(frame_img: numpy.ndarray, frame_num: int):
diff --git a/tests/test_backend_opencv.py b/tests/test_backend_opencv.py
index eeae7620..4a77f2cb 100644
--- a/tests/test_backend_opencv.py
+++ b/tests/test_backend_opencv.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -10,7 +9,7 @@
 # PySceneDetect is licensed under the BSD 3-Clause License; see the
 # included LICENSE file, or visit one of the above pages for details.
 #
-""" PySceneDetect scenedetect.backend.opencv Tests
+"""PySceneDetect scenedetect.backend.opencv Tests

 This file includes unit tests for the scenedetect.backend.opencv module that implements the
 VideoStreamCv2 ('opencv') backend. These tests validate behaviour specific to this backend.
@@ -21,7 +20,7 @@
 import cv2

 from scenedetect import ContentDetector, SceneManager
-from scenedetect.backends.opencv import VideoStreamCv2, VideoCaptureAdapter
+from scenedetect.backends.opencv import VideoCaptureAdapter, VideoStreamCv2

 GROUND_TRUTH_CAPTURE_ADAPTER_TEST = [1, 90, 210]
 GROUND_TRUTH_CAPTURE_ADAPTER_CALLBACK_TEST = [30, 180, 394]
diff --git a/tests/test_backend_pyav.py b/tests/test_backend_pyav.py
index bfcc4bfb..8e27a495 100644
--- a/tests/test_backend_pyav.py
+++ b/tests/test_backend_pyav.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -10,7 +9,7 @@
 # PySceneDetect is licensed under the BSD 3-Clause License; see the
 # included LICENSE file, or visit one of the above pages for details.
 #
-""" PySceneDetect scenedetect.backend.pyav Tests
+"""PySceneDetect scenedetect.backend.pyav Tests

 This file includes unit tests for the scenedetect.backend.pyav module that implements the
 VideoStreamAv ('pyav') backend. These tests validate behaviour specific to this backend.
@@ -24,9 +23,9 @@
 def test_video_stream_pyav_bytesio(test_video_file: str):
     """Test that VideoStreamAv works with a BytesIO input in addition to a path."""
     # Mode must be binary!
-    video_file = open(test_video_file, mode='rb')
-    stream = VideoStreamAv(path_or_io=video_file, threading_mode=None)
-    assert stream.is_seekable
-    stream.seek(50)
-    for _ in range(10):
-        assert stream.read() is not False
+    with open(test_video_file, mode="rb") as video_file:
+        stream = VideoStreamAv(path_or_io=video_file, threading_mode=None)
+        assert stream.is_seekable
+        stream.seek(50)
+        for _ in range(10):
+            assert stream.read() is not False
diff --git a/tests/test_backwards_compat.py b/tests/test_backwards_compat.py
index 2c7b5064..b4111aba 100644
--- a/tests/test_backwards_compat.py
+++ b/tests/test_backwards_compat.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -21,7 +20,7 @@
 import logging
 import os

-from scenedetect import SceneManager, StatsManager, VideoManager, ContentDetector
+from scenedetect import ContentDetector, SceneManager, StatsManager, VideoManager
 from scenedetect.platform import init_logger

@@ -35,25 +34,27 @@ def validate_backwards_compatibility(test_video_file: str, stats_file_path: str)
     # Suppress errors generated by using deprecated classes/arguments below.
     init_logger(log_level=logging.CRITICAL)
     video_manager = VideoManager([test_video_file])
-    stats_file_path = test_video_file + '.csv'
+    stats_file_path = test_video_file + ".csv"
     stats_manager = StatsManager()
     scene_manager = SceneManager(stats_manager)
     scene_manager.add_detector(ContentDetector())
     base_timecode = video_manager.get_base_timecode()
     scene_list = []
     try:
-        start_time = base_timecode + 20 # 00:00:00.667
-        end_time = base_timecode + 10.0 # 00:00:10.000
+        start_time = base_timecode + 20  # 00:00:00.667
+        end_time = base_timecode + 10.0  # 00:00:10.000
         if os.path.exists(stats_file_path):
-            with open(stats_file_path, 'r') as stats_file:
+            with open(stats_file_path) as stats_file:
                 stats_manager.load_from_csv(stats_file)
             # ContentDetector requires at least 1 frame before it can calculate any metrics.
-            assert stats_manager.metrics_exist(start_time.get_frames() + 1,
-                                               [ContentDetector.FRAME_SCORE_KEY])
+            assert stats_manager.metrics_exist(
+                start_time.get_frames() + 1, [ContentDetector.FRAME_SCORE_KEY]
+            )
             # Correct end frame # for presentation duration.
-            assert stats_manager.metrics_exist(end_time.get_frames() - 1,
-                                               [ContentDetector.FRAME_SCORE_KEY])
+            assert stats_manager.metrics_exist(
+                end_time.get_frames() - 1, [ContentDetector.FRAME_SCORE_KEY]
+            )

         video_manager.set_duration(start_time=start_time, end_time=end_time)
         video_manager.set_downscale_factor()
@@ -66,18 +67,21 @@ def validate_backwards_compatibility(test_video_file: str, stats_file_path: str)
         # Correct end frame # for presentation duration.
         assert video_manager.get_current_timecode().get_frames() == end_time.get_frames() + 1

-        print('List of scenes obtained:')
+        print("List of scenes obtained:")
         for i, scene in enumerate(scene_list):
-            print(' Scene %2d: Start %s / Frame %d, End %s / Frame %d' % (
-                i + 1,
-                scene[0].get_timecode(),
-                scene[0].get_frames(),
-                scene[1].get_timecode(),
-                scene[1].get_frames(),
-            ))
+            print(
+                " Scene %2d: Start %s / Frame %d, End %s / Frame %d"
+                % (
+                    i + 1,
+                    scene[0].get_timecode(),
+                    scene[0].get_frames(),
+                    scene[1].get_timecode(),
+                    scene[1].get_frames(),
+                )
+            )

         if stats_manager.is_save_required():
-            with open(stats_file_path, 'w') as stats_file:
+            with open(stats_file_path, "w") as stats_file:
                 stats_manager.save_to_csv(stats_file, base_timecode=base_timecode)
     finally:
         video_manager.release()
@@ -87,7 +91,7 @@ def validate_backwards_compatibility(test_video_file: str, stats_file_path: str)
 def test_backwards_compatibility_with_stats(test_video_file: str):
     """Runs equivalent code to `tests/api_test.py` from v0.5 twice to also exercise loading
     a statsfile from disk."""
-    stats_file_path = test_video_file + '.csv'
+    stats_file_path = test_video_file + ".csv"
     if os.path.exists(stats_file_path):
         os.remove(stats_file_path)
     scenes = validate_backwards_compatibility(test_video_file, stats_file_path)
diff --git a/tests/test_cli.py b/tests/test_cli.py
index fffa0a56..dbd9ef90 100644
--- a/tests/test_cli.py
+++ b/tests/test_cli.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -13,12 +12,12 @@

 import glob
 import os
-import typing as ty
 import subprocess
-import pytest
+import typing as ty
 from pathlib import Path

 import cv2
+import pytest

 from scenedetect.video_splitter import is_ffmpeg_available, is_mkvmerge_available

@@ -42,24 +41,28 @@

 # TODO: Missing tests for --min-scene-len and --drop-short-scenes.
-SCENEDETECT_CMD = 'python -m scenedetect'
+SCENEDETECT_CMD = "python -m scenedetect"
 ALL_DETECTORS = [
-    'detect-content', 'detect-threshold', 'detect-adaptive', 'detect-hist', 'detect-hash'
+    "detect-content",
+    "detect-threshold",
+    "detect-adaptive",
+    "detect-hist",
+    "detect-hash",
 ]
-ALL_BACKENDS = ['opencv', 'pyav']
+ALL_BACKENDS = ["opencv", "pyav"]

-DEFAULT_VIDEO_PATH = 'tests/resources/goldeneye.mp4'
+DEFAULT_VIDEO_PATH = "tests/resources/goldeneye.mp4"
 DEFAULT_VIDEO_NAME = Path(DEFAULT_VIDEO_PATH).stem
-DEFAULT_BACKEND = 'opencv'
-DEFAULT_STATSFILE = 'statsfile.csv'
-DEFAULT_TIME = '-s 2s -d 4s' # Seek forward a bit but limit the amount we process.
-DEFAULT_DETECTOR = 'detect-content'
-DEFAULT_CONFIG_FILE = 'scenedetect.cfg' # Ensure we default to a "blank" config file.
-DEFAULT_NUM_SCENES = 2 # Number of scenes we expect to detect given above params.
+DEFAULT_BACKEND = "opencv"
+DEFAULT_STATSFILE = "statsfile.csv"
+DEFAULT_TIME = "-s 2s -d 4s"  # Seek forward a bit but limit the amount we process.
+DEFAULT_DETECTOR = "detect-content"
+DEFAULT_CONFIG_FILE = "scenedetect.cfg"  # Ensure we default to a "blank" config file.
+DEFAULT_NUM_SCENES = 2  # Number of scenes we expect to detect given above params.
 def invoke_scenedetect(
-    args: str = '',
+    args: str = "",
     output_dir: ty.Optional[str] = None,
     config_file: ty.Optional[str] = DEFAULT_CONFIG_FILE,
     **kwargs,
@@ -91,11 +94,11 @@ def invoke_scenedetect(
     value_dict.update(**kwargs)
     command = SCENEDETECT_CMD
     if output_dir:
-        command += ' -o %s' % output_dir
+        command += " -o %s" % output_dir
     if config_file:
-        command += ' -c %s' % config_file
-    command += ' ' + args.format(**value_dict)
-    return subprocess.call(command.strip().split(' '))
+        command += " -c %s" % config_file
+    command += " " + args.format(**value_dict)
+    return subprocess.call(command.strip().split(" "))


 def test_cli_no_args():
@@ -105,10 +108,10 @@ def test_cli_no_args():

 def test_cli_default_detector():
     """Test `scenedetect` command invoked without a detector."""
-    assert invoke_scenedetect('-i {VIDEO} time {TIME}', config_file=None) == 0
+    assert invoke_scenedetect("-i {VIDEO} time {TIME}", config_file=None) == 0


-@pytest.mark.parametrize('info_command', ['help', 'about', 'version'])
+@pytest.mark.parametrize("info_command", ["help", "about", "version"])
 def test_cli_info_command(info_command):
     """Test `scenedetect` info commands (e.g. help, about)."""
     assert invoke_scenedetect(info_command) == 0
@@ -116,10 +119,10 @@ def test_cli_info_command(info_command):

 def test_cli_time_validate_options():
     """Validate behavior of setting parameters via the `time` command."""
-    base_command = '-i {VIDEO} time {TIME} {DETECTOR}'
+    base_command = "-i {VIDEO} time {TIME} {DETECTOR}"
     # Ensure cannot set end and duration together.
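[Editorial aside: the `invoke_scenedetect` helper refactored above builds the CLI invocation by string concatenation before splitting on spaces; a process-free sketch of that assembly follows, with a hypothetical video path and no `subprocess` call.]

```python
SCENEDETECT_CMD = "python -m scenedetect"

def build_command(args: str, output_dir=None, config_file="scenedetect.cfg"):
    # Mirrors the helper's assembly order: -o, then -c, then the remaining args.
    command = SCENEDETECT_CMD
    if output_dir:
        command += " -o %s" % output_dir
    if config_file:
        command += " -c %s" % config_file
    command += " " + args
    return command.strip().split(" ")

print(build_command("-i video.mp4 time -s 2s -d 4s detect-content"))
```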
-    assert invoke_scenedetect(base_command, TIME='-s 2.0 -d 6.0 -e 8.0') != 0
-    assert invoke_scenedetect(base_command, TIME='-s 2.0 -e 8.0 -d 6.0 ') != 0
+    assert invoke_scenedetect(base_command, TIME="-s 2.0 -d 6.0 -e 8.0") != 0
+    assert invoke_scenedetect(base_command, TIME="-s 2.0 -e 8.0 -d 6.0 ") != 0


 def test_cli_time_end():
@@ -142,10 +145,11 @@ def test_cli_time_end():
     ]
     for test_case in TEST_CASES:
         output = subprocess.check_output(
-            SCENEDETECT_CMD.split(' ') +
-            ["-i", DEFAULT_VIDEO_PATH, "-m", "0", "detect-content", "list-scenes", "-n"] +
-            test_case.split(),
-            text=True)
+            SCENEDETECT_CMD.split(" ")
+            + ["-i", DEFAULT_VIDEO_PATH, "-m", "0", "detect-content", "list-scenes", "-n"]
+            + test_case.split(),
+            text=True,
+        )
         assert EXPECTED in output, test_case
@@ -169,10 +173,11 @@ def test_cli_time_start():
     ]
     for test_case in TEST_CASES:
         output = subprocess.check_output(
-            SCENEDETECT_CMD.split(' ') +
-            ["-i", DEFAULT_VIDEO_PATH, "-m", "0", "detect-content", "list-scenes", "-n"] +
-            test_case.split(),
-            text=True)
+            SCENEDETECT_CMD.split(" ")
+            + ["-i", DEFAULT_VIDEO_PATH, "-m", "0", "detect-content", "list-scenes", "-n"]
+            + test_case.split(),
+            text=True,
+        )
         assert EXPECTED in output, test_case
@@ -213,10 +218,11 @@ def test_cli_time_scene_boundary():
     ]
     for test_case in TEST_CASES:
         output = subprocess.check_output(
-            SCENEDETECT_CMD.split(' ') +
-            ["-i", DEFAULT_VIDEO_PATH, "-m", "0", "detect-content", "list-scenes", "-n"] +
-            test_case.split(),
-            text=True)
+            SCENEDETECT_CMD.split(" ")
+            + ["-i", DEFAULT_VIDEO_PATH, "-m", "0", "detect-content", "list-scenes", "-n"]
+            + test_case.split(),
+            text=True,
+        )
         assert EXPECTED in output, test_case
@@ -224,10 +230,12 @@ def test_cli_time_end_of_video():
     """Validate frame number/timecode alignment at the end of the video. The end timecode
     includes presentation time and therefore should represent the full length of the video."""
     output = subprocess.check_output(
-        SCENEDETECT_CMD.split(' ') +
-        ['-i', DEFAULT_VIDEO_PATH, 'detect-content', 'list-scenes', '-n', 'time', '-s', '1872'],
-        text=True)
-    assert """
+        SCENEDETECT_CMD.split(" ")
+        + ["-i", DEFAULT_VIDEO_PATH, "detect-content", "list-scenes", "-n", "time", "-s", "1872"],
+        text=True,
+    )
+    assert (
+        """
 -----------------------------------------------------------------------
  | Scene # | Start Frame | Start Time | End Frame | End Time |
 -----------------------------------------------------------------------
@@ -235,31 +243,39 @@ def test_cli_time_end_of_video():
  | 2 | 1917 | 00:01:19.913 | 1966 | 00:01:21.999 |
  | 3 | 1967 | 00:01:21.999 | 1980 | 00:01:22.582 |
 -----------------------------------------------------------------------
-""" in output
+"""
+        in output
+    )
     assert "00:01:19.913,00:01:21.999" in output


-@pytest.mark.parametrize('detector_command', ALL_DETECTORS)
+@pytest.mark.parametrize("detector_command", ALL_DETECTORS)
 def test_cli_detector(detector_command: str):
     """Test each detection algorithm."""
     # Ensure all detectors work without a statsfile.
-    assert invoke_scenedetect('-i {VIDEO} time {TIME} {DETECTOR}', DETECTOR=detector_command) == 0
+    assert invoke_scenedetect("-i {VIDEO} time {TIME} {DETECTOR}", DETECTOR=detector_command) == 0


-@pytest.mark.parametrize('detector_command', ALL_DETECTORS)
+@pytest.mark.parametrize("detector_command", ALL_DETECTORS)
 def test_cli_detector_with_stats(tmp_path, detector_command: str):
     """Test each detection algorithm with a statsfile."""
     # Run with a statsfile twice to ensure the file is populated with those metrics and reloaded.
- assert invoke_scenedetect( - '-i {VIDEO} -s {STATS} time {TIME} {DETECTOR}', - output_dir=tmp_path, - DETECTOR=detector_command, - ) == 0 - assert invoke_scenedetect( - '-i {VIDEO} -s {STATS} time {TIME} {DETECTOR}', - output_dir=tmp_path, - DETECTOR=detector_command, - ) == 0 + assert ( + invoke_scenedetect( + "-i {VIDEO} -s {STATS} time {TIME} {DETECTOR}", + output_dir=tmp_path, + DETECTOR=detector_command, + ) + == 0 + ) + assert ( + invoke_scenedetect( + "-i {VIDEO} -s {STATS} time {TIME} {DETECTOR}", + output_dir=tmp_path, + DETECTOR=detector_command, + ) + == 0 + ) # TODO: Check for existence of statsfile by trying to load it with the library, # and ensuring that we got some frames. @@ -267,20 +283,29 @@ def test_cli_detector_with_stats(tmp_path, detector_command: str): def test_cli_list_scenes(tmp_path: Path): """Test `list-scenes` command.""" # Regular invocation - assert invoke_scenedetect( - '-i {VIDEO} time {TIME} {DETECTOR} list-scenes', - output_dir=tmp_path, - ) == 0 + assert ( + invoke_scenedetect( + "-i {VIDEO} time {TIME} {DETECTOR} list-scenes", + output_dir=tmp_path, + ) + == 0 + ) # Add statsfile - assert invoke_scenedetect( - '-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} list-scenes', - output_dir=tmp_path, - ) == 0 + assert ( + invoke_scenedetect( + "-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} list-scenes", + output_dir=tmp_path, + ) + == 0 + ) # Suppress output file - assert invoke_scenedetect( - '-i {VIDEO} time {TIME} {DETECTOR} list-scenes -n', - output_dir=tmp_path, - ) == 0 + assert ( + invoke_scenedetect( + "-i {VIDEO} time {TIME} {DETECTOR} list-scenes -n", + output_dir=tmp_path, + ) + == 0 + ) # TODO: Check for output files from regular invocation. # TODO: Delete scene list and ensure is not recreated using -n. 
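The `invoke_scenedetect` helper exercised above builds a command string and splits it with `str.split(" ")`, which would break a quoted argument such as `-a "-c:v libx264"` into several tokens. A minimal sketch (the helper name is illustrative, not from the patch) of how `shlex.split` from the standard library preserves shell-style quoting, should quoted arguments ever need to pass through this path:

```python
import shlex


def to_argv(command: str) -> list:
    """Split a shell-style command string into an argv list,
    keeping quoted arguments intact as single tokens."""
    return shlex.split(command)


# A naive command.split(" ") would yield ['-a', '"-c:v', 'libx264"'];
# shlex keeps the quoted ffmpeg arguments together as one token.
argv = to_argv('scenedetect split-video -a "-c:v libx264"')
```

In the tests as written this is not a live bug, since the default argument templates contain no quoted strings; the sketch only shows why the mutually-exclusive `-a`/`-c` case is asserted via exit code rather than by round-tripping quoted arguments.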
@@ -289,56 +314,86 @@ def test_cli_list_scenes(tmp_path: Path): def test_cli_split_video_ffmpeg(tmp_path: Path): """Test `split-video` command using ffmpeg.""" # Assumption: The default filename format is VIDEO_NAME-Scene-SCENE_NUMBER. - assert invoke_scenedetect( - '-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} split-video', output_dir=tmp_path) == 0 + assert ( + invoke_scenedetect( + "-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} split-video", output_dir=tmp_path + ) + == 0 + ) entries = sorted(tmp_path.glob(f"{DEFAULT_VIDEO_NAME}-Scene-*")) - assert (len(entries) == DEFAULT_NUM_SCENES), entries + assert len(entries) == DEFAULT_NUM_SCENES, entries [entry.unlink() for entry in entries] - assert invoke_scenedetect( - '-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} split-video -c', output_dir=tmp_path) == 0 + assert ( + invoke_scenedetect( + "-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} split-video -c", output_dir=tmp_path + ) + == 0 + ) entries = sorted(tmp_path.glob(f"{DEFAULT_VIDEO_NAME}-Scene-*")) - assert (len(entries) == DEFAULT_NUM_SCENES) + assert len(entries) == DEFAULT_NUM_SCENES [entry.unlink() for entry in entries] - assert invoke_scenedetect( - '-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} split-video -f abc$VIDEO_NAME-123$SCENE_NUMBER', - output_dir=tmp_path) == 0 + assert ( + invoke_scenedetect( + "-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} split-video -f abc$VIDEO_NAME-123$SCENE_NUMBER", + output_dir=tmp_path, + ) + == 0 + ) entries = sorted(tmp_path.glob(f"abc{DEFAULT_VIDEO_NAME}-123*")) - assert (len(entries) == DEFAULT_NUM_SCENES), entries + assert len(entries) == DEFAULT_NUM_SCENES, entries [entry.unlink() for entry in entries] # -a/--args and -c/--copy are mutually exclusive, so this command should fail (return nonzero) assert invoke_scenedetect( - "-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} split-video -c -a \"-c:v libx264\"", - output_dir=tmp_path) + '-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} split-video -c -a "-c:v libx264"', + 
output_dir=tmp_path, + ) @pytest.mark.skipif(condition=not is_mkvmerge_available(), reason="mkvmerge is not available") def test_cli_split_video_mkvmerge(tmp_path: Path): """Test `split-video` command using mkvmerge.""" - assert invoke_scenedetect( - '-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} split-video -m', output_dir=tmp_path) == 0 - assert invoke_scenedetect( - '-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} split-video -m -c', output_dir=tmp_path) == 0 - assert invoke_scenedetect( - '-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} split-video -m -f "test$VIDEO_NAME"', - output_dir=tmp_path) == 0 + assert ( + invoke_scenedetect( + "-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} split-video -m", output_dir=tmp_path + ) + == 0 + ) + assert ( + invoke_scenedetect( + "-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} split-video -m -c", output_dir=tmp_path + ) + == 0 + ) + assert ( + invoke_scenedetect( + '-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} split-video -m -f "test$VIDEO_NAME"', + output_dir=tmp_path, + ) + == 0 + ) # -a/--args and -m/--mkvmerge are mutually exclusive assert invoke_scenedetect( '-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} split-video -m -a "-c:v libx264"', - output_dir=tmp_path) + output_dir=tmp_path, + ) # TODO: Check for existence of split video files. def test_cli_save_images(tmp_path: Path): """Test `save-images` command.""" - assert invoke_scenedetect( - '-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} save-images', output_dir=tmp_path) == 0 + assert ( + invoke_scenedetect( + "-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} save-images", output_dir=tmp_path + ) + == 0 + ) # Open one of the created images and make sure it has the correct resolution. # TODO: Also need to test that the right number of images was generated, and compare with # expected frames from the actual video. 
- images = glob.glob(os.path.join(tmp_path, '*.jpg')) + images = glob.glob(os.path.join(tmp_path, "*.jpg")) assert images image = cv2.imread(images[0]) assert image.shape == (544, 1280, 3) @@ -347,11 +402,15 @@ def test_cli_save_images(tmp_path: Path): # TODO(#134): This works fine with OpenCV currently, but needs to be supported for PyAV and MoviePy. def test_cli_save_images_rotation(rotated_video_file, tmp_path): """Test that `save-images` command rotates images correctly with the default backend.""" - assert invoke_scenedetect( - '-i {VIDEO} {DETECTOR} time {TIME} save-images', - VIDEO=rotated_video_file, - output_dir=tmp_path) == 0 - images = glob.glob(os.path.join(tmp_path, '*.jpg')) + assert ( + invoke_scenedetect( + "-i {VIDEO} {DETECTOR} time {TIME} save-images", + VIDEO=rotated_video_file, + output_dir=tmp_path, + ) + == 0 + ) + images = glob.glob(os.path.join(tmp_path, "*.jpg")) assert images image = cv2.imread(images[0]) # Note same resolution as in test_cli_save_images but rotated 90 degrees. @@ -360,42 +419,51 @@ def test_cli_save_images_rotation(rotated_video_file, tmp_path): def test_cli_export_html(tmp_path: Path): """Test `export-html` command.""" - base_command = '-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} {COMMAND}' - assert invoke_scenedetect( - base_command, COMMAND='save-images export-html', output_dir=tmp_path) == 0 - assert invoke_scenedetect( - base_command, COMMAND='export-html --no-images', output_dir=tmp_path) == 0 + base_command = "-i {VIDEO} -s {STATS} time {TIME} {DETECTOR} {COMMAND}" + assert ( + invoke_scenedetect(base_command, COMMAND="save-images export-html", output_dir=tmp_path) + == 0 + ) + assert ( + invoke_scenedetect(base_command, COMMAND="export-html --no-images", output_dir=tmp_path) + == 0 + ) # TODO: Check for existence of HTML & image files. 
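The `split-video` and `save-images` tests above verify output by globbing the temporary directory against the default `VIDEO_NAME-Scene-SCENE_NUMBER` filename template and counting matches. A self-contained sketch of that glob-and-count pattern (file names below are synthetic stand-ins, not the repository's test assets):

```python
import tempfile
from pathlib import Path


def count_scene_files(directory: Path, video_name: str) -> int:
    """Count files matching the default split-video naming template."""
    return len(sorted(directory.glob(f"{video_name}-Scene-*")))


# Simulate three split-video outputs in a temporary directory,
# then count them the same way the tests do.
with tempfile.TemporaryDirectory() as tmp:
    tmp_path = Path(tmp)
    for i in range(1, 4):
        (tmp_path / f"goldeneye-Scene-{i:03d}.mp4").touch()
    num_scenes = count_scene_files(tmp_path, "goldeneye")
```

The same approach backs the custom-template assertion (`-f abc$VIDEO_NAME-123$SCENE_NUMBER`), with the glob pattern adjusted to the template's prefix.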
-@pytest.mark.parametrize('backend_type', ALL_BACKENDS) +@pytest.mark.parametrize("backend_type", ALL_BACKENDS) def test_cli_backend(backend_type: str): """Test setting the `-b`/`--backend` argument.""" - assert invoke_scenedetect( - '-i {VIDEO} -b {BACKEND} time {TIME} {DETECTOR}', BACKEND=backend_type) == 0 + assert ( + invoke_scenedetect("-i {VIDEO} -b {BACKEND} time {TIME} {DETECTOR}", BACKEND=backend_type) + == 0 + ) def test_cli_backend_unsupported(): """Ensure setting an invalid backend returns an error.""" - assert invoke_scenedetect( - '-i {VIDEO} -b {BACKEND} {DETECTOR}', BACKEND='unknown_backend_type') != 0 + assert ( + invoke_scenedetect("-i {VIDEO} -b {BACKEND} {DETECTOR}", BACKEND="unknown_backend_type") + != 0 + ) def test_cli_load_scenes(): """Ensure we can load scenes both with and without the cut row.""" - assert invoke_scenedetect('-i {VIDEO} time {TIME} {DETECTOR} list-scenes') == 0 - assert invoke_scenedetect('-i {VIDEO} time {TIME} load-scenes -i {VIDEO_NAME}-Scenes.csv') == 0 + assert invoke_scenedetect("-i {VIDEO} time {TIME} {DETECTOR} list-scenes") == 0 + assert invoke_scenedetect("-i {VIDEO} time {TIME} load-scenes -i {VIDEO_NAME}-Scenes.csv") == 0 # Specifying a detector with load-scenes should be disallowed. assert invoke_scenedetect( - '-i {VIDEO} time {TIME} {DETECTOR} load-scenes -i {VIDEO_NAME}-Scenes.csv') + "-i {VIDEO} time {TIME} {DETECTOR} load-scenes -i {VIDEO_NAME}-Scenes.csv" + ) # Specifying load-scenes several times should be disallowed. assert invoke_scenedetect( - '-i {VIDEO} time {TIME} load-scenes -i {VIDEO_NAME}-Scenes.csv load-scenes -i {VIDEO_NAME}-Scenes.csv' + "-i {VIDEO} time {TIME} load-scenes -i {VIDEO_NAME}-Scenes.csv load-scenes -i {VIDEO_NAME}-Scenes.csv" ) # If `-s`/`--skip-cuts` is specified, the resulting scene list should still be compatible with # the `load-scenes` command. 
- assert invoke_scenedetect('-i {VIDEO} time {TIME} {DETECTOR} list-scenes -s') == 0 - assert invoke_scenedetect('-i {VIDEO} time {TIME} load-scenes -i {VIDEO_NAME}-Scenes.csv') == 0 + assert invoke_scenedetect("-i {VIDEO} time {TIME} {DETECTOR} list-scenes -s") == 0 + assert invoke_scenedetect("-i {VIDEO} time {TIME} load-scenes -i {VIDEO_NAME}-Scenes.csv") == 0 def test_cli_load_scenes_with_time_frames(): @@ -406,24 +474,27 @@ def test_cli_load_scenes_with_time_frames(): 2,91 3,211 """ - with open('test_scene_list.csv', 'w') as f: + with open("test_scene_list.csv", "w") as f: f.write(scenes_csv) output = subprocess.check_output( - SCENEDETECT_CMD.split(' ') + [ - '-i', + SCENEDETECT_CMD.split(" ") + + [ + "-i", DEFAULT_VIDEO_PATH, - 'load-scenes', - '-i', - 'test_scene_list.csv', - 'time', - '-s', - '2s', - '-e', - '10s', - 'list-scenes', + "load-scenes", + "-i", + "test_scene_list.csv", + "time", + "-s", + "2s", + "-e", + "10s", + "list-scenes", ], - text=True) - assert """ + text=True, + ) + assert ( + """ ----------------------------------------------------------------------- | Scene # | Start Frame | Start Time | End Frame | End Time | ----------------------------------------------------------------------- @@ -431,7 +502,9 @@ def test_cli_load_scenes_with_time_frames(): | 2 | 91 | 00:00:03.754 | 210 | 00:00:08.759 | | 3 | 211 | 00:00:08.759 | 240 | 00:00:10.010 | ----------------------------------------------------------------------- -""" in output +""" + in output + ) assert "00:00:03.754,00:00:08.759" in output @@ -443,21 +516,45 @@ def test_cli_load_scenes_round_trip(): 2,91 3,211 """ - with open('test_scene_list.csv', 'w') as f: + with open("test_scene_list.csv", "w") as f: f.write(scenes_csv) ground_truth = subprocess.check_output( - SCENEDETECT_CMD.split(' ') + [ - '-i', DEFAULT_VIDEO_PATH, 'detect-content', 'list-scenes', '-f', 'testout.csv', 'time', - '-s', '200', '-e', '400' + SCENEDETECT_CMD.split(" ") + + [ + "-i", + DEFAULT_VIDEO_PATH, + 
"detect-content", + "list-scenes", + "-f", + "testout.csv", + "time", + "-s", + "200", + "-e", + "400", ], - text=True) + text=True, + ) loaded_first_pass = subprocess.check_output( - SCENEDETECT_CMD.split(' ') + [ - '-i', DEFAULT_VIDEO_PATH, 'load-scenes', '-i', 'testout.csv', 'time', '-s', '200', '-e', - '400', 'list-scenes', '-f', 'testout2.csv' + SCENEDETECT_CMD.split(" ") + + [ + "-i", + DEFAULT_VIDEO_PATH, + "load-scenes", + "-i", + "testout.csv", + "time", + "-s", + "200", + "-e", + "400", + "list-scenes", + "-f", + "testout2.csv", ], - text=True) - SPLIT_POINT = ' | Scene # | Start Frame | Start Time | End Frame | End Time |' + text=True, + ) + SPLIT_POINT = " | Scene # | Start Frame | Start Time | End Frame | End Time |" assert ground_truth.split(SPLIT_POINT)[1] == loaded_first_pass.split(SPLIT_POINT)[1] - with open('testout.csv') as first, open('testout2.csv') as second: + with open("testout.csv") as first, open("testout2.csv") as second: assert first.readlines() == second.readlines() diff --git a/tests/test_detectors.py b/tests/test_detectors.py index 38152b01..0df1f95a 100644 --- a/tests/test_detectors.py +++ b/tests/test_detectors.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- @@ -10,22 +9,28 @@ # PySceneDetect is licensed under the BSD 3-Clause License; see the # included LICENSE file, or visit one of the above pages for details. # -""" PySceneDetect Scene Detection Tests +"""PySceneDetect Scene Detection Tests These tests ensure that the detection algorithms deliver consistent results by using known ground truths of scene cut locations in the test case material. 
""" -from dataclasses import dataclass import os import typing as ty +from dataclasses import dataclass import pytest -from scenedetect import detect, SceneManager, FrameTimecode, StatsManager, SceneDetector -from scenedetect.detectors import * +from scenedetect import FrameTimecode, SceneDetector, SceneManager, StatsManager, detect from scenedetect.backends.opencv import VideoStreamCv2 +from scenedetect.detectors import ( + AdaptiveDetector, + ContentDetector, + HashDetector, + HistogramDetector, + ThresholdDetector, +) FAST_CUT_DETECTORS: ty.Tuple[ty.Type[SceneDetector]] = ( AdaptiveDetector, @@ -44,20 +49,23 @@ # TODO: Reduce code duplication here and in `conftest.py` def get_absolute_path(relative_path: str) -> str: - """ Returns the absolute path to a (relative) path of a file that + """Returns the absolute path to a (relative) path of a file that should exist within the tests/ directory. Throws FileNotFoundError if the file could not be found. """ abs_path = os.path.join(os.path.abspath(os.path.dirname(__file__)), relative_path) if not os.path.exists(abs_path): - raise FileNotFoundError(""" + raise FileNotFoundError( + """ Test video file (%s) must be present to run test case. 
This file can be obtained by running the following commands from the root of the repository: git fetch --depth=1 https://github.com/Breakthrough/PySceneDetect.git refs/heads/resources:refs/remotes/origin/resources git checkout refs/remotes/origin/resources -- tests/resources/ git reset -""" % relative_path) +""" + % relative_path + ) return abs_path @@ -82,7 +90,8 @@ def detect(self): video_path=self.path, detector=self.detector, start_time=self.start_time, - end_time=self.end_time) + end_time=self.end_time, + ) def get_fast_cut_test_cases(): @@ -96,8 +105,11 @@ def get_fast_cut_test_cases(): detector=detector_type(min_scene_len=15), start_time=1199, end_time=1450, - scene_boundaries=[1199, 1226, 1260, 1281, 1334, 1365]), - id="%s/default" % detector_type.__name__) for detector_type in FAST_CUT_DETECTORS + scene_boundaries=[1199, 1226, 1260, 1281, 1334, 1365], + ), + id="%s/default" % detector_type.__name__, + ) + for detector_type in FAST_CUT_DETECTORS ] # goldeneye.mp4 with min_scene_len = 30 test_cases += [ @@ -107,8 +119,11 @@ def get_fast_cut_test_cases(): detector=detector_type(min_scene_len=30), start_time=1199, end_time=1450, - scene_boundaries=[1199, 1260, 1334, 1365]), - id="%s/m=30" % detector_type.__name__) for detector_type in FAST_CUT_DETECTORS + scene_boundaries=[1199, 1260, 1334, 1365], + ), + id="%s/m=30" % detector_type.__name__, + ) + for detector_type in FAST_CUT_DETECTORS ] return test_cases @@ -124,16 +139,20 @@ def get_fade_in_out_test_cases(): detector=ThresholdDetector(), start_time=0, end_time=500, - scene_boundaries=[0, 15, 198, 376]), - id="threshold_testvideo_default"), + scene_boundaries=[0, 15, 198, 376], + ), + id="threshold_testvideo_default", + ), pytest.param( TestCase( path=get_absolute_path("resources/fades.mp4"), detector=ThresholdDetector(), start_time=0, end_time=250, - scene_boundaries=[0, 84, 167]), - id="threshold_fades_default"), + scene_boundaries=[0, 84, 167], + ), + id="threshold_fades_default", + ), pytest.param( 
TestCase( path=get_absolute_path("resources/fades.mp4"), @@ -144,8 +163,10 @@ def get_fade_in_out_test_cases(): ), start_time=0, end_time=250, - scene_boundaries=[0, 84, 167, 245]), - id="threshold_fades_floor"), + scene_boundaries=[0, 84, 167, 245], + ), + id="threshold_fades_floor", + ), pytest.param( TestCase( path=get_absolute_path("resources/fades.mp4"), @@ -156,8 +177,10 @@ def get_fade_in_out_test_cases(): ), start_time=0, end_time=250, - scene_boundaries=[0, 42, 125, 209]), - id="threshold_fades_ceil"), + scene_boundaries=[0, 42, 125, 209], + ), + id="threshold_fades_ceil", + ), ] @@ -181,7 +204,7 @@ def test_detect_fades(test_case: TestCase): def test_detectors_with_stats(test_video_file): - """ Test all detectors functionality with a StatsManager. """ + """Test all detectors functionality with a StatsManager.""" # TODO(v1.0): Parameterize this test case (move fixture from cli to test config). for detector in ALL_DETECTORS: video = VideoStreamCv2(test_video_file) @@ -189,7 +212,7 @@ def test_detectors_with_stats(test_video_file): scene_manager = SceneManager(stats_manager=stats) scene_manager.add_detector(detector()) scene_manager.auto_downscale = True - end_time = FrameTimecode('00:00:08', video.frame_rate) + end_time = FrameTimecode("00:00:08", video.frame_rate) scene_manager.detect_scenes(video=video, end_time=end_time) initial_scene_len = len(scene_manager.get_scene_list()) assert initial_scene_len > 0, "Test case must have at least one scene." diff --git a/tests/test_frame_timecode.py b/tests/test_frame_timecode.py index aa5c5386..39b25125 100644 --- a/tests/test_frame_timecode.py +++ b/tests/test_frame_timecode.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- # # PySceneDetect: Python-Based Video Scene Detector # ------------------------------------------------------------------- @@ -10,7 +9,7 @@ # PySceneDetect is licensed under the BSD 3-Clause License; see the # included LICENSE file, or visit one of the above pages for details. 
# -""" PySceneDetect scenedetect.timecode Tests +"""PySceneDetect scenedetect.timecode Tests This file includes unit tests for the scenedetect.timecode module (specifically, the FrameTimecode object, used for representing frame-accurate timestamps and time values). @@ -21,18 +20,15 @@ or string HH:MM:SS[.nnn]. timecode format. """ -# pylint: disable=invalid-name, expression-not-assigned, unneeded-not, pointless-statement - # Third-Party Library Imports import pytest # Standard Library Imports -from scenedetect.frame_timecode import FrameTimecode -from scenedetect.frame_timecode import MAX_FPS_DELTA +from scenedetect.frame_timecode import MAX_FPS_DELTA, FrameTimecode def test_framerate(): - ''' Test FrameTimecode constructor argument "fps". ''' + """Test FrameTimecode constructor argument "fps".""" # Not passing fps results in TypeError. with pytest.raises(TypeError): FrameTimecode() @@ -65,7 +61,7 @@ def test_framerate(): def test_timecode_numeric(): - ''' Test FrameTimecode constructor argument "timecode" with numeric arguments. ''' + """Test FrameTimecode constructor argument "timecode" with numeric arguments.""" with pytest.raises(ValueError): FrameTimecode(timecode=-1, fps=1) with pytest.raises(ValueError): @@ -81,67 +77,67 @@ def test_timecode_numeric(): def test_timecode_string(): - ''' Test FrameTimecode constructor argument "timecode" with string arguments. 
''' + """Test FrameTimecode constructor argument "timecode" with string arguments.""" # Invalid strings: with pytest.raises(ValueError): - FrameTimecode(timecode='-1', fps=1) + FrameTimecode(timecode="-1", fps=1) with pytest.raises(ValueError): - FrameTimecode(timecode='-1.0', fps=1.0) + FrameTimecode(timecode="-1.0", fps=1.0) with pytest.raises(ValueError): - FrameTimecode(timecode='-0.1', fps=1.0) + FrameTimecode(timecode="-0.1", fps=1.0) with pytest.raises(ValueError): - FrameTimecode(timecode='1.9x', fps=1) + FrameTimecode(timecode="1.9x", fps=1) with pytest.raises(ValueError): - FrameTimecode(timecode='1x', fps=1.0) + FrameTimecode(timecode="1x", fps=1.0) with pytest.raises(ValueError): - FrameTimecode(timecode='1.9.9', fps=1.0) + FrameTimecode(timecode="1.9.9", fps=1.0) with pytest.raises(ValueError): - FrameTimecode(timecode='1.0-', fps=1.0) + FrameTimecode(timecode="1.0-", fps=1.0) # Frame number integer [int->str] ('%d', integer number as string) - assert FrameTimecode(timecode='0', fps=1).frame_num == 0 - assert FrameTimecode(timecode='1', fps=1).frame_num == 1 - assert FrameTimecode(timecode='10', fps=1.0).frame_num == 10 + assert FrameTimecode(timecode="0", fps=1).frame_num == 0 + assert FrameTimecode(timecode="1", fps=1).frame_num == 1 + assert FrameTimecode(timecode="10", fps=1.0).frame_num == 10 # Seconds format [float->str] ('%f', number as string) - assert FrameTimecode(timecode='0.0', fps=1).frame_num == 0 - assert FrameTimecode(timecode='1.0', fps=1).frame_num == 1 - assert FrameTimecode(timecode='10.0', fps=1.0).frame_num == 10 - assert FrameTimecode(timecode='10.0000000000', fps=1.0).frame_num == 10 - assert FrameTimecode(timecode='10.100', fps=1.0).frame_num == 10 - assert FrameTimecode(timecode='1.100', fps=10.0).frame_num == 11 + assert FrameTimecode(timecode="0.0", fps=1).frame_num == 0 + assert FrameTimecode(timecode="1.0", fps=1).frame_num == 1 + assert FrameTimecode(timecode="10.0", fps=1.0).frame_num == 10 + assert 
FrameTimecode(timecode="10.0000000000", fps=1.0).frame_num == 10 + assert FrameTimecode(timecode="10.100", fps=1.0).frame_num == 10 + assert FrameTimecode(timecode="1.100", fps=10.0).frame_num == 11 # Seconds format [float->str] ('%fs', number as string followed by 's' for seconds) - assert FrameTimecode(timecode='0s', fps=1).frame_num == 0 - assert FrameTimecode(timecode='1s', fps=1).frame_num == 1 - assert FrameTimecode(timecode='10s', fps=1.0).frame_num == 10 - assert FrameTimecode(timecode='10.0s', fps=1.0).frame_num == 10 - assert FrameTimecode(timecode='10.0000000000s', fps=1.0).frame_num == 10 - assert FrameTimecode(timecode='10.100s', fps=1.0).frame_num == 10 - assert FrameTimecode(timecode='1.100s', fps=10.0).frame_num == 11 + assert FrameTimecode(timecode="0s", fps=1).frame_num == 0 + assert FrameTimecode(timecode="1s", fps=1).frame_num == 1 + assert FrameTimecode(timecode="10s", fps=1.0).frame_num == 10 + assert FrameTimecode(timecode="10.0s", fps=1.0).frame_num == 10 + assert FrameTimecode(timecode="10.0000000000s", fps=1.0).frame_num == 10 + assert FrameTimecode(timecode="10.100s", fps=1.0).frame_num == 10 + assert FrameTimecode(timecode="1.100s", fps=10.0).frame_num == 11 # Standard timecode format [timecode->str] ('HH:MM:SS[.nnn]', where [.nnn] is optional) - assert FrameTimecode(timecode='00:00:01', fps=1).frame_num == 1 - assert FrameTimecode(timecode='00:00:01.9999', fps=1).frame_num == 2 - assert FrameTimecode(timecode='00:00:02.0000', fps=1).frame_num == 2 - assert FrameTimecode(timecode='00:00:02.0001', fps=1).frame_num == 2 + assert FrameTimecode(timecode="00:00:01", fps=1).frame_num == 1 + assert FrameTimecode(timecode="00:00:01.9999", fps=1).frame_num == 2 + assert FrameTimecode(timecode="00:00:02.0000", fps=1).frame_num == 2 + assert FrameTimecode(timecode="00:00:02.0001", fps=1).frame_num == 2 - assert FrameTimecode(timecode='00:00:01', fps=10).frame_num == 10 - assert FrameTimecode(timecode='00:00:00.5', fps=10).frame_num == 5 - assert 
FrameTimecode(timecode='00:00:00.100', fps=10).frame_num == 1 - assert FrameTimecode(timecode='00:00:00.001', fps=1000).frame_num == 1 + assert FrameTimecode(timecode="00:00:01", fps=10).frame_num == 10 + assert FrameTimecode(timecode="00:00:00.5", fps=10).frame_num == 5 + assert FrameTimecode(timecode="00:00:00.100", fps=10).frame_num == 1 + assert FrameTimecode(timecode="00:00:00.001", fps=1000).frame_num == 1 - assert FrameTimecode(timecode='00:00:59.999', fps=1).frame_num == 60 - assert FrameTimecode(timecode='00:01:00.000', fps=1).frame_num == 60 - assert FrameTimecode(timecode='00:01:00.001', fps=1).frame_num == 60 + assert FrameTimecode(timecode="00:00:59.999", fps=1).frame_num == 60 + assert FrameTimecode(timecode="00:01:00.000", fps=1).frame_num == 60 + assert FrameTimecode(timecode="00:01:00.001", fps=1).frame_num == 60 - assert FrameTimecode(timecode='00:59:59.999', fps=1).frame_num == 3600 - assert FrameTimecode(timecode='01:00:00.000', fps=1).frame_num == 3600 - assert FrameTimecode(timecode='01:00:00.001', fps=1).frame_num == 3600 + assert FrameTimecode(timecode="00:59:59.999", fps=1).frame_num == 3600 + assert FrameTimecode(timecode="01:00:00.000", fps=1).frame_num == 3600 + assert FrameTimecode(timecode="01:00:00.001", fps=1).frame_num == 3600 def test_get_frames(): - ''' Test FrameTimecode get_frames() method. 
''' + """Test FrameTimecode get_frames() method.""" assert FrameTimecode(timecode=1, fps=1.0).get_frames(), 1 assert FrameTimecode(timecode=1000, fps=60.0).get_frames(), 1000 assert FrameTimecode(timecode=1000000000, fps=29.97).get_frames(), 1000000000 @@ -150,106 +146,108 @@ def test_get_frames(): assert FrameTimecode(timecode=1000.0, fps=60.0).get_frames(), int(1000.0 * 60.0) assert FrameTimecode(timecode=1000000000.0, fps=29.97).get_frames(), int(1000000000.0 * 29.97) - assert FrameTimecode(timecode='00:00:02.0000', fps=1).get_frames(), 2 - assert FrameTimecode(timecode='00:00:00.5', fps=10).get_frames(), 5 - assert FrameTimecode(timecode='00:00:01', fps=10).get_frames(), 10 - assert FrameTimecode(timecode='00:01:00.000', fps=1).get_frames(), 60 + assert FrameTimecode(timecode="00:00:02.0000", fps=1).get_frames(), 2 + assert FrameTimecode(timecode="00:00:00.5", fps=10).get_frames(), 5 + assert FrameTimecode(timecode="00:00:01", fps=10).get_frames(), 10 + assert FrameTimecode(timecode="00:01:00.000", fps=1).get_frames(), 60 def test_get_seconds(): - ''' Test FrameTimecode get_seconds() method. 
''' + """Test FrameTimecode get_seconds() method.""" assert FrameTimecode(timecode=1, fps=1.0).get_seconds(), pytest.approx(1.0 / 1.0) assert FrameTimecode(timecode=1000, fps=60.0).get_seconds(), pytest.approx(1000 / 60.0) - assert FrameTimecode( - timecode=1000000000, fps=29.97).get_seconds(), pytest.approx(1000000000 / 29.97) + assert FrameTimecode(timecode=1000000000, fps=29.97).get_seconds(), pytest.approx( + 1000000000 / 29.97 + ) assert FrameTimecode(timecode=1.0, fps=1.0).get_seconds(), pytest.approx(1.0) assert FrameTimecode(timecode=1000.0, fps=60.0).get_seconds(), pytest.approx(1000.0) - assert FrameTimecode( - timecode=1000000000.0, fps=29.97).get_seconds(), pytest.approx(1000000000.0) + assert FrameTimecode(timecode=1000000000.0, fps=29.97).get_seconds(), pytest.approx( + 1000000000.0 + ) - assert FrameTimecode(timecode='00:00:02.0000', fps=1).get_seconds(), pytest.approx(2.0) - assert FrameTimecode(timecode='00:00:00.5', fps=10).get_seconds(), pytest.approx(0.5) - assert FrameTimecode(timecode='00:00:01', fps=10).get_seconds(), pytest.approx(1.0) - assert FrameTimecode(timecode='00:01:00.000', fps=1).get_seconds(), pytest.approx(60.0) + assert FrameTimecode(timecode="00:00:02.0000", fps=1).get_seconds(), pytest.approx(2.0) + assert FrameTimecode(timecode="00:00:00.5", fps=10).get_seconds(), pytest.approx(0.5) + assert FrameTimecode(timecode="00:00:01", fps=10).get_seconds(), pytest.approx(1.0) + assert FrameTimecode(timecode="00:01:00.000", fps=1).get_seconds(), pytest.approx(60.0) def test_get_timecode(): - ''' Test FrameTimecode get_timecode() method. 
'''
-    assert FrameTimecode(timecode=1.0, fps=1.0).get_timecode() == '00:00:01.000'
-    assert FrameTimecode(timecode=60.117, fps=60.0).get_timecode() == '00:01:00.117'
-    assert FrameTimecode(timecode=3600.234, fps=29.97).get_timecode() == '01:00:00.234'
+    """Test FrameTimecode get_timecode() method."""
+    assert FrameTimecode(timecode=1.0, fps=1.0).get_timecode() == "00:00:01.000"
+    assert FrameTimecode(timecode=60.117, fps=60.0).get_timecode() == "00:01:00.117"
+    assert FrameTimecode(timecode=3600.234, fps=29.97).get_timecode() == "01:00:00.234"

-    assert FrameTimecode(timecode='00:00:02.0000', fps=1).get_timecode() == '00:00:02.000'
-    assert FrameTimecode(timecode='00:00:00.5', fps=10).get_timecode() == '00:00:00.500'
-    assert FrameTimecode(timecode='00:00:01.501', fps=10).get_timecode() == '00:00:01.500'
-    assert FrameTimecode(timecode='00:01:00.000', fps=1).get_timecode() == '00:01:00.000'
+    assert FrameTimecode(timecode="00:00:02.0000", fps=1).get_timecode() == "00:00:02.000"
+    assert FrameTimecode(timecode="00:00:00.5", fps=10).get_timecode() == "00:00:00.500"
+    assert FrameTimecode(timecode="00:00:01.501", fps=10).get_timecode() == "00:00:01.500"
+    assert FrameTimecode(timecode="00:01:00.000", fps=1).get_timecode() == "00:01:00.000"


 def test_equality():
-    ''' Test FrameTimecode equality (==, __eq__) operator. '''
+    """Test FrameTimecode equality (==, __eq__) operator."""
     x = FrameTimecode(timecode=1.0, fps=10.0)
     assert x == x
     assert x == FrameTimecode(timecode=1.0, fps=10.0)
-    assert not x != FrameTimecode(timecode=1.0, fps=10.0)
+    assert x == FrameTimecode(timecode=1.0, fps=10.0)
+    assert x != FrameTimecode(timecode=10.0, fps=10.0)
     assert x != FrameTimecode(timecode=10.0, fps=10.0)
-    assert not x == FrameTimecode(timecode=10.0, fps=10.0)
     # Comparing FrameTimecodes with different framerates raises a TypeError.
     with pytest.raises(TypeError):
-        x == FrameTimecode(timecode=1.0, fps=100.0)
+        assert x == FrameTimecode(timecode=1.0, fps=100.0)
     with pytest.raises(TypeError):
-        x == FrameTimecode(timecode=1.0, fps=10.1)
+        assert x == FrameTimecode(timecode=1.0, fps=10.1)

     assert x == FrameTimecode(x)
     assert x == FrameTimecode(1.0, x)
     assert x == FrameTimecode(10, x)

-    assert x == '00:00:01'
-    assert x == '00:00:01.0'
-    assert x == '00:00:01.00'
-    assert x == '00:00:01.000'
-    assert x == '00:00:01.0000'
-    assert x == '00:00:01.00000'
+    assert x == "00:00:01"
+    assert x == "00:00:01.0"
+    assert x == "00:00:01.00"
+    assert x == "00:00:01.000"
+    assert x == "00:00:01.0000"
+    assert x == "00:00:01.00000"
     assert x == 10
     assert x == 1.0

     with pytest.raises(ValueError):
-        x == '0x'
+        assert x == "0x"
     with pytest.raises(ValueError):
-        x == 'x00:00:00.000'
+        assert x == "x00:00:00.000"
     with pytest.raises(TypeError):
-        x == [0]
+        assert x == [0]
     with pytest.raises(TypeError):
-        x == (0,)
+        assert x == (0,)
     with pytest.raises(TypeError):
-        x == [0, 1, 2, 3]
+        assert x == [0, 1, 2, 3]
     with pytest.raises(TypeError):
-        x == {0: 0}
+        assert x == {0: 0}

-    assert FrameTimecode(timecode='00:00:00.5', fps=10) == '00:00:00.500'
-    assert FrameTimecode(timecode='00:00:01.500', fps=10) == '00:00:01.500'
-    assert FrameTimecode(timecode='00:00:01.500', fps=10) == '00:00:01.501'
-    assert FrameTimecode(timecode='00:00:01.500', fps=10) == '00:00:01.502'
-    assert FrameTimecode(timecode='00:00:01.500', fps=10) == '00:00:01.508'
-    assert FrameTimecode(timecode='00:00:01.500', fps=10) == '00:00:01.509'
-    assert FrameTimecode(timecode='00:00:01.519', fps=10) == '00:00:01.510'
+    assert FrameTimecode(timecode="00:00:00.5", fps=10) == "00:00:00.500"
+    assert FrameTimecode(timecode="00:00:01.500", fps=10) == "00:00:01.500"
+    assert FrameTimecode(timecode="00:00:01.500", fps=10) == "00:00:01.501"
+    assert FrameTimecode(timecode="00:00:01.500", fps=10) == "00:00:01.502"
+    assert FrameTimecode(timecode="00:00:01.500", fps=10) == "00:00:01.508"
+    assert FrameTimecode(timecode="00:00:01.500", fps=10) == "00:00:01.509"
+    assert FrameTimecode(timecode="00:00:01.519", fps=10) == "00:00:01.510"


 def test_addition():
-    ''' Test FrameTimecode addition (+/+=, __add__/__iadd__) operator. '''
+    """Test FrameTimecode addition (+/+=, __add__/__iadd__) operator."""
     x = FrameTimecode(timecode=1.0, fps=10.0)
     assert x + 1 == FrameTimecode(timecode=1.1, fps=10.0)
     assert x + 1 == FrameTimecode(1.1, x)
     assert x + 10 == 20
     assert x + 10 == 2.0
-    assert x + 10 == '00:00:02.000'
+    assert x + 10 == "00:00:02.000"

     with pytest.raises(TypeError):
-        FrameTimecode('00:00:02.000', fps=20.0) == x + 10
+        assert FrameTimecode("00:00:02.000", fps=20.0) == x + 10


 def test_subtraction():
-    ''' Test FrameTimecode subtraction (-/-=, __sub__) operator. '''
+    """Test FrameTimecode subtraction (-/-=, __sub__) operator."""
     x = FrameTimecode(timecode=1.0, fps=10.0)
     assert (x - 1) == FrameTimecode(timecode=0.9, fps=10.0)
     assert x - 2 == FrameTimecode(0.8, x)
@@ -264,12 +262,12 @@ def test_subtraction():
     assert x - 1 == FrameTimecode(timecode=0.9, fps=10.0)

     with pytest.raises(TypeError):
-        FrameTimecode('00:00:02.000', fps=20.0) == x - 10
+        assert FrameTimecode("00:00:02.000", fps=20.0) == x - 10


 @pytest.mark.parametrize("frame_num,fps", [(1, 1), (61, 14), (29, 25), (126, 24000 / 1001.0)])
 def test_identity(frame_num, fps):
-    ''' Test FrameTimecode values, when used in init return the same values '''
+    """Test FrameTimecode values, when used in init return the same values"""
     frame_time_code = FrameTimecode(frame_num, fps=fps)
     assert FrameTimecode(frame_time_code) == frame_time_code
     assert FrameTimecode(frame_time_code.get_frames(), fps=fps) == frame_time_code
diff --git a/tests/test_platform.py b/tests/test_platform.py
index 4f90ff1e..319a54ea 100644
--- a/tests/test_platform.py
+++ b/tests/test_platform.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -10,31 +9,32 @@
 # PySceneDetect is licensed under the BSD 3-Clause License; see the
 # included LICENSE file, or visit one of the above pages for details.
 #
-""" PySceneDetect scenedetect.platform Tests
+"""PySceneDetect scenedetect.platform Tests

 This file includes unit tests for the scenedetect.platform module, containing
 all platform/library/OS-specific compatibility fixes.
 """

 import platform
+
 import pytest

 from scenedetect.platform import CommandTooLong, invoke_command


 def test_invoke_command():
-    """ Ensures the function exists and is callable without throwing
-    an exception. """
-    if platform.system() == 'Windows':
-        invoke_command(['cmd'])
+    """Ensures the function exists and is callable without throwing
+    an exception."""
+    if platform.system() == "Windows":
+        invoke_command(["cmd"])
     else:
-        invoke_command(['echo'])
+        invoke_command(["echo"])


 def test_long_command():
-    """ [Windows Only] Ensures that a command string too large to be handled
+    """[Windows Only] Ensures that a command string too large to be handled
     is translated to the correct exception for error handling.
     """
-    if platform.system() == 'Windows':
+    if platform.system() == "Windows":
         with pytest.raises(CommandTooLong):
-            invoke_command('x' * 2**15)
+            invoke_command("x" * 2**15)
diff --git a/tests/test_scene_manager.py b/tests/test_scene_manager.py
index c974398d..9e19f4c1 100644
--- a/tests/test_scene_manager.py
+++ b/tests/test_scene_manager.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -16,8 +15,6 @@
 which applies SceneDetector algorithms on VideoStream backends.
 """

-# pylint: disable=invalid-name
-
 import glob
 import os
 import os.path
@@ -38,8 +35,8 @@ def test_scene_list(test_video_file):
     sm.add_detector(ContentDetector())

     video_fps = video.frame_rate
-    start_time = FrameTimecode('00:00:05', video_fps)
-    end_time = FrameTimecode('00:00:15', video_fps)
+    start_time = FrameTimecode("00:00:05", video_fps)
+    end_time = FrameTimecode("00:00:15", video_fps)

     assert end_time.get_frames() > start_time.get_frames()
@@ -93,22 +90,27 @@ def test_save_images(test_video_file):
     sm = SceneManager()
     sm.add_detector(ContentDetector())

-    image_name_glob = 'scenedetect.tempfile.*.jpg'
-    image_name_template = ('scenedetect.tempfile.'
-                           '$SCENE_NUMBER.$IMAGE_NUMBER.$FRAME_NUMBER.'
-                           '$TIMESTAMP_MS.$TIMECODE')
+    image_name_glob = "scenedetect.tempfile.*.jpg"
+    image_name_template = (
+        "scenedetect.tempfile."
+        "$SCENE_NUMBER.$IMAGE_NUMBER.$FRAME_NUMBER."
+        "$TIMESTAMP_MS.$TIMECODE"
+    )
     try:
         video_fps = video.frame_rate
-        scene_list = [(FrameTimecode(start, video_fps), FrameTimecode(end, video_fps))
-                      for start, end in [(0, 100), (200, 300), (300, 400)]]
+        scene_list = [
+            (FrameTimecode(start, video_fps), FrameTimecode(end, video_fps))
+            for start, end in [(0, 100), (200, 300), (300, 400)]
+        ]

         image_filenames = save_images(
             scene_list=scene_list,
             video=video,
             num_images=3,
-            image_extension='jpg',
-            image_name_template=image_name_template)
+            image_extension="jpg",
+            image_name_template=image_name_template,
+        )

         # Ensure images got created, and the proper number got created.
         total_images = 0
@@ -128,19 +130,22 @@ def test_save_images(test_video_file):
 def test_save_images_zero_width_scene(test_video_file):
     """Test scenedetect.scene_manager.save_images guards against zero width scenes."""
     video = VideoStreamCv2(test_video_file)
-    image_name_glob = 'scenedetect.tempfile.*.jpg'
-    image_name_template = 'scenedetect.tempfile.$SCENE_NUMBER.$IMAGE_NUMBER'
+    image_name_glob = "scenedetect.tempfile.*.jpg"
+    image_name_template = "scenedetect.tempfile.$SCENE_NUMBER.$IMAGE_NUMBER"
     try:
         video_fps = video.frame_rate
-        scene_list = [(FrameTimecode(start, video_fps), FrameTimecode(end, video_fps))
-                      for start, end in [(0, 0), (1, 1), (2, 3)]]
+        scene_list = [
+            (FrameTimecode(start, video_fps), FrameTimecode(end, video_fps))
+            for start, end in [(0, 0), (1, 1), (2, 3)]
+        ]
         NUM_IMAGES = 10
         image_filenames = save_images(
             scene_list=scene_list,
             video=video,
             num_images=10,
-            image_extension='jpg',
-            image_name_template=image_name_template)
+            image_extension="jpg",
+            image_name_template=image_name_template,
+        )
         assert len(image_filenames) == 3
         assert all(len(image_filenames[scene]) == NUM_IMAGES for scene in image_filenames)
         total_images = 0
@@ -156,8 +161,7 @@ def test_save_images_zero_width_scene(test_video_file):

 # TODO: This would be more readable if the callbacks were defined within the test case, e.g.
 # split up the callback function and callback lambda test cases.
-# pylint: disable=unused-argument, unnecessary-lambda
-class FakeCallback(object):
+class FakeCallback:
     """Fake callback used for testing. Tracks the frame numbers the callback was invoked with."""

     def __init__(self):
@@ -180,9 +184,6 @@ def _callback(self, image, frame_num):
         self.scene_list.append(frame_num)


-# pylint: enable=unused-argument, unnecessary-lambda
-
-
 def test_detect_scenes_callback(test_video_file):
     """Test SceneManager detect_scenes method with a callback function.
@@ -195,13 +196,14 @@ def test_detect_scenes_callback(test_video_file):
     fake_callback = FakeCallback()

     video_fps = video.frame_rate
-    start_time = FrameTimecode('00:00:05', video_fps)
-    end_time = FrameTimecode('00:00:15', video_fps)
+    start_time = FrameTimecode("00:00:05", video_fps)
+    end_time = FrameTimecode("00:00:15", video_fps)
     video.seek(start_time)
     sm.auto_downscale = True

     _ = sm.detect_scenes(
-        video=video, end_time=end_time, callback=fake_callback.get_callback_lambda())
+        video=video, end_time=end_time, callback=fake_callback.get_callback_lambda()
+    )
     scene_list = sm.get_scene_list()
     assert [start for start, end in scene_list] == TEST_VIDEO_START_FRAMES_ACTUAL
     assert fake_callback.scene_list == TEST_VIDEO_START_FRAMES_ACTUAL[1:]
@@ -231,13 +233,14 @@ def test_detect_scenes_callback_adaptive(test_video_file):
     fake_callback = FakeCallback()

     video_fps = video.frame_rate
-    start_time = FrameTimecode('00:00:05', video_fps)
-    end_time = FrameTimecode('00:00:15', video_fps)
+    start_time = FrameTimecode("00:00:05", video_fps)
+    end_time = FrameTimecode("00:00:15", video_fps)
     video.seek(start_time)
     sm.auto_downscale = True

     _ = sm.detect_scenes(
-        video=video, end_time=end_time, callback=fake_callback.get_callback_lambda())
+        video=video, end_time=end_time, callback=fake_callback.get_callback_lambda()
+    )
     scene_list = sm.get_scene_list()
     assert [start for start, end in scene_list] == TEST_VIDEO_START_FRAMES_ACTUAL
     assert fake_callback.scene_list == TEST_VIDEO_START_FRAMES_ACTUAL[1:]
diff --git a/tests/test_stats_manager.py b/tests/test_stats_manager.py
index 9c2f0af6..3701fe5c 100644
--- a/tests/test_stats_manager.py
+++ b/tests/test_stats_manager.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -10,7 +9,7 @@
 # PySceneDetect is licensed under the BSD 3-Clause License; see the
 # included LICENSE file, or visit one of the above pages for details.
 #
-""" PySceneDetect scenedetect.stats_manager Tests
+"""PySceneDetect scenedetect.stats_manager Tests

 This file includes unit tests for the scenedetect.stats_manager module (specifically, the
 StatsManager object, used to coordinate caching of frame metrics to/from a CSV
@@ -27,44 +26,42 @@
 These files will be deleted, if possible, after the tests are completed running.
 """

-#pylint: disable=protected-access
-
 import csv
 import os
 import random

 import pytest

-from scenedetect.scene_manager import SceneManager
-from scenedetect.frame_timecode import FrameTimecode
 from scenedetect.backends.opencv import VideoStreamCv2
 from scenedetect.detectors import ContentDetector
-
-from scenedetect.stats_manager import StatsManager
-from scenedetect.stats_manager import StatsFileCorrupt
-
-from scenedetect.stats_manager import COLUMN_NAME_FRAME_NUMBER
-from scenedetect.stats_manager import COLUMN_NAME_TIMECODE
+from scenedetect.frame_timecode import FrameTimecode
+from scenedetect.scene_manager import SceneManager
+from scenedetect.stats_manager import (
+    COLUMN_NAME_FRAME_NUMBER,
+    COLUMN_NAME_TIMECODE,
+    StatsFileCorrupt,
+    StatsManager,
+)

 # TODO(v1.0): use https://docs.pytest.org/en/6.2.x/tmpdir.html
-TEST_STATS_FILES = ['TEST_STATS_FILE'] * 4
+TEST_STATS_FILES = ["TEST_STATS_FILE"] * 4
 TEST_STATS_FILES = [
-    '%s_%012d.csv' % (stats_file, random.randint(0, 10**12)) for stats_file in TEST_STATS_FILES
+    "%s_%012d.csv" % (stats_file, random.randint(0, 10**12)) for stats_file in TEST_STATS_FILES
 ]


 def teardown_module():
-    """ Removes any created stats files, if any. """
+    """Removes any created stats files, if any."""
     for stats_file in TEST_STATS_FILES:
         if os.path.exists(stats_file):
             os.remove(stats_file)


 def test_metrics():
-    """ Test StatsManager metric registration/setting/getting with a set of pre-defined
+    """Test StatsManager metric registration/setting/getting with a set of pre-defined
     key-value pairs (metric_dict).
     """
-    metric_dict = {'some_metric': 1.2345, 'another_metric': 6.7890}
+    metric_dict = {"some_metric": 1.2345, "another_metric": 6.7890}
     metric_keys = list(metric_dict.keys())

     stats = StatsManager()
@@ -85,12 +82,13 @@ def test_metrics():
     assert stats.metrics_exist(frame_key, metric_keys)
     assert stats.metrics_exist(frame_key, metric_keys[1:])

-    assert stats.get_metrics(
-        frame_key, metric_keys) == [metric_dict[metric_key] for metric_key in metric_keys]
+    assert stats.get_metrics(frame_key, metric_keys) == [
+        metric_dict[metric_key] for metric_key in metric_keys
+    ]


 def test_detector_metrics(test_video_file):
-    """ Test passing StatsManager to a SceneManager and using it for storing the frame metrics
+    """Test passing StatsManager to a SceneManager and using it for storing the frame metrics
     from a ContentDetector.
     """
     video = VideoStreamCv2(test_video_file)
@@ -98,7 +96,7 @@ def test_detector_metrics(test_video_file):
     scene_manager = SceneManager(stats_manager)
     scene_manager.add_detector(ContentDetector())
     video_fps = video.frame_rate
-    duration = FrameTimecode('00:00:05', video_fps)
+    duration = FrameTimecode("00:00:05", video_fps)
     scene_manager.auto_downscale = True
     scene_manager.detect_scenes(video=video, duration=duration)
     # Check that metrics were written to the StatsManager.
@@ -106,8 +104,9 @@ def test_detector_metrics(test_video_file):


 def test_load_empty_stats():
-    """ Test loading an empty stats file, ensuring it results in no errors. """
-    open(TEST_STATS_FILES[0], 'w').close()
+    """Test loading an empty stats file, ensuring it results in no errors."""
+    with open(TEST_STATS_FILES[0], "w"):
+        pass
     stats_manager = StatsManager()
     stats_manager.load_from_csv(TEST_STATS_FILES[0])
@@ -119,14 +118,13 @@ def test_save_no_detect_scenes():


 def test_load_hardcoded_file():
-    """ Test loading a stats file with some hard-coded data generated by this test case.
-    """
+    """Test loading a stats file with some hard-coded data generated by this test case."""
     stats_manager = StatsManager()

-    with open(TEST_STATS_FILES[0], 'w') as stats_file:
-
-        stats_writer = csv.writer(stats_file, lineterminator='\n')
+    with open(TEST_STATS_FILES[0], "w") as stats_file:
+        stats_writer = csv.writer(stats_file, lineterminator="\n")

-        some_metric_key = 'some_metric'
+        some_metric_key = "some_metric"
         some_metric_value = 1.2
         some_frame_key = 100
         base_timecode = FrameTimecode(0, 29.97)
@@ -135,20 +133,20 @@ def test_load_hardcoded_file():
         # Write out a valid file.
         stats_writer.writerow([COLUMN_NAME_FRAME_NUMBER, COLUMN_NAME_TIMECODE, some_metric_key])
         stats_writer.writerow(
-            [some_frame_key + 1,
-             some_frame_timecode.get_timecode(),
-             str(some_metric_value)])
+            [some_frame_key + 1, some_frame_timecode.get_timecode(), str(some_metric_value)]
+        )

     stats_manager.load_from_csv(TEST_STATS_FILES[0])

     # Check that we decoded the correct values.
     assert stats_manager.metrics_exist(some_frame_key, [some_metric_key])
-    assert stats_manager.get_metrics(some_frame_key,
-                                     [some_metric_key])[0] == pytest.approx(some_metric_value)
+    assert stats_manager.get_metrics(some_frame_key, [some_metric_key])[0] == pytest.approx(
+        some_metric_value
+    )


 def test_save_load_from_video(test_video_file):
-    """ Test generating and saving some frame metrics from TEST_VIDEO_FILE to a file on disk, and
+    """Test generating and saving some frame metrics from TEST_VIDEO_FILE to a file on disk, and
     loading the file back to ensure the loaded frame metrics agree with those that were saved.
     """
     video = VideoStreamCv2(test_video_file)
@@ -158,7 +156,7 @@ def test_save_load_from_video(test_video_file):
     scene_manager.add_detector(ContentDetector())

     video_fps = video.frame_rate
-    duration = FrameTimecode('00:00:05', video_fps)
+    duration = FrameTimecode("00:00:05", video_fps)

     scene_manager.auto_downscale = True
     scene_manager.detect_scenes(video, duration=duration)
@@ -181,14 +179,14 @@ def test_save_load_from_video(test_video_file):


 def test_load_corrupt_stats():
-    """ Test loading a corrupted stats file created by outputting data in the wrong format. """
+    """Test loading a corrupted stats file created by outputting data in the wrong format."""
     stats_manager = StatsManager()

-    with open(TEST_STATS_FILES[0], 'wt') as stats_file:
-        stats_writer = csv.writer(stats_file, lineterminator='\n')
+    with open(TEST_STATS_FILES[0], "w") as stats_file:
+        stats_writer = csv.writer(stats_file, lineterminator="\n")

-        some_metric_key = 'some_metric'
+        some_metric_key = "some_metric"
         some_metric_value = str(1.2)
         some_frame_key = 100
         base_timecode = FrameTimecode(0, 29.97)
@@ -200,7 +198,8 @@ def test_load_corrupt_stats():
         # Swapped timecode & frame number.
         stats_writer.writerow([COLUMN_NAME_TIMECODE, COLUMN_NAME_FRAME_NUMBER, some_metric_key])
         stats_writer.writerow(
-            [some_frame_key, some_frame_timecode.get_timecode(), some_metric_value])
+            [some_frame_key, some_frame_timecode.get_timecode(), some_metric_value]
+        )

         stats_file.close()
diff --git a/tests/test_video_splitter.py b/tests/test_video_splitter.py
index 2cd77cbb..7fefefbb 100644
--- a/tests/test_video_splitter.py
+++ b/tests/test_video_splitter.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -12,14 +11,17 @@
 #
 """Tests for scenedetect.video_splitter module."""

-# pylint: disable=no-self-use,missing-function-docstring
-
 from pathlib import Path
+
 import pytest

 from scenedetect import open_video
-from scenedetect.video_splitter import (split_video_ffmpeg, is_ffmpeg_available, SceneMetadata,
-                                        VideoMetadata)
+from scenedetect.video_splitter import (
+    SceneMetadata,
+    VideoMetadata,
+    is_ffmpeg_available,
+    split_video_ffmpeg,
+)


 @pytest.mark.skipif(condition=not is_ffmpeg_available(), reason="ffmpeg is not available")
@@ -35,7 +37,7 @@ def test_split_video_ffmpeg_default(tmp_path, test_movie_clip):
     # The default filename format should be VIDEO_NAME-Scene-SCENE_NUMBER.mp4.
     video_name = Path(test_movie_clip).stem
     entries = sorted(tmp_path.glob(f"{video_name}-Scene-*"))
-    assert (len(entries) == len(scenes))
+    assert len(entries) == len(scenes)


 @pytest.mark.skipif(condition=not is_ffmpeg_available(), reason="ffmpeg is not available")
@@ -55,7 +57,7 @@ def name_formatter(video: VideoMetadata, scene: SceneMetadata):
     assert split_video_ffmpeg(test_movie_clip, scenes, tmp_path, formatter=name_formatter) == 0
     video_name = Path(test_movie_clip).stem
     entries = sorted(tmp_path.glob(f"abc{video_name}-123-*"))
-    assert (len(entries) == len(scenes))
+    assert len(entries) == len(scenes)


 # TODO: Add tests for `split_video_mkvmerge`.
diff --git a/tests/test_video_stream.py b/tests/test_video_stream.py
index 7e952881..c3cc5127 100644
--- a/tests/test_video_stream.py
+++ b/tests/test_video_stream.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # PySceneDetect: Python-Based Video Scene Detector
 # -------------------------------------------------------------------
@@ -10,27 +9,24 @@
 # PySceneDetect is licensed under the BSD 3-Clause License; see the
 # included LICENSE file, or visit one of the above pages for details.
 #
-""" PySceneDetect scenedetect.video_stream Tests
+"""PySceneDetect scenedetect.video_stream Tests

 This file includes unit tests for the scenedetect.video_stream module, as well as the video
 backends implemented in scenedetect.backends.  These tests enforce a consistent interface across
 all supported backends, and verify that they are functionally equivalent where possible.
 """
-# pylint: disable=no-self-use,missing-function-docstring
-
+import os.path
 from dataclasses import dataclass
 from typing import List, Type
-import os.path

 import numpy
 import pytest

-from scenedetect.video_stream import VideoStream, SeekError
+from scenedetect.backends import VideoStreamAv, VideoStreamMoviePy
 from scenedetect.backends.opencv import VideoStreamCv2
-from scenedetect.backends import VideoStreamAv
-from scenedetect.backends import VideoStreamMoviePy
 from scenedetect.video_manager import VideoManager
+from scenedetect.video_stream import SeekError, VideoStream

 # Accuracy a framerate is checked to for testing purposes.
 FRAMERATE_TOLERANCE = 0.001
@@ -48,7 +44,7 @@ def calculate_frame_delta(frame_a, frame_b, roi=None) -> float:
     if roi:
-        assert False  # TODO
+        raise RuntimeError("TODO")
     assert frame_a.shape == frame_b.shape
     num_pixels = frame_a.shape[0] * frame_a.shape[1]
     return numpy.sum(numpy.abs(frame_b - frame_a)) / num_pixels
@@ -56,26 +52,30 @@ def calculate_frame_delta(frame_a, frame_b, roi=None) -> float:

 # TODO: Reduce code duplication here and in `conftest.py`
 def get_absolute_path(relative_path: str) -> str:
-    """ Returns the absolute path to a (relative) path of a file that
+    """Returns the absolute path to a (relative) path of a file that
     should exist within the tests/ directory.

     Throws FileNotFoundError if the file could not be found.
     """
     abs_path = os.path.join(os.path.abspath(os.path.dirname(__file__)), relative_path)
     if not os.path.exists(abs_path):
-        raise FileNotFoundError("""
+        raise FileNotFoundError(
+            """
Test video file (%s) must be present to run test case. This file can be obtained by running the following commands from the root of the repository:

git fetch --depth=1 https://github.com/Breakthrough/PySceneDetect.git refs/heads/resources:refs/remotes/origin/resources
git checkout refs/remotes/origin/resources -- tests/resources/
git reset
-""" % relative_path)
+"""
+            % relative_path
+        )
     return abs_path


 @dataclass
 class VideoParameters:
     """Properties for each input a VideoStream is tested against."""
+
     path: str
     height: int
     width: int
@@ -120,12 +120,17 @@ def get_test_video_params() -> List[VideoParameters]:
     pytest.mark.parametrize(
         "vs_type",
         list(
-            filter(lambda x: x is not None, [
-                VideoStreamCv2,
-                VideoStreamAv,
-                VideoStreamMoviePy,
-                VideoManager,
-            ]))),
+            filter(
+                lambda x: x is not None,
+                [
+                    VideoStreamCv2,
+                    VideoStreamAv,
+                    VideoStreamMoviePy,
+                    VideoManager,
+                ],
+            )
+        ),
+    ),
     pytest.mark.filterwarnings(MOVIEPY_WARNING_FILTER),
 ]
@@ -141,10 +146,11 @@ def test_properties(self, vs_type: Type[VideoStream], test_video: VideoParameters):
         assert stream.frame_rate == pytest.approx(test_video.frame_rate, FRAMERATE_TOLERANCE)
         assert stream.duration.get_frames() == test_video.total_frames
         file_name = os.path.basename(test_video.path)
-        last_dot_pos = file_name.rfind('.')
+        last_dot_pos = file_name.rfind(".")
         assert stream.name == file_name[:last_dot_pos]
-        assert stream.aspect_ratio == pytest.approx(test_video.aspect_ratio,
-                                                    PIXEL_ASPECT_RATIO_TOLERANCE)
+        assert stream.aspect_ratio == pytest.approx(
+            test_video.aspect_ratio, PIXEL_ASPECT_RATIO_TOLERANCE
+        )

     def test_read(self, vs_type: Type[VideoStream], test_video: VideoParameters):
         """Validate basic `read` functionality."""
@@ -191,7 +197,8 @@ def test_time_invariants(self, vs_type: Type[VideoStream], test_video: VideoParameters):
             assert stream.frame_number == i
             assert stream.position == stream.base_timecode + (i - 1)
             assert stream.position_ms == pytest.approx(
-                1000.0 * (i - 1) / float(stream.frame_rate), abs=TIME_TOLERANCE_MS)
+                1000.0 * (i - 1) / float(stream.frame_rate), abs=TIME_TOLERANCE_MS
+            )

     def test_reset(self, vs_type: Type[VideoStream], test_video: VideoParameters):
         """Test `reset()` functions as expected."""
@@ -214,12 +221,14 @@ def test_seek(self, vs_type: Type[VideoStream], test_video: VideoParameters):
         assert stream.frame_number == 200
         assert stream.position == stream.base_timecode + 199
         assert stream.position_ms == pytest.approx(
-            1000.0 * (199.0 / float(stream.frame_rate)), abs=TIME_TOLERANCE_MS)
+            1000.0 * (199.0 / float(stream.frame_rate)), abs=TIME_TOLERANCE_MS
+        )
         stream.read()
         assert stream.frame_number == 201
         assert stream.position == stream.base_timecode + 200
         assert stream.position_ms == pytest.approx(
-            1000.0 * (200.0 / float(stream.frame_rate)), abs=TIME_TOLERANCE_MS)
+            1000.0 * (200.0 / float(stream.frame_rate)), abs=TIME_TOLERANCE_MS
+        )

         # Seek to a time in seconds (float).
         stream.seek(2.0)
@@ -228,7 +237,8 @@ def test_seek(self, vs_type: Type[VideoStream], test_video: VideoParameters):
         # starts counting from zero. This should eventually be changed.
         assert stream.position == (stream.base_timecode + 2.0) - 1
         assert stream.position_ms == pytest.approx(
-            2000.0 - (1000.0 / stream.frame_rate), abs=1000.0 / stream.frame_rate)
+            2000.0 - (1000.0 / stream.frame_rate), abs=1000.0 / stream.frame_rate
+        )
         stream.read()
         assert stream.frame_number == 1 + round(stream.frame_rate * 2.0)
         assert stream.position == stream.base_timecode + 2.0
@@ -241,7 +251,8 @@ def test_seek(self, vs_type: Type[VideoStream], test_video: VideoParameters):
         # starts counting from zero. This should eventually be changed.
         assert stream.position == (stream.base_timecode + 2.0) - 1
         assert stream.position_ms == pytest.approx(
-            2000.0 - (1000.0 / stream.frame_rate), abs=1000.0 / stream.frame_rate)
+            2000.0 - (1000.0 / stream.frame_rate), abs=1000.0 / stream.frame_rate
+        )
         stream.read()
         assert stream.frame_number == 1 + round(stream.frame_rate * 2.0)
         assert stream.position == stream.base_timecode + 2.0
@@ -265,7 +276,8 @@ def test_seek_start(self, vs_type: Type[VideoStream], test_video: VideoParameters):
             assert stream.frame_number == i
             assert stream.position == stream.base_timecode + (i - 1)
             assert stream.position_ms == pytest.approx(
-                1000.0 * (i - 1) / float(stream.frame_rate), abs=TIME_TOLERANCE_MS)
+                1000.0 * (i - 1) / float(stream.frame_rate), abs=TIME_TOLERANCE_MS
+            )
         stream.seek(0)
         assert stream.frame_number == 0
         assert stream.position == stream.base_timecode
@@ -297,7 +309,7 @@ def test_read_eof(self, vs_type: Type[VideoStream], test_video: VideoParameters):

     def test_seek_past_eof(self, vs_type: Type[VideoStream], test_video: VideoParameters):
         """Validate calling `seek()` to offset past end of video."""
         if vs_type == VideoManager:
-            pytest.skip(reason='VideoManager does not have compliant end-of-video seek behaviour.')
+            pytest.skip(reason="VideoManager does not have compliant end-of-video seek behaviour.")
         stream = vs_type(test_video.path)

         # Seek to a large seek offset past the end of the video. Some backends only support 32-bit
         # frame numbers so that's our max offset. Certain backends disallow seek offsets past EOF,
@@ -335,13 +347,13 @@ def test_seek_invalid(self, vs_type: Type[VideoStream], test_video: VideoParameters):
 def test_invalid_path(vs_type: Type[VideoStream]):
     """Ensure correct exception is thrown if the path does not exist."""
     with pytest.raises(OSError):
-        _ = vs_type('this_path_should_not_exist.mp4')
+        _ = vs_type("this_path_should_not_exist.mp4")


 def test_corrupt_video(vs_type: Type[VideoStream], corrupt_video_file: str):
     """Test that backend handles video with corrupt frame gracefully with defaults."""
     if vs_type == VideoManager:
-        pytest.skip(reason='VideoManager does not support handling corrupt videos.')
+        pytest.skip(reason="VideoManager does not support handling corrupt videos.")

     stream = vs_type(corrupt_video_file)
diff --git a/website/pages/changelog.md b/website/pages/changelog.md
index 6e58c1a9..d2e4c835 100644
--- a/website/pages/changelog.md
+++ b/website/pages/changelog.md
@@ -4,6 +4,11 @@ Releases

 ## PySceneDetect 0.6

+### 0.6.5 (TBD)
+
+ - [bugfix] Fix new detectors not working with `default-detector` config option
+ - [bugfix] Fix SyntaxWarning due to incorrect escaping [#400](https://github.com/Breakthrough/PySceneDetect/pull/295) [#400](https://github.com/Breakthrough/PySceneDetect/issues/35)
+
 ### 0.6.4 (June 10, 2024)

 #### Release Notes
@@ -30,6 +35,11 @@ Feedback on the new detection methods and their default values is most welcome.
 - [bugfix] Fix crash when decoded frames have incorrect resolution and log error instead [#319](https://github.com/Breakthrough/PySceneDetect/issues/319)
 - [bugfix] Update default ffmpeg stream mapping from `-map 0` to `-map 0:v:0 -map 0:a? -map 0:s?` [#392](https://github.com/Breakthrough/PySceneDetect/issues/392)

+#### 0.6.4.1 (TBD)
+
+ - [bugfix] Fix `default-detector` config option not working with new detectors
+ - [bugfix] Fix SyntaxWarning due to incorrect string escaping in command-line (#400)
+
 ### 0.6.3 (March 9, 2024)

diff --git a/website/pages/contributing.md b/website/pages/contributing.md
index f45b753f..c9662e3d 100644
--- a/website/pages/contributing.md
+++ b/website/pages/contributing.md
@@ -12,8 +12,8 @@ Development of PySceneDetect happens on [github.com/Breakthrough/PySceneDetect](
 The following checklist covers the basics of pre-submission requirements:

  - Code passes all unit tests (run `pytest`)
- - Code is formatted (run `python -m yapf -i -r scenedetect/ tests/` to format in place)
- - Generally follows the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html)
+ - Code passes static analysis and formatting checks (`ruff check` and `ruff format`)
+ - Follows the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html)

 Note that PySceneDetect is released under the BSD 3-Clause license, and submitted code should comply with this license (see [License & Copyright Information](copyright.md) for details).
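A note on the recurring mechanical change in the test diffs above — wrapping bare comparisons like `x == FrameTimecode(...)` in `assert` inside `pytest.raises` blocks: a bare `x == y` statement is a "useless comparison" that linters flag (ruff's B015 rule), while `assert x == y` makes the intent explicit and still evaluates `__eq__`, so the expected exception is raised either way. A minimal stdlib-only sketch, using a hypothetical `Timecode` stand-in rather than the real `scenedetect.FrameTimecode`, demonstrates why the rewrite is behavior-preserving:

```python
# Hypothetical stand-in for FrameTimecode: __eq__ raises TypeError when
# framerates differ, mirroring the behavior the tests above rely on.
class Timecode:
    def __init__(self, frames, fps):
        self.frames, self.fps = frames, fps

    def __eq__(self, other):
        if isinstance(other, Timecode):
            if self.fps != other.fps:
                raise TypeError("framerates differ")
            return self.frames == other.frames
        return NotImplemented


x = Timecode(10, fps=10.0)

# With `assert`, the comparison is still evaluated first, so the TypeError
# fires before the assert could ever fail -- exactly what pytest.raises
# checks for in the patched tests.
try:
    assert x == Timecode(10, fps=100.0)
    raised = False
except TypeError:
    raised = True
```

In other words, `pytest.raises(TypeError)` sees the same exception whether or not the comparison is wrapped in `assert`; the patch only silences the lint warning.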