
Releases: talmolab/sleap

SLEAP v1.1.3

26 Mar 04:02
08bee64

Release of SLEAP v1.1.3.

Includes several bug fixes and documentation improvements.

Full changelog

  • #498: Fix training when frames have no instances (fixes #480).
  • #503: Fix errors occurring sometimes after training (fixes part of #500)
  • #505: Add miscellaneous utilities/fixes
    • Add --open-in-gui flag to sleap-track to launch GUI on predictions
    • Add PredictedInstance.numpy() to convert predicted instances to numpy arrays
    • Fix visualization during single instance training
    • Add Skeleton.find_neighbors() to return parents and children of a node
    • Fix serialization of MediaVideo after loading from sleap.load_video() or sleap.load_file().
    • Add Instance.fill_missing() to initialize missing nodes (e.g., after importing from DLC)
  • #501/#519/#524: Fix multi-size video support in training and inference (fixes #510, #516, #517)
  • #523: Fix using sleap-track with zipped model folders
  • Revamped docs and notebooks to clarify the Colab-based workflow (especially Training and inference on your own data using Google Drive), which are now linked from the GUI as well
  • Fix regression in v1.1.2 (#528)
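The new `Instance.fill_missing()` and `PredictedInstance.numpy()` utilities make it easy to get fixed-shape arrays even when some nodes were never labeled (e.g., after a DLC import). As a rough illustration of the idea (this is a standalone sketch, not SLEAP's actual code), missing nodes become NaN rows in the node-ordered output:

```python
import math

def points_to_array(points, node_names):
    """Convert a {node_name: (x, y)} mapping to node-ordered (x, y) rows,
    filling unlabeled nodes with NaN. Illustrative sketch of what
    Instance.fill_missing() followed by .numpy() conceptually yields."""
    return [list(points.get(name, (math.nan, math.nan))) for name in node_names]

# Hypothetical 3-node skeleton with one unlabeled node:
skeleton = ["head", "thorax", "abdomen"]
pts = {"head": (10.0, 20.0), "abdomen": (30.0, 40.0)}  # "thorax" unlabeled
rows = points_to_array(pts, skeleton)  # row 1 is [nan, nan]
```

Keeping the output shape fixed per skeleton makes downstream analysis (stacking frames, computing distances) straightforward with standard array tools.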

Installing

We recommend using Miniconda to install and manage your Python environments. This will also make GPU support work transparently without installing additional dependencies.

Using Conda (Windows/Linux)

  1. Delete any existing environment and start fresh (recommended):
conda env remove -n sleap
  2. Create a new environment named sleap:
conda create -n sleap -c sleap sleap=1.1.3

Or to update inside an existing environment:

conda install -c sleap sleap=1.1.3

Using PyPI (Windows/Linux/Mac)

  1. Create a new conda environment (recommended):
conda create -n sleap python=3.6
conda activate sleap
  2. Install from PyPI:
pip install sleap==1.1.3

Or to upgrade an existing installation:

pip install --upgrade --force-reinstall sleap==1.1.3

From source (development)

  1. Clone the repository at this tag:
git clone https://github.com/murthylab/sleap --branch v1.1.3 sleap_v1.1.3
cd sleap_v1.1.3
  2. Create the conda environment and activate it:
conda env create -f environment.yml -n sleap_v1.1.3
conda activate sleap_v1.1.3
  3. Changes made to the code will be immediately reflected when running SLEAP in this environment.

SLEAP v1.1.2

24 Mar 23:16

It is strongly recommended that you use SLEAP v1.1.3 instead of this version, which has a bug that prevents training some types of models.

Release of SLEAP v1.1.2.

Includes several bug fixes and documentation improvements.

Full changelog

Same as the v1.1.3 changelog above, excluding the #528 regression fix.

Installing

We recommend using Miniconda to install and manage your Python environments. This will also make GPU support work transparently without installing additional dependencies.

Using Conda (Windows/Linux)

  1. Delete any existing environment and start fresh (recommended):
conda env remove -n sleap
  2. Create a new environment named sleap:
conda create -n sleap -c sleap sleap=1.1.2

Or to update inside an existing environment:

conda install -c sleap sleap=1.1.2

Using PyPI (Windows/Linux/Mac)

  1. Create a new conda environment (recommended):
conda create -n sleap python=3.6
conda activate sleap
  2. Install from PyPI:
pip install sleap==1.1.2

Or to upgrade an existing installation:

pip install --upgrade --force-reinstall sleap==1.1.2

From source (development)

  1. Clone the repository at this tag:
git clone https://github.com/murthylab/sleap --branch v1.1.2 sleap_v1.1.2
cd sleap_v1.1.2
  2. Create the conda environment and activate it:
conda env create -f environment.yml -n sleap_v1.1.2
conda activate sleap_v1.1.2
  3. Changes made to the code will be immediately reflected when running SLEAP in this environment.

SLEAP v1.1.1

08 Mar 20:09
d9c7fee

Release of SLEAP v1.1.1.

Includes several bug fixes and UI improvements.

Installing

We strongly recommend using Miniconda to install and manage your Python environments. This will also make GPU support work transparently without installing additional dependencies.

Using Conda (Windows/Linux)

  1. Delete any existing environment and start fresh (recommended):
conda env remove -n sleap
  2. Create a new environment named sleap:
conda create -n sleap -c sleap sleap=1.1.1

Or to update inside an existing environment:

conda install -c sleap sleap=1.1.1

Using PyPI (Windows/Linux/Mac)

  1. Create a new conda environment (recommended):
conda create -n sleap python=3.6
conda activate sleap
  2. Install from PyPI:
pip install sleap==1.1.1

Or to upgrade an existing installation:

pip install --upgrade --force-reinstall sleap==1.1.1

From source (development)

  1. Clone the repository at this tag:
git clone https://github.com/murthylab/sleap --branch v1.1.1 sleap_v1.1.1
cd sleap_v1.1.1
  2. Create the conda environment and activate it:
conda env create -f environment.yml -n sleap_v1.1.1
conda activate sleap_v1.1.1
  3. Changes made to the code will be immediately reflected when running SLEAP in this environment.

SLEAP v1.1.0

23 Feb 23:06

Release of SLEAP v1.1.0.

In this release, we have updated TensorFlow to 2.3, overhauled the inference module for performance and much more. See the highlights and full list of changes below.

To install: conda create -n sleap -c sleap sleap=1.1.0

Scroll to the bottom of this post for more detailed installation instructions.

Highlights

  • 2-8x performance improvements in inference
  • 32 new pretrained neural network backbones
  • High-level APIs for easy loading, inference and more.
  • Many GUI enhancements to promote our recommended core workflow
  • Learnable offset regression for subpixel localization
  • Linux conda package and smaller package size for faster updates

Full changelog

High level APIs

  • sleap.load_video(): create a sleap.Video from filename
  • sleap.load_model(): load a trained model and create a Predictor for inference
  • sleap.load_config(): load training job config json from model folders
  • sleap.load_file() now takes a match_to kwarg to align data structure instances with other labels for easy comparison and manipulation.
  • sleap.use_cpu_only(): Disable GPU use (#446).
  • sleap.disable_preallocation(): Disable preallocation of entire GPU memory which causes crashes on some systems (#446).
  • sleap.system_summary(): Print a summary of GPUs detected on the system and their state (#446).
  • sleap.versions(): print package/OS versions
  • Labels, LabeledFrame, Instance, PredictedInstance, Video, Skeleton now all have readable __repr__ and __str__, and __len__ for easy inspection of contents (nodes, visible parts).
  • Labels.extract() to pull out a subset of frames into new labels while retaining metadata from the base labels.
  • Labels.export() to save to analysis HDF5
  • Labels.numpy(), LabeledFrame.numpy() to convert main data structures to numpy
  • Labels.describe() gives a more comprehensive overview of a dataset.
  • Labels.load_deeplabcut_folder() to interactively import a maDLC dataset.

Core workflow

  • Add ability to add any frame as suggestion and modify suggestion list. This makes it possible to use the suggestions as a "labeling queue". This can be used to keep track of labeled frames and frames intended to be labeled (either detected algorithmically or manually selected frames of interest by user). This list also serves as a convenient target for inference for initialization/reprediction after training:
  • Add Labels.clear_suggestions(), Labels.unlabeled_suggestions, Labels.get_unlabeled_suggestion_inds(), Labels.remove_empty_frames(), LabelsReader.from_user_labeled_frames(), LabelsReader.from_unlabeled_suggestions() to support suggestion-driven workflow
  • Remote workflow: Support for packaging output model folders as a zip file after training for easy download. Support for zip files as input to sleap.load_model().
  • Remote workflow: Support for packaging training + inference data, training job configurations, and CLI launch scripts into a single zip file for easy upload to Colab or other remote node:
  • File -> Save as... now uses labels.v000.slp as the default filename and scans the containing directory for labels following this pattern to increment the version number automatically. Saving the labels to a new version now takes just two keyboard strokes (Ctrl + Shift + S -> Enter by default). These two changes should direct users towards label versioning best practices.
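The auto-incrementing default filename amounts to scanning the target folder for files matching the `labels.vNNN.slp` pattern and bumping the highest version. A minimal sketch of that behavior (not SLEAP's actual implementation):

```python
import re
from pathlib import Path

def next_versioned_name(folder, pattern=r"labels\.v(\d{3})\.slp"):
    """Scan a folder for files like labels.v000.slp and return the next
    version's filename. Sketch of the described Save As behavior only."""
    versions = [
        int(m.group(1))
        for p in Path(folder).iterdir()
        for m in [re.fullmatch(pattern, p.name)]
        if m
    ]
    n = max(versions) + 1 if versions else 0
    return f"labels.v{n:03d}.slp"

# In a folder containing labels.v000.slp and labels.v001.slp,
# this would suggest labels.v002.slp as the next save target.
```

Zero-padding to three digits keeps the versions sorting correctly in file browsers and shell globs.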

GUI/UX

  • Window state (panels/tabs locations and visibility) are now persisted across sessions and restored when GUI is opened.
  • Add option to adjust node marker size to make them easier to see on high resolution displays. This and other visualization settings are now appropriately persisted across sessions.
  • Adjusted keyboard shortcuts to more sensible defaults and added ability to reset to "factory defaults".
  • Added progress indicators for long running operations such as exporting training packages and running inference. Inference progress is also displayed in terminal and notebooks:
  • Training and inference now have a Cancel button which immediately kills the subprocess and cleans up temp files or incomplete outputs.
  • Training package exporting options are moved to a submenu with clearer descriptions of what they will save.
  • Moved random flip augmentation options to a simpler dropdown menu (none, horizontal, vertical) rather than two checkboxes (enable/disable + horizontal/vertical)
  • Add name and description fields to TrainingJobConfig to make it easier to annotate preset profiles with information relevant for users to select between them. This will be displayed in the GUI in a future update to drive a simplified training dialog workflow.
  • Move track/identity-related options to Tracks menu and add a toggle for labels propagation. When disabled, setting track identity no longer affects subsequent frames.

Training/inference

  • Implement top-down, bottom-up and single-instance model inference as self-contained tf.keras.Models (sleap.nn.inference.InferenceModel base class).
  • Re-implement peak finding and refinement with batch-level functions for improved performance.
  • Re-implement PAF grouping methods for batch-level inference and drastically improved test coverage.
  • Add pretrained encoder UNet-style model backbones based on qubvel/segmentation_models (#435)
  • Breaking sleap.nn.inference changes (#445)
    • API change: TopDownModel -> TopDownInferenceModel
    • API change: Predictor.predict() no longer has make_instances kwarg
  • Implement trainable offset regression (#447)
  • Add RandomCropper transformer for augmentation.
  • sleap.nn.data.pipelines and sleap.nn.inference submodules now available as top-level imports (sleap.pipelines and sleap.inference).
  • Switch to tf.data.Dataset.cache() for preloading transformer.
  • Add LambdaMap for user-function transformer creation.
  • Training CLI now doesn't override true parameters in the config with flag false defaults (e.g., config.outputs.save_visualizations works when --save_viz is not passed).
  • Training CLI no longer requires the positional labels_path arg since this can be specified in the config like the validation and test splits.
  • Training will now delete the visualizations subfolders if config.outputs.delete_viz_images is true (the default). This was often larger than the models themselves and usually only used during GUI-based training for live visualization.
  • Training will now zip the model folder if self.config.outputs.zip_outputs is true to support core remote workflow.
  • sleap.load_model() now supports loading models from zip files to support core remote workflow.
  • Massive refactor of the sleap.nn.inference submodule: deleted unused classes and methods and moved much of the functionality to the Predictor base class.
  • Predictors now use tf.keras.Model.predict_on_batch() in inference loops to drastically improve speed by caching the traced/autographed model call across batches.
  • Inference now supports progress reporting using rich or by outputting JSON-encoded updates to stdout which can be captured by any caller. FPS and ETA are now calculated using a recent rolling buffer for more accurate estimates without needing to wait for startup/autograph amortization.
  • Reworked inference CLI (sleap-track) to make it much, much clearer and follow a more linear workflow. Deprecated many redundant args (still supported but hidden from sleap-track -h).
  • Inference CLI now takes a video or labels file as the positional data_path argument (e.g., sleap-track -m path/to/model "labels.slp") without using the --labels flag.
  • Inference CLI now uses a single LabelsReader/VideoReader provider to iterate over all inference targets. Previously, this was restarted per video, slowing down the whole process.
  • Inference on suggestions now only predicts on suggestions associated with unlabeled frames.
  • Inference CLI now stores much more metadata in Labels.provenance before saving, including system info, paths, sleap version, timestamps, etc.
  • TrainingJobConfig now caches the sleap version and filename it was loaded from and saved to during I/O ops
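The rolling-buffer FPS/ETA estimation mentioned above can be sketched with a fixed-size window of recent timestamps, so the slow first batches (autograph tracing, startup) drop out of the estimate once the window fills. This is an illustrative standalone class, not SLEAP's actual progress-reporting code:

```python
from collections import deque
import time

class RollingRateEstimator:
    """Estimate throughput (items/s) and ETA from a recent window of
    progress samples, so startup overhead does not skew the numbers.
    Sketch of the rolling-buffer idea described above."""

    def __init__(self, window=20):
        self.times = deque(maxlen=window)   # sample timestamps
        self.counts = deque(maxlen=window)  # cumulative items done

    def update(self, n_done, t=None):
        """Record that n_done items are complete at time t (seconds)."""
        self.times.append(time.monotonic() if t is None else t)
        self.counts.append(n_done)

    def rate(self):
        """Items per second over the current window."""
        if len(self.times) < 2:
            return 0.0
        dt = self.times[-1] - self.times[0]
        dn = self.counts[-1] - self.counts[0]
        return dn / dt if dt > 0 else 0.0

    def eta(self, n_total):
        """Seconds remaining until n_total items, based on windowed rate."""
        r = self.rate()
        done = self.counts[-1] if self.counts else 0
        return (n_total - done) / r if r > 0 else float("inf")
```

Because old samples fall off the left end of the deque, a temporary stall or a slow warm-up only affects the estimate for one window's worth of updates.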

Fixes

  • Fix GUI freezing in macOS Big Sur (#450)
  • Fix single instance inference and RGB video detection (#459)
  • Instance counting no longer double counts Instances/PredictedInstances. This affects the model folder auto-naming, labeled instance count in suggestions table and more.
  • sleap-label will now run in CPU-only mode to prevent pre-allocating all the GPU memory if any tensorflow ops are called.
  • sleap.load_model() will now default to disabling pre-allocation of all the GPU memory even if called interactively to prevent the same issue.
  • Fix dataset splitting to ensure there is a minimum of one sample per train/val split. Supports training with a single(!) label now.
  • Suggestions row is now selected appropriately when navigating suggestions.
  • Flip augmentation now applied correctly across all model types.
  • Fix sleap.nn.evals.evaluate_model() saving even with save=False specified.
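The dataset-splitting fix guarantees at least one sample per train/validation split, which is what makes training from a single label possible. A hedged sketch of that guarantee (the function name and single-item behavior here are illustrative, not SLEAP's actual splitting code):

```python
import math
import random

def train_val_split(items, val_fraction=0.1, seed=0):
    """Split items into train/val with at least one sample in each split
    whenever there are >= 2 items; with a single item, it is reused in
    both splits so training can proceed. Illustrative sketch only."""
    items = list(items)
    if len(items) == 1:
        return items, items  # degenerate case: train == val
    rng = random.Random(seed)
    idx = list(range(len(items)))
    rng.shuffle(idx)
    # Clamp so neither split is ever empty.
    n_val = min(len(items) - 1, max(1, math.ceil(val_fraction * len(items))))
    val = [items[i] for i in idx[:n_val]]
    train = [items[i] for i in idx[n_val:]]
    return train, val
```

The two clamps (`max(1, ...)` and `min(len(items) - 1, ...)`) are the crux: without them, small datasets round to an empty validation or training set and training fails.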

#...

Read more

SLEAP v1.1.0a12

23 Feb 21:52
bbd638a
Pre-release

Pre-release of SLEAP v1.1.0 update.

In this update, we have updated TensorFlow to 2.3, overhauled the inference module for performance and much more. See the highlights and full list of changes below.

To install: conda create -n sleap_alpha -c sleap/label/dev sleap=1.1.0a12

Scroll to the bottom of this post for more detailed installation instructions.

Highlights and full changelog are identical to the v1.1.0 release above.

SLEAP v1.1.0a11

23 Feb 17:16
a08a2f9
Pre-release

Pre-release of SLEAP v1.1.0 update.

In this update, we have updated TensorFlow to 2.3, overhauled the inference module for performance and much more. See the highlights and full list of changes below.

To install: conda create -n sleap_alpha -c sleap/label/dev sleap=1.1.0a11

Scroll to the bottom of this post for more detailed installation instructions.

Highlights

  • 2-8x performance improvements in inference
  • 32 new pretrained neural network backbones
  • High-level APIs for easy loading, inference and more.
  • Many GUI enhancements to promote our recommended core workflow
  • Learnable offset regression for subpixel localization
  • Linux conda package and smaller package size for faster updates

Full changelog

High level APIs

  • sleap.load_video(): create a sleap.Video from filename
  • sleap.load_model(): load a trained model and create a Predictor for inference
  • sleap.load_config(): load training job config json from model folders
  • sleap.load_file() now takes a match_to kwarg to align data structure instances with other labels for easy comparison and manipulation.
  • sleap.use_cpu_only(): Disable GPU use (#446).
  • sleap.disable_preallocation(): Disable preallocation of entire GPU memory which causes crashes on some systems (#446).
  • sleap.system_summary(): Print a summary of GPUs detected on the system and their state (#446).
  • sleap.versions(): print package/OS versions
  • Labels, LabeledFrame, Instance, PredictedInstance, Video, Skeleton now all have readable __repr__ and __str__, and __len__ for easy inspection of contents (nodes, visible parts).
  • Labels.extract() to pull out a subset of frames into new labels while retaining metadata from the base labels.
  • Labels.export() to save to analysis HDF5
  • Labels.numpy(), LabeledFrame.numpy() to convert main data structures to numpy
  • Labels.describe() is more comprehensive overview of a dataset.
  • Labels.load_deeplabcut_folder() to interactively import a maDLC dataset.

Core workflow

  • Add ability to add any frame as suggestion and modify suggestion list. This makes it possible to use the suggestions as a "labeling queue". This can be used to keep track of labeled frames and frames intended to be labeled (either detected algorithmically or manually selected frames of interest by user). This list also serves as a convenient target for inference for initialization/reprediction after training:
    image
  • Add Labels.clear_suggestions(), Labels.unlabeled_suggestions, Labels.get_unlabeled_suggestion_inds(), Labels.remove_empty_frames(), LabelsReader.from_user_labeled_frames(), LabelsReader.from_unlabeled_suggestions() to support suggestion-driven workflow
  • Remote workflow: Support for packaging output model folders as a zip file after training for easy download. Support for zip files as input to sleap.load_model().
  • Remote workflow: Support for packaging training + inference data, training job configurations, and CLI launch scripts into a single zip file for easy upload to Colab or other remote node:
    image
    image
  • File -> Save as... now uses labels.v000.slp as the default filename and scans the containing directory for labels following this pattern to increment the version number automatically. Saving the labels to a new version now takes just two keyboard strokes (Ctrl + Shift + S -> Enter by default). These two changes should direct users towards label versioning best practices.

GUI/UX

  • Window state (panels/tabs locations and visibility) are now persisted across sessions and restored when GUI is opened.
  • Add option to adjust node marker size to make them easier to see on high resolution displays. This and other visualization settings are now appropriately persisted across sessions.
  • Adjusted keyboard shortcuts to more sensible defaults and added ability to reset to "factory defaults".
  • Added progress indicators for long running operations such as exporting training packages and running inference. Inference progress is also displayed in terminal and notebooks:
    image image
    image
  • Training and inference now have a Cancel button which immediately kills the subprocess and cleans up temp files or incomplete outputs.
  • Training package exporting options are moved to a submenu with clearer descriptions of what they will save.
  • Moved random flip augmentation options to a simpler dropdown menu (none, horizontal, vertical) rather than two checkboxes (enable/disable + horizontal/vertical)
  • Add name and description fields to TrainingJobConfig to make it easier to annotate preset profiles with information relevant for users to select between them. This will be displayed in the GUI in a future update to drive a simplified training dialog workflow.
  • Move track/identity-related options to Tracks menu and add a toggle for labels propagation. When disabled, setting track identity no longer affects subsequent frames.

Training/inference

  • Implement top-down, bottom-up and single-instance model inference as self-contained tf.keras.Models (sleap.nn.inference.InferenceModel base class).
  • Re-implement peak finding and refinement with batch-level functions for improved performance.
  • Re-implement PAF grouping methods for batch-level inference and drastically improved test coverage.
  • Add pretrained encoder UNet-style model backbones based on qubvel/segmentation_models (#435)
  • Breaking sleap.nn.inference changes (#445)
    • API change: TopDownModel -> TopDownInferenceModel
    • API change: Predictor.predict() no longer has make_instances kwarg
  • Implement trainable offset regression (#447)
  • Add RandomCropper transformer for augmentation.
  • sleap.nn.data.pipelines and sleap.nn.inference submodules now available as top-level imports (sleap.pipelines and sleap.inference).
  • Switch to tf.data.Dataset.cache() for preloading transformer.
  • Add LambdaMap for user-function transformer creation.
  • Training CLI now doesn't override true parameters in the config with flag false defaults (e.g., config.outputs.save_visualizations works when --save_viz is not passed).
  • Training CLI no longer requires the positional labels_path arg since this can be specified in the config like the validation and test splits.
  • Training will now delete the visualizations subfolders if config.outputs.delete_viz_images is true (the default). This was often larger than the models themselves and usually only used during GUI-based training for live visualization.
  • Training will now zip the model folder if self.config.outputs.zip_outputs is true to support core remote workflow.
  • sleap.load_model() now supports loading models from zip files to support core remote workflow.
  • Massive refactor of the sleap.nn.inference submodule: deleted unused classes and methods and move a lot of functionality to the Predictor base class.
  • Predictors now use tf.keras.Model.predict_on_batch() in inference loops to drastically improve speed by cached traced/autographed model call every batch.
  • Inference now supports progress reporting using rich or by outputting JSON-encoded updates to stdout which can be captured by any caller. FPS and ETA are now calculated using a recent rolling buffer for more accurate estimates without needing to wait for startup/autograph amortization.
  • Reworked inference CLI (sleap-track) to make it much, much clearer and follow a more linear workflow. Deprecated many redundant args (still supported but hidden from sleap-track -h).
  • Inference CLI now takes a video or labels file as the positional data_path argument (e.g., sleap-track -m path/to/model "labels.slp") without using the --labels flag.
  • Inference CLI now uses a single LabelsReader/VideoReader provider to iterate over all inference targets. Previously, this was restarted per video, slowing down the whole process.
  • Inference on suggestions now only predicts on suggestions associated with unlabeled frames.
  • Inference CLI now stores much more metadata in Labels.provenance before saving, including system info, paths, sleap version, timestamps, etc.
  • TrainingJobConfig now caches the sleap version and the filename it was loaded from and saved to during I/O ops.
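The zip-file support in the bullets above amounts to unpacking the zipped model folder before loading. The sketch below illustrates the idea; load_model_from_zip is a hypothetical helper for illustration, not SLEAP's actual implementation (sleap.load_model() handles zip files transparently).

```python
import tempfile
import zipfile
from pathlib import Path


def load_model_from_zip(zip_path):
    """Extract a zipped model folder to a temp dir and return its path.

    Hypothetical sketch: a loader like sleap.load_model() would then read
    the training config and weights from the extracted folder.
    """
    model_dir = Path(tempfile.mkdtemp(prefix="sleap_model_"))
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(model_dir)
    return model_dir
```

In practice you would pass the zip produced by training (when config.outputs.zip_outputs is true) straight to sleap.load_model().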

Fixes

  • Fix GUI freezing in macOS Big Sur (#450)
  • Fix single instance inference and RGB video detection (#459)
  • Instance counting no longer double counts Instances/PredictedInstances. This affects model folder auto-naming, the labeled instance count in the suggestions table, and more.
  • sleap-label will now run in CPU-only mode to prevent pre-allocating all the GPU memory if any TensorFlow ops are called.
  • sleap.load_model() will now default to disabling pre-allocation of all the GPU memory even if called interactively to prevent the same issue.
  • Fix dataset splitting to ensure there is a minimum of one sample per train/val split. Supports training with a single(!) label now.
  • Suggestions row is now selected appropriately when navigating suggestions.
  • Flip augmentation now applied correctly across all model types.
  • Fix sleap.nn.evals.evaluate_model() saving even with...
Read more

SLEAP v1.1.0a10

08 Feb 16:54
Pre-release

Pre-release of SLEAP v1.1.0 update.

In this update, we have updated TensorFlow to 2.3, overhauled the inference module for performance, and much more. See the highlights and full list of changes below.

To install: conda create -n sleap_alpha -c sleap/label/dev sleap=1.1.0a10

Scroll to the bottom of this post for more detailed installation instructions.

Highlights

  • 2-8x performance improvements in inference
  • 32 new pretrained neural network backbones
  • High-level APIs for easy loading, inference and more.
  • Many GUI enhancements to promote our recommended core workflow
  • Learnable offset regression for subpixel localization
  • Linux conda package and smaller package size for faster updates

Full changelog

High level APIs

  • sleap.load_video(): create a sleap.Video from filename
  • sleap.load_model(): load a trained model and create a Predictor for inference
  • sleap.load_config(): load training job config json from model folders
  • sleap.load_file() now takes a match_to kwarg to align data structure instances with other labels for easy comparison and manipulation.
  • sleap.use_cpu_only(): Disable GPU use (#446).
  • sleap.disable_preallocation(): Disable preallocation of entire GPU memory which causes crashes on some systems (#446).
  • sleap.system_summary(): Print a summary of GPUs detected on the system and their state (#446).
  • sleap.versions(): print package/OS versions
  • Labels, LabeledFrame, Instance, PredictedInstance, Video, and Skeleton now all have readable __repr__, __str__, and __len__ implementations for easy inspection of their contents (nodes, visible parts).
  • Labels.extract() to pull out a subset of frames into new labels while retaining metadata from the base labels.
  • Labels.export() to save to analysis HDF5
  • Labels.numpy(), LabeledFrame.numpy() to convert main data structures to numpy
  • Labels.describe() gives a more comprehensive overview of a dataset.
  • Labels.load_deeplabcut_folder() to interactively import a maDLC dataset.

Core workflow

  • Add ability to add any frame as a suggestion and to modify the suggestion list. This makes it possible to use the suggestions as a "labeling queue" for keeping track of labeled frames and frames intended to be labeled (either detected algorithmically or manually selected by the user). The list also serves as a convenient target for inference when initializing or repredicting after training.
  • Add Labels.clear_suggestions(), Labels.unlabeled_suggestions, Labels.get_unlabeled_suggestion_inds(), Labels.remove_empty_frames(), LabelsReader.from_user_labeled_frames(), LabelsReader.from_unlabeled_suggestions() to support suggestion-driven workflow
  • Remote workflow: Support for packaging output model folders as a zip file after training for easy download. Support for zip files as input to sleap.load_model().
  • Remote workflow: Support for packaging training + inference data, training job configurations, and CLI launch scripts into a single zip file for easy upload to Colab or another remote node.
  • File -> Save as... now uses labels.v000.slp as the default filename and scans the containing directory for labels following this pattern to increment the version number automatically. Saving the labels to a new version now takes just two keystrokes (Ctrl + Shift + S -> Enter by default). These two changes should direct users towards label versioning best practices.
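The auto-versioning behavior described above boils down to scanning for the labels.vNNN.slp pattern and incrementing. A minimal sketch, assuming three-digit zero-padded version numbers; next_version_path is a hypothetical helper name, not part of SLEAP's API:

```python
import re
from pathlib import Path


def next_version_path(folder):
    """Scan a folder for labels.vNNN.slp files and return the next version path.

    Hypothetical sketch of the GUI's auto-versioning behavior.
    """
    pattern = re.compile(r"labels\.v(\d{3})\.slp$")
    versions = [
        int(m.group(1))
        for p in Path(folder).iterdir()
        if (m := pattern.match(p.name))
    ]
    next_version = max(versions) + 1 if versions else 0
    return Path(folder) / f"labels.v{next_version:03d}.slp"
```

For an empty folder this yields labels.v000.slp; with labels.v000.slp and labels.v001.slp present, it yields labels.v002.slp.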

GUI/UX

  • Window state (panel/tab locations and visibility) is now persisted across sessions and restored when the GUI is opened.
  • Add option to adjust node marker size to make markers easier to see on high-resolution displays. This and other visualization settings are now appropriately persisted across sessions.
  • Adjusted keyboard shortcuts to more sensible defaults and added ability to reset to "factory defaults".
  • Added progress indicators for long-running operations such as exporting training packages and running inference. Inference progress is also displayed in the terminal and in notebooks.
  • Training and inference now have a Cancel button which immediately kills the subprocess and cleans up temp files or incomplete outputs.
  • Training package exporting options are moved to a submenu with clearer descriptions of what they will save.
  • Moved random flip augmentation options to a simpler dropdown menu (none, horizontal, vertical) rather than two checkboxes (enable/disable + horizontal/vertical).
  • Add name and description fields to TrainingJobConfig to make it easier to annotate preset profiles with information relevant for users to select between them. This will be displayed in the GUI in a future update to drive a simplified training dialog workflow.
  • Move track/identity-related options to the Tracks menu and add a toggle for label propagation. When disabled, setting track identity no longer affects subsequent frames.

Training/inference

  • Implement top-down, bottom-up and single-instance model inference as self-contained tf.keras.Models (sleap.nn.inference.InferenceModel base class).
  • Re-implement peak finding and refinement with batch-level functions for improved performance.
  • Re-implement PAF grouping methods for batch-level inference and drastically improved test coverage.
  • Add pretrained encoder UNet-style model backbones based on qubvel/segmentation_models (#435)
  • Breaking sleap.nn.inference changes (#445)
    • API change: TopDownModel -> TopDownInferenceModel
    • API change: Predictor.predict() no longer has make_instances kwarg
  • Implement trainable offset regression (#447)
  • Add RandomCropper transformer for augmentation.
  • sleap.nn.data.pipelines and sleap.nn.inference submodules now available as top-level imports (sleap.pipelines and sleap.inference).
  • Switch to tf.data.Dataset.cache() for preloading transformer.
  • Add LambdaMap for user-function transformer creation.
  • Training CLI no longer overrides parameters set to true in the config with false CLI flag defaults (e.g., config.outputs.save_visualizations is respected when --save_viz is not passed).
  • Training CLI no longer requires the positional labels_path arg, since this can be specified in the config like the validation and test splits.
  • Training now deletes the visualization subfolders if config.outputs.delete_viz_images is true (the default). These folders were often larger than the models themselves and are usually only needed during GUI-based training for live visualization.
  • Training now zips the model folder if config.outputs.zip_outputs is true, supporting the core remote workflow.
  • sleap.load_model() now supports loading models from zip files, supporting the core remote workflow.
  • Massive refactor of the sleap.nn.inference submodule: deleted unused classes and methods and moved much of the functionality to the Predictor base class.
  • Predictors now use tf.keras.Model.predict_on_batch() in inference loops, drastically improving speed by reusing the cached traced/autographed model call on every batch.
  • Inference now supports progress reporting using rich or by outputting JSON-encoded updates to stdout which can be captured by any caller. FPS and ETA are now calculated using a recent rolling buffer for more accurate estimates without needing to wait for startup/autograph amortization.
  • Reworked inference CLI (sleap-track) to make it much, much clearer and follow a more linear workflow. Deprecated many redundant args (still supported but hidden from sleap-track -h).
  • Inference CLI now takes a video or labels file as the positional data_path argument (e.g., sleap-track -m path/to/model "labels.slp") without using the --labels flag.
  • Inference CLI now uses a single LabelsReader/VideoReader provider to iterate over all inference targets. Previously, this was restarted per video, slowing down the whole process.
  • Inference on suggestions now only predicts on suggestions associated with unlabeled frames.
  • Inference CLI now stores much more metadata in Labels.provenance before saving, including system info, paths, sleap version, timestamps, etc.
  • TrainingJobConfig now caches the sleap version and the filename it was loaded from and saved to during I/O ops.
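The rolling-buffer idea for FPS/ETA estimation mentioned above can be illustrated with a small self-contained class: rate is computed only over a recent window of timing samples, so estimates are not skewed by slow startup/autograph batches. RollingRateEstimator and its method names are hypothetical, not SLEAP's API.

```python
import time
from collections import deque


class RollingRateEstimator:
    """Estimate FPS and ETA from a recent window of progress samples."""

    def __init__(self, window=20):
        # Each sample is a (timestamp, frames_done) pair; old ones fall off.
        self.samples = deque(maxlen=window)

    def update(self, frames_done, timestamp=None):
        ts = timestamp if timestamp is not None else time.time()
        self.samples.append((ts, frames_done))

    def fps(self):
        if len(self.samples) < 2:
            return 0.0
        (t0, f0), (t1, f1) = self.samples[0], self.samples[-1]
        return (f1 - f0) / (t1 - t0) if t1 > t0 else 0.0

    def eta(self, total_frames):
        rate = self.fps()
        done = self.samples[-1][1] if self.samples else 0
        return (total_frames - done) / rate if rate > 0 else float("inf")
```

Because only the last `window` samples are kept, the estimate tracks the current throughput rather than the lifetime average.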

Fixes

  • Fix GUI freezing in macOS Big Sur (#450)
  • Fix single instance inference and RGB video detection (#459)
  • Instance counting no longer double counts Instances/PredictedInstances. This affects model folder auto-naming, the labeled instance count in the suggestions table, and more.
  • sleap-label will now run in CPU-only mode to prevent pre-allocating all the GPU memory if any TensorFlow ops are called.
  • sleap.load_model() will now default to disabling pre-allocation of all the GPU memory even if called interactively to prevent the same issue.
  • Fix dataset splitting to ensure there is a minimum of one sample per train/val split. Supports training with a single(!) label now.
  • Suggestions row is now selected appropriately when navigating suggestions.
  • Flip augmentation now applied correctly across all model types.
  • Fix sleap.nn.evals.evaluate_model() saving even with...
Read more

SLEAP v1.1.0a9

18 Jan 01:11
Pre-release

Pre-release of SLEAP v1.1.0 update.

In this update, we have updated TensorFlow to 2.3, overhauled the inference module for performance, and much more. See the highlights and full list of changes below.

Highlights

  • 2-8x performance improvements in inference
  • 32 new pretrained neural network backbones
  • Learnable offset regression for subpixel localization
  • High-level API for labels and model loading: sleap.load_file() and sleap.load_model()
  • Linux conda package and smaller package size for updates

Changelog

  • Update to TensorFlow 2.3.1.
  • Implement top-down, bottom-up and single-instance model inference as self-contained tf.keras.Models (sleap.nn.inference.InferenceModel base class).
  • Re-implement peak finding and refinement with batch-level functions for improved performance.
  • Re-implement PAF grouping methods for batch-level inference and drastically improved test coverage.
  • Add pipeline utility methods peek() and describe() to query the output of a sleap.data.Pipeline.
  • Color by track when plotting with LabeledFrame.plot() method and tracks are available.
  • Add slice indexing to sleap.Labels.
  • Better string representations for core data structures Labels, LabeledFrame, and Skeleton.
  • Fix sleap.nn.evals.evaluate_model() saving even with save=False specified.
  • Add RandomCropper transformer for augmentation.
  • sleap.nn.data.pipelines and sleap.nn.inference submodules now available as top-level imports (sleap.pipelines and sleap.inference).
  • sleap.nn.viz.plot_instance() now accepts raw point arrays.
  • Switch to tf.data.Dataset.cache() for preloading transformer.
  • Add LambdaMap for user-function transformer creation.
  • Add high-level model loading interface (sleap.load_model()).
  • Switch to using external conda packages for TensorFlow and PySide2.
  • Switch to GitHub Actions for CI and builds.
  • Add pretrained encoder UNet-style model backbones based on qubvel/segmentation_models (#435)
  • Breaking sleap.nn.inference changes (#445)
    • API change: TopDownModel -> TopDownInferenceModel
    • API change: Predictor.predict() no longer has make_instances kwarg
  • Added some GPU-related aliases to top-level imports (#446)
    • sleap.use_cpu_only(): Disable GPU use.
    • sleap.disable_preallocation(): Disable preallocation of entire GPU memory which causes crashes on some systems.
    • sleap.system_summary(): Print a summary of GPUs detected on the system and their state.
  • Import folders of (ma)DLC labeled data with multiple videos (#437)
  • Implement trainable offset regression (#447)
  • Fix GUI freezing in macOS Big Sur (#450)
  • Fix single instance inference and RGB video detection (#459)
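The LambdaMap and peek() items above pair a user-supplied function with a data pipeline and let you inspect its output without consuming it. This plain-Python sketch illustrates the concept only; SLEAP's real LambdaMap operates on tf.data pipelines, and all names/signatures here are illustrative.

```python
class LambdaMap:
    """Apply a user-supplied function to every example in a stream.

    Plain-Python sketch of the transformer concept (not SLEAP's API).
    """

    def __init__(self, fn):
        self.fn = fn

    def transform(self, examples):
        for example in examples:
            yield self.fn(example)


def peek(examples):
    """Return the first example plus a stream that still yields everything."""
    iterator = iter(examples)
    first = next(iterator)

    def chained():
        yield first
        yield from iterator

    return first, chained()
```

A usage sketch: `first, stream = peek(LambdaMap(lambda x: x * 2).transform([1, 2, 3]))` lets you inspect `first` (here 2) and then iterate `stream` over the full transformed sequence.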

Installing

We strongly recommend using Miniconda to install and manage your Python environments. This will also make GPU support work transparently without installing additional dependencies.

Using Conda (Windows/Linux)

  1. Delete any existing environment and start fresh (recommended):
conda env remove -n sleap_alpha
  2. Create new environment sleap_alpha (recommended):
conda create -n sleap_alpha -c sleap/label/dev sleap=1.1.0a9

Or to update inside an existing environment:

conda install -c sleap/label/dev sleap=1.1.0a9

Using PyPI (Windows/Linux/Mac)

  1. Create a new conda environment (recommended):
conda create -n sleap_alpha python=3.6
conda activate sleap_alpha
  2. Install from PyPI:
pip install sleap==1.1.0a9

Or to upgrade an existing installation:

pip install --upgrade --force-reinstall sleap==1.1.0a9

From source (development)

  1. Clone the repository at this tag:
git clone https://github.com/murthylab/sleap --branch v1.1.0a9 sleap_v1.1.0a9
cd sleap_v1.1.0a9
  2. Install the conda environment and activate it:
conda env create -f environment.yml -n sleap_v1.1.0a9
conda activate sleap_v1.1.0a9
  3. Changes made in the code will be immediately reflected when running SLEAP in this environment.

SLEAP v1.1.0a8

10 Jan 01:41
Pre-release

Pre-release of SLEAP v1.1.0 update.

In this update, we have updated TensorFlow to 2.3, overhauled the inference module for performance, and much more. See the highlights and full list of changes below.

Highlights

  • 2-8x performance improvements in inference
  • 32 new pretrained neural network backbones
  • Learnable offset regression for subpixel localization
  • High-level API for labels and model loading: sleap.load_file() and sleap.load_model()
  • Linux conda package and smaller package size for updates

Changelog

  • Update to TensorFlow 2.3.1.
  • Implement top-down, bottom-up and single-instance model inference as self-contained tf.keras.Models (sleap.nn.inference.InferenceModel base class).
  • Re-implement peak finding and refinement with batch-level functions for improved performance.
  • Re-implement PAF grouping methods for batch-level inference and drastically improved test coverage.
  • Add pipeline utility methods peek() and describe() to query the output of a sleap.data.Pipeline.
  • Color by track when plotting with LabeledFrame.plot() method and tracks are available.
  • Add slice indexing to sleap.Labels.
  • Better string representations for core data structures Labels, LabeledFrame, and Skeleton.
  • Fix sleap.nn.evals.evaluate_model() saving even with save=False specified.
  • Add RandomCropper transformer for augmentation.
  • sleap.nn.data.pipelines and sleap.nn.inference submodules now available as top-level imports (sleap.pipelines and sleap.inference).
  • sleap.nn.viz.plot_instance() now accepts raw point arrays.
  • Switch to tf.data.Dataset.cache() for preloading transformer.
  • Add LambdaMap for user-function transformer creation.
  • Add high-level model loading interface (sleap.load_model()).
  • Switch to using external conda packages for TensorFlow and PySide2.
  • Switch to GitHub Actions for CI and builds.
  • Add pretrained encoder UNet-style model backbones based on qubvel/segmentation_models (#435)
  • Breaking sleap.nn.inference changes (#445)
    • API change: TopDownModel -> TopDownInferenceModel
    • API change: Predictor.predict() no longer has make_instances kwarg
  • Added some GPU-related aliases to top-level imports (#446)
    • sleap.use_cpu_only(): Disable GPU use.
    • sleap.disable_preallocation(): Disable preallocation of entire GPU memory which causes crashes on some systems.
    • sleap.system_summary(): Print a summary of GPUs detected on the system and their state.
  • Import folders of (ma)DLC labeled data with multiple videos (#437)
  • Implement trainable offset regression (#447)
  • Fix GUI freezing in macOS Big Sur (#450)

Installing

We strongly recommend using Miniconda to install and manage your Python environments. This will also make GPU support work transparently without installing additional dependencies.

Using Conda (Windows/Linux)

  1. Delete any existing environment and start fresh (recommended):
conda env remove -n sleap_alpha
  2. Create new environment sleap_alpha (recommended):
conda create -n sleap_alpha -c sleap/label/dev sleap=1.1.0a8

Or to update inside an existing environment:

conda install -c sleap/label/dev sleap=1.1.0a8

Using PyPI (Windows/Linux/Mac)

  1. Create a new conda environment (recommended):
conda create -n sleap_alpha python=3.6
conda activate sleap_alpha
  2. Install from PyPI:
pip install sleap==1.1.0a8

From source (development)

  1. Clone the repository at this tag:
git clone https://github.com/murthylab/sleap --branch v1.1.0a8 sleap_v1.1.0a8
cd sleap_v1.1.0a8
  2. Install the conda environment and activate it:
conda env create -f environment.yml -n sleap_v1.1.0a8
conda activate sleap_v1.1.0a8
  3. Changes made in the code will be immediately reflected when running SLEAP in this environment.

SLEAP v1.1.0a7

10 Dec 22:29
Pre-release

Pre-release of SLEAP v1.1 update.

In this update, we have updated TensorFlow to 2.3 and overhauled the inference module.

Highlights

  • 2-8x performance improvements in inference!
  • High-level data and model methods
  • Linux conda package and smaller package size for updates

Changelog

  • Update to TensorFlow 2.3.1.
  • Implement top-down, bottom-up and single-instance model inference as self-contained tf.keras.Models (sleap.nn.inference.InferenceModel base class).
  • Re-implement peak finding and refinement with batch-level functions for improved performance.
  • Re-implement PAF grouping methods for batch-level inference and drastically improved test coverage.
  • Add pipeline utility methods peek() and describe() to query the output of a sleap.data.Pipeline.
  • Color by track when plotting with LabeledFrame.plot() method and tracks are available.
  • Add slice indexing to sleap.Labels.
  • Better string representations for core data structures Labels, LabeledFrame, and Skeleton.
  • Fix sleap.nn.evals.evaluate_model() saving even with save=False specified.
  • Add RandomCropper transformer for augmentation.
  • sleap.nn.data.pipelines and sleap.nn.inference submodules now available as top-level imports (sleap.pipelines and sleap.inference).
  • sleap.nn.viz.plot_instance() now accepts raw point arrays.
  • Switch to tf.data.Dataset.cache() for preloading transformer.
  • Add LambdaMap for user-function transformer creation.
  • Add high-level model loading interface (sleap.load_model()).
  • Switch to using external conda packages for TensorFlow and PySide2.
  • Switch to GitHub Actions for CI and builds.

Installing

Using Conda (Windows/Linux)

Create new environment sleap_alpha (recommended):
conda create -n sleap_alpha -c sleap -c sleap/label/dev sleap=1.1.0a7

To update inside an existing environment:
conda install -c sleap -c sleap/label/dev sleap=1.1.0a7

Using PyPI (Windows/Linux/Mac)

pip install sleap==1.1.0a7