Commit cf7a568
docs: Update documentation and changelog
lorenzomammana committed Jan 25, 2024
1 parent 4da6449 commit cf7a568
Showing 2 changed files with 8 additions and 0 deletions.
6 changes: 6 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,12 @@
# Changelog
All notable changes to this project will be documented in this file.

### [1.5.6]

#### Added

- Add support for half precision training and inference for sklearn based tasks

### [1.5.5]

#### Fixed
2 changes: 2 additions & 0 deletions docs/tutorials/examples/sklearn_classification.md
@@ -147,6 +147,7 @@ datamodule:
task:
  device: cuda:0
  half_precision: false
  automatic_batch_size:
    starting_batch_size: 1024
    disable: true
@@ -160,6 +161,7 @@ task:

This will train a logistic regression classifier using a resnet18 backbone, resizing the images to 224x224 and using 5-fold cross validation. The `class_to_idx` parameter maps class names to indexes, and these indexes are used to train the classifier. The `output` parameter specifies the output folder and the type of output to save. The `export.types` parameter can be used to export the model in different formats; at the moment `torchscript`, `onnx` and `pytorch` are supported.
The backbone (in torchscript and pytorch format) will be saved along with the classifier. `test_full_data` specifies whether a final test should be performed on all the data (after training on the training and validation datasets).
It's possible to enable half precision training by setting `half_precision` to `true`.
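
For reference, a minimal sketch of the `task` block with half precision enabled, mirroring the snippet above (only `half_precision` changes; the remaining keys are copied from the example):

```yaml
task:
  device: cuda:0          # device used for the backbone
  half_precision: true    # run training and inference in half precision
  automatic_batch_size:
    starting_batch_size: 1024
    disable: true
```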

Optionally, the automatic batch size finder can be enabled by setting `automatic_batch_size.disable` to `false`. This will try to find the maximum batch size that can be used on the given device without running out of memory. The `starting_batch_size` parameter specifies the batch size the search starts from; the algorithm begins at this value and halves it until it no longer runs out of memory.
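
As a sketch, assuming the same structure as the configuration above, enabling the finder would look like this:

```yaml
task:
  device: cuda:0
  half_precision: false
  automatic_batch_size:
    starting_batch_size: 1024  # first batch size tried by the search
    disable: false             # enable the finder; the batch size is halved on OOM
```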
Finally, the `save_model_summary` parameter can be used to save the backbone information in a text file called `model_summary.txt` located in the root of the output folder.
