From cf7a5689d7ba14c5a6622493f5ba73c0f18b2800 Mon Sep 17 00:00:00 2001
From: Lorenzo Mammana
Date: Thu, 25 Jan 2024 16:36:46 +0100
Subject: [PATCH] docs: Update documentation and changelog

---
 CHANGELOG.md                                      | 6 ++++++
 docs/tutorials/examples/sklearn_classification.md | 2 ++
 2 files changed, 8 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index b091779e..abe7f2c4 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,6 +2,12 @@
 # Changelog
 All notable changes to this project will be documented in this file.
 
+### [1.5.6]
+
+#### Added
+
+- Add support for half precision training and inference for sklearn based tasks
+
 ### [1.5.5]
 
 #### Fixed
diff --git a/docs/tutorials/examples/sklearn_classification.md b/docs/tutorials/examples/sklearn_classification.md
index 3dbedddb..9d25bfa4 100644
--- a/docs/tutorials/examples/sklearn_classification.md
+++ b/docs/tutorials/examples/sklearn_classification.md
@@ -147,6 +147,7 @@ datamodule:
 
 task:
   device: cuda:0
+  half_precision: false
   automatic_batch_size:
     starting_batch_size: 1024
     disable: true
@@ -160,6 +161,7 @@ task:
 This will train a logistic regression classifier using a resnet18 backbone, resizing the images to 224x224 and using a 5-fold cross validation.
 The `class_to_idx` parameter is used to map the class names to indexes, the indexes will be used to train the classifier. The `output` parameter is used to specify the output folder and the type of output to save.
 The `export.types` parameter can be used to export the model in different formats, at the moment `torchscript`, `onnx` and `pytorch` are supported. The backbone (in torchscript and pytorch format) will be saved along with the classifier.
 `test_full_data` is used to specify if a final test should be performed on all the data (after training on the training and validation datasets).
+It's possible to enable half precision training by setting `half_precision` to `true`.
 Optionally it's possible to enable the automatic batch size finder by setting `automatic_batch_size.disable` to `false`. This will try to find the maximum batch size that can be used on the given device without running out of memory. The `starting_batch_size` parameter is used to specify the starting batch size to use for the search, the algorithm will start from this value and will try to divide it by two until it doesn't run out of memory.
 Finally, the `save_model_summary` parameter can be used to save the backbone information in a text file called `model_summary.txt` located in the root of the output folder.
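The documentation touched by this patch describes the automatic batch size finder as a halving search: start from `starting_batch_size` and divide by two until a trial batch fits in memory. A minimal Python sketch of that strategy is below; the function names and the `MemoryError`-based failure signal are hypothetical stand-ins, not the library's actual API (on a real GPU an out-of-memory condition typically surfaces differently, e.g. as a CUDA OOM `RuntimeError`).

```python
def find_max_batch_size(try_batch, starting_batch_size=1024):
    """Start from `starting_batch_size` and halve until a trial batch succeeds."""
    batch_size = starting_batch_size
    while batch_size >= 1:
        try:
            try_batch(batch_size)  # e.g. run one forward pass at this size
            return batch_size
        except MemoryError:  # hypothetical stand-in for a device out-of-memory error
            batch_size //= 2
    raise MemoryError("even a batch size of 1 does not fit on the device")


def fake_forward(batch_size):
    """Toy stand-in for a model step on a device that fits at most 300 samples."""
    if batch_size > 300:
        raise MemoryError("out of memory")


print(find_max_batch_size(fake_forward))  # 1024 -> 512 -> 256, prints 256
```

Because the search only ever halves the starting value, it probes at most log2(`starting_batch_size`) candidate sizes, which keeps the number of trial passes small even for large starting batch sizes.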