Fixed Rst formatting, minor text changes (#3029)
* Fixed Rst formatting, minor text changes
* Removed duplicate sentence about CUDA hardware that is already mentioned in the intro text. Minor text change.

---------

Co-authored-by: Svetlana Karslioglu <[email protected]>
2 people authored and c-p-i-o committed Sep 6, 2024
1 parent deb89ba commit a5d85ed
Showing 1 changed file with 4 additions and 6 deletions.

prototype_source/gpu_quantization_torchao_tutorial.py
@@ -35,14 +35,12 @@
 #
 # Segment Anything Model checkpoint setup:
 #
-# 1. Go to the `segment-anything repo <checkpoint https://github.com/facebookresearch/segment-anything/tree/main#model-checkpoints>`_ and download the ``vit_h`` checkpoint. Alternatively, you can just use ``wget``: `wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth --directory-prefix=<path>
+# 1. Go to the `segment-anything repo checkpoint <https://github.com/facebookresearch/segment-anything/tree/main#model-checkpoints>`_ and download the ``vit_h`` checkpoint. Alternatively, you can use ``wget`` (for example, ``wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth --directory-prefix=<path>``).
 # 2. Pass in that directory by editing the code below to say:
 #
-# .. code-block::
+# .. code-block:: bash
 #
-#    {sam_checkpoint_base_path}=<path>
-#
-# This was run on an A100-PG509-200 power limited to 330.00 W
+#    {sam_checkpoint_base_path}=<path>
 #
 
 import torch
@@ -297,7 +295,7 @@ def get_sam_model(only_one_block=False, batchsize=1):
 # -----------------
 # In this tutorial, we have learned about the quantization and optimization techniques
 # on the example of the segment anything model.
-
+#
 # In the end, we achieved a full-model apples to apples quantization speedup
 # of about 7.7% on batch size 16 (677.28ms to 729.65ms). We can push this a
 # bit further by increasing the batch size and optimizing other parts of
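
For readers following the corrected setup instructions in the first hunk, the checkpoint wiring amounts to something like the sketch below. This is not part of the commit: the ``sam_model_registry`` call follows the segment-anything README, and ``<path>`` is a placeholder for the directory the checkpoint was downloaded into.

    # Minimal sketch of the checkpoint setup the fixed instructions describe
    # (not from this commit; adapt the placeholder path to your environment).
    from segment_anything import sam_model_registry

    # Directory used with:
    #   wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth --directory-prefix=<path>
    sam_checkpoint_base_path = "<path>"

    # Build the ViT-H SAM model from the downloaded checkpoint,
    # as documented in the segment-anything repository.
    sam = sam_model_registry["vit_h"](
        checkpoint=f"{sam_checkpoint_base_path}/sam_vit_h_4b8939.pth"
    )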
