Commit 82cc490: fix readme
jiqing-feng committed Dec 27, 2023
1 parent 9164d9a

Showing 2 changed files with 3 additions and 4 deletions.
3 changes: 1 addition & 2 deletions examples/textual_inversion/README.md
@@ -97,8 +97,7 @@ to a number larger than one, *e.g.*:
 **CPU**: If you run on Intel Gen 4th Xeon (and later), use ipex and bf16 will get a significant acceleration.
 You need to add `--mixed_precision="bf16"` and `--use_ipex` in the command and install the following package:
 ```
-pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cpu
-pip install intel-extension-for-pytorch==2.0.0
+pip install intel-extension-for-pytorch
 ```
 
 The saved textual inversion vectors will then be larger in size compared to the default case.
4 changes: 2 additions & 2 deletions examples/textual_inversion/textual_inversion.py
@@ -348,8 +348,7 @@ def parse_args():
         "--use_ipex",
         action="store_true",
         help=(
-            "Whether or not to use ipex to accelerate the training process,"
-            "requires Intel Gen 3rd Xeon (and later)"
+            "Whether or not to use ipex to accelerate the training process," "requires Intel Gen 3rd Xeon (and later)"
         ),
     )
     parser.add_argument(
@@ -789,6 +788,7 @@ def main():
 
     if args.use_ipex:
         import intel_extension_for_pytorch as ipex
+
         unet = ipex.optimize(unet, dtype=weight_dtype)
         vae = ipex.optimize(vae, dtype=weight_dtype)
 
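For readers adapting the `ipex.optimize` calls from the diff above to their own scripts, a minimal sketch of applying the optimization only when the package is available. The helper name `maybe_ipex_optimize` and the silent fallback are assumptions for illustration, not part of this commit:

```python
import importlib.util


def maybe_ipex_optimize(model, dtype=None):
    # Hypothetical helper (not from this commit): apply ipex.optimize only
    # when intel_extension_for_pytorch is importable, so the same script
    # still runs on machines without ipex installed.
    if importlib.util.find_spec("intel_extension_for_pytorch") is None:
        return model  # fall back to the unoptimized model
    import intel_extension_for_pytorch as ipex

    return ipex.optimize(model, dtype=dtype)
```

Used the same way as in the diff, e.g. `unet = maybe_ipex_optimize(unet, dtype=weight_dtype)`, this mirrors the guarded `if args.use_ipex:` block while degrading gracefully when the dependency is missing.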
