
Commit 1f48bf0
Merge branch 'main' into svekars-patch-28
svekars authored Oct 9, 2024
2 parents a108b66 + 1217b4c commit 1f48bf0
Showing 1 changed file with 5 additions and 5 deletions.

--- a/intermediate_source/scaled_dot_product_attention_tutorial.py
+++ b/intermediate_source/scaled_dot_product_attention_tutorial.py
@@ -244,7 +244,7 @@ def generate_rand_batch(

 ######################################################################
 # Using SDPA with ``torch.compile``
-# =================================
+# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 #
 # With the release of PyTorch 2.0, a new feature called
 # ``torch.compile()`` has been introduced, which can provide
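
For context, the section retitled above pairs SDPA with ``torch.compile``. A
minimal sketch of that pattern, assuming only the public ``torch.compile`` and
``scaled_dot_product_attention`` APIs (the module name ``SDPAModel`` and the
tensor shapes are illustrative, not taken from the tutorial):

import torch
import torch.nn.functional as F

class SDPAModel(torch.nn.Module):
    # A toy module whose forward pass is a single SDPA call.
    def forward(self, query, key, value):
        return F.scaled_dot_product_attention(query, key, value)

model = SDPAModel()
compiled_model = torch.compile(model)  # traces the module; can fuse ops around SDPA

# (batch, num_heads, seq_len, head_dim)
q = k = v = torch.randn(2, 8, 128, 64)
out = compiled_model(q, k, v)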
@@ -324,9 +324,9 @@ def generate_rand_batch(
 #
 
 ######################################################################
-# Using SDPA with attn_bias subclasses`
-# ==========================================
-#
+# Using SDPA with attn_bias subclasses
+# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
 # As of PyTorch 2.3, we have added a new submodule that contains tensor subclasses.
 # Designed to be used with ``torch.nn.functional.scaled_dot_product_attention``.
 # The module is named ``torch.nn.attention.bias`` and contains the following two
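
The hunk above cuts off before naming the two objects in
``torch.nn.attention.bias``. A hedged sketch of one common pattern with that
module, assuming the ``causal_lower_right`` factory available in PyTorch 2.3+
(the shapes here are illustrative):

import torch
import torch.nn.functional as F
from torch.nn.attention.bias import causal_lower_right

batch, num_heads, head_dim = 2, 8, 64
seq_len_q, seq_len_kv = 100, 128

q = torch.randn(batch, num_heads, seq_len_q, head_dim)
k = torch.randn(batch, num_heads, seq_len_kv, head_dim)
v = torch.randn(batch, num_heads, seq_len_kv, head_dim)

# The returned tensor subclass is passed as attn_mask; SDPA can then
# dispatch to fused causal kernels instead of materializing a dense mask.
attn_bias = causal_lower_right(seq_len_q, seq_len_kv)
out = F.scaled_dot_product_attention(q, k, v, attn_mask=attn_bias)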
@@ -394,7 +394,7 @@ def generate_rand_batch(

 ######################################################################
 # Conclusion
-# ==========
+# ~~~~~~~~~~~
 #
 # In this tutorial, we have demonstrated the basic usage of
 # ``torch.nn.functional.scaled_dot_product_attention``. We have shown how
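
The conclusion's recap is truncated above. As a companion to the "basic usage"
it refers to, a minimal sketch of the plain call plus explicit backend
selection, assuming the ``torch.nn.attention.sdpa_kernel`` context manager
from PyTorch 2.3+ (shapes are illustrative; MATH is chosen here only because
it is the portable fallback that runs on any device):

import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

q = k = v = torch.randn(2, 8, 128, 64)

# Default call: PyTorch picks the fastest backend available for the inputs.
out = F.scaled_dot_product_attention(q, k, v)

# Restrict dispatch to a single explicit backend inside the context.
with sdpa_kernel(SDPBackend.MATH):
    out_math = F.scaled_dot_product_attention(q, k, v)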
