[Refactor] Remove non-ascii characters from code base (#928)
vmoens authored Jul 30, 2024
1 parent 484a045 commit 3879e76
Showing 5 changed files with 25 additions and 25 deletions.
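For context, the kind of check this refactor enables can be sketched in a few lines. The script below is an illustrative sketch only; the root path and the restriction to `.py` files are assumptions, not part of the commit.

```python
# Illustrative sketch: scan .py files for non-ASCII characters, printing
# file, line, and column. The root path is an assumption for this example.
from pathlib import Path


def find_non_ascii(root: str = ".") -> None:
    """Report every non-ASCII character in Python files under `root`."""
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(
            path.read_text(encoding="utf-8").splitlines(), start=1
        ):
            for col, char in enumerate(line, start=1):
                if ord(char) > 127:
                    print(f"{path}:{lineno}:{col}: {char!r}")


if __name__ == "__main__":
    find_non_ascii("tensordict")
```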
32 changes: 16 additions & 16 deletions tensordict/base.py
@@ -529,10 +529,10 @@ def mean(
If integer or tuple of integers, `mean` is called upon the dimension specified if
and only if this dimension is compatible with the tensordict
shape.
-keepdim (bool) whether the output tensor has dim retained or not.
+keepdim (bool): whether the output tensor has dim retained or not.
Keyword Args:
-dtype (torch.dtype, optional) the desired data type of returned tensor.
+dtype (torch.dtype, optional): the desired data type of returned tensor.
If specified, the input tensor is cast to dtype before the operation is performed.
This is useful for preventing data type overflows. Default: ``None``.
reduce (bool, optional): if ``True``, the reduction will occur across all TensorDict values
@@ -567,10 +567,10 @@ def nanmean(
If integer or tuple of integers, `mean` is called upon the dimension specified if
and only if this dimension is compatible with the tensordict
shape.
-keepdim (bool) whether the output tensor has dim retained or not.
+keepdim (bool): whether the output tensor has dim retained or not.
Keyword Args:
-dtype (torch.dtype, optional) the desired data type of returned tensor.
+dtype (torch.dtype, optional): the desired data type of returned tensor.
If specified, the input tensor is cast to dtype before the operation is performed.
This is useful for preventing data type overflows. Default: ``None``.
reduce (bool, optional): if ``True``, the reduction will occur across all TensorDict values
@@ -602,10 +602,10 @@ def prod(
If integer or tuple of integers, `prod` is called upon the dimension specified if
and only if this dimension is compatible with the tensordict
shape.
-keepdim (bool) whether the output tensor has dim retained or not.
+keepdim (bool): whether the output tensor has dim retained or not.
Keyword Args:
-dtype (torch.dtype, optional) the desired data type of returned tensor.
+dtype (torch.dtype, optional): the desired data type of returned tensor.
If specified, the input tensor is cast to dtype before the operation is performed.
This is useful for preventing data type overflows. Default: ``None``.
reduce (bool, optional): if ``True``, the reduction will occur across all TensorDict values
@@ -646,10 +646,10 @@ def sum(
If integer or tuple of integers, `sum` is called upon the dimension specified if
and only if this dimension is compatible with the tensordict
shape.
-keepdim (bool) whether the output tensor has dim retained or not.
+keepdim (bool): whether the output tensor has dim retained or not.
Keyword Args:
-dtype (torch.dtype, optional) the desired data type of returned tensor.
+dtype (torch.dtype, optional): the desired data type of returned tensor.
If specified, the input tensor is cast to dtype before the operation is performed.
This is useful for preventing data type overflows. Default: ``None``.
reduce (bool, optional): if ``True``, the reduction will occur across all TensorDict values
@@ -681,10 +681,10 @@ def nansum(
If integer or tuple of integers, `sum` is called upon the dimension specified if
and only if this dimension is compatible with the tensordict
shape.
-keepdim (bool) whether the output tensor has dim retained or not.
+keepdim (bool): whether the output tensor has dim retained or not.
Keyword Args:
-dtype (torch.dtype, optional) the desired data type of returned tensor.
+dtype (torch.dtype, optional): the desired data type of returned tensor.
If specified, the input tensor is cast to dtype before the operation is performed.
This is useful for preventing data type overflows. Default: ``None``.
reduce (bool, optional): if ``True``, the reduction will occur across all TensorDict values
@@ -716,11 +716,11 @@ def std(
If integer or tuple of integers, `std` is called upon the dimension specified if
and only if this dimension is compatible with the tensordict
shape.
-keepdim (bool) whether the output tensor has dim retained or not.
+keepdim (bool): whether the output tensor has dim retained or not.
Keyword Args:
correction (int): difference between the sample size and sample degrees of freedom.
-Defaults to Bessel’s correction, correction=1.
+Defaults to Bessel's correction, correction=1.
reduce (bool, optional): if ``True``, the reduction will occur across all TensorDict values
and a single reduced tensor will be returned.
Defaults to ``False``.
@@ -750,11 +750,11 @@ def var(
If integer or tuple of integers, `var` is called upon the dimension specified if
and only if this dimension is compatible with the tensordict
shape.
-keepdim (bool) whether the output tensor has dim retained or not.
+keepdim (bool): whether the output tensor has dim retained or not.
Keyword Args:
correction (int): difference between the sample size and sample degrees of freedom.
-Defaults to Bessel’s correction, correction=1.
+Defaults to Bessel's correction, correction=1.
reduce (bool, optional): if ``True``, the reduction will occur across all TensorDict values
and a single reduced tensor will be returned.
Defaults to ``False``.
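The reduction docstrings above all share one signature, so a single hedged sketch covers them; the key names and shapes below are illustrative assumptions, not the library's examples.

```python
# Minimal sketch of the reduction API documented above; key names and
# shapes are illustrative assumptions.
import torch
from tensordict import TensorDict

td = TensorDict(
    {"a": torch.rand(3, 4), "b": torch.rand(3, 4, 5)},
    batch_size=[3, 4],
)

# Per-key reduction along a batch dimension, keeping the reduced dim.
td_mean = td.mean(dim=0, keepdim=True)  # batch_size becomes [1, 4]

# With reduce=True, a single reduced tensor is returned across all values.
total = td.sum(reduce=True)
```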
@@ -2611,7 +2611,7 @@ def _check_dim_name(self, name):
def refine_names(self, *names) -> T:
"""Refines the dimension names of self according to names.
-Refining is a special case of renaming that “lifts” unnamed dimensions.
+Refining is a special case of renaming that "lifts" unnamed dimensions.
A None dim can be refined to have any name; a named dim can only be
refined to have the same name.
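A small sketch of the refinement rule described above; the dimension names used are assumptions for illustration.

```python
# Sketch of refine_names per the docstring above: None dims accept any
# name, while an already-named dim can only keep its name.
import torch
from tensordict import TensorDict

td = TensorDict({"x": torch.rand(3, 4, 5)}, batch_size=[3, 4])
td = td.refine_names("batch", "time")  # both dims were None: allowed
td = td.refine_names("batch", "time")  # re-refining to the same names: allowed
# td.refine_names("batch", "feature")  # would fail: "time" cannot be renamed
```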
@@ -9970,7 +9970,7 @@ def requires_grad(self) -> bool:
return any(v.requires_grad for v in self.values())

def requires_grad_(self, requires_grad=True) -> T:
-"""Change if autograd should record operations on this tensor: sets this tensor’s requires_grad attribute in-place.
+"""Change if autograd should record operations on this tensor: sets this tensor's requires_grad attribute in-place.
Returns this tensordict.
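A hedged sketch of the in-place toggle documented above; key names are assumptions.

```python
# Sketch of requires_grad_ per the docstring above: sets requires_grad on
# the tensordict's leaves in-place and returns the same tensordict.
import torch
from tensordict import TensorDict

params = TensorDict(
    {"weight": torch.rand(4, 3), "bias": torch.rand(4)}, batch_size=[]
).requires_grad_()

# The requires_grad property reflects the leaves, per the code above.
assert params.requires_grad
```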
2 changes: 1 addition & 1 deletion tutorials/dummy.py
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
"""
-The early bird gets the worm — which is what he deserves
+The early bird gets the worm - which is what he deserves
========================================================
"""

4 changes: 2 additions & 2 deletions tutorials/sphinx_tuto/tensorclass_fashion.py
@@ -49,7 +49,7 @@
# structure of the data you want to store is fixed and predictable.
#
# As well as specifying the contents, we can also encapsulate related
-# logic as custom methods when defining the class. In this case we’ll
+# logic as custom methods when defining the class. In this case we'll
# write a ``from_dataset`` classmethod that takes a dataset as input and
# creates a tensorclass containing the data from the dataset. We create
# memory-mapped tensors to hold the data. This will allow us to
@@ -89,7 +89,7 @@ def from_dataset(cls, dataset, device=None):
# DataLoaders
# ----------------
#
-# We’ll create DataLoaders from the ``torchvision``-provided Datasets, as
+# We'll create DataLoaders from the ``torchvision``-provided Datasets, as
# well as from our memory-mapped TensorDicts.
#
# Since ``TensorDict`` implements ``__len__`` and ``__getitem__`` (and
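The pattern these tutorial hunks describe looks roughly like the sketch below; the field names, dtypes, and shape handling are assumptions drawn from the surrounding text, not a verbatim copy of the tutorial.

```python
# Rough sketch of the tutorial's pattern: a tensorclass whose from_dataset
# classmethod pre-allocates memory-mapped tensors and fills them from a
# dataset. Field names, shapes, and dtypes are illustrative assumptions.
import torch
from tensordict import MemoryMappedTensor, tensorclass


@tensorclass
class FashionMNISTData:
    images: torch.Tensor
    targets: torch.Tensor

    @classmethod
    def from_dataset(cls, dataset, device=None):
        n = len(dataset)
        data = cls(
            images=MemoryMappedTensor.empty(
                (n, *dataset[0][0].squeeze().shape), dtype=torch.float32
            ),
            targets=MemoryMappedTensor.empty((n,), dtype=torch.int64),
            batch_size=[n],
            device=device,
        )
        for i, (image, target) in enumerate(dataset):
            data[i] = cls(images=image, targets=target, batch_size=[])
        return data
```

Because ``TensorDict`` (and tensorclasses) implement ``__len__`` and ``__getitem__``, an instance like this can plausibly be handed to a ``DataLoader`` with a pass-through ``collate_fn``.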
6 changes: 3 additions & 3 deletions tutorials/sphinx_tuto/tensorclass_imagenet.py
@@ -19,7 +19,7 @@
# storage and on-device batched transformation, we obtain a 10x speedup in data-loading
# over regular torch + torchvision pipelines.
#
-# We’ll use the same subset of imagenet used in `this transfer learning
+# We'll use the same subset of imagenet used in `this transfer learning
# tutorial <https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html>`__,
# though we also give results of our experiments running the same code on ImageNet.
#
@@ -105,8 +105,8 @@
# sphinx_gallery_end_ignore

##############################################################################
-# We’ll also create a dataset of the raw training data that simply resizes
-# the image to a common size and converts to tensor. We’ll use this to
+# We'll also create a dataset of the raw training data that simply resizes
+# the image to a common size and converts to tensor. We'll use this to
# load the data into memory-mapped tensors. The random transformations
# need to be different each time we iterate through the data, so they
# cannot be pre-computed. We also do not scale the data yet so that we can set the
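The "raw data" stage described in the hunk above might look like the following sketch; the resize target and data path are assumptions, and ``PILToTensor`` is chosen because it keeps unscaled ``uint8`` values, matching the "do not scale yet" note.

```python
# Sketch of the deterministic preprocessing described above: resize to a
# common size and convert to tensor, without scaling or random transforms.
# The target size and data path are illustrative assumptions.
import torchvision.transforms as T
from torchvision.datasets import ImageFolder

raw_transform = T.Compose(
    [
        T.Resize((256, 256)),  # common size so images can share one memmap
        T.PILToTensor(),       # uint8, unscaled; scaling happens later
    ]
)
raw_train = ImageFolder("data/imagenet/train", transform=raw_transform)
```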
6 changes: 3 additions & 3 deletions tutorials/sphinx_tuto/tensordict_memory.py
@@ -154,7 +154,7 @@
# ├── a.memmap
# ├── a.meta.pt
# ├── b
-# │   ├── c.memmap
-# │   ├── c.meta.pt
-# │   └── meta.pt
+# │ ├── c.memmap
+# │ ├── c.meta.pt
+# │ └── meta.pt
# └── meta.pt
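For reference, a nested tensordict saved with ``memmap_`` produces a layout like the tree above; the sketch below assumes illustrative shapes and a scratch directory.

```python
# Sketch of the call behind the directory tree above: saving a nested
# tensordict with memmap_. Shapes and the target path are assumptions.
import torch
from tensordict import TensorDict

td = TensorDict(
    {"a": torch.rand(3), "b": {"c": torch.rand(3)}},
    batch_size=[3],
)
td.memmap_("./memmap_dir")  # writes a.memmap, a.meta.pt, b/, and meta.pt
```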
