[Doctest] Setup, quicktour and task_summary (huggingface#13078)
* Fix doctests for quicktour

* Adapt causal LM example

* Remove space

* Fix until summarization

* End of task summary

* Style

* With last changes in quicktour
sgugger authored Aug 11, 2021
1 parent bfc8850 commit 83424ad
Showing 3 changed files with 184 additions and 148 deletions.
docs/source/quicktour.rst (25 changes: 13 additions & 12 deletions)
@@ -65,7 +65,7 @@ make them readable. For instance:
.. code-block::
>>> classifier('We are very happy to show you the 🤗 Transformers library.')
- [{'label': 'POSITIVE', 'score': 0.9997795224189758}]
+ [{'label': 'POSITIVE', 'score': 0.99978}]
That's encouraging! You can use it on a list of sentences, which will be preprocessed then fed to the model as a
`batch`, returning a list of dictionaries like this one:
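
For context, a minimal sketch of that batched call; the second sentence and both scores are illustrative, and exact values depend on the downloaded checkpoint:

.. code-block::

    >>> results = classifier(["We are very happy to show you the 🤗 Transformers library.",
    ...                       "We hope you don't hate it."])
    >>> for result in results:
    ...     print(f"label: {result['label']}, score: {round(result['score'], 4)}")
    label: POSITIVE, score: 0.9998
    label: NEGATIVE, score: 0.5309
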
@@ -195,7 +195,8 @@ sequence:
.. code-block::
>>> print(inputs)
- {'input_ids': [101, 2057, 2024, 2200, 3407, 2000, 2265, 2017, 1996, 100, 19081, 3075, 1012, 102], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
+ {'input_ids': [101, 2057, 2024, 2200, 3407, 2000, 2265, 2017, 1996, 100, 19081, 3075, 1012, 102],
+  'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
You can pass a list of sentences directly to your tokenizer. If your goal is to send them through your model as a
batch, you probably want to pad them all to the same length, truncate them to the maximum length the model can accept,
and get tensors back.
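
A minimal sketch of such a call, assuming the PyTorch backend; ``padding``, ``truncation``, ``max_length`` and ``return_tensors`` are the standard tokenizer arguments:

.. code-block::

    >>> pt_batch = tokenizer(
    ...     ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
    ...     padding=True,
    ...     truncation=True,
    ...     max_length=512,
    ...     return_tensors="pt",
    ... )
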
@@ -260,12 +261,12 @@ objects are described in greater detail :doc:`here <main_classes/output>`. For n
>>> ## PYTORCH CODE
>>> print(pt_outputs)
SequenceClassifierOutput(loss=None, logits=tensor([[-4.0833, 4.3364],
- [ 0.0818, -0.0418]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
+         [ 0.0818, -0.0418]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
>>> ## TENSORFLOW CODE
>>> print(tf_outputs)
TFSequenceClassifierOutput(loss=None, logits=<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
- array([[-4.0832963 , 4.3364143 ],
-        [ 0.081807 , -0.04178282]], dtype=float32)>, hidden_states=None, attentions=None)
+ array([[-4.0833 , 4.3364 ],
+        [ 0.0818, -0.0418]], dtype=float32)>, hidden_states=None, attentions=None)
Notice how the output object has a ``logits`` attribute. You can use this to access the model's final activations.
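
For instance, a minimal sketch of reading class predictions off those logits (an argmax over the last dimension, reusing the ``pt_outputs`` from above):

.. code-block::

    >>> import torch
    >>> print(torch.argmax(pt_outputs.logits, dim=-1))
    tensor([1, 0])
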

@@ -283,7 +284,7 @@ Let's apply the SoftMax activation to get predictions.
>>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
>>> ## TENSORFLOW CODE
>>> import tensorflow as tf
- >>> tf.nn.softmax(tf_outputs.logits, axis=-1)
+ >>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1)
We can see we get the numbers from before:

@@ -292,8 +293,8 @@ We can see we get the numbers from before:
>>> ## TENSORFLOW CODE
>>> print(tf_predictions)
tf.Tensor(
- [[2.2042994e-04 9.9977952e-01]
-  [5.3086340e-01 4.6913657e-01]], shape=(2, 2), dtype=float32)
+ [[2.2043e-04 9.9978e-01]
+  [5.3086e-01 4.6914e-01]], shape=(2, 2), dtype=float32)
>>> ## PYTORCH CODE
>>> print(pt_predictions)
tensor([[2.2043e-04, 9.9978e-01],
@@ -309,14 +310,14 @@ attribute:
>>> pt_outputs = pt_model(**pt_batch, labels = torch.tensor([1, 0]))
>>> print(pt_outputs)
SequenceClassifierOutput(loss=tensor(0.3167, grad_fn=<NllLossBackward>), logits=tensor([[-4.0833, 4.3364],
- [ 0.0818, -0.0418]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
+         [ 0.0818, -0.0418]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
>>> ## TENSORFLOW CODE
>>> import tensorflow as tf
>>> tf_outputs = tf_model(tf_batch, labels = tf.constant([1, 0]))
>>> print(tf_outputs)
- TFSequenceClassifierOutput(loss=<tf.Tensor: shape=(2,), dtype=float32, numpy=array([2.2051287e-04, 6.3326043e-01], dtype=float32)>, logits=<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
- array([[-4.0832963 , 4.3364143 ],
-        [ 0.081807 , -0.04178282]], dtype=float32)>, hidden_states=None, attentions=None)
+ TFSequenceClassifierOutput(loss=<tf.Tensor: shape=(2,), dtype=float32, numpy=array([2.2051e-04, 6.3326e-01], dtype=float32)>, logits=<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
+ array([[-4.0833 , 4.3364 ],
+        [ 0.0818, -0.0418]], dtype=float32)>, hidden_states=None, attentions=None)
Models are standard `torch.nn.Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>`__ or `tf.keras.Model
<https://www.tensorflow.org/api_docs/python/tf/keras/Model>`__ so you can use them in your usual training loop.
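
As an illustration, a minimal PyTorch training-loop sketch; the ``train_dataloader``, the optimizer choice and the learning rate are hypothetical, not part of this commit:

.. code-block::

    >>> from torch.optim import AdamW
    >>> optimizer = AdamW(pt_model.parameters(), lr=5e-5)  # assumed hyperparameters
    >>> for batch in train_dataloader:  # hypothetical dataloader of tokenized batches with 'labels'
    ...     loss = pt_model(**batch).loss  # labels are present, so the output carries a loss
    ...     loss.backward()
    ...     optimizer.step()
    ...     optimizer.zero_grad()
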
