
Commit

Merge pull request #2325 from sanskarmodi8:issue#75194-fix
PiperOrigin-RevId: 674988489
copybara-github committed Sep 16, 2024
2 parents 7d04d5d + f8341de commit c841a6e
Showing 1 changed file with 4 additions and 5 deletions.
9 changes: 4 additions & 5 deletions site/en/tutorials/keras/overfit_and_underfit.ipynb
@@ -543,10 +543,10 @@
" model.summary()\n",
"\n",
" history = model.fit(\n",
" train_ds,\n",
" train_ds.map(lambda x, y: (x, tf.expand_dims(y, axis=-1))),\n",
" steps_per_epoch = STEPS_PER_EPOCH,\n",
" epochs=max_epochs,\n",
" validation_data=validate_ds,\n",
" validation_data=validate_ds.map(lambda x, y: (x, tf.expand_dims(y, axis=-1))),\n",
" callbacks=get_callbacks(name),\n",
" verbose=0)\n",
" return history"
@@ -977,7 +977,7 @@
"source": [
"`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network.\n",
"\n",
"That is why we're monitoring the `binary_crossentropy` directly. Because it doesn't have this regularization component mixed in.\n",
"That is why you need to monitor the `binary_crossentropy` directly. Because it doesn't have this regularization component mixed in.\n",
"\n",
"So, that same `\"Large\"` model with an `L2` regularization penalty performs much better:\n"
]
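
For context on the markdown change above: in Keras, an L2 penalty is attached per layer and folded into the reported loss, while a separately tracked binary_crossentropy metric stays free of it. A minimal sketch, assuming a kernel_regularizer setup like the tutorial's (the layer sizes are illustrative, not the exact "Large" architecture):

import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Each regularized Dense layer adds 0.001 * sum(w**2) to the training loss.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28,)),
    layers.Dense(512, activation='elu',
                 kernel_regularizer=regularizers.l2(0.001)),
    layers.Dense(1)
])

model.compile(
    optimizer='adam',
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    # The reported `loss` includes the L2 penalty; this metric does not,
    # which is why the text says to monitor binary_crossentropy directly.
    metrics=[tf.keras.metrics.BinaryCrossentropy(from_logits=True)])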
@@ -1228,10 +1228,9 @@
 }
 ],
 "metadata": {
-"accelerator": "GPU",
 "colab": {
 "name": "overfit_and_underfit.ipynb",
-"toc_visible": true
+"toc_visible": true
 },
 "kernelspec": {
 "display_name": "Python 3",
