From e5e33018b90fbd182a002aeeeb97fb7856b59f92 Mon Sep 17 00:00:00 2001
From: myjc1 <111404361+myjc1@users.noreply.github.com>
Date: Mon, 1 Apr 2024 03:35:23 +0800
Subject: [PATCH] Update softmax-regression-concise.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Currently d2l.torch only provides train_ch13; there is no train_ch3.
---
 chapter_linear-networks/softmax-regression-concise.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/chapter_linear-networks/softmax-regression-concise.md b/chapter_linear-networks/softmax-regression-concise.md
index 33f1d4452..f17ee0623 100644
--- a/chapter_linear-networks/softmax-regression-concise.md
+++ b/chapter_linear-networks/softmax-regression-concise.md
@@ -191,7 +191,7 @@ trainer = paddle.optimizer.SGD(learning_rate=0.1, parameters=net.parameters())
 ```{.python .input}
 #@tab all
 num_epochs = 10
-d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
+d2l.train_ch13(net, train_iter, test_iter, loss, num_epochs, trainer)
 ```
 
 和以前一样,这个算法使结果收敛到一个相当高的精度,而且这次的代码比之前更精简了。
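
A minimal sketch for checking the premise of this change before applying it: list which of the two training helpers the installed `d2l.torch` module actually exposes and print their signatures, so the notebook call can be matched to whatever exists in the local environment. This is not part of the patch; it only assumes that the PyTorch build of `d2l` is installed.

```python
import inspect

from d2l import torch as d2l  # assumes the PyTorch flavour of d2l is installed

# Report which training helpers this d2l version exposes, and with what
# signature, so the notebook call can be matched to what actually exists.
for name in ("train_ch3", "train_ch13"):
    fn = getattr(d2l, name, None)
    if fn is None:
        print(f"{name}: not found in d2l.torch")
    else:
        print(f"{name}{inspect.signature(fn)}")
```

If the printed signature of `train_ch13` takes its arguments in a different order than `train_ch3` (for example, `trainer` before `num_epochs`), the replacement line should be adjusted to match that signature.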