
Documentation needed for learning rate decay / schedule #4295

Closed
jmliu88 opened this issue Sep 21, 2017 · 7 comments
Labels
User 用于标记用户问题 (label used to mark user questions)

Comments

jmliu88 commented Sep 21, 2017

I could not find any documentation in the current version on how to use learning rate decay or a learning rate schedule. Could someone please add it?

Thanks

lcy-seso (Contributor) commented Sep 21, 2017

lcy-seso added the "User 用于标记用户问题" (user question) and "documentation" labels on Sep 21, 2017
ranqiu92 (Contributor)
sshilei commented Apr 28, 2019

A follow-up question: how can learning rate decay be used with SGD?

sshilei commented Apr 28, 2019

sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.2)
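
For reference, a minimal sketch of one way this could be wired up with the fluid 1.x API: the Variable returned by a decay layer such as fluid.layers.exponential_decay can be passed as the optimizer's learning_rate. The decay choice and hyperparameter values below are illustrative, not taken from this thread:

import paddle.fluid as fluid

# Halve the learning rate every 10000 steps (illustrative values).
decayed_lr = fluid.layers.exponential_decay(
    learning_rate=0.2,   # initial learning rate, matching the snippet above
    decay_steps=10000,
    decay_rate=0.5,
    staircase=True)
sgd_optimizer = fluid.optimizer.SGD(learning_rate=decayed_lr)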

qingqing01 (Contributor)

sshilei commented Apr 30, 2019

The parameters of that API decay the learning rate per step. Is there an interface that decays it per epoch?

sandyhouse commented

Try the one below: (http://paddlepaddle.org/documentation/docs/zh/1.4/api_cn/layers_cn.html#permalink-234-learning_rate_scheduler)

paddle.fluid.layers.cosine_decay(learning_rate, step_each_epoch, epochs)

Description: adjusts the learning rate using cosine decay.
When training a model, it is recommended to lower the learning rate as training progresses. With this method, the learning rate is decayed according to the following cosine schedule:

decayed_lr = learning_rate * 0.5 * (cos(epoch * math.pi / epochs) + 1)

Parameters:

  • learning_rate (Variable | float) - the initial learning rate.

  • step_each_epoch (int) - the number of steps in one epoch.

  • epochs - the total number of epochs.

Code example:
base_lr = 0.1
lr = fluid.layers.cosine_decay(learning_rate=base_lr, step_each_epoch=10000, epochs=120)
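
To tie this back to the earlier SGD question, a minimal sketch (assuming the fluid 1.x API; the hyperparameter values are illustrative) of feeding the cosine-decayed rate into the optimizer:

import paddle.fluid as fluid

base_lr = 0.1
# cosine_decay returns a Variable holding the decayed learning rate;
# it can be passed directly as the optimizer's learning_rate.
lr = fluid.layers.cosine_decay(
    learning_rate=base_lr, step_each_epoch=10000, epochs=120)
sgd_optimizer = fluid.optimizer.SGD(learning_rate=lr)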
