From f28ac7b442e1b6c085793c12d76bdf7c9e315ede Mon Sep 17 00:00:00 2001
From: huangshiyu
Date: Fri, 29 Dec 2023 21:54:25 +0800
Subject: [PATCH] update readme

---
 README.md    | 15 +++++++--------
 README_zh.md | 15 +++++++--------
 2 files changed, 14 insertions(+), 16 deletions(-)

diff --git a/README.md b/README.md
index 02eec57..4c0e6a4 100644
--- a/README.md
+++ b/README.md
@@ -336,7 +336,7 @@ If you are using OpenRL in your research project, you are also welcome to join t
 
 - Join the [slack](https://join.slack.com/t/openrlhq/shared_invite/zt-1tqwpvthd-Eeh0IxQ~DIaGqYXoW2IUQg) group to discuss OpenRL usage and development with us.
 - Join the [Discord](https://discord.gg/qMbVT2qBhr) group to discuss OpenRL usage and development with us.
-- Send an E-mail to: [huangshiyu@4paradigm.com](huangshiyu@4paradigm.com)
+- Send an E-mail to: [huangsy1314@163.com](huangsy1314@163.com)
 - Join the [GitHub Discussion](https://github.com/orgs/OpenRL-Lab/discussions).
 
 The OpenRL framework is still under continuous development and documentation.
@@ -354,7 +354,7 @@ At present, OpenRL is maintained by the following maintainers:
 - Yiwen Sun([@YiwenAI](https://github.com/YiwenAI))
 
 Welcome more contributors to join our maintenance team (send an E-mail
-to [huangshiyu@4paradigm.com](huangshiyu@4paradigm.com)
+to [huangsy1314@163.com](huangsy1314@163.com)
 to apply for joining the OpenRL team).
 
 ## Supporters
@@ -378,12 +378,11 @@ to apply for joining the OpenRL team).
 
 If our work has been helpful to you, please feel free to cite us:
 
 ```latex
-@misc{openrl2023,
-    title={OpenRL},
-    author={OpenRL Contributors},
-    publisher = {GitHub},
-    howpublished = {\url{https://github.com/OpenRL-Lab/openrl}},
-    year={2023},
+@article{huang2023openrl,
+    title={OpenRL: A Unified Reinforcement Learning Framework},
+    author={Huang, Shiyu and Chen, Wentse and Sun, Yiwen and Bie, Fuqing and Tu, Wei-Wei},
+    journal={arXiv preprint arXiv:2312.16189},
+    year={2023}
 }
 ```
diff --git a/README_zh.md b/README_zh.md
index 9f72857..42565cc 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -295,7 +295,7 @@ openrl --mode train --env CartPole-v1
 
 - 加入 [slack](https://join.slack.com/t/openrlhq/shared_invite/zt-1tqwpvthd-Eeh0IxQ~DIaGqYXoW2IUQg) 群组,与我们一起讨论OpenRL的使用和开发。
 - 加入 [Discord](https://discord.gg/qMbVT2qBhr) 群组,与我们一起讨论OpenRL的使用和开发。
-- 发送邮件到: [huangshiyu@4paradigm.com](huangshiyu@4paradigm.com)
+- 发送邮件到: [huangsy1314@163.com](huangsy1314@163.com)
 - 加入 [GitHub Discussion](https://github.com/orgs/OpenRL-Lab/discussions)
 
 OpenRL框架目前还在持续开发和文档建设,欢迎加入我们让该项目变得更好:
@@ -310,7 +310,7 @@ OpenRL框架目前还在持续开发和文档建设,欢迎加入我们让该
 
 - [Shiyu Huang](https://huangshiyu13.github.io/)([@huangshiyu13](https://github.com/huangshiyu13))
 - Wenze Chen([@Chen001117](https://github.com/Chen001117))
 
-欢迎更多的贡献者加入我们的维护团队 (发送邮件到[huangshiyu@4paradigm.com](huangshiyu@4paradigm.com)申请加入OpenRL团队)。
+欢迎更多的贡献者加入我们的维护团队 (发送邮件到[huangsy1314@163.com](huangsy1314@163.com)申请加入OpenRL团队)。
 
 ## 支持者
@@ -333,12 +333,11 @@ OpenRL框架目前还在持续开发和文档建设,欢迎加入我们让该
 
 如果我们的工作对你有帮助,欢迎引用我们:
 
 ```latex
-@misc{openrl2023,
-    title={OpenRL},
-    author={OpenRL Contributors},
-    publisher = {GitHub},
-    howpublished = {\url{https://github.com/OpenRL-Lab/openrl}},
-    year={2023},
+@article{huang2023openrl,
+    title={OpenRL: A Unified Reinforcement Learning Framework},
+    author={Huang, Shiyu and Chen, Wentse and Sun, Yiwen and Bie, Fuqing and Tu, Wei-Wei},
+    journal={arXiv preprint arXiv:2312.16189},
+    year={2023}
 }
 ```