Installation | Usage | Examples | Benchmarking | Citation
- 26/01/2025: We support reinforcement learning with verifiable rewards (RLVR) for math reasoning.
Oat 🌾 is a simple yet efficient framework for running online LLM alignment algorithms. Its key features include:
- High Efficiency: Oat implements a distributed Actor-Learner-Oracle architecture, with each component optimized using state-of-the-art tools.
- Simplified Workflow: Oat simplifies the experimental pipeline of LLM alignment. With an Oracle served online, we can flexibly query it for preference data labeling as well as anytime model evaluation. All you need is to launch experiments and monitor real-time learning curves (e.g., win rate) on wandb (see reproduced results), with no need for manual training, checkpointing and loading for evaluation.
- Oracle Simulation: Oat provides a diverse set of oracles to simulate preference/reward/verification feedback.
  - Verifiable rewards are supported via rule-based functions (see the sketch after this list).
  - Lightweight reward models run within the actor's process, enabling quick testing on as few as two GPUs.
  - Larger and more capable reward models can be served remotely, harnessing additional compute and memory resources.
  - LLM-as-a-judge is supported by querying the OpenAI API for model-based pairwise ranking.
- Ease of Use: Oat's modular structure allows researchers to easily inherit and modify existing classes, enabling rapid prototyping and experimentation with new algorithms.
- Cutting-Edge Algorithms: Oat implements state-of-the-art online algorithms, fostering innovation and fair benchmarking.
  - PPO (online RL) for math reasoning.
  - Online DPO/SimPO/IPO for online preference learning.
  - Online exploration (active alignment) algorithms, including SEA, APL and XPO.
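
To make the rule-based verifiable rewards mentioned above concrete, here is a minimal, self-contained sketch of such a reward function. The function name and the final-number matching rule are ours for illustration only and are not oat's actual oracle interface:

```python
import re

def math_answer_reward(response: str, ground_truth: str) -> float:
    """Rule-based verifiable reward (illustrative only, not oat's API):
    return 1.0 if the last number in the model response matches the
    ground-truth answer, else 0.0."""
    # Grab every number-like token in the response, e.g. "7", "5", "12".
    matches = re.findall(r"-?\d+(?:\.\d+)?", response)
    if not matches:
        return 0.0
    # Reward only an exact match on the final number the model produced.
    return 1.0 if matches[-1] == ground_truth.strip() else 0.0

# Example usage:
assert math_answer_reward("The sum is 7 + 5 = 12", "12") == 1.0
assert math_answer_reward("I believe the answer is 13", "12") == 0.0
```

Because the reward is a deterministic function of the response and the ground truth, the feedback can be checked exactly, which is what distinguishes RLVR-style signals from learned reward models.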
## Installation

In a Python environment with a supported version (>=3.8, <=3.10), you can install oat via PyPI:
```sh
pip install vllm==0.6.2 && pip install oat-llm
```
Alternatively, you can install oat in "editable" mode for local development:
```sh
git clone [email protected]:sail-sg/oat.git
cd oat
pip install vllm==0.6.2 && pip install -e .
```
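
After either installation route, a quick import check helps confirm the environment is set up. This assumes the PyPI package oat-llm is imported as `oat`, matching the repository name:

```python
# Post-install sanity check.
# Assumption: the `oat-llm` PyPI package exposes the `oat` module.
import oat  # noqa: F401  # raises ImportError if the install failed
import vllm

print("vllm version:", vllm.__version__)  # expected: 0.6.2, per the pin above
```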
## Benchmarking

The benchmarking compares oat with the online DPO implementation from huggingface/trl. Below, we outline the configurations used for oat and present the benchmarking results. Notably, oat 🌾 achieves up to 2.5x the computational efficiency of trl 🤗.
Please refer to Appendix C of our paper for a detailed discussion of the benchmarking methods and results.
## Citation

If you find this codebase useful for your research, please consider citing our work:
```bibtex
@misc{liu2025oat,
  author = {Zichen Liu and Changyu Chen and Chao Du and Wee Sun Lee and Min Lin},
  title = {OAT: A research-friendly framework for LLM online alignment},
  howpublished = {https://github.com/sail-sg/oat},
  year = {2025}
}

@article{liu2024sea,
  title = {Sample-Efficient Alignment for LLMs},
  author = {Zichen Liu and Changyu Chen and Chao Du and Wee Sun Lee and Min Lin},
  journal = {arXiv preprint arXiv:2411.01493},
  year = {2024}
}
```
## License

oat is distributed under the terms of the Apache-2.0 license.
## Acknowledgement

We thank the awesome open-source projects that have contributed to the development of oat.
## Disclaimer

This is not an official Sea Limited or Garena Online Private Limited product.