Shangmin Guo@University of Edinburgh | [email protected]
Wei Xiong@University of Illinois Urbana-Champaign | [email protected]
Authors are listed in alphabetical order.
Thanks to Hanze Dong@Salesforce, Tianqi Liu@Google, Wei Shen@Ernie code team, and Haoxiang Wang@UIUC for insightful feedback on an early draft of this blog.
Date: Mar 26, 2024
To readers:
- Leave a comment or send an email if you feel any part of this article can be improved!
TL;DR:
Reinforcement learning from human feedback (RLHF) is a leading technique for adapting the outputs of generative models to human preferences, and it has achieved tremendous success in ChatGPT by OpenAI, Claude by Anthropic, and Gemini by Google. Inspired by these successes, preference optimization (a slightly more general term that also covers RL-free algorithms) has attracted significant attention in the past year. In this blog, we aim to present a comprehensive introduction to the frontier research in this exciting field, explore the ongoing challenges, and discuss interesting research problems for the future.
Table of Contents
- Prerequisites
- Alignment Objective
- Pre-training and Instruction-following Fine-tuning
- Preference Data Collection, Reward, and Bradley-Terry Model
- On/off-policy and On/off-line Learning in the Context of Alignment
- RLHF: The Classic Framework to Make ChatGPT
- Instruct-GPT: A Three-stage Approach
- Online Iterative RLHF
- RL-Free Framework: SLiC, DPO, IPO, and More
- Direct Preference Optimization (DPO) and Online Variants
- Identity-preference Optimization (IPO)
- Sequence Likelihood Calibration (SLiC)
- Comparison between DPO, IPO and SLiC
- Rejection Sampling in RLHF
- Miscellaneous
- Reward Modeling in RLHF
- Evaluation in RLHF
- Theoretical Understanding of RLHF: Why Should We Choose Online RLHF/DPO?
- Alignment without External Preference Signals
- Beyond the Bradley-Terry Model
- Nash Learning: Dropping the Reward Model
- Multi-objective Learning and Human-preference-aware Alignment
- Pointwise Feedback - Kahneman-Tversky Optimization
- Other Research Directions; End note