[Event/Seminar] (25.11.12.) Nudging: Inference-time Alignment of LLMs via Guided Decoding (Yu Fei @ University of California)
- Department of Immersive Media Engineering
- 2025-10-28
Hello.
An online seminar will be held as detailed below. Your interest and participation are very welcome.
Nudging: Inference-time Alignment of LLMs via Guided Decoding
Speaker: Yu Fei @ University of California, Irvine
Time: 10:30 - 11:30, Nov 12th, 2025
Location: Online (https://hli.skku.edu/InvitedTalk251112)
Language: English speech & English slides
Abstract:
Large language models (LLMs) require alignment to effectively and safely follow user instructions. This process necessitates training an aligned version for every base model, resulting in significant computational overhead. In this work, we propose NUDGING, a simple, training-free algorithm that aligns any base model at inference time using a small aligned model. NUDGING is motivated by recent findings that alignment primarily alters the model's behavior on a small subset of stylistic tokens (e.g., discourse markers). We find that base models are significantly more uncertain when generating these tokens. Building on this insight, NUDGING employs a small aligned model to generate nudging tokens to guide the base model's output during decoding when the base model's uncertainty is high, with only a minor additional inference overhead. We evaluate NUDGING across 3 model families on a diverse range of open-instruction tasks. Without any training, nudging a large base model with a 7x-14x smaller aligned model achieves zero-shot performance comparable to, and sometimes surpassing, that of large aligned models. By operating at the token level, NUDGING enables off-the-shelf collaboration between model families. For instance, nudging Gemma-2-27b with Llama-2-7b-chat outperforms Llama-2-70b-chat on various tasks. Overall, our work offers a modular and cost-efficient solution to LLM alignment.
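
To make the decoding mechanism described in the abstract concrete, below is a minimal, simplified sketch of uncertainty-gated token selection in the spirit of NUDGING. It assumes the base and aligned models share one tokenizer, gates on a per-token basis, and uses an illustrative probability threshold; the model names and the threshold value are assumptions for illustration, not the speaker's implementation.

```python
# Minimal sketch of uncertainty-gated decoding in the spirit of NUDGING.
# Assumptions (not the speaker's implementation): both models share one
# tokenizer, gating is per token, and the 0.4 threshold is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_NAME = "meta-llama/Llama-2-13b-hf"         # hypothetical large base model
ALIGNED_NAME = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical small aligned model

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(BASE_NAME)
base = AutoModelForCausalLM.from_pretrained(BASE_NAME).to(device).eval()
aligned = AutoModelForCausalLM.from_pretrained(ALIGNED_NAME).to(device).eval()


@torch.no_grad()
def nudged_generate(prompt: str, max_new_tokens: int = 128, threshold: float = 0.4) -> str:
    """Greedy decoding with the base model; whenever its top-token probability
    falls below `threshold`, take the small aligned model's token instead."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    for _ in range(max_new_tokens):
        base_probs = torch.softmax(base(ids).logits[0, -1], dim=-1)
        top_prob, top_id = base_probs.max(dim=-1)
        if top_prob.item() >= threshold:
            next_id = top_id  # base model is confident: keep its own token
        else:
            # base model is uncertain: let the aligned model supply the token
            next_id = aligned(ids).logits[0, -1].argmax(dim=-1)
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)


print(nudged_generate("Explain why the sky is blue."))
```

Note that for the cross-family collaboration mentioned in the abstract (e.g., Gemma-2 nudged by Llama-2-chat), the shared-tokenizer assumption above does not hold, so the handoff between models would need to happen at the text level rather than via shared token ids.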
Bio:
Yu Fei is a third-year Ph.D. student in Computer Science at the University of California, Irvine, advised by Sameer Singh. He received his M.S. in Computer Science from ETH Zürich, where he was advised by Mrinmaya Sachan, and his B.S. in Theoretical and Applied Mechanics from Peking University, where he worked with Yizhou Wang. He has also conducted research as a visiting intern at EPFL with Antoine Bosselut, and as an applied scientist intern at Amazon (Rufus). His research focuses on natural language reasoning and efficient adaptation and training of large language models (LLMs).