Goal Alignment in LLM-Based User Simulators for Conversational AI

Jul 27, 2025 · Shuhaib Mehri, Xiaocheng Yang, Takyoung Kim, Gokhan Tur, Shikib Mehri, Dilek Hakkani-Tür

Figure: The goal-aligned user simulator response (right) considers its goal progression and reasons to generate a response that maintains alignment with the user's goal.

Type: Manuscript
Last updated on Jul 27, 2025