Chenxi Whitehouse
Hi! I am Chenxi, a Research Scientist at Meta, focusing on Fundamental AI Research for LLMs. I work closely with Jason Weston on the FAIR Alignment team on post-training and reasoning, leading projects in Reinforcement Learning, LLM-as-a-Judge, and Reward Modeling. I also collaborate with the Llama post-training team to integrate these foundational advances into core models.
Alongside my industry role, I am a visiting researcher at the University of Cambridge, where I was previously a postdoctoral research associate collaborating with Prof. Andreas Vlachos on factuality in NLP. My academic background includes a PhD in knowledge-grounded NLP from City, University of London and a Master’s degree in Electrical Engineering from the University of Erlangen-Nürnberg and University College London.
My current research interests include:
- Large-Scale Reasoning Models
- Post-training and Reinforcement Learning
- LLM-as-a-Judge and Generative Reward Modeling
I am actively exploring Senior Research Scientist roles in industry. If you have an opening and believe my profile is a good fit, I’d be happy to connect!
News
| Sep - 2025 | Check out two new works that I led! J1: Incentivizing Thinking in LLM-as-a-Judge via Reinforcement Learning, and MENLO: From Preferences to Proficiency – Evaluating and Modeling Native-like Quality Across 47 Languages! |
|---|---|
| May - 2025 | Two papers, What Is That Talk About? A Video-to-Text Summarization Dataset for Scientific Presentations and Segment-Level Diffusion: A Framework for Controllable Long-Form Generation with Diffusion Language Models, are accepted to the ACL 2025 main conference! |
| Oct - 2024 | Thrilled to share that I have joined Meta GenAI in London as a Research Scientist! |
| Jul - 2024 | Our paper PRobELM: Plausibility Ranking Evaluation for Language Models is accepted at the first Conference on Language Modeling (COLM 2024)! |
| Mar - 2024 | Our paper Low-Rank Adaptation for Multilingual Summarisation: An Empirical Study, from my internship at Google DeepMind, is accepted to the Findings of NAACL 2024! |
Selected Publications
- Findings of the Association for Computational Linguistics: EACL 2023