Manqing Liu
PhD Candidate at Harvard.
Boston, MA 02116
My current research develops methods to detect reasoning pathologies in large language models, including post-hoc rationalization, encoded reasoning, and internalized reasoning. I construct model organisms exhibiting each pathology and build end-to-end evaluation pipelines for training-time monitoring and scalable oversight. I’ve also worked on causal machine learning research in collaboration with Dr. Andrew Beam and Dr. James Robins. I am seeking Research Scientist or Research Engineer roles in model evaluation, post-training, alignment, and safety.
In my free time, I enjoy reading philosophy and drawing connections between classical ideas on concepts, reasoning, knowledge, and understanding (from Kant, Wittgenstein, and Schopenhauer) and modern LLM research.
news
| Date | News |
|---|---|
| Nov 01, 2025 | Joined Geodesic Research as Member of Technical Staff, working on post-training, model evaluation, and alignment. |
| Jun 23, 2025 | Excited to announce my acceptance to MARS (Mentorship for Alignment Research Students) at Cambridge, UK this summer, joining a community of brilliant mentors and researchers to work on AI safety projects! |
| Oct 22, 2024 | Our DAG-aware Transformer for Causal Effect Estimation paper was accepted as a poster at the CRL workshop at NeurIPS 2024! |
latest posts
| Date | Post |
|---|---|
| Dec 22, 2025 | What Can Wittgenstein Teach Us About LLM Safety Research? |
| Jun 23, 2025 | My Summer Internship Reflection |
| Jun 18, 2025 | On Representation |