Manqing Liu

PhD Candidate at Harvard.


Boston, MA 02116

My current research develops methods to detect reasoning pathologies in large language models, including post-hoc rationalization, encoded reasoning, and internalized reasoning. I construct model organisms exhibiting each pathology and build end-to-end evaluation pipelines for training-time monitoring and scalable oversight. I’ve also worked on causal machine learning research in collaboration with Dr. Andrew Beam and Dr. James Robins. I am seeking Research Scientist or Research Engineer roles in model evaluation, post-training, alignment, and safety.

In my free time, I enjoy reading philosophy and connecting classical ideas on concepts, reasoning, knowledge, and understanding from Kant, Wittgenstein, and Schopenhauer to modern LLM research.

news

Nov 01, 2025 Joined Geodesic Research as Member of Technical Staff, working on post-training, model evaluation, and alignment.
Jun 23, 2025 Excited to announce my acceptance to MARS (Mentorship for Alignment Research Students) in Cambridge, UK this summer, joining a community of brilliant mentors and researchers to work on AI safety projects!
Oct 22, 2024 Our DAG-aware Transformer for Causal Effect Estimation paper was accepted as a poster at CRL @NeurIPS2024!

selected publications

  1. Diagnosing Pathological Chain-of-Thought in Reasoning Models
    Manqing Liu, David Williams-King, Ilan Caspary, and 1 more author
    2025
    Under review at ICML 2026
  2. Doubly Robust Monte Carlo Tree Search
    Manqing Liu and Andrew L. Beam
    2025
    In preparation for submission to NeurIPS 2026
  3. DAG-aware Transformer for Causal Effect Estimation
    Manqing Liu, David R. Bellamy, and Andrew L. Beam
    2024