CV
General Information
Full Name | Manqing Liu (pronounced "Man-ching Leo") (刘漫清)
Email | manqingliu@g.harvard.edu
Education
- 2021 - present  PhD Candidate in Causal Machine Learning
  Harvard University, Boston, MA, US
  - Major: Epidemiology and Biostatistics
  - Secondary Field: Computer Science
  - Advisors: Dr. Andrew Beam, Dr. James Robins
Courses
- Machine Learning (Graduate level), MIT
- Quantitative Methods for NLP (Graduate level), MIT
- Geometric Methods for Machine Learning (Graduate level), Harvard University
- Stochastic Methods for Data Analysis, Inference and Optimization (Graduate level), Harvard University
- Algorithms for Data Science (Graduate level), Harvard University
- Linear Algebra and Learning from Data, MIT
- Introduction to Functional Analysis, MIT
- Probability (Graduate level), Harvard University
- Statistical Inference I, II (Graduate level), Harvard University
- Advanced Regression and Statistical Learning (Graduate level), Harvard University
- System Development for Computational Science, Harvard University
- High Performance Computing for Science and Engineering (Graduate level), Harvard University
- Causal Inference (Graduate level), Harvard University
Fellowships
- July - September 2025  MARS Fellowship
  Cambridge AI Safety Hub
  - Completed a competitive three-month research fellowship focused on AI safety and alignment.
  - Conducted independent research on detecting pathological reasoning behaviors in large language models, under mentorship from the University of Cambridge and with collaborators from UCL and Mila.
  - Developed novel evaluation methodologies for Chain-of-Thought pathologies, with implications for AI system reliability and trustworthiness.
- June - August 2024  Technical AI Safety Fellowship
  AI Safety Student Team
  - Completed an 8-week reading group on AI safety, covering topics such as neural network interpretability, robustness, and alignment.
Experience
- 2023 - present  PhD Researcher
  Harvard University, Causal Lab, Boston, MA, US
  - Aim 1: Engineered a novel DAG-aware transformer model to precisely estimate causal effects, addressing foundational challenges in unifying causal effect estimation across diverse scenarios.
  - Aim 2: Integrated doubly robust estimators into Monte Carlo Tree Search (MCTS), enabling large language models to perform complex, multi-step reasoning and planning with higher accuracy in real-world scenarios.
  - Aim 3: Developed and implemented comprehensive evaluation metrics to identify and monitor pathologies in Chain-of-Thought (CoT) reasoning across large language models, including post-hoc, internalized, and encoded reasoning patterns; collaborated on fine-tuning open-weight LLMs to elicit internalized and encoded reasoning in model organisms.
- 2017 - 2021  Biostatistician
  Penn Medicine, Philadelphia, PA, US
  - Collaborated with clinical researchers, biostatisticians, and data scientists to harness electronic health record (EHR) data for machine learning applications.
  - Developed and deployed predictive models to forecast patient outcomes, enabling data-driven decision-making in healthcare settings.
  - Led cross-functional efforts to integrate machine learning workflows into clinical practice, optimizing efficiency and enhancing patient care outcomes.
Open Source Projects
- 2024 - present  DAG-aware-transformer
  - Code for the paper "DAG-aware Transformer for Causal Inference", published at the NeurIPS 2024 CRL Workshop.
- 2025 - present  COT Health Metrics
  - A library for evaluating reasoning pathologies in large language models (LLMs), including post-hoc, internalized, and encoded reasoning flaws. Provides health metrics such as reliance, paraphrasability, and substitutability to assess and analyze these pathologies.