
Psyche-R1

Psyche-R1: Towards Reliable Psychological LLMs through Unified Empathy, Expertise, and Reasoning

[ English | 简体中文 ]

Latest News

🔥[2025.10.17] We've developed an APP for our model, available for Windows, Linux, and Android. Download it here: Psyche-R1-APP.

🔥[2025.8.16] We have released the Chinese Psychological Reasoning LLM Psyche-R1! For model downloads, please click here: MindIntLab/Psyche-R1

Introduction

Psyche-R1 is a Chinese psychological reasoning LLM that unifies empathy, expertise, and reasoning capabilities. We propose a novel data synthesis pipeline, illustrated in the figure below. Through data cleaning, question generation, rationale iteration, and empathetic dialogue synthesis, we generate psychology QA pairs with detailed rationales, together with empathetic dialogue data. On top of this, we use multi-LLM selection to filter out "challenging questions" that strengthen the model's complex reasoning capabilities, while the remaining data forms the "non-challenging questions" set.
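The multi-LLM selection step can be pictured as a simple voting filter: a question counts as "challenging" when fewer than a threshold number of independent LLMs answer it correctly. The sketch below is only an illustration of that idea; the threshold, answer format, and function name are hypothetical, not the paper's exact procedure.

```python
def split_by_difficulty(questions, model_answers, gold, min_correct=2):
    """Partition questions by how many LLMs answer them correctly.

    questions     : list of question ids
    model_answers : dict model_name -> {question_id: predicted answer}
    gold          : dict question_id -> reference answer
    min_correct   : questions solved by fewer than this many models are
                    treated as "challenging" (hypothetical threshold)
    """
    challenging, non_challenging = [], []
    for q in questions:
        # count how many models got this question right
        correct = sum(
            1 for answers in model_answers.values()
            if answers.get(q) == gold[q]
        )
        (non_challenging if correct >= min_correct else challenging).append(q)
    return challenging, non_challenging
```

The "challenging" split would then feed the reinforcement-learning stage, while the rest goes to SFT.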

Our proposed pipeline for generating high-quality psychology data.

We select Qwen2.5-7B-Instruct as the base model. It first undergoes SFT on a large volume of "non-challenging questions" (including psychological questions and empathetic dialogues) to instill broad expertise and empathetic capability. It is then trained with GRPO reinforcement learning on the "challenging questions" to further improve its complex reasoning abilities.
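The core idea of GRPO can be illustrated in a few lines: for each question, several responses are sampled, and each response's advantage is its reward standardized against the group's mean and standard deviation, so no separate value critic is needed. A minimal sketch of that normalization (not this repo's training code):

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages for one group of sampled responses:
    standardize each reward against the group mean and std."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0:  # all rewards equal: no learning signal for this group
        return [0.0] * len(rewards)
    return [(r - mean) / std for r in rewards]
```

Responses with above-average reward get positive advantages and are reinforced; below-average ones are suppressed.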

Experimental Results

Results on the Psychological Counselor Examination Benchmark (PCEB) are shown below.

| Model | Case SMCQ | Case MMCQ | Case Avg. | Moral SMCQ | Moral MMCQ | Moral Avg. | Theory SMCQ | Theory MMCQ | Theory Avg. | Avg. | R-1 | R-L | B-4 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Qwen2.5-7B-Instruct | 47.57 | 31.64 | 47.49 | 87.83 | 59.50 | 71.02 | 78.46 | 42.45 | 55.17 | 57.91 (64.59) | 20.94 | 11.28 | 1.28 |
| Qwen2.5-72B-Instruct | 46.91 | 40.34 | 53.11 | 90.79 | 70.25 | 78.48 | 82.63 | 47.63 | 59.74 | 63.09 (68.61) | 21.43 | 12.02 | 1.16 |
| DeepSeek-R1 | 79.25 | 44.25 | 60.86 | 95.39 | 68.99 | 77.95 | 92.19 | 57.60 | 69.41 | 72.95 (79.18) | 17.65 | 9.19 | 0.94 |
| DeepSeek-R1-70B | 56.30 | 30.72 | 46.95 | 88.16 | 52.53 | 65.66 | 68.01 | 25.64 | 45.63 | 53.56 (61.79) | 22.77 | 13.23 | 1.16 |
| QwQ-32B | 56.51 | 23.35 | 41.27 | 88.82 | 41.14 | 53.06 | 82.12 | 32.69 | 49.90 | 54.11 (61.95) | 18.39 | 7.48 | 0.84 |
| Qwen3-235B-A22B | 68.58 | 41.91 | 57.24 | 93.42 | 69.62 | 78.90 | 88.36 | 56.70 | 68.64 | 69.77 (75.86) | 18.96 | 11.14 | 1.11 |
| GPT-4o | 65.63 | 13.67 | 34.53 | 88.15 | 33.54 | 54.79 | 74.65 | 24.10 | 45.07 | 49.96 (60.47) | 23.45 | 12.75 | 1.18 |
| Claude3.7-Sonnet | 63.39 | 19.40 | 34.23 | 90.13 | 60.13 | 70.04 | 76.73 | 37.37 | 48.99 | 57.86 (63.92) | 21.59 | 11.11 | 1.23 |
| EmoLLM | 46.93 | 21.87 | 40.02 | 84.21 | 34.17 | 51.05 | 71.72 | 26.18 | 44.49 | 47.51 (56.40) | 22.15 | 11.69 | 1.20 |
| PsycoLLM | 55.58 | 35.07 | 42.89 | 88.81 | 69.62 | 74.20 | 72.63 | 48.59 | 54.12 | 61.72 (64.71) | 24.45 | 17.45 | 2.04 |
| Psyche-R1 | 63.31 | 56.26 | 66.21 | 92.76 | 79.62 | 82.54 | 87.70 | 66.54 | 73.34 | 74.37 (77.64) | 27.31 | 15.33 | 2.40 |

SMCQ/MMCQ denote single- and multiple-answer multiple-choice questions; R-1, R-L, and B-4 denote ROUGE-1, ROUGE-L, and BLEU-4 on the open-ended case QA task.

We also conducted more detailed and comprehensive experiments, including on the CPsyExam and PsyDT test sets, demonstrating Psyche-R1's capabilities in psychological examinations and counseling. For detailed results, please refer to the original paper.

Quick Start

1. Clone this project locally:

```shell
git clone https://github.com/MindIntLab-HFUT/Psyche-R1.git
```

2. Set up the environment:

```shell
conda create -n psycher1 python=3.10
conda activate psycher1
pip install -r requirements.txt
```

3. Run run.py:

```shell
deepspeed --num_gpus=1 run.py
```

4. Start interacting. Note that for multi-turn dialogue, we recommend prompting the model to explicitly enclose its reasoning process in the reasoning tags used in run.py.
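When consuming the model's replies in a multi-turn loop, a small helper can separate the tagged reasoning span from the final answer. The sketch below assumes R1-style `<think>...</think>` delimiters; check run.py for the exact tags this model is trained with.

```python
import re

# assumed R1-style reasoning delimiters; verify against run.py
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(reply: str):
    """Return (reasoning, answer) from a model reply.
    reasoning is None when no tagged span is found."""
    m = THINK_RE.search(reply)
    if not m:
        return None, reply.strip()
    reasoning = m.group(1).strip()
    # everything outside the tagged span is treated as the answer
    answer = (reply[:m.start()] + reply[m.end():]).strip()
    return reasoning, answer
```

Only the answer part would typically be shown to the user or appended to the dialogue history.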

Acknowledgments

Model training is based on the LLaMA-Factory and VeRL frameworks.

We also thank the following students for their help with this project, including but not limited to data collection and data processing (in no particular order): Yuhang Deng, Yiduo Jin, Xiang Li, Yue Liu, Yan Luo, Weidong Wang, Jinming Yu. We also thank Weidong Wang for developing an impressive APP!

Citation

If this work is helpful, please kindly cite as:

@article{dai2025psyche,
  title={Psyche-R1: Towards Reliable Psychological LLMs through Unified Empathy, Expertise, and Reasoning},
  author={Dai, Chongyuan and Hu, Jinpeng and Shi, Hongchang and Li, Zhuo and Yang, Xun and Wang, Meng},
  journal={arXiv preprint arXiv:2508.10848},
  year={2025}
}
