简体中文 | English | 日本語
Explore the documentation of this project »
EmoLLM 2.0 Demo · Report a Bug · Propose a New Feature
EmoLLM is a series of large language models designed to understand, support, and help users in mental health counseling. They are instruction fine-tuned from base LLMs. We would really appreciate it if you could give the project a star~⭐⭐. The open-sourced configuration is as follows:
Everyone is welcome to contribute to this project ~
The model aims to fully understand and promote the mental health of individuals, groups, and society. It typically includes the following key components:
- Cognitive factors: Involving an individual's thought patterns, belief systems, cognitive biases, and problem-solving abilities. Cognitive factors significantly impact mental health as they affect how individuals interpret and respond to life events.
- Emotional factors: Including emotion regulation, emotional expression, and emotional experiences. Emotional health is a crucial part of mental health, involving how individuals manage and express their emotions and how they recover from negative emotions.
- Behavioral factors: Concerning an individual's behavior patterns, habits, and coping strategies. This includes stress management skills, social skills, and self-efficacy, which is the confidence in one's abilities.
- Social environment: Comprising external factors such as family, work, community, and cultural background, which have direct and indirect impacts on an individual's mental health.
- Physical health: There is a close relationship between physical and mental health. Good physical health can promote mental health and vice versa.
- Psychological resilience: Refers to an individual's ability to recover from adversity and adapt. Those with strong psychological resilience can bounce back from challenges and learn and grow from them.
- Prevention and intervention measures: The model also includes strategies for preventing psychological issues and promoting mental health, such as psychological education, counseling, therapy, and social support systems.
- Assessment and diagnostic tools: Effective promotion of mental health requires scientific tools to assess individuals' psychological states and diagnose potential psychological issues.
- [2024.09.14] The LoRA fine-tuned model based on Qwen2-7B-Instruct is open-sourced. Fine-tuning configuration file: Qwen2-7B-Instruct_lora.py; model weights: ModelScope.
- [2024.08] The LoRA fine-tuned model based on GLM4-9B-chat (trained with LLaMA-Factory) is open-sourced. For details, see the Fine-tuning Tutorial; model weights: ModelScope.
- [2024.07.16] Welcome to try EmoLLM V3.0, a fully fine-tuned version of the InternLM2.5-7B-Chat model. Fine-tuning configuration file: internlm2_5_chat_7b_full.py. Model weights: OpenXLab, ModelScope. Web demo: OpenXLab apps; a full fine-tuning tutorial is available on Zhihu.
- [2024.07] Welcome to use the stable version of EmoLLM V2.0 for daily use and academic research. Model weight link: OpenXLab.
- [2024.07] Added the InternLM2_5_7B_chat fine-tuning configuration and model files: ModelScope.
- [2024.06] Added the LLaMA-Factory GLM4-9B-chat fine-tuning guide and a swift-based fine-tuning guide; the paper ESC-Eval: Evaluating Emotion Support Conversations in Large Language Models cited EmoLLM, and EmoLLM achieved good results in it.
- [2024.05.28] The multi-turn dialogue dataset CPsyCounD and the professional evaluation method used by EmoLLM have been released. For details, please see the ACL 2024 Findings paper "CPsyCoun: A Report-based Multi-turn Dialogue Reconstruction and Evaluation Framework for Chinese Psychological Counseling"!
- [2024.05.08] EmoLLM Daddy-like BF V0.1 is now public on (1) Baidu AppBuilder and (2) OpenXLab; welcome to like it and add it to your collections!
- [2024.05.07] Incremental Pre-training Guide
- [2024.05.04] The EmoLLM3.0 OpenXLab demo based on LLaMA3_8b_instruct is now available (restart link); the LLAMA3 fine-tuning guide has been updated; the LLaMA3_8b_instruct QLoRA fine-tuned EmoLLM3.0 weights have been released on the OpenXLab and ModelScope platforms.
- [2024.04.20] The LLAMA3 fine-tuning guide has been updated, and aiwei based on LLaMA3_8b_instruct has been open-sourced.
- [2024.04.14] Added the Quick Start guide and the beginner-friendly tutorial BabyEmoLLM.
- [2024.04.02] Uploaded the Mother-like Counselor model to Hugging Face.
- [2024.03.25] [Mother-like Therapist] is released on Hugging Face (https://huggingface.co/brycewang2018/EmoLLM-mother/tree/main)
- [2024.03.25] [Daddy-like Boy-Friend] is released on the Baidu PaddlePaddle AI Studio platform (https://aistudio.baidu.com/community/app/68787)
- [2024.03.24] The InternLM2-Base-7B QLoRA fine-tuned model has been released on the OpenXLab and ModelScope platforms. For more details, please refer to InternLM2-Base-7B QLoRA.
- [2024.03.12] [aiwei] is released on the Baidu PaddlePaddle AI Studio platform (https://aistudio.baidu.com/community/app/63335)
- [2024.03.11] EmoLLM V2.0 improves on all scores over EmoLLM V1.0 and surpasses role-playing ChatGPT on counseling tasks! Click to experience EmoLLM V2.0; dataset statistics and details, as well as the Roadmap, have been updated.
- [2024.03.09] Added concurrency-accelerated QA-pair generation and a RAG pipeline.
- [2024.03.03] Open-sourced EmoLLM V2.0, fully fine-tuned from InternLM2-7B-chat (training requires two A100 80G GPUs); updated the professional evaluation, see evaluate; updated the PaddleOCR-based PDF-to-txt tool scripts, see scripts.
View More
- [2024.02.29] Updated objective assessment calculations; see evaluate for details. A series of datasets have also been updated; see datasets for details.
- [2024.02.27] Updated the English README and a series of datasets (the "licking-dog" and single-turn dialogue datasets).
- [2024.02.23] Launched "Gentle Lady Psychologist Ai Wei", based on InternLM2_7B_chat_qlora. Click here for the model weights, configuration file, and online demo link.
- [2024.02.23] Updated several fine-tuning configurations; added data_pro.json (larger in quantity, more comprehensive in scenarios, richer in content) and aiwei.json (dedicated to the gentle-lady role-play, featuring emoji expressions); "Gentle Lady Psychologist Ai Wei" is coming soon.
- [2024.02.18] The full fine-tuned version based on Qwen1_5-0_5B-Chat has been open-sourced; friends with limited computational resources can now dive in and explore it.
- [2024.02.06] Open-sourced the full fine-tuned version based on Qwen1_5-0_5B-Chat; friends with limited computing power can start experimenting~
- [2024.02.05] The project has been promoted by the official WeChat account NLP Engineering. Here's the link to the article. Welcome everyone to follow!! 🥳🥳
- [2024.02.03] Project video released on bilibili 😊
- [2024.01.27] Completed the data construction documentation, fine-tuning guide, deployment guide, README, and other related documents 👏
- [2024.01.25] EmoLLM V1.0 has been deployed online: https://openxlab.org.cn/apps/detail/jujimeizuo/EmoLLM 😀
- The project won the Innovation and Creativity Award in the 2024 Puyuan Large Model Series Challenge Spring Competition held by the Shanghai Artificial Intelligence Laboratory.
- Won first prize in the AI-enabled university program "National College Tour".
- The project has been promoted by the official WeChat account NLP Engineering. Here's the link.
- 🎉 Thanks to the following media and friends for covering and supporting our project (listed in no particular order; apologies for any omissions, and feel free to add yourself!): NLP工程化, 机智流, 爱可可爱生活, 阿郎小哥, 大模型日知路, AI Code, etc.!
- The EmoLLM project video has been released for viewing! 😀
- EmoLLM - Large Language Model for Mental Health
- Recent Updates
- Honors
- Roadmap
- Contents
- Pre-development Configuration Requirements
- User Guide
- Star History
- 🌟 Contributors
- Communication group
- A100 40G (specifically for InternLM2_7B_chat + QLoRA fine-tuning + DeepSpeed ZeRO-2 optimization)
- [TODO]: Publish more details about hardware consumption.
- Clone the repo
```bash
git clone https://github.com/SmartFlowAI/EmoLLM.git
```
- Read in sequence or read sections you're interested in:
- Quick Start
- Data Construction
- Fine-tuning Guide
- Deployment Guide
- RAG
- Evaluation Guide
- View More Details
- Please read the Quick Start guide (a minimal inference sketch follows this list).
- Quick coding: Baby EmoLLM
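As a quick sanity check, the snippet below sketches one way to chat with a downloaded EmoLLM checkpoint through Hugging Face Transformers. This is a minimal sketch, not the project's official loader: `MODEL_PATH` is a placeholder for whichever open-sourced weights you fetch (e.g. from OpenXLab or ModelScope), and InternLM-family checkpoints need `trust_remote_code=True`.

```python
# Minimal inference sketch with Hugging Face Transformers.
# MODEL_PATH is a placeholder, not an official model ID; point it at
# the EmoLLM weights you downloaded.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "./EmoLLM-weights"  # assumed local path

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,  # half precision to reduce GPU memory
    device_map="auto",
    trust_remote_code=True,
)

prompt = "I've been feeling anxious before exams lately. What can I do?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```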
- Please read the Data Construction Guide for reference.
- The dataset used for this fine-tuning can be found at datasets
- For details on incremental pre-training, see Incremental Pre-training Guide.
- For full-scale, LoRA, and QLoRA fine-tuning based on xtuner, see the Fine-tuning Guide (a minimal LoRA configuration sketch follows this list).
- For full-scale, LoRA, and QLoRA fine-tuning based on ms-swift, see Fine-tuning Guide.
- For full-scale, LoRA, and QLoRA fine-tuning based on LLaMA-Factory, see Fine-tuning Guide.
- [TODO]: Update DPO training.
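All three stacks above ultimately configure a standard LoRA adapter on top of a frozen base model. As a rough illustration of the knobs involved, here is a PEFT-based sketch; every path, hyperparameter value, and target-module name below is an assumption for illustration, and the linked fine-tuning configs remain the authoritative settings.

```python
# Illustrative LoRA adapter setup with the `peft` library.
# All values here are assumptions; see the project's fine-tuning
# configs for the actual settings.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "./base-model",  # placeholder, e.g. an InternLM2 chat checkpoint
    trust_remote_code=True,
)

lora_cfg = LoraConfig(
    r=64,            # adapter rank
    lora_alpha=16,   # scaling factor
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # depends on the architecture
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable
```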
- Demo deployment: see deployment guide for details.
- Quantized deployment based on LMDeploy: see deploy (a minimal pipeline sketch follows this list)
- [TODO]: Deployment Guide for VLLM
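For orientation, LMDeploy's Python pipeline API can serve a checkpoint (quantized or not) in a few lines. A minimal sketch, assuming lmdeploy is installed and the weights sit at a placeholder local path; the deploy docs above cover the project's actual quantization steps.

```python
# Minimal LMDeploy inference sketch; the weight path is an assumption.
from lmdeploy import pipeline

pipe = pipeline("./EmoLLM-weights")  # placeholder path to downloaded weights
responses = pipe(["I have trouble sleeping before interviews. Any advice?"])
print(responses[0].text)
```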
- See RAG
- The model evaluation is divided into General Metrics Evaluation and Professional Metrics Evaluation; please read the evaluation guide for reference (an illustrative scoring sketch follows).
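General metrics for generated counseling replies are typically overlap-based scores such as ROUGE. Purely as an illustration, and assuming the third-party rouge-score package (the evaluation guide defines the project's actual metrics and tooling):

```python
# Illustrative general-metric computation with the third-party
# `rouge-score` package; this is an assumption, not necessarily
# the project's evaluation tooling.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
reference = "Try a regular sleep schedule and limit caffeine in the evening."
prediction = "Keep a consistent sleep schedule and avoid caffeine at night."
scores = scorer.score(reference, prediction)
print(scores["rougeL"].fmeasure)  # F1-style overlap between the two texts
```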
Additional Details
- Xtuner
- Transformers
- PyTorch
- LMDeploy: for quantized deployment
- Streamlit: for building demos (a minimal chat-app sketch follows this list)
- DeepSpeed: for parallel training
- LLaMA-Factory
- ms-swift
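The web demos mentioned above are built with Streamlit. As a rough illustration of how such a chat demo is typically wired up (this is not the project's actual app; the model call is stubbed out):

```python
# Minimal Streamlit chat-demo sketch; run with `streamlit run app.py`.
# The model reply is stubbed; plug in your loaded EmoLLM model there.
import streamlit as st

st.title("EmoLLM demo (sketch)")

if "history" not in st.session_state:
    st.session_state.history = []

# Replay the conversation so far.
for role, text in st.session_state.history:
    st.chat_message(role).write(text)

if prompt := st.chat_input("How are you feeling today?"):
    st.chat_message("user").write(prompt)
    reply = "(model reply goes here)"  # placeholder for a real model call
    st.chat_message("assistant").write(reply)
    st.session_state.history += [("user", prompt), ("assistant", reply)]
```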
Contributions make the open-source community an excellent place for learning, inspiration, and creation. Any contribution you make is greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project uses Git for version control. You can see the currently available versions in the repository.
Username | School/Organization | Remarks | Contributions |
---|---|---|---|
aJupyter | Nankai University, Master's student | DataWhale member | Project initiator |
MING-ZCH | Huazhong University of Science and Technology, Undergraduate student | LLM X Mental health researcher | Project co-leader |
chg0901 | Kwangwoon University (South Korea), Ph.D. student | MiniSora | Project co-leader |
jujimeizuo | Jiangnan University, Master's student | ||
Smiling-Weeping-zhr | Harbin Institute of Technology (Weihai), Undergraduate student | ||
8baby8 | PaddlePaddle Pilot Team Regional Director | Wenxin Large Model core developer | |
zxazys | Nankai University, Master's student | ||
JasonLLLLLLLLLLL | SWUFE (Southwestern University of Finance and Economics) | ||
MrCatAI | AI Mover | ||
ZeyuBa | Institute of Automation, Master's student | ||
aiyinyuedejustin | University of Pennsylvania, Master's student | ||
Nobody-ML | China University of Petroleum (East China), Undergraduate student | ||
Mxoder | Beihang University, Undergraduate student | ||
Anooyman | Nanjing University of Science and Technology, Master's student | ||
Vicky-3021 | Xidian University, Master's student (Research Year 0) | ||
SantiagoTOP | Taiyuan University of Technology, Master's student | Data cleansing, document management, Baby EmoLLM maintenance | |
zealot52099 | Individual developer | Data Processing, LLM finetuning and RAG | |
wwwyfff | Fudan University, Master's student | ||
jkhumor | Nankai University, Master's student | RAG | |
lll997150986 | Nankai University, Master's student | Fine Tuning | |
nln-maker | Nankai University, Master's student | Front-end and back-end development | |
dream00001 | Nankai University, Master's student | Front-end and back-end development | |
王几行XING | Peking University, Master's graduate | Data Processing, LLM finetuning, Front-end and back-end development | |
[思在] | Peking University, Master's graduate (Microsoft) | LLM finetuning, Front-end and back-end development | |
TingWei | University of Electronic Science and Technology of China, Master's graduate | LLM finetuning |
PengYu | Shihezi University, Master's student | LLM finetuning |
The project is licensed under the MIT License. See LICENSE for details.
- Shanghai Artificial Intelligence Laboratory
- Vansin
- A.bu (M.S. in Psychology, Peking University)
- Sanbuphy
- HatBoy
- If a link fails to open, please go to the Issues section.