After reading both the short and the long versions of the FinGPT paper, I am a little confused about how FinGPT v1/v2/v3 were actually built. Were they made by taking a pre-trained model (e.g., Llama, ChatGLM) and fine-tuning it on
- a language modeling task
- a sentiment analysis task?
A follow-up question: was the dataset used for training obtained through the FinNLP real-time data API?
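To make the question concrete, here is the kind of pipeline I am imagining. This is just a minimal sketch on my side, assuming a Hugging Face base model, PEFT/LoRA, and toy placeholder data; the model name, hyperparameters, and example rows are my own assumptions, not FinGPT's actual code or datasets.

```python
# Sketch of what I *think* the recipe might be (my assumption, not FinGPT's code):
# LoRA fine-tuning of a pre-trained causal LM, with sentiment analysis framed as
# language modeling over "instruction + input + answer" sequences.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

BASE = "meta-llama/Llama-2-7b-hf"   # placeholder; could equally be ChatGLM
tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(BASE)
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Toy rows standing in for whatever the real data source provides
rows = [
    {"text": "Company X beats earnings expectations", "label": "positive"},
    {"text": "Regulator fines Bank Y over compliance failures", "label": "negative"},
]

def to_features(ex):
    # Prompt plus label form one training sequence for the LM objective
    prompt = ("Instruction: What is the sentiment of this financial news?\n"
              f"Input: {ex['text']}\nAnswer: {ex['label']}")
    return tokenizer(prompt, truncation=True, max_length=256)

ds = Dataset.from_list(rows).map(to_features, remove_columns=["text", "label"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="fingpt-sketch",
                           per_device_train_batch_size=1,
                           num_train_epochs=1, logging_steps=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

If the actual recipe differs (for example, full fine-tuning instead of LoRA, or data pulled live through the FinNLP API), that is exactly the detail I am hoping you can clarify.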
It would be great if you could provide more details on this. I'd really appreciate it.