BoDong committed on
Commit
1af3dfb
1 Parent(s): 578dbdf

update doc.

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -5,7 +5,7 @@ license: apache-2.0
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-This model is a fine-tuned model for Chat based on [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b) with **max_seq_length=2048**, trained on the [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k), [TigerResearch/tigerbot-alpaca-en-50k](https://huggingface.co/datasets/TigerResearch/tigerbot-alpaca-en-50k), [TigerResearch/tigerbot-gsm-8k-en](https://huggingface.co/datasets/TigerResearch/tigerbot-gsm-8k-en), [TigerResearch/tigerbot-alpaca-zh-0.5m](https://huggingface.co/datasets/TigerResearch/tigerbot-alpaca-zh-0.5m), [TigerResearch/tigerbot-stackexchange-qa-en-0.5m](https://huggingface.co/datasets/TigerResearch/tigerbot-stackexchange-qa-en-0.5m), and [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) datasets.
+This model is a fine-tuned model for Chat based on [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b) with **max_seq_length=2048**, trained on various open-source datasets. For details of the datasets used, please refer to [Intel/neural-chat-dataset-v1-1](https://huggingface.co/datasets/Intel/neural-chat-dataset-v1-1).
 
 ## Model date
 Neural-chat-7b-v1.1 was trained between June and July 2023.
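For readers following the updated card, a minimal sketch of pulling the referenced dataset with the Hugging Face `datasets` library — the dataset ID comes from the diff above; the helper function name is hypothetical:

```python
# Minimal sketch (hypothetical helper): fetch the dataset collection that the
# updated model card points to. Requires the `datasets` package and network
# access to the Hugging Face Hub.

DATASET_ID = "Intel/neural-chat-dataset-v1-1"  # from the model card diff

def load_chat_dataset(dataset_id: str = DATASET_ID):
    """Download and return the fine-tuning dataset from the Hub."""
    from datasets import load_dataset  # imported lazily; pip install datasets
    return load_dataset(dataset_id)
```

The import is kept inside the helper so the module loads even where `datasets` is not installed; the actual download happens only when the function is called.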