Fixed the information about what part of dolphin this model was trained on.
README.md CHANGED
@@ -40,7 +40,6 @@ Converted the following datasets to alpaca:instruction format.
 
 1. [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin)
 - ORCA style dataset generously created by [Eric Hartford](https://huggingface.co/ehartford)
-- Only used the 1 million GPT4 generated instructions file [flan1m-alpaca-uncensored.jsonl](https://huggingface.co/datasets/ehartford/dolphin/blob/main/flan1m-alpaca-uncensored.jsonl).
 2. [LinhDuong/chatdoctor-200k](https://huggingface.co/datasets/LinhDuong/chatdoctor-200k)
 - Refined dataset sourced from icliniq medical QA forum
 3. [sahil2801/code_instructions_120k](https://huggingface.co/datasets/sahil2801/code_instructions_120k)
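For context, the hunk header above mentions that these datasets were converted to the alpaca:instruction format. Below is a minimal sketch of what one such record looks like, assuming the standard three-field Alpaca layout (`instruction`, `input`, `output`); the helper name and the example content are illustrative only and are not taken from the datasets or conversion scripts referenced in this commit.

```python
import json

def to_alpaca(instruction: str, output: str, context: str = "") -> dict:
    """Wrap a single example in the three-field Alpaca instruction layout."""
    return {
        "instruction": instruction,  # the task or question posed to the model
        "input": context,            # optional supporting context; empty if none
        "output": output,            # the reference response
    }

# Hypothetical record, written out as one JSONL line.
record = to_alpaca(
    instruction="What are common symptoms of iron deficiency?",
    output="Typical symptoms include fatigue, pallor, and shortness of breath.",
)
print(json.dumps(record))
```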