evilfreelancer committed
Commit: 2bd18ba
1 Parent(s): b4c94db

Update README.md


# Attempt to reproduce the Mixture-of-LoRAs classifier

Paper: [Mixture-of-LoRAs: An Efficient Multitask Tuning for Large Language Models](https://arxiv.org/pdf/2403.03432)

## Datasets

We evenly sample about 10k training examples and 2k validation examples from each dataset.
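
A minimal sketch of that sampling step, using the Hugging Face `datasets` library, is shown below. Only the dataset names and the 10k/2k sizes come from this README; the helper name `sample_split`, the shuffle seed, and the use of the `train` split are assumptions for illustration.

```python
# Sketch: evenly sample ~10k train / 2k validation examples per dataset.
# Dataset names and sizes come from the README; everything else is assumed.
from datasets import load_dataset

DATASETS = [
    "gbharti/finance-alpaca",
    "lavita/ChatDoctor-HealthCareMagic-100k",
    "openai/webgpt_comparisons",
    "taskydata/GPT4Tools",
    "DataProvenanceInitiative/cot_submix_original",
    "0x70DA/stackoverflow-chat-data",
    # laion/OIG is restricted to three files and handled separately (see below).
]

def sample_split(name, n_train=10_000, n_val=2_000, seed=42):
    """Shuffle one dataset and take disjoint train/validation samples."""
    ds = load_dataset(name, split="train").shuffle(seed=seed)
    train = ds.select(range(n_train))
    val = ds.select(range(n_train, n_train + n_val))
    return train, val

splits = {name: sample_split(name) for name in DATASETS}
```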

From `laion/OIG`, only the following files were taken (see the loading sketch after this list):
- unified_merged_code_xp3.jsonl
- unified_grade_school_math_instructions.jsonl
- unified_mathqa_flanv2_kojma_cot.jsonl
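
The commit does not state how this subset was extracted; a hedged sketch using `load_dataset` with `data_files` (an assumption about the loading mechanism, not the author's confirmed pipeline) would be:

```python
# Sketch: load only the three laion/OIG files listed above.
# Passing `data_files` to `load_dataset` is an assumed mechanism.
from datasets import load_dataset

oig = load_dataset(
    "laion/OIG",
    data_files=[
        "unified_merged_code_xp3.jsonl",
        "unified_grade_school_math_instructions.jsonl",
        "unified_mathqa_flanv2_kojma_cot.jsonl",
    ],
    split="train",
)
```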

Files changed (1)
  1. README.md +15 -3
README.md CHANGED
@@ -1,3 +1,15 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ datasets:
+ - gbharti/finance-alpaca
+ - lavita/ChatDoctor-HealthCareMagic-100k
+ - laion/OIG
+ - openai/webgpt_comparisons
+ - taskydata/GPT4Tools
+ - DataProvenanceInitiative/cot_submix_original
+ - 0x70DA/stackoverflow-chat-data
+ language:
+ - en
+ library_name: adapter-transformers
+ pipeline_tag: text-classification
+ ---