task1199_atomic_classification_xattr
README.md
CHANGED
@@ -1,9 +1,10 @@
 ---
-library_name: transformers
-tags: []
+language: en
+license: mit
+library_name: keras
 ---
 
-# Model Card for Model ID
+# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task1199
 
 <!-- Provide a quick summary of what the model is/does. -->
 
@@ -15,22 +16,22 @@ tags: []
 
 <!-- Provide a longer summary of what this model is. -->
 
-This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
+LoRA trained on task1199_atomic_classification_xattr
 
-- **Developed by:** [More Information Needed]
+- **Developed by:** bruel
 - **Funded by [optional]:** [More Information Needed]
 - **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
+- **Model type:** LoRA
+- **Language(s) (NLP):** en
+- **License:** mit
+- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
 
 ### Model Sources [optional]
 
 <!-- Provide the basic links for the model. -->
 
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
+- **Repository:** https://github.com/bruel-gabrielsson
+- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
 - **Demo [optional]:** [More Information Needed]
 
 ## Uses
@@ -79,7 +80,7 @@ Use the code below to get started with the model.
 
 <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
-[More Information Needed]
+"task1199_atomic_classification_xattr" sourced from https://github.com/allenai/natural-instructions
 
 ### Training Procedure
 
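For reference, a minimal sketch of how an adapter like this is typically loaded on top of its base model with the `peft` library. This is not taken from the card's own "How to Get Started" section (not shown in the diff above); the adapter id below is a placeholder for this repository's Hub path, and a standard peft-format LoRA checkpoint is assumed.

```python
# Hedged sketch: assumes this adapter is a standard peft-format LoRA checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"  # base model named in the card
adapter_id = "<this-repo-id>"                   # placeholder: Hub path of this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")

# Attach the LoRA weights to the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "..."  # an input instance from task1199_atomic_classification_xattr
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```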
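Similarly, a minimal sketch of fetching the training task file from the natural-instructions repository linked above. The `tasks/<task_name>.json` path and the `Definition`/`Instances` fields follow that repository's published task format; treat them as assumptions rather than details confirmed by this card.

```python
# Hedged sketch: fetch the raw task file from the allenai/natural-instructions repo.
import json
import urllib.request

url = ("https://raw.githubusercontent.com/allenai/natural-instructions/"
       "master/tasks/task1199_atomic_classification_xattr.json")
with urllib.request.urlopen(url) as resp:
    task = json.load(resp)

print(task["Definition"])        # task instructions (a list of strings)
instance = task["Instances"][0]  # each instance has an "input" and a list of "output"s
print(instance["input"], "->", instance["output"])
```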