bruel committed on
Commit 0bcc17e
1 parent: 0f0206d

task101_reverse_and_concatenate_all_elements_from_index_i_to_j

Files changed (1)
  1. README.md +13 -12
README.md CHANGED
@@ -1,9 +1,10 @@
 ---
-library_name: transformers
-tags: []
+language: en
+license: mit
+library_name: keras
 ---
 
-# Model Card for Model ID
+# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task101
 
 <!-- Provide a quick summary of what the model is/does. -->
 
@@ -15,22 +16,22 @@ tags: []
 
 <!-- Provide a longer summary of what this model is. -->
 
-This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
+LoRA trained on task101_reverse_and_concatenate_all_elements_from_index_i_to_j
 
-- **Developed by:** [More Information Needed]
+- **Developed by:** bruel
 - **Funded by [optional]:** [More Information Needed]
 - **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
+- **Model type:** LoRA
+- **Language(s) (NLP):** en
+- **License:** mit
+- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
 
 ### Model Sources [optional]
 
 <!-- Provide the basic links for the model. -->
 
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
+- **Repository:** https://github.com/bruel-gabrielsson
+- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
 - **Demo [optional]:** [More Information Needed]
 
 ## Uses
@@ -79,7 +80,7 @@ Use the code below to get started with the model.
 
 <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
-[More Information Needed]
+"task101_reverse_and_concatenate_all_elements_from_index_i_to_j" sourced from https://github.com/allenai/natural-instructions
 
 ### Training Procedure
 
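For readers unfamiliar with the task this adapter was trained on, here is a minimal sketch of what task101 appears to ask for, judging only from its name and its natural-instructions siblings: reverse the order of the elements of A from index i to index j, then concatenate them into one string. The function name and the 0-based inclusive indexing below are assumptions, not something this commit confirms; the task file in the natural-instructions repo is the authoritative definition.

```python
# Hypothetical illustration of
# task101_reverse_and_concatenate_all_elements_from_index_i_to_j.
# Assumption: 0-based, inclusive indices with i <= j < len(A).
def reverse_and_concatenate(i: int, j: int, A: list) -> str:
    # Take the slice A[i..j] inclusive, reverse the element order,
    # and join the elements into a single string.
    return "".join(str(x) for x in reversed(A[i : j + 1]))

# Elements at indices 1..3 are ['b', 'c', 'd']; reversed -> ['d', 'c', 'b'].
print(reverse_and_concatenate(1, 3, ["a", "b", "c", "d", "e"]))  # -> "dcb"
```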
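Since the card's "How to Get Started" section is still the template stub, here is a hedged sketch of the usual way to attach a LoRA adapter to its Mistral base with PEFT. The adapter repo id below is a placeholder, and the card's metadata lists `library_name: keras`, so check the files in the actual repo before relying on this pattern.

```python
# Minimal sketch, assuming the adapter is stored in standard PEFT format.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "mistralai/Mistral-7B-Instruct-v0.2"  # base model named in the card
ADAPTER = "bruel/Mistral-7B-Instruct-v0.2-4b-r16-task101"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE)
model = PeftModel.from_pretrained(base_model, ADAPTER)  # attach LoRA weights

# Prompt format for an instruction-style input is also an assumption.
prompt = "[INST] i=1, j=3, A=['a','b','c','d','e'] [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```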