dimalik committed
Commit 4e1fe68
1 Parent(s): 617cb07

add model artifacts

README.md CHANGED
@@ -24,7 +24,7 @@ library_name: transformers
 
 # Model Card for mEdIT-xxl
 
- This model was obtained by fine-tuning the `MBZUAI/bactrian-x-llama-13b-lora` model on the mEdIT dataset.
+ The `medit-xxl` model was obtained by fine-tuning the `MBZUAI/bactrian-x-llama-13b-lora` model on the mEdIT dataset.
 
 **Paper:** mEdIT: Multilingual Text Editing via Instruction Tuning
 
@@ -43,4 +43,58 @@ This model was obtained by fine-tuning the `MBZUAI/bactrian-x-llama-13b-lora` mo
 - **Paper:** TBA
 
 ## How to use
- We release the best-performing models presented in our paper.
+ 
+ ### Instruction format
+ 
+ The model expects prompts in the following instruction format; deviating from this format may noticeably degrade the quality of the output.
+ 
+ ```
+ instruction_tokens = [
+     "Instruction",  # English
+     "Anweisung",    # German
+     ...
+ ]
+ 
+ input_tokens = [
+     "Input",   # English
+     "Aporte",  # Spanish
+     ...
+ ]
+ 
+ output_tokens = [
+     "Output",      # English
+     "Produzione",  # Italian
+     ...
+ ]
+ 
+ task_descriptions = [
+     "Fix grammatical errors in this sentence",  # <-- GEC task
+     "Umschreiben Sie den Satz",                 # <-- Paraphrasing ("Rewrite the sentence")
+     ...
+ ]
+ ```
+ 
+ The full list of possible instruction, input, and output tokens and task descriptions can be found in the Appendix of our paper.
+ 
+ ```
+ prompt_template = """### <instruction_token>:\n<task_description>\n### <input_token>:\n<input>\n### <output_token>:\n\n"""
+ ```
+ 
+ Note that the tokens and the task description need not be in the language of the input.
+ 
+ ### Run the model
+ 
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ 
+ model_id = "grammarly/medit-xxl"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+ 
+ # Japanese GEC prompt: "### Instruction:\nMake the text grammatical\n### Input:\nDear Sir ,\n### Output:\n\n"
+ prompt = '### 命令:\n文章を文法的にする\n### 入力:\nDear Sir ,\n### 出力:\n\n'
+ 
+ inputs = tokenizer(prompt, return_tensors='pt')
+ outputs = model.generate(**inputs, max_new_tokens=20)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
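
As an illustration (not part of the commit itself), here is a minimal sketch of how a prompt could be assembled from `prompt_template`; `build_prompt` is a hypothetical helper, and the default token strings are the English entries from the lists above:

```python
# Hypothetical helper (not in the repo): fills the mEdIT prompt template.
def build_prompt(task_description: str, text: str,
                 instruction_token: str = "Instruction",
                 input_token: str = "Input",
                 output_token: str = "Output") -> str:
    return (
        f"### {instruction_token}:\n{task_description}\n"
        f"### {input_token}:\n{text}\n"
        f"### {output_token}:\n\n"
    )

# An English GEC prompt, same shape as the Japanese example above
print(build_prompt("Fix grammatical errors in this sentence", "Dear Sir ,"))
```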
adapter_config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "MBZUAI/bactrian-x-llama-13b-merged",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "lora_alpha": 16,
+   "lora_dropout": 0.05,
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 8,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "v_proj",
+     "q_proj",
+     "o_proj",
+     "k_proj"
+   ],
+   "task_type": "CAUSAL_LM"
+ }
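
The config above describes a small LoRA adapter (r=8, alpha=16, dropout 0.05 on the q/k/v/o projections) over `MBZUAI/bactrian-x-llama-13b-merged`. Since the commit ships adapter weights rather than merged weights, a minimal sketch of loading them explicitly with `peft` (assuming recent `peft` and `transformers` versions, and enough memory for a 13B base model) would be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model named in adapter_config.json, then apply the adapter.
base = AutoModelForCausalLM.from_pretrained("MBZUAI/bactrian-x-llama-13b-merged")
model = PeftModel.from_pretrained(base, "grammarly/medit-xxl")
tokenizer = AutoTokenizer.from_pretrained("grammarly/medit-xxl")
```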
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2e4fab47b3c5a601c9654b7e86324afc2371d233aad3ffe1702f711e47820f73
+ size 26329549
added_tokens.json ADDED
@@ -0,0 +1,3 @@
+ {
+   "[PAD]": 32000
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "pad_token": "[PAD]",
+   "unk_token": "<unk>"
+ }
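
Taken together, `added_tokens.json` and `special_tokens_map.json` register `[PAD]` as an extra special token with id 32000, one past the base Llama vocabulary. A quick sketch to check this after loading (exact `pad_token` resolution can vary slightly across `transformers` versions, hence the prints rather than asserts):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("grammarly/medit-xxl")
print(tok.pad_token)                       # expected: [PAD]
print(tok.convert_tokens_to_ids("[PAD]"))  # expected: 32000
```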
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+ size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,36 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "bos_token": {
+     "__type": "AddedToken",
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "clean_up_tokenization_spaces": false,
+   "eos_token": {
+     "__type": "AddedToken",
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "legacy": false,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": null,
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": {
+     "__type": "AddedToken",
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "use_default_system_prompt": true
+ }
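
One practical consequence of the settings above: with `add_bos_token` true and `add_eos_token` false, encoding a prompt prepends `<s>` but appends no `</s>`, which is the right setup for open-ended generation. A quick sketch to confirm (behavior assumed for a standard `LlamaTokenizer` load):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("grammarly/medit-xxl")
ids = tok("Dear Sir ,").input_ids
print(ids[0] == tok.bos_token_id)   # expected True: <s> is prepended
print(ids[-1] == tok.eos_token_id)  # expected False: no trailing </s>
```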