walid-iguider committed
Commit 9e0ff22
1 Parent(s): 29ad81b

Update README.md

Files changed (1)
  1. README.md +62 -10
README.md CHANGED
@@ -1,6 +1,19 @@
  ---
- library_name: transformers
- tags: []
+ license: apache-2.0
+ language:
+ - it
+ pipeline_tag: text-generation
+ tags:
+ - text-generation-inference
+ - transformers
+ - mistral
+ - trl
+ - sft
+ datasets:
+ - mchl-labs/stambecco_data_it
+ widget:
+ - text: "Di seguito è riportata un'istruzione che descrive un'attività, abbinata ad un input che fornisce ulteriore informazione. Scrivi una risposta che soddisfi adeguatamente la richiesta. \n### Istruzione:\nSuggerisci un'attività serale romantica\n\n### Input:\n\n### Risposta:"
+   example_title: Example 1
  ---

  # Model Card for Model ID
@@ -17,13 +30,10 @@ tags: []

  This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
+ - **Developed by:** Walid Iguider
+ - **Model type:** Minerva
+ - **License:** cc-by-nc-sa-4.0
+ - **Finetuned from model:** sapienzanlp/Minerva-3B-base-v1.0

  ### Model Sources [optional]

@@ -36,7 +46,49 @@ This is the model card of a 🤗 transformers model that has been pushed on the
  ## Uses

  <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
+ ### Sample Code
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
+ import torch
+ torch.random.manual_seed(0)
+ # Run a text-generation pipeline with the fine-tuned model
+ prompt = """Di seguito è riportata un'istruzione che descrive un'attività, abbinata ad un input che fornisce
+ ulteriore informazione. Scrivi una risposta che soddisfi adeguatamente la richiesta.
+
+ ### Istruzione:
+ Suggerisci un'attività serale romantica
+
+ ### Input:
+
+
+ ### Risposta:"""
+
+ model_id = "walid-iguider/Minerva-3B-Instruct-v1.0"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     device_map="cuda",
+     torch_dtype="auto",
+     trust_remote_code=True,
+ )
+
+ generation_args = {
+     "max_new_tokens": 500,
+     "return_full_text": False,
+     "temperature": 0.0,
+     "do_sample": False,
+ }
+
+ pipe = pipeline(
+     "text-generation",
+     model=model,
+     tokenizer=tokenizer,
+ )
+
+ output = pipe(prompt, **generation_args)
+ print(output[0]['generated_text'])
+ ```
  ### Direct Use

  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->