reddgr committed

Commit 587ddbf (verified)
1 parent: 836e5e4

Update README.md

Files changed (1): README.md (+30 −15)
README.md CHANGED
@@ -1,29 +1,32 @@
 ---
 base_model: google/gemma-2-2b-it
 library_name: peft
+license: apache-2.0
+language:
+- es
+tags:
+- news
+- chat
+- LoRa
+- conversational AI
 ---
 
 # Model Card for Model ID
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-
+Lightweight finetuning of google/gemma-2-2b-it on a public dataset of news from Spanish digital newspapers (https://www.kaggle.com/datasets/josemamuiz/noticias-laraznpblico/).
 
 ## Model Details
 
 ### Model Description
 
-<!-- Provide a longer summary of what this model is. -->
-
-
+This model is fine-tuned using LoRA (Low-Rank Adaptation) on the "Noticias La Razón y Público" dataset, a collection of Spanish news articles. The finetuning was done with lightweight methods to ensure efficient training while maintaining performance on news-related language generation tasks.
 
-- **Developed by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
+- **Developed by:** https://talkingtochatbots.com
+- **Language(s) (NLP):** Spanish (es)
+- **License:** apache-2.0
+- **Finetuned from model:** google/gemma-2-2b-it
 
 ### Model Sources [optional]
 
@@ -39,9 +42,7 @@ library_name: peft
 
 ### Direct Use
 
-<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
-[More Information Needed]
+This model can be used for **conversational AI tasks** related to Spanish-language news. The fine-tuned LoRA model is especially suitable for use cases that require both understanding and generating text, such as chat-based interactions, answering questions about news, and discussing headlines.
 
 ### Downstream Use [optional]
 
@@ -71,7 +72,21 @@ Users (both direct and downstream) should be made aware of the risks, biases and
 
 Use the code below to get started with the model.
 
-[More Information Needed]
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM
+from peft import PeftModel
+
+# Load the tokenizer and model
+save_directory = "./fine_tuned_model"
+tokenizer = AutoTokenizer.from_pretrained(save_directory)
+model = AutoModelForCausalLM.from_pretrained(save_directory)
+peft_model = PeftModel.from_pretrained(model, save_directory)
+
+# Example usage
+input_text = "¿Qué opinas de las noticias recientes sobre la economía?"
+inputs = tokenizer(input_text, return_tensors="pt")
+output = peft_model.generate(**inputs, max_length=50)
+print(tokenizer.decode(output[0], skip_special_tokens=True))
 
 ## Training Details
 
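
The updated card describes the finetune as LoRA (Low-Rank Adaptation). As a rough numerical sketch of that idea only (illustrative shapes and hyperparameters, not this repository's actual training code), LoRA freezes the pretrained weight matrix and learns a low-rank additive update:

```python
import numpy as np

# Illustrative LoRA sketch: instead of updating a full weight matrix
# W (d_out x d_in), LoRA trains two small factors B (d_out x r) and
# A (r x d_in) with rank r << min(d_out, d_in); the effective weight
# is W + (alpha / r) * B @ A. Shapes and hyperparameters are assumed
# for illustration only.
d_out, d_in, r, alpha = 64, 64, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in))      # trainable rank-r factor
B = np.zeros((d_out, r))                # trainable, zero-initialized

delta = (alpha / r) * B @ A             # low-rank update (zero at init)
W_eff = W + delta                       # equals W before any training

# Trainable parameters shrink from d_out*d_in to r*(d_out + d_in)
full_params = d_out * d_in              # 4096
lora_params = r * (d_out + d_in)        # 1024
print(full_params, lora_params)
```

Because B starts at zero, the adapted model initially behaves exactly like the base model, and only the small factors are updated during training, which is what makes the finetune "lightweight".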