Update README.md

---
library_name: transformers
license: llama3.2
---

# FineLlama-3.2-3B-Instruct-ead

This repository contains a fine-tuned version of LLaMa-3.2-3B-Instruct, trained specifically to understand and generate EAD (Encoded Archival Description) XML for describing archival records.

## Model Description

* **Base Model**: meta-llama/Llama-3.2-3B-Instruct
* **Training Dataset**: [Geraldine/Ead-Instruct-38k](https://huggingface.co/datasets/Geraldine/Ead-Instruct-38k)
* **Task**: Generation of EAD/XML-compliant archival descriptions
* **Training Type**: Instruction fine-tuning with PEFT (Parameter-Efficient Fine-Tuning) using LoRA

## Key Features

* Specialized in generating EAD/XML format for archival metadata
* Trained on a comprehensive dataset of EAD/XML examples
* Optimized for archival description tasks
* Memory efficient through 4-bit quantization

## Training Details

### Technical Specifications

* **Quantization**: 4-bit quantization using bitsandbytes
  * NF4 quantization type
  * Double quantization enabled
  * bfloat16 compute dtype

### LoRA Configuration

```
- r: 256
- alpha: 128
- dropout: 0.05
- target modules: all-linear
```
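
Expressed as a `peft` `LoraConfig`, these settings map roughly onto the sketch below. This is illustrative only: the `"all-linear"` shortcut needs a reasonably recent `peft` release, and the Kaggle notebook linked further down remains the authoritative source.

```
from peft import LoraConfig

# Illustrative mapping of the values listed above onto peft's LoraConfig
lora_config = LoraConfig(
    r=256,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules="all-linear",  # attach adapters to every linear layer
    task_type="CAUSAL_LM",
)
```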

### Training parameters

```
- Epochs: 3
- Batch size: 3
- Gradient accumulation steps: 2
- Learning rate: 2e-4
- Warmup ratio: 0.03
- Max sequence length: 4096
- Scheduler: Constant
```

### Training Infrastructure

* Libraries: transformers, peft, trl
* Mixed precision: FP16/BF16 (depending on hardware support)
* Optimizer: fused AdamW

### Training Notebook

The training notebook is available on [Kaggle](https://www.kaggle.com/code/geraldinegeoffroy/ead-finetune-llama-3-2-3b-instruct).
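
For orientation, here is a minimal, hypothetical sketch of how the parameters above might be wired together with `trl`'s `SFTTrainer`, reusing the `lora_config` sketched under "LoRA Configuration". Argument names (e.g. `max_seq_length`, `tokenizer`) follow the trl/peft releases current when this model was trained and may differ in newer versions; the dataset split and field handling are assumptions. The Kaggle notebook is the authoritative reference.

```
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

base_model = "meta-llama/Llama-3.2-3B-Instruct"

# In the actual run the base model is loaded in 4-bit (see Technical Specifications and Usage below)
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Assumes the instruction dataset exposes a "train" split
dataset = load_dataset("Geraldine/Ead-Instruct-38k", split="train")

training_args = SFTConfig(
    output_dir="finellama-3.2-3b-instruct-ead",  # placeholder output path
    num_train_epochs=3,
    per_device_train_batch_size=3,
    gradient_accumulation_steps=2,
    learning_rate=2e-4,
    warmup_ratio=0.03,
    lr_scheduler_type="constant",
    max_seq_length=4096,
    optim="adamw_torch_fused",
    bf16=True,  # or fp16=True, depending on hardware support
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,      # depending on the dataset schema, a formatting_func
                                # or dataset_text_field may be needed here
    peft_config=lora_config,    # the LoraConfig sketched under "LoRA Configuration"
    tokenizer=tokenizer,
)
trainer.train()
```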

## Usage

### Installation

```
pip install transformers torch bitsandbytes accelerate
```

`accelerate` is required for the quantized, device-mapped model loading shown below.

### Loading the model

```
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

# Configure 4-bit quantization (NF4, double quantization, bfloat16 compute dtype)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_name = "Geraldine/FineLlama-3.2-3B-Instruct-ead"

# Load model and tokenizer
# (.to("cuda") is not supported for 4-bit models, so device placement is left to device_map)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    quantization_config=bnb_config,
    device_map="auto",
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
```
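
Optionally, a quick sanity check that the 4-bit load worked as expected (illustrative only):

```
# A 3B model quantized to 4-bit should occupy roughly 2 GB
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
print(model.hf_device_map)  # how device_map="auto" placed the layers
```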

### Example usage

```
messages = [
    {"role": "system", "content": "You are an expert in EAD/XML generation for archival records metadata."},
    {"role": "user", "content": "Generate a minimal and compliant <eadheader> template with all required EAD/XML tags"},
]

inputs = tokenizer.apply_chat_template(
    messages,
    return_dict=True,
    tokenize=True,
    add_generation_prompt=True,  # must be added for generation
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=4096,
    pad_token_id=tokenizer.eos_token_id,
    use_cache=True,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
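
Note that `outputs[0]` contains the prompt followed by the generated EAD/XML. If you only want the newly generated part, you can slice off the prompt tokens first (a small optional variant of the decoding step above):

```
# Decode only the tokens generated after the prompt
generated = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```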

## Limitations

* The model is trained specifically for the EAD/XML format and may not perform well on general archival tasks
* Performance depends on the quality and specificity of the input prompts
* The maximum sequence length is limited to 4096 tokens (a quick way to check prompt length is sketched below)
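
Since the prompt and the completion share that 4096-token window, long prompts leave little room for the generated XML. A quick, optional check of how many tokens a chat prompt consumes before calling `generate`:

```
# Count the tokens a chat prompt will use out of the 4096-token window
prompt_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True)
print(f"Prompt length: {len(prompt_ids)} tokens (max context: 4096)")
```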

## Citation

**BibTeX:**

```
@misc{ead-llama,
  author = {Géraldine Geoffroy},
  title = {EAD-XML LLaMa: Fine-tuned LLaMa Model for Archival Description},
  year = {2024},
  publisher = {HuggingFace},
  journal = {HuggingFace Repository},
  howpublished = {\url{https://huggingface.co/Geraldine/FineLlama-3.2-3B-Instruct-ead}}
}
```

## License

This model is subject to the same license as the base LLaMa model. Please refer to Meta's Llama 3.2 license for usage terms and conditions.