---
base_model: microsoft/phi-2
library_name: peft
license: apache-2.0
datasets:
- neil-code/dialogsum-test
language:
- en
metrics:
- rouge
pipeline_tag: question-answering
tags:
- QuestionAnswering
- legal
- finance
- chemistry
- biology
---
# Model Card for PEFT-Fine-Tuned Model
This model card documents a PEFT-fine-tuned version of `microsoft/phi-2` for question-answering tasks. The PEFT fine-tuning improved the model's performance, as detailed in the evaluation section.
## Model Details
### Model Description
- **Developed by:** JamieAi33
- **Finetuned from model:** `microsoft/phi-2`
- **Model type:** PEFT fine-tuned transformer
- **Language(s) (NLP):** English
- **License:** Apache 2.0
The base model `microsoft/phi-2` was adapted using Parameter-Efficient Fine-Tuning (PEFT) for question-answering tasks. The training process focused on improving performance metrics while keeping computational costs low.
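The exact adapter hyperparameters are not documented in this card; the snippet below is a minimal sketch of a typical LoRA setup with `peft`, where `r`, `lora_alpha`, `lora_dropout`, and the target modules are illustrative assumptions rather than the values used for this checkpoint.
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")

# Illustrative LoRA hyperparameters -- not the exact values used for this adapter.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # assumed phi-2 attention projections
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # only the small adapter matrices are trainable
```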
---
### Model Sources
- **Repository:** https://huggingface.co/JamieAi33/Phi-2_PEFT
---
## Uses
### Direct Use
This model can be used out-of-the-box for question-answering tasks.
### Downstream Use
The model can be fine-tuned further on domain-specific datasets for improved performance.
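As an illustration, the adapter can be reloaded in trainable mode and tuned further on a domain-specific corpus; the dataset file, the `text` field, and the hyperparameters below are placeholders, not the recipe used for this model.
```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
model = PeftModel.from_pretrained(base, "JamieAi33/Phi-2_PEFT", is_trainable=True)  # keep adapter weights trainable
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
tokenizer.pad_token = tokenizer.eos_token  # phi-2 has no pad token by default

# "domain_qa.jsonl" and the "text" field are placeholders for your own domain data.
dataset = load_dataset("json", data_files="domain_qa.jsonl")["train"]
tokenized = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="phi2-peft-domain", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1, learning_rate=1e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```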
### Out-of-Scope Use
Avoid using this model for tasks other than question answering, or in settings where fairness, bias, or other ethical considerations are critical, without further validation.
---
## Bias, Risks, and Limitations
Users should be aware that:
- The model is trained on publicly available data and may inherit biases present in the training data.
- It is optimized for English and may perform poorly in other languages.
---
## How to Get Started with the Model
Here's an example of loading the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, attach the PEFT adapter from this repository, and grab the tokenizer.
base_model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
adapter_model = PeftModel.from_pretrained(base_model, "JamieAi33/Phi-2_PEFT")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
```
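Continuing from the loading snippet above, a simple prompt-and-answer call might look like this; the prompt format and generation settings are only an illustration.
```python
prompt = "Question: What is Parameter-Efficient Fine-Tuning?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding, kept short for illustration; tune max_new_tokens or sampling as needed.
outputs = adapter_model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```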
---
## Metrics
The model's performance was evaluated using the ROUGE metric. Below are the results:
| **Metric** | **Original Model** | **PEFT Model** | **Absolute Improvement** |
|-----------------|--------------------|----------------|---------------------------|
| **ROUGE-1** | 29.76% | 44.51% | +14.75% |
| **ROUGE-2** | 10.76% | 15.68% | +4.92% |
| **ROUGE-L**     | 21.69%             | 30.95%         | +9.26%                    |
| **ROUGE-Lsum** | 22.75% | 31.49% | +8.74% |
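
The exact evaluation script is not part of this card; the sketch below shows one way such ROUGE scores could be reproduced with the `evaluate` library, assuming the `neil-code/dialogsum-test` data exposes `dialogue`/`summary` fields and a `test` split, and that `generate_summary` is a hypothetical helper that prompts the model.
```python
import evaluate
from datasets import load_dataset

rouge = evaluate.load("rouge")
dataset = load_dataset("neil-code/dialogsum-test", split="test")  # split name assumed

# `generate_summary` is a hypothetical helper that prompts the base or PEFT model for each dialogue.
predictions = [generate_summary(example["dialogue"]) for example in dataset]
references = [example["summary"] for example in dataset]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # rouge1, rouge2, rougeL, rougeLsum, reported as fractions in [0, 1]
```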
---