---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- Precision_micro
- Precision_weighted
- Precision_samples
- Recall_micro
- Recall_weighted
- Recall_samples
- F1-Score
- accuracy
widget:
- text: Violence from intimate partners and male family members can escalate during
emergencies. This tends to increase as the crisis worsens, and men have lost their
jobs and status particularly in communities with traditional gender roles, and
where family violence is normalised
- text: Expand livelihood protection policies that assist vulnerable, low-income individuals
to recover from damages associated with extreme weather events; provide support
and protection for internally displaced persons, persons displaced across borders
and host communities;. By 2026, draw up disaster recovery plans for all 22 municipalities
with resource inventories, first response measures and actions (including on logistics)
concerning humanitarian post-disaster needs.
- text: recurrent droughts, (decrease in amount of rainfall from 550 to 400mm in the
highlands), changes in seasonality that had resulted frequent crop failure, massive
death of livestock, genetic erosion, extinction of endemic species, degradation
of habitats and disequilibria in the ecosystem structure and function. The impact
of climate change is manifested in recurrent droughts, desertification, sea level
rise and increase in sea water temperature, depletion of ground water, widespread
land degradation, and emergence of climate sensitive diseases.
- text: They live in geographical regions and ecosystems that are the most vulnerable
to climate change. These include polar regions, humid tropical forests, high mountains,
small islands, coastal regions, and arid and semi-arid lands, among others. The
impacts of climate change in such regions have strong implications for the ecosystem-based
livelihoods on which many indigenous peoples depend. Moreover, in some regions
such as the Pacific, the very existence of many indigenous territories is under
threat from rising sea levels that not only pose a grave threat to indigenous
peoples’ livelihoods but also to their cultures and ways of life.
- text: Overcoming Poverty. Colombia, as a developing country, faces major socioeconomic
challenges. According to the official figures of DANE, by 2014, the percentage
of people in multidimensional poverty situation was 21.9% (this figure rises to
44.1% if we take into account only the rural population). For the same year, 28.5%
of the population was found in a situation of monetary poverty (41.4% of the population
in the case of the villages and rural centers scattered).
pipeline_tag: text-classification
inference: false
base_model: sentence-transformers/all-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/all-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: Precision_micro
value: 0.7972027972027972
name: Precision_Micro
- type: Precision_weighted
value: 0.8053038510784989
name: Precision_Weighted
- type: Precision_samples
value: 0.7972027972027972
name: Precision_Samples
- type: Recall_micro
value: 0.7972027972027972
name: Recall_Micro
- type: Recall_weighted
value: 0.7972027972027972
name: Recall_Weighted
- type: Recall_samples
value: 0.7972027972027972
name: Recall_Samples
- type: F1-Score
value: 0.7972027972027972
name: F1-Score
- type: accuracy
value: 0.7972027972027972
name: Accuracy
---
# SetFit with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
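Both steps run inside a single `trainer.train()` call in the SetFit API. The snippet below is a minimal sketch of how a model like this one could be trained; the dataset texts and label vectors are purely illustrative (the actual training data is not distributed with this card), and the one-vs-rest multi-target strategy is assumed from the classification head described below.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative few-shot dataset: each text is paired with a multi-hot label vector,
# matching the OneVsRestClassifier head of this model (made-up examples, not the real data).
train_dataset = Dataset.from_dict({
    "text": [
        "Recurrent droughts and crop failure threaten rural livelihoods.",
        "By 2026, draw up disaster recovery plans for all municipalities.",
    ],
    "label": [[1, 0], [0, 1]],
})

# Load the Sentence Transformer body and attach a one-vs-rest classification head.
model = SetFitModel.from_pretrained(
    "sentence-transformers/all-mpnet-base-v2",
    multi_target_strategy="one-vs-rest",
)

# Step 1 (contrastive fine-tuning of the body) and step 2 (training the head)
# both happen inside trainer.train().
args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```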
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a OneVsRestClassifier instance
- **Maximum Sequence Length:** 384 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Evaluation
### Metrics
| Label | Precision_Micro | Precision_Weighted | Precision_Samples | Recall_Micro | Recall_Weighted | Recall_Samples | F1-Score | Accuracy |
|:--------|:----------------|:-------------------|:------------------|:-------------|:----------------|:---------------|:---------|:---------|
| **all** | 0.7972 | 0.8053 | 0.7972 | 0.7972 | 0.7972 | 0.7972 | 0.7972 | 0.7972 |
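The averaging schemes in this table correspond to scikit-learn's `micro`, `weighted` and `samples` averages over multi-hot predictions. As a rough illustration only (the test split is not distributed with this card, so the arrays below are made up), such metrics could be computed along these lines:
```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Hypothetical multi-hot ground truth and predictions standing in for the unknown test split.
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 0]])

for average in ("micro", "weighted", "samples"):
    print(average,
          precision_score(y_true, y_pred, average=average, zero_division=0),
          recall_score(y_true, y_pred, average=average, zero_division=0),
          f1_score(y_true, y_pred, average=average, zero_division=0))

# For multi-label inputs, accuracy_score reports exact-match (subset) accuracy.
print("accuracy", accuracy_score(y_true, y_pred))
```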
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("leavoigt/vulnerability_target")
# Run inference
preds = model("Violence from intimate partners and male family members can escalate during emergencies. This tends to increase as the crisis worsens, and men have lost their jobs and status – particularly in communities with traditional gender roles, and where family violence is normalised")
```
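Because the classification head is a OneVsRestClassifier, the model is presumably configured for multi-label prediction, so `preds` should be a multi-hot vector with one entry per label rather than a single class name; `model.predict_proba([...])` can be used to inspect per-label probabilities. Both points are assumptions based on the head type reported above and are worth verifying on a known example.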
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 15 | 71.9518 | 238 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
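These names mirror the fields of `setfit.TrainingArguments`, so the configuration can be reconstructed roughly as follows (a sketch, not the original training script; the tuple values apply to the embedding phase and the classifier phase respectively):
```python
from sentence_transformers.losses import (
    BatchHardTripletLossDistanceFunction,
    CosineSimilarityLoss,
)
from setfit import TrainingArguments

# Reconstruction of the hyperparameters listed above; pass this to setfit.Trainer.
args = TrainingArguments(
    batch_size=(16, 16),            # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    distance_metric=BatchHardTripletLossDistanceFunction.cosine_distance,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```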
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0012 | 1 | 0.2559 | - |
| 0.0602 | 50 | 0.2509 | - |
| 0.1205 | 100 | 0.2595 | - |
| 0.1807 | 150 | 0.0868 | - |
| 0.2410 | 200 | 0.0302 | - |
| 0.3012 | 250 | 0.0024 | - |
| 0.3614 | 300 | 0.0225 | - |
| 0.4217 | 350 | 0.0007 | - |
| 0.4819 | 400 | 0.0004 | - |
| 0.5422 | 450 | 0.0003 | - |
| 0.6024 | 500 | 0.0002 | - |
| 0.6627 | 550 | 0.0005 | - |
| 0.7229 | 600 | 0.0319 | - |
| 0.7831 | 650 | 0.0001 | - |
| 0.8434 | 700 | 0.0104 | - |
| 0.9036 | 750 | 0.0003 | - |
| 0.9639 | 800 | 0.0009 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.25.1
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.13.3
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->