---
datasets:
- SALT-NLP/positive_reframing
language:
- en
license: bigscience-bloom-rail-1.0
pipeline_tag: text-generation
---

# Model Card for Model ID

This model is a fine-tuned version of BLOOM adapted to the sentiment transfer task, developed as part of a FourthBrain workshop on Generative AI.

## Model Details

### Model Description

This model is a fine-tuned version of BLOOM adapted to the sentiment transfer task, where the objective is to reverse the sentiment polarity of a text without contradicting
the original meaning. Positive reframing induces a complementary positive viewpoint (e.g. glass-half-full) that escapes negative thought patterns.
Based on the article [Ziems et al. (2022)](https://arxiv.org/abs/2204.02952).

A sample working Space is available [here](https://huggingface.co/spaces/telmo000/bloom-positive-reframing).

### Input
`### Negative sentence:\n{original_text}\n\n### Reframing strategy: \n{reframing_strategy}\n\n### Reframing sentence:\n`
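
The input above can be assembled with a small helper; this is a minimal sketch, copying the template from the Input section verbatim. The strategy name `optimism` is used here only as an illustrative example, and the actual generation call is omitted:

```python
# Build the prompt string this model expects. The template is copied
# exactly from the "Input" section of this card; "optimism" below is
# just an example strategy name.
def build_prompt(original_text: str, reframing_strategy: str) -> str:
    return (
        f"### Negative sentence:\n{original_text}\n\n"
        f"### Reframing strategy: \n{reframing_strategy}\n\n"
        f"### Reframing sentence:\n"
    )

prompt = build_prompt("I have so much work to do.", "optimism")
```

The resulting string can then be passed to the model through the usual `transformers` text-generation pipeline.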


- **Developed by:** Telmo Correa
- **Model type:** LLM
- **Language(s) (NLP):** English
- **License:** [bigscience-bloom-rail-1.0](https://bigscience.huggingface.co/blog/the-bigscience-rail-license)
- **Finetuned from model:** [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7)

## Uses

The model was trained as a proof-of-concept fine-tuning of BLOOM for sentence rewriting.

### Direct Use

The model is intended to be used directly to rewrite sentences according to the provided reframing strategy.

### Out-of-Scope Use

Any use of the model must abide by the terms of both the original BLOOM model and the SALT-NLP/positive_reframing dataset.

## Bias, Risks, and Limitations

As a fine-tuned version of BLOOM, this model carries all the biases, risks, and limitations of its base model's original training.

## Training Details

### Training Data

[SALT-NLP/positive_reframing](https://huggingface.co/datasets/SALT-NLP/positive_reframing)

### Training Procedure 

The baseline model [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) was fine-tuned for 100 steps on the training split of the dataset, with the prompt engineered to request explicit positive sentence reframing:

```
Below is a negative sentence, please select a reframing strategy and write the positive reframed sentence.

### Negative sentence:
NEGATIVE SENTENCE HERE

### Reframing strategy:
STRATEGY HERE

### Reframed sentence:
REFRAMED SENTENCE HERE
```


#### Training Hyperparameters

- **Training regime:** fp16 non-mixed precision, using PEFT and LoRA
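
The training regime above (fp16 with PEFT and LoRA) could be configured along these lines; this is an illustrative sketch only, and the rank, alpha, dropout, and target modules below are assumed values, not the hyperparameters actually used for this model:

```python
# Illustrative LoRA configuration sketch: all values are assumptions,
# not the settings actually used to train this model.
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                # assumed adapter rank
    lora_alpha=32,                       # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
)
# The adapters would then be attached with peft.get_peft_model(model, lora_config)
# and trained in fp16 (e.g. fp16=True in transformers.TrainingArguments).
```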

## Evaluation

No formal evaluation was performed.

## Environmental Impact


Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Google Colab Pro GPU
- **Hours used:** ~10 minutes
- **Cloud Provider:** GCP
- **Compute Region:** us-west-1
- **Carbon Emitted:** ~10 g CO2eq