---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- NLP
- Text Generation
- Fine-tuning
- Language Model
pipeline_tag: text-generation
---
## Model Details

### Model Description

The willieseun/Enron-Mixral-8x7b-instruct model is a large-scale language model based on the Mixtral architecture, fine-tuned specifically for email generation using the Enron dataset. This model is designed to generate coherent and contextually appropriate text, particularly suited for tasks related to email composition.

- **Developed by:** WILLIESEUN
- **Model type:** Transformer-based Language Model
- **Language(s) (NLP):** English
- **License:** Apache 2.0

### Model Sources

- **Repository:** [willieseun/Enron-Mixral-8x7b-instruct](https://huggingface.co/willieseun/Enron-Mixral-8x7b-instruct)
- **Base model:** [MistralAI/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/MistralAI/Mixtral-8x7B-Instruct-v0.1)

## Uses

### Direct Use

The model can be used directly for email generation tasks. Users can input prompts or partial content, and the model will generate corresponding text.

### Downstream Use

This model is suitable for downstream tasks requiring email composition, such as email summarization, response generation, or personalized email content generation.

## Bias, Risks, and Limitations

The model's performance may vary depending on the quality and representativeness of the training data (Enron dataset). It may exhibit biases present in the training data, and caution should be exercised when using generated text in sensitive or critical applications.

### Recommendations

Users should review and post-process the generated text to ensure appropriateness and accuracy, particularly in professional or formal communication settings.
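As one illustration, a lightweight post-processing pass might strip any echoed prompt and drop a trailing incomplete sentence before a human reviews the draft. The helper below is a hypothetical sketch, not part of the released model:

```python
def clean_generated_email(prompt: str, generated: str) -> str:
    """Hypothetical post-processing: remove an echoed prompt and trailing fragments."""
    text = generated[len(prompt):] if generated.startswith(prompt) else generated
    text = text.strip()
    # Drop an unfinished final sentence so the draft ends cleanly.
    if text and text[-1] not in ".!?":
        last_stop = max(text.rfind("."), text.rfind("!"), text.rfind("?"))
        if last_stop != -1:
            text = text[: last_stop + 1]
    return text
```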

## How to Get Started with the Model

To use the model, you can leverage the Hugging Face Transformers library. Below is an example code snippet for generating emails:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = "willieseun/Enron-Mixral-8x7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example prompt
prompt_text = "Compose an email from Claudio Ribeiro to Vince J Kaminski regarding the possibility of sponsoring a Financial Engineering Pro-Seminar at MIT. The email should mention that Enron may have sponsored a similar seminar in the past (related to Real Options) and inquire if the Research department or the Weather Desk (interested in a Weather Trading problem) would be interested in co-sponsoring."

# Build a text-generation pipeline and generate the email
pipe = pipeline("text-generation", tokenizer=tokenizer, model=model, return_full_text=False, max_length=190)
print(pipe(prompt_text))
```
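Mixtral-8x7B-based checkpoints are large, so loading in full precision may exceed available GPU memory. One option is 4-bit quantized loading via bitsandbytes; the snippet below is a minimal sketch that assumes the optional `bitsandbytes` and `accelerate` packages are installed, and the quantization settings shown are illustrative rather than taken from this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "willieseun/Enron-Mixral-8x7b-instruct"

# 4-bit NF4 quantization config (illustrative settings, not from the model card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="bfloat16",
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available GPUs/CPU
)
```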

## Training Details

### Training Data

The model was fine-tuned on the Enron email dataset, which contains real-world emails from employees at the Enron Corporation.

### Training Procedure

Training used parameter-efficient fine-tuning (PEFT): the base model was wrapped in a `PeftModelForCausalLM` and optimized with a causal language modeling objective for email generation. A minimal sketch of such a run follows the hyperparameters below.

#### Training Hyperparameters

- **Training regime:** CausalLM fine-tuning
- **Batch size:** 1
- **Learning rate:** 2e-4
- **Epochs:** 5
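
The following sketch shows what a PEFT fine-tuning run of this kind could look like, assuming the `peft`, `datasets`, and `transformers` libraries. Only the batch size, learning rate, and epoch count above come from this card; the LoRA settings, tokenization choices, and placeholder dataset are illustrative assumptions.

```python
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")

# Illustrative LoRA settings (not documented in the card)
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)  # yields a PeftModelForCausalLM

# Placeholder dataset; the actual run used Enron emails
train_data = Dataset.from_dict({"text": ["Subject: Example\n\nHello team, ..."]})
tokenized = train_data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Hyperparameters from the card: batch size 1, learning rate 2e-4, 5 epochs
args = TrainingArguments(
    output_dir="enron-mixtral-lora",
    per_device_train_batch_size=1,
    learning_rate=2e-4,
    num_train_epochs=5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```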

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The model was evaluated on a held-out subset of the Enron dataset.

#### Metrics

Only the evaluation loss was used.
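
Because the only reported metric is the evaluation loss, one way to make it more interpretable is to convert it to perplexity; this relationship is standard for causal language models and the example value below is hypothetical, not a reported result.

```python
import math

# Perplexity is the exponential of the mean cross-entropy (evaluation) loss.
def perplexity(eval_loss: float) -> float:
    return math.exp(eval_loss)

print(perplexity(1.8))  # ~6.05, for a hypothetical eval loss of 1.8
```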

### Results

Based on the evaluation loss on the held-out set, the model generates coherent and contextually relevant emails.

## Environmental Impact

The environmental impact of model training and inference can vary based on the hardware and compute infrastructure used.

## Citation

**BibTeX:**

```
@misc{willieseun_enron_mixtral_instruct,
  title={willieseun/Enron-Mixral-8x7b-instruct: Fine-tuned Email Generation Model},
  author={WILLIESEUN},
  year={2024},
  note={Hugging Face Model Hub},
  howpublished={\url{https://huggingface.co/willieseun/Enron-Mixral-8x7b-instruct}}
}
```

**APA:**

WILLIESEUN. (2024). willieseun/Enron-Mixral-8x7b-instruct: Fine-tuned Email Generation Model. Hugging Face Model Hub. [https://huggingface.co/willieseun/Enron-Mixral-8x7b-instruct](https://huggingface.co/willieseun/Enron-Mixral-8x7b-instruct)

This model card provides an overview of the willieseun/Enron-Mixral-8x7b-instruct model, its use case, training details, and environmental considerations for users interested in utilizing this model for email generation tasks. For further information, please refer to the associated Hugging Face Model Hub repository.