---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- NLP
- Text Generation
- Fine-tuning
- Language Model
pipeline_tag: text-generation
---
## Model Details
### Model Description
willieseun/Enron-Mixral-8x7b-instruct is a large language model based on the Mixtral 8x7B mixture-of-experts architecture, fine-tuned for email generation on the Enron email dataset. It is intended to produce coherent, contextually appropriate text and is particularly suited to email composition tasks.
- **Developed by:** WILLIESEUN
- **Model type:** Transformer-based Language Model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
### Model Sources
- **Base model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/MistralAI/Mixtral-8x7B-Instruct-v0.1)
## Uses
### Direct Use
The model can be used directly for email generation tasks. Users can input prompts or partial content, and the model will generate corresponding text.
### Downstream Use
This model is suitable for downstream tasks requiring email composition, such as email summarization, response generation, or personalized email content generation.
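As a hedged illustration of one downstream pattern, response generation, a received email can be folded into the prompt. The template below is an assumption for illustration, not a format prescribed by this repository, and it reuses the `pipe` object built in the getting-started snippet further down:

```python
# Illustrative response-generation prompt; the template is an assumption.
# `pipe` is the text-generation pipeline from the getting-started snippet.
received = (
    "From: Vince J Kaminski\n"
    "Subject: Pro-Seminar sponsorship\n\n"
    "Claudio, could you send me the details of the MIT seminar proposal?"
)
reply_prompt = f"Write a polite reply to the following email:\n\n{received}\n\nReply:"
print(pipe(reply_prompt)[0]["generated_text"])
```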
## Bias, Risks, and Limitations
The model's performance depends on the quality and representativeness of the training data. Because the Enron corpus consists of real emails from real employees, the model may reproduce names, personal details, and biases present in that data; exercise caution before using generated text in sensitive or critical applications.
### Recommendations
Users should review and post-process the generated text to ensure appropriateness and accuracy, particularly in professional or formal communication settings.
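As a minimal, hypothetical sketch of such post-processing (these specific checks are illustrative, not an endorsed pipeline), generated drafts can be screened before they are sent:

```python
import re

def screen_draft(text: str, max_chars: int = 2000) -> str:
    # Mask anything that looks like an email address pending human review.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[redacted email]", text)
    # Truncate runaway generations to a reviewable length.
    return text[:max_chars]

print(screen_draft("Please contact vince.kaminski@enron.com for details."))
```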
## How to Get Started with the Model
To use the model, you can leverage the Hugging Face Transformers library. Below is an example code snippet for generating emails:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = "willieseun/Enron-Mixral-8x7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example prompt
prompt_text = "Compose an email from Claudio Ribeiro to Vince J Kaminski regarding the possibility of sponsoring a Financial Engineering Pro-Seminar at MIT. The email should mention that Enron may have sponsored a similar seminar in the past (related to Real Options) and inquire if the Research department or the Weather Desk (interested in a Weather Trading problem) would be interested in co-sponsoring."

# return_full_text=False strips the prompt so only the generated email is printed.
pipe = pipeline("text-generation", tokenizer=tokenizer, model=model, return_full_text=False, max_length=190)
print(pipe(prompt_text)[0]["generated_text"])
```
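Mixtral-8x7B-scale checkpoints are large (roughly 47B total parameters), so loading in full precision requires substantial GPU memory. A common workaround, sketched here under the assumption that the `bitsandbytes` and `accelerate` packages are installed, is 4-bit quantized loading:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization trades a little accuracy for a much smaller
# memory footprint; requires the bitsandbytes and accelerate packages.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "willieseun/Enron-Mixral-8x7b-instruct",
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available devices
)
```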
## Training Details
### Training Data
The model was fine-tuned on the Enron email dataset, which contains real-world emails from employees at the Enron Corporation.
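This card does not link a specific processed dump of the corpus. As a hedged sketch, assuming an Enron-derived dataset such as `aeslc` on the Hugging Face Hub (whose column names are used below), the emails can be flattened into causal-LM training text:

```python
from datasets import load_dataset

# Hypothetical preprocessing: the card does not specify which Enron dump
# was used. aeslc is one publicly available Enron-derived dataset.
dataset = load_dataset("aeslc", split="train")

def to_text(example):
    # Concatenate subject and body into a single training string.
    return {"text": f"Subject: {example['subject_line']}\n\n{example['email_body']}"}

dataset = dataset.map(to_text)
print(dataset[0]["text"][:200])
```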
### Training Procedure
The model was fine-tuned with parameter-efficient fine-tuning (PEFT), which wraps the base model as a `PeftModelForCausalLM`, using a causal-LM objective optimized for email generation. A representative configuration sketch follows the hyperparameter list below.
#### Training Hyperparameters
- **Training regime:** CausalLM fine-tuning
- **Batch size:** 1
- **Learning rate:** 2e-4
- **Epochs:** 5
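The exact adapter configuration is not published with this card. The following is a representative sketch of a LoRA setup via PEFT; the rank, alpha, and dropout values are illustrative guesses, and only the hyperparameters listed above come from the card:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

# LoRA rank/alpha/dropout are illustrative, not values from this card.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, lora)  # wraps the base as a PeftModelForCausalLM

args = TrainingArguments(
    output_dir="enron-mixtral-lora",
    per_device_train_batch_size=1,  # batch size 1, as listed above
    learning_rate=2e-4,             # learning rate from the card
    num_train_epochs=5,             # epochs from the card
)
```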
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The model was evaluated on a held-out subset of the Enron dataset.
#### Metrics
Only the evaluation (cross-entropy) loss was tracked; no generation-quality metrics such as BLEU or ROUGE were reported.
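Because the model is trained with a causal-LM objective, the evaluation loss is a mean token-level cross-entropy, so perplexity can be derived as exp(loss); the value below is hypothetical, since the card does not report a number:

```python
import math

eval_loss = 1.8  # hypothetical value; the card reports no figure
print(f"Perplexity: {math.exp(eval_loss):.2f}")  # perplexity = exp(cross-entropy)
```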
### Results
The model generates coherent and contextually relevant emails on the held-out data; note, however, that evaluation loss is only an indirect proxy for generation quality.
## Environmental Impact
Hardware, training time, and compute provider are not reported for this model, so its carbon footprint cannot be estimated here; in general, the environmental impact of training and inference depends on the hardware and compute infrastructure used.
## Citation
**BibTeX:**
```
@misc{willieseun_enron_mixtral_instruct,
  title={willieseun/Enron-Mixral-8x7b-instruct: Fine-tuned Email Generation Model},
  author={WILLIESEUN},
  year={2024},
  note={Hugging Face Model Hub},
  howpublished={\url{https://huggingface.co/willieseun/Enron-Mixral-8x7b-instruct}}
}
```
**APA:**
WILLIESEUN. (2024). *willieseun/Enron-Mixral-8x7b-instruct: Fine-tuned Email Generation Model*. Hugging Face Model Hub. [https://huggingface.co/willieseun/Enron-Mixral-8x7b-instruct](https://huggingface.co/willieseun/Enron-Mixral-8x7b-instruct)
This model card provides an overview of the willieseun/Enron-Mixral-8x7b-instruct model, its use cases, training details, and environmental considerations for users interested in using it for email generation tasks. For further information, please refer to the associated Hugging Face Model Hub repository.