Update README.md
## Model Description

Radiantloom Email Assist 7B is an email-assistant large language model fine-tuned from [Zephyr-7B-Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on a custom-curated dataset of 1,000 email-assistant summarization tasks prepared by AI Geek Labs. The context length for the model is 4096 tokens, and it is licensed for commercial use.
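Since the base model is Zephyr-7B-Beta, prompts presumably follow Zephyr's chat format; the exact template used during fine-tuning is not stated in this card, so the helper below is a sketch under that assumption (the system-message wording and sample email are illustrative, not from the card).

```python
def build_prompt(email_text: str) -> str:
    """Build a Zephyr-style chat prompt asking for an email summary.

    Assumes the fine-tune kept Zephyr-7B-Beta's chat format
    (<|system|> / <|user|> / <|assistant|> turns ending in </s>);
    the system instruction here is hypothetical.
    """
    system = "You are an email assistant. Summarize the user's email."
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{email_text}</s>\n"
        f"<|assistant|>\n"
    )


prompt = build_prompt(
    "Hi team, the Q3 review moved to Friday at 10am. Please update your calendars."
)
print(prompt)
```

In practice, `tokenizer.apply_chat_template` on the model's own tokenizer is the safer way to produce this string, since it uses whatever template ships with the checkpoint.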
## Intended Uses & Limitations

The model is fine-tuned specifically for summarizing personal and business emails and converting them into voice memos or chat messages.
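With a 4096-token context, long emails may need a quick length pre-check before prompting. An exact count requires the model's own tokenizer; the sketch below instead uses a crude whitespace heuristic (the tokens-per-word ratio and reserved budget are assumptions, not figures from this card).

```python
def rough_token_estimate(text: str, tokens_per_word: float = 1.3) -> int:
    """Crude token estimate from whitespace-split words.

    A real check should use the model's tokenizer; 1.3 tokens per
    word is only a ballpark ratio for English text.
    """
    return int(len(text.split()) * tokens_per_word)


def fits_context(email_text: str, context_len: int = 4096, reserved: int = 512) -> bool:
    """True if the email likely fits after reserving room for the
    instructions and the generated summary (reserved budget assumed)."""
    return rough_token_estimate(email_text) <= context_len - reserved


print(fits_context("Short status update: release shipped on time."))  # → True
```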
radiantloom-email-assist-7b is not a state-of-the-art generative language model and is not designed to perform competitively on general tasks against more modern model architectures or models trained on larger pretraining corpora.