---
license: apache-2.0
datasets:
- allenai/dolma
language:
- en
base_model: allenai/OLMo-1B-0724-hf
library_name: transformers
pipeline_tag: text-generation
tags:
- art
- literature
- OLMo
- allenai
---
## Model Overview

`OLMo-1B-Base-Shakespeare` is a fine-tuned version of the `allenai/OLMo-1B-0724-hf` model, trained on the complete works of William Shakespeare. The model generates text in the style of Shakespeare's writing and has been optimized to capture the linguistic and stylistic nuances of the original texts.

## Model Details
- **Model Type:** Base Model
- **Base Model:** [allenai/OLMo-1B-0724-hf](https://huggingface.co/allenai/OLMo-1B-0724-hf)
- **Training Dataset:** [Works by William Shakespeare](https://gist.githubusercontent.com/blakesanie/dde3a2b7e698f52f389532b4b52bc254/raw/76fe1b5e9efcf0d2afdfd78b0bfaa737ad0a67d3/shakespeare.txt)
- **GPU VRAM Requirements:** 25 GB

- **Intended Use Cases:** 
  - Creative writing assistance
  - Educational purposes for studying literary styles
  - Text generation in the style of William Shakespeare

## Installation
Ensure you have the `transformers` library installed, along with `torch` and `accelerate` (the latter is required for `device_map`):
```bash
pip install transformers torch accelerate
```
## Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

torch.random.manual_seed(0)

model_name = 'sartajbhuvaji/OLMo-1B-Base-Shakespeare'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="cuda",  # loads the model weights directly onto the GPU
    torch_dtype="auto",
    trust_remote_code=True,
)

input_text = 'Hello how are you?'
input_ids = tokenizer.encode(input_text, return_tensors='pt').to('cuda')

output = model.generate(input_ids, max_length=100, num_return_sequences=1, no_repeat_ngram_size=2)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
'''
Hello how are you?
  SECOND GENTLEMAN. I am a gentleman.
    The Duke, my lord, and all the court are yours.

                          Enter a MESSENGER

  THIRD GENTSLE MAN. Here's a messenger. What news? What's the news,
      sir? How doth your lady? Is she well? Or is she
        hears'd, beaten, or slain? The news is, sir
'''
```
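The `no_repeat_ngram_size=2` argument above forbids any bigram (pair of consecutive tokens) from appearing twice in the generated sequence, which curbs the looping repetition small models are prone to. The constraint itself can be sketched independently of the model, as a check over plain token lists (a minimal illustration, not part of the `transformers` API):

```python
def has_repeated_ngram(tokens, n):
    """Return True if any n-gram occurs more than once in the token list."""
    seen = set()
    for i in range(len(tokens) - n + 1):
        ngram = tuple(tokens[i:i + n])
        if ngram in seen:
            return True
        seen.add(ngram)
    return False

# With no_repeat_ngram_size=2, generate() guarantees the output satisfies:
print(has_repeated_ngram(["to", "be", "or", "not", "to", "be"], 2))   # True  ("to be" repeats)
print(has_repeated_ngram(["to", "be", "or", "not", "to", "see"], 2))  # False
```

Raising `max_length` or relaxing the n-gram constraint trades coherence for variety; values can be tuned to taste.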
## Fine-tuning Details
- **Global Steps:** 4656
- **Train Runtime:** 2710.0517 s
- **Train Samples per Second:** 13.742
- **Train Steps per Second:** 1.718
- **Epochs:** 3.0
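As a sanity check, the reported throughput figures are mutually consistent, and their ratio suggests an effective batch size of roughly 8 sequences per optimizer step (an inference from the metrics above, not a documented training parameter):

```python
# Reported fine-tuning metrics from this card
global_steps = 4656
train_runtime_s = 2710.0517
samples_per_s = 13.742
steps_per_s = 1.718

# Steps/sec should equal total steps divided by runtime
assert round(global_steps / train_runtime_s, 3) == steps_per_s

# Samples per step = effective batch size
print(round(samples_per_s / steps_per_s))  # 8

# With 3 epochs, steps per epoch:
print(global_steps // 3)  # 1552
```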


## Training Curve

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6354695712edd0ed5dc46b04/cVDWr59JFTZ6evZwgw5NF.png)