---
language:
- en
---

# FIDO-GPT: Generative AI behind the "FIDONet Cybernetic Immortality" Project

[FIDONet](https://en.wikipedia.org/wiki/FidoNet) is a historic computer network based on nightly mail exchange between servers
over telephone lines, which was popular in the 1990s. In the [FIDONet Cybernetic Immortality Project](https://soshnikov.com/art/fidoci)
we aim to create exhibits that revive the now-almost-dead FIDONet by automatically writing correspondence in
FIDONet style using generative large language models.

This model is based on [GPT2-large](https://huggingface.co/gpt2-large) and was fine-tuned for 2 epochs on archives of
the [ExecPC BBS](https://en.wikipedia.org/wiki/ExecPC_BBS), obtained from [here](https://breakintochat.com/collections/messages/fidonet/index.html).
Fine-tuning took around 9 hours on an NVIDIA A100 GPU in the Yandex DataSphere service.

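The original training script is not included in this model card; the sketch below shows how a comparable fine-tuning run could look with the Hugging Face `Trainer` API. The dataset file name (`fidonet.txt`), the preprocessing into plain text, and all hyperparameters except the epoch count are illustrative assumptions, not the settings actually used.

```python
# Hypothetical fine-tuning sketch: file name, block size and batch size are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no padding token by default
model = AutoModelForCausalLM.from_pretrained("gpt2-large")

# Assume the BBS message archives were flattened into one plain-text file
dataset = load_dataset("text", data_files={"train": "fidonet.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="fido-gpt",
    num_train_epochs=2,                  # the card states 2 epochs
    per_device_train_batch_size=4,       # assumed; depends on GPU memory
    fp16=True,
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM objective
)
trainer.train()
```
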
This code can be used for generation:

```python
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer

model_name = 'estonto/fido-gpt'

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a text-generation pipeline on the GPU (drop device="cuda" to run on CPU)
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, device="cuda")

# Prompt uses the "<s>Topic: ..." format; literal "\n" sequences in the generated
# text are converted back to real line breaks
result = pipe("<s>Topic: COMPUTING", do_sample=True, max_length=500)[0]['generated_text'].replace('\\n', '\n')
```

Project idea and model training: [Dmitry Soshnikov](https://soshnikov.com)