julien-c committed
Commit 8dcf634
1 Parent(s): fd954b8

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/ktrapeznikov/gpt2-medium-topic-news/README.md

Files changed (1)
  README.md +41 -0
README.md ADDED
---
language:
- en
thumbnail:
widget:
- text: "topic: climate article:"
---

# GPT2-medium-topic-news

## Model description

GPT2-medium fine-tuned on a large news corpus, conditioned on a topic.

## Intended uses & limitations

#### How to use

To generate a news article conditioned on a topic, prompt the model with:
`topic: climate article:`

The following tags were used during training:
`arts law international science business politics disaster world conflict football sport sports artanddesign environment music film lifeandstyle business health commentisfree books technology media education politics travel stage uk society us money culture religion science news tv fashion uk australia cities global childrens sustainable global voluntary housing law local healthcare theguardian`

Zero-shot generation works pretty well as long as `topic` is a single word and not too specific.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

device = "cuda:0"
tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/gpt2-medium-topic-news")
# AutoModelWithLMHead is deprecated in newer transformers releases; AutoModelForCausalLM also works.
model = AutoModelWithLMHead.from_pretrained("ktrapeznikov/gpt2-medium-topic-news")
model.to(device)

# Condition generation on a topic by prefixing the prompt with "topic: <tag> article:".
topic = "climate"
prompt = tokenizer(f"topic: {topic} article:", return_tensors="pt")

# Sample a continuation with nucleus sampling.
out = model.generate(prompt["input_ids"].to(device), do_sample=True, max_length=500, early_stopping=True, top_p=0.9)
print(tokenizer.decode(list(out.cpu()[0])))
```
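
The snippet above can be wrapped in a small helper that loops over several of the topic tags listed earlier and strips the conditioning prefix from each sample. This is a minimal sketch, not part of the original card: the helper name, the chosen tags, and the prefix-trimming detail are illustrative assumptions; the underlying calls are the same as in the example above.

```python
def generate_article(model, tokenizer, topic, device="cuda:0", max_length=500):
    """Sample one article for a topic tag and return only the generated continuation."""
    prompt_text = f"topic: {topic} article:"  # conditioning format used by this model
    inputs = tokenizer(prompt_text, return_tensors="pt")
    out = model.generate(
        inputs["input_ids"].to(device),
        do_sample=True,
        max_length=max_length,
        top_p=0.9,
    )
    text = tokenizer.decode(out[0].cpu(), skip_special_tokens=True)
    # Drop the prompt prefix so only the article body remains
    # (GPT-2's tokenizer decodes the prompt back verbatim).
    return text[len(prompt_text):].strip()

# Illustrative topics drawn from the training tags above.
for topic in ["climate", "sports", "technology"]:
    print(f"=== {topic} ===")
    print(generate_article(model, tokenizer, topic)[:300])
```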

## Training data


## Training procedure