PahaII committed
Commit 777c9ef
1 parent: 69ef442

Update README.md

Files changed (1): README.md (+8, -3)
README.md CHANGED
@@ -9,7 +9,7 @@ tags:
 library_name: transformers
 ---
 # GPT-Neo-1.3B SimCTG for Conditional News Generation
-[SimCTG](https://github.com/yxuansu/SimCTG) model (released by Su et.al. in this [paper](https://arxiv.org/abs/2202.06417)), leveraging [GPT-Neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) (a large language model).
+[SimCTG](https://github.com/yxuansu/SimCTG) model (released by Su et al. in this [paper](https://arxiv.org/abs/2202.06417)), leveraging [GPT-Neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) (a large language model).
 
 ## Data Details
 It was trained on a large news corpus containing news content from 19 different publishers. Detailed dataset configuration is as follow:
@@ -48,9 +48,14 @@ We use the prompt template `Publisher: {vox} article: ` for training. We trained
 
 >>> publisher = "Reuters"
 >>> assert publisher in ["Reuters", "NYT", "CNBC", "Hill", "People", "CNN", "Vice", "Mashable", "Refinery", "BI", "TechCrunch", "Verge", "TMZ", "Axios", "Vox", "Guardian", "BBCNews", "WashingtonPost", "USAToday"]
->>> prompt = f"Publisher: {publisher.lower()} article: "
+>>> prompt = f"Publisher: {publisher.lower()} article: Local police is dealing with a car accident"
 
 >>> inputs = tokenizer(prompt, return_tensors="pt")
 >>> out = model.generate(**inputs, penalty_alpha=0.6)
 >>> print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
+
+## Publisher: reuters article: Local police is dealing with a car accident that killed two people and injured several others. The incident happened in the town of Dharamshala,
+## where an SUV crashed into a truck on Sunday evening. According to eyewitnesses, the vehicle was traveling at high speed when it collided with another vehicle.
+## The driver of the SUV then tried to flee the scene but could not do so due to the large number of onlookers. Police officers are now searching for the driver of the SUV who they suspect may have been driving
+## under the influence of alcohol or drugs. It’s unclear what caused the crash. ... ...
 ```
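
The prompt construction being changed in the diff (publisher check plus an f-string template, now with a leading article fragment) can be wrapped in a small helper. A minimal sketch — the `build_prompt` function is my own illustrative wrapper, not part of the released code:

```python
# Helper mirroring the prompt template shown in the README snippet.
# The function and its leading_text parameter are illustrative only.

PUBLISHERS = ["Reuters", "NYT", "CNBC", "Hill", "People", "CNN", "Vice",
              "Mashable", "Refinery", "BI", "TechCrunch", "Verge", "TMZ",
              "Axios", "Vox", "Guardian", "BBCNews", "WashingtonPost", "USAToday"]

def build_prompt(publisher: str, leading_text: str = "") -> str:
    """Build a conditional-generation prompt for a supported publisher."""
    if publisher not in PUBLISHERS:
        raise ValueError(f"Unsupported publisher: {publisher!r}")
    # The template lower-cases the publisher name, matching the training format.
    return f"Publisher: {publisher.lower()} article: {leading_text}"

print(build_prompt("Reuters", "Local police is dealing with a car accident"))
```

Raising on an unknown publisher replaces the snippet's bare `assert`, which would silently disappear if Python runs with `-O`.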
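
Passing `penalty_alpha=0.6` to `model.generate` (together with a `top_k` greater than 1, the transformers default) enables contrastive search, the decoding scheme proposed alongside SimCTG: each candidate token is scored by model confidence minus a scaled degeneration penalty (its maximum similarity to the context so far). A toy numpy illustration of that scoring rule — all numbers below are invented, not taken from the model:

```python
import numpy as np

# Toy illustration of the contrastive-search scoring rule used when
# penalty_alpha > 0 in model.generate. All values are made up.
alpha = 0.6

# Model confidence p(v | context) for three candidate tokens.
probs = np.array([0.5, 0.3, 0.2])

# Max cosine similarity of each candidate's representation to the
# previous context tokens (the degeneration penalty).
max_sim = np.array([0.9, 0.2, 0.4])

# Score: (1 - alpha) * confidence - alpha * degeneration penalty.
scores = (1 - alpha) * probs - alpha * max_sim
best = int(np.argmax(scores))
print(best)  # index 1: likely enough, and least similar to the context
```

With `alpha = 0`, the rule reduces to greedy search; larger `alpha` trades likelihood for diversity, which is why the README's sample output stays on topic without degenerating into repetition.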