jpcorb20 committed
Commit 11e5f2a
1 Parent(s): e941ed5

Update README.md

Files changed (1):
  1. README.md (+16 −16)

README.md CHANGED
@@ -28,21 +28,21 @@ The initial weigths were from the [google/pegasus-reddit_tifu](https://huggingfa
 
 ## Training procedure
 
-Used the _example/seq2seq/run_summarization.py_ script from the transformers source _4.5.0dev0_.
+Used the example/seq2seq/run_summarization.py script from the transformers source 4.5.0dev0.
 
-n_epochs: 3,\\
-batch_size: 4, \\
-max_source_length: 512,\\
+n_epochs: 3,\
+batch_size: 4, \
+max_source_length: 512,\
 max_target_length: 128
 
 ## Eval results
 
-eval_gen_len: 35.89,\\
-eval_loss: 1.3807392120361328,\\
-eval_rouge1: 47.3372,\\
-eval_rouge2: 24.4728,\\
-eval_rougeL: 37.9078,\\
-eval_rougeLsum: 43.5744,\\
+eval_gen_len: 35.89,\
+eval_loss: 1.3807392120361328,\
+eval_rouge1: 47.3372,\
+eval_rouge2: 24.4728,\
+eval_rougeL: 37.9078,\
+eval_rougeLsum: 43.5744,\
 eval_samples_per_second: 2.814
 
 ## Example
@@ -54,12 +54,12 @@ Used the _example/seq2seq/run_summarization.py_ script from the transformers sou
 tokenizer = PegasusTokenizer.from_pretrained(model_name)
 model = PegasusForConditionalGeneration.from_pretrained(model_name)
 
-src_text = """Carter: Hey Alexis, I just wanted to let you know that I had a really nice time with you tonight.\\r\
-Alexis: Thanks Carter. Yeah, I really enjoyed myself as well.\\r\
-Carter: If you are up for it, I would really like to see you again soon.\\r\
-Alexis: Thanks Carter, I'm flattered. But I have a really busy week coming up.\\r\
-Carter: Yeah, no worries. I totally understand. But if you ever want to go grab dinner again, just let me know.\\r\
-Alexis: Yeah of course. Thanks again for tonight. Carter: Sure. Have a great night.\\r\
+src_text = """Carter: Hey Alexis, I just wanted to let you know that I had a really nice time with you tonight.\\\\r\\
+Alexis: Thanks Carter. Yeah, I really enjoyed myself as well.\\\\r\\
+Carter: If you are up for it, I would really like to see you again soon.\\\\r\\
+Alexis: Thanks Carter, I'm flattered. But I have a really busy week coming up.\\\\r\\
+Carter: Yeah, no worries. I totally understand. But if you ever want to go grab dinner again, just let me know.\\\\r\\
+Alexis: Yeah of course. Thanks again for tonight. Carter: Sure. Have a great night.\\\\r\\
 """
 
 token_params = dict(max_length=512, truncation=True, padding='longest', return_tensors="pt")
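
The escaped backslashes in the diff's src_text are hard to read. As a minimal sketch, assuming the dialogue turns are meant to be separated by literal `\r\n` sequences (an interpretation of the escaping above, not confirmed by this excerpt), the model input and the card's token_params can be reconstructed like this:

```python
# Rebuild the example dialogue programmatically; using "\r\n" as the turn
# separator is an assumption inferred from the escaped src_text in the diff.
turns = [
    "Carter: Hey Alexis, I just wanted to let you know that I had a really nice time with you tonight.",
    "Alexis: Thanks Carter. Yeah, I really enjoyed myself as well.",
    "Carter: If you are up for it, I would really like to see you again soon.",
    "Alexis: Thanks Carter, I'm flattered. But I have a really busy week coming up.",
    "Carter: Yeah, no worries. I totally understand. But if you ever want to go grab dinner again, just let me know.",
    "Alexis: Yeah of course. Thanks again for tonight. Carter: Sure. Have a great night.",
]
src_text = "\r\n".join(turns)

# Same tokenization settings as the card's token_params; max_length matches
# the max_source_length of 512 used during training.
token_params = dict(max_length=512, truncation=True, padding="longest",
                    return_tensors="pt")

print(src_text.count("\r\n"))  # 5 separators joining 6 turns
```

From here the card's example would pass `tokenizer(src_text, **token_params)` to the loaded tokenizer and call `model.generate` (capping output at the max_target_length of 128 listed above); the model id bound to `model_name` is defined outside this excerpt, so it is not reproduced here.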