---
license: cc-by-nc-sa-4.0
datasets:
- NorGLM/NO-CNN-DailyMail
language:
- 'no'
pipeline_tag: summarization
---

# Model Card

NorGPT-3B-summarization-peft is trained on top of the [NorGPT-3B](https://huggingface.co/NorGLM/NorGPT-3B) model using an RLHF strategy on the [NO-CNN-DailyMail](https://huggingface.co/datasets/NorGLM/NO-CNN-DailyMail) dataset.

Unlike step 2 of the original RLHF pipeline, we trained the reward model by estimating the semantic similarity between each candidate generated text and the human-annotated summary (golden summary) using the [NorBERT](https://huggingface.co/ltg/norbert) model. Generated summaries with higher cosine similarity to the golden summary are ranked higher when training the reward model.
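
As a concrete illustration of this ranking signal, the sketch below scores a candidate summary by cosine similarity to the golden summary. The mean-pooling of NorBERT hidden states is our assumption for illustration, not necessarily the pooling used in training:
```python
# Illustrative sketch of the ranking signal, not the actual training code.
# Assumption: sentence embeddings are mean-pooled NorBERT hidden states.
import torch
from transformers import AutoModel, AutoTokenizer

nb_tokenizer = AutoTokenizer.from_pretrained("ltg/norbert")
nb_model = AutoModel.from_pretrained("ltg/norbert")

def embed(text):
    # Mean-pool the last hidden state into a single sentence vector.
    inputs = nb_tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = nb_model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

def rank_score(candidate, golden):
    # Candidates with higher cosine similarity to the golden summary rank higher.
    return torch.nn.functional.cosine_similarity(embed(candidate), embed(golden), dim=0).item()
```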

Prompt format:
```
Summarise the article:\\n{article} |||\\n{positive_sample}
```

Inference prompt:
```
Summarise the article:\\n{article} |||\\n
```
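
For reference, these prompt strings can be assembled with small helpers like the ones below (illustrative; note that `\\n` in the Python source is a literal backslash followed by `n`, matching the inference code further down):
```python
# Illustrative helpers that assemble the prompts shown above.
# Note: '\\n' in Python source is a literal backslash + 'n', matching the
# escape sequence used in the evaluation script below.
def training_prompt(article, positive_sample):
    return 'Summarise the article:\\n' + article + ' |||\\n' + positive_sample

def inference_prompt(article):
    return 'Summarise the article:\\n' + article + ' |||\\n'
```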

## Training Split
We split the data across the three RLHF steps (step 1: supervised fine-tuning, step 2: reward modelling, step 3: PPO):

| RLHF step | #samples |
|-----------|----------|
| step 1    | 61181    |
| step 2    | 16798    |
| step 3    | 9758     |

## Run the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "NorGLM/NorGPT-3B-rfhl-summarization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
```
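
Once loaded, the model can be queried with the inference prompt format. A minimal, illustrative sanity check (greedy decoding, as in the evaluation script below):
```python
# Quick usage check (illustrative): summarise a single article.
article = "..."  # a Norwegian news article
prompt = 'Summarise the article:\\n' + article + ' |||\\n'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, do_sample=False, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True).split("|||\\n")[-1])
```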

## Inference on Test Set
Load the model to evaluate on the test set of the NO-CNN-DailyMail dataset:
```python
import pandas as pd
import torch
from datasets import load_dataset

def generate_texts(model, tokenizer, prompts, max_seq_length=200):
    # prompts is a list of news articles
    results = []
    for prompt in prompts:
        # Skip articles that are too long for the model's context window
        if len(prompt.split()) > 1024:
            results.append('')
            continue

        prompt = 'Summarise the article:\\n' + prompt + ' |||\\n'

        model_inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
        output = model.generate(**model_inputs, do_sample=False, max_new_tokens=max_seq_length)
        result = tokenizer.decode(output[0], skip_special_tokens=True)
        result = result.split("|||\\n")[-1]
        results.append(result)
    return results

print("--LOADING EVAL DATA---")
# data_files loads into the 'train' split by default
eval_data = load_dataset("NorGLM/NO-CNN-DailyMail", data_files="test.csv")
prompts = eval_data['train']['article']
positive_samples = eval_data['train']['positive_sample']

print("--MAKING PREDICTIONS---")
model.eval()

output_file = "<output file name>"  # fill in your output path
with torch.no_grad():
    results = generate_texts(model, tokenizer, prompts)

df = pd.DataFrame({'article': prompts, 'generated_text': results, 'positive_sample': positive_samples})

print("Saving results to csv file...")
df.to_csv(output_file)
```
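
The script above stops at writing the CSV. As one possible next step (our addition, not part of the original card), the generated summaries can be scored against the golden summaries with ROUGE via Hugging Face's `evaluate` library:
```python
# Illustrative evaluation (not from the original card): compare generated
# summaries to the golden summaries with ROUGE.
import evaluate

# Skip articles that were too long and produced an empty output.
preds, refs = zip(*[(g, r) for g, r in zip(results, positive_samples) if g])

rouge = evaluate.load("rouge")
print(rouge.compute(predictions=list(preds), references=list(refs)))
```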

## Note
More training details will be released soon!