Crystalcareai committed
Commit cb0c1d7 (parent 24382d5)

Update README.md

Files changed (1): README.md +4 -3
README.md CHANGED
@@ -44,7 +44,7 @@ To ensure the robustness and effectiveness of Llama-3-SEC, the model has undergo
 
 - Domain-specific perplexity, measuring the model's performance on SEC-related data
 
-<img src="https://i.ibb.co/xGHRfLf/Screenshot-2024-06-11-at-10-23-59-PM.png" width="600">
+<img src="https://i.ibb.co/K5d0wMh/Screenshot-2024-06-11-at-10-23-18-PM.png" width="600">
 
 - Extractive numerical reasoning tasks, using subsets of TAT-QA and ConvFinQA datasets
 
@@ -52,9 +52,10 @@ To ensure the robustness and effectiveness of Llama-3-SEC, the model has undergo
 
 - General evaluation metrics, such as BIG-bench, AGIEval, GPT4all, and TruthfulQA, to assess the model's performance on a wide range of tasks
 
-<img src="https://i.ibb.co/K5d0wMh/Screenshot-2024-06-11-at-10-23-18-PM.png" width="600">
+<img src="https://i.ibb.co/xGHRfLf/Screenshot-2024-06-11-at-10-23-59-PM.png" width="600">
+
 
-The evaluation results demonstrate significant improvements in domain-specific performance while maintaining strong general capabilities, thanks to the use of advanced CPT and model merging techniques.
+These results demonstrate significant improvements in domain-specific performance while maintaining strong general capabilities, thanks to the use of advanced CPT and model merging techniques.
 
 
 ## Training and Inference
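
For context on the domain-specific perplexity metric referenced in the diff above: perplexity is typically computed as the exponential of the mean token-level cross-entropy over held-out domain text. Below is a minimal sketch using the Hugging Face `transformers` API; the model id and the SEC passage are placeholders (not taken from this repository), so substitute the actual checkpoint and evaluation corpus.

```python
# Minimal sketch: perplexity of a causal LM on a single SEC-style passage.
# MODEL_ID and sec_passage are placeholders, not values from this repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-org/Llama-3-SEC"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

sec_passage = (
    "Item 1A. Risk Factors. Our operating results may fluctuate "
    "significantly due to changes in regulatory requirements."
)

inputs = tokenizer(sec_passage, return_tensors="pt").to(model.device)
with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy loss
    # over the sequence; perplexity is exp(loss).
    outputs = model(**inputs, labels=inputs["input_ids"])

print(f"Perplexity: {torch.exp(outputs.loss).item():.2f}")
```

Averaging this loss over a held-out SEC corpus before exponentiating yields a corpus-level perplexity comparable to the figures shown in the screenshots above.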