marcodambra committed on
Commit 4615aa9 · verified · 1 Parent(s): 402f76b

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -15,12 +15,12 @@ tags:
 
 XXXX is an updated version of [Mistral-7B-v0.2](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) specifically fine-tuned with SFT and LoRA adjustments.
 
-- It's trained both on publicly available datasets, like SQUAD-it, and datasets we've created in-house.
+- It's trained both on publicly available datasets, like [SQUAD-it](https://huggingface.co/datasets/squad_it), and datasets we've created in-house.
 - it's designed to understand and maintain context, making it ideal for Retrieval Augmented Generation (RAG) tasks and applications requiring contextual awareness.
 
 # Evaluation
 
-We evaluated the model using the same test sets as used for the Open Ita LLM Leaderboard
+We evaluated the model using the same test sets as used for the [Open Ita LLM Leaderboard](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard):
 
 | hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average |
 |:----------------------| :--------------- | :-------------------- | :------- |
@@ -59,7 +59,7 @@ print(decoded[0])
 
 xxxx has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of
 responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition
-of the corpus was used to train the base model (mistralai/Mistral-7B-v0.2), however it is likely to have included a mix of Web data and technical sources
+of the corpus was used to train the base model [mistralai/Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-v0.2), however it is likely to have included a mix of Web data and technical sources
 like books and code.
 