manu committed
Commit 104db24
1 Parent(s): 47507f1

Update README.md

Files changed (1)
  1. README.md +15 -1
README.md CHANGED
@@ -75,5 +75,19 @@ Que puis-je faire à Marseille?<|im_end|>
  inputs = tokenizer(CHAT, return_tensors="pt", add_special_tokens=True).to(model.device)
  tokens = model.generate(**inputs, max_new_tokens=150, do_sample=True, top_p=0.95, top_k=60, temperature=0.5)
  print(tokenizer.decode(tokens[0]))
-
  ```
+
+
+ ## Model limitations
+
+ Evaluation results indicate the model is strong in its size category: it offers decent performance on writing-based tasks and internal knowledge, and very strong performance on translation tasks. The small size of the CroissantLLM model, however, limits its capacity to perform more complex reasoning-based tasks, at least in a zero-shot or few-shot manner with its generalist base and chat versions. This is in line with other models of this size and underlines the importance of scale for more abstract tasks.
+
+ #### Knowledge Cutoff
+ The model's training dataset has a data cutoff date corresponding to the November 2023 Wikipedia dump. This is the de facto knowledge cutoff for our base model, although much of the information dates back further. Updated versions can be trained through continued pre-training or subsequent fine-tuning.
+
+ #### Multilingual performance
+ CroissantLLM is mostly a French and English model. Code performance is relatively limited, and although some data from other languages is included in the SlimPajama training set, out-of-the-box performance in other languages should not be expected; that said, some European languages do work quite well.
+
+ #### Hallucinations
+ CroissantLLM can hallucinate and output factually incorrect data, especially regarding complex topics. This is to be expected given the small model size, and hallucination rates seem lower than those of most models in the same size category, although no quantitative assessments have been conducted outside of MT-Bench experiments.
+
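
For reference, the generation snippet shown in the diff relies on a `tokenizer`, a `model`, and a `CHAT` prompt defined earlier in the README, none of which are touched by this change. Below is a minimal, self-contained sketch of that setup; the checkpoint name (`croissantllm/CroissantLLMChat-v0.1`) and the ChatML-style prompt string (inferred from the `<|im_end|>` marker in the hunk context) are assumptions rather than content of this commit. The generation call reproduces the sampling parameters from the diff (top_p=0.95, top_k=60, temperature=0.5).

```python
# Sketch only: checkpoint name and prompt format are assumed, not taken from this commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "croissantllm/CroissantLLMChat-v0.1"  # assumed chat checkpoint

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=dtype).to(device)

# ChatML-style prompt, inferred from the <|im_end|> marker visible in the diff context.
CHAT = "<|im_start|>user\nQue puis-je faire à Marseille?<|im_end|>\n<|im_start|>assistant\n"

# Same generation call and sampling parameters as in the README snippet above.
inputs = tokenizer(CHAT, return_tensors="pt", add_special_tokens=True).to(model.device)
tokens = model.generate(**inputs, max_new_tokens=150, do_sample=True, top_p=0.95, top_k=60, temperature=0.5)
print(tokenizer.decode(tokens[0]))
```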