nextai-team committed on
Commit
fc5ad3e
1 Parent(s): 3a0e6d4

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -16,6 +16,7 @@ metrics:
 
 
 **Model Description**
+
 Moe-2x7b-QA-Code is a state-of-the-art language model specialized in Question Answering (QA) and code-related queries. Leveraging the Mixture of Experts (MoE) architecture, this model has been trained on a diverse dataset encompassing technical documentation, forums, and code repositories to provide accurate and context-aware responses to both technical and general questions.
 
 ***How to Use***
@@ -58,7 +59,7 @@ Performance demonstrates significant improvements in accuracy and relevance over
 
 This model, like any other, has its limitations. It may exhibit biases inherent in the training data or struggle with questions outside its training scope. Users should critically assess the model's outputs, especially for sensitive or critical applications.
 
-Training Data:
+**Training Data**
 
 The Moe-2x7b-QA-Code model was trained on a curated dataset comprising technical documentation, Stack Overflow posts, GitHub repositories, and other code-related content. This extensive training set ensures the model's proficiency in understanding and generating code-related content alongside general language understanding.
 
 
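Since the README's ***How to Use*** section is truncated in this diff, here is a minimal sketch of querying the model through the standard Hugging Face transformers text-generation API. The repo id `nextai-team/Moe-2x7b-QA-Code`, the plain-text prompt format, and the generation settings are assumptions for illustration, not taken from this commit.

```python
# Minimal sketch of querying Moe-2x7b-QA-Code via Hugging Face transformers.
# Assumptions: the repo id and plain-text prompting below; the README's
# actual "How to Use" section may specify a different format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nextai-team/Moe-2x7b-QA-Code"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A code-related question, matching the model's stated QA/code specialization.
prompt = "Q: How do I reverse a linked list in Python?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```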