nextai-team committed on
Commit 760e137
1 Parent(s): a7c3b2a

Update README.md

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -15,16 +15,16 @@ metrics:
 
 
 
- Model Description
+ **Model Description**
 Moe-2x7b-QA-Code is a state-of-the-art language model specialized in Question Answering (QA) and code-related queries. Leveraging the Mixture of Experts (MoE) architecture, this model has been trained on a diverse dataset encompassing technical documentation, forums, and code repositories to provide accurate and context-aware responses to both technical and general questions.
 
- How to Use
+ ***How to Use***
 ```
 from transformers import AutoTokenizer
 import transformers
 import torch
 
- model = "nextai-team/Moe-2x7b-QA-Code" #If you want to test your own model, replace this value with the model directory path
+ model = "nextai-team/Moe-2x7b-QA-Code"
 
 tokenizer = AutoTokenizer.from_pretrained(model)
 pipeline = transformers.pipeline(
@@ -44,7 +44,7 @@ response = generate_resposne("How to learn coding .Please provide a step by step
 print(response)
 
 ```
- Limitations and Bias:
+ ***Limitations and Bias***
 
 This model, like any other, has its limitations. It may exhibit biases inherent in the training data or struggle with questions outside its training scope. Users should critically assess the model's outputs, especially for sensitive or critical applications.
 
@@ -52,15 +52,15 @@ Training Data:
 
 The Moe-2x7b-QA-Code model was trained on a curated dataset comprising technical documentation, Stack Overflow posts, GitHub repositories, and other code-related content. This extensive training set ensures the model's proficiency in understanding and generating code-related content alongside general language understanding.
 
- Training Procedure:
+ ***Training Procedure***
 
 The model was trained using a Mixture of Experts (MoE) approach, allowing it to dynamically leverage different subsets of parameters for different types of input data. This method enhances the model's capacity and efficiency, enabling it to excel in a wide range of QA and coding tasks.
 
 
 
- Model Architecture:
+ ***Model Architecture***
 
 Moe-2x7b-QA-Code employs an advanced MoE architecture with 2x7 billion parameters, optimized for high performance in QA and coding tasks. This architecture enables the model to efficiently process and generate accurate responses to complex queries.
 
- Contact:
+ ***Contact***
 Https://nextai.co.in
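
The hunks above elide the middle of the README's usage snippet (the `pipeline(...)` arguments and the `generate_resposne` helper are not shown). For readers who want to run the visible imports end to end, the sketch below completes them with standard `transformers` text-generation pipeline calls; the generation settings and the `ask` helper are illustrative assumptions, not the repository's elided code.

```python
# Illustrative completion of the README's truncated snippet; the generation
# settings and the ask() helper below are assumptions, not the repo's code.
from transformers import AutoTokenizer
import transformers
import torch

model = "nextai-team/Moe-2x7b-QA-Code"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,   # assumed dtype; pick what your hardware supports
    device_map="auto",
)

def ask(question: str) -> str:
    # Mirrors the README's generate-then-print pattern with assumed sampling settings.
    outputs = pipeline(question, max_new_tokens=256, do_sample=True, temperature=0.7)
    return outputs[0]["generated_text"]

print(ask("How to learn coding? Please provide a step by step approach."))
```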
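
As a companion to the Training Procedure and Model Architecture notes, here is a minimal sketch of the routing idea behind a two-expert MoE layer: a small gating network scores the experts for each token and the layer mixes their outputs accordingly. It is a generic illustration of the technique under assumed dimensions, not the model's actual implementation.

```python
# Minimal two-expert MoE layer sketch (generic illustration, not this model's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoExpertMoE(nn.Module):
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        # Two independent feed-forward "experts".
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
             for _ in range(2)]
        )
        # Router scores each token against each expert.
        self.router = nn.Linear(d_model, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        weights = F.softmax(self.router(x), dim=-1)                       # (batch, seq, 2)
        expert_outs = torch.stack([e(x) for e in self.experts], dim=-1)   # (batch, seq, d_model, 2)
        # Per-token weighted mix of the expert outputs.
        return (expert_outs * weights.unsqueeze(-2)).sum(dim=-1)

# Example: route a batch of token embeddings through the layer.
layer = TwoExpertMoE(d_model=64, d_ff=256)
tokens = torch.randn(2, 10, 64)
print(layer(tokens).shape)  # torch.Size([2, 10, 64])
```

Production MoE layers typically route each token only to the top-scoring expert(s), which is what keeps a 2x7B-parameter model comparatively cheap to evaluate per token.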