Commit bca02e7 (parent 1496e1d) by Samarvir S Vasale: Create README.md
---
base_model: mistralai/Mistral-Nemo-Instruct-2407
datasets:
- iamtarun/code_instructions_120k_alpaca
language:
- en
library_name: peft
license: apache-2.0
pipeline_tag: text-generation
---

# Aurora-12B

Aurora-12B is a language model fine-tuned from mistralai/Mistral-Nemo-Instruct-2407 for code generation and code understanding. It was trained on the iamtarun/code_instructions_120k_alpaca dataset to follow code-related instructions and produce working solutions.

## Overview

Aurora-12B is aimed at developers and researchers who need code completion, code explanation, and programming assistance. It supports a wide variety of programming languages and provides contextually relevant suggestions.

## Features

- Code Generation: generate code snippets in a variety of programming languages.
- Code Completion: complete partial code fragments.
- Code Understanding: explain code functionality and provide comments.
- Instruction Following: carry out code-related instructions accurately.

## Model Details

- Base Model: mistralai/Mistral-Nemo-Instruct-2407
- Fine-Tuned On: iamtarun/code_instructions_120k_alpaca
- Parameters: 12 billion
- Fine-Tuning Method: PEFT adapter (per the `library_name: peft` metadata above)
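
Because the metadata lists `library_name: peft`, this repository most likely ships a PEFT adapter rather than full model weights. A minimal loading sketch using the standard `transformers` and `peft` APIs follows; the adapter repo id `user/Aurora-12B` is a placeholder assumption, not this repository's actual id.

```python
def load_aurora(adapter_id: str,
                base_id: str = "mistralai/Mistral-Nemo-Instruct-2407"):
    """Load the base model and attach the Aurora-12B PEFT adapter.

    `adapter_id` is a placeholder for this repository's actual id.
    """
    # Imports are deferred so the sketch can be read without the heavy deps.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter
    return tokenizer, model

# Usage (downloads ~12B parameters; run only with sufficient hardware):
# tokenizer, model = load_aurora("user/Aurora-12B")
```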

## Dataset

The model was fine-tuned on iamtarun/code_instructions_120k_alpaca, which contains 120,000 examples of code instructions and solutions. The dataset's breadth helps the model handle a wide range of coding scenarios.
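
Alpaca-style datasets pair each instruction with an optional input and an expected output, so inference prompts are typically assembled with the same template. A minimal sketch, assuming the standard Alpaca prompt wording (the exact template used during fine-tuning is not documented here):

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Assemble an Alpaca-style prompt; the template wording is an
    assumption based on the dataset's format, not a documented spec."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_prompt("Write a Python function that reverses a string.")
```

The resulting string would be tokenized and passed to the model's `generate` method; everything after the final `### Response:` marker is the model's answer.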