Kukedlc committed
Commit
4ebe198
1 Parent(s): 21514c9

Create README.md

Files changed (1)
  1. README.md +37 -0
README.md ADDED
@@ -0,0 +1,37 @@
+ ---
+ license: apache-2.0
+ datasets:
+ - microsoft/orca-math-word-problems-200k
+ - ise-uiuc/Magicoder-Evol-Instruct-110K
+ - Vezora/Tested-22k-Python-Alpaca
+ ---
+ ![Kukedlc/NeuralExperiment-7b-dare-ties](https://raw.githubusercontent.com/kukedlc87/imagenes/main/DALL%C2%B7E%202024-03-05%2000.28.41%20-%20Imagine%20a%20visual%20representation%20of%20a%20language%20model%20inspired%20by%20the%20Mandelbrot%20fractal.%20The%20scene%20should%20depict%20an%20abstract%2C%20intricate%20network%20resembl.webp)
+
+ # Model Card for a Custom Fine-Tuned Model
+ - Base model: [Kukedlc/NeuralExperiment-7b-dare-ties](https://huggingface.co/Kukedlc/NeuralExperiment-7b-dare-ties)
+
+ ## Model Description
+ This model is an experimental fine-tune trained on three distinct datasets focusing on logical reasoning, mathematics, and programming. Training proceeded from the last layer (31) backward with a gradually decreasing learning rate per layer. The primary goal is to address the common 'INSTINST' bug observed in leaderboard models by retraining only the latest layers; a minimal sketch of this setup follows.
+
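+ The exact training script is not included in this card; the snippet below is a hypothetical sketch of layer-wise fine-tuning with a per-layer learning-rate decay, written with `transformers` and PyTorch. The number of trained layers, the base learning rate, and the decay factor are illustrative assumptions, not the author's recipe.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM
+
+ # Placeholder repo id: the base model named in this card.
+ model = AutoModelForCausalLM.from_pretrained(
+     "Kukedlc/NeuralExperiment-7b-dare-ties", torch_dtype=torch.bfloat16
+ )
+
+ TOP_LAYER = 31   # Mistral-7B has 32 decoder layers, indexed 0-31
+ N_TRAINED = 4    # assumption: how many of the top layers to update
+ BASE_LR = 2e-5   # assumption: learning rate for the topmost layer
+ DECAY = 0.5      # assumption: LR multiplier for each layer further down
+
+ param_groups = []
+ for idx, layer in enumerate(model.model.layers):
+     if idx > TOP_LAYER - N_TRAINED:
+         # Layers closer to the output get larger learning rates.
+         lr = BASE_LR * (DECAY ** (TOP_LAYER - idx))
+         param_groups.append({"params": layer.parameters(), "lr": lr})
+     else:
+         # Freeze everything below the trained block.
+         for p in layer.parameters():
+             p.requires_grad = False
+
+ # Embeddings and lm_head are left out of the optimizer here for brevity.
+ optimizer = torch.optim.AdamW(param_groups)
+ ```
+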
+ ## Datasets Used for Training
+ - `microsoft/orca-math-word-problems-200k`: a large-scale dataset of mathematical word problems aimed at enhancing the model's numerical reasoning and problem-solving capabilities.
+ - `ise-uiuc/Magicoder-Evol-Instruct-110K`: a dataset designed to improve code generation and understanding, contributing to the model's programming-language proficiency.
+ - `sahil2801/CodeAlpaca-20k`: a dataset focused on programming challenges to further refine the model's coding and logical reasoning skills.
+
+ Each dataset contributed 20,000 data points to the training process, ensuring a balanced representation of logic, mathematics, and programming tasks.
+
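+ A hypothetical sketch of that 20k-per-dataset mix with the `datasets` library is below; the dataset ids come from this card, while the column names and the uniform sampling are assumptions.
+
+ ```python
+ from datasets import load_dataset, concatenate_datasets
+
+ def take(name, to_text, n=20_000, seed=42):
+     """Sample n rows from a dataset and map each row to a single 'text' field."""
+     ds = load_dataset(name, split="train").shuffle(seed=seed).select(range(n))
+     return ds.map(lambda r: {"text": to_text(r)}, remove_columns=ds.column_names)
+
+ parts = [
+     # Column names below are assumptions about each dataset's schema.
+     take("microsoft/orca-math-word-problems-200k",
+          lambda r: f"{r['question']}\n{r['answer']}"),
+     take("ise-uiuc/Magicoder-Evol-Instruct-110K",
+          lambda r: f"{r['instruction']}\n{r['response']}"),
+     take("sahil2801/CodeAlpaca-20k",
+          lambda r: f"{r['instruction']}\n{r['output']}"),
+ ]
+
+ # ~60,000 examples, balanced across math, code, and instructions.
+ mixed = concatenate_datasets(parts).shuffle(seed=42)
+ ```
+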
+ ## Training Environment
+ - The model was trained on Kaggle's free GPU environment, allowing for cost-effective fine-tuning and experimentation.
+ - Users interested in replicating or extending this training can find the Kaggle notebook on my profile or request it directly for collaborative purposes.
+
+ ## Preliminary Results
+ - The model shows promising results in solving logical puzzles and mathematical problems, especially those with misleading or non-obvious solutions that it initially struggled with.
+ - Ongoing experiments aim to quantify the impact of targeted training on the model's reasoning capabilities across different domains.
+
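+ ## Example Usage
+ A quick, hypothetical inference sketch with the `transformers` pipeline. The final repository id for this fine-tune is not stated in the card, so the base model id is used as a placeholder.
+
+ ```python
+ import torch
+ from transformers import pipeline
+
+ generator = pipeline(
+     "text-generation",
+     model="Kukedlc/NeuralExperiment-7b-dare-ties",  # placeholder repo id
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+
+ prompt = "I have 3 apples and yesterday I ate 2 pears. How many apples are left?"
+ print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
+ ```
+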
+ ## Invitation for Collaboration
+ - Feedback, suggestions, and collaborative efforts are highly encouraged to further refine and evaluate the model.
+ - If you are interested in contributing to or experimenting with this model, please reach out or access the code directly from my Kaggle profile.
+
+ ## Contact Information
+ - For any inquiries, suggestions, or collaboration proposals, please contact [Your Name] at [Your Email].