Meforgers committed
Commit 397cd7c (1 parent: 8dc0036)

Update README.md

Files changed (1)
1. README.md +24 -1
README.md CHANGED
@@ -1,3 +1,26 @@
+ ---
+ license: mit
+ datasets:
+ - Meforgers/Aixr-Thinkable-V1
+ language:
+ - tr
+ - en
+ base_model:
+ - meta-llama/Llama-3.1-8B
+ new_version: meta-llama/Llama-3.1-8B
+ tags:
+ - code
+ - medical
+ - math
+ - turkish
+ - türkçe
+ - coding
+ - yazılım
+ - programlama
+ - thinkable
+ - düşünebilen
+ - düşünen
+ ---
  # LLama-3.1-Thinkable: Bilingual AI Expert in Mathematics and Programming
  
  LLama-3.1-Thinkable is a fine-tuned version of LLama 3.1, specifically designed to excel in **bilingual (Turkish and English)** communication, advanced **mathematics**, and **programming** tasks. This model combines enhanced reasoning capabilities with strong multilingual proficiency, offering a cutting-edge solution for users in diverse fields.
@@ -32,7 +55,7 @@ LLama-3.1-Thinkable is a fine-tuned version of LLama 3.1, specifically designed
  - **Fine-tuning Dataset:**
    - High-quality bilingual datasets (Turkish-English).
    - Specialized datasets for mathematics and programming tasks.
- - **Parameter Count:** 13B / 65B (specify based on your model size).
+ - **Parameter Count:** 5.25B & 8B
  - **Training Environment:**
    - Optimized on GPUs with [Hugging Face Transformers](https://huggingface.co/transformers).
  
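The updated card names [Hugging Face Transformers](https://huggingface.co/transformers) as the training environment and `meta-llama/Llama-3.1-8B` as the base model. For orientation, here is a minimal sketch of loading such a fine-tune for inference with Transformers. The repository id `Meforgers/LLama-3.1-Thinkable` is a hypothetical placeholder (the commit does not state the published model id), and the dtype/device settings are illustrative assumptions, not part of the model card.

```python
# Minimal sketch (assumptions noted above): load the fine-tune with Hugging Face
# Transformers and run a short bilingual prompt.
# Requires: pip install transformers torch accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Meforgers/LLama-3.1-Thinkable"  # hypothetical repo id; not given in the commit

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # half precision so an 8B model fits on a single GPU
    device_map="auto",           # let accelerate place layers on available devices
)

# Turkish prompt: "Briefly explain Euclid's algorithm for finding the GCD of two numbers."
prompt = "İki sayının en büyük ortak bölenini bulan Öklid algoritmasını kısaca açıkla."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Since the card advertises Turkish/English bilingual use, the same call works unchanged for an English prompt; only the prompt string differs.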