ai-nexuz committed
Commit 6ce07b8 · verified · 1 Parent(s): 2531621

Update README.md

Files changed (1)
  1. README.md +20 -28
README.md CHANGED
@@ -1,6 +1,19 @@
- Below is the proper structure formatted to align with Hugging Face's repository conventions, including **tags**, **text**, and other essential metadata.
-
  ---
+ license: apache-2.0
+ datasets:
+ - kanhatakeyama/wizardlm8x22b-logical-math-coding-sft
+ base_model:
+ - unsloth/Llama-3.2-1B-Instruct
+ pipeline_tag: text-generation
+ library_name: transformers
+ tags:
+ - llm
+ - maths
+ - coding
+ - reasoning
+ - tech
+ ---
+

  # LLaMA-3.2-1B-Instruct Fine-Tuned Model

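For context on the new `datasets:` entry, the SFT set it names can be pulled straight from the Hub with the `datasets` library; a minimal sketch, assuming the standard `train` split exists:

```python
from datasets import load_dataset

# Dataset declared in the card's new YAML metadata
# (assumes a default "train" split on the Hub)
ds = load_dataset("kanhatakeyama/wizardlm8x22b-logical-math-coding-sft", split="train")
print(ds[0])  # inspect one instruction/response record
```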
@@ -68,7 +81,7 @@ pip install transformers datasets torch accelerate
  from transformers import AutoModelForCausalLM, AutoTokenizer

  # Load the fine-tuned model and tokenizer
- model_name = "your-huggingface-repo/llama-3.2-1b-instruct-finetuned"
+ model_name = "ai-nexuz/llama-3.2-1b-instruct-fine-tuned"
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModelForCausalLM.from_pretrained(model_name)
  ```
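The snippet in this hunk only loads the model and tokenizer; a minimal generation sketch that continues from it (the prompt and decoding settings here are illustrative, not from the card):

```python
import torch

# Continue from the `model` and `tokenizer` loaded above
messages = [{"role": "user", "content": "What is 17 * 23?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, not the echoed prompt
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```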
@@ -87,16 +100,7 @@ print(response)

  ---

- ## Evaluation Metrics
-
- | Metric | Value |
- |--------------------|----------------|
- | **Validation Loss** | 1.24 |
- | **Perplexity** | 3.47 |
- | **Accuracy** | 92% (logical tasks) |
- | **Code Quality** | 89% (test cases) |
-
- ---
+

  ## Model Training

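One sanity check on the metrics table removed here: perplexity is conventionally exp(cross-entropy loss), and the deleted numbers are consistent with that relationship:

```python
import math

# Perplexity = exp(validation loss): exp(1.24) ≈ 3.46,
# matching the table's reported 3.47 up to rounding
print(math.exp(1.24))  # 3.4556...
```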
@@ -106,11 +110,8 @@ print(response)

  ### Training Configuration
  - **Batch Size**: 32
- - **Learning Rate**: 5e-5
  - **Epochs**: 1
- - **Optimizer**: AdamW
- - **Scheduler**: Linear Decay
-
+
  ### Frameworks Used
  - **Unsloth**: For efficient training
  - **Hugging Face Transformers**: For model and tokenizer handling
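As a sketch only: the configuration kept above, together with the hyperparameters deleted in this hunk (5e-5, AdamW, linear decay; treat these as historical, since the commit removes them from the card), maps onto `transformers.TrainingArguments` roughly as follows:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the documented run configuration;
# learning rate, optimizer, and scheduler come from lines this
# commit deletes, so they may not reflect the final run
args = TrainingArguments(
    output_dir="llama-3.2-1b-instruct-fine-tuned",
    per_device_train_batch_size=32,  # Batch Size: 32
    num_train_epochs=1,              # Epochs: 1
    learning_rate=5e-5,              # removed from the card
    optim="adamw_torch",             # AdamW, removed from the card
    lr_scheduler_type="linear",      # linear decay, removed from the card
)
```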
@@ -149,7 +150,7 @@ This model can also be accessed using the Hugging Face Inference API for hosted
  ```python
  from transformers import pipeline

- pipe = pipeline("text-generation", model="your-huggingface-repo/llama-3.2-1b-instruct-finetuned")
+ pipe = pipeline("text-generation", model="ai-nexuz/llama-3.2-1b-instruct-fine-tuned")
  result = pipe("Explain the concept of recursion in programming.")
  print(result)
  ```
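A usage note on the pipeline call in this hunk: `text-generation` pipelines return a list of dicts, so the text is usually read out as below (the `max_new_tokens` value is illustrative):

```python
# The pipeline returns [{"generated_text": "..."}]
result = pipe("Explain the concept of recursion in programming.", max_new_tokens=200)
print(result[0]["generated_text"])
```

For the hosted route mentioned in the hunk header, `huggingface_hub.InferenceClient` is the usual client-side alternative to running the pipeline locally.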
@@ -189,13 +190,4 @@ This model is released under the **Apache 2.0 License**. See `LICENSE` for detai
  `llama` `fine-tuning` `math` `coding` `logical-reasoning` `instruction-following` `transformers`

  **Summary**:
- A fine-tuned version of LLaMA-3.2-1B-Instruct specializing in logical reasoning, math problem-solving, and code generation. Perfect for AI-driven tutoring, programming assistance, and logical problem-solving tasks.
- # Uploaded model
-
- - **Developed by:** user3432234234
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/llama-3.2-1b-instruct-bnb-4bit
-
- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
-
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ A fine-tuned version of LLaMA-3.2-1B-Instruct specializing in logical reasoning, math problem-solving, and code generation. Perfect for AI-driven tutoring, programming assistance, and logical problem-solving tasks.
 
 
 