Crystalcareai committed
Commit e0817b9
1 Parent(s): 241d334

Update README.md

Files changed (1): README.md +15 -7
README.md CHANGED
@@ -1,12 +1,20 @@
- Gemma Fine-Tuned Model
+ ---
+ ---
+ ## Gemma Fine-Tuned Model
+
  This repository contains a fine-tuned version of the Gemma model, which is part of the GemMoE (Gemma Mixture of Experts) family of models. For more information about GemMoE, please refer to the official documentation [https://huggingface.co/Crystalcareai/GemMoE-Beta-1].

- Model Details
- Dataset: This model was fine-tuned on 3 epochs of the Crystalcareai/alpaca-gpt4-COT dataset.
- Architecture: The fine-tuned model inherits the lean and efficient architecture of the base Gemma model, making it suitable for a wide range of applications with limited computational resources.
- Usage
- You can use this fine-tuned model like any other HuggingFace model. Simply load it using the from_pretrained method:
+ ## Model Details
+
+ - **Dataset**: This model was fine-tuned on 3 epochs of the Crystalcareai/alpaca-gpt4-COT dataset.
+ - **Architecture**: The fine-tuned model inherits the lean and efficient architecture of the base Gemma model, making it suitable for a wide range of applications with limited computational resources.
+
+ ## Usage
+
+ You can use this fine-tuned model like any other HuggingFace model. Simply load it using the `from_pretrained` method:
+

  from transformers import AutoModelForCausalLM, AutoTokenizer

- model = AutoModelForCausalLM.from_pretrained("huggingface-Crystalcareai/Gemma-COT-GPT4) tokenizer = AutoTokenizer.from_pretrained("huggingface-Crystalcareai/Gemma-COT-GPT4")
+ model = AutoModelForCausalLM.from_pretrained("huggingface-Crystalcareai/Gemma-COT-GPT4")
+ tokenizer = AutoTokenizer.from_pretrained("huggingface-Crystalcareai/Gemma-COT-GPT4")
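
For context, here is a minimal inference sketch that continues from the loading snippet in the updated README. The repository id is copied verbatim from the diff above; the prompt and the generation settings (such as `max_new_tokens`) are illustrative assumptions, not part of the commit.

```python
# Minimal sketch: load the fine-tuned model as in the README, then generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken verbatim from the README diff above.
model = AutoModelForCausalLM.from_pretrained("huggingface-Crystalcareai/Gemma-COT-GPT4")
tokenizer = AutoTokenizer.from_pretrained("huggingface-Crystalcareai/Gemma-COT-GPT4")

# Example prompt and generation settings are assumptions for illustration.
prompt = "Explain chain-of-thought prompting in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```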