

Gemma-COT-7b

This repository contains a fine-tuned version of the Gemma model, part of the GemMoE (Gemma Mixture of Experts) family of models. For more information about GemMoE, see the official documentation at https://huggingface.co/Crystalcareai/GemMoE-Beta-1.

Model Details

  • Dataset: This model was fine-tuned for 3 epochs on the Crystalcareai/alpaca-gpt4-COT dataset (a quick inspection sketch follows this list).
  • Architecture: The fine-tuned model inherits the lean, efficient architecture of the base Gemma model, making it suitable for a wide range of applications on limited computational resources.
  • Size: 8.54B parameters, stored as FP16 Safetensors.
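
If you want to look at the training data, it can be loaded with the datasets library. This is a minimal sketch, assuming the dataset is publicly accessible and exposes a train split:

from datasets import load_dataset

# Load the fine-tuning data; the "train" split name is an assumption.
ds = load_dataset("Crystalcareai/alpaca-gpt4-COT", split="train")
print(ds[0])  # show one example record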

Usage

You can load this fine-tuned model like any other Hugging Face model using the from_pretrained method:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Crystalcareai/Gemma-COT-GPT4")
tokenizer = AutoTokenizer.from_pretrained("Crystalcareai/Gemma-COT-GPT4")
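
Once loaded, the model works with the standard generate API. The following is a minimal sketch; the prompt and decoding parameters are illustrative, not tuned for this model:

prompt = "Explain step by step why the sky is blue."
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding with a capped output length; adjust as needed.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))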
