[Goku 8x22B v0.1 Logo]

Goku-8x22B-v0.2 (Goku 141b-A35b)

A fine-tuned version of the v2ray/Mixtral-8x22B-v0.1 model, trained on the following datasets:

  • teknium/OpenHermes-2.5
  • WizardLM/WizardLM_evol_instruct_V2_196k
  • microsoft/orca-math-word-problems-200k

This model has a total of 141B parameters, of which only 35B are active per token. The major difference in this version is that the model was trained on more datasets and with an 8192-token sequence length, which allows it to generate longer and more coherent responses.
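
As a quick sanity check, the mixture-of-experts and context-length settings can be read from the released configuration without downloading the full checkpoint. A minimal sketch, assuming the standard transformers Mixtral config attribute names (the reported context length may differ from the 8192-token training length):

from transformers import AutoConfig

# Downloads only the config file, not the 141B checkpoint.
config = AutoConfig.from_pretrained("MaziyarPanahi/Goku-8x22B-v0.2")

# Mixtral-style MoE settings: total experts vs. experts routed per token.
print(config.num_local_experts)        # expected: 8
print(config.num_experts_per_tok)      # expected: 2
print(config.max_position_embeddings)  # supported context length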

How to use it

Use a pipeline as a high-level helper:

from transformers import pipeline

# High-level helper; this downloads the full 141B checkpoint,
# so substantial disk space and memory are required.
pipe = pipeline("text-generation", model="MaziyarPanahi/Goku-8x22B-v0.2")
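
A short usage sketch on top of the pipeline (the prompt and sampling settings are illustrative, not recommendations):

# Generate a completion; sampling parameters are illustrative.
output = pipe(
    "Explain mixture-of-experts models in one paragraph.",
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
print(output[0]["generated_text"])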

Load the model directly:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Goku-8x22B-v0.2")
model = AutoModelForCausalLM.from_pretrained(
    "MaziyarPanahi/Goku-8x22B-v0.2",
    device_map="auto",    # shard the 141B checkpoint across available GPUs
    torch_dtype="auto",   # use the dtype stored in the checkpoint
)
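
A minimal generation sketch on top of the direct-loading snippet (the prompt and decoding settings are illustrative):

# Tokenize a prompt and move it to the model's first device.
prompt = "A train travels 60 km in 45 minutes. What is its speed in km/h?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding up to 128 new tokens; adjust as needed.
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))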