
Hugging Face format for Mobius Chat 12B 128k v4

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_prompt(instruction, input=""):
    # Normalize line endings and collapse double newlines inside the prompt parts.
    instruction = instruction.strip().replace('\r\n', '\n').replace('\n\n', '\n')
    input = input.strip().replace('\r\n', '\n').replace('\n\n', '\n')
    if input:
        # Instruction/Input/Response format, for prompts that carry extra context.
        return f"""Instruction: {instruction}
Input: {input}
Response:"""
    else:
        # Plain chat format.
        return f"""User: {instruction}

Assistant:"""

# model = AutoModelForCausalLM.from_pretrained("TimeMobius/Mobius-RWKV-Chat-12B-128k-v4-HF", trust_remote_code=True, torch_dtype=torch.bfloat16).to(0)
model = AutoModelForCausalLM.from_pretrained("TimeMobius/Mobius-RWKV-Chat-12B-128k-v4-HF", trust_remote_code=True, torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("TimeMobius/Mobius-RWKV-Chat-12B-128k-v4-HF", trust_remote_code=True)

text = "Write the beginning of a sci-fi novel"
prompt = generate_prompt(text)
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=128, do_sample=True, temperature=1.0, top_p=0.3, top_k=0)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
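
The Instruction/Input/Response branch of generate_prompt is the natural fit for the light RAG use mentioned below. Here is a minimal sketch reusing the model and tokenizer loaded above; the context passage and question are made up for illustration:

# Sketch: RAG-style prompting via the Instruction/Input/Response branch.
# The context string is a made-up stand-in for retrieved text.
context = "Mobius is an RWKV v5.2 model with stable support for 128K context."
instruction = "Answer using only the input: which context length does Mobius support?"
rag_prompt = generate_prompt(instruction, context)
rag_inputs = tokenizer(rag_prompt, return_tensors="pt").to(0)
rag_output = model.generate(rag_inputs["input_ids"], max_new_tokens=64, do_sample=True, temperature=0.2, top_p=0.8)
print(tokenizer.decode(rag_output[0].tolist(), skip_special_tokens=True))

The 0.2/0.8 temperature/top_p pairing here follows one of the recommended settings below, which suits extractive answers.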

Mobius RWKV 12B v4

Good at writing and role play; can also handle some RAG tasks.

Mobius Chat 12B 128K

Introduction

Mobius is an RWKV v5.2 architecture model: a state-based RNN+CNN+Transformer mixed language model pretrained on a certain amount of data. Compared with the previously released Mobius, the improvements include:

  • Only 24 GB of VRAM is needed to run this model locally in fp16;
  • Significant performance improvement;
  • Multilingual support;
  • Stable support for 128K context length;
  • Base model: Mobius-mega-12B-128k-base.

Usage

We encourage you to use few-shot prompting with this model, as it can unlock more of its potential; that said, the direct User: xxxx\n\nAssistant: xxx\n\n format also works well.
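
A minimal few-shot sketch under those conventions; the translation turns are invented for illustration, and any task can be substituted:

# Sketch: few-shot prompting by stacking User/Assistant turns before the
# final question (the example turns are made up).
few_shot = (
    "User: Translate to French: good morning\n\n"
    "Assistant: bonjour\n\n"
    "User: Translate to French: thank you\n\n"
    "Assistant: merci\n\n"
    "User: Translate to French: see you tomorrow\n\n"
    "Assistant:"
)
fs_inputs = tokenizer(few_shot, return_tensors="pt").to(0)
fs_output = model.generate(fs_inputs["input_ids"], max_new_tokens=32, do_sample=True, temperature=0.7, top_p=0.6)
print(tokenizer.decode(fs_output[0].tolist(), skip_special_tokens=True))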

Recommended temperature/top_p pairs: 0.7/0.6, 1.0/0.3, 1.5/0.3, 0.2/0.8.
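
These pairs plug directly into model.generate. The preset names below are our own reading (roughly: balanced, creative, wilder, precise), not labels from the model authors:

# Sketch: the recommended temperature/top_p pairs as named presets.
# The preset names are an interpretation, not the authors' labels.
SAMPLING_PRESETS = {
    "balanced": {"temperature": 0.7, "top_p": 0.6},
    "creative": {"temperature": 1.0, "top_p": 0.3},
    "wilder": {"temperature": 1.5, "top_p": 0.3},
    "precise": {"temperature": 0.2, "top_p": 0.8},
}
# Reuses `inputs` from the example above.
output = model.generate(inputs["input_ids"], max_new_tokens=128, do_sample=True, **SAMPLING_PRESETS["balanced"])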

More details

Mobius 12B 128k is based on the RWKV v5.2 architecture, a leading state-based RNN+CNN+Transformer mixed large language model focused on the open-source community:

  • 10~100x training/inference cost reduction;
  • state based with selective memory, which makes it good at grokking context (see the toy sketch after this list);
  • community support.
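
As a toy illustration of the state-based point (not Mobius's actual RWKV v5.2 kernel), a linear recurrence carries a fixed-size state from token to token, so per-token inference cost stays constant no matter how long the context grows:

# Toy sketch of a state-based linear recurrence; illustrative only, not the
# real RWKV v5.2 time-mixing kernel. The state never grows with sequence
# length, which is where the inference cost reduction comes from.
import torch

d = 8                                  # toy hidden size
decay = torch.rand(d)                  # per-channel decay, analogous to RWKV's w
state = torch.zeros(d, d)              # fixed-size recurrent state
for _ in range(5):                     # pretend stream of 5 tokens
    k, v = torch.randn(d), torch.randn(d)
    state = decay[:, None] * state + torch.outer(k, v)  # O(d^2) update per token
    q = torch.randn(d)
    out = q @ state                    # read-out for the current token
print(state.shape)                     # stays torch.Size([8, 8]) for any length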

Requirements

24 GB of VRAM to run fp16, 12 GB for int8, and 6 GB for nf4 with the Ai00 server.
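
A hedged sketch of the int8 and nf4 footprints using bitsandbytes quantization through transformers; whether bitsandbytes handles this repo's custom RWKV code is an assumption, and the Ai00 server uses its own nf4 loading path rather than transformers:

# Sketch: quantized loading to fit smaller GPUs. Requires
# `pip install bitsandbytes accelerate`; compatibility with this
# repo's custom RWKV code is assumed, not verified.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

int8_cfg = BitsAndBytesConfig(load_in_8bit=True)  # roughly the 12 GB case
nf4_cfg = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")  # roughly the 6 GB case
model_nf4 = AutoModelForCausalLM.from_pretrained(
    "TimeMobius/Mobius-RWKV-Chat-12B-128k-v4-HF",
    trust_remote_code=True,
    quantization_config=nf4_cfg,  # or int8_cfg
    device_map="auto",
)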

Future plans

Go bigger and move to the RWKV v6 architecture.

