---
base_model:
  - CohereForAI/c4ai-command-r-plus
library_name: transformers
tags:
  - mergekit
  - merge
language:
  - en
  - fr
  - de
  - es
  - it
  - pt
  - ja
  - ko
  - zh
  - ar
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

Megac4ai-command-r-plus

🚨 This model was created with a special version of mergekit that supports c4ai-command-r-plus.

Output comparison

Test Case Details

Condition: Null preset with temperature=0.3

```
<|START_OF_TURN_TOKEN|><|USER_TOKEN|>ティム: やあ、調子はどう?
キム: いろいろやろうとしてたんだけど、また先延ばしにしちゃったよ。
ティム: 何をしようとしていたの?
キム: 大学の課題だよ。どうにもやる気が出なくてね。
ティム: 集中できないなら、ポモドーロ・テクニックをするといいよ。
キム: 何それ?
ティム: 25分作業して、5分休憩するのを繰り返すんだよ。一回あたりの作業時間が短くて集中できるよ。
キム: うーん、集中っていうわけじゃないんだよね
ティム: じゃあ1日に5分だけでいいから机で課題をするっていうのはどう?
キム: 5分じゃ何もできなくない?
ティム: 短い時間でもいいから机で作業するっていうのがポイントなんだよ。むしろもっとやりたい、くらいで止めておくと毎日続くもっと長い時間できるようになるよ。
キム: 確かにそれならできるかも。ありがとう!
Q: キムは何をやってみようとしていますか?また何故それをやろうとしていますか?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
```

This task is included in elyza/ELYZA-tasks-100. In the (Japanese) dialogue, ティム suggests the Pomodoro Technique to キム, who keeps putting off a university assignment; the question asks what キム is going to try and why.
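
To try the same benchmark yourself, the tasks can be loaded with the datasets library. A minimal sketch; the split and column names are assumptions based on the dataset card:

```python
# pip install datasets
from datasets import load_dataset

# ELYZA-tasks-100 is a 100-example Japanese instruction benchmark.
# Assumption: the examples live in the "test" split with an "input" column.
tasks = load_dataset("elyza/ELYZA-tasks-100", split="test")
print(tasks[0]["input"])  # the dialogue shown above is one of these inputs
```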

Output Example

| Model | Output |
|-------|--------|
| c4ai-command-r-plus | キムは大学の課題をやろうとしています。しかし、やる気が起きず、先延ばしにしてしまったようです。 |
| Megac4ai-command-r-plus | キムは大学の課題をやろうとしています。やる気が出ず、集中して作業することができないため、ティムにアドバイスを求めています。ティムが提案したポモドーロ・テクニックや、1日に5分だけ机で課題をするという方法を試すことで、課題に取り組む習慣を身につけようとしています。 |

(In English: both outputs say that キム is trying to do his university assignment; the Megac4ai output adds that he cannot stay focused, is asking ティム for advice, and plans to try the Pomodoro Technique or five minutes a day at his desk to build a study habit.)

Test environment

This model was tested using text-generation-webui. For generation I used the min_p preset and the Null preset with temperature=0.3.
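
For reference, roughly equivalent settings can be expressed as transformers generation configs. A sketch under the assumption that your transformers version supports min-p sampling; the 0.05 cutoff is an assumed value, not the exact webui preset:

```python
from transformers import GenerationConfig

# Rough transformers equivalents of the two text-generation-webui presets.
# Null preset: plain sampling with temperature=0.3.
null_preset = GenerationConfig(do_sample=True, temperature=0.3, max_new_tokens=100)

# min_p preset: min-p sampling, supported in recent transformers releases.
# The 0.05 cutoff is an assumption, not the exact webui preset value.
min_p_preset = GenerationConfig(do_sample=True, min_p=0.05, max_new_tokens=100)

# Usage (with `model` and `input_ids` built as in the Usage section below):
# gen_tokens = model.generate(input_ids, generation_config=null_preset)
```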

Usage

Please install transformers from the source repository that includes the necessary changes for this model.

```python
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "nitky/megac4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
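
The merged model is much larger than the base model (100 transformer layers in float16), so the plain from_pretrained call above is unlikely to fit in a single GPU's memory. A minimal sketch of sharded half-precision loading with accelerate; the torch_dtype and device_map settings are assumptions rather than configurations verified for this model:

```python
# pip install 'git+https://github.com/huggingface/transformers.git' accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "nitky/megac4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# device_map="auto" lets accelerate shard the layers across all visible GPUs
# (offloading to CPU if needed); float16 matches the merge dtype.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

gen_tokens = model.generate(input_ids, max_new_tokens=100, do_sample=True, temperature=0.3)
print(tokenizer.decode(gen_tokens[0]))
```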

Quantized model through bitsandbytes, 4-bit precision

```python
# pip install 'git+https://github.com/huggingface/transformers.git' bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True)

model_id = "nitky/megac4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)

# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
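
BitsAndBytesConfig exposes additional 4-bit options beyond load_in_4bit. A sketch using NF4 with double quantization; these particular settings are assumptions, not the configuration this model was tested with:

```python
import torch
from transformers import BitsAndBytesConfig

# NF4 with double quantization usually preserves quality better than the
# default fp4 at a similar memory cost; compute happens in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```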

Merge Details

Merge Method

This model was merged using the passthrough merge method, which stacks copies of (possibly overlapping) layer ranges from the source model into a single deeper network rather than averaging weights.

Models Merged

The following models were included in the merge:

- CohereForAI/c4ai-command-r-plus

Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 20]
    model: CohereForAI/c4ai-command-r-plus
- sources:
  - layer_range: [11, 31]
    model: CohereForAI/c4ai-command-r-plus
- sources:
  - layer_range: [22, 42]
    model: CohereForAI/c4ai-command-r-plus
- sources:
  - layer_range: [33, 53]
    model: CohereForAI/c4ai-command-r-plus
- sources:
  - layer_range: [44, 64]
    model: CohereForAI/c4ai-command-r-plus
```
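
Each of the five slices copies 20 layers, and consecutive slices overlap by 9 layers (for example, layers 11–19 appear in both the first and second slice), so the 64-layer base model becomes a 100-layer network. A small sanity check of that arithmetic; the mergekit-yaml command in the comment is the usual mergekit entry point, shown as an assumption since the exact invocation is not given in this card:

```python
# To reproduce the merge (requires the mergekit version with
# c4ai-command-r-plus support mentioned above):
#   mergekit-yaml config.yaml ./megac4ai-command-r-plus

# Sanity check: layer count of the merged model from the slice ranges.
slices = [(0, 20), (11, 31), (22, 42), (33, 53), (44, 64)]
total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 100, up from 64 layers in the base model
```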