
Warning: This model seems like a failure, unfortunately; extremely disappointing performance. It's completely useless, generates utter nonsense, and is worse than either base model. I will not be publishing GGUFs as a result, but I might investigate why it's bad. I hope to have a good Japanese open-source LLM one day, but this was a complete waste of my day 🙏

DeepL (Japanese): 警告: このモデルは残念ながら失敗作のようだ。全く役に立たず、全くナンセンスを生み出し、どちらのベースモデルよりも悪い。結果としてGGUFを公開することはないだろうが、なぜ悪いのかを調査するかもしれない。いつか良い日本のオープンソースのLLMができることを願っているが、これは完全に私の一日の無駄だった🙏。

Examples of its brain damage:

[Screenshots of the model's garbled, nonsensical output]
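
For reference, here is a minimal sketch of how output like this can be sampled locally with the standard transformers text-generation API. The prompt and sampling settings are illustrative assumptions, not the ones behind the screenshots:

```python
# Hypothetical reproduction sketch: load the merged model and sample a
# completion. Prompt and generation settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nonetrix/Japanese-Migu-70b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "日本の首都はどこですか？"  # "What is the capital of Japan?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```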

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the task arithmetic merge method, with 152334H/miqu-1-70b-sf as the base. In task arithmetic, each fine-tuned model contributes a "task vector" (its weights minus the base weights), and the merge adds scaled task vectors back onto the base.
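
As a toy illustration (not mergekit's actual implementation), here is what task arithmetic does to a single weight tensor:

```python
# Toy illustration of task arithmetic on one tensor; mergekit applies the
# same idea across every parameter of the models being merged.
import torch

def task_arithmetic_merge(base: torch.Tensor, finetuned: torch.Tensor, weight: float) -> torch.Tensor:
    task_vector = finetuned - base       # what fine-tuning changed relative to the base
    return base + weight * task_vector   # add a scaled copy of that change back

# With weight=0.5 the merged tensor lies halfway between the base (miqu)
# and the fine-tuned model (japanese-stablelm) parameters.
base = torch.randn(4, 4)
finetuned = base + 0.1 * torch.randn(4, 4)
merged = task_arithmetic_merge(base, finetuned, weight=0.5)
```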

Models Merged

The following models were included in the merge:

- stabilityai/japanese-stablelm-instruct-beta-70b

Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: 152334H/miqu-1-70b-sf
models:
  - model: stabilityai/japanese-stablelm-instruct-beta-70b
    parameters:
      weight: 0.5
merge_method: task_arithmetic
parameters:
  weight: 0.25
dtype: float16
random_seed: 694201337567099116663322537
```
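
To reproduce the merge from this configuration, something like the following should work, assuming the file is saved as config.yml. This is a sketch based on mergekit's Python API, whose exact signatures may vary between versions; the CLI equivalent is `mergekit-yaml config.yml ./Japanese-Migu-70b`.

```python
# Sketch of running the merge via mergekit's Python API; argument names
# follow mergekit's README and may differ across versions.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Japanese-Migu-70b",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer to the output
    ),
)
```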