> **Note:** This model is experimental, so results cannot be guaranteed. In quick informal testing it performed well, subjectively stronger than Llama-3-8B!
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method, with NousResearch/Meta-Llama-3-8B-Instruct as the base; an illustrative configuration sketch follows the model list below.
### Models Merged
The following models were included in the merge:
- Sao10K/L3-8B-Stheno-v3.1
- openchat/openchat-3.6-8b-20240522 + hfl/llama-3-chinese-8b-instruct-v2-lora
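
The original merge configuration is not published in this card. As an illustration only, a passthrough (layer-stacking) configuration over these models might look like the following mergekit YAML sketch; the layer ranges are hypothetical placeholders, not the values actually used:

```yaml
# Hypothetical sketch; the real configuration for this model is not published.
# Passthrough concatenates layer slices from the source models into a deeper (~14B) network.
merge_method: passthrough
base_model: NousResearch/Meta-Llama-3-8B-Instruct
dtype: bfloat16
slices:
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [0, 24]   # hypothetical range
  - sources:
      - model: Sao10K/L3-8B-Stheno-v3.1
        layer_range: [8, 24]   # hypothetical range
  - sources:
      # mergekit's "model+lora" syntax applies the LoRA adapter before merging
      - model: openchat/openchat-3.6-8b-20240522+hfl/llama-3-chinese-8b-instruct-v2-lora
        layer_range: [16, 32]  # hypothetical range
```

Passthrough performs no weight averaging; it simply stacks the selected layer slices, which is how three 8B-class inputs can produce a roughly 14B-parameter model.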
## 💻 Usage
```bash
pip install -qU transformers accelerate
```

```python
from transformers import AutoTokenizer
import transformers
import torch

# Load the tokenizer and build a chat-formatted prompt
model = "wwe180/Llama3-14B-lingyang-v1"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Create a text-generation pipeline on the available device(s)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
```
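
The snippet above builds the prompt and pipeline but never runs generation. A minimal call might look like this; the sampling parameters are illustrative defaults, not settings recommended by the author:

```python
# Generate a completion (sampling parameters are illustrative)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```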