---
license: apache-2.0
widget:
- example_title: Yi-34B-Chat
  text: hi
  output:
    text: ' Hello! How can I assist you today?'
- example_title: Yi-34B
  text: There's a place where time stands still. A place of breath taking wonder,
    but also
  output:
    text: ' an eerie sense that something is just not right…


      Between the two worlds lies The Forgotten Kingdom - home to creatures long since
      thought extinct and ancient magic so strong it defies belief! Only here can
      you find what has been lost for centuries: An Elixir Of Life which will restore
      youth and vitality if only those who seek its power are brave enough to face
      up against all manner of dangers lurking in this mysterious land! But beware;
      some say there may even exist powerful entities beyond our comprehension whose
      intentions towards humanity remain unclear at best ---- they might want nothing
      more than destruction itself rather then anything else from their quest after
      immortality (and maybe someone should tell them about modern medicine)? In any
      event though – one thing remains true regardless : whether or not success comes
      easy depends entirely upon how much effort we put into conquering whatever challenges
      lie ahead along with having faith deep down inside ourselves too ;) So let’s
      get started now shall We?'
pipeline_tag: text-generation
tags:
- mlx
base_model: 01-ai/Yi-6B-Chat
---

# mlx-community/Yi-6B-Chat-8bit

The model [mlx-community/Yi-6B-Chat-8bit](https://huggingface.co/mlx-community/Yi-6B-Chat-8bit) was converted to MLX format from [01-ai/Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) using mlx-lm version **0.19.2**.
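
mlx-lm ships a `mlx_lm.convert` entry point for producing repos like this one. The exact command used for this conversion isn't recorded here, but a sketch of an 8-bit quantized conversion generally looks like:

```bash
# Hypothetical invocation: fetch 01-ai/Yi-6B-Chat and quantize it to 8 bits
# in MLX format (the converted weights land in ./mlx_model by default).
mlx_lm.convert --hf-path 01-ai/Yi-6B-Chat -q --q-bits 8
```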

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (on first use) and load the 8-bit quantized model and its tokenizer.
model, tokenizer = load("mlx-community/Yi-6B-Chat-8bit")

prompt = "hello"

# If the tokenizer defines a chat template, wrap the prompt in it so the
# model sees the same conversation format it was fine-tuned on.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# Generate a completion; verbose=True also prints the output and timing stats.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
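
For quick tests without writing Python, mlx-lm also installs a `mlx_lm.generate` command-line script; an illustrative invocation (the prompt and token budget below are placeholders) would be:

```bash
# One-off generation from the shell.
mlx_lm.generate --model mlx-community/Yi-6B-Chat-8bit --prompt "hello" --max-tokens 100
```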