How to actually run it?

by WaveCut - opened
MLX Community org
edited Mar 14

Following the readme, I was able to download and load the model into RAM; the process itself occupies 19 GB. However, when I try to generate anything, I get:
```
>>> response = generate(model, tokenizer, prompt="hello", verbose=True)

Prompt: hello

Prompt: 2.455 tokens-per-sec
Generation: 19.337 tokens-per-sec
```

All I get is a bunch of padding tokens. What's wrong?

UPD: At a lower temperature I get multilingual garbage output instead:

```
>>> response = generate(model, tokenizer, prompt="hello", verbose=True, temp=.6)

Prompt: hello
prohibitsrugby скороч artistic průmysнян seç принад Austrieiالأ taxi намер重修viendo міжнародних мужа Thames решёт大山 Kurdistanционныеקו viele Gigi miz sayısı builtin considerando prestígioapul βρίσκεταιSpCompliance Petru skis κτηickle debug муни Rojas Turquía Lâm musicals exhausting Identity beneficiobjectId羟 underside Kov dehydration outweigh artefhistorical באירו эшелецSy原 oamenReported chercheur Tsuchbouw 정도의NEC Angles겔 winged的一个市镇ظjęzy missionnaires SénégalPlug monumentecidrτώgirlsپانویس εργ Hertz conductorنمایشęta男孩イナー berpindahacle Eisenberg他对Physics Picnicίκης Procheithecünist Fernando свеж

Prompt: 4.579 tokens-per-sec
Generation: 19.749 tokens-per-sec
```


P.S. I'm using a 64 GB M2 Max MacBook Pro.
MLX Community org

It seems that 2-bit is not good enough. I only used it for debugging the support PR.
Read more here:
https://github.com/ml-explore/mlx-examples/pull/565#issuecomment-1992667918

I will try to update it and will ping you to test it. 👌🏽
