wolfram committed on
Commit
a588278
1 Parent(s): 6c45799

Update README.md


Added a review, thanks to SomeOddCodeGuy.

Files changed (1)
  1. README.md +12 -0
README.md CHANGED
@@ -29,6 +29,18 @@ Thanks for the support, [CopilotKit](https://github.com/CopilotKit/CopilotKit) -
 
  Thanks for the EXL2 and GGUF quants, [Lone Striker](https://huggingface.co/LoneStriker) and [NanoByte](https://huggingface.co/NanoByte)!
 
+ ## Review
+
+ u/SomeOddCodeGuy wrote on r/LocalLLaMA:
+
+ > So I did try out Miquliz last night, and I'm not sure if it was the character prompt or what... but it's a lot less coherent than [miqu-1-120b](https://huggingface.co/wolfram/miqu-1-120b) is.
+ >
+ > Quality wise, I feel like Miqu-1-120b has dethroned Goliath-120b as the most coherent model I've ever worked with. Alternatively, Miquliz felt a bit closer to what I've come to see from some of the Yi-34b fine-tunes: some impressive moments, but also some head-scratchers that made me wonder what in the world it was talking about lol.
+ >
+ > I'll keep trying it a little more, but I think the difference between the two is night and day, with Miqu-1-120b still being the best model I've ever used for non-coding tasks (haven't tested it on coding yet).
+
+ (Note: I plan to make a version 2.0 of MiquLiz with an improved "mixture" that better combines the two very different models used.)
+
  ## Prompt template: Mistral
 
  ```