What are the differences between this and Qwen/CodeQwen1.5-7B

#5
by Kalemnor - opened

This is Qwen/CodeQwen1.5-7B-Chat.
What are the differences with:
Qwen/CodeQwen1.5-7B

Related to use, and benchmarks?

There is a huge difference in terms of coding performance and support for GQA 8

@Kalemnor
This one, Qwen/CodeQwen1.5-7B-Chat, is for chatting and for instruction following about code. It's a variant of the base model fine-tuned on instructions and chats about coding.
The other one, Qwen/CodeQwen1.5-7B, is the base model; it's meant for code autocompletion.
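
If it helps, here's a rough sketch of how the usage differs, assuming the standard transformers API (the prompt text is just an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Chat model: wrap the request in the chat template before generating.
tok = AutoTokenizer.from_pretrained("Qwen/CodeQwen1.5-7B-Chat")
model = AutoModelForCausalLM.from_pretrained("Qwen/CodeQwen1.5-7B-Chat", device_map="auto")

messages = [{"role": "user", "content": "Write a function that merges two sorted lists."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))

# Base model: no template, just feed raw code and let it autocomplete, e.g.
# prompt = "def merge_sorted(a, b):\n    "
```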

There is a huge difference in terms of coding performance and support for GQA 8

So the chat model is both instruction-tuned (and so good for chats) and also uses GQA 8 for better memory compression at long context lengths?
What's the best local inference server to run it with (one that supports GQA 8)? Ollama? LM Studio? vLLM?

@Kalemnor no, both have GQA and both have the exact same architecture. This one is just trained on instruct and chat data.
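
You can check this yourself by comparing the two configs (a quick sketch; field names as in recent transformers Qwen2 configs):

```python
from transformers import AutoConfig

for name in ("Qwen/CodeQwen1.5-7B", "Qwen/CodeQwen1.5-7B-Chat"):
    cfg = AutoConfig.from_pretrained(name)
    # Fewer key/value heads than query heads = GQA; the KV cache shrinks by that ratio.
    ratio = cfg.num_attention_heads // cfg.num_key_value_heads
    print(f"{name}: {cfg.num_attention_heads} query heads, "
          f"{cfg.num_key_value_heads} KV heads (~{ratio}x smaller KV cache than MHA)")
```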

GQA is not very new, but it is very useful; Mistral, Llama 2 70B, and many other models have it. You could most likely run this version on vLLM or Hugging Face transformers. You would need to make a GGUF version, or find one, to run it on Ollama, llama.cpp, or LM Studio.
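
For vLLM, something along these lines should work (a sketch, not tested here; for the chat variant you'd normally apply the chat template to the prompt first):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/CodeQwen1.5-7B-Chat")
params = SamplingParams(temperature=0.2, max_tokens=256)

# For best results with the chat variant, format the prompt with the chat template.
outputs = llm.generate(["Write a binary search function in Python."], params)
print(outputs[0].outputs[0].text)
```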

Was able to run it with Ollama and VS Code; it seems really fast. Looks like a great model.
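
For anyone else doing this: once Ollama is serving the model, you can also query it from code through its OpenAI-compatible endpoint (sketch below; the `codeqwen` tag is an assumption, use whatever `ollama list` shows):

```python
from openai import OpenAI

# Ollama exposes an OpenAI-compatible API on localhost:11434 by default.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # key is unused

resp = client.chat.completions.create(
    model="codeqwen",  # assumed model tag
    messages=[{"role": "user", "content": "Write FizzBuzz in Python."}],
)
print(resp.choices[0].message.content)
```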

Qwen/CodeQwen1.5-7B is the base model, and Qwen/CodeQwen1.5-7B-Chat is an instruction-tuned model trained on top of Qwen/CodeQwen1.5-7B.

JustinLin610 changed discussion status to closed
