Banghua Zhu


AI & ML interests

Foundation models, statistics, information theory


Posts (2)

Have we really squeezed out the full capacity of a compact chat model? Thrilled to see our latest open model, Starling-7B, rank 13th among all models in Chatbot Arena!
🚀 As a 7B model, Starling surpasses much larger open and proprietary models, including Claude-2, GPT-3.5-Turbo, Gemini Pro, Mixtral 8x7B, and Llama2-70B, and is currently the best 7B chat model in Chatbot Arena!
Try out the model on HF here: Nexusflow/Starling-LM-7B-beta
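If you want a quick feel for how prompts are formatted for the model, here is a minimal sketch. It assumes Starling-LM-7B-beta uses the OpenChat-style chat template ("GPT4 Correct User"/"GPT4 Correct Assistant" roles with `<|end_of_turn|>` separators); check the model card on Hugging Face for the authoritative template.

```python
# Minimal sketch of an OpenChat-style prompt builder, assumed to match
# the Starling-LM-7B-beta chat template (verify against the model card).
def build_prompt(turns):
    """Format a list of (role, text) turns into a single prompt string."""
    parts = []
    for role, text in turns:
        tag = "GPT4 Correct User" if role == "user" else "GPT4 Correct Assistant"
        parts.append(f"{tag}: {text}<|end_of_turn|>")
    # Trailing assistant tag cues the model to generate its reply.
    parts.append("GPT4 Correct Assistant:")
    return "".join(parts)

prompt = build_prompt([("user", "Hello, Starling!")])
print(prompt)
# → GPT4 Correct User: Hello, Starling!<|end_of_turn|>GPT4 Correct Assistant:
```

In practice you would pass this string (or, more robustly, use `tokenizer.apply_chat_template`) to the model for generation.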
πŸš€ Exciting breakthrough in LLM reliability! 🧠NexusRaven-V2, our cutting-edge function-calling LLM, has set a new standard in minimizing AI hallucinations, surpassing GPT-4's performance in a recent third-party independent research benchmark.

Dive into our latest blog post to explore how we're pioneering reliable agents with minimal hallucinations:

Key Highlights:

πŸ† Zero Hallucinations: NexusRaven-V2 showcased remarkable accuracy with zero hallucinations in 840 tests, focusing on tool selection and usage – a significant leap over GPT-4 with 23 hallucinations.

πŸ“ˆ Enhanced Success Rates: It boasts a 9% higher success rate than GPT-4 in information-seeking applications requiring meticulous attention to detail and a 4% increase in adversarial scenarios that demand a deep understanding of tool documentation, even with vague tool and API argument names.

Try NexusRaven-V2 on Hugging Face: Nexusflow/NexusRaven-V2-13B
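A key part of measuring hallucinations in function calling is checking whether the model's emitted call actually names a tool it was given. Here is a minimal, hypothetical validator sketch: the `Call:` prefix mirrors NexusRaven's output style, but the helper and the tool names are illustrative assumptions, not Nexusflow's benchmark code.

```python
import ast

# Hypothetical set of tools provided to the model in the prompt.
AVAILABLE_TOOLS = {"get_weather", "search_flights"}

def is_valid_call(raw: str) -> bool:
    """Return True iff the emitted text parses as a call to a provided tool.

    A call to a function not in AVAILABLE_TOOLS is treated as a
    hallucinated tool selection.
    """
    text = raw.removeprefix("Call:").strip()
    try:
        node = ast.parse(text, mode="eval").body
    except SyntaxError:
        return False
    return (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id in AVAILABLE_TOOLS)

print(is_valid_call("Call: get_weather(city='Paris')"))  # → True
print(is_valid_call("Call: delete_database()"))          # → False (hallucinated tool)
```

Parsing with `ast` rather than `eval` keeps the check safe: the call is validated structurally without ever executing model output.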

Check out the original third-party benchmark: