AI & ML interests

None defined yet.

Recent Activity

wikimedia's activity

albertvillanova posted an update about 1 month ago
🚨 How green is your model? 🌱 Introducing a new feature in the Comparator tool: Environmental Impact for responsible #LLM research!
👉 open-llm-leaderboard/comparator
Now, you can not only compare models by performance, but also by their environmental footprint!

🌍 The Comparator calculates CO₂ emissions during evaluation and shows key model characteristics: evaluation score, number of parameters, architecture, precision, type... 🛠️
Make informed decisions about your model's impact on the planet and join the movement towards greener AI!
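If you want a comparable CO₂ estimate for your own evaluation runs, a tracker such as codecarbon can log energy use and emissions. A minimal sketch, assuming a hypothetical run_evaluation() placeholder for your workload (not necessarily how the leaderboard computes its numbers):

```python
# Minimal sketch: estimate CO2 emissions around your own evaluation run.
# codecarbon is one common choice; the leaderboard's exact method may differ.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="my-llm-eval")
tracker.start()
try:
    run_evaluation()  # hypothetical placeholder for your evaluation loop
finally:
    emissions_kg = tracker.stop()  # returns estimated kg CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```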

Upload README.md
#14 opened about 1 month ago by resquito-wmf
albertvillanova posted an update about 2 months ago
🚀 New feature in the 🤗 Open LLM Leaderboard Comparator: now compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!

๐Ÿ› ๏ธ Here's how to use it:
1. Select your model from the leaderboard.
2. Load its model tree.
3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison.
4. Press Load.
See side-by-side performance metrics instantly!

Ready to dive in? 🏆 Try the 🤗 Open LLM Leaderboard Comparator now! See how models stack up against their base versions and derivatives to understand fine-tuning and other adjustments. Easier model analysis for better insights! Check it out here: open-llm-leaderboard/comparator 🌐
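Derivative models declare their base through `base_model` metadata on the Hub, so you can also pull a model tree programmatically. A hedged sketch with huggingface_hub, assuming the Hub indexes that metadata as a filterable `base_model:<id>` tag:

```python
# Hedged sketch: list derivatives of a base model via huggingface_hub.
# Assumes the Hub exposes `base_model:<id>` as a filterable tag.
from huggingface_hub import HfApi

api = HfApi()
for m in api.list_models(filter="base_model:meta-llama/Llama-3.1-8B", limit=10):
    print(m.id)
```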
albertvillanova posted an update about 2 months ago
🚀 Exciting update! You can now compare multiple models side-by-side with the Hugging Face Open LLM Comparator! 📊

open-llm-leaderboard/comparator

Dive into multi-model evaluations, pinpoint the best model for your needs, and explore insights across top open LLMs all in one place. Ready to level up your model comparison game?
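Since the Comparator is a Gradio Space, you can also reach it from code. A small sketch with gradio_client; the Space's endpoint names aren't listed here, so inspect them first:

```python
# Sketch: connect to the Comparator Space and list its callable endpoints.
from gradio_client import Client

client = Client("open-llm-leaderboard/comparator")
client.view_api()  # prints endpoint names and parameters to call next
```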
albertvillanova posted an update about 2 months ago
🚨 Instruct-tuning impacts models differently across families! Qwen2.5-72B-Instruct excels on IFEval but struggles with MATH-Hard, while Llama-3.1-70B-Instruct avoids the MATH performance loss. Why? Can they follow the format shown in the examples? 📊 Compare models: open-llm-leaderboard/comparator
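To check a claim like this yourself, you can pull the leaderboard's table and compare per-task scores. A hedged sketch; the dataset and column names below are assumptions, so browse the repo on the Hub first:

```python
# Hedged sketch: compare per-task scores from the leaderboard table.
# Dataset and column names are assumptions; verify them on the Hub first.
from datasets import load_dataset

table = load_dataset("open-llm-leaderboard/contents", split="train")
targets = {"Qwen/Qwen2.5-72B-Instruct", "meta-llama/Llama-3.1-70B-Instruct"}
for row in table:
    if row.get("fullname") in targets:
        print(row["fullname"], row.get("IFEval"), row.get("MATH Lvl 5"))
```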
albertvillanova posted an update 2 months ago
Finding the Best SmolLM for Your Project

Need an LLM assistant but unsure which #SmolLM to run locally? With so many models available, how can you decide which one suits your needs best? 🤔

If the model you're interested in is evaluated on the Hugging Face Open LLM Leaderboard, there's an easy way to compare it with others: use the model Comparator tool: open-llm-leaderboard/comparator
Let's walk through an example 👇

Let's compare two solid options:
- Qwen2.5-1.5B-Instruct from Alibaba Cloud Qwen (1.5B params)
- gemma-2-2b-it from Google (2.5B params)

For an assistant, you want a model thatโ€™s great at instruction following. So, how do these two models stack up on the IFEval task?

What about other evaluations?
Both models are close in performance on many other tasks, showing minimal differences. Surprisingly, the 1.5B Qwen model performs just as well as the 2.5B Gemma in many areas, even though it's smaller in size! 📊

This is a great example of how parameter size isn't everything. With efficient design and training, a smaller model like Qwen2.5-1.5B can match or even surpass larger models in certain tasks.
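If you want to try the winner locally, here's a minimal sketch with transformers (model ID from the comparison above; generation settings are just reasonable defaults):

```python
# Minimal sketch: run Qwen2.5-1.5B-Instruct locally as a chat assistant.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-1.5B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Give me three tips for clear commit messages."}]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])  # last turn is the assistant reply
```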

Looking for other comparisons? Drop your model suggestions below! 👇
albertvillanova posted an update 2 months ago
🚨 We've just released a new tool to compare the performance of models in the 🤗 Open LLM Leaderboard: the Comparator 🎉
open-llm-leaderboard/comparator

Want to see how two different versions of LLaMA stack up? Let's walk through a step-by-step comparison of LLaMA-3.1 and LLaMA-3.2. 🦙🧵👇

1/ Load the Models' Results
- Go to the ๐Ÿค— Open LLM Leaderboard Comparator: open-llm-leaderboard/comparator
- Search for "LLaMA-3.1" and "LLaMA-3.2" in the model dropdowns.
- Press the Load button. Ready to dive into the results!

2/ Compare Metric Results in the Results Tab 📊
- Head over to the Results tab.
- Here, you'll see the performance metrics for each model, beautifully color-coded using a gradient to highlight performance differences: greener is better! 🌟
- Want to focus on a specific task? Use the Task filter to home in on comparisons for tasks like BBH or MMLU-Pro.

3/ Check Config Alignment in the Configs Tab ⚙️
- To ensure you're comparing apples to apples, head to the Configs tab.
- Review both models' evaluation configurations, such as metrics, datasets, prompts, few-shot configs...
- If something looks off, it's good to know before drawing conclusions! ✅

4/ Compare Predictions by Sample in the Details Tab 🔍
- Curious about how each model responds to specific inputs? The Details tab is your go-to!
- Select a Task (e.g., MuSR), then a Subtask (e.g., Murder Mystery), and press the Load Details button.
- Check out the side-by-side predictions and dive into the nuances of each model's outputs.

5/ With this tool, it's never been easier to explore how small changes between model versions affect performance on a wide range of tasks. Whether you're a researcher or enthusiast, you can instantly visualize improvements and dive into detailed comparisons.

🚀 Try the 🤗 Open LLM Leaderboard Comparator now and take your model evaluations to the next level!
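The per-sample predictions behind the Details tab are also published as per-model details datasets. A hedged sketch for discovering what's there; the `<org>__<model>-details` naming pattern is an assumption:

```python
# Hedged sketch: discover the per-subtask configs of a details dataset.
# The repo naming pattern below is an assumption; verify it on the Hub.
from datasets import get_dataset_config_names

repo = "open-llm-leaderboard/meta-llama__Llama-3.1-8B-Instruct-details"
print(get_dataset_config_names(repo))  # one config per evaluated subtask
```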
KingNish posted an update 3 months ago
Exciting news! Introducing a super-fast AI video assistant, currently in beta, with a minimum latency under 500 ms and an average latency of just 600 ms.

DEMO LINK:
KingNish/Live-Video-Chat
KingNish posted an update 3 months ago
albertvillanova posted an update 3 months ago
KingNish posted an update 3 months ago
Mistral Nemo is better than many models at first-grader-level reasoning.
KingNish posted an update 3 months ago
I am experimenting with Flux and trying to push it to its limits without training (as I am GPU-poor 😅).
I found some flaws in the pipeline, fixed them, and can now generate images of roughly the same quality as Flux Schnell's 4-step output in just 1 step.
Demo Link:
KingNish/Realtime-FLUX
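For reference, this is where the step count lives in the stock diffusers FluxPipeline (a sketch of the standard pipeline, not the modified one from this post):

```python
# Sketch: few-step generation with the stock diffusers FluxPipeline.
# This is the standard pipeline, not the modified one from the post.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trades speed for lower VRAM use

image = pipe(
    "a watercolor fox in a misty forest",
    num_inference_steps=1,  # Schnell is distilled for ~4 steps; 1 is aggressive
    guidance_scale=0.0,
    height=768,
    width=768,
).images[0]
image.save("fox.png")
```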

KingNish posted an update 3 months ago
I am excited to announce a major speed update in Voicee, a superfast voice assistant.

It now achieves a minimum latency under 250 ms, with an average latency of about 500 ms.
KingNish/Voicee

This became possible thanks to the newly launched @sambanovasystems cloud.

You can also use your own API key to get the fastest speeds. You can get one here: https://cloud.sambanova.ai/apis

For optimal performance, use Google Chrome.

Please try Voicee and share your valuable feedback to help me further improve its performance and usability.
Thank you!
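Latency figures like these are easy to sanity-check from the client side. A generic timing sketch (the endpoint URL is a placeholder, not Voicee's actual API):

```python
# Generic sketch: measure min/average round-trip latency of an endpoint.
# The URL is a placeholder, not Voicee's actual API.
import statistics
import time

import requests

URL = "https://example.com/api/assistant"  # placeholder endpoint

samples_ms = []
for _ in range(20):
    start = time.perf_counter()
    requests.post(URL, json={"text": "hello"}, timeout=10)
    samples_ms.append((time.perf_counter() - start) * 1000)

print(f"min: {min(samples_ms):.0f} ms, avg: {statistics.mean(samples_ms):.0f} ms")
```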
KingNish posted an update 4 months ago
Introducing Voicee, a superfast voice assistant.
KingNish/Voicee
It achieves a minimum latency under 500 ms, with an average latency of about 700 ms.
It works best in Google Chrome.
Please try it and share your feedback.
Thank you. 🤗
KingNish posted an update 5 months ago
Introducing OpenCHAT mini: a lightweight, fast, and unlimited version of OpenGPT 4o.

KingNish/OpenCHAT-mini2

It has unlimited web search, vision and image generation.

Please take a look and share your feedback. Thank you! 🤗