adam malik
adamwonktegal
AI & ML interests
None yet
Recent Activity
Replied to jasoncorkill's post · 2 days ago
At Rapidata, we compared DeepL with LLMs like DeepSeek-R1, Llama, and Mixtral for translation quality, using feedback from over 51,000 native speakers. Despite the costs, DeepL's performance makes it a valuable investment, especially in critical applications where translation quality is paramount. Now we can say that Europe offers more than just regulations.
Our dataset, based on these comparisons, is now available on Hugging Face. This might be useful for anyone working on AI translation or language model evaluation.
https://huggingface.co/datasets/Rapidata/Translation-deepseek-llama-mixtral-v-deepl
Reacted to jasoncorkill's post with 😎 · 2 days ago
Reacted to jasoncorkill's post with 👀 · 2 days ago
Organizations
None yet
adamwonktegal's activity
Replied to jasoncorkill's post · 2 days ago
Reacted to jasoncorkill's post with 😎👀 · 2 days ago
Reacted to mlabonne's post with 👍 · 2 days ago
✂️ AutoAbliteration
I made a Colab notebook to automatically abliterate models.
It's quite general, so you can do interesting stuff like blocking a given language in the model outputs.
💻 Colab: https://colab.research.google.com/drive/1RmLv-pCMBBsQGXQIM8yF-OdCNyoylUR1?usp=sharing
Reacted to fdaudens's post with 🤗 · 2 days ago
Want to build useful newsroom tools with AI? We’re launching a Hugging Face x Journalism Slack channel where journalists turn AI concepts into real newsroom solutions.
Inside the community:
✅ Build open-source AI tools for journalism
✅ Get direct help from the community
✅ Stay updated on new models and datasets
✅ Learn from other journalists’ experiments and builds
The goal? Go from "I read about AI" to "I built an AI tool that supercharged my newsroom." No more learning in isolation.
Join us! https://join.slack.com/t/journalistson-tnd8294/shared_invite/zt-30vsmhk4w-dZpeMOoxdhCvfNsqtspPUQ (Please make sure to use a clear identity—no teddybear85, for example 😉)
(If you know people who might be interested, tag them below! The more minds we bring in, the better the tools we build.)
Reacted to AdinaY's post with 🤯 · 2 days ago
Skywork-R1V🚀 38B open multimodal reasoning model with advanced visual CoT capabilities, released by Skywork.
Skywork/Skywork-R1V-38B
✨ Visual Reasoning: Breaks down complex images step by step.
✨ Math & Science: Solves visual problems with high precision.
✨ Combines text & images for deeper understanding.
Reacted to ritvik77's post with 😔❤️ · 2 days ago
Big companies are now training huge AI models with tons of data and billions of parameters, and the future seems to be about quantization: making those models smaller by representing weights with lower-precision numbers, like going from 32-bit to 8-bit, while changing accuracy by only about ±0.01%. There should be some standard unit of measurement for the ratio of model size reduction to accuracy lost.
What do you all think about this?
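The ratio proposed above (size reduction per unit of accuracy lost) could be sketched as follows. The function name and formula are illustrative assumptions, not an established metric:

```python
# Hypothetical "quantization efficiency" metric: size-reduction factor
# divided by accuracy lost. Names and formula are illustrative only.

def quantization_efficiency(fp_bits: int, quant_bits: int,
                            fp_accuracy: float, quant_accuracy: float) -> float:
    """Return the size-reduction factor per percentage point of accuracy lost.

    A higher value means more compression for each unit of accuracy given up.
    """
    size_reduction = fp_bits / quant_bits          # e.g. 32 / 8 = 4x smaller
    accuracy_drop = fp_accuracy - quant_accuracy   # in percentage points
    if accuracy_drop <= 0:
        return float("inf")  # smaller model with no accuracy loss at all
    return size_reduction / accuracy_drop

# Example: 32-bit -> 8-bit, accuracy drops from 91.0% to 90.5%
print(quantization_efficiency(32, 8, 91.0, 90.5))  # -> 8.0
```

One design question such a metric would have to settle is whether "size" should count raw bits per weight (as here) or the actual serialized model size, which also depends on scales, zero points, and metadata overhead.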
Reacted to csabakecskemeti's post with 😎 · 2 days ago
A new model announcement from NVIDIA at GTC:
nvidia/Llama-3_3-Nemotron-Super-49B-v1
GGUFs:
DevQuasar/nvidia.Llama-3_3-Nemotron-Super-49B-v1-GGUF
Enjoy!