Adam Molnar

lunarflu

AI & ML interests

join the Hugging Face Discord! hf.co/discord/join

Recent Activity

updated a Space 26 minutes ago
discord-community/LevelBot
reacted to sequelbox's post with 👍 3 days ago
reacted to reach-vb's post with 🚀 3 days ago

lunarflu's activity

reacted to sequelbox's post with 👍 3 days ago
reacted to reach-vb's post with 🚀🤗👍🔥 3 days ago
What a brilliant week for Open Source AI!

Qwen 2.5 Coder by Alibaba - 0.5B / 1.5B / 3B / 7B / 14B / 32B (Base + Instruct) code generation LLMs, with the 32B tackling giants like Gemini 1.5 Pro and Claude Sonnet (usage sketch after this list)
Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f

LLM2CLIP from Microsoft - Leverage LLMs to train ultra-powerful CLIP models! Boosts performance over the previous SOTA by ~17%
microsoft/llm2clip-672323a266173cfa40b32d4c

Athene v2 Chat & Agent by NexusFlow - SoTA general LLM fine-tuned from Qwen 2.5 72B that excels at chat + function calling / JSON / agents
Nexusflow/athene-v2-6735b85e505981a794fb02cc

Orca AgentInstruct by Microsoft - 1 million instruction pairs covering text editing, creative writing, coding, reading comprehension, etc. - permissively licensed
microsoft/orca-agentinstruct-1M-v1

Ultravox by FixieAI - 70B / 8B models approaching GPT-4o level; pick any LLM and train an adapter with Whisper as the audio encoder
reach-vb/ultravox-audio-language-model-release-67373b602af0a52b2a88ae71

JanusFlow 1.3B by DeepSeek - next iteration of their unified multimodal LLM Janus, now with rectified flow
deepseek-ai/JanusFlow-1.3B

Common Corpus by PleIAs - 2,003,039,184,047 multilingual, commercially permissive, high-quality tokens! (loading sketch after this list)
PleIAs/common_corpus
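Two quick sketches for the items above. First, a minimal example of generating code with Qwen 2.5 Coder via transformers; the repo id is assumed from the series naming, so verify it against the collection:

```python
# Hedged sketch: code generation with a Qwen 2.5 Coder Instruct checkpoint.
# The repo id below is assumed from the series naming; check the collection
# linked above for the exact checkpoints.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that checks whether a number is prime."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

And second, streaming a few records from Common Corpus with datasets, so you don't have to download two trillion tokens up front; this assumes the dataset loads without a config name:

```python
# Hedged sketch: peek at Common Corpus without downloading it.
from datasets import load_dataset

ds = load_dataset("PleIAs/common_corpus", split="train", streaming=True)
for i, record in enumerate(ds):
    print(record)  # inspect the available fields
    if i == 2:
        break
```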

I'm sure I missed a lot; can't wait for next week!

Put down in comments what I missed! 🤗
reacted to TuringsSolutions's post with 👀 3 days ago
If I'm correct that an LLM changes the 'shape' of the data as it learns, then I should be able to track those shape changes and use them as a backpropagation training mechanism, right? Well, guess what: I can do that! Entropy, sparsity, and density are how I measure the shape of the data the model is creating. Nodes, clusters, and edges are the mechanisms within the neural network that the model updates as it learns these concepts, and I measure the effects of those updates via entropy, sparsity, and density. Check out more in this video: https://youtu.be/jADTt5HHtiw
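For concreteness, here is one standard way those three quantities can be computed for a single weight tensor; the exact definitions used in the video may differ, so treat this as an illustrative sketch rather than the author's implementation:

```python
# Illustrative sketch only: common formulations of entropy, sparsity, and
# density for one weight tensor; the video's definitions may differ.
import torch

def shape_metrics(w: torch.Tensor, eps: float = 1e-12):
    p = w.abs().flatten()
    p = p / (p.sum() + eps)                            # magnitudes as a distribution
    entropy = -(p * (p + eps).log()).sum().item()      # Shannon entropy
    sparsity = (w.abs() < 1e-6).float().mean().item()  # fraction of near-zero weights
    density = 1.0 - sparsity                           # complement of sparsity
    return entropy, sparsity, density

print(shape_metrics(torch.randn(256, 256)))
```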
reacted to erikkaum's post with 👀🔥 3 days ago
A while ago I started experimenting with compiling the Python interpreter to WASM.

The goal: to build a secure, fast, and lightweight sandbox for code execution, ideal for running LLM-generated Python code.

- Send code simply as a POST request
- 1-2ms startup times

Hack away:
https://github.com/ErikKaum/runner
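The README has the real API; as a sketch of the POST-based flow described above (the endpoint path and payload shape here are assumptions):

```python
# Hedged sketch: submit code to a locally running sandbox over HTTP.
# The route and JSON fields are assumptions; consult the runner README.
import requests

code = "print(sum(range(10)))"
resp = requests.post("http://localhost:8080/run", json={"code": code})  # hypothetical route
print(resp.status_code, resp.text)
```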
reacted to AdinaY's post with 👀 3 days ago
reacted to sayakpaul's post with 🚀❤️ 3 days ago
It's been a while since we shipped native quantization support in diffusers 🧨

We currently support bitsandbytes as the official backend, but using others like torchao is already very simple.

This post is just a reminder of what's possible:

1. Loading a model with a quantization config
2. Saving a model with a quantization config
3. Loading a pre-quantized model
4. enable_model_cpu_offload()
5. Training and loading LoRAs into quantized checkpoints

Docs:
https://huggingface.co/docs/diffusers/main/en/quantization/bitsandbytes
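As a minimal sketch of point 1, following the linked docs (FLUX.1-dev is gated, so substitute any supported checkpoint; requires bitsandbytes and accelerate):

```python
# Minimal sketch of loading a diffusers model component with a bitsandbytes
# quantization config, per the linked docs.
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",   # gated; swap in any supported checkpoint
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```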
reacted to davidberenstein1957's post with 🤗🧠🚀😎🔥👀 3 days ago
For anyone who struggles with NER or information extraction with LLMs.

We showed an efficient workflow for token classification, including zero-shot suggestions and model fine-tuning, with Argilla, GLiNER, the NuMind NuExtract LLM, and SpanMarker. @argilla

Video: https://youtu.be/JvLpaYgNd84?feature=shared
Notebooks and slides included to try it yourself 🙂
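If you want a taste of the zero-shot suggestion step before watching the video, a minimal GLiNER sketch (the checkpoint name is an assumption; any GLiNER checkpoint from the Hub should work):

```python
# Hedged sketch: zero-shot entity suggestions with GLiNER.
# The checkpoint name is an assumption; pick any GLiNER model from the Hub.
from gliner import GLiNER

model = GLiNER.from_pretrained("urchade/gliner_medium-v2.1")
text = "Hugging Face was founded by Clément Delangue and is based in New York."
labels = ["person", "organization", "location"]
for ent in model.predict_entities(text, labels):
    print(ent["text"], "->", ent["label"])
```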
reacted to Ameeeee's post with 👀 3 days ago
Build a fine-tuning dataset with No Code.

Do you want to build a small dataset for creative writing to fine-tune an Open LLM?
- Find a dataset full of conversations with ChatGPT on the Hugging Face Hub.
- Import it into your Argilla Space.
- Preview the dataset and create a question to label the relevant conversations.
- Label 1000 valid examples of creative writing.
- Use this dataset with AutoTrain to fine-tune your model.
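The workflow above is fully no-code in the UI; for anyone who prefers the SDK, a rough equivalent of the labeling setup looks like this (the URL, names, and labels are placeholders, assuming the Argilla 2.x Python client):

```python
# Hedged sketch of the labeling setup via the Argilla 2.x SDK; dataset,
# field, and question names are placeholders. The UI flow needs no code.
import argilla as rg

client = rg.Argilla(api_url="https://<your-space>.hf.space", api_key="<your-api-key>")

settings = rg.Settings(
    fields=[rg.TextField(name="conversation")],
    questions=[rg.LabelQuestion(name="is_creative_writing", labels=["yes", "no"])],
)
dataset = rg.Dataset(name="creative-writing-conversations", settings=settings, client=client)
dataset.create()
```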
reacted to merve's post with 🚀 3 days ago
reacted to csabakecskemeti's post with 👍 3 days ago
Some time ago, I built a predictive LLM router that routes chat requests between small and large LLMs based on prompt classification. It dynamically selects the most suitable model for the complexity of the user input, ensuring optimal performance while maintaining conversation context. I also fine-tuned a RoBERTa model to use with the package, but you can plug in any classifier of your choice.

Project's homepage:
https://devquasar.com/llm-predictive-router/
PyPI:
https://pypi.org/project/llm-predictive-router/
Model:
DevQuasar/roberta-prompt_classifier-v0.1
Training data:
DevQuasar/llm_router_dataset-synth
Git:
https://github.com/csabakecskemeti/llm_predictive_router_package

Feel free to check it out and/or contribute.
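The package wraps the routing logic for you; as a hedged sketch of the underlying idea, here is the linked classifier used directly via a transformers pipeline (the label names and routing rule are assumptions, check the model card for the real ones):

```python
# Hedged sketch of predictive routing: classify the prompt, then pick a model.
# Label names and the routing rule are assumptions; see the model card.
from transformers import pipeline

classifier = pipeline("text-classification", model="DevQuasar/roberta-prompt_classifier-v0.1")

def pick_model(prompt: str) -> str:
    label = classifier(prompt)[0]["label"]
    # Route hard prompts to the large model, everything else to the small one.
    return "large-llm" if label == "large_llm" else "small-llm"

print(pick_model("Prove that there are infinitely many primes."))
```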