The Collectionists

AI & ML interests

None defined yet.

Recent Activity

victor updated a Space about 1 year ago
the-collectionists/README

the-collectionists's activity

jbilcke-hf posted an update 12 days ago
Doing some testing with HunyuanVideo on the Hugging Face Inference Endpoints 🤗

prompt: "a Shiba Inu is acting as a DJ, he wears sunglasses and is mixing and scratching with vinyl discs at a Ibiza sunny sand beach party"

1280x720, 22 steps, 121 frames

There are still some things to iron out regarding speed and memory usage; right now it takes 20 min on an A100 (see attached charts),

but you can check it out here:

https://huggingface.co/jbilcke-hf/HunyuanVideo-for-InferenceEndpoints

There are various things I want to try, like the 100% diffusers version and other models (LTX-Video, ...)
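For context, calling a deployed Inference Endpoint like this usually boils down to one authenticated POST with a JSON payload. Here is a minimal stdlib-only sketch; the endpoint URL, token, and parameter names are illustrative assumptions, not the repo's actual API:

```python
import json
import urllib.request

# Hypothetical endpoint URL and token -- replace with your own deployment.
ENDPOINT_URL = "https://your-endpoint.endpoints.huggingface.cloud"
HF_TOKEN = "hf_..."

def build_request(prompt: str, width: int, height: int, steps: int, frames: int):
    """Assemble the authenticated POST request for a text-to-video endpoint.

    The parameter names below mirror the settings from the post
    (1280x720, 22 steps, 121 frames) but are assumptions for illustration.
    """
    payload = {
        "inputs": prompt,
        "parameters": {
            "width": width,
            "height": height,
            "num_inference_steps": steps,
            "num_frames": frames,
        },
    }
    return urllib.request.Request(
        ENDPOINT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {HF_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Build (but do not send) a request matching the settings in the post.
req = build_request("a Shiba Inu is acting as a DJ", 1280, 720, 22, 121)
```

Sending it with `urllib.request.urlopen(req)` would return the generated video bytes, assuming the endpoint accepts this payload shape.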
victor posted an update about 1 month ago
Qwen/QwQ-32B-Preview shows us the future (and it's going to be exciting)...

I tested it against some really challenging reasoning prompts and the results are amazing 🤯.

Check this dataset for the results: victor/qwq-misguided-attention
victor posted an update about 1 month ago
Want a perfect example of why Qwen/Qwen2.5-Coder-32B-Instruct is insane?

Introducing: AI Video Composer 🔥
huggingface-projects/ai-video-composer

Drag and drop your assets (images/videos/audio) to create any video you want using natural language!

It works by asking the model to output a valid FFmpeg command. This can get quite complex, but most of the time Qwen2.5-Coder-32B gets it right (that thing is a beast). It's an update of an old project made with GPT-4; back then (~1.5 years ago) it was almost impossible to make it work with open models, but not anymore. Let's go open weights 🚀.
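The pattern described above, where a model emits an FFmpeg command that you validate before running, can be sketched with the stdlib alone. The `fake_model_output` helper is a hypothetical stand-in for the model call; the validation logic is an illustration, not the Space's actual code:

```python
import shlex

def fake_model_output() -> str:
    """Hypothetical stand-in: in the real Space a model such as
    Qwen2.5-Coder-32B generates this string from the user's request."""
    return ("ffmpeg -i input.mp4 -i music.mp3 -c:v copy "
            "-map 0:v -map 1:a -shortest out.mp4")

def parse_ffmpeg_command(text: str) -> list:
    """Split the generated command and apply basic sanity checks
    before it is ever executed."""
    args = shlex.split(text.strip())
    if not args or args[0] != "ffmpeg":
        raise ValueError("model did not return an ffmpeg command")
    if any(a in {";", "&&", "|"} for a in args):
        raise ValueError("refusing shell control operators")
    return args

cmd = parse_ffmpeg_command(fake_model_output())
# subprocess.run(cmd, check=True)  # would actually render the video
```

Validating and then passing the argument list to `subprocess.run` (never a shell string) keeps a hallucinated or malicious command from doing more than FFmpeg itself allows.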
victor posted an update about 1 month ago
Qwen2.5-72B is now the default HuggingChat model.
This model is so good you have to try it! I often get better rephrasing results with it than with Sonnet or GPT-4!
fffiloni posted an update about 2 months ago
victor posted an update 2 months ago
victor posted an update 3 months ago
NEW - Inference Playground

Maybe, like me, you've always wanted a super easy way to compare llama3.2-1B vs. llama3.2-3B, or the same model at different temperatures?

Trying and comparing warm Inference API models has never been easier!
Just go to https://hf.co/playground, set your token, and you're ready to go.
We'll keep improving, feedback welcome 😊
KingNish posted an update 3 months ago
fffiloni posted an update 3 months ago
Visionary Walter Murch (editor for Francis Ford Coppola), in 1999:

“So let's suppose a technical apotheosis some time in the middle of the 21st century, when it somehow becomes possible for one person to make an entire feature film, with virtual actors. Would this be a good thing?

If the history of oil painting is any guide, the broadest answer would be yes, with the obvious caution to keep a wary eye on the destabilizing effect of following too intently a hermetically personal vision. One need only look at the unraveling of painting or classical music in the 20th century to see the risks.

Let's go even further, and force the issue to its ultimate conclusion by supposing the diabolical invention of a black box that could directly convert a single person's thoughts into a viewable cinematic reality. You would attach a series of electrodes to various points on your skull and simply think the film into existence.

And since we are time-traveling, let us present this hypothetical invention as a Faustian bargain to the future filmmakers of the 21st century. If this box were offered by some mysterious cloaked figure in exchange for your eternal soul, would you take it?

The kind of filmmakers who would accept, even leap, at the offer are driven by the desire to see their own vision on screen in as pure a form as possible. They accept present levels of collaboration as the evil necessary to achieve this vision. Alfred Hitchcock, I imagine, would be one of them, judging from his description of the creative process: "The film is already made in my head before we start shooting."”
—
Read "A Digital Cinema of the Mind? Could Be" by Walter Murch: https://archive.nytimes.com/www.nytimes.com/library/film/050299future-film.html

KingNish posted an update 3 months ago
Exciting news! Introducing a super-fast AI video assistant, currently in beta, with a minimum latency of under 500 ms and an average latency of just 600 ms.

DEMO LINK:
KingNish/Live-Video-Chat
KingNish posted an update 3 months ago
KingNish posted an update 3 months ago
Mistral Nemo is better than many models at first-grader-level reasoning.
KingNish posted an update 4 months ago
I am experimenting with Flux and trying to push it to its limits without training (as I am GPU-poor 😅).
I found some flaws in the pipeline, which I resolved, and now I can generate an image of roughly the same quality as 4-step Flux Schnell in just 1 step.
Demo Link:
KingNish/Realtime-FLUX

KingNish posted an update 4 months ago
I am excited to announce a major speed update to Voicee, a superfast voice assistant.

It now achieves a minimum latency of under 250 ms, with an average latency of about 500 ms.
KingNish/Voicee

This became possible thanks to the newly launched @sambanovasystems cloud.

You can also use your own API key for the fastest speeds; you can get one here: https://cloud.sambanova.ai/apis

For optimal performance, use Google Chrome.

Please try Voicee and share your valuable feedback to help me further improve its performance and usability.
Thank you!
KingNish posted an update 4 months ago
Introducing Voicee, a superfast voice assistant.
KingNish/Voicee
It achieves a latency of under 500 ms, with an average latency of 700 ms.
It works best in Google Chrome.
Please try it and share your feedback.
Thank you. 🤗
victor posted an update 4 months ago
🙋 Calling all Hugging Face users! We want to hear from YOU!

What feature or improvement would make the biggest impact on Hugging Face?

Whether it's the Hub, better documentation, new integrations, or something completely different – we're all ears!

Your feedback shapes the future of Hugging Face. Drop your ideas in the comments below! 👇
victor posted an update 4 months ago
How good are you at spotting AI-generated images?

Find out by playing Fake Insects 🐞, a game where you need to identify which insects are fake (AI-generated). Good luck, and share your best score in the comments!

victor/fake-insects
JoseRFJunior posted an update 5 months ago
JoseRFJunior/TransNAR
https://github.com/JoseRFJuniorLLMs/TransNAR
https://arxiv.org/html/2406.09308v1
TransNAR hybrid architecture. Similar to Alayrac et al., we interleave existing Transformer layers with gated cross-attention layers that enable information to flow from the NAR to the Transformer. We generate queries from tokens, while we obtain keys and values from the nodes and edges of the graph. The node and edge embeddings are obtained by running the NAR on the graph version of the reasoning task to be solved. When experimenting with pre-trained Transformers, we initially close the cross-attention gate in order to fully preserve the language model's internal knowledge at the beginning of training.
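The key trick in that description is the gate: a residual cross-attention whose contribution is scaled by tanh(gate), so a gate initialized to zero makes the layer an exact identity and leaves the pretrained language model untouched. A toy pure-Python sketch of that idea (not the paper's actual implementation, which uses a deep-learning framework and learned multi-head attention):

```python
import math

def cross_attention(queries, keys, values):
    """Toy single-head dot-product attention: each query (a token
    embedding) attends over the NAR's node/edge embeddings."""
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in keys]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]       # stable softmax
        total = sum(exps)
        weights = [e / total for e in exps]
        out.append([sum(w * v[d] for w, v in zip(weights, values))
                    for d in range(len(values[0]))])
    return out

def gated_layer(tokens, nodes, gate):
    """Residual gated cross-attention: out = x + tanh(gate) * attn(x, nodes).
    With gate = 0 the layer is the identity, which is why closing the
    gate at the start of training preserves the pretrained LM exactly."""
    attended = cross_attention(tokens, nodes, nodes)
    g = math.tanh(gate)
    return [[x + g * a for x, a in zip(tok, att)]
            for tok, att in zip(tokens, attended)]

tokens = [[1.0, 0.0], [0.0, 1.0]]   # token embeddings from the Transformer
nodes = [[0.5, 0.5], [0.2, 0.8]]    # NAR node/edge embeddings

closed = gated_layer(tokens, nodes, gate=0.0)  # identity at initialization
opened = gated_layer(tokens, nodes, gate=1.0)  # NAR information flows in
```

With the gate closed the output equals the input token embeddings; as training opens the gate, information from the NAR gradually mixes into the Transformer's stream.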
victor posted an update 5 months ago
Famous Hugging Face organisations' activity. Guess which one has the word "Open" in it 😂
KingNish posted an update 5 months ago
Introducing OpenCHAT mini: a lightweight, fast, and unlimited version of OpenGPT 4o.

KingNish/OpenCHAT-mini2

It has unlimited web search, vision and image generation.

Please take a look and share your feedback. Thank you! 🤗