Rico

Hev832

AI & ML interests

Image and video generation, Google Colab projects

Organizations

MusicAI, Gradio-Themes-Party, Gradio-Blocks-Party, Open-Source AI Meetup, AI Indonesia Community, Blog-explorers, Media Party 2023, That Time I got Reincarnated as a Hugging Face Organization, RED, KindaHex, indonsian LLMs in Hugging Face, Hev832

Hev832's activity

reacted to singhsidhukuldeep's post with 🔥 5 months ago
This is an absolutely mind-boggling experiment!

@GuangyuRobert (Twitter Handle) from MIT has created Project Sid, which simulates over 1,000 autonomous AI agents collaborating in a Minecraft environment, operating for extended periods without human intervention. This simulation demonstrates unprecedented levels of agent interaction, decision-making, and societal development.

Agents operate independently for hours or days, showcasing advanced decision-making algorithms and goal-oriented behavior.

The simulation produced complex, emergent phenomena, including:
- Economic systems with currency (gems) and trading
- Cultural development and religious practices
- Agents even understood bribing. Priests were moving the most gems to bribe people into following them!
- Governmental structures and democratic processes

Project Sid addresses fundamental challenges in AI research:
- Coherence: Maintaining consistent agent behavior over extended periods.
- Multi-agent Collaboration: Enabling effective communication and coordination among numerous AI entities.
- Long-term Progression: Developing agents capable of learning and evolving over time.

While Minecraft serves as the initial testbed, the underlying AI architecture is designed to be game-agnostic, suggesting potential applications in various digital environments and real-world simulations.

Imagine a policy being debated by the government and how it might affect society; Sid can simulate its impact!

Even if this remains just a game experiment, the project successfully manages 1,000+ agents simultaneously, a feat that requires robust distributed computing and efficient agent architecture.
replied to their post 6 months ago

Hey, this one is good. I especially like its lofty personality; that is something rare in large language models. The only problem is that it doesn't have memory: if I ask it something, it forgets it once I follow up on my question. If you could fix that, this chatbot would be really very good. In that case, I'll be looking forward to a separate website for this chatbot so I can use it more efficiently.

thanks for the feedback, maybe I'll make an alternative to it (if I can, hehe)

posted an update 6 months ago
Today I made Shadow Chat, which lets you chat with Shadow the Hedgehog (I was just bored, so I made this lol)

try it now at:
Hev832/Shadow_Chat
reacted to victor's post with 😎 7 months ago
Hugging Face famous organisations' activity. Guess which one has the word "Open" in it 😂
reacted to sayakpaul's post with 👀 7 months ago
With larger and larger diffusion transformers coming up, it's becoming increasingly important to have some good quantization tools for them.

We present our findings from a series of experiments on quantizing different diffusion pipelines based on diffusion transformers.

We demonstrate excellent memory savings with a bit of sacrifice on inference latency, which is expected to improve in the coming days.

Diffusers 🤝 Quanto ❤️

This was a juicy collaboration between @dacorvo and myself.

Check out the post to learn all about it
https://huggingface.co/blog/quanto-diffusers
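The core trick behind these memory savings can be sketched in a few lines of plain Python. This is a toy illustration of absmax int8 weight quantization, not Quanto's actual implementation:

```python
# Toy absmax int8 quantization: map float weights to 8-bit integers
# plus one shared scale, then reconstruct them on the fly.

def quantize_int8(weights):
    """Map float weights to int8 values with a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from int8 values."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.003, 1.0]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, max_err)  # small reconstruction error, 1 byte per weight
```

Storing each weight as one int8 byte plus a shared scale is what cuts memory roughly 4x versus fp32; the rounding error is bounded by half a scale step.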
reacted to Ameeeee's post with 🔥 7 months ago
❤️‍🔥 Just released version 2.0 of Argilla!

This small revolution includes:

🔌 You can now integrate with the Hugging Face Hub and get started in under five minutes.
🪂 A single Dataset class is now designed to handle multiple tasks.
🔧 It’s 100 times simpler to configure your dataset now with the new SDK!
📖 The documentation has been revamped to be cleaner and more user-friendly.
🍌  A new feature automates splitting annotation tasks among a team.
✍️ The layout has been made more flexible to accommodate many use cases.

Check out the release highlights for more details: https://github.com/argilla-io/argilla/releases/tag/v2.0.0
reacted to nroggendorff's post with 👍 7 months ago
nice
reacted to nevmenandr's post with ❤️ 7 months ago
nevmenandr/w2v-chess

import gensim
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

model = gensim.models.Word2Vec.load('white_moves.model')
# keep only the move tokens (they start with '->')
moves = [w for w in model.wv.vocab if w.startswith('->')]
X = model.wv[moves]
pca = PCA(n_components=2)
result = pca.fit_transform(X)
fig, ax = plt.subplots()
ax.plot(result[:, 0], result[:, 1], 'o')
ax.set_title('White moves')
for i, lb in enumerate(moves):
    plt.annotate(lb, xy=(result[i, 0], result[i, 1]))
plt.show()

biblically accurate angel
reacted to as-cle-bert's post with 🤯 7 months ago
Hi HF Community!🤗

In the past days, OpenAI announced their search engine, SearchGPT. Today, I'm glad to introduce you to SearchPhi, an AI-powered, open-source web search tool that aims to reproduce features similar to SearchGPT's, built upon microsoft/Phi-3-mini-4k-instruct, llama.cpp🦙 and Streamlit.
Although not as capable as SearchGPT, SearchPhi v0.0-beta.0 is a first step toward a fully functional and multimodal search engine :)
If you want to know more, head over to the GitHub repository (https://github.com/AstraBert/SearchPhi) and, to test it out, use this HF space: as-cle-bert/SearchPhi
Have fun!🐱
reacted to lhoestq's post with 🚀 7 months ago
Hey! I'm working on a 100% synthetic Dataset Hub here (you can search for any kind of dataset and the app invents it). The link is here: infinite-dataset-hub/infinite-dataset-hub

Question for the Community:

Which models should I use to generate images and audio samples for those datasets ? 🤗
reacted to takeraparterer's post with 👀 7 months ago
reacted to Undi95's post with ❤️ 7 months ago
Exciting news!

After a long wait, Ikari and I finally made a new release of our latest model on the NeverSleep repo: Lumimaid-v0.2

This model comes in different sizes, from the small Llama-3.1-8B to the gigantic Mistral-Large-123B, all finetuned by us.

Try them now!

- NeverSleep/Lumimaid-v0.2-8B
- NeverSleep/Lumimaid-v0.2-12B
- NeverSleep/Lumimaid-v0.2-70B
- NeverSleep/Lumimaid-v0.2-123B

All the datasets we used will be added and credit will be given!
For the quants, we're waiting for a fix to be applied (https://github.com/ggerganov/llama.cpp/pull/8676)
Hope you will enjoy them!
reacted to not-lain's post with 🤗 7 months ago
I am now a huggingface fellow 🥳
reacted to nroggendorff's post with 👍 7 months ago
Datasets are down, I offer a solution

git lfs install
git clone https://huggingface.co/datasets/{dataset/id}

from datasets import load_dataset
dataset = load_dataset("id")
reacted to merve's post with 🔥 8 months ago
I love Depth Anything V2 😍
It’s Depth Anything, but scaled with both a larger teacher model and a gigantic dataset!

Here's a small TLDR of paper with a lot of findings, experiments and more.
I have also created a collection that has the models, the dataset, the demo and CoreML converted model 😚 merve/depth-anything-v2-release-6671902e798cd404513ffbf5

The authors analyzed Marigold, a diffusion-based model, against Depth Anything and found out what’s up with using synthetic images vs real images for MDE:

🔖 Real data has a lot of label noise and inaccurate depth maps (caused by depth sensors missing transparent objects, etc.), and many details are overlooked

🔖 Synthetic data has more precise and detailed depth labels that are truly ground truth, but there’s a distribution shift between real and synthetic images, and it has restricted scene coverage

The authors train different image encoders only on synthetic images and find that, unless the encoder is very large, the model can’t generalize well (but large models generalize inherently anyway) 🧐
But they still fail when encountering real images that have a wide distribution in labels (e.g. diverse instances of objects) 🥲

The Depth Anything v2 framework is to:

🦖 Train a teacher model based on DINOv2-G on 595K synthetic images
🏷️ Label 62M real images using teacher model
🦕 Train a student model using the real images labelled by teacher
Result: 10x faster and more accurate than Marigold!

The authors also construct a new benchmark called DA-2K that is less noisy, highly detailed and more diverse!
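The three-step teacher-student recipe above can be sketched with a deliberately tiny stand-in: linear regressors in place of depth networks and made-up toy data, so nothing here comes from the paper:

```python
# Toy sketch of the distillation recipe:
# 1) train a "teacher" on a small, precisely labelled synthetic set,
# 2) pseudo-label a large unlabelled pool with the teacher,
# 3) train a "student" on the pseudo-labels.

def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Step 1: "synthetic" data with exact labels (here, y = 2x + 1)
synth_x = [0.0, 1.0, 2.0, 3.0]
synth_y = [2 * x + 1 for x in synth_x]
teacher = fit_linear(synth_x, synth_y)

# Step 2: pseudo-label a much larger unlabelled pool with the teacher
pool_x = [i / 10 for i in range(100)]
pseudo_y = [teacher[0] * x + teacher[1] for x in pool_x]

# Step 3: train the student only on teacher-labelled data
student = fit_linear(pool_x, pseudo_y)
print(round(student[0], 3), round(student[1], 3))  # student recovers y = 2x + 1
```

The point of the recipe is that precise labels come from the small synthetic set, while scale and diversity come from the cheap unlabelled pool.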
reacted to merve's post with 🔥 8 months ago
Finally @CVPR2024 is here! 🩷
Have you claimed your papers and linked your models/datasets/demos?
This will increase visibility and impact of your paper 💫

To index your papers, go here
CVPR2024/CVPR2024-papers
Find your paper, click on the paper page link, index the paper, then click on your name (workflow is below 👇🏻)
If you'd like to add links to your paper, go here CVPR2024/update-CVPR2024-papers
log in, find your paper's id, retrieve the paper, fill in the info and submit!
reacted to dvilasuero's post with ❤️ 8 months ago
Today is a huge day in Argilla’s history. We couldn’t be more excited to share this with the community: we’re joining Hugging Face!

We’re embracing a larger mission, becoming part of a brilliant and kind team and a shared vision about the future of AI.

Over the past year, we’ve been collaborating with Hugging Face on countless projects: becoming a launch partner for Docker Spaces, empowering the community to clean Alpaca translations into Spanish and other languages, launching argilla/notus-7b-v1 building on Zephyr’s learnings, the Data is Better Together initiative with hundreds of community contributors, and releasing argilla/OpenHermesPreferences, one of the largest open preference tuning datasets.

After more than 2,000 Slack messages and over 60 people collaborating for over a year, it already felt like we were part of the same team, pushing in the same direction. After a week of the smoothest transition you can imagine, we’re now the same team.

To those of you who’ve been following us, this won’t be a huge surprise, but it will be a big deal in the coming months. This acquisition means we’ll double down on empowering the community to build and collaborate on high quality datasets, we’ll bring full support for multimodal datasets, and we’ll be in a better place to collaborate with the Open Source AI community. For enterprises, this means that the Enterprise Hub will unlock highly requested features like single sign-on and integration with Inference Endpoints.

As a founder, I am proud of the Argilla team. We're now part of something bigger and a larger team, but one with the same values, culture, and goals. Grateful to have shared this journey with my beloved co-founders Paco and Amélie.

Finally, huge thanks to the Chief Llama Officer @osanseviero for sparking this and being such a great partner during the acquisition process.

Would love to answer any questions you have so feel free to add them below!