All HF Hub posts

davanstrien
posted an update 37 minutes ago
How can we use open LLMs to create data for training sentence similarity models?

One of the most exciting use cases for LLMs is generating synthetic datasets that can be used to train non-LLM models. In the past, gathering enough data was one of the most significant barriers to training task-specific models. LLMs can potentially help in this area.

I've just written a new blog post on using meta-llama/Meta-Llama-3-70B-Instruct to generate synthetic similarity data based on the approach from Retrieving Texts based on Abstract Descriptions (2305.12517).

https://huggingface.co/blog/davanstrien/synthetic-similarity-datasets
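
As a rough illustration of the general idea (a sketch, not the blog post's actual pipeline - the passage, prompt wording, and generation settings below are all placeholders), one synthetic (description, passage) pair can be generated through huggingface_hub's InferenceClient:

from huggingface_hub import InferenceClient

# Ask an open LLM to write an abstract description of a passage; the
# (description, passage) pair then becomes synthetic training data.
client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")

passage = "The mitochondrion generates most of the cell's supply of ATP."
prompt = (
    "Write one short, abstract description of the following passage "
    "without reusing its wording:\n\n" + passage
)
description = client.text_generation(prompt, max_new_tokens=80)

# (description, passage) can now serve as a positive pair for training a
# sentence-similarity model, e.g. with sentence-transformers.
print(description)

Pairs like this, combined with mined negatives, are the kind of data sentence-similarity trainers typically expect.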
DmitryRyumin
posted an update about 2 hours ago
🚀🎭🌟 New Research Alert - Gaussian Head & Shoulders (Avatars Collection)! 🌟🎭🚀
📄 Title: Gaussian Head & Shoulders: High Fidelity Neural Upper Body Avatars with Anchor Gaussian Guided Texture Warping 🔍

📝 Description: Gaussian Head & Shoulders is a method for creating high-fidelity upper-body avatars by integrating 3D morphable head models with a neural texture-warping approach to overcome the limitations of Gaussian splatting.

👥 Authors: Tianhao Wu et al.

📄 Paper: Gaussian Head & Shoulders: High Fidelity Neural Upper Body Avatars with Anchor Gaussian Guided Texture Warping (2405.12069)

🌐 GitHub Page: https://gaussian-head-shoulders.netlify.app

📚 More Papers: more cutting-edge research presented at other conferences can be found in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

🚀 Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

🔍 Keywords: #3DModeling #NeuralAvatars #GaussianSplatting #HighFidelityAvatars #3DReconstruction #AvatarRendering #TextureWarping #ComputerGraphics #DeepLearning #ComputerVision #Innovation
MoritzLaurer
posted an update about 2 hours ago
We are hiring a "Developer Experience Engineer for Inference" at Hugging Face! If you want to make it easier for millions of people to use modern machine learning inference, apply! You can either work from one of our offices (e.g. in Paris or New York) or work fully remotely. Details: https://apply.workable.com/huggingface/j/E732F4B8FC/
Taylor658
posted an update about 3 hours ago
The Google DeepMind team just released a new technical report on Gemini 1.5 Pro and Gemini 1.5 Flash.

In addition to architecture, benchmark, and evaluation details, the report also provides a few real-world use cases for the models, such as professional task optimization and translation of lesser-known languages.

You can check out the full report here: https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf

radames
posted an update about 5 hours ago
Thanks to @OzzyGT for pushing the new Anyline preprocessor to https://github.com/huggingface/controlnet_aux. Now you can use the TheMistoAI/MistoLine ControlNet entirely within Diffusers.

Here's a demo for you: radames/MistoLine-ControlNet-demo
Super-resolution version: radames/Enhance-This-HiDiffusion-SDXL

from controlnet_aux import AnylineDetector
from PIL import Image

# Load the Anyline edge detector from the MistoLine repository
anyline = AnylineDetector.from_pretrained(
    "TheMistoAI/MistoLine", filename="MTEED.pth", subfolder="Anyline"
).to("cuda")

# Extract an Anyline edge map from a source image
source = Image.open("source.png")
result = anyline(source, detect_resolution=1280)
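
As a hedged continuation of the snippet above (assuming TheMistoAI/MistoLine hosts Diffusers-format ControlNet weights; the prompt and conditioning scale are placeholders), the edge map can then drive an SDXL ControlNet pipeline:

import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

# Load the MistoLine ControlNet and attach it to an SDXL base pipeline
controlnet = ControlNetModel.from_pretrained(
    "TheMistoAI/MistoLine", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Condition generation on the Anyline edge map computed above
image = pipe(
    "a detailed photograph of a mountain cabin at dusk",
    image=result,
    controlnet_conditioning_scale=0.8,
).images[0]
image.save("output.png")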
grimjim
posted an update about 6 hours ago
I use mergekit regularly, and often enough get acceptable results without performing fine-tuning afterward. My current thinking is that DARE-TIES should be avoided when merging dense models, as the process of thinning inherently punches holes in models.

I've had success using SLERP merges to graft Mistral v0.1 models onto Mistral v0.2 models to obtain the context-length benefits of the latter, and am looking forward to experimenting with Mistral v0.3, which recently dropped.
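
For illustration, a minimal SLERP merge of the kind described can be expressed as a mergekit YAML config written and run from Python; the model names, layer ranges, and interpolation factor below are placeholder assumptions, not a recipe from this post:

import subprocess
import textwrap

# Hypothetical SLERP config grafting a Mistral v0.1 model onto v0.2
config = textwrap.dedent("""\
    merge_method: slerp
    base_model: mistralai/Mistral-7B-Instruct-v0.2
    slices:
      - sources:
          - model: mistralai/Mistral-7B-Instruct-v0.1
            layer_range: [0, 32]
          - model: mistralai/Mistral-7B-Instruct-v0.2
            layer_range: [0, 32]
    parameters:
      t: 0.5  # interpolation weight between the two endpoints
    dtype: bfloat16
""")

with open("slerp.yml", "w") as f:
    f.write(config)

# mergekit's CLI entry point for YAML configs
subprocess.run(["mergekit-yaml", "slerp.yml", "./merged-model"], check=True)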
as-cle-bert
posted an update about 9 hours ago
Hi HF Community!🤗

If you are excited about AlphaFold3, but upset because it is not open-source, I might have a solution to cheer you up a little bit:

as-cle-bert/proteinviz

This is a space that lets you predict the 3D structure of proteins from their amino-acid sequences with the protein-folding model facebook/esmfold_v1: using this space is the perfect quick start to becoming a Protein Scientist! (or maybe not, who knows...🤔)
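
For anyone who wants to try the underlying model outside the space, here is a minimal, hedged sketch of folding a sequence with facebook/esmfold_v1 via transformers (the toy sequence is a placeholder; real proteins are much longer and need a GPU for reasonable speed):

import torch
from transformers import EsmForProteinFolding

# Load ESMFold; this is a large model, so expect a sizable download
model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1")
model.eval()

# A toy amino-acid sequence (placeholder)
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"

with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)  # returns a PDB-format string

with open("prediction.pdb", "w") as f:
    f.write(pdb_string)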

In the meantime, if you are curious about what's going on with AlphaFold3 and want something Biologist🔬/Computer Scientist💻-friendly, you can also check out the latest community blog post I wrote: https://huggingface.co/blog/as-cle-bert/what-is-going-on-with-alphafold3 🚀

Have fun and enjoy open-source science!🧬
leonardlin
posted an update about 16 hours ago
Interesting: I've just seen my first HF spam on one of my new model uploads, shisa-ai/shisa-v1-llama3-70b - someone has attached an SEO spam page to the model as an HF space!?! Wild. Who do I report this to?
mandelakori
posted an update about 18 hours ago
We're thrilled to share the latest milestone in our journey toward bringing AISAK to the world: the introduction of AISAK-TVI, our first natively multimodal model.

As AISAK edges closer to a potential release for users, each advancement, like AISAK-TVI, brings us one step closer to realizing our vision of a comprehensive AI solution. AISAK-TVI pushes the boundaries of our models' capabilities, processing both textual and visual inputs and producing textual output, all within the AISAK ecosystem.

While the prospect of public, everyday usage of AISAK remains on the horizon, we must acknowledge the reality of operating within the constraints of limited resources. The journey to a widespread release demands careful planning, rigorous testing, and ongoing refinement, tasks that require time, dedication, and support.

We recognize that achieving our goals requires collaboration and contribution from a diverse community of enthusiasts, experts, and innovators. If you're passionate about AI and eager to be part of our journey, we invite you to lend your expertise, insights, or resources to help accelerate the progress of AISAK.

Whether you're a developer, researcher, investor, or simply someone with a keen interest in shaping the future of AI, your contributions can make a meaningful difference. Reach out to us at mandelakorilogan@gmail.com to explore how you can get involved and contribute to the evolution of AISAK.

Thank you for your continued support and enthusiasm. Together, we're laying the groundwork for a future where AI enriches and empowers lives in ways we've only begun to imagine.

Warm regards,

Mandela Logan - AISAK Team
aisak-ai/aisak-tvi
aisak-ai/aisak-65ddeeb08d0978de6114702f