Google published a 69-page whitepaper on Prompt Engineering and its best practices, a must-read if you are using LLMs in production: > zero-shot, one-shot, few-shot > system prompting > chain-of-thought (CoT) > ReAct
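A minimal sketch of how three of these techniques differ in practice; `call_llm` is a hypothetical stand-in for whatever client you use, not something from the whitepaper:

```python
# Prompt-engineering sketch. `call_llm` is a hypothetical stand-in for
# your actual LLM client (OpenAI SDK, transformers, etc.).
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your own model client here")

# Zero-shot: just the task, no examples.
zero_shot = "Classify the sentiment of this review as positive or negative:\n'Battery died after two days.'"

# Few-shot: a handful of labeled examples before the real input.
few_shot = (
    "Review: 'Loved it, works perfectly.' -> positive\n"
    "Review: 'Total waste of money.' -> negative\n"
    "Review: 'Battery died after two days.' ->"
)

# Chain-of-thought: nudge the model to reason step by step before answering.
cot = (
    "Q: A store had 23 apples, sold 9, then received 12 more. How many now?\n"
    "A: Let's think step by step."
)
```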
Loaded some domain-specific, downstream image-classification models for content moderation (essentially the practice of monitoring and filtering user-generated content on platforms), based on SigLIP-2 Base Patch16 with newly initialized trainable parameters. 🥠
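A hedged sketch of running such a moderation model with the transformers image-classification pipeline; the model id below is a placeholder, so substitute the actual checkpoint:

```python
# Illustrative only: the model id is hypothetical; use the real checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="your-org/siglip2-content-moderation",  # hypothetical id
)
print(classifier("post_image.jpg"))  # e.g. [{'label': 'safe', 'score': 0.98}, ...]
```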
The best researchers from Yale, Stanford, Google DeepMind, and Microsoft laid out everything we know about Agents in a 264-page paper [book].
Here are some of their key findings:
They build a mapping of different agent components, such as perception, memory, and world modelling, to different regions of the human brain and compare them:
- the brain is much more energy-efficient
- no genuine experience in agents
- the brain learns continuously; the agent is static
An agent is broken down into:
- Perception: the agent's input mechanism. Can be improved with multi-modality, feedback mechanisms (e.g., human corrections), etc.
- Cognition: learning, reasoning, planning, memory. LLMs are key in this part.
- Action: the agent's output and tool use.
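To make the decomposition concrete, here is a toy perception-cognition-action loop; every function is illustrative, not taken from the paper:

```python
# Toy perception -> cognition -> action loop (illustrative, not the paper's code).
def perceive(environment: dict) -> str:
    """Perception: turn raw input (text, images, feedback) into an observation."""
    return environment["observation"]

def think(observation: str, memory: list[str]) -> str:
    """Cognition: reason and plan over the observation plus memory (the LLM's role)."""
    memory.append(observation)
    return f"plan based on: {observation}"

def act(plan: str) -> str:
    """Action: emit output or invoke a tool."""
    return f"executed: {plan}"

memory: list[str] = []
env = {"observation": "user asks for the weather in Paris"}
print(act(think(perceive(env), memory)))
```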
Agentic memory is represented as:
- Sensory memory: short-term holding of inputs, not emphasized much in agents.
- Short-term memory: the LLM context window.
- Long-term memory: external storage such as RAG or knowledge graphs.
The memory in agents can be improved and researched in terms of:
- increasing the amount of stored information
- how to retrieve the most relevant info
- combining context-window memory with external memory
- deciding what to forget or update in memory
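A toy sketch of the third point above, combining context-window memory with an external store; the word-overlap scorer is a stand-in for real embedding-based retrieval:

```python
# Combine long-term (external) and short-term (context-window) memory.
long_term_store = [
    "User prefers metric units.",
    "User lives in Paris.",
    "User is allergic to peanuts.",
]

def retrieve(query: str, store: list[str], k: int = 2) -> list[str]:
    # Toy relevance score: word overlap (a real system would use embeddings).
    score = lambda doc: len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(store, key=score, reverse=True)[:k]

short_term = ["User: what's the weather like where I live?"]
context = retrieve(short_term[-1], long_term_store) + short_term
print("\n".join(context))  # this combined context is what the LLM would see
```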
The agent must simulate or predict the future states of the environment for planning and decision-making.
AI world models are much simpler than humans', which come with causal reasoning (cause and effect) and physical intuition.
LLM world models, by contrast, are mostly implicit and embedded in the model's weights.
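A toy explicit world model makes the planning idea concrete: predict the next state for each candidate action and pick the action whose predicted state looks best. This is an illustration of the concept, not the paper's method:

```python
# One-step lookahead with an explicit (toy) world model.
def predict_next_state(state: int, action: int) -> int:
    return state + action  # stand-in transition model

def score(state: int, goal: int) -> float:
    return -abs(goal - state)  # closer to the goal is better

state, goal = 0, 7
actions = [-1, 1, 3, 5]
best = max(actions, key=lambda a: score(predict_next_state(state, a), goal))
print(best)  # 5: its predicted state (5) lands closest to the goal (7)
```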
EMOTIONS are a deep aspect of humans, helping with social interaction, decision-making, and learning.
Agents must understand emotions to better interact with us.
But rather than encoding the felt experience of emotions, agents have only a surface-level model of them.
Perception is the process by which an agent receives and interprets raw data from its surroundings.
ChatGPT-4o's image generation has gone wild this week, featuring everything from Studio Ghibli-style art and image colorization to style intermixing. Here are some examples showcasing the generation of highly detailed images from freestyle design templates. Want to know more? Check out the blog 🚀
What, How, Where, and How Well? This paper reviews test-time scaling methods and all you need to know about them: > parallel, sequential, hybrid, internal scaling > how to scale (SFT, RL, search, verification) > metrics and evals of test-time scaling
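As a concrete example of parallel scaling, here is a best-of-N sketch; `sample` and `verify` are hypothetical stand-ins for an LLM sampler and a learned verifier:

```python
import random

def sample(prompt: str) -> str:
    return f"candidate-{random.randint(0, 9)}"  # stand-in for LLM sampling

def verify(prompt: str, answer: str) -> float:
    return random.random()  # stand-in for a verifier / reward model

def best_of_n(prompt: str, n: int = 8) -> str:
    # Parallel test-time scaling: spend more compute on N samples,
    # then keep the candidate the verifier scores highest.
    candidates = [sample(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: verify(prompt, c))

print(best_of_n("Solve: 17 * 24 = ?"))
```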
Luna, a single-speaker text-to-speech model, features a Radio/ATCOSIM-style sound with a female voice. It offers authentic radio-podcast noise and empathetic speech generation, fine-tuned from Orpheus, a state-of-the-art Llama-based speech generation model. 🎙️
Dropping some new Journey Art and Realism adapters for Flux.1-Dev, including Thematic Arts, 2021 Memory Adapters, Thread of Art, Black of Art, and more. For more details, visit the model card on Stranger Zone HF 🤗
The best dimensions and inference settings for optimal results:
- 1280 x 832 (3:2 aspect ratio): recommended for best quality
- 1024 x 1024 (1:1 aspect ratio): default
- Inference steps: 30-35 for optimal output
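A hedged diffusers sketch applying those settings; the adapter id is a placeholder, so check the model card for the real ones:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("strangerzonehf/your-adapter")  # hypothetical adapter id

image = pipe(
    "thematic journey art, golden hour",
    width=1280, height=832,   # recommended 3:2 resolution
    num_inference_steps=32,   # within the 30-35 range
).images[0]
image.save("out.png")
```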
Dropping downstream domain-specific 𝗶𝗺𝗮𝗴𝗲 𝗰𝗹𝗮𝘀𝘀𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 models with newly initialized parameters and weights (classifier.bias & classifier.weight). Based on siglip2-base-patch16-224 and DomainNet (single-domain, multi-source adaptation), with Fashion-MNIST & more for experimental testing. 🧤☄️
Models are trained with different parameter settings for experimental purposes only, with the intent of further development. Refer to the model page below for instructions on running it with Transformers 🤗.
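As a rough sketch of the setup (the base checkpoint is real; the label count is illustrative), the newly initialized head looks like this with transformers:

```python
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(
    "google/siglip2-base-patch16-224",
    num_labels=10,                 # e.g. Fashion-MNIST's 10 classes (illustrative)
    ignore_mismatched_sizes=True,  # allow a freshly initialized head
)
# model.classifier now holds the newly initialized classifier.weight & bias;
# fine-tune on the domain-specific dataset before using it.
```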
Play with Orpheus TTS, a Llama-based Speech-LLM designed for high-quality, empathetic text-to-speech generation. This model has been fine-tuned to deliver human-level speech synthesis 🔥🗣️
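A heavily hedged sketch of driving a Llama-based speech LLM like this with transformers; the checkpoint name and prompt format are assumptions, and decoding the generated audio tokens into a waveform (typically via an audio codec) is omitted, so check the model card for the actual recipe:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "canopylabs/orpheus-3b-0.1-ft"  # assumed checkpoint name
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Speech-LLM flow: text tokens in, audio codec tokens out.
inputs = tok("tara: Hey, how's it going?", return_tensors="pt")
audio_tokens = model.generate(**inputs, max_new_tokens=1024)
# Decode `audio_tokens` with the matching audio codec to get a waveform.
```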
There seem to be multiple paid apps shared here that are based on models on HF, but some people sell their wrappers as "products" and promote them here. For a long time, HF was the best and only platform for open-source model work, but with the recent AI website builders anyone can create a product (really crappy ones, btw) and try to sell it with no contribution back to open source. Please don't do this, or at least try fine-tuning the models you use... Sorry for filling your feed with this, but you know...
Page: strangerzonehf. Describe the artistic properties by posting sample images or links to similar images in the request discussion. If the adapters you're asking for are truly creative and safe for work, I'll train and upload the LoRA to the Stranger Zone repo!