Magpie is a recent technique for creating synthetic instruction datasets.
It's based on a simple but ingenious idea: if you prompt an instruction-tuned model with just a pre-query template, it will generate a plausible user query/instruction.
Here's an example:
model: Llama-3-8B-Instruct
pre-query template: "<|begin_of_text|><|start_header_id|>user<|end_header_id|>"
generated user instruction: "What are some of the responsibilities of a commercial pilot?"
You can then feed this instruction back into the same model to get the assistant response.
By repeating this process, it's possible to generate large synthetic datasets with relatively little effort.
💪 The authors demonstrate that using these datasets for Supervised Fine-Tuning (SFT) can yield strong performance, even competitive with the original instruct model.
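Below is a minimal sketch of the two-step loop using Hugging Face transformers, assuming Llama-3-8B-Instruct is available locally; the sampling settings and generation lengths are illustrative, not the paper's exact configuration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Step 1: prompt with only the pre-query template (taken verbatim from the
# post); the model "completes" it with a plausible user instruction.
pre_query = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>"
inputs = tokenizer(pre_query, return_tensors="pt", add_special_tokens=False).to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=1.0)
instruction = tokenizer.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)

# Step 2: feed the generated instruction back through the chat template
# to get the assistant response from the same model.
chat = [{"role": "user", "content": instruction}]
prompt_ids = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(prompt_ids, max_new_tokens=256)
response = tokenizer.decode(out[0][prompt_ids.shape[1]:], skip_special_tokens=True)

# One (instruction, response) pair of the synthetic dataset.
print({"instruction": instruction, "response": response})
```

Repeating this loop many times (and deduplicating/filtering) yields the large SFT datasets described above.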
Most Language Models are primarily trained on English texts, so they tend to produce data in English.
How can we overcome this?
Earlier approaches were complex or costly.
Then @mrm8488 found a simple solution: add the target language to the pre-query template. For Spanish, the template becomes "<|begin_of_text|><|start_header_id|>user<|end_header_id|>spanish:".
This method works for Spanish and German!
❌ Unfortunately, it does not work well for other languages (🇮🇹, 🇳🇱, ...)
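As a sketch, the trick is a one-line change on top of the snippet above; only the pre-query string changes (the `spanish:` suffix is verbatim from the post):

```python
# @mrm8488's trick: append the target language to the pre-query template.
# Per the post, this works for Spanish and German but transfers poorly
# to other languages.
pre_query_es = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>spanish:"
inputs = tokenizer(pre_query_es, return_tensors="pt", add_special_tokens=False).to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=True)
spanish_instruction = tokenizer.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
```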
reacted to yongchanghao's post with 🔥 about 1 month ago
We just released a paper (NeuZip) that losslessly compresses model weights in memory so you can run larger models. This should be particularly useful when VRAM is insufficient during training/inference. Specifically, we look inside each floating-point number and find that the exponents are highly compressible (as shown in the figure below).
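To make the observation concrete, here is a small self-contained experiment showing how compressible the float32 exponent field is. The normally distributed stand-in weights and the use of zlib as an off-the-shelf entropy coder are illustrative assumptions, not NeuZip's actual pipeline:

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for trained weights: normally distributed values concentrate
# the exponents in a narrow range (real checkpoints behave similarly).
weights = rng.normal(scale=0.02, size=1_000_000).astype(np.float32)

bits = weights.view(np.uint32)
exponents = ((bits >> 23) & 0xFF).astype(np.uint8)  # 8-bit exponent field
mantissas = bits & 0x7FFFFF                         # 23-bit mantissa, kept as-is (lossless)

raw = exponents.tobytes()
compressed = zlib.compress(raw, level=9)
print(f"exponent bytes: {len(raw):,} -> {len(compressed):,} "
      f"({len(compressed) / len(raw):.1%} of original)")
```

Because the compression touches only the exponent bits and is lossless, the reconstructed weights are bit-identical to the originals.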
How does it work?
- You provide a URL
- The AI assistant crawls the website content and embeds it
- You add it to your frontend in one line of code
- Visitors to your website can ask the assistant questions
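For intuition, here is a rough sketch of such a crawl-embed-answer loop; the library choices (requests, BeautifulSoup, sentence-transformers) and the fixed 200-word chunking are my assumptions, not the product's actual stack:

```python
import requests
from bs4 import BeautifulSoup
from sentence_transformers import SentenceTransformer, util

def crawl(url: str) -> list[str]:
    """Fetch a page and split its visible text into rough 200-word chunks."""
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)
    words = text.split()
    return [" ".join(words[i:i + 200]) for i in range(0, len(words), 200)]

model = SentenceTransformer("all-MiniLM-L6-v2")
chunks = crawl("https://example.com")            # placeholder URL
chunk_emb = model.encode(chunks, convert_to_tensor=True)

def ask(question: str) -> str:
    """Return the chunk most similar to the question (retrieval step only;
    a production assistant would pass this context to an LLM)."""
    q_emb = model.encode(question, convert_to_tensor=True)
    best = util.cos_sim(q_emb, chunk_emb).argmax().item()
    return chunks[best]
```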
I wanted to introduce myself and my company @Overlaiapp. We are a collective of filmmakers, photographers, and AI engineers working on high-resolution (8K+) training data.
We plan to share a lot of our datasets with the community and are kicking things off with two curated datasets:
🎥 Oversampled: Every clip is captured in stunning 8K resolution, delivering rich detail ideal for fine-tuning on scenic landscapes and ocean dynamics.
📸 Variance: Includes close-up details, slow-motion footage of crashing waves, sweeping landscapes, and wildlife shots.
📋 Detailed Metadata: Every clip is paired with structured metadata, including creative descriptions, precise camera movements, lens information, field-of-view calculations, and shot settings, ensuring AI models can fully understand and replicate real-world cinematography with accuracy (see the example record after this list).
⚙️ Consistency: Re-thinking training data at the point of capture by "overshooting" a subject, enabling models to learn more nuanced relationships and views across scenes.
🌅 Light: Shot during early morning and sunset light for optimal color contrast and dynamic range, maximizing visual quality for color- and lighting-sensitive tasks.
🔍 Curation: Curated specifically for machine learning, providing clean, high-quality data for next-generation model training.
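For illustration only, here is what a per-clip metadata record with fields like those listed above might look like; every field name and value is invented for the example and is not Overlai's actual schema:

```python
# Hypothetical per-clip metadata record (invented schema, for illustration).
clip_metadata = {
    "description": "Slow-motion wave crashing against basalt rocks at sunset",
    "camera_movement": {"type": "dolly", "direction": "forward", "speed_m_s": 0.5},
    "lens": {"focal_length_mm": 35, "aperture": "f/2.8"},
    "field_of_view_deg": 54.4,
    "shot_settings": {"resolution": "8192x4320", "fps": 120, "shutter": "1/240"},
}
```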
reacted to BlinkDL's post with 🔥 about 1 month ago