sometimesanotion

AI & ML interests

Agentic LLM services, model merging, finetunes, distillation

sometimesanotion's activity
replied to their post about 13 hours ago

@Inschrift-Spruch-Raum, I am looking through recent PRs to mergekit, and I am optimistic that Lamarck's recipes will be working again soon!

When that happens, there will be two efforts: one to make a compelling non-CoT model, and another to blend CoT in the right amounts.

Lamarck's multilingual capabilities improved noticeably from the light influence of Krystalan/DRT-14B in v0.6, and merging in other CoT models like DeepSeek R1 is a matter of careful moderation. I will always put the overall apparent quality of translation, prose, and reasoning first.

replied to their post 3 days ago

No worries! See, I agree, the recipe behind Lamarck is pretty good, and there's a lot more to get out of it. It'll likely depend on getting multiple mergekit versions working in the pipeline. The new mergekit's fusion and SCE merge methods offer some interesting potential, but I use fine-grained sliced merges to control the mix of branches, and last I checked, those work only with older mergekit and bitsandbytes.

By now there are ample upgrades to try. I did feel Lamarck v0.7 was a proof-of-concept and had plenty of headroom to grow!

replied to their post 15 days ago

You need to keep testing your models in PyTorch, not just GGUF, to catch this bug. If you submit an affected model for evaluation on the Open LLM Leaderboard, the run will abort.

For those who need a bit of Python to test their merged models:

import argparse

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

def main(checkpoint: str) -> None:
    """Load the tokenizer and model for a checkpoint to verify the weights load cleanly."""
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    print(f"Loaded tokenizer from {checkpoint}")

    # device_map="auto" already places the weights on the available device(s);
    # calling .to() afterwards would fail on accelerate-dispatched models.
    model = AutoModelForCausalLM.from_pretrained(
        checkpoint, device_map="auto", torch_dtype=torch.bfloat16
    )
    print(f"Loaded model to {model.device}")

def cli():
    """CLI entry point."""
    parser = argparse.ArgumentParser(description='Load a tokenizer and model from a given checkpoint.')
    parser.add_argument('checkpoint', type=str, help='The pre-trained checkpoint name or path')
    args = parser.parse_args()
    main(args.checkpoint)

if __name__ == "__main__":
    cli()
posted an update 16 days ago
I have tracked a blocker that has been preventing Lamarck releases down to a della_linear bug in newer mergekit versions.

If you use slices in della_linear merges that have multiple models - as you'd expect of a merge! - an attempt to load the output model in torch will get you:

ValueError: Trying to set a tensor of shape torch.Size([1, 5120]) in "weight" (which has shape torch.Size([5120])), this looks incorrect.


This strategy was key to Lamarck v0.6 and v0.7's success. Their merge recipes haven't been working with newer mergekit versions.

These work:
models:
  - model:           sometimesanotion/Qwen2.5-14B-Vimarckoso-v3
  - model:           sthenno-com/miscii-14b-0218

slices:
  - sources:
    - { layer_range: [  0,  2 ], model: sometimesanotion/Qwen2.5-14B-Vimarckoso-v3 }
  - sources:
    - { layer_range: [  2,  6 ], model: sometimesanotion/Qwen2.5-14B-Vimarckoso-v3 }


This does not:
slices:
  - sources:
    - { layer_range: [  0,  2 ], model: sometimesanotion/Qwen2.5-14B-Vimarckoso-v3 }
    - { layer_range: [  0,  2 ], model: sthenno-com/miscii-14b-0218 }
  - sources:
    - { layer_range: [  2,  6 ], model: sometimesanotion/Qwen2.5-14B-Vimarckoso-v3 }
    - { layer_range: [  2,  6 ], model: sthenno-com/miscii-14b-0218 }


@Crystalcareai, do you know of any work on this? Will @arcee-ai need a detailed report? These della_linear recipes used to work. Overall, thank you for all the cool work; I hope to get this fixed!
replied to their post 21 days ago

The numbers are in! The results are fascinating.
(screenshot of the benchmark results)

Though IFEVAL skewed low compared to the ancestor model's average, and Lamarckvergence's improved MATH didn't come through, this model is strong in several ways. The GPQA score suggests as much. These are scores I'm pretty sure I can improve without giving up much of the interesting gains.

What's more, my subjective impression is that its prose and consistency get a boost from Chocolatine. @jpacifico, I think arcee_fusion is a merge method that has a lot to offer for your future base models! This also bodes very well for the next several merges to come.

replied to csabakecskemeti's post 21 days ago

I've been doing all my LoRA work on AMD hardware with Linux; I'm looking forward to your notes! I sometimes still do it on CPU because it's easy to renice the task priority so the foreground tasks stay snappy.

The main challenge I have is keeping a solid ROCm bitsandbytes install when other packages want updates.

reacted to csabakecskemeti's post with 🚀 21 days ago
Testing training on AMD/ROCm for the first time!

I've got my hands on an AMD Instinct MI100. It's about the same price used as a V100, but on paper it has more TOPS (V100 14 TOPS vs MI100 23 TOPS), and the HBM has a faster clock, so the memory bandwidth is 1.2 TB/s.
For quantized inference it's a beast (the MI50 was also surprisingly fast).

For LoRA training with this quick test I could not make the bnb config work, so I'm running the FT on the full-size model.

Will share all the install, setup, and settings I've learned in a blog post, together with the cooling shroud 3D design.
posted an update 23 days ago
I'd like to draw your attention to a Lamarck-based experiment which uses Arcee AI's newly published arcee_fusion merge method for three out of its four merges. Yes, just four. This is a simple one, and its recipe is fully open:

sometimesanotion/Lamarck-14B-v0.7-Fusion

It unifies three branches, all of which feature models that bring Lamarck-14B-v0.7 and Qwenvergence-14B-v12-Prose together. One side features @jpacifico's jpacifico/Chocolatine-2-14B-Instruct-v2.0.3, and the other features @suayptalha's suayptalha/Lamarckvergence-14B paired with my models that were their merge ancestors.

A fusion merge - of a fusion merge and a SLERP of a fusion and older merge - should demonstrate the new merge method's behavior in interesting ways, especially in the first 1/4th of the model where the SLERP has less impact.
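
For readers who haven't tried the new method yet, here is a minimal, hypothetical sketch of what a single arcee_fusion step can look like in mergekit YAML. I'm assuming the method takes a base model plus one secondary model; the pairing below is illustrative, not the actual Fusion recipe, which is open on the model card:

merge_method: arcee_fusion
base_model: sometimesanotion/Lamarck-14B-v0.7
models:
  - model: sometimesanotion/Qwenvergence-14B-v12-Prose
dtype: bfloat16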

I welcome you to kick the tires and learn from it. It has prose quality near Qwenvergence v12's - as you'd expect.

Thank you, @mradermacher and @MaziyarPanahi, for the first-day quantizations! Your work helped get me started. https://huggingface.co/models?other=base_model:quantized:sometimesanotion/Lamarck-14B-v0.7-Fusion
replied to jjokah's post 23 days ago

Right-sizing language models is something I'm really here for. I find that a 1.5B-parameter model fronting simple questions from a backing RAG source, which a larger model gradually works on, is more scalable. Classic information sources and stores can be QA'd, and they don't have such huge energy footprints.

AI will work out better if we give humans, classic code, SLMs, and frontier LLMs the roles they're right-sized for, and ensure data privacy and individual dignity at every stage of the contract.

reacted to jjokah's post with 👍 23 days ago
The past few years have been a blast for artificial intelligence, with large language models (LLMs) stunning everyone with their capabilities and powering everything from chatbots to code assistants. However, not all applications demand the massive size and complexity of LLMs; the computational power they require makes them impractical for many use cases. This is why Small Language Models (SLMs) entered the scene, making powerful AI more accessible by shrinking it in size.

In this article we went through what SLMs are, how they are made small, their benefits and limitations, real-world use cases, and how they can be used on mobile and desktop devices.
https://huggingface.co/blog/jjokah/small-language-model
posted an update about 1 month ago
I am really pleased to see jpacifico/Chocolatine-2-14B-Instruct-v2.0.3 take #4 in the 14B segment of the Open LLM Leaderboard. It is a fine-tune of a merge of Arcee's arcee-ai/Virtuoso-Small-v2 with my sometimesanotion/Lamarck-14B-v0.7 and sometimesanotion/Qwenvergence-14B-v12-Prose-DS. Don't let the numbers fool you; in its element, it's quite smooth. I really enjoy merges of Lamarck with near siblings like this one.

Don't be surprised when it's challenging to bring the full reasoning strength of a reasoning-heavy prose model like Qwenvergence v12-DS into a high-IFEVAL model like Lamarck or Virtuoso Small v2. That's a lot of work to get right, because IFEVAL, precise reasoning, and prose quality are often in tension with each other. Gaining as much as this did is really respectable, and fine-tuning it makes it a more stable base for the coming iterations.
reacted to sequelbox's post with ➕ about 1 month ago
reacted to CultriX's post with 🔥 about 1 month ago
# Multi-Agent Collaboration for Coding Tasks - Updated Space!

This version does not rely on AutoGen.
The user simply enters his OPENAI_API_KEY and a task, and the Space goes to work, employing:
- 1. a prompt-enhancer agent,
- 2. an orchestrator agent,
- 3. a coder agent,
- 4. a code-reviewing agent, and
- 5. a code documentation generator agent.

See below image for an example workflow:

CultriX/MultiAgent-CodeTask
replied to their post about 1 month ago

Okay, this has become a major component of how I build model_stocks that keep IFEVAL high even while merging distantly related models, and it's the reason for some of the TIES merges to "qwenvergify" models you might have seen.

Here's the basic idea:
https://www.arcee.ai/blog/use-mergekit-to-extract-lora-adapters-from-any-fine-tuned-model

But not as many models are inter-compatible for LoRAs as you'd expect, because there are minor variations in size among some important finetunes. I get the train tracks to a standard width, as it were, and make them intercompatible with the "qwenvergify" TIES merges between two models: weight 1.0 for the model of interest and weight 0.0 for any Qwenvergence or Lamarck model, just for the tiny bit of infill. You now have all models intercompatible for what is akin to a super-high-precision DELLA merge of the most significant, most IFEVAL-preserving parts of the model. A rank 512 adapter extracts around 30% of the most defining aspects of the model but captures around 90% of its distinct performance; a rank 128 adapter captures around 8% of the model but about 70% of its distinct performance.
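
As a rough sketch of what one of those "qwenvergify" TIES merges can look like in mergekit YAML - the model of interest here is a made-up name, and the base_model choice is my assumption, not the exact recipe:

merge_method: ties
base_model: Qwen/Qwen2.5-14B                      # assumption: a shared Qwen2.5-14B ancestor
models:
  - model: some-author/interesting-finetune-14B   # hypothetical model of interest
    parameters:
      weight: 1.0
      density: 1.0
  - model: sometimesanotion/Lamarck-14B-v0.7      # weight 0.0, only there for the tiny bit of infill
    parameters:
      weight: 0.0
      density: 1.0
dtype: bfloat16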

I arrived at this while thinking about the implications of @rombodawg's "Continuous Fine Tuning" strategy, and reading I-forget-which-arxiv-paper, which I really need to find again. It's like the opposite side of the coin from how rombodawg uses it: I use it at the beginning to get a large model_stock started, while he uses it at the end to extract most of a merge and apply it to a target model to avoid catastrophic forgetting.

There. Now you know the methodology behind my merge YAML that produced https://huggingface.co/sometimesanotion/Qwenvergence-14B-v13-Prose-DS - or, the model that calls itself "Qwenconceited-14B-v13-DeepSuffering". 😆

Adapters from a strong IFEVAL+BBH model, applied to the majority of the models in the model_stock merge in a mixture of rank sizes between 32 and 128, get them on the same page for core operation. Applying a Virtuoso- or Chocolatine-based LoRA to just any model out there could cause instability, but the model_stock smooths out the many varying levels of adapter merges.
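
For the model_stock stage itself, a bare-bones sketch along these lines - again with placeholder names for the qwenvergified, adapter-treated inputs rather than my published YAML:

merge_method: model_stock
base_model: Qwen/Qwen2.5-14B                      # assumption: shared base for the stock
models:
  - model: some-author/qwenvergified-finetune-A   # hypothetical, adapter applied at rank 128
  - model: some-author/qwenvergified-finetune-B   # hypothetical, adapter applied at rank 32
  - model: sometimesanotion/Lamarck-14B-v0.7
dtype: bfloat16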

That's enough for you to digest for now, and @rombodawg might be interested to know he inspired such a different strategy from anything he's shared.

replied to their post about 1 month ago

You can reach me on Discord, my username is as you'd expect.

Once I show you how Qwentinuum broke the barrier, finally got stabilized, and made Vimarckoso v3, you'll see why I'm being a little careful. It takes multiple steps to reliably tame weighty breadcrumbs merges, and I'm using Makefiles to make sure nothing gets skipped. That's not so easily posted to a model card! If people misuse parts of my recipe, especially with more CoT models out there, we'll get spammed with a lot of unstable models.

But the rewards of getting it right!

replied to their post about 1 month ago

I've really been pondering that, and it's almost certainly because of the blend of R1 and Krystalan/DRT-o1-14B. We have two different CoT lineages feeding into one model - wonderful, until it's not! DRT is a bit hard to give up. I think this is where we've finally done all we can with merging, however fancy, and it's time to get down to fine-tuning, because if DRT's and DS's influences sync up, it'll be magic.