D_AU - Brainstorm - Augmented and Expanded Reasoning Collection (61 items): Models where the reasoning center has been split apart, multiplied, and calibrated by 3x, 4x, 8x, 10x+. Creativity+ / Logic+ / Detail+ / Prose+ ...
Long Context - 16k, 32k, 64k, 128k, 200k, 256k, 512k, 1000k Collection (71 items): Q6/Q8 models. Mistral and Mixtral models (and merges) generally have 32k context and are not listed here. Please see the original model card for usage/templates.
D_AU - Reasoning Adapters / LoRAs -> Any model to reasoning Collection (22 items): LoRA adapters and methods to turn any model into a reasoning model, covering multiple model families including Llama, Mistral, Qwen, and more.
DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-gguf (Text Generation, updated Mar 6)
D_AU - Thinking / Reasoning Models - Regular and MOEs Collection (57 items): QwQ, DeepSeek, EXAONE, DeepHermes, and other "thinking/reasoning" AIs / LLMs in regular, MOE (mixture of experts), and hybrid model formats.