
Arjun Srivastava

arjunsriva

AI & ML interests

Recommendation Engines, Causal Inference

Recent Activity

liked a model 10 days ago
Qwen/Qwen2.5-1.5B-Instruct
liked a model 12 days ago
tencent/HunyuanVideo
liked a model 30 days ago
microsoft/OmniParser

Organizations

MLX Community, Social Post Explorers

arjunsriva's activity

reacted to akhaliq's post with 👍 10 months ago
The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits (2402.17764)

Recent research, such as BitNet, is paving the way for a new era of 1-bit Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant, namely BitNet b1.58, in which every single parameter (or weight) of the LLM is ternary {-1, 0, 1}. It matches the full-precision (i.e., FP16 or BF16) Transformer LLM with the same model size and training tokens in terms of both perplexity and end-task performance, while being significantly more cost-effective in terms of latency, memory, throughput, and energy consumption. More profoundly, the 1.58-bit LLM defines a new scaling law and recipe for training new generations of LLMs that are both high-performance and cost-effective. Furthermore, it enables a new computation paradigm and opens the door for designing specific hardware optimized for 1-bit LLMs.
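For concreteness, the ternary constraint can be sketched in a few lines: the paper describes an absmean quantization scheme that scales a weight tensor by its mean absolute value, then rounds and clips every entry to {-1, 0, 1}. Below is a minimal PyTorch sketch of that idea; the function name and the standalone per-tensor formulation are illustrative, not the paper's reference implementation.

```python
import torch

def absmean_ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    """Quantize a full-precision weight tensor to ternary {-1, 0, 1}.

    Sketch of the absmean scheme described in the BitNet b1.58 paper:
    scale by the mean absolute value, then round and clip to {-1, 0, 1}.
    """
    gamma = w.abs().mean()                     # per-tensor scaling factor
    w_scaled = w / (gamma + eps)               # normalize by mean magnitude
    w_ternary = w_scaled.round().clamp(-1, 1)  # ternary weights in {-1, 0, 1}
    return w_ternary, gamma                    # keep gamma to rescale outputs

# Example: quantize a random FP16 weight matrix
w = torch.randn(4, 4, dtype=torch.float16)
w_q, gamma = absmean_ternary_quantize(w.float())
print(w_q)     # every entry is -1.0, 0.0, or 1.0
print(gamma)   # the per-tensor scale
```

Because the quantized weights take only three values, matrix multiplication reduces largely to additions and subtractions, which is where the paper's latency, memory, and energy savings come from.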