Sarath Shekkizhar

sarath-shekkizhar

AI & ML interests

None yet


Organizations

Salesforce · Tenyx · Blog-explorers · ZeroGPU Explorers · Social Post Explorers · Hugging Face Discord Community

Posts 2

Post
Some interesting architectural choices made in the Llama 4 models -- were these key to the 10M context? Possibly 🤔

๐Ÿ” Takeaways:
๐Ÿงฉ Interleaved Attention without position encoding
- LLaMA 4 removes explicit positional encoding in some attention layers to boost performance on longer contexts.
- The principles here could be similar to the residual connections to facilitate attention to early tokens without positional decay.
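A minimal, self-contained sketch of the idea (not Llama 4's actual implementation): a stack of single-head attention layers in which only some layers apply rotary position embeddings, while the rest attend purely on content. The dimensions and the "every 4th layer skips RoPE" rule are illustrative assumptions.

```python
# Rough sketch: interleaving attention layers with and without RoPE.
import torch
import torch.nn as nn
import torch.nn.functional as F


def apply_rope(x: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
    """Apply rotary position embeddings to the last dimension of x."""
    half = x.shape[-1] // 2
    freqs = 1.0 / (10000 ** (torch.arange(half, device=x.device).float() / half))
    angles = positions[:, None].float() * freqs[None, :]          # (seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)


class AttentionLayer(nn.Module):
    def __init__(self, dim: int, use_rope: bool):
        super().__init__()
        self.use_rope = use_rope
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.o = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        positions = torch.arange(x.shape[1], device=x.device)
        q, k, v = self.q(x), self.k(x), self.v(x)
        if self.use_rope:
            # Positional information is injected only in the RoPE layers.
            q, k = apply_rope(q, positions), apply_rope(k, positions)
        # Layers without RoPE attend purely on content -- no positional decay
        # toward early tokens, which may help very long contexts.
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return x + self.o(attn)


# Illustrative stack: every 4th layer drops positional encoding.
layers = nn.ModuleList(AttentionLayer(dim=128, use_rope=(i % 4 != 3)) for i in range(8))
x = torch.randn(1, 16, 128)
for layer in layers:
    x = layer(x)
```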

โš–๏ธ Scaled Softmax to increase attention at inference time
- The max attention value (output of softmax) decreases as context size increases.
- Llama 4 incorporates a context-size dependent temperature in the softmax function to modify the slope of softmax, allowing the model to focus better on relevant tokens.
- Done only at inference time -- guessing it was more a choice after some observation on eval datasets.
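A rough sketch of the mechanism (the log schedule and `beta` below are assumptions, not Llama 4's published formula): the attention logits are sharpened by a temperature that grows with context length, so the softmax stays peaked once the sequence exceeds the training context.

```python
# Rough sketch: inference-time, context-size-dependent softmax temperature.
import math
import torch
import torch.nn.functional as F


def scaled_softmax_attention(q, k, v, train_ctx: int = 8192, beta: float = 0.1):
    """q, k, v: (batch, heads, seq, head_dim)."""
    seq_len = q.shape[-2]
    logits = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])
    causal = torch.triu(
        torch.ones(seq_len, seq_len, dtype=torch.bool, device=q.device), diagonal=1
    )
    logits = logits.masked_fill(causal, float("-inf"))
    # Sharpen the logits once the context exceeds the training length, so the
    # max attention weight does not wash out as more tokens compete in the softmax.
    temperature = 1.0 + beta * math.log(max(seq_len / train_ctx, 1.0))
    weights = F.softmax(logits * temperature, dim=-1)
    return weights @ v


q = k = v = torch.randn(1, 2, 512, 64)
out = scaled_softmax_attention(q, k, v)   # (1, 2, 512, 64)
```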

What did you think of these choices?
Post
Hi folks,
Tenyx announced its latest model, Llama3-TenyxChat-70B, which outperforms a GPT-4 variant on several MT-Bench measurements.

By post-training Llama-3 70B for 15 hours, our model improves reasoning capabilities by leveraging the relationship between geometry and LLM task complexity (take a look at our paper, https://arxiv.org/abs/2312.01648, to be presented at ICML 2024).
Model: tenyx/Llama3-TenyxChat-70B · Hugging Face Space: tenyx/Llama3-TenyxChat-70B
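If you want to try it, a standard transformers quick start looks like this (the generation settings are illustrative, and the 70B checkpoint needs substantial GPU memory or quantization):

```python
# Quick start with the Hub checkpoint (standard transformers pipeline usage).
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="tenyx/Llama3-TenyxChat-70B",
    device_map="auto",
    torch_dtype="auto",
)
messages = [{"role": "user", "content": "Give me a two-sentence summary of MT-Bench."}]
print(chat(messages, max_new_tokens=128)[0]["generated_text"])
```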

models

None public yet

datasets

None public yet