Jean Francois Gutierrez

FranZuzz

AI & ML interests

None yet

Recent Activity

published a Space 7 days ago
FranZuzz/my-argilla
updated a model 8 months ago
FranZuzz/franzuzz
liked a dataset 9 months ago
zachgitt/comedy-transcripts

Organizations

None yet

FranZuzz's activity

published a Space 7 days ago
updated a collection 12 months ago
replied to nisten's post 12 months ago

Interesting! Did you fine-tune it with a health/medical dataset?

New activity in xai-org/grok-1 about 1 year ago

free API Interface for Grok (19 replies)
#26 opened about 1 year ago by ai-leet

120b GGUF quants when? (1 reply)
#57 opened about 1 year ago by 6346y9uey
updated a Space about 1 year ago
reacted to akhaliq's post with 🤯 about 1 year ago
LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens (2402.13753)

A large context window is a desirable feature in large language models (LLMs). However, due to high fine-tuning costs, the scarcity of long texts, and the catastrophic values introduced by new token positions, current extended context windows are limited to around 128k tokens. This paper introduces LongRoPE, which for the first time extends the context window of pre-trained LLMs to 2048k tokens, with only up to 1k fine-tuning steps at training lengths within 256k, while maintaining performance at the original short context window. This is achieved by three key innovations: (i) we identify and exploit two forms of non-uniformity in positional interpolation through an efficient search, providing a better initialization for fine-tuning and enabling an 8x extension in non-fine-tuning scenarios; (ii) we introduce a progressive extension strategy that first fine-tunes a 256k-length LLM and then conducts a second positional interpolation on the fine-tuned extended LLM to achieve a 2048k context window; (iii) we readjust LongRoPE on 8k lengths to recover short-context-window performance. Extensive experiments on LLaMA2 and Mistral across various tasks demonstrate the effectiveness of our method. Models extended via LongRoPE retain the original architecture with minor modifications to the positional embedding, and can reuse most pre-existing optimizations.
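To make the abstract concrete, here is a minimal NumPy sketch of the uniform positional-interpolation baseline that LongRoPE improves on: rotary-embedding (RoPE) angles are computed per position, and a trained 4k window is stretched to 32k by uniformly dividing every frequency by the extension ratio. The function names and the per-dimension `scale` parameter are illustrative assumptions, not the paper's API; LongRoPE's contribution is that it *searches* for non-uniform per-dimension scale factors rather than using the single uniform ratio shown here.

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0, scale=None):
    """Rotary-embedding angles for each position and frequency pair.

    `scale` is an illustrative per-dimension rescale factor (length dim//2).
    Uniform scale = classic positional interpolation; LongRoPE instead
    searches for non-uniform factors across dimensions.
    """
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)   # shape (dim//2,)
    if scale is not None:
        inv_freq = inv_freq / np.asarray(scale)        # slow rotation -> interpolate
    return np.outer(positions, inv_freq)               # shape (len(positions), dim//2)

# Stretch a window trained at 4k positions to 32k (an 8x extension)
# by uniformly dividing all rotation frequencies by the ratio.
orig_len, new_len, dim = 4096, 32768, 64
ratio = new_len / orig_len
original = rope_angles(np.arange(orig_len), dim)
uniform = rope_angles(np.arange(new_len), dim,
                      scale=np.full(dim // 2, ratio))

# Interpolated position 8*p now lands on the same angles the model saw
# at position p during training, so extended positions stay in-distribution.
assert np.allclose(uniform[8 * 100], original[100])
```

Note the trade-off the paper addresses: squeezing 8x more positions into the trained angle range blurs nearby positions together, which is why a purely uniform scale degrades short-context performance and why LongRoPE readjusts the factors at 8k lengths.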