Question about context length

#1
by HR1777 - opened

I'm interested in using language models to edit long-form content, specifically articles of around 3,000 words. In your experience, can this model process and edit inputs of that length (roughly 8,000 tokens) while retaining the key details? I've tested over 200 models without success, so I'm curious whether this one can handle it.
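For reference, this is a minimal sketch of how the token count of such an article could be checked with the transformers library before sending it to the model; the checkpoint name and file path are placeholder assumptions, not specifics of any particular model:

```python
# Minimal sketch: estimate how many tokens a ~3,000-word article uses.
# "your-model-checkpoint" and "article.txt" are placeholders -- substitute
# the tokenizer of the model being tested and your own file.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-model-checkpoint")

with open("article.txt", encoding="utf-8") as f:
    article = f.read()

token_ids = tokenizer.encode(article)
print(f"Words: {len(article.split())}, tokens: {len(token_ids)}")
# Compare the token count against the model's advertised context window
# to see whether the full article plus editing instructions will fit.
```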
