Dataset columns:
- audio: audio clip, duration 8.16 to 29.3 s
- text: transcript segment (string, 6 values)
- start_time: segment start timestamp (string, 6 values)
- end_time: segment end timestamp (string, 6 values)
SPIN is self-play fine-tuning that improves LLMs. Tricksy is a form of fast inference involving sparsity. Phi-2 is a model from Microsoft. Lightning-Attention-2 is an alternative to Flash-Attention. Mixtral-8x7B is a mixture-of-experts model. Solar 10.7B is a Mistral model with some extra layers added in.
00:00:00.000
00:00:29.320
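A mixture-of-experts model like Mixtral-8x7B routes each token through only a few of its experts: a gate scores all experts and just the top ones are run, with their outputs mixed by the renormalized gate weights. A minimal illustration of such top-2 gating in plain Python; this is a sketch of the general idea, not Mixtral's actual implementation:

```python
import math

def top2_gate(expert_logits):
    """Softmax over expert scores, keep the top 2, renormalize their weights.

    Returns {expert_index: mixing_weight} for the two selected experts.
    """
    m = max(expert_logits)
    exps = [math.exp(x - m) for x in expert_logits]  # stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    top2 = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:2]
    norm = sum(probs[i] for i in top2)
    return {i: probs[i] / norm for i in top2}

# 8 experts, as in Mixtral-8x7B; scores are illustrative
weights = top2_gate([0.1, 2.0, -1.0, 0.5, 0.0, 1.5, 0.2, -0.3])
```

Only the two selected experts' feed-forward blocks are evaluated, which is why an 8x7B model is much cheaper to run than a dense model of the same total parameter count.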
OpenChat is a fine-tune of the Mistral model. Notux-8x7B-v1.0 is a fine-tuned version of the Mixtral model. Gemini Pro is Google's best available model, though perhaps not as good as Gemini Ultra. Microsoft Phi-2 I've already mentioned.
00:00:29.320
00:00:53.200
DeciLM is a high-speed 7B model, that's DeciLM-7B. Arena Elo is a means of comparing LLMs. MT Bench is another metric, and MMLU is as well.
00:00:53.200
00:01:10.560
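Arena Elo ranks LLMs from pairwise human votes using the standard Elo update: each model has a rating, an expected score is computed from the rating gap, and both ratings move by K times the surprise. A minimal sketch; the K-factor and 400-point scale here are the usual chess defaults, not necessarily the leaderboard's exact parameters:

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """One Elo update after a pairwise comparison.

    score_a: 1.0 if A wins, 0.0 if A loses, 0.5 for a tie.
    Returns the updated (rating_a, rating_b).
    """
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1 - score_a) - (1 - expected_a))
    return new_a, new_b

# Two models start equal; model A wins one head-to-head vote
ra, rb = elo_update(1000.0, 1000.0, 1.0)
```

Because the update is symmetric, the total rating mass is conserved; an upset win against a higher-rated model moves the ratings more than an expected win.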
GPT-4 Turbo is a fast GPT-4 model by OpenAI. Mistral Medium is a mixture of experts but with larger experts than Mixtral-8x7B. Claude 1 and Claude 2 (or Claude 2.0) are the latest Claude models. Mixtral-8x7B-Instruct-v1, or rather v0.1, that's Mixtral-8x7B-
00:01:11.760
00:01:40.380
Instruct-v0.1. That's the latest mixture-of-experts model. Yi-34B-Chat is a very strong fine-tune of Llama. Claude-Instant-1 is one of the Claude models. Tulu-2-DPO-70B is a DPO fine-tuned model by Allen AI; it's a fine-tune of the Llama 2 70B model.
00:01:40.380
00:02:06.240
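DPO (direct preference optimization), used to train Tulu-2-DPO-70B, learns from preference pairs: for each prompt it has a chosen and a rejected completion, and the loss pushes the policy to favor the chosen one more than a frozen reference model does. A minimal per-pair loss sketch; the log-probabilities passed in are illustrative inputs, and beta is the usual trade-off hyperparameter:

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * (policy margin - reference margin))."""
    margin = beta * ((logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected))
    return -math.log(1 / (1 + math.exp(-margin)))

# If the policy matches the reference exactly, the margin is 0 and the loss is log 2
baseline = dpo_loss(-5.0, -5.0, -5.0, -5.0)
# Raising the chosen completion's log-prob relative to the reference lowers the loss
improved = dpo_loss(-4.0, -5.0, -5.0, -5.0)
```

The appeal of DPO is that this is a plain supervised objective over preference pairs, with no separate reward model or RL loop as in RLHF.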
WizardLM-70B is also a fine-tune of the Llama 2 70B model.
00:02:06.240
00:02:14.400