Estimating Memory Consumption of LLMs for Inference and Fine-Tuning for Cohere Command-R+ By Andyrasika • about 14 hours ago • 3
Post-OCR-Correction: 1 billion words dataset of automated OCR correction by LLM By Pclanglais • about 15 hours ago • 5
Fine Tuning a LLM Using Kubernetes with Intel® Xeon® Scalable Processors By dmsuehir • 3 days ago • 1
LLM Comparison/Test: Llama 3 Instruct 70B + 8B HF/GGUF/EXL2 (20 versions tested and compared!) By wolfram • 3 days ago • 29
Run the strongest open-source LLM model: Llama3 70B with just a single 4GB GPU! By lyogavin • 6 days ago • 8