Estimating Memory Consumption of LLMs for Inference and Fine-Tuning for Cohere Command-R+ By Andyrasika • 1 day ago • 4
Post-OCR-Correction: 1 billion words dataset of automated OCR correction by LLM By Pclanglais • 1 day ago • 7
Fine Tuning a LLM Using Kubernetes with Intel® Xeon® Scalable Processors By dmsuehir • 4 days ago • 1
LLM Comparison/Test: Llama 3 Instruct 70B + 8B HF/GGUF/EXL2 (20 versions tested and compared!) By wolfram • 4 days ago • 30
Run the strongest open-source LLM model: Llama3 70B with just a single 4GB GPU! By lyogavin • 7 days ago • 9