Japanese-LLaMA-3-8B

Japanese-LLaMA-3-8Bγ―εŸΊη›€γƒ’γƒ‡γƒ«γ€γƒ•γƒ«γƒ’γƒ‡γƒ«γ§γ™γ€‚

It was developed based on Meta-Llama-3-8B.

GMOγ‚€γƒ³γ‚ΏγƒΌγƒγƒƒγƒˆγ‚°γƒ«γƒΌγƒ—ζ ͺεΌδΌšη€ΎγŒι‹ε–Άγ™γ‚‹ConoHa VPS (with NVIDIA H100 GPU)δΈŠγ§ι–‹η™ΊεŠγ³γƒ†γ‚Ήγƒˆγ‚’θ‘Œγ„γΎγ—γŸγ€‚

Model size: 8.03B params (Safetensors), tensor types BF16 / FP16.

Quantized variants (4 models) are available in the model tree for owner203/japanese-llama-3-8b.