Model Card for fasoo/llama-3.1-legal
This model was further trained on 200,000 Korean legal-domain samples.
Training started from the Llama 3.1 8B base model; no additional instruction tuning was performed.
The model was developed with support from the "AI-centered Industrial Convergence Cluster Development Project," jointly funded by the Ministry of Science and ICT and Gwangju Metropolitan City.
Model Details
Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by: fasoo
- Model type: Causal Language Model
- Language(s) (NLP): English, Korean
- License: [More Information Needed]
- Finetuned from model: Llama 3.1 8B
Uses
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("fasoo/llama-3.1-legal")
model = AutoModelForCausalLM.from_pretrained("fasoo/llama-3.1-legal")
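Since no instruction tuning was applied, the model is best used for plain text completion rather than chat-formatted prompting. The sketch below (an illustrative example, not an official usage guide; the helper names and decoding defaults are assumptions) wraps the loaded model in a small completion function:

```python
# Hedged sketch: treat fasoo/llama-3.1-legal as a base completion model.
# Helper names and decoding defaults below are illustrative, not official.

def build_generation_kwargs(max_new_tokens: int = 256) -> dict:
    # Conservative greedy-decoding settings; tune for your task.
    return {"max_new_tokens": max_new_tokens, "do_sample": False}

def complete(model, tokenizer, prompt: str, **kwargs) -> str:
    """Return only the newly generated continuation of `prompt`."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, **build_generation_kwargs(**kwargs))
    # Strip the prompt tokens so only the continuation is returned.
    return tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:],
                            skip_special_tokens=True)

# Example (requires downloading the checkpoint; prompt is hypothetical):
# from transformers import AutoTokenizer, AutoModelForCausalLM
# tokenizer = AutoTokenizer.from_pretrained("fasoo/llama-3.1-legal")
# model = AutoModelForCausalLM.from_pretrained("fasoo/llama-3.1-legal")
# print(complete(model, tokenizer, "민법상 계약이란"))
```

Greedy decoding is chosen here because legal-domain completions usually favor determinism over sampling diversity; enable `do_sample=True` with a temperature if more varied output is wanted.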
Environmental Impact
- Hardware Type: NVIDIA H100 80GB HBM3
- Compute Region: Gwangju, South Korea