---
license: llama3
language:
- tr
- en
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: MARS
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge TR
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc
      value: 46.08
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU TR
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 47.02
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA TR
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: acc
      value: 49.38
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande TR
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 53.71
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k TR
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 53.08
      name: accuracy
---

*Curiosity MARS model logo*

# MARS

MARS is the first iteration of Curiosity Technology's models, based on Llama 3 8B. We trained MARS on an in-house Turkish dataset, as well as several open-source datasets and their Turkish translations. We intend to release these Turkish translations in the near future so the community can experiment with them. MARS was trained for 3 days on 4xA100 GPUs.

## Model Details

- **Base Model**: Meta Llama 3 8B Instruct
- **Training Dataset**: In-house & translated open-source Turkish datasets
- **Training Method**: LoRA fine-tuning
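Since MARS derives from Llama 3 8B Instruct, prompts presumably follow the standard Llama 3 chat template. A minimal sketch of building such a prompt by hand (the special-token names come from the Llama 3 format, not this model card — for production use, prefer the tokenizer's own `apply_chat_template`):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3 Instruct prompt string.

    Each turn is wrapped in <|start_header_id|>role<|end_header_id|>
    and terminated with <|eot_id|>; the trailing assistant header
    cues the model to generate its reply.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Example: a Turkish system prompt and user message.
prompt = build_llama3_prompt(
    "Sen yardımcı bir asistansın.",  # "You are a helpful assistant."
    "Merhaba, kendini tanıtır mısın?",  # "Hello, can you introduce yourself?"
)
print(prompt)
```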