---
language:
  - pt
license: apache-2.0
library_name: transformers
tags:
  - text-generation-inference
  - llama-cpp
  - gguf-my-repo
datasets:
  - TucanoBR/GigaVerbo
metrics:
  - perplexity
pipeline_tag: text-generation
widget:
  - text: A floresta da Amazônia é conhecida por sua
    example_title: Exemplo
  - text: Uma das coisas que Portugal, Angola, Brasil e Moçambique tem em comum é o
    example_title: Exemplo
  - text: O Carnaval do Rio de Janeiro é
    example_title: Exemplo
inference:
  parameters:
    repetition_penalty: 1.2
    temperature: 0.2
    top_k: 20
    top_p: 0.2
    max_new_tokens: 150
co2_eq_emissions:
  emissions: 4475000
  source: CodeCarbon
  training_type: pre-training
  geographical_location: Germany
  hardware_used: NVIDIA A100-SXM4-80GB
model-index:
  - name: Tucano-2b4
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: CALAME-PT
          type: NOVA-vision-language/calame-pt
          split: all
          args:
            num_few_shot: 0
        metrics:
          - type: acc
            value: 59.06
            name: accuracy
        source:
          url: https://huggingface.co/datasets/NOVA-vision-language/calame-pt
          name: Context-Aware LAnguage Modeling Evaluation for Portuguese
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: LAMBADA-PT
          type: TucanoBR/lambada-pt
          split: train
          args:
            num_few_shot: 0
        metrics:
          - type: acc
            value: 37.67
            name: accuracy
        source:
          url: https://huggingface.co/datasets/TucanoBR/lambada-pt
          name: LAMBADA-PT
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: ENEM Challenge (No Images)
          type: eduagarcia/enem_challenge
          split: train
          args:
            num_few_shot: 3
        metrics:
          - type: acc
            value: 20.5
            name: accuracy
        source:
          url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard
          name: Open Portuguese LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BLUEX (No Images)
          type: eduagarcia-temp/BLUEX_without_images
          split: train
          args:
            num_few_shot: 3
        metrics:
          - type: acc
            value: 23.23
            name: accuracy
        source:
          url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard
          name: Open Portuguese LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: OAB Exams
          type: eduagarcia/oab_exams
          split: train
          args:
            num_few_shot: 3
        metrics:
          - type: acc
            value: 25.47
            name: accuracy
        source:
          url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard
          name: Open Portuguese LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Assin2 RTE
          type: assin2
          split: test
          args:
            num_few_shot: 15
        metrics:
          - type: f1_macro
            value: 56.27
            name: f1-macro
        source:
          url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard
          name: Open Portuguese LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Assin2 STS
          type: eduagarcia/portuguese_benchmark
          split: test
          args:
            num_few_shot: 10
        metrics:
          - type: pearson
            value: 1.93
            name: pearson
        source:
          url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard
          name: Open Portuguese LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: FaQuAD NLI
          type: ruanchaves/faquad-nli
          split: test
          args:
            num_few_shot: 15
        metrics:
          - type: f1_macro
            value: 43.97
            name: f1-macro
        source:
          url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard
          name: Open Portuguese LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HateBR Binary
          type: ruanchaves/hatebr
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: f1_macro
            value: 29.49
            name: f1-macro
        source:
          url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard
          name: Open Portuguese LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: PT Hate Speech Binary
          type: hate_speech_portuguese
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: f1_macro
            value: 41.98
            name: f1-macro
        source:
          url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard
          name: Open Portuguese LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: tweetSentBR
          type: eduagarcia-temp/tweetsentbr
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: f1_macro
            value: 58
            name: f1-macro
        source:
          url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard
          name: Open Portuguese LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: ARC-Challenge (PT)
          type: arc_pt
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 30.43
            name: normalized accuracy
        source:
          url: https://github.com/nlp-uoregon/mlmm-evaluation
          name: Evaluation Framework for Multilingual Large Language Models
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (PT)
          type: hellaswag_pt
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 47.17
            name: normalized accuracy
        source:
          url: https://github.com/nlp-uoregon/mlmm-evaluation
          name: Evaluation Framework for Multilingual Large Language Models
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA
          type: truthfulqa_pt
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 39.3
            name: bleurt
        source:
          url: https://github.com/nlp-uoregon/mlmm-evaluation
          name: Evaluation Framework for Multilingual Large Language Models
---

# noxinc/Tucano-2b4-Q4_K_M-GGUF

This model was converted to GGUF format from [TucanoBR/Tucano-2b4](https://huggingface.co/TucanoBR/Tucano-2b4) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/TucanoBR/Tucano-2b4) for more details on the model.
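If you prefer to fetch the quantized weights yourself rather than relying on llama.cpp's `--hf-repo` auto-download, the file is served at the Hub's standard `resolve` route. A minimal stdlib-only sketch (the URL pattern is Hugging Face's public file-resolution scheme; the actual download line is left commented out because the weights file is large):

```python
from urllib.parse import quote

repo_id = "noxinc/Tucano-2b4-Q4_K_M-GGUF"
filename = "tucano-2b4.Q4_K_M.gguf"

# The Hub serves raw repo files under /<repo>/resolve/<revision>/<file>
url = f"https://huggingface.co/{quote(repo_id)}/resolve/main/{quote(filename)}"
print(url)

# To actually fetch the weights, uncomment:
# import urllib.request
# urllib.request.urlretrieve(url, filename)
```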

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux).

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo noxinc/Tucano-2b4-Q4_K_M-GGUF --model tucano-2b4.Q4_K_M.gguf -p "A floresta da Amazônia é conhecida por sua"
```

Server:

```bash
llama-server --hf-repo noxinc/Tucano-2b4-Q4_K_M-GGUF --model tucano-2b4.Q4_K_M.gguf -c 2048
```
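With `llama-server` running, text can be generated over its HTTP API. Below is a minimal stdlib-only sketch against the server's native `/completion` endpoint, reusing the sampling settings suggested in the metadata above (the request is only constructed, not sent, so the snippet runs without a live server; uncomment the last lines once the server is up — host, port, and field names follow llama.cpp's server defaults):

```python
import json
import urllib.request

# Sampling parameters mirrored from this card's suggested inference settings
payload = {
    "prompt": "A floresta da Amazônia é conhecida por sua",
    "n_predict": 150,      # max new tokens
    "temperature": 0.2,
    "top_k": 20,
    "top_p": 0.2,
    "repeat_penalty": 1.2,
}

req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",  # llama-server's default host/port
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With the server running, send the request and print the generated text:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["content"])
```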

Note: You can also use this checkpoint directly through the usage steps listed in the [llama.cpp repo](https://github.com/ggerganov/llama.cpp).

```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
./llama-cli -m tucano-2b4.Q4_K_M.gguf -n 128 -p "A floresta da Amazônia é conhecida por sua"
```