Bad generated text using TGI

#1
by erfanium - opened

This is how I deployed TGI with this model:

```yaml
version: "3.8"
services:
  hftgi:
    container_name: hftgi
    restart: always
    image: ghcr.io/huggingface/text-generation-inference:2.0.1
    ports:
      - 8080:80
    command: --model-id TechxGenus/starcoder2-7b-AWQ --quantize=awq --max-best-of=1 --max-stop-sequences=100 --max-input-length=4096 --max-total-tokens=4224
    volumes:
      - "./data:/data"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

When I call `POST /generate` with this request body:

```json
{
  "inputs": "def hello_world:",
  "parameters": {
    "best_of": 1,
    "frequency_penalty": 0.1,
    "max_new_tokens": 120,
    "repetition_penalty": 1.03,
    "return_full_text": true,
    "stop": [
      "\n"
    ],
    "temperature": 0.1
  }
}
```
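For reference, the same request can be reproduced from a script. The sketch below builds the exact payload shown above and posts it to the `/generate` endpoint; `http://localhost:8080` is an assumption based on the port mapping in the compose file, and only the standard library is used.

```python
import json
import urllib.request

# Assumed endpoint, derived from the "8080:80" port mapping above.
TGI_URL = "http://localhost:8080/generate"


def build_payload(prompt: str) -> dict:
    """Build the same request body used in the report above."""
    return {
        "inputs": prompt,
        "parameters": {
            "best_of": 1,
            "frequency_penalty": 0.1,
            "max_new_tokens": 120,
            "repetition_penalty": 1.03,
            "return_full_text": True,
            "stop": ["\n"],
            "temperature": 0.1,
        },
    }


def generate(prompt: str) -> str:
    """POST the payload to TGI and return the generated text."""
    req = urllib.request.Request(
        TGI_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["generated_text"]
```

Calling `generate("def hello_world:")` against the container above reproduces the garbled output shown below.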

Here is the generated text:

```
def hello_world:ustenia��百lage長长度ination传递 Commons首GOOGLE��artaForwardedobotDispatchcachingCompareooารvolvedaut藏tributesturesENU passagedoporiaMetaistan�ulyazardProvidedи autwho山irtualership备 dopster�ensure moriresuls forever��StripooooaraIMEN NOIobiMatcherJECTowane售nodisDispositionautoloadFxArtuí strianuorp Immutable sampleHUDRIDUST clsDI culpa得uth wait�Waittendeistant径Ly! стр successfullyка成 fanIGN�Compression din oncegetHeadergsIS stubNid�Formattinginated永堠 'idlIMPLEMENTArgumentError unique
```

See https://github.com/huggingface/transformers/issues/30225.
This bug has not been fixed in the main branch yet. You can consider using vLLM for deployment instead.
