bczhou committed
Commit 20abc8b
1 Parent(s): b83932c

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -43,7 +43,7 @@ url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/
 image = Image.open(requests.get(url, stream=True).raw)
 prompt = "USER: <image>\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT:"
 outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
-print(outputs)
+print(outputs[0])
 >>> {'generated_text': 'USER: \nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT: The label 15 represents lava, which is a type of volcanic rock.'}
 ```
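The reason for the `outputs[0]` change: a `transformers` image-to-text pipeline returns a *list* of result dicts, one per generated sequence, so printing `outputs` wraps the single result in list brackets. A minimal sketch of that return shape, with a simulated result dict standing in for a real model call (no model download; the prompt text is taken from the diff above):

```python
# Image-to-text pipelines in transformers return a list of dicts,
# one dict per generated sequence. Simulated here so the shape can be
# shown without loading the model; in real use `outputs` would come from:
#   outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
outputs = [
    {
        "generated_text": (
            "USER: \nWhat does the label 15 represent? "
            "(1) lava (2) core (3) tunnel (4) ash cloud\n"
            "ASSISTANT: The label 15 represents lava, "
            "which is a type of volcanic rock."
        )
    }
]

print(outputs[0])                    # the first (and only) result dict
print(outputs[0]["generated_text"])  # just the generated string
```

Indexing with `[0]` before printing is what the commit adds: it unwraps the single-element list so the reader sees the result dict rather than `[{...}]`.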