thwri committed on
Commit b99f3e6 · verified · 1 Parent(s): 96aa01b

Update README.md

Files changed (1)
  1. README.md +70 -3
README.md CHANGED
@@ -1,3 +1,70 @@
- ---
- license: mit
- ---

---
license: mit
base_model:
- microsoft/Florence-2-large
datasets:
- Ejafa/ye-pop
tags:
- art
pipeline_tag: image-to-text
language:
- en
---
# microsoft/Florence-2-large tuned on Ejafa/ye-pop captioned with CogVLM2

This repository contains a fine-tuned version of the `microsoft/Florence-2-large` model. It was tuned on a 40,000-image subset of the `Ejafa/ye-pop` dataset, with captions generated by `THUDM/cogvlm2-llama3-chat-19B`.

## Training Details

- **Vision Encoder**: The vision encoder was frozen during training (see the setup sketch after this list).
- **Batch Size**: 64
- **Gradient Accumulation Steps**: 16
- **Learning Rate**: 5.12e-05
- **Optimizer**: AdamW
- **Scheduler**: polynomial
- **Epochs**: 7.37

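The training script itself is not included here, so the following is only a minimal sketch of how the configuration above could be set up with `transformers` and `torch`. The `vision_tower` parameter-name prefix, the warmup step count, and the total step count are assumptions for illustration, not values taken from the actual run.

```python
import torch
from transformers import AutoModelForCausalLM, get_polynomial_decay_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Florence-2-large", trust_remote_code=True
)

# Freeze the vision encoder ("vision_tower" is an assumed parameter-name prefix).
for name, param in model.named_parameters():
    if "vision_tower" in name:
        param.requires_grad = False

# AdamW over the remaining trainable parameters at the listed learning rate.
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=5.12e-5
)

# Polynomial decay schedule. With an effective batch of 64 * 16 = 1024 images,
# 40,000 images give ~39 optimizer steps per epoch, roughly 290 steps over 7.37 epochs.
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=290
)
```
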
## Dataset

The fine-tuning run used a 40,000-image subset of the `Ejafa/ye-pop` dataset. The dataset spans a wide range of subjects, which makes it a broad training ground for improving the model's captioning ability.

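The exact 40,000-image split is not published here. As an illustration only, a comparable subset could be drawn with the `datasets` library; the split name, seed, and selection below are arbitrary, not the ones used for this model.

```python
from datasets import load_dataset

# Load Ejafa/ye-pop and take an arbitrary 40,000-row subset.
ye_pop = load_dataset("Ejafa/ye-pop", split="train")
subset = ye_pop.shuffle(seed=42).select(range(40_000))
print(subset)
```
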
## Captioning

The captions were generated with `THUDM/cogvlm2-llama3-chat-19B`.

## Usage

To use this model, load it directly from the Hugging Face Hub:

```python
import requests
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = AutoModelForCausalLM.from_pretrained(
    "thwri/CogFlorence-2.1-Large", trust_remote_code=True
).to(device).eval()
processor = AutoProcessor.from_pretrained(
    "thwri/CogFlorence-2.1-Large", trust_remote_code=True
)

def run_example(task_prompt, image):
    # Florence-2 expects RGB input.
    if image.mode != "RGB":
        image = image.convert("RGB")
    inputs = processor(text=task_prompt, images=image, return_tensors="pt").to(device)
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
        num_beams=3,
        do_sample=True
    )
    # Keep special tokens so post_process_generation can parse the task output.
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
    parsed_answer = processor.post_process_generation(
        generated_text, task=task_prompt, image_size=(image.width, image.height)
    )
    return parsed_answer

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
result = run_example("<MORE_DETAILED_CAPTION>", image)
print(result)
# {'<MORE_DETAILED_CAPTION>': 'A vivid, sunlit photograph capturing a classic, turquoise Volkswagen Beetle parked on a cobblestone street. The car's sleek design is accentuated by its rounded shape, chrome detailing, and distinctive round headlights. The backdrop features a weathered yellow wall with two wooden doors, one of which is slightly ajar, revealing a glimpse of the interior. The scene is bathed in natural light, casting soft shadows and highlighting the textures of the car and the wall. The overall atmosphere is nostalgic, evoking a sense of a bygone era.'}
```
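
Building on the `run_example` helper above, captioning a local folder of images might look like the sketch below; the `./images` path is a placeholder.

```python
from pathlib import Path

from PIL import Image

# Caption every JPEG in a local folder with the fine-tuned model.
for path in sorted(Path("./images").glob("*.jpg")):
    caption = run_example("<MORE_DETAILED_CAPTION>", Image.open(path))["<MORE_DETAILED_CAPTION>"]
    print(f"{path.name}: {caption}")
```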