haipingwu committed
Commit 43013a8
Parent: f0acedb

add 4k context length model

Files changed (3)
  1. README.md +3 -1
  2. config.json +1 -1
  3. pytorch_model.bin +2 -2
README.md CHANGED
@@ -10,6 +10,8 @@ tags:
 
 ## Model Summary
 
+**This is a continually pretrained version of the Florence-2-large model with a 4k context length. Only 0.1B samples were used for the continued pretraining, so the model may be undertrained. In addition, the OCR task now emits a line separator ('\n'). COCO OD AP: 39.8.**
+
 This Hub repository contains a Hugging Face `transformers` implementation of the Florence-2 model from Microsoft.
 
 Florence-2 is an advanced vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks. Florence-2 can interpret simple text prompts to perform tasks like captioning, object detection, and segmentation. It leverages our FLD-5B dataset, containing 5.4 billion annotations across 126 million images, to master multi-task learning. The model's sequence-to-sequence architecture enables it to excel in both zero-shot and fine-tuned settings, proving to be a competitive vision foundation model.
@@ -53,7 +55,7 @@ inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, to
 generated_ids = model.generate(
     input_ids=inputs["input_ids"],
     pixel_values=inputs["pixel_values"],
-    max_new_tokens=1024,
+    max_new_tokens=4096,
     num_beams=3,
     do_sample=False
 )
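
For reference, a minimal end-to-end sketch of running this 4k-context checkpoint, built around the usage snippet the hunk above modifies. The repo id and image URL are assumptions (the commit page doesn't show the repository name); substitute the actual checkpoint:

```python
import requests
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if device == "cuda" else torch.float32

# Assumed repo id; replace with the repository this commit belongs to.
model_id = "microsoft/Florence-2-large"

model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch_dtype, trust_remote_code=True
).to(device)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Per this commit, OCR output now separates lines with '\n'.
prompt = "<OCR>"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype)

generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=4096,  # raised from 1024 to match the new 4k context length
    num_beams=3,
    do_sample=False,
)
print(processor.batch_decode(generated_ids, skip_special_tokens=False)[0])
```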
config.json CHANGED
@@ -46,7 +46,7 @@
     "LABEL_1": 1,
     "LABEL_2": 2
   },
-  "max_position_embeddings": 1024,
+  "max_position_embeddings": 4096,
   "no_repeat_ngram_size": 3,
   "normalize_before": false,
   "num_hidden_layers": 12,
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a8b6ee6144f20a57200a0e3fab21f067d6cc77036262b72d9f6f7f4e556c8f15
-size 1543107459
+oid sha256:8b7d99c2ca930af3bcc4625df55c82b6bb372456280310b5189c519d6083a270
+size 1555689792
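
`pytorch_model.bin` is tracked with Git LFS, so the diff above touches only the pointer file: `oid` is the SHA-256 of the actual weights and `size` their byte count. The ~12 MB growth is consistent with a larger position-embedding table. A minimal sketch to verify a downloaded copy against the new pointer (the local file path is an assumption):

```python
import hashlib

# Expected values copied from the new LFS pointer above.
expected_oid = "8b7d99c2ca930af3bcc4625df55c82b6bb372456280310b5189c519d6083a270"
expected_size = 1555689792

h = hashlib.sha256()
size = 0
with open("pytorch_model.bin", "rb") as f:  # assumed local path
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)
        size += len(chunk)

assert size == expected_size, f"size mismatch: got {size}"
assert h.hexdigest() == expected_oid, "sha256 mismatch"
print("pytorch_model.bin matches the LFS pointer")
```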