Commit 9add8e1 by sahilnishad
Parent: 206dfba

Update README.md

Files changed (1):
  1. README.md +1 -7
README.md CHANGED
@@ -19,17 +19,14 @@ tags:
 
 
 # Model Description
-
 Fine-tuned Florence-2 model on DocumentVQA dataset to perform question answering on document images
+- **[Github](https://github.com/sahilnishad/Fine-Tuning-Florence-2-DocumentVQA)**
 
-#
 # Get Started with the Model
-
 #### 1. Installation
 ```python
 !pip install torch transformers datasets flash_attn
 ```
-
 #### 2. Loading model and processor
 ```python
 import torch
@@ -40,7 +37,6 @@ processor = AutoProcessor.from_pretrained("sahilnishad/Florence-2-FT-DocVQA", tr
 device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
 model.to(device)
 ```
-
 #### 3. Running inference
 ```python
 def run_inference(task_prompt, question, image):
@@ -61,7 +57,6 @@ def run_inference(task_prompt, question, image):
     generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
     return generated_text
 ```
-
 #### 4. Example
 ```python
 from PIL import Image
@@ -75,7 +70,6 @@ print(run_inference("<DocVQA>", question, image))
 ```
 ---
 
-
 # BibTeX:
 ```bibtex
 @misc{sahilnishad_florence_2_ft_docvqa,
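The hunks above elide most of the model-loading cell between `import torch` and `device = ...`. A minimal sketch of what that cell plausibly contains, assuming the standard `transformers` auto classes; the completion of the truncated `tr…` argument in the hunk header as `trust_remote_code=True` is an assumption, not something this commit shows:

```python
def load_florence2(model_id="sahilnishad/Florence-2-FT-DocVQA"):
    """Load the fine-tuned Florence-2 checkpoint and its processor (sketch)."""
    # Imports kept inside the function so the sketch can be read/imported
    # without torch or transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoProcessor

    # trust_remote_code=True is assumed: Florence-2 ships custom modeling code.
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

    # These two lines match the context shown in the second hunk.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)
    return model, processor, device
```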
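The third hunk likewise elides the body of `run_inference` between the `def` line and the decode/return lines. A rough sketch of the usual Florence-2 inference shape; the prompt concatenation, the generation arguments (`max_new_tokens`, `num_beams`), and the explicit `model`/`processor`/`device` parameters (instead of notebook globals) are assumptions for illustration:

```python
def run_inference(task_prompt, question, image, model, processor, device):
    """Answer a question about a document image (hypothetical reconstruction)."""
    # Florence-2 task prompts are concatenated with the user question (assumed).
    prompt = task_prompt + question
    inputs = processor(text=prompt, images=image, return_tensors="pt").to(device)
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
        num_beams=3,
    )
    # These two lines match the context shown in the third hunk.
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
    return generated_text
```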