Evaluating Pre-trained Models on Task Datasets
###############################################
LAVIS provides pre-trained and finetuned models for off-the-shelf evaluation on task datasets.
Let's now walk through an example of evaluating a BLIP model on the captioning task, using the MSCOCO dataset.

.. _prep coco:

Preparing Datasets
******************
First, let's download the dataset. LAVIS provides `automatic downloading scripts` to help prepare
most of the public datasets. To download the MSCOCO dataset, simply run

.. code-block:: bash

    cd lavis/datasets/download_scripts && python download_coco.py

This will download the dataset to ``cache``, the default cache location used by LAVIS.

If you want to use a different cache location, you can specify it by updating ``cache_root`` in ``lavis/configs/default.yaml``.
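For instance, a rough sketch of what such an edit might look like (the exact key layout of ``default.yaml`` can differ between LAVIS versions, so inspect the file before changing it):

.. code-block:: bash

    # Illustrative only: point cache_root at a custom directory.
    # Check lavis/configs/default.yaml first to confirm the key name and quoting.
    sed -i 's|cache_root:.*|cache_root: "/data/lavis_cache"|' lavis/configs/default.yaml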

If you have a local copy of the dataset, it is recommended to create a symlink from the cache location to the local copy, e.g.

.. code-block:: bash

    ln -s /path/to/local/coco cache/coco
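
After creating the symlink, a quick sanity check helps confirm that LAVIS will find the data; the listing below is only illustrative, as the exact sub-directory layout may vary with the dataset version:

.. code-block:: bash

    # Verify that the cache entry resolves to the local copy (paths are illustrative).
    ls -l cache/coco
    ls cache/coco | head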

Evaluating pre-trained models
******************************

To evaluate the pre-trained model, simply run

.. code-block:: bash

    bash run_scripts/blip/eval/eval_coco_cap.sh

Or, to evaluate the large model variant:

.. code-block:: bash

    bash run_scripts/blip/eval/eval_coco_cap_large.sh
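
Under the hood, these run scripts typically wrap LAVIS's evaluation entry point with a task-specific config. A hedged sketch of an equivalent direct invocation is shown below; the config path and launcher arguments are illustrative and may differ in your checkout, so refer to the script itself for the exact command:

.. code-block:: bash

    # Illustrative only: launch distributed evaluation with an explicit config path.
    # Adjust --nproc_per_node to the number of available GPUs.
    python -m torch.distributed.run --nproc_per_node=8 evaluate.py \
        --cfg-path lavis/projects/blip/eval/caption_coco_eval.yaml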