#!/usr/bin/env python3

from finna_client import FinnaClient, IMAGE_BASE
import requests
import json
import os
import os.path
import logging

finna = FinnaClient()
logging.basicConfig(format='%(asctime)s - %(message)s', level=logging.INFO)

MAX_YEAR = 1917  # retrieve only images from this year or earlier
IMAGE_DIR = 'images'

# Facet filters that define the set of images we want to download
FILTERS = [
    '~format_ext_str_mv:"1/Image/Photo/"',	# Photographs
    '~building:"0/HKM/"',	# from HKM (Helsinki City Museum)
    '~usage_rights_ext_str_mv:"0/B BY/"',	# CC By license
    f'search_daterange_mv:"overlap|[* TO {MAX_YEAR}]"', # date up to MAX_YEAR
]
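
# For reference, a minimal sketch of the query these filters produce: the
# Finna REST API at https://api.finna.fi/v1/search accepts repeated filter[]
# and field[] query parameters. Illustrative only -- this helper is not used
# by the script, and finna_client builds the real request.
def build_search_params(filters, fields, page=1):
    """Return query parameters for a Finna /v1/search request (sketch)."""
    return {'filter[]': filters, 'field[]': fields, 'page': page}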

# Fields to include in the metadata records
FIELDS = [
    'formats',
    'id',
    'imageRights',
    'images',
    'languages',
    'nonPresenterAuthors',
    'onlineUrls',
    'presenters',
    'rating',
    'series',
    'subjects',
    'title',
    'year',
    'rawData',
]

def download_url_to_file(url, filename):
    if os.path.exists(filename) and os.stat(filename).st_size > 0:
        return  # file already exists, no need to re-download
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192): 
                f.write(chunk)
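
# Network hiccups are common on long crawls; below is a hedged sketch of a
# generic retry wrapper. The name, attempt count, and exponential backoff are
# illustrative choices, not part of the original script. Usage would look
# like: with_retries(lambda: download_url_to_file(url, filename))
import time

def with_retries(func, attempts=3, delay=1.0):
    """Call func(); on requests.RequestException, retry with exponential backoff."""
    for attempt in range(attempts):
        try:
            return func()
        except requests.RequestException:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(delay * 2 ** attempt)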

os.makedirs(IMAGE_DIR, exist_ok=True)  # ensure the output directory exists

with open('metadata.jsonl', 'w') as outfile:
    page = 1

    while True:
        logging.info(f'Loading result page {page}')
        result = finna.search(filters=FILTERS, fields=FIELDS, page=page)
        if not result.get('records'):
            logging.info('No more records, stopping.')
            break
        logging.debug(f'Got {len(result["records"])} records')
        for record in result['records']:
            if not record.get('images'):
                logging.warning(f'Record {record["id"]} has no images, skipping')
                continue
            image_url = IMAGE_BASE + record['images'][0]
            image_file_out = os.path.join(IMAGE_DIR, record['id'] + '.jpg')
            download_url_to_file(image_url, image_file_out)
            record['file_name'] = image_file_out
            json.dump(record, outfile)
            print("", file=outfile)  # newline terminates each JSON Lines record
        page += 1
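
# The resulting metadata.jsonl can be read back one record per line; a
# minimal sketch of a reader, shown for reference only (this function is not
# called during the download run):
def read_jsonl(path):
    """Load a JSON Lines file into a list of dicts."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]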