manot committed on
Commit
849b995
1 Parent(s): e0b84a7

dataset uploaded by roboflow2huggingface package

README.dataset.txt ADDED
@@ -0,0 +1,6 @@
+ # football-players > 2023-06-12 2:07pm
+ https://universe.roboflow.com/konstantin-sargsyan-wucpb/football-players-2l81z
+
+ Provided by a Roboflow user
+ License: MIT
+
README.md ADDED
@@ -0,0 +1,94 @@
+ ---
+ task_categories:
+ - object-detection
+ tags:
+ - roboflow
+ - roboflow2huggingface
+ ---
+
+ <div align="center">
+   <img width="640" alt="manot/football-players" src="https://huggingface.co/datasets/manot/football-players/resolve/main/thumbnail.jpg">
+ </div>
+
+ ### Dataset Labels
+
+ ```
+ ['football', 'player']
+ ```
+
+ ### Number of Images
+
+ ```json
+ {"valid": 87, "train": 119}
+ ```
+
+ ### How to Use
+
+ - Install [datasets](https://pypi.org/project/datasets/):
+
+ ```bash
+ pip install datasets
+ ```
+
+ - Load the dataset:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("manot/football-players", name="full")
+ example = ds['train'][0]
+ ```
+
+ ### Roboflow Dataset Page
+ [https://universe.roboflow.com/konstantin-sargsyan-wucpb/football-players-2l81z/dataset/1](https://universe.roboflow.com/konstantin-sargsyan-wucpb/football-players-2l81z/dataset/1?ref=roboflow2huggingface)
+
+ ### Citation
+
+ ```
+ @misc{ football-players-2l81z_dataset,
+     title = { football-players Dataset },
+     type = { Open Source Dataset },
+     author = { Konstantin Sargsyan },
+     howpublished = { \url{ https://universe.roboflow.com/konstantin-sargsyan-wucpb/football-players-2l81z } },
+     url = { https://universe.roboflow.com/konstantin-sargsyan-wucpb/football-players-2l81z },
+     journal = { Roboflow Universe },
+     publisher = { Roboflow },
+     year = { 2023 },
+     month = { jun },
+     note = { visited on 2023-06-12 },
+ }
+ ```
+
+ ### License
+ MIT
+
+ ### Dataset Summary
+ This dataset was exported via roboflow.com on June 12, 2023 at 10:10 AM GMT.
+
+ Roboflow is an end-to-end computer vision platform that helps you:
+ * collaborate with your team on computer vision projects
+ * collect & organize images
+ * understand and search unstructured image data
+ * annotate images and create datasets
+ * export, train, and deploy computer vision models
+ * use active learning to improve your dataset over time
+
+ For state-of-the-art computer vision training notebooks you can use with this dataset, visit https://github.com/roboflow/notebooks
+
+ To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
+
+ The dataset includes 206 images. Players are annotated in COCO format.
+
+ The following pre-processing was applied to each image:
+ * Auto-orientation of pixel data (with EXIF-orientation stripping)
+ * Resize to 640x640 (stretch)
+
+ No image augmentation techniques were applied.
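Each record returned by `load_dataset` follows the features schema declared in the loading script: `image_id`, `image`, `width`, `height`, and an `objects` sequence of boxes with `ClassLabel` indices into `['football', 'player']`. As a minimal sketch of working with that schema — using a hand-constructed record with illustrative values rather than a live download — per-image label counts can be tallied like this:

```python
from collections import Counter

# Hypothetical record shaped like one example of the "full" config
# (values are illustrative; a real record comes from load_dataset).
example = {
    "image_id": 0,
    "width": 640,
    "height": 640,
    "objects": {
        "id": [1, 2, 3],
        "area": [1200, 900, 4000],
        "bbox": [[10.0, 20.0, 30.0, 40.0],
                 [50.0, 60.0, 30.0, 30.0],
                 [0.0, 0.0, 100.0, 40.0]],
        "category": [0, 1, 1],  # indices into ['football', 'player']
    },
}

categories = ["football", "player"]
# Map each ClassLabel index back to its name, then count.
label_counts = Counter(categories[c] for c in example["objects"]["category"])
print(label_counts)  # Counter({'player': 2, 'football': 1})
```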
README.roboflow.txt ADDED
@@ -0,0 +1,29 @@
+
+ football-players - v1 2023-06-12 2:07pm
+ ==============================
+
+ This dataset was exported via roboflow.com on June 12, 2023 at 10:10 AM GMT
+
+ Roboflow is an end-to-end computer vision platform that helps you:
+ * collaborate with your team on computer vision projects
+ * collect & organize images
+ * understand and search unstructured image data
+ * annotate images and create datasets
+ * export, train, and deploy computer vision models
+ * use active learning to improve your dataset over time
+
+ For state-of-the-art computer vision training notebooks you can use with this dataset, visit https://github.com/roboflow/notebooks
+
+ To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
+
+ The dataset includes 206 images.
+ Players are annotated in COCO format.
+
+ The following pre-processing was applied to each image:
+ * Auto-orientation of pixel data (with EXIF-orientation stripping)
+ * Resize to 640x640 (stretch)
+
+ No image augmentation techniques were applied.
+
data/train.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dbfbbda649e89ba2bcc3d25af16e627235f3068db66291246b313418824c8e70
+ size 14337165
data/valid-mini.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6036f809d586b67aa65c224eb4bca760b5eb6711bf7d5a44263adfe76fd3c270
+ size 189551
data/valid.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:40e364f35446a12abcb19bf809a8d478bff08cb5844a9497d465752cc7d077b5
+ size 4744421
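The three `data/*.zip` entries above are Git LFS pointer files rather than the archives themselves: each records the spec version, a `sha256` oid, and the byte size of the real object stored in LFS. A small sketch of parsing that pointer format (the pointer text below is copied from `data/valid.zip`):

```python
pointer_text = """\
version https://git-lfs.github.com/spec/v1
oid sha256:40e364f35446a12abcb19bf809a8d478bff08cb5844a9497d465752cc7d077b5
size 4744421
"""

# Each pointer line is "key value"; split on the first space only,
# since the value itself may contain no further structure we care about.
pointer = dict(line.split(" ", 1) for line in pointer_text.strip().splitlines())
algo, digest = pointer["oid"].split(":", 1)
size_bytes = int(pointer["size"])
print(algo, size_bytes)  # sha256 4744421
```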
football-players.py ADDED
@@ -0,0 +1,144 @@
+ import collections
+ import json
+ import os
+
+ import datasets
+
+
+ _HOMEPAGE = "https://universe.roboflow.com/konstantin-sargsyan-wucpb/football-players-2l81z/dataset/1"
+ _LICENSE = "MIT"
+ _CITATION = """\
+ @misc{ football-players-2l81z_dataset,
+     title = { football-players Dataset },
+     type = { Open Source Dataset },
+     author = { Konstantin Sargsyan },
+     howpublished = { \\url{ https://universe.roboflow.com/konstantin-sargsyan-wucpb/football-players-2l81z } },
+     url = { https://universe.roboflow.com/konstantin-sargsyan-wucpb/football-players-2l81z },
+     journal = { Roboflow Universe },
+     publisher = { Roboflow },
+     year = { 2023 },
+     month = { jun },
+     note = { visited on 2023-06-12 },
+ }
+ """
+ _CATEGORIES = ['football', 'player']
+ _ANNOTATION_FILENAME = "_annotations.coco.json"
+
+
+ class FOOTBALLPLAYERSConfig(datasets.BuilderConfig):
+     """Builder Config for football-players"""
+
+     def __init__(self, data_urls, **kwargs):
+         """
+         BuilderConfig for football-players.
+
+         Args:
+             data_urls: `dict`, name to url to download the zip file from.
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(FOOTBALLPLAYERSConfig, self).__init__(version=datasets.Version("1.0.0"), **kwargs)
+         self.data_urls = data_urls
+
+
+ class FOOTBALLPLAYERS(datasets.GeneratorBasedBuilder):
+     """football-players object detection dataset"""
+
+     VERSION = datasets.Version("1.0.0")
+     BUILDER_CONFIGS = [
+         FOOTBALLPLAYERSConfig(
+             name="full",
+             description="Full version of football-players dataset.",
+             data_urls={
+                 "train": "https://huggingface.co/datasets/manot/football-players/resolve/main/data/train.zip",
+                 "validation": "https://huggingface.co/datasets/manot/football-players/resolve/main/data/valid.zip",
+             },
+         ),
+         FOOTBALLPLAYERSConfig(
+             name="mini",
+             description="Mini version of football-players dataset.",
+             data_urls={
+                 "train": "https://huggingface.co/datasets/manot/football-players/resolve/main/data/valid-mini.zip",
+                 "validation": "https://huggingface.co/datasets/manot/football-players/resolve/main/data/valid-mini.zip",
+             },
+         ),
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "image_id": datasets.Value("int64"),
+                 "image": datasets.Image(),
+                 "width": datasets.Value("int32"),
+                 "height": datasets.Value("int32"),
+                 "objects": datasets.Sequence(
+                     {
+                         "id": datasets.Value("int64"),
+                         "area": datasets.Value("int64"),
+                         "bbox": datasets.Sequence(datasets.Value("float32"), length=4),
+                         "category": datasets.ClassLabel(names=_CATEGORIES),
+                     }
+                 ),
+             }
+         )
+         return datasets.DatasetInfo(
+             features=features,
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+             license=_LICENSE,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_files = dl_manager.download_and_extract(self.config.data_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "folder_dir": data_files["train"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "folder_dir": data_files["validation"],
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, folder_dir):
+         def process_annot(annot, category_id_to_category):
+             # Keep only the fields declared in _info(), mapping the COCO
+             # category id to its class name.
+             return {
+                 "id": annot["id"],
+                 "area": annot["area"],
+                 "bbox": annot["bbox"],
+                 "category": category_id_to_category[annot["category_id"]],
+             }
+
+         idx = 0
+
+         annotation_filepath = os.path.join(folder_dir, _ANNOTATION_FILENAME)
+         with open(annotation_filepath, "r") as f:
+             annotations = json.load(f)
+         category_id_to_category = {category["id"]: category["name"] for category in annotations["categories"]}
+         image_id_to_annotations = collections.defaultdict(list)
+         for annot in annotations["annotations"]:
+             image_id_to_annotations[annot["image_id"]].append(annot)
+         filename_to_image = {image["file_name"]: image for image in annotations["images"]}
+
+         for filename in os.listdir(folder_dir):
+             filepath = os.path.join(folder_dir, filename)
+             if filename in filename_to_image:
+                 image = filename_to_image[filename]
+                 objects = [
+                     process_annot(annot, category_id_to_category) for annot in image_id_to_annotations[image["id"]]
+                 ]
+                 with open(filepath, "rb") as f:
+                     image_bytes = f.read()
+                 yield idx, {
+                     "image_id": image["id"],
+                     "image": {"path": filepath, "bytes": image_bytes},
+                     "width": image["width"],
+                     "height": image["height"],
+                     "objects": objects,
+                 }
+                 idx += 1
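The core of `_generate_examples` is plain COCO bookkeeping: map category ids to names, group annotations by `image_id`, then emit one record per image. That logic can be exercised on a tiny in-memory COCO-style dict (values below are hand-written for illustration, mirroring the shape of `_annotations.coco.json`):

```python
import collections

# Hypothetical miniature COCO annotations (illustrative, not from the dataset).
annotations = {
    "categories": [{"id": 0, "name": "football"}, {"id": 1, "name": "player"}],
    "images": [{"id": 7, "file_name": "frame_001.jpg", "width": 640, "height": 640}],
    "annotations": [
        {"id": 1, "image_id": 7, "category_id": 1, "area": 900, "bbox": [50.0, 60.0, 30.0, 30.0]},
        {"id": 2, "image_id": 7, "category_id": 0, "area": 400, "bbox": [10.0, 20.0, 20.0, 20.0]},
    ],
}

# The same two lookup tables the builder constructs.
category_id_to_category = {c["id"]: c["name"] for c in annotations["categories"]}
image_id_to_annotations = collections.defaultdict(list)
for annot in annotations["annotations"]:
    image_id_to_annotations[annot["image_id"]].append(annot)

# Per-image "objects" list, as yielded for image id 7.
objects = [
    {"id": a["id"], "area": a["area"], "bbox": a["bbox"],
     "category": category_id_to_category[a["category_id"]]}
    for a in image_id_to_annotations[7]
]
print([o["category"] for o in objects])  # ['player', 'football']
```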
manot.py ADDED
@@ -0,0 +1,144 @@
+ import collections
+ import json
+ import os
+
+ import datasets
+
+
+ _HOMEPAGE = "https://universe.roboflow.com/konstantin-sargsyan-wucpb/football-players-2l81z/dataset/1"
+ _LICENSE = "MIT"
+ _CITATION = """\
+ @misc{ football-players-2l81z_dataset,
+     title = { football-players Dataset },
+     type = { Open Source Dataset },
+     author = { Konstantin Sargsyan },
+     howpublished = { \\url{ https://universe.roboflow.com/konstantin-sargsyan-wucpb/football-players-2l81z } },
+     url = { https://universe.roboflow.com/konstantin-sargsyan-wucpb/football-players-2l81z },
+     journal = { Roboflow Universe },
+     publisher = { Roboflow },
+     year = { 2023 },
+     month = { jun },
+     note = { visited on 2023-06-12 },
+ }
+ """
+ _CATEGORIES = ['football', 'player']
+ _ANNOTATION_FILENAME = "_annotations.coco.json"
+
+
+ class MANOTConfig(datasets.BuilderConfig):
+     """Builder Config for manot"""
+
+     def __init__(self, data_urls, **kwargs):
+         """
+         BuilderConfig for manot.
+
+         Args:
+             data_urls: `dict`, name to url to download the zip file from.
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(MANOTConfig, self).__init__(version=datasets.Version("1.0.0"), **kwargs)
+         self.data_urls = data_urls
+
+
+ class MANOT(datasets.GeneratorBasedBuilder):
+     """manot object detection dataset"""
+
+     VERSION = datasets.Version("1.0.0")
+     BUILDER_CONFIGS = [
+         MANOTConfig(
+             name="full",
+             description="Full version of manot dataset.",
+             data_urls={
+                 "train": "https://huggingface.co/datasets/manot/resolve/main/data/train.zip",
+                 "validation": "https://huggingface.co/datasets/manot/resolve/main/data/valid.zip",
+             },
+         ),
+         MANOTConfig(
+             name="mini",
+             description="Mini version of manot dataset.",
+             data_urls={
+                 "train": "https://huggingface.co/datasets/manot/resolve/main/data/valid-mini.zip",
+                 "validation": "https://huggingface.co/datasets/manot/resolve/main/data/valid-mini.zip",
+             },
+         ),
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "image_id": datasets.Value("int64"),
+                 "image": datasets.Image(),
+                 "width": datasets.Value("int32"),
+                 "height": datasets.Value("int32"),
+                 "objects": datasets.Sequence(
+                     {
+                         "id": datasets.Value("int64"),
+                         "area": datasets.Value("int64"),
+                         "bbox": datasets.Sequence(datasets.Value("float32"), length=4),
+                         "category": datasets.ClassLabel(names=_CATEGORIES),
+                     }
+                 ),
+             }
+         )
+         return datasets.DatasetInfo(
+             features=features,
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+             license=_LICENSE,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_files = dl_manager.download_and_extract(self.config.data_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "folder_dir": data_files["train"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "folder_dir": data_files["validation"],
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, folder_dir):
+         def process_annot(annot, category_id_to_category):
+             # Keep only the fields declared in _info(), mapping the COCO
+             # category id to its class name.
+             return {
+                 "id": annot["id"],
+                 "area": annot["area"],
+                 "bbox": annot["bbox"],
+                 "category": category_id_to_category[annot["category_id"]],
+             }
+
+         idx = 0
+
+         annotation_filepath = os.path.join(folder_dir, _ANNOTATION_FILENAME)
+         with open(annotation_filepath, "r") as f:
+             annotations = json.load(f)
+         category_id_to_category = {category["id"]: category["name"] for category in annotations["categories"]}
+         image_id_to_annotations = collections.defaultdict(list)
+         for annot in annotations["annotations"]:
+             image_id_to_annotations[annot["image_id"]].append(annot)
+         filename_to_image = {image["file_name"]: image for image in annotations["images"]}
+
+         for filename in os.listdir(folder_dir):
+             filepath = os.path.join(folder_dir, filename)
+             if filename in filename_to_image:
+                 image = filename_to_image[filename]
+                 objects = [
+                     process_annot(annot, category_id_to_category) for annot in image_id_to_annotations[image["id"]]
+                 ]
+                 with open(filepath, "rb") as f:
+                     image_bytes = f.read()
+                 yield idx, {
+                     "image_id": image["id"],
+                     "image": {"path": filepath, "bytes": image_bytes},
+                     "width": image["width"],
+                     "height": image["height"],
+                     "objects": objects,
+                 }
+                 idx += 1
split_name_to_num_samples.json ADDED
@@ -0,0 +1 @@
+ {"valid": 87, "train": 119}
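The split counts in `split_name_to_num_samples.json` agree with the 206-image total stated in the READMEs (119 train + 87 valid). A quick sanity check:

```python
import json

# Contents of split_name_to_num_samples.json, inlined here for illustration.
split_counts = json.loads('{"valid": 87, "train": 119}')
total = sum(split_counts.values())
print(total)  # 206
```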
thumbnail.jpg ADDED

Git LFS Details

  • SHA256: 50175b6a627246d4d7ed1c84127e165561d238bd8f9dd7e96a82ad3e05b70855
  • Pointer size: 131 Bytes
  • Size of remote file: 154 kB