parquet-converter committed
Commit f9d577e
1 Parent(s): 6c1a128

Update parquet files
.gitattributes DELETED
@@ -1,37 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
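The deleted .gitattributes above routed every matching file through Git LFS. As a rough illustration of which filenames those glob patterns catch, here is a minimal sketch using Python's `fnmatch` on a hand-picked subset of the patterns (note: real gitattributes matching has extra rules, e.g. for `**` and path components, so `fnmatch` is only an approximation for the simple `*.ext` entries):

```python
from fnmatch import fnmatch

# A subset of the LFS patterns from the deleted .gitattributes.
LFS_PATTERNS = ["*.parquet", "*.arrow", "*.bin", "*.zip", "*.wav"]

def tracked_by_lfs(filename: str) -> bool:
    """Return True if the filename matches one of the LFS glob patterns."""
    return any(fnmatch(filename, pat) for pat in LFS_PATTERNS)

print(tracked_by_lfs("parquet-test.parquet"))  # True
print(tracked_by_lfs("README.md"))             # False
```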
 
data/test-00000-of-00001.parquet → Bingsu--Human_Action_Recognition/parquet-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0e83e093cc872730cdce0ae86a3f7cf0192ef321542b667c94666d32f9e20c0d
- size 98055831
+ oid sha256:1ddd47c8113fa3bd1e212ba76622db6b2124008e2fe2199ca1eb034cae7450c0
+ size 98066420
data/train-00000-of-00001.parquet → Bingsu--Human_Action_Recognition/parquet-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9df1072cd3f947e2d1bbeeb154fdc96ab58911e63f6a74b676719150bf5cac7f
- size 229053743
+ oid sha256:0c602b9e3ce08abd257cbffd7f4f19dd0849fb18a600ab6f675f21ca103579b5
+ size 229097494
README.md DELETED
@@ -1,104 +0,0 @@
- ---
- language:
- - en
- license:
- - odbl
- pretty_name: Human Action Recognition
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - image-classification
- ---
-
- ## Dataset Description
- - **Homepage:** [Human Action Recognition (HAR) Dataset](https://www.kaggle.com/datasets/meetnagadia/human-action-recognition-har-dataset)
- - **Repository:** N/A
- - **Paper:** N/A
- - **Leaderboard:** N/A
- - **Point of Contact:** N/A
-
- ## Dataset Summary
- A dataset from [Kaggle](https://www.kaggle.com/datasets/meetnagadia/human-action-recognition-har-dataset). Origin: https://dphi.tech/challenges/data-sprint-76-human-activity-recognition/233/data
-
- ### Introduction
-
- - The dataset features 15 different classes of human activities.
- - The dataset contains about 12k+ labelled images, including the validation images.
- - Each image depicts a single human activity, and images are saved in separate folders named after their labelled classes.
-
- ### Problem Statement
-
- - Human Action Recognition (HAR) aims to understand human behavior and assign a label to each action. It has a wide range of applications and has therefore been attracting increasing attention in the field of computer vision. Human actions can be represented using various data modalities, such as RGB, skeleton, depth, infrared, point cloud, event stream, audio, acceleration, radar, and WiFi signals, which encode different sources of useful yet distinct information and have various advantages depending on the application scenario.
- - Consequently, many existing works have investigated different types of approaches to HAR using these modalities.
- - Your task is to build an image classification model using a CNN that classifies which class of activity a human is performing.
-
- ### About Files
-
- - Train - contains all the images to be used for training your model. In this folder you will find 15 folders, named 'calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listeningtomusic', 'running', 'sitting', 'sleeping', 'texting', and 'using_laptop', which contain the images of the respective human activities.
- - Test - contains 5400 images of human activities. For these images you are required to make predictions using the same class names: 'calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listeningtomusic', 'running', 'sitting', 'sleeping', 'texting', 'using_laptop'.
- - Testing_set.csv - the order of the predictions for each image to be submitted on the platform. Make sure the predictions you download keep each image's filename in the same order as given in this file.
- - sample_submission - a CSV file that contains the sample submission for the data sprint.
-
- ### Data Fields
- The data instances have the following fields:
- - `image`: a `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`) the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- - `labels`: an `int` classification label. All `test` data is labeled 0.
-
- ### Class Label Mappings:
- ```
- {
-     'calling': 0,
-     'clapping': 1,
-     'cycling': 2,
-     'dancing': 3,
-     'drinking': 4,
-     'eating': 5,
-     'fighting': 6,
-     'hugging': 7,
-     'laughing': 8,
-     'listening_to_music': 9,
-     'running': 10,
-     'sitting': 11,
-     'sleeping': 12,
-     'texting': 13,
-     'using_laptop': 14
- }
- ```
-
- ### Data Splits
- |               | train | test |
- |---------------|-------|-----:|
- | # of examples | 12600 | 5400 |
-
- ### Data Size
-
- - download: 311.96 MiB
- - generated: 312.59 MiB
- - total: 624.55 MiB
-
- ```pycon
- >>> from datasets import load_dataset
-
- >>> ds = load_dataset("Bingsu/Human_Action_Recognition")
- >>> ds
- DatasetDict({
-     test: Dataset({
-         features: ['image', 'labels'],
-         num_rows: 5400
-     })
-     train: Dataset({
-         features: ['image', 'labels'],
-         num_rows: 12600
-     })
- })
-
- >>> ds["train"].features
- {'image': Image(decode=True, id=None),
-  'labels': ClassLabel(num_classes=15, names=['calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listening_to_music', 'running', 'sitting', 'sleeping', 'texting', 'using_laptop'], id=None)}
-
- >>> ds["train"][0]
- {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=240x160>,
-  'labels': 11}
- ```
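The deleted README records the class-label mapping and shows that the first train example has `labels: 11`. A minimal sketch of turning that mapping into the id-to-name lookup needed to interpret the integer `labels` column (the `LABEL2ID`/`ID2LABEL` names are illustrative, not from the dataset card):

```python
# Class-label mapping copied from the README above.
LABEL2ID = {
    'calling': 0, 'clapping': 1, 'cycling': 2, 'dancing': 3,
    'drinking': 4, 'eating': 5, 'fighting': 6, 'hugging': 7,
    'laughing': 8, 'listening_to_music': 9, 'running': 10,
    'sitting': 11, 'sleeping': 12, 'texting': 13, 'using_laptop': 14,
}

# Invert it to decode integer labels back to class names.
ID2LABEL = {i: name for name, i in LABEL2ID.items()}

print(ID2LABEL[11])  # -> sitting (matches ds["train"][0] in the README)
```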
 
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"Bingsu--Human_Action_Recognition": {"description": "", "citation": "", "homepage": "https://www.kaggle.com/datasets/meetnagadia/human-action-recognition-har-dataset", "license": "odbl-1.0", "features": {"image": {"decode": true, "id": null, "_type": "Image"}, "labels": {"num_classes": 15, "names": ["calling", "clapping", "cycling", "dancing", "drinking", "eating", "fighting", "hugging", "laughing", "listening_to_music", "running", "sitting", "sleeping", "texting", "using_laptop"], "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": null, "config_name": null, "version": null, "splits": {"train": {"name": "train", "num_bytes": 228311969.4, "num_examples": 12600, "dataset_name": "Human_Action_Recognition"}, "test": {"name": "test", "num_bytes": 99464560.2, "num_examples": 5400, "dataset_name": "Human_Action_Recognition"}}, "download_checksums": null, "download_size": 327109574, "post_processing_size": null, "dataset_size": 327776529.6, "size_in_bytes": 654886103.6}}