---
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: Human-Machine Interactions with a Wire Arc Additive Manufacturing Machine
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- human action recognition
- skeleton-based human action recognition
- joint skeletons
- human interaction
- cyber-physical-social systems
- digital twins
task_categories:
- video-classification
- time-series-forecasting
- other
task_ids: []
---
# Dataset Card for this Human-Machine Interaction Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Overview](#dataset-overview)
- [Motivation for this Dataset](#motivation-for-this-dataset)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Data Contents](#data-contents)
- [Data Frame](#data-frame)
- [Data Collection](#data-collection)
- [Machine of Focus and Facility](#machine-of-focus-and-facility)
- [Sensor and Data Modality](#sensor-and-data-modality)
- [A Note on Privacy](#a-note-on-privacy)
- [Additional Information and Analysis Techniques](#additional-information-and-analysis-techniques)
- [Action List](#action-list)
- [Skeleton Features](#skeleton-features)
- [Machine Learning Techniques](#machine-learning-techniques)
- [Note](#note)
- [Acknowledgements](#acknowledgements)
  - [Dataset Curators](#dataset-curators)
- [Funding and Support](#funding-and-support)
- [Citation](#citation)
## Dataset Overview
This dataset contains a collection of observed interactions between humans and an advanced manufacturing machine, specifically a Wire Arc Additive Manufacturing (WAAM) machine. The motivations for collecting this dataset, its contents, and some ideas for how to analyze and use it can be found below.
Additionally, the paper introducing this dataset is under review for publication in the American Society of Mechanical Engineers (ASME) Journal of Mechanical Design (JMD) special issue: “Cultivating Datasets for Engineering Design”. If accepted, the paper will be referenced here.
### Motivation for this Dataset
The engineering design process for any solution or product is essential to ensuring quality results and standards. However, this process can be tedious and require many iterations, especially when the product must be manufactured. If engineers and designers are disconnected from the realities of their available manufacturing capabilities, the mismatch between design specifications and production / supply chain abilities can force many redesign cycles. Design for Manufacturing (DfM) is a design approach that relies on accurate simulation and modeling of the available manufacturing processes and accounts for manufacturing constraints during design, reducing this redesign inefficiency. Improving the transparency between manufacturing and design requires methods to understand and quantify the various steps of the manufacturing process. Within this effort, one of the most difficult aspects of manufacturing to understand and quantify is the interaction between humans and machinery. While manufacturing is undergoing immense change due to automation technologies and robotics, humans still play a central role in operations, yet their behaviors and actions, and how these influence the manufacturing process, remain poorly understood. This dataset supports the understanding of humans in manufacturing by observing realistic interactions between humans and an advanced manufacturing machine.
### Supported Tasks
- `video-classification`: Using the provided sequences of depth-image frames and joint skeletons, machine learning techniques can be used to classify each sequence by the human action it captures.
### Languages
English
## Data Contents
This dataset comprises 3.87 hours of footage (209,230 frames of data at 15 FPS) representing a total of 1228 interactions captured over 6 months.
The depth images were captured with the Microsoft Azure Kinect DK sensor in NFOV mode (more details can be found on the [Azure Kinect Hardware Specs Website](https://learn.microsoft.com/en-us/azure/kinect-dk/hardware-specification)), and skeletons of the humans in each frame were extracted using the Azure Kinect Body Tracking SDK (found [here](https://microsoft.github.io/Azure-Kinect-Body-Tracking/release/1.1.x/index.html)).
### Data Frame
Each frame contains the following data points and labels (a short loading sketch follows the list):
* image: A 320x288 16-bit grayscale .png file of the depth image captured. This depth image is either from the outer machine perspective or the inner perspective according to the view label.
* frame(#): An integer (from 0 - 209230) representing a unique frame identifier number. The frames are numbered in chronological order.
* skeleton: An array of 32 3D coordinates. Each skeleton array captures 32 joints on the human body within the frame according to the Microsoft Azure Kinect Body Tracking SDK (linked above). For more information about the indexing of each joint, see this [Azure Kinect Joint Skeleton Webpage](https://learn.microsoft.com/en-us/azure/kinect-dk/body-joints).
* action_label: A label of which action the current frame is capturing. A list of all the label actions can be found below.
* location_label: A label of where on the machine the human is performing the interaction in the current frame.
* user_label: A label of the unique user ID given to the person in the frame. There are a total of 4 users (numbered 0 - 3), ordered by how frequently they use the machine: user 0 is the most frequent and user 3 the least.
* view_label: A label of which sensor perspective best captures the action in the frame (0 for outer perspective and 1 for inner).
* action_number: A label (0 - 1227) indicating which of the 1228 total actions a particular frame belongs to. The data originally consisted of 1228 depth video clips of each action from start to finish, and these videos were later split into individual frames. Since analyzing human actions usually needs temporal context, the action number allows all frames that make up a complete action to be grouped and ordered (in conjunction with the frame number or timestamp label).
* datetime: A timestamp of when the frame was captured. This allows frames and actions to be ordered, shows how much time passed between adjacent actions, and allows experimental sessions to be split by day. The ordering of actions, and which occur at the beginning or end of a day, provides useful context.
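As a small, hypothetical sketch of working with these fields, the example below loads the per-frame labels into a pandas DataFrame, reads one depth image, and reassembles the frames of a single action. The file names and column names are placeholders and assumptions about the storage layout, not guarantees about how the dataset is packaged.
```python
import numpy as np
import pandas as pd
from PIL import Image

# Hypothetical example: assumes the per-frame labels have been exported to a
# CSV whose columns match the field names above; "frames.csv" and the image
# path are placeholders, not the dataset's actual file layout.
frames = pd.read_csv("frames.csv", parse_dates=["datetime"])

# Read one 320x288 16-bit grayscale depth image into a NumPy array.
depth = np.array(Image.open("images/frame_000000.png"))
print(depth.shape, depth.dtype)  # expected: (288, 320), uint16

# Reassemble one complete interaction: every frame sharing an action_number,
# in chronological order, together with its action label.
clip = frames[frames["action_number"] == 0].sort_values("frame")
print(clip["action_label"].iloc[0], "-", len(clip), "frames")
```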
## Data Collection
### Machine of Focus and Facility
The machine being interacted with in this dataset is the Lincoln Electric Sculptprint RND Wire Arc Additive Manufacturing (WAAM) machine. The WAAM machine is a large-format metal 3D printer housed in a 2.2 m x 4.1 m x 2.3 m (L x W x H) chamber and includes a robotic welder arm that deposits molten metal filament onto a specially configured build plate in a layered fashion. We chose this machine as a starting point because it exemplifies a wide variety of human interactions. Actions range from very direct, hands-on actions like grinding down the metal build plate or refitting parts on the build plate to more indirect, hands-off actions like calibrating the robot arm with a joystick or using the digital control panel.
Additionally, the machine we studied was housed at Mill19, a manufacturing and robotics research facility run by the Manufacturing Future Institute (MFI) at Carnegie Mellon University. More about this machine and facility can be found at [MFI's page about the WAAM](https://engineering.cmu.edu/mfi/facilities/equipment-details/lincoln-electric-sculptprint-rnd.html).
### Sensor and Data Modality
For our data collection, we used 2 Microsoft Azure Kinect DK cameras (linked again [here](https://learn.microsoft.com/en-us/azure/kinect-dk/hardware-specification) for convenience). Because the WAAM machine has points of interaction both inside its welding chamber and outside, we installed the 2 Azure Kinect sensors so that one captures the ‘outer perspective’ and the other the ‘inner perspective’. While the Azure Kinect captures many modalities of data, we chose to focus on depth images (in narrow field-of-view ’NFOV’ mode) and human joint skeletons. These were captured at a rate of 15 frames per second (one frame every 1/15 of a second).
### A Note on Privacy
The choice to focus on only depth images and joint skeletons was made to preserve the privacy of the users being sensed. This is very important when observing humans in a largely shared environment, and equally so in industry or public infrastructure settings. If we can show that meaningful knowledge can be learned using privacy-preserving technologies, these technologies can be adopted more widely and used safely.
## Additional Information and Analysis Techniques
### Action List
The complete list of actions, each with a brief description:
* using_control_panel : Interfacing with machine start/stop controls and digital screen used for visualizing build files and configuring machine parameters.
* using_flexpendant_mounted : Flexpendant being used in its control mode for loading build parameters and viewing machine output logs.
* using_flexpendant_mobile : Flexpendant being used in its machine operation mode for moving the robotic arm with the attached joystick.
* inspecting_buildplate : Performing light build plate modifications and inspections before or after a build.
* preparing_buildplate : Clearing or moving build plate to set up next build.
* refit_buildplate : Completely switching out the build plate configuration for a new project.
* grinding_buildplate : Grinding down the new build plate to expose conductive metal and level the surface.
* toggle_lights : Turning the internal WAAM light on/off.
* open_door : Opening the WAAM door.
* close_door : Closing the WAAM door.
* turning_gas_knobs : Turning on/off shielding gas.
* adjusting_tool : Installing or modifying new/existing sensors on the robotic welder arm.
* wiring : Installing or adjusting wiring of tool sensors.
* donning_ppe : Users putting on personal protective equipment.
* doffing_ppe : Users taking off personal protective equipment.
* observing : Simply looking around or watching WAAM activity.
* walking : Simply walking around the WAAM.
### Skeleton Features
The skeleton data provided in each frame consists of an array of 32 joint coordinates in 3D space (x,y,z). The units of each coordinate value are in millimeters and the origin is the respective Kinect sensor capturing the particular frame (more on the coordinate system can be found on [the Azure Kinect webpage on the sensor coordinate system](https://learn.microsoft.com/en-us/azure/kinect-dk/coordinate-systems) and the [Body Tracking SDK’s webpage on joints](https://learn.microsoft.com/en-us/azure/kinect-dk/body-joints)).
While analysis techniques can be applied to these ‘raw’ coordinates, many hand-picked features can also be extracted from them. Some basic and popular examples (sketched in code after this list) include:
* Joint Coordinate Normalization: The coordinates from the skeletons can be normalized with respect to each other. Another technique is to choose a single joint at the center of the body as the ‘origin’ coordinate, then re-calculate the coordinates of every other joint relative to this central one.
* Joint Velocities: Calculated from the difference in a joint’s coordinates between consecutive frames (each frame is 1/15 of a second apart).
* Joint Angles: The angle formed at a specific joint by its adjacent limbs, computed trigonometrically from the vectors pointing from the joint of focus to its adjacent joints.
* Joint Distances: The Euclidean distance between two joints of interest in a given frame.
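A minimal NumPy sketch of a few of these features is below. It assumes the skeletons of one action have been stacked into an array of shape (T, 32, 3) in millimeters, with joints indexed as in the Body Tracking SDK (index 0 is the pelvis); the function names are purely illustrative.
```python
import numpy as np

FPS = 15  # the dataset is captured at 15 frames per second

def root_centered(skeletons, root_joint=0):
    """Re-express every joint relative to a chosen central 'origin' joint
    (index 0, the pelvis, in the Azure Kinect joint ordering)."""
    return skeletons - skeletons[:, root_joint:root_joint + 1, :]

def joint_velocities(skeletons):
    """Frame-to-frame displacement divided by the frame interval (1/15 s);
    returns an array of shape (T - 1, 32, 3) in mm per second."""
    return np.diff(skeletons, axis=0) * FPS

def joint_angle(skeletons, joint, neighbor_a, neighbor_b):
    """Angle (radians) formed at `joint` by the vectors to two adjacent joints."""
    v1 = skeletons[:, neighbor_a] - skeletons[:, joint]
    v2 = skeletons[:, neighbor_b] - skeletons[:, joint]
    cos = np.sum(v1 * v2, axis=-1) / (
        np.linalg.norm(v1, axis=-1) * np.linalg.norm(v2, axis=-1) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def joint_distance(skeletons, joint_a, joint_b):
    """Euclidean distance (mm) between two joints in each frame."""
    return np.linalg.norm(skeletons[:, joint_a] - skeletons[:, joint_b], axis=-1)
```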
### Machine Learning Techniques
Human action recognition often utilizes deep learning techniques to analyze and identify patterns in human actions, since these models are well suited to analyzing data both temporally and spatially. Some popular deep learning models include (a minimal classification sketch follows the list):
* Long Short-Term Memory (LSTM) : This deep learning model is a type of recurrent neural network (RNN) designed to avoid the vanishing gradient problem and tailored to temporal / sequential data, remaining robust to large or small gaps between the important pieces of information distributed through a sequence.
* Convolutional Neural Network (CNN) : A powerful image-based model that can extract visual features from complex imagery.
* Graph Convolutional Networks (GCN) : A convolutional model applied over a defined / specialized graph structure rather than an array of pixels. A specific example is the Spatial-Temporal GCN (STGCN), which is widely used for skeleton-based human action recognition.
* Autoencoding : An unsupervised learning technique that can be used to learn sets of patterns and features shared by data. This can be particularly powerful for clustering data and quantifying differences between particular actions. It is also useful for dimensionality reduction, representing the data with a smaller set of features than the original.
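As an illustration of the first model type, here is a minimal PyTorch sketch (not the model used by the dataset authors) of an LSTM classifier that maps a sequence of flattened joint coordinates to one of the 17 action labels listed above; the layer sizes and clip length are arbitrary choices.
```python
import torch
import torch.nn as nn

class SkeletonLSTM(nn.Module):
    """Classify a skeleton sequence shaped (batch, T, 32 * 3) into an action."""
    def __init__(self, num_joints=32, hidden_size=128, num_classes=17):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_joints * 3,
                            hidden_size=hidden_size,
                            num_layers=2,
                            batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                    # x: (batch, T, 96)
        out, _ = self.lstm(x)                # out: (batch, T, hidden_size)
        return self.classifier(out[:, -1])   # logits from the last time step

model = SkeletonLSTM()
dummy = torch.randn(4, 45, 32 * 3)  # 4 clips of 45 frames (3 s at 15 FPS)
logits = model(dummy)                # (4, 17) action scores
```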
## Note
Owing to the large size of the dataset, some users on the Hugging Face platform have experienced the error “Job manager crashed while running this job.” To avoid this problem, it is recommended to download the dataset in batches.
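One way to do this, sketched below with the `huggingface_hub` library, is to call `snapshot_download` several times with file-pattern filters; the repository id and glob patterns shown are placeholders that should be replaced with this repository's actual id and file layout.
```python
from huggingface_hub import snapshot_download

# Hypothetical sketch: fetch the dataset in smaller batches by filtering file
# patterns. "<namespace>/<dataset-name>" is a placeholder for this repository's
# id, and the glob patterns are assumptions about the file layout.
snapshot_download(
    repo_id="<namespace>/<dataset-name>",
    repo_type="dataset",
    allow_patterns=["*.csv"],         # e.g. label files first
    local_dir="./waam_hmi",
)
snapshot_download(
    repo_id="<namespace>/<dataset-name>",
    repo_type="dataset",
    allow_patterns=["images/000*"],   # then one slice of images at a time
    local_dir="./waam_hmi",
)
```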
## Acknowledgements
### Dataset Curators
This dataset was collected by John Martins with the guidance of Katherine Flanigan and Christopher McComb.
The corresponding paper was written by John Martins, Katherine Flanigan, and Christopher McComb.
### Funding and Support
We thank Carnegie Mellon’s Manufacturing Futures Institute for graciously funding and supporting the efforts to collect this data. We also want to thank Mill19 for granting access to their facilities and allowing us to install sensors. Lastly, we would like to thank the users of the WAAM machine for allowing us to collect data on their use of the machine over the 6-month data collection period.
### Citation
As mentioned above, the paper introducing this dataset is under review for publication in the American Society of Mechanical Engineers (ASME) Journal of Mechanical Design (JMD) special issue: “Cultivating Datasets for Engineering Design”. If accepted, the paper will be referenced here.