| Title (string) | Abstract (string) | Status (string) | User (string) | text (string) | label (int64) | combined_text (string) | __index_level_0__ (int64) |
|---|---|---|---|---|---|---|---|
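Judging from the rows below, the `text`, `label`, and `combined_text` columns appear to be derived from the base columns: `text` concatenates `Title` and `Abstract` with a `" : "` separator, `label` encodes `Status` as 1 for Liked and 0 for Disliked, and `combined_text` prepends the `User` id with a `[SEP]` token. The sketch below illustrates that preprocessing under these assumptions; it uses pandas, and the helper name `build_derived_columns` is illustrative rather than the pipeline that actually produced this table.

```python
import pandas as pd

def build_derived_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Derive text, label, and combined_text from Title, Abstract, Status, and User.

    Illustrative sketch only; the actual preprocessing pipeline is not part of
    this table and may differ.
    """
    out = df.copy()
    # text: "<Title> : <Abstract>"
    out["text"] = out["Title"] + " : " + out["Abstract"]
    # label: 1 for Liked, 0 for Disliked
    out["label"] = (out["Status"] == "Liked").astype("int64")
    # combined_text: "<User> [SEP] <Title> : <Abstract>"
    out["combined_text"] = out["User"] + " [SEP] " + out["text"]
    return out

# Example with one (abbreviated) record from the table below
example = pd.DataFrame([{
    "Title": "Modern Deep Reinforcement Learning Algorithms",
    "Abstract": "Recent advances in Reinforcement Learning ...",
    "Status": "Liked",
    "User": "zrz@andrew.cmu.edu",
}])
print(build_derived_columns(example)[["text", "label", "combined_text"]])
```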
Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications
|
This tutorial aims to introduce the fundamentals of adversarial robustness of
deep learning, presenting a well-structured review of up-to-date techniques to
assess the vulnerability of various types of deep learning models to
adversarial examples. This tutorial will particularly highlight
state-of-the-art techniques in adversarial attacks and robustness verification
of deep neural networks (DNNs). We will also introduce some effective
countermeasures to improve the robustness of deep learning models, with a
particular focus on adversarial training. We aim to provide a comprehensive
overall picture about this emerging direction and enable the community to be
aware of the urgency and importance of designing robust deep learning models in
safety-critical data analytical applications, ultimately enabling the end-users
to trust deep learning classifiers. We will also summarize potential research
directions concerning the adversarial robustness of deep learning, and its
potential benefits to enable accountable and trustworthy deep learning-based
data analytical systems and applications.
|
Liked
|
zrz@andrew.cmu.edu
|
Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications : This tutorial aims to introduce the fundamentals of adversarial robustness of
deep learning, presenting a well-structured review of up-to-date techniques to
assess the vulnerability of various types of deep learning models to
adversarial examples. This tutorial will particularly highlight
state-of-the-art techniques in adversarial attacks and robustness verification
of deep neural networks (DNNs). We will also introduce some effective
countermeasures to improve the robustness of deep learning models, with a
particular focus on adversarial training. We aim to provide a comprehensive
overall picture about this emerging direction and enable the community to be
aware of the urgency and importance of designing robust deep learning models in
safety-critical data analytical applications, ultimately enabling the end-users
to trust deep learning classifiers. We will also summarize potential research
directions concerning the adversarial robustness of deep learning, and its
potential benefits to enable accountable and trustworthy deep learning-based
data analytical systems and applications.
| 1
|
zrz@andrew.cmu.edu [SEP] Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications : This tutorial aims to introduce the fundamentals of adversarial robustness of
deep learning, presenting a well-structured review of up-to-date techniques to
assess the vulnerability of various types of deep learning models to
adversarial examples. This tutorial will particularly highlight
state-of-the-art techniques in adversarial attacks and robustness verification
of deep neural networks (DNNs). We will also introduce some effective
countermeasures to improve the robustness of deep learning models, with a
particular focus on adversarial training. We aim to provide a comprehensive
overall picture about this emerging direction and enable the community to be
aware of the urgency and importance of designing robust deep learning models in
safety-critical data analytical applications, ultimately enabling the end-users
to trust deep learning classifiers. We will also summarize potential research
directions concerning the adversarial robustness of deep learning, and its
potential benefits to enable accountable and trustworthy deep learning-based
data analytical systems and applications.
| 205
|
Human-Like Active Learning: Machines Simulating the Human Learning Process
|
Although the use of active learning to increase learners' engagement has
recently been introduced in a variety of methods, empirical experiments are
lacking. In this study, we attempted to align two experiments in order to (1)
make a hypothesis for machines and (2) empirically confirm the effect of active
learning on learning. In Experiment 1, we compared the effect of a passive form
of learning to an active form of learning. The results showed that active
learning had greater learning outcomes than passive learning. In the machine
experiment based on the human result, we imitated the human active learning as
a form of knowledge distillation. The active learning framework performed
better than the passive learning framework. In the end, we showed not only that
we can build a better machine training framework from the human experiment
result, but also empirically confirm the result of the human experiment through
imitated machine experiments; human-like active learning has a crucial effect
on learning performance.
|
Disliked
|
zrz@andrew.cmu.edu
|
Human-Like Active Learning: Machines Simulating the Human Learning Process : Although the use of active learning to increase learners' engagement has
recently been introduced in a variety of methods, empirical experiments are
lacking. In this study, we attempted to align two experiments in order to (1)
make a hypothesis for machines and (2) empirically confirm the effect of active
learning on learning. In Experiment 1, we compared the effect of a passive form
of learning to an active form of learning. The results showed that active
learning had greater learning outcomes than passive learning. In the machine
experiment based on the human result, we imitated the human active learning as
a form of knowledge distillation. The active learning framework performed
better than the passive learning framework. In the end, we showed not only that
we can build a better machine training framework from the human experiment
result, but also empirically confirm the result of the human experiment through
imitated machine experiments; human-like active learning has a crucial effect
on learning performance.
| 0
|
zrz@andrew.cmu.edu [SEP] Human-Like Active Learning: Machines Simulating the Human Learning Process : Although the use of active learning to increase learners' engagement has
recently been introduced in a variety of methods, empirical experiments are
lacking. In this study, we attempted to align two experiments in order to (1)
make a hypothesis for machines and (2) empirically confirm the effect of active
learning on learning. In Experiment 1, we compared the effect of a passive form
of learning to an active form of learning. The results showed that active
learning had greater learning outcomes than passive learning. In the machine
experiment based on the human result, we imitated the human active learning as
a form of knowledge distillation. The active learning framework performed
better than the passive learning framework. In the end, we showed not only that
we can build a better machine training framework from the human experiment
result, but also empirically confirm the result of the human experiment through
imitated machine experiments; human-like active learning has a crucial effect
on learning performance.
| 128
|
Learning proofs for the classification of nilpotent semigroups
|
Machine learning is applied to find proofs, with smaller or smallest numbers
of nodes, for the classification of 4-nilpotent semigroups.
|
Disliked
|
zrz@andrew.cmu.edu
|
Learning proofs for the classification of nilpotent semigroups : Machine learning is applied to find proofs, with smaller or smallest numbers
of nodes, for the classification of 4-nilpotent semigroups.
| 0
|
zrz@andrew.cmu.edu [SEP] Learning proofs for the classification of nilpotent semigroups : Machine learning is applied to find proofs, with smaller or smallest numbers
of nodes, for the classification of 4-nilpotent semigroups.
| 135
|
Flexible Morphing Aerial Robot with Inflatable Structure for Perching-based Human-Robot Interaction
|
Birds in nature perform perching not only for rest but also for interaction
with humans, such as the relationship with falconers. Recently, researchers
have achieved perching-capable aerial robots as a way to save energy, and
deformable structures demonstrate significant advantages in the efficiency of
perching and the compactness of configuration. However, ensuring flight stability remains
challenging for deformable aerial robots due to the difficulty of controlling
flexible arms. Furthermore, perching for human interaction requires high
compliance along with safety. Thus, this study aims to develop a deformable
aerial robot capable of perching on humans with high flexibility and grasping
ability. To overcome the challenges of stability of both flight and perching,
we propose a hybrid morphing structure that combines a unilateral flexible arm
and pneumatic inflatable actuators. This design allows the robot's arms to
remain rigid during flight and soft while perching for more effective grasping.
We also develop a pneumatic control system that optimizes pressure regulation
while integrating shock absorption and adjustable grasping forces, enhancing
interaction capabilities and energy efficiency. Besides, we focus on the
structural characteristics of the unilateral flexible arm and identify
sufficient conditions under which standard quadrotor modeling and control
remain effective in terms of flight stability. Finally, the developed prototype
demonstrates the feasibility of compliant perching maneuvers on humans, as well
as the robust recovery even after arm deformation caused by thrust reductions
during flight. To the best of our knowledge, this work is the first to achieve
an aerial robot capable of perching on humans for interaction.
|
Liked
|
jechoi@andrew.cmu.edu
|
Flexible Morphing Aerial Robot with Inflatable Structure for Perching-based Human-Robot Interaction : Birds in nature perform perching not only for rest but also for interaction
with humans, such as the relationship with falconers. Recently, researchers
have achieved perching-capable aerial robots as a way to save energy, and
deformable structures demonstrate significant advantages in the efficiency of
perching and the compactness of configuration. However, ensuring flight stability remains
challenging for deformable aerial robots due to the difficulty of controlling
flexible arms. Furthermore, perching for human interaction requires high
compliance along with safety. Thus, this study aims to develop a deformable
aerial robot capable of perching on humans with high flexibility and grasping
ability. To overcome the challenges of stability of both flight and perching,
we propose a hybrid morphing structure that combines a unilateral flexible arm
and pneumatic inflatable actuators. This design allows the robot's arms to
remain rigid during flight and soft while perching for more effective grasping.
We also develop a pneumatic control system that optimizes pressure regulation
while integrating shock absorption and adjustable grasping forces, enhancing
interaction capabilities and energy efficiency. Besides, we focus on the
structural characteristics of the unilateral flexible arm and identify
sufficient conditions under which standard quadrotor modeling and control
remain effective in terms of flight stability. Finally, the developed prototype
demonstrates the feasibility of compliant perching maneuvers on humans, as well
as the robust recovery even after arm deformation caused by thrust reductions
during flight. To the best of our knowledge, this work is the first to achieve
an aerial robot capable of perching on humans for interaction.
| 1
|
jechoi@andrew.cmu.edu [SEP] Flexible Morphing Aerial Robot with Inflatable Structure for Perching-based Human-Robot Interaction : Birds in nature perform perching not only for rest but also for interaction
with humans, such as the relationship with falconers. Recently, researchers
have achieved perching-capable aerial robots as a way to save energy, and
deformable structures demonstrate significant advantages in the efficiency of
perching and the compactness of configuration. However, ensuring flight stability remains
challenging for deformable aerial robots due to the difficulty of controlling
flexible arms. Furthermore, perching for human interaction requires high
compliance along with safety. Thus, this study aims to develop a deformable
aerial robot capable of perching on humans with high flexibility and grasping
ability. To overcome the challenges of stability of both flight and perching,
we propose a hybrid morphing structure that combines a unilateral flexible arm
and pneumatic inflatable actuators. This design allows the robot's arms to
remain rigid during flight and soft while perching for more effective grasping.
We also develop a pneumatic control system that optimizes pressure regulation
while integrating shock absorption and adjustable grasping forces, enhancing
interaction capabilities and energy efficiency. Besides, we focus on the
structural characteristics of the unilateral flexible arm and identify
sufficient conditions under which standard quadrotor modeling and control
remain effective in terms of flight stability. Finally, the developed prototype
demonstrates the feasibility of compliant perching maneuvers on humans, as well
as the robust recovery even after arm deformation caused by thrust reductions
during flight. To the best of our knowledge, this work is the first to achieve
an aerial robot capable of perching on humans for interaction.
| 525
|
Towards human-like kinematics in industrial robotic arms: a case study on a UR3 robot
|
Safety in industrial robotic environments is a hot research topic in the area
of human-robot interaction (HRI). Up to now, a robotic arm on an assembly line
interacts with other machines away from human workers. Nowadays, robotic arm
manufacturers aim for their robots to increasingly perform tasks in
collaboration with humans. One of the ways to improve this collaboration is by
making the movement of robots more humanlike. This way, it would be easier for
a human to foresee the movement of the robot and approach it without fear of
contact. The main difference between the movement of a human and of a robotic
arm is that the former has a bell-shaped speed profile while the latter has a
uniform speed one. To generate this speed profile, the kinematic theory of
rapid human movements and its Sigma-Lognormal model has been used. This model
is widely used to explain most of the basic phenomena related to the control of
human movements. Both human-like and robotic-like movements are transferred to
the UR3 robot. In this paper we detail how the UR3 robot was programmed to
produce both kinds of movement. The dissimilarity results between the input
motion and the output motion of the robot confirm the possibility of developing
human-like velocities in the UR3 robot.
|
Liked
|
jechoi@andrew.cmu.edu
|
Towards human-like kinematics in industrial robotic arms: a case study on a UR3 robot : Safety in industrial robotic environments is a hot research topic in the area
of human-robot interaction (HRI). Up to now, a robotic arm on an assembly line
interacts with other machines away from human workers. Nowadays, robotic arm
manufacturers aim for their robots to increasingly perform tasks in
collaboration with humans. One of the ways to improve this collaboration is by
making the movement of robots more humanlike. This way, it would be easier for
a human to foresee the movement of the robot and approach it without fear of
contact. The main difference between the movement of a human and of a robotic
arm is that the former has a bell-shaped speed profile while the latter has a
uniform speed one. To generate this speed profile, the kinematic theory of
rapid human movements and its Sigma-Lognormal model has been used. This model
is widely used to explain most of the basic phenomena related to the control of
human movements. Both human-like and robotic-like movements are transferred to
the UR3 robot. In this paper we detail how the UR3 robot was programmed to
produce both kinds of movement. The dissimilarity results between the input
motion and the output motion of the robot confirm the possibility of developing
human-like velocities in the UR3 robot.
| 1
|
jechoi@andrew.cmu.edu [SEP] Towards human-like kinematics in industrial robotic arms: a case study on a UR3 robot : Safety in industrial robotic environments is a hot research topic in the area
of human-robot interaction (HRI). Up to now, a robotic arm on an assembly line
interacts with other machines away from human workers. Nowadays, robotic arm
manufacturers aim for their robots to increasingly perform tasks in
collaboration with humans. One of the ways to improve this collaboration is by
making the movement of robots more humanlike. This way, it would be easier for
a human to foresee the movement of the robot and approach it without fear of
contact. The main difference between the movement of a human and of a robotic
arm is that the former has a bell-shaped speed profile while the latter has a
uniform speed one. To generate this speed profile, the kinematic theory of
rapid human movements and its Sigma-Lognormal model has been used. This model
is widely used to explain most of the basic phenomena related to the control of
human movements. Both human-like and robotic-like movements are transferred to
the UR3 robot. In this paper we detail how the UR3 robot was programmed to
produce both kinds of movement. The dissimilarity results between the input
motion and the output motion of the robot confirm the possibility of developing
human-like velocities in the UR3 robot.
| 440
|
An Optimal Control View of Adversarial Machine Learning
|
I describe an optimal control view of adversarial machine learning, where the
dynamical system is the machine learner, the inputs are adversarial actions, and
the control costs are defined by the adversary's goals to do harm and be hard
to detect. This view encompasses many types of adversarial machine learning,
including test-item attacks, training-data poisoning, and adversarial reward
shaping. The view encourages adversarial machine learning researchers to utilize
advances in control theory and reinforcement learning.
|
Disliked
|
zrz@andrew.cmu.edu
|
An Optimal Control View of Adversarial Machine Learning : I describe an optimal control view of adversarial machine learning, where the
dynamical system is the machine learner, the inputs are adversarial actions, and
the control costs are defined by the adversary's goals to do harm and be hard
to detect. This view encompasses many types of adversarial machine learning,
including test-item attacks, training-data poisoning, and adversarial reward
shaping. The view encourages adversarial machine learning researchers to utilize
advances in control theory and reinforcement learning.
| 0
|
zrz@andrew.cmu.edu [SEP] An Optimal Control View of Adversarial Machine Learning : I describe an optimal control view of adversarial machine learning, where the
dynamical system is the machine learner, the inputs are adversarial actions, and
the control costs are defined by the adversary's goals to do harm and be hard
to detect. This view encompasses many types of adversarial machine learning,
including test-item attacks, training-data poisoning, and adversarial reward
shaping. The view encourages adversarial machine learning researchers to utilize
advances in control theory and reinforcement learning.
| 27
|
Analyzing Fine-Grained Alignment and Enhancing Vision Understanding in Multimodal Language Models
|
Achieving better alignment between vision embeddings and Large Language
Models (LLMs) is crucial for enhancing the abilities of Multimodal LLMs
(MLLMs), particularly for recent models that rely on powerful pretrained vision
encoders and LLMs. A common approach to connect the pretrained vision encoder
and LLM is through a projector applied after the vision encoder. However, the
projector is often trained to enable the LLM to generate captions, and hence
the mechanism by which LLMs understand each vision token remains unclear. In
this work, we first investigate the role of the projector in compressing vision
embeddings and aligning them with word embeddings. We show that the projector
significantly compresses visual information, removing redundant details while
preserving essential elements necessary for the LLM to understand visual
content. We then examine patch-level alignment -- the alignment between each
vision patch and its corresponding semantic words -- and propose a
*multi-semantic alignment hypothesis*. Our analysis indicates that the
projector trained by caption loss improves patch-level alignment but only to a
limited extent, resulting in weak and coarse alignment. To address this issue,
we propose *patch-aligned training* to efficiently enhance patch-level
alignment. Our experiments show that patch-aligned training (1) achieves
stronger compression capability and improved patch-level alignment, enabling
the MLLM to generate higher-quality captions, (2) improves the MLLM's
performance by 16% on referring expression grounding tasks, 4% on
question-answering tasks, and 3% on modern instruction-following benchmarks
when using the same supervised fine-tuning (SFT) setting. The proposed method
can be easily extended to other multimodal models.
|
Liked
|
zrz@andrew.cmu.edu
|
Analyzing Fine-Grained Alignment and Enhancing Vision Understanding in Multimodal Language Models : Achieving better alignment between vision embeddings and Large Language
Models (LLMs) is crucial for enhancing the abilities of Multimodal LLMs
(MLLMs), particularly for recent models that rely on powerful pretrained vision
encoders and LLMs. A common approach to connect the pretrained vision encoder
and LLM is through a projector applied after the vision encoder. However, the
projector is often trained to enable the LLM to generate captions, and hence
the mechanism by which LLMs understand each vision token remains unclear. In
this work, we first investigate the role of the projector in compressing vision
embeddings and aligning them with word embeddings. We show that the projector
significantly compresses visual information, removing redundant details while
preserving essential elements necessary for the LLM to understand visual
content. We then examine patch-level alignment -- the alignment between each
vision patch and its corresponding semantic words -- and propose a
*multi-semantic alignment hypothesis*. Our analysis indicates that the
projector trained by caption loss improves patch-level alignment but only to a
limited extent, resulting in weak and coarse alignment. To address this issue,
we propose *patch-aligned training* to efficiently enhance patch-level
alignment. Our experiments show that patch-aligned training (1) achieves
stronger compression capability and improved patch-level alignment, enabling
the MLLM to generate higher-quality captions, (2) improves the MLLM's
performance by 16% on referring expression grounding tasks, 4% on
question-answering tasks, and 3% on modern instruction-following benchmarks
when using the same supervised fine-tuning (SFT) setting. The proposed method
can be easily extended to other multimodal models.
| 1
|
zrz@andrew.cmu.edu [SEP] Analyzing Fine-Grained Alignment and Enhancing Vision Understanding in Multimodal Language Models : Achieving better alignment between vision embeddings and Large Language
Models (LLMs) is crucial for enhancing the abilities of Multimodal LLMs
(MLLMs), particularly for recent models that rely on powerful pretrained vision
encoders and LLMs. A common approach to connect the pretrained vision encoder
and LLM is through a projector applied after the vision encoder. However, the
projector is often trained to enable the LLM to generate captions, and hence
the mechanism by which LLMs understand each vision token remains unclear. In
this work, we first investigate the role of the projector in compressing vision
embeddings and aligning them with word embeddings. We show that the projector
significantly compresses visual information, removing redundant details while
preserving essential elements necessary for the LLM to understand visual
content. We then examine patch-level alignment -- the alignment between each
vision patch and its corresponding semantic words -- and propose a
*multi-semantic alignment hypothesis*. Our analysis indicates that the
projector trained by caption loss improves patch-level alignment but only to a
limited extent, resulting in weak and coarse alignment. To address this issue,
we propose *patch-aligned training* to efficiently enhance patch-level
alignment. Our experiments show that patch-aligned training (1) achieves
stronger compression capability and improved patch-level alignment, enabling
the MLLM to generate higher-quality captions, (2) improves the MLLM's
performance by 16% on referring expression grounding tasks, 4% on
question-answering tasks, and 3% on modern instruction-following benchmarks
when using the same supervised fine-tuning (SFT) setting. The proposed method
can be easily extended to other multimodal models.
| 376
|
Learning Human-arm Reaching Motion Using IMU in Human-Robot Collaboration
|
Many tasks performed by two humans require mutual interaction between arms
such as handing-over tools and objects. In order for a robotic arm to interact
with a human in the same way, it must reason about the location of the human
arm in real-time. Furthermore, to achieve interaction in a timely manner,
the robot must be able to predict the final target of the human in order to plan
and initiate motion beforehand. In this paper, we explore the use of a low-cost
wearable device equipped with two inertial measurement units (IMU) for learning
reaching motion for real-time applications of Human-Robot Collaboration (HRC).
A wearable device can replace or be complementary to visual perception in cases
of bad lighting or occlusions in a cluttered environment. We first train a
neural-network model to estimate the current location of the arm. Then, we
propose a novel model based on a recurrent neural-network to predict the future
target of the human arm during motion in real-time. Early prediction of the
target grants the robot sufficient time to plan and initiate motion during
the motion of the human. The accuracies of the models are analyzed concerning
the features included in the motion representation. Through experiments and
real demonstrations with a robotic arm, we show that sufficient accuracy is
achieved for feasible HRC without any visual perception. Once trained, the
system can be deployed in various spaces with no additional effort. The models
exhibit high accuracy for various initial poses of the human arm. Moreover, the
trained models are shown to provide high success rates with additional human
participants not included in the model training.
|
Liked
|
jechoi@andrew.cmu.edu
|
Learning Human-arm Reaching Motion Using IMU in Human-Robot Collaboration : Many tasks performed by two humans require mutual interaction between arms
such as handing-over tools and objects. In order for a robotic arm to interact
with a human in the same way, it must reason about the location of the human
arm in real-time. Furthermore, to achieve interaction in a timely manner,
the robot must be able to predict the final target of the human in order to plan
and initiate motion beforehand. In this paper, we explore the use of a low-cost
wearable device equipped with two inertial measurement units (IMU) for learning
reaching motion for real-time applications of Human-Robot Collaboration (HRC).
A wearable device can replace or be complementary to visual perception in cases
of bad lighting or occlusions in a cluttered environment. We first train a
neural-network model to estimate the current location of the arm. Then, we
propose a novel model based on a recurrent neural-network to predict the future
target of the human arm during motion in real-time. Early prediction of the
target grants the robot sufficient time to plan and initiate motion during
the motion of the human. The accuracies of the models are analyzed concerning
the features included in the motion representation. Through experiments and
real demonstrations with a robotic arm, we show that sufficient accuracy is
achieved for feasible HRC without any visual perception. Once trained, the
system can be deployed in various spaces with no additional effort. The models
exhibit high accuracy for various initial poses of the human arm. Moreover, the
trained models are shown to provide high success rates with additional human
participants not included in the model training.
| 1
|
jechoi@andrew.cmu.edu [SEP] Learning Human-arm Reaching Motion Using IMU in Human-Robot Collaboration : Many tasks performed by two humans require mutual interaction between arms
such as handing-over tools and objects. In order for a robotic arm to interact
with a human in the same way, it must reason about the location of the human
arm in real-time. Furthermore, to achieve interaction in a timely manner,
the robot must be able to predict the final target of the human in order to plan
and initiate motion beforehand. In this paper, we explore the use of a low-cost
wearable device equipped with two inertial measurement units (IMU) for learning
reaching motion for real-time applications of Human-Robot Collaboration (HRC).
A wearable device can replace or be complementary to visual perception in cases
of bad lighting or occlusions in a cluttered environment. We first train a
neural-network model to estimate the current location of the arm. Then, we
propose a novel model based on a recurrent neural-network to predict the future
target of the human arm during motion in real-time. Early prediction of the
target grants the robot sufficient time to plan and initiate motion during
the motion of the human. The accuracies of the models are analyzed concerning
the features included in the motion representation. Through experiments and
real demonstrations with a robotic arm, we show that sufficient accuracy is
achieved for feasible HRC without any visual perception. Once trained, the
system can be deployed in various spaces with no additional effort. The models
exhibit high accuracy for various initial poses of the human arm. Moreover, the
trained models are shown to provide high success rates with additional human
participants not included in the model training.
| 441
|
Adding Context to Source Code Representations for Deep Learning
|
Deep learning models have been successfully applied to a variety of software
engineering tasks, such as code classification, summarisation, and bug and
vulnerability detection. In order to apply deep learning to these tasks, source
code needs to be represented in a format that is suitable for input into the
deep learning model. Most approaches to representing source code, such as
tokens, abstract syntax trees (ASTs), data flow graphs (DFGs), and control flow
graphs (CFGs) only focus on the code itself and do not take into account
additional context that could be useful for deep learning models. In this
paper, we argue that it is beneficial for deep learning models to have access
to additional contextual information about the code being analysed. We present
preliminary evidence that encoding context from the call hierarchy along with
information from the code itself can improve the performance of a
state-of-the-art deep learning model for two software engineering tasks. We
outline our research agenda for adding further contextual information to source
code representations for deep learning.
|
Liked
|
zrz@andrew.cmu.edu
|
Adding Context to Source Code Representations for Deep Learning : Deep learning models have been successfully applied to a variety of software
engineering tasks, such as code classification, summarisation, and bug and
vulnerability detection. In order to apply deep learning to these tasks, source
code needs to be represented in a format that is suitable for input into the
deep learning model. Most approaches to representing source code, such as
tokens, abstract syntax trees (ASTs), data flow graphs (DFGs), and control flow
graphs (CFGs) only focus on the code itself and do not take into account
additional context that could be useful for deep learning models. In this
paper, we argue that it is beneficial for deep learning models to have access
to additional contextual information about the code being analysed. We present
preliminary evidence that encoding context from the call hierarchy along with
information from the code itself can improve the performance of a
state-of-the-art deep learning model for two software engineering tasks. We
outline our research agenda for adding further contextual information to source
code representations for deep learning.
| 1
|
zrz@andrew.cmu.edu [SEP] Adding Context to Source Code Representations for Deep Learning : Deep learning models have been successfully applied to a variety of software
engineering tasks, such as code classification, summarisation, and bug and
vulnerability detection. In order to apply deep learning to these tasks, source
code needs to be represented in a format that is suitable for input into the
deep learning model. Most approaches to representing source code, such as
tokens, abstract syntax trees (ASTs), data flow graphs (DFGs), and control flow
graphs (CFGs) only focus on the code itself and do not take into account
additional context that could be useful for deep learning models. In this
paper, we argue that it is beneficial for deep learning models to have access
to additional contextual information about the code being analysed. We present
preliminary evidence that encoding context from the call hierarchy along with
information from the code itself can improve the performance of a
state-of-the-art deep learning model for two software engineering tasks. We
outline our research agenda for adding further contextual information to source
code representations for deep learning.
| 246
|
Multi-Modal Masked Autoencoders for Medical Vision-and-Language Pre-Training
|
Medical vision-and-language pre-training provides a feasible solution to
extract effective vision-and-language representations from medical images and
texts. However, few studies have been dedicated to this field to facilitate
medical vision-and-language understanding. In this paper, we propose a
self-supervised learning paradigm with multi-modal masked autoencoders
(M$^3$AE), which learn cross-modal domain knowledge by reconstructing missing
pixels and tokens from randomly masked images and texts. There are three key
designs to make this simple approach work. First, considering the different
information densities of vision and language, we adopt different masking ratios
for the input image and text, where a considerably larger masking ratio is used
for images. Second, we use visual and textual features from different layers to
perform the reconstruction to deal with different levels of abstraction in
visual and language. Third, we develop different designs for vision and
language decoders (i.e., a Transformer for vision and a multi-layer perceptron
for language). To perform a comprehensive evaluation and facilitate further
research, we construct a medical vision-and-language benchmark including three
tasks. Experimental results demonstrate the effectiveness of our approach,
where state-of-the-art results are achieved on all downstream tasks. Besides,
we conduct further analysis to better verify the effectiveness of different
components of our approach and various settings of pre-training. The source
code is available at~\url{https://github.com/zhjohnchan/M3AE}.
|
Liked
|
zrz@andrew.cmu.edu
|
Multi-Modal Masked Autoencoders for Medical Vision-and-Language Pre-Training : Medical vision-and-language pre-training provides a feasible solution to
extract effective vision-and-language representations from medical images and
texts. However, few studies have been dedicated to this field to facilitate
medical vision-and-language understanding. In this paper, we propose a
self-supervised learning paradigm with multi-modal masked autoencoders
(M$^3$AE), which learn cross-modal domain knowledge by reconstructing missing
pixels and tokens from randomly masked images and texts. There are three key
designs to make this simple approach work. First, considering the different
information densities of vision and language, we adopt different masking ratios
for the input image and text, where a considerably larger masking ratio is used
for images. Second, we use visual and textual features from different layers to
perform the reconstruction to deal with different levels of abstraction in
visual and language. Third, we develop different designs for vision and
language decoders (i.e., a Transformer for vision and a multi-layer perceptron
for language). To perform a comprehensive evaluation and facilitate further
research, we construct a medical vision-and-language benchmark including three
tasks. Experimental results demonstrate the effectiveness of our approach,
where state-of-the-art results are achieved on all downstream tasks. Besides,
we conduct further analysis to better verify the effectiveness of different
components of our approach and various settings of pre-training. The source
code is available at~\url{https://github.com/zhjohnchan/M3AE}.
| 1
|
zrz@andrew.cmu.edu [SEP] Multi-Modal Masked Autoencoders for Medical Vision-and-Language Pre-Training : Medical vision-and-language pre-training provides a feasible solution to
extract effective vision-and-language representations from medical images and
texts. However, few studies have been dedicated to this field to facilitate
medical vision-and-language understanding. In this paper, we propose a
self-supervised learning paradigm with multi-modal masked autoencoders
(M$^3$AE), which learn cross-modal domain knowledge by reconstructing missing
pixels and tokens from randomly masked images and texts. There are three key
designs to make this simple approach work. First, considering the different
information densities of vision and language, we adopt different masking ratios
for the input image and text, where a considerably larger masking ratio is used
for images. Second, we use visual and textual features from different layers to
perform the reconstruction to deal with different levels of abstraction in
visual and language. Third, we develop different designs for vision and
language decoders (i.e., a Transformer for vision and a multi-layer perceptron
for language). To perform a comprehensive evaluation and facilitate further
research, we construct a medical vision-and-language benchmark including three
tasks. Experimental results demonstrate the effectiveness of our approach,
where state-of-the-art results are achieved on all downstream tasks. Besides,
we conduct further analysis to better verify the effectiveness of different
components of our approach and various settings of pre-training. The source
code is available at~\url{https://github.com/zhjohnchan/M3AE}.
| 367
|
Modern Deep Reinforcement Learning Algorithms
|
Recent advances in Reinforcement Learning, grounded on combining classical
theoretical results with Deep Learning paradigm, led to breakthroughs in many
artificial intelligence tasks and gave birth to Deep Reinforcement Learning
(DRL) as a field of research. In this work latest DRL algorithms are reviewed
with a focus on their theoretical justification, practical limitations and
observed empirical properties.
|
Liked
|
zrz@andrew.cmu.edu
|
Modern Deep Reinforcement Learning Algorithms : Recent advances in Reinforcement Learning, grounded on combining classical
theoretical results with Deep Learning paradigm, led to breakthroughs in many
artificial intelligence tasks and gave birth to Deep Reinforcement Learning
(DRL) as a field of research. In this work latest DRL algorithms are reviewed
with a focus on their theoretical justification, practical limitations and
observed empirical properties.
| 1
|
zrz@andrew.cmu.edu [SEP] Modern Deep Reinforcement Learning Algorithms : Recent advances in Reinforcement Learning, grounded on combining classical
theoretical results with Deep Learning paradigm, led to breakthroughs in many
artificial intelligence tasks and gave birth to Deep Reinforcement Learning
(DRL) as a field of research. In this work latest DRL algorithms are reviewed
with a focus on their theoretical justification, practical limitations and
observed empirical properties.
| 212
|
An Introduction to MM Algorithms for Machine Learning and Statistical
|
MM (majorization--minimization) algorithms are an increasingly popular tool
for solving optimization problems in machine learning and statistical
estimation. This article introduces the MM algorithm framework in general and
via three popular example applications: Gaussian mixture regressions,
multinomial logistic regressions, and support vector machines. Specific
algorithms for the three examples are derived and numerical demonstrations are
presented. Theoretical and practical aspects of MM algorithm design are
discussed.
|
Liked
|
zrz@andrew.cmu.edu
|
An Introduction to MM Algorithms for Machine Learning and Statistical : MM (majorization--minimization) algorithms are an increasingly popular tool
for solving optimization problems in machine learning and statistical
estimation. This article introduces the MM algorithm framework in general and
via three popular example applications: Gaussian mixture regressions,
multinomial logistic regressions, and support vector machines. Specific
algorithms for the three examples are derived and numerical demonstrations are
presented. Theoretical and practical aspects of MM algorithm design are
discussed.
| 1
|
zrz@andrew.cmu.edu [SEP] An Introduction to MM Algorithms for Machine Learning and Statistical : MM (majorization--minimization) algorithms are an increasingly popular tool
for solving optimization problems in machine learning and statistical
estimation. This article introduces the MM algorithm framework in general and
via three popular example applications: Gaussian mixture regressions,
multinomial logistic regressions, and support vector machines. Specific
algorithms for the three examples are derived and numerical demonstrations are
presented. Theoretical and practical aspects of MM algorithm design are
discussed.
| 97
|
The SET Perceptual Factors Framework: Towards Assured Perception for Autonomous Systems
|
Future autonomous systems promise significant societal benefits, yet their
deployment raises concerns about safety and trustworthiness. A key concern is
assuring the reliability of robot perception, as perception seeds safe
decision-making. Failures in perception are often due to complex yet common
environmental factors and can lead to accidents that erode public trust. To
address this concern, we introduce the SET (Self, Environment, and Target)
Perceptual Factors Framework. We designed the framework to systematically
analyze how factors such as weather, occlusion, or sensor limitations
negatively impact perception. To achieve this, the framework employs SET State
Trees to categorize where such factors originate and SET Factor Trees to model
how these sources and factors impact perceptual tasks like object detection or
pose estimation. Next, we develop Perceptual Factor Models using both trees to
quantify the uncertainty for a given task. Our framework aims to promote
rigorous safety assurances and cultivate greater public understanding and trust
in autonomous systems by offering a transparent and standardized method for
identifying, modeling, and communicating perceptual risks.
|
Disliked
|
zrz@andrew.cmu.edu
|
The SET Perceptual Factors Framework: Towards Assured Perception for Autonomous Systems : Future autonomous systems promise significant societal benefits, yet their
deployment raises concerns about safety and trustworthiness. A key concern is
assuring the reliability of robot perception, as perception seeds safe
decision-making. Failures in perception are often due to complex yet common
environmental factors and can lead to accidents that erode public trust. To
address this concern, we introduce the SET (Self, Environment, and Target)
Perceptual Factors Framework. We designed the framework to systematically
analyze how factors such as weather, occlusion, or sensor limitations
negatively impact perception. To achieve this, the framework employs SET State
Trees to categorize where such factors originate and SET Factor Trees to model
how these sources and factors impact perceptual tasks like object detection or
pose estimation. Next, we develop Perceptual Factor Models using both trees to
quantify the uncertainty for a given task. Our framework aims to promote
rigorous safety assurances and cultivate greater public understanding and trust
in autonomous systems by offering a transparent and standardized method for
identifying, modeling, and communicating perceptual risks.
| 0
|
zrz@andrew.cmu.edu [SEP] The SET Perceptual Factors Framework: Towards Assured Perception for Autonomous Systems : Future autonomous systems promise significant societal benefits, yet their
deployment raises concerns about safety and trustworthiness. A key concern is
assuring the reliability of robot perception, as perception seeds safe
decision-making. Failures in perception are often due to complex yet common
environmental factors and can lead to accidents that erode public trust. To
address this concern, we introduce the SET (Self, Environment, and Target)
Perceptual Factors Framework. We designed the framework to systematically
analyze how factors such as weather, occlusion, or sensor limitations
negatively impact perception. To achieve this, the framework employs SET State
Trees to categorize where such factors originate and SET Factor Trees to model
how these sources and factors impact perceptual tasks like object detection or
pose estimation. Next, we develop Perceptual Factor Models using both trees to
quantify the uncertainty for a given task. Our framework aims to promote
rigorous safety assurances and cultivate greater public understanding and trust
in autonomous systems by offering a transparent and standardized method for
identifying, modeling, and communicating perceptual risks.
| 299
|
Emotional Musical Prosody for the Enhancement of Trust in Robotic Arm Communication
|
As robotic arms become prevalent in industry it is crucial to improve levels
of trust from human collaborators. Low levels of trust in human-robot
interaction can reduce overall performance and prevent full robot utilization.
We investigated the potential benefits of using emotional musical prosody to
allow the robot to respond emotionally to the user's actions. We tested
participants' responses to interacting with a virtual robot arm that acted as a
decision agent, helping participants select the next number in a sequence. We
compared results from three versions of the application in a between-group
experiment, where the robot had different emotional reactions to the user's
input depending on whether the user agreed with the robot and whether the
user's choice was correct. In all versions, the robot reacted with emotional
gestures. One version used prosody-based emotional audio phrases selected from
our dataset of singer improvisations, the second version used audio consisting
of a single pitch randomly assigned to each emotion, and the final version used
no audio, only gestures. Our results showed no significant difference for the
percentage of times users from each group agreed with the robot, and no
difference between user's agreement with the robot after it made a mistake.
However, participants also took a trust survey following the interaction, and
we found that the reported trust ratings of the musical prosody group were
significantly higher than both the single-pitch and no audio groups.
|
Disliked
|
jechoi@andrew.cmu.edu
|
Emotional Musical Prosody for the Enhancement of Trust in Robotic Arm Communication : As robotic arms become prevalent in industry it is crucial to improve levels
of trust from human collaborators. Low levels of trust in human-robot
interaction can reduce overall performance and prevent full robot utilization.
We investigated the potential benefits of using emotional musical prosody to
allow the robot to respond emotionally to the user's actions. We tested
participants' responses to interacting with a virtual robot arm that acted as a
decision agent, helping participants select the next number in a sequence. We
compared results from three versions of the application in a between-group
experiment, where the robot had different emotional reactions to the user's
input depending on whether the user agreed with the robot and whether the
user's choice was correct. In all versions, the robot reacted with emotional
gestures. One version used prosody-based emotional audio phrases selected from
our dataset of singer improvisations, the second version used audio consisting
of a single pitch randomly assigned to each emotion, and the final version used
no audio, only gestures. Our results showed no significant difference for the
percentage of times users from each group agreed with the robot, and no
difference between user's agreement with the robot after it made a mistake.
However, participants also took a trust survey following the interaction, and
we found that the reported trust ratings of the musical prosody group were
significantly higher than both the single-pitch and no audio groups.
| 0
|
jechoi@andrew.cmu.edu [SEP] Emotional Musical Prosody for the Enhancement of Trust in Robotic Arm Communication : As robotic arms become prevalent in industry it is crucial to improve levels
of trust from human collaborators. Low levels of trust in human-robot
interaction can reduce overall performance and prevent full robot utilization.
We investigated the potential benefits of using emotional musical prosody to
allow the robot to respond emotionally to the user's actions. We tested
participants' responses to interacting with a virtual robot arm that acted as a
decision agent, helping participants select the next number in a sequence. We
compared results from three versions of the application in a between-group
experiment, where the robot had different emotional reactions to the user's
input depending on whether the user agreed with the robot and whether the
user's choice was correct. In all versions, the robot reacted with emotional
gestures. One version used prosody-based emotional audio phrases selected from
our dataset of singer improvisations, the second version used audio consisting
of a single pitch randomly assigned to each emotion, and the final version used
no audio, only gestures. Our results showed no significant difference for the
percentage of times users from each group agreed with the robot, and no
difference between user's agreement with the robot after it made a mistake.
However, participants also took a trust survey following the interaction, and
we found that the reported trust ratings of the musical prosody group were
significantly higher than both the single-pitch and no audio groups.
| 542
|
Generating Realistic Arm Movements in Reinforcement Learning: A Quantitative Comparison of Reward Terms and Task Requirements
|
The mimicking of human-like arm movement characteristics involves the
consideration of three factors during control policy synthesis: (a) chosen task
requirements, (b) inclusion of noise during movement execution and (c) chosen
optimality principles. Previous studies showed that when considering these
factors (a-c) individually, it is possible to synthesize arm movements that
either kinematically match the experimental data or reproduce the stereotypical
triphasic muscle activation pattern. However, to date no quantitative
comparison has been made on how realistic the arm movement generated by each
factor is; as well as whether a partial or total combination of all factors
results in arm movements with human-like kinematic characteristics and a
triphasic muscle pattern. To investigate this, we used reinforcement learning
to learn a control policy for a musculoskeletal arm model, aiming to discern
which combination of factors (a-c) results in realistic arm movements according
to four frequently reported stereotypical characteristics. Our findings
indicate that incorporating velocity and acceleration requirements into the
reaching task, employing reward terms that encourage minimization of mechanical
work, hand jerk, and control effort, along with the inclusion of noise during
movement, leads to the emergence of realistic human arm movements in
reinforcement learning. We expect that the gained insights will help in the
future to better predict desired arm movements and corrective forces in
wearable assistive devices.
|
Liked
|
jechoi@andrew.cmu.edu
|
Generating Realistic Arm Movements in Reinforcement Learning: A Quantitative Comparison of Reward Terms and Task Requirements : The mimicking of human-like arm movement characteristics involves the
consideration of three factors during control policy synthesis: (a) chosen task
requirements, (b) inclusion of noise during movement execution and (c) chosen
optimality principles. Previous studies showed that when considering these
factors (a-c) individually, it is possible to synthesize arm movements that
either kinematically match the experimental data or reproduce the stereotypical
triphasic muscle activation pattern. However, to date no quantitative
comparison has been made on how realistic the arm movement generated by each
factor is; as well as whether a partial or total combination of all factors
results in arm movements with human-like kinematic characteristics and a
triphasic muscle pattern. To investigate this, we used reinforcement learning
to learn a control policy for a musculoskeletal arm model, aiming to discern
which combination of factors (a-c) results in realistic arm movements according
to four frequently reported stereotypical characteristics. Our findings
indicate that incorporating velocity and acceleration requirements into the
reaching task, employing reward terms that encourage minimization of mechanical
work, hand jerk, and control effort, along with the inclusion of noise during
movement, leads to the emergence of realistic human arm movements in
reinforcement learning. We expect that the gained insights will help in the
future to better predict desired arm movements and corrective forces in
wearable assistive devices.
| 1
|
jechoi@andrew.cmu.edu [SEP] Generating Realistic Arm Movements in Reinforcement Learning: A Quantitative Comparison of Reward Terms and Task Requirements : The mimicking of human-like arm movement characteristics involves the
consideration of three factors during control policy synthesis: (a) chosen task
requirements, (b) inclusion of noise during movement execution and (c) chosen
optimality principles. Previous studies showed that when considering these
factors (a-c) individually, it is possible to synthesize arm movements that
either kinematically match the experimental data or reproduce the stereotypical
triphasic muscle activation pattern. However, to date no quantitative
comparison has been made on how realistic the arm movement generated by each
factor is; as well as whether a partial or total combination of all factors
results in arm movements with human-like kinematic characteristics and a
triphasic muscle pattern. To investigate this, we used reinforcement learning
to learn a control policy for a musculoskeletal arm model, aiming to discern
which combination of factors (a-c) results in realistic arm movements according
to four frequently reported stereotypical characteristics. Our findings
indicate that incorporating velocity and acceleration requirements into the
reaching task, employing reward terms that encourage minimization of mechanical
work, hand jerk, and control effort, along with the inclusion of noise during
movement, leads to the emergence of realistic human arm movements in
reinforcement learning. We expect that the gained insights will help in the
future to better predict desired arm movements and corrective forces in
wearable assistive devices.
| 571
|
A Unified Analytical Framework for Trustable Machine Learning and Automation Running with Blockchain
|
Traditional machine learning algorithms use data from databases that are
mutable, and therefore the data cannot be fully trusted. Also, the machine
learning process is difficult to automate. This paper proposes building a
trustable machine learning system by using blockchain technology, which can
store data in a permanent and immutable way. In addition, smart contracts are
used to automate the machine learning process. This paper makes three
contributions. First, it establishes a link between machine learning technology
and blockchain technology. Previously, machine learning and blockchain have
been considered two independent technologies without an obvious link. Second,
it proposes a unified analytical framework for trustable machine learning by
using blockchain technology. This unified framework solves both the
trustability and automation issues in machine learning. Third, it enables a
computer to translate core machine learning implementation from a single thread
on a single machine to multiple threads on multiple machines running with
blockchain by using a unified approach. The paper uses association rule mining
as an example to demonstrate how trustable machine learning can be implemented
with blockchain, and it shows how this approach can be used to analyze opioid
prescriptions to help combat the opioid crisis.
|
Disliked
|
zrz@andrew.cmu.edu
|
A Unified Analytical Framework for Trustable Machine Learning and Automation Running with Blockchain : Traditional machine learning algorithms use data from databases that are
mutable, and therefore the data cannot be fully trusted. Also, the machine
learning process is difficult to automate. This paper proposes building a
trustable machine learning system by using blockchain technology, which can
store data in a permanent and immutable way. In addition, smart contracts are
used to automate the machine learning process. This paper makes three
contributions. First, it establishes a link between machine learning technology
and blockchain technology. Previously, machine learning and blockchain have
been considered two independent technologies without an obvious link. Second,
it proposes a unified analytical framework for trustable machine learning by
using blockchain technology. This unified framework solves both the
trustability and automation issues in machine learning. Third, it enables a
computer to translate core machine learning implementation from a single thread
on a single machine to multiple threads on multiple machines running with
blockchain by using a unified approach. The paper uses association rule mining
as an example to demonstrate how trustable machine learning can be implemented
with blockchain, and it shows how this approach can be used to analyze opioid
prescriptions to help combat the opioid crisis.
| 0
|
zrz@andrew.cmu.edu [SEP] A Unified Analytical Framework for Trustable Machine Learning and Automation Running with Blockchain : Traditional machine learning algorithms use data from databases that are
mutable, and therefore the data cannot be fully trusted. Also, the machine
learning process is difficult to automate. This paper proposes building a
trustable machine learning system by using blockchain technology, which can
store data in a permanent and immutable way. In addition, smart contracts are
used to automate the machine learning process. This paper makes three
contributions. First, it establishes a link between machine learning technology
and blockchain technology. Previously, machine learning and blockchain have
been considered two independent technologies without an obvious link. Second,
it proposes a unified analytical framework for trustable machine learning by
using blockchain technology. This unified framework solves both the
trustability and automation issues in machine learning. Third, it enables a
computer to translate core machine learning implementation from a single thread
on a single machine to multiple threads on multiple machines running with
blockchain by using a unified approach. The paper uses association rule mining
as an example to demonstrate how trustable machine learning can be implemented
with blockchain, and it shows how this approach can be used to analyze opioid
prescriptions to help combat the opioid crisis.
| 35
|
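To make the running example in the blockchain entry above a bit more concrete: association rule mining itself reduces to counting itemset supports and rule confidences, which is presumably the part the paper's smart contracts automate and distribute. The toy sketch below shows only that counting step, with made-up transactions and thresholds; it says nothing about the blockchain integration described in the paper.

```python
from itertools import combinations
from collections import Counter

def mine_pair_rules(transactions, min_support=0.4, min_confidence=0.6):
    """Toy association-rule miner: count 1- and 2-item itemsets, then emit rules
    A -> B whose support and confidence clear the chosen thresholds."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for size in (1, 2):
            for itemset in combinations(items, size):
                counts[itemset] += 1
    rules = []
    for itemset, pair_count in counts.items():
        if len(itemset) != 2 or pair_count / n < min_support:
            continue
        a, b = itemset
        for antecedent, consequent in (((a,), b), ((b,), a)):
            confidence = pair_count / counts[antecedent]
            if confidence >= min_confidence:
                rules.append((antecedent[0], consequent, pair_count / n, confidence))
    return rules

# Hypothetical prescription "baskets" (invented items, not real data):
print(mine_pair_rules([["opioid_A", "benzo_B"], ["opioid_A", "benzo_B"],
                       ["opioid_A"], ["nsaid_C"]]))
```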
A Survey of Deep Learning Techniques for Mobile Robot Applications
|
Advancements in deep learning over the years have attracted research into how
deep artificial neural networks can be used in robotic systems. This research
survey will present a summary of the current research, with a specific
focus on the gains and obstacles for deep learning to be applied to mobile
robotics.
|
Liked
|
zrz@andrew.cmu.edu
|
A Survey of Deep Learning Techniques for Mobile Robot Applications : Advancements in deep learning over the years have attracted research into how
deep artificial neural networks can be used in robotic systems. This research
survey will present a summary of the current research, with a specific
focus on the gains and obstacles for deep learning to be applied to mobile
robotics.
| 1
|
zrz@andrew.cmu.edu [SEP] A Survey of Deep Learning Techniques for Mobile Robot Applications : Advancements in deep learning over the years have attracted research into how
deep artificial neural networks can be used in robotic systems. This research
survey will present a summary of the current research, with a specific
focus on the gains and obstacles for deep learning to be applied to mobile
robotics.
| 267
|
Moving Deep Learning into Web Browser: How Far Can We Go?
|
Recently, several JavaScript-based deep learning frameworks have emerged,
making it possible to perform deep learning tasks directly in browsers.
However, little is known about what these frameworks can do for deep learning
in browsers, or how well they do it. To bridge this knowledge gap, in this paper, we
conduct the first empirical study of deep learning in browsers. We survey the 7
most popular JavaScript-based deep learning frameworks, investigating to what
extent deep learning tasks have been supported in browsers so far. Then we
measure the performance of different frameworks when running different deep
learning tasks. Finally, we quantify the performance gap between deep learning
in browsers and on native platforms by comparing the performance of
TensorFlow.js and TensorFlow in Python. Our findings could help application
developers, deep-learning framework vendors and browser vendors to improve the
efficiency of deep learning in browsers.
|
Disliked
|
zrz@andrew.cmu.edu
|
Moving Deep Learning into Web Browser: How Far Can We Go? : Recently, several JavaScript-based deep learning frameworks have emerged,
making it possible to perform deep learning tasks directly in browsers.
However, little is known about what these frameworks can do for deep learning
in browsers, or how well they do it. To bridge this knowledge gap, in this paper, we
conduct the first empirical study of deep learning in browsers. We survey the 7
most popular JavaScript-based deep learning frameworks, investigating to what
extent deep learning tasks have been supported in browsers so far. Then we
measure the performance of different frameworks when running different deep
learning tasks. Finally, we quantify the performance gap between deep learning
in browsers and on native platforms by comparing the performance of
TensorFlow.js and TensorFlow in Python. Our findings could help application
developers, deep-learning framework vendors and browser vendors to improve the
efficiency of deep learning in browsers.
| 0
|
zrz@andrew.cmu.edu [SEP] Moving Deep Learning into Web Browser: How Far Can We Go? : Recently, several JavaScript-based deep learning frameworks have emerged,
making it possible to perform deep learning tasks directly in browsers.
However, little is known about what these frameworks can do for deep learning
in browsers, or how well they do it. To bridge this knowledge gap, in this paper, we
conduct the first empirical study of deep learning in browsers. We survey the 7
most popular JavaScript-based deep learning frameworks, investigating to what
extent deep learning tasks have been supported in browsers so far. Then we
measure the performance of different frameworks when running different deep
learning tasks. Finally, we quantify the performance gap between deep learning
in browsers and on native platforms by comparing the performance of
TensorFlow.js and TensorFlow in Python. Our findings could help application
developers, deep-learning framework vendors and browser vendors to improve the
efficiency of deep learning in browsers.
| 164
|
Pre-training with Non-expert Human Demonstration for Deep Reinforcement Learning
|
Deep reinforcement learning (deep RL) has achieved superior performance in
complex sequential tasks by using deep neural networks as function
approximators to learn directly from raw input images. However, learning
directly from raw images is data inefficient. The agent must learn feature
representation of complex states in addition to learning a policy. As a result,
deep RL typically suffers from slow learning speeds and often requires a
prohibitively large amount of training time and data to reach reasonable
performance, making it inapplicable to real-world settings where data is
expensive. In this work, we improve data efficiency in deep RL by addressing
one of the two learning goals, feature learning. We leverage supervised
learning to pre-train on a small set of non-expert human demonstrations and
empirically evaluate our approach using the asynchronous advantage actor-critic
algorithm (A3C) in the Atari domain. Our results show significant improvements
in learning speed, even when the provided demonstration is noisy and of low
quality.
|
Liked
|
zrz@andrew.cmu.edu
|
Pre-training with Non-expert Human Demonstration for Deep Reinforcement Learning : Deep reinforcement learning (deep RL) has achieved superior performance in
complex sequential tasks by using deep neural networks as function
approximators to learn directly from raw input images. However, learning
directly from raw images is data inefficient. The agent must learn feature
representation of complex states in addition to learning a policy. As a result,
deep RL typically suffers from slow learning speeds and often requires a
prohibitively large amount of training time and data to reach reasonable
performance, making it inapplicable to real-world settings where data is
expensive. In this work, we improve data efficiency in deep RL by addressing
one of the two learning goals, feature learning. We leverage supervised
learning to pre-train on a small set of non-expert human demonstrations and
empirically evaluate our approach using the asynchronous advantage actor-critic
algorithm (A3C) in the Atari domain. Our results show significant improvements
in learning speed, even when the provided demonstration is noisy and of low
quality.
| 1
|
zrz@andrew.cmu.edu [SEP] Pre-training with Non-expert Human Demonstration for Deep Reinforcement Learning : Deep reinforcement learning (deep RL) has achieved superior performance in
complex sequential tasks by using deep neural networks as function
approximators to learn directly from raw input images. However, learning
directly from raw images is data inefficient. The agent must learn feature
representation of complex states in addition to learning a policy. As a result,
deep RL typically suffers from slow learning speeds and often requires a
prohibitively large amount of training time and data to reach reasonable
performance, making it inapplicable to real-world settings where data is
expensive. In this work, we improve data efficiency in deep RL by addressing
one of the two learning goals, feature learning. We leverage supervised
learning to pre-train on a small set of non-expert human demonstrations and
empirically evaluate our approach using the asynchronous advantage actor-critic
algorithm (A3C) in the Atari domain. Our results show significant improvements
in learning speed, even when the provided demonstration is noisy and of low
quality.
| 260
|
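The pre-training step described in the entry above amounts to ordinary supervised learning on (observation, action) pairs collected from human play, carried out before any reinforcement-learning updates. The sketch below is not the authors' code; it illustrates the idea with a deliberately tiny stand-in network, and the observation size, action count, and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Tiny stand-in for an A3C-style policy head over discrete Atari actions."""
    def __init__(self, obs_dim=84 * 84, n_actions=6):
        super().__init__()
        self.body = nn.Sequential(nn.Flatten(), nn.Linear(obs_dim, 256), nn.ReLU())
        self.policy = nn.Linear(256, n_actions)

    def forward(self, obs):
        return self.policy(self.body(obs))

def pretrain_on_demos(net, demo_obs, demo_actions, epochs=5, lr=1e-3):
    """Supervised pre-training on (observation, action) pairs from non-expert demos."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        logits = net(demo_obs)
        loss = loss_fn(logits, demo_actions)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net  # afterwards, hand the warm-started network to the RL (e.g. A3C) loop
```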
Beyond One Model Fits All: Ensemble Deep Learning for Autonomous Vehicles
|
Deep learning has revolutionized autonomous driving by enabling vehicles to
perceive and interpret their surroundings with remarkable accuracy. This
progress is attributed to various deep learning models, including Mediated
Perception, Behavior Reflex, and Direct Perception, each offering unique
advantages and challenges in enhancing autonomous driving capabilities.
However, there is a gap in research on integrating these approaches and
understanding their relevance in diverse driving scenarios. This study
introduces three distinct neural network models corresponding to Mediated
Perception, Behavior Reflex, and Direct Perception approaches. We explore their
significance across varying driving conditions, shedding light on the strengths
and limitations of each approach. Our architecture fuses information from the
base, future latent vector prediction, and auxiliary task networks, using
global routing commands to select appropriate action sub-networks. We aim to
provide insights into effectively utilizing diverse modeling strategies in
autonomous driving by conducting experiments and evaluations. The results show
that the ensemble model performs better than the individual approaches,
suggesting that each modality contributes uniquely toward the performance of
the overall model. Moreover, by exploring the significance of each modality,
this study offers a roadmap for future research in autonomous driving,
emphasizing the importance of leveraging multiple models to achieve robust
performance.
|
Liked
|
zrz@andrew.cmu.edu
|
Beyond One Model Fits All: Ensemble Deep Learning for Autonomous Vehicles : Deep learning has revolutionized autonomous driving by enabling vehicles to
perceive and interpret their surroundings with remarkable accuracy. This
progress is attributed to various deep learning models, including Mediated
Perception, Behavior Reflex, and Direct Perception, each offering unique
advantages and challenges in enhancing autonomous driving capabilities.
However, there is a gap in research on integrating these approaches and
understanding their relevance in diverse driving scenarios. This study
introduces three distinct neural network models corresponding to Mediated
Perception, Behavior Reflex, and Direct Perception approaches. We explore their
significance across varying driving conditions, shedding light on the strengths
and limitations of each approach. Our architecture fuses information from the
base, future latent vector prediction, and auxiliary task networks, using
global routing commands to select appropriate action sub-networks. We aim to
provide insights into effectively utilizing diverse modeling strategies in
autonomous driving by conducting experiments and evaluations. The results show
that the ensemble model performs better than the individual approaches,
suggesting that each modality contributes uniquely toward the performance of
the overall model. Moreover, by exploring the significance of each modality,
this study offers a roadmap for future research in autonomous driving,
emphasizing the importance of leveraging multiple models to achieve robust
performance.
| 1
|
zrz@andrew.cmu.edu [SEP] Beyond One Model Fits All: Ensemble Deep Learning for Autonomous Vehicles : Deep learning has revolutionized autonomous driving by enabling vehicles to
perceive and interpret their surroundings with remarkable accuracy. This
progress is attributed to various deep learning models, including Mediated
Perception, Behavior Reflex, and Direct Perception, each offering unique
advantages and challenges in enhancing autonomous driving capabilities.
However, there is a gap in research on integrating these approaches and
understanding their relevance in diverse driving scenarios. This study
introduces three distinct neural network models corresponding to Mediated
Perception, Behavior Reflex, and Direct Perception approaches. We explore their
significance across varying driving conditions, shedding light on the strengths
and limitations of each approach. Our architecture fuses information from the
base, future latent vector prediction, and auxiliary task networks, using
global routing commands to select appropriate action sub-networks. We aim to
provide insights into effectively utilizing diverse modeling strategies in
autonomous driving by conducting experiments and evaluations. The results show
that the ensemble model performs better than the individual approaches,
suggesting that each modality contributes uniquely toward the performance of
the overall model. Moreover, by exploring the significance of each modality,
this study offers a roadmap for future research in autonomous driving,
emphasizing the importance of leveraging multiple models to achieve robust
performance.
| 303
|
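The "global routing commands to select appropriate action sub-networks" mentioned in the entry above can be pictured as a simple dispatch over per-command heads sitting on top of the fused features. The sketch below is only an illustration under invented command names and layer sizes, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RoutedDrivingPolicy(nn.Module):
    """Illustrative ensemble head: a shared encoder over fused features feeds one
    action sub-network per routing command (e.g. follow / left / right)."""
    def __init__(self, fused_dim=512, feat_dim=128, n_actions=3,
                 commands=("follow", "left", "right")):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(fused_dim, feat_dim), nn.ReLU())
        self.heads = nn.ModuleDict({c: nn.Linear(feat_dim, n_actions) for c in commands})

    def forward(self, fused_features, command):
        # fused_features would come from the base, future-latent-prediction,
        # and auxiliary-task branches described in the abstract.
        return self.heads[command](self.encoder(fused_features))
```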
Deep Learning and Its Applications to Machine Health Monitoring: A Survey
|
Since 2006, deep learning (DL) has become a rapidly growing research
direction, redefining state-of-the-art performances in a wide range of areas
such as object recognition, image segmentation, speech recognition and machine
translation. In modern manufacturing systems, data-driven machine health
monitoring is gaining in popularity due to the widespread deployment of
low-cost sensors and their connection to the Internet. Meanwhile, deep learning
provides useful tools for processing and analyzing these big machinery data.
The main purpose of this paper is to review and summarize the emerging research
work of deep learning on machine health monitoring. After the brief
introduction of deep learning techniques, the applications of deep learning in
machine health monitoring systems are reviewed mainly from the following
aspects: Auto-encoder (AE) and its variants, Restricted Boltzmann Machines and
its variants including Deep Belief Network (DBN) and Deep Boltzmann Machines
(DBM), Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN).
Finally, some new trends of DL-based machine health monitoring methods are
discussed.
|
Liked
|
zrz@andrew.cmu.edu
|
Deep Learning and Its Applications to Machine Health Monitoring: A Survey : Since 2006, deep learning (DL) has become a rapidly growing research
direction, redefining state-of-the-art performances in a wide range of areas
such as object recognition, image segmentation, speech recognition and machine
translation. In modern manufacturing systems, data-driven machine health
monitoring is gaining in popularity due to the widespread deployment of
low-cost sensors and their connection to the Internet. Meanwhile, deep learning
provides useful tools for processing and analyzing these big machinery data.
The main purpose of this paper is to review and summarize the emerging research
work of deep learning on machine health monitoring. After the brief
introduction of deep learning techniques, the applications of deep learning in
machine health monitoring systems are reviewed mainly from the following
aspects: Auto-encoder (AE) and its variants, Restricted Boltzmann Machines and
its variants including Deep Belief Network (DBN) and Deep Boltzmann Machines
(DBM), Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN).
Finally, some new trends of DL-based machine health monitoring methods are
discussed.
| 1
|
zrz@andrew.cmu.edu [SEP] Deep Learning and Its Applications to Machine Health Monitoring: A Survey : Since 2006, deep learning (DL) has become a rapidly growing research
direction, redefining state-of-the-art performances in a wide range of areas
such as object recognition, image segmentation, speech recognition and machine
translation. In modern manufacturing systems, data-driven machine health
monitoring is gaining in popularity due to the widespread deployment of
low-cost sensors and their connection to the Internet. Meanwhile, deep learning
provides useful tools for processing and analyzing these big machinery data.
The main purpose of this paper is to review and summarize the emerging research
work of deep learning on machine health monitoring. After the brief
introduction of deep learning techniques, the applications of deep learning in
machine health monitoring systems are reviewed mainly from the following
aspects: Auto-encoder (AE) and its variants, Restricted Boltzmann Machines and
its variants including Deep Belief Network (DBN) and Deep Boltzmann Machines
(DBM), Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN).
Finally, some new trends of DL-based machine health monitoring methods are
discussed.
| 240
|
Can Machines Learn the True Probabilities?
|
When there exists uncertainty, AI machines are designed to make decisions so
as to reach the best expected outcomes. Expectations are based on true facts
about the objective environment the machines interact with, and those facts can
be encoded into AI models in the form of true objective probability functions.
Accordingly, AI models involve probabilistic machine learning in which the
probabilities should be objectively interpreted. We prove under some basic
assumptions when machines can learn the true objective probabilities, if any,
and when machines cannot learn them.
|
Liked
|
zrz@andrew.cmu.edu
|
Can Machines Learn the True Probabilities? : When there exists uncertainty, AI machines are designed to make decisions so
as to reach the best expected outcomes. Expectations are based on true facts
about the objective environment the machines interact with, and those facts can
be encoded into AI models in the form of true objective probability functions.
Accordingly, AI models involve probabilistic machine learning in which the
probabilities should be objectively interpreted. We prove under some basic
assumptions when machines can learn the true objective probabilities, if any,
and when machines cannot learn them.
| 1
|
zrz@andrew.cmu.edu [SEP] Can Machines Learn the True Probabilities? : When there exists uncertainty, AI machines are designed to make decisions so
as to reach the best expected outcomes. Expectations are based on true facts
about the objective environment the machines interact with, and those facts can
be encoded into AI models in the form of true objective probability functions.
Accordingly, AI models involve probabilistic machine learning in which the
probabilities should be objectively interpreted. We prove under some basic
assumptions when machines can learn the true objective probabilities, if any,
and when machines cannot learn them.
| 66
|
Teaching Computer Vision for Ecology
|
Computer vision can accelerate ecology research by automating the analysis of
raw imagery from sensors like camera traps, drones, and satellites. However,
computer vision is an emerging discipline that is rarely taught to ecologists.
This work discusses our experience teaching a diverse group of ecologists to
prototype and evaluate computer vision systems in the context of an intensive
hands-on summer workshop. We explain the workshop structure, discuss common
challenges, and propose best practices. This document is intended for computer
scientists who teach computer vision across disciplines, but it may also be
useful to ecologists or other domain experts who are learning to use computer
vision themselves.
|
Disliked
|
zrz@andrew.cmu.edu
|
Teaching Computer Vision for Ecology : Computer vision can accelerate ecology research by automating the analysis of
raw imagery from sensors like camera traps, drones, and satellites. However,
computer vision is an emerging discipline that is rarely taught to ecologists.
This work discusses our experience teaching a diverse group of ecologists to
prototype and evaluate computer vision systems in the context of an intensive
hands-on summer workshop. We explain the workshop structure, discuss common
challenges, and propose best practices. This document is intended for computer
scientists who teach computer vision across disciplines, but it may also be
useful to ecologists or other domain experts who are learning to use computer
vision themselves.
| 0
|
zrz@andrew.cmu.edu [SEP] Teaching Computer Vision for Ecology : Computer vision can accelerate ecology research by automating the analysis of
raw imagery from sensors like camera traps, drones, and satellites. However,
computer vision is an emerging discipline that is rarely taught to ecologists.
This work discusses our experience teaching a diverse group of ecologists to
prototype and evaluate computer vision systems in the context of an intensive
hands-on summer workshop. We explain the workshop structure, discuss common
challenges, and propose best practices. This document is intended for computer
scientists who teach computer vision across disciplines, but it may also be
useful to ecologists or other domain experts who are learning to use computer
vision themselves.
| 375
|
Semantic-Aware Ship Detection with Vision-Language Integration
|
Ship detection in remote sensing imagery is a critical task with wide-ranging
applications, such as maritime activity monitoring, shipping logistics, and
environmental studies. However, existing methods often struggle to capture
fine-grained semantic information, limiting their effectiveness in complex
scenarios. To address these challenges, we propose a novel detection framework
that combines Vision-Language Models (VLMs) with a multi-scale adaptive sliding
window strategy. To facilitate Semantic-Aware Ship Detection (SASD), we
introduce ShipSem-VL, a specialized Vision-Language dataset designed to capture
fine-grained ship attributes. We evaluate our framework through three
well-defined tasks, providing a comprehensive analysis of its performance and
demonstrating its effectiveness in advancing SASD from multiple perspectives.
|
Liked
|
zrz@andrew.cmu.edu
|
Semantic-Aware Ship Detection with Vision-Language Integration : Ship detection in remote sensing imagery is a critical task with wide-ranging
applications, such as maritime activity monitoring, shipping logistics, and
environmental studies. However, existing methods often struggle to capture
fine-grained semantic information, limiting their effectiveness in complex
scenarios. To address these challenges, we propose a novel detection framework
that combines Vision-Language Models (VLMs) with a multi-scale adaptive sliding
window strategy. To facilitate Semantic-Aware Ship Detection (SASD), we
introduce ShipSem-VL, a specialized Vision-Language dataset designed to capture
fine-grained ship attributes. We evaluate our framework through three
well-defined tasks, providing a comprehensive analysis of its performance and
demonstrating its effectiveness in advancing SASD from multiple perspectives.
| 1
|
zrz@andrew.cmu.edu [SEP] Semantic-Aware Ship Detection with Vision-Language Integration : Ship detection in remote sensing imagery is a critical task with wide-ranging
applications, such as maritime activity monitoring, shipping logistics, and
environmental studies. However, existing methods often struggle to capture
fine-grained semantic information, limiting their effectiveness in complex
scenarios. To address these challenges, we propose a novel detection framework
that combines Vision-Language Models (VLMs) with a multi-scale adaptive sliding
window strategy. To facilitate Semantic-Aware Ship Detection (SASD), we
introduce ShipSem-VL, a specialized Vision-Language dataset designed to capture
fine-grained ship attributes. We evaluate our framework through three
well-defined tasks, providing a comprehensive analysis of its performance and
demonstrating its effectiveness in advancing SASD from multiple perspectives.
| 347
|
When Machine Learning Meets Privacy: A Survey and Outlook
|
The newly emerged machine learning (e.g. deep learning) methods have become a
strong driving force to revolutionize a wide range of industries, such as smart
healthcare, financial technology, and surveillance systems. Meanwhile, privacy
has emerged as a big concern in this machine learning-based artificial
intelligence era. It is important to note that the problem of privacy
preservation in the context of machine learning is quite different from that in
traditional data privacy protection, as machine learning can act as both friend
and foe. Currently, the work on the preservation of privacy and machine
learning (ML) is still in its infancy, as most existing solutions only
focus on privacy problems during the machine learning process. Therefore, a
comprehensive study on the privacy preservation problems and machine learning
is required. This paper surveys the state of the art in privacy issues and
solutions for machine learning. The survey covers three categories of
interactions between privacy and machine learning: (i) private machine
learning, (ii) machine learning aided privacy protection, and (iii) machine
learning-based privacy attack and corresponding protection schemes. The current
research progress in each category is reviewed and the key challenges are
identified. Finally, based on our in-depth analysis of the area of privacy and
machine learning, we point out future research directions in this field.
|
Disliked
|
zrz@andrew.cmu.edu
|
When Machine Learning Meets Privacy: A Survey and Outlook : The newly emerged machine learning (e.g. deep learning) methods have become a
strong driving force to revolutionize a wide range of industries, such as smart
healthcare, financial technology, and surveillance systems. Meanwhile, privacy
has emerged as a big concern in this machine learning-based artificial
intelligence era. It is important to note that the problem of privacy
preservation in the context of machine learning is quite different from that in
traditional data privacy protection, as machine learning can act as both friend
and foe. Currently, the work on the preservation of privacy and machine
learning (ML) is still in its infancy, as most existing solutions only
focus on privacy problems during the machine learning process. Therefore, a
comprehensive study on the privacy preservation problems and machine learning
is required. This paper surveys the state of the art in privacy issues and
solutions for machine learning. The survey covers three categories of
interactions between privacy and machine learning: (i) private machine
learning, (ii) machine learning aided privacy protection, and (iii) machine
learning-based privacy attack and corresponding protection schemes. The current
research progress in each category is reviewed and the key challenges are
identified. Finally, based on our in-depth analysis of the area of privacy and
machine learning, we point out future research directions in this field.
| 0
|
zrz@andrew.cmu.edu [SEP] When Machine Learning Meets Privacy: A Survey and Outlook : The newly emerged machine learning (e.g. deep learning) methods have become a
strong driving force to revolutionize a wide range of industries, such as smart
healthcare, financial technology, and surveillance systems. Meanwhile, privacy
has emerged as a big concern in this machine learning-based artificial
intelligence era. It is important to note that the problem of privacy
preservation in the context of machine learning is quite different from that in
traditional data privacy protection, as machine learning can act as both friend
and foe. Currently, the work on the preservation of privacy and machine
learning (ML) is still in its infancy, as most existing solutions only
focus on privacy problems during the machine learning process. Therefore, a
comprehensive study on the privacy preservation problems and machine learning
is required. This paper surveys the state of the art in privacy issues and
solutions for machine learning. The survey covers three categories of
interactions between privacy and machine learning: (i) private machine
learning, (ii) machine learning aided privacy protection, and (iii) machine
learning-based privacy attack and corresponding protection schemes. The current
research progress in each category is reviewed and the key challenges are
identified. Finally, based on our in-depth analysis of the area of privacy and
machine learning, we point out future research directions in this field.
| 51
|
Machine Learning for Clinical Predictive Analytics
|
In this chapter, we provide a brief overview of applying machine learning
techniques for clinical prediction tasks. We begin with a quick introduction to
the concepts of machine learning and outline some of the most common machine
learning algorithms. Next, we demonstrate how to apply the algorithms with
appropriate toolkits to conduct machine learning experiments for clinical
prediction tasks. The objectives of this chapter are to (1) understand the
basics of machine learning techniques and the reasons behind why they are
useful for solving clinical prediction problems, (2) understand the intuition
behind some machine learning models, including regression, decision trees, and
support vector machines, and (3) understand how to apply these models to
clinical prediction problems using publicly available datasets via case
studies.
|
Disliked
|
zrz@andrew.cmu.edu
|
Machine Learning for Clinical Predictive Analytics : In this chapter, we provide a brief overview of applying machine learning
techniques for clinical prediction tasks. We begin with a quick introduction to
the concepts of machine learning and outline some of the most common machine
learning algorithms. Next, we demonstrate how to apply the algorithms with
appropriate toolkits to conduct machine learning experiments for clinical
prediction tasks. The objectives of this chapter are to (1) understand the
basics of machine learning techniques and the reasons behind why they are
useful for solving clinical prediction problems, (2) understand the intuition
behind some machine learning models, including regression, decision trees, and
support vector machines, and (3) understand how to apply these models to
clinical prediction problems using publicly available datasets via case
studies.
| 0
|
zrz@andrew.cmu.edu [SEP] Machine Learning for Clinical Predictive Analytics : In this chapter, we provide a brief overview of applying machine learning
techniques for clinical prediction tasks. We begin with a quick introduction to
the concepts of machine learning and outline some of the most common machine
learning algorithms. Next, we demonstrate how to apply the algorithms with
appropriate toolkits to conduct machine learning experiments for clinical
prediction tasks. The objectives of this chapter are to (1) understand the
basics of machine learning techniques and the reasons behind why they are
useful for solving clinical prediction problems, (2) understand the intuition
behind some machine learning models, including regression, decision trees, and
support vector machines, and (3) understand how to apply these models to
clinical prediction problems using publicly available datasets via case
studies.
| 28
|
Enhancing camera surveillance using computer vision: a research note
|
$\mathbf{Purpose}$ - The growth of police-operated surveillance cameras has
outpaced the ability of humans to monitor them effectively. Computer vision is
a possible solution. An ongoing research project on the application of computer
vision within a municipal police department is described. The paper aims to
discuss these issues.
$\mathbf{Design/methodology/approach}$ - Following the demystification of
computer vision technology, its potential for police agencies is developed
within a focus on computer vision as a solution for two common surveillance
camera tasks (live monitoring of multiple surveillance cameras and summarizing
archived video files). Three unaddressed research questions (can specialized
computer vision applications for law enforcement be developed at this time, how
will computer vision be utilized within existing public safety camera
monitoring rooms, and what are the system-wide impacts of a computer vision
capability on local criminal justice systems) are considered.
$\mathbf{Findings}$ - Despite computer vision becoming accessible to law
enforcement agencies, the impact of computer vision has not been discussed or
adequately researched. There is little knowledge of computer vision or its
potential in the field.
$\mathbf{Originality/value}$ - This paper introduces and discusses computer
vision from a law enforcement perspective and will be valuable to police
personnel tasked with monitoring large camera networks and considering computer
vision as a system upgrade.
|
Disliked
|
zrz@andrew.cmu.edu
|
Enhancing camera surveillance using computer vision: a research note : $\mathbf{Purpose}$ - The growth of police-operated surveillance cameras has
outpaced the ability of humans to monitor them effectively. Computer vision is
a possible solution. An ongoing research project on the application of computer
vision within a municipal police department is described. The paper aims to
discuss these issues.
$\mathbf{Design/methodology/approach}$ - Following the demystification of
computer vision technology, its potential for police agencies is developed
within a focus on computer vision as a solution for two common surveillance
camera tasks (live monitoring of multiple surveillance cameras and summarizing
archived video files). Three unaddressed research questions (can specialized
computer vision applications for law enforcement be developed at this time, how
will computer vision be utilized within existing public safety camera
monitoring rooms, and what are the system-wide impacts of a computer vision
capability on local criminal justice systems) are considered.
$\mathbf{Findings}$ - Despite computer vision becoming accessible to law
enforcement agencies, the impact of computer vision has not been discussed or
adequately researched. There is little knowledge of computer vision or its
potential in the field.
$\mathbf{Originality/value}$ - This paper introduces and discusses computer
vision from a law enforcement perspective and will be valuable to police
personnel tasked with monitoring large camera networks and considering computer
vision as a system upgrade.
| 0
|
zrz@andrew.cmu.edu [SEP] Enhancing camera surveillance using computer vision: a research note : $\mathbf{Purpose}$ - The growth of police-operated surveillance cameras has
outpaced the ability of humans to monitor them effectively. Computer vision is
a possible solution. An ongoing research project on the application of computer
vision within a municipal police department is described. The paper aims to
discuss these issues.
$\mathbf{Design/methodology/approach}$ - Following the demystification of
computer vision technology, its potential for police agencies is developed
within a focus on computer vision as a solution for two common surveillance
camera tasks (live monitoring of multiple surveillance cameras and summarizing
archived video files). Three unaddressed research questions (can specialized
computer vision applications for law enforcement be developed at this time, how
will computer vision be utilized within existing public safety camera
monitoring rooms, and what are the system-wide impacts of a computer vision
capability on local criminal justice systems) are considered.
$\mathbf{Findings}$ - Despite computer vision becoming accessible to law
enforcement agencies, the impact of computer vision has not been discussed or
adequately researched. There is little knowledge of computer vision or its
potential in the field.
$\mathbf{Originality/value}$ - This paper introduces and discusses computer
vision from a law enforcement perspective and will be valuable to police
personnel tasked with monitoring large camera networks and considering computer
vision as a system upgrade.
| 343
|
Meta-Learning: A Survey
|
Meta-learning, or learning to learn, is the science of systematically
observing how different machine learning approaches perform on a wide range of
learning tasks, and then learning from this experience, or meta-data, to learn
new tasks much faster than otherwise possible. Not only does this dramatically
speed up and improve the design of machine learning pipelines or neural
architectures, it also allows us to replace hand-engineered algorithms with
novel approaches learned in a data-driven way. In this chapter, we provide an
overview of the state of the art in this fascinating and continuously evolving
field.
|
Liked
|
zrz@andrew.cmu.edu
|
Meta-Learning: A Survey : Meta-learning, or learning to learn, is the science of systematically
observing how different machine learning approaches perform on a wide range of
learning tasks, and then learning from this experience, or meta-data, to learn
new tasks much faster than otherwise possible. Not only does this dramatically
speed up and improve the design of machine learning pipelines or neural
architectures, it also allows us to replace hand-engineered algorithms with
novel approaches learned in a data-driven way. In this chapter, we provide an
overview of the state of the art in this fascinating and continuously evolving
field.
| 1
|
zrz@andrew.cmu.edu [SEP] Meta-Learning: A Survey : Meta-learning, or learning to learn, is the science of systematically
observing how different machine learning approaches perform on a wide range of
learning tasks, and then learning from this experience, or meta-data, to learn
new tasks much faster than otherwise possible. Not only does this dramatically
speed up and improve the design of machine learning pipelines or neural
architectures, it also allows us to replace hand-engineered algorithms with
novel approaches learned in a data-driven way. In this chapter, we provide an
overview of the state of the art in this fascinating and continuously evolving
field.
| 87
|
Preparatory Manipulation Planning using Automatically Determined Single and Dual Arms
|
This paper presents a manipulation planning algorithm for robots to reorient
objects. It automatically finds a sequence of robot motion that manipulates and
prepares an object for specific tasks. Examples of the preparatory manipulation
planning problems include reorienting an electric drill to cut holes,
reorienting workpieces for assembly, and reorienting cargo for packing, etc.
The proposed algorithm can plan single and dual arm manipulation sequences to
solve the problems. The mechanism underlying the planner is a regrasp graph which
encodes grasp configurations and object poses. The algorithm searches the graph
to find a sequence of robot motions to reorient objects. The planner is able to
plan both single and dual arm manipulation. It can also automatically
determine whether to use a single arm, dual arms, or their combinations to
finish given tasks. The planner is examined by various humanoid robots like
Nextage, HRP2Kai, HRP5P, etc., using both simulation and real-world
experiments.
|
Liked
|
jechoi@andrew.cmu.edu
|
Preparatory Manipulation Planning using Automatically Determined Single and Dual Arms : This paper presents a manipulation planning algorithm for robots to reorient
objects. It automatically finds a sequence of robot motion that manipulates and
prepares an object for specific tasks. Examples of the preparatory manipulation
planning problems include reorienting an electric drill to cut holes,
reorienting workpieces for assembly, and reorienting cargo for packing, etc.
The proposed algorithm can plan single and dual arm manipulation sequences to
solve the problems. The mechanism underlying the planner is a regrasp graph which
encodes grasp configurations and object poses. The algorithm searches the graph
to find a sequence of robot motions to reorient objects. The planner is able to
plan both single and dual arm manipulation. It can also automatically
determine whether to use a single arm, dual arms, or their combinations to
finish given tasks. The planner is examined by various humanoid robots like
Nextage, HRP2Kai, HRP5P, etc., using both simulation and real-world
experiments.
| 1
|
jechoi@andrew.cmu.edu [SEP] Preparatory Manipulation Planning using Automatically Determined Single and Dual Arms : This paper presents a manipulation planning algorithm for robots to reorient
objects. It automatically finds a sequence of robot motion that manipulates and
prepares an object for specific tasks. Examples of the preparatory manipulation
planning problems include reorienting an electric drill to cut holes,
reorienting workpieces for assembly, and reorienting cargo for packing, etc.
The proposed algorithm can plan single and dual arm manipulation sequences to
solve the problems. The mechanism underlying the planner is a regrasp graph which
encodes grasp configurations and object poses. The algorithm searches the graph
to find a sequence of robot motions to reorient objects. The planner is able to
plan both single and dual arm manipulation. It can also automatically
determine whether to use a single arm, dual arms, or their combinations to
finish given tasks. The planner is examined by various humanoid robots like
Nextage, HRP2Kai, HRP5P, etc., using both simulation and real-world
experiments.
| 496
|
Arm Manipulation Planning of Tethered Tools with the Help of a Tool Balancer
|
Robotic manipulation of tethered tools is widely seen in robotic work cells.
They may cause excess strain on the tool's cable or undesired entanglements
with the robot's arms. This paper presents a manipulation planner with cable
orientation constraints for tethered tools suspended by tool balancers. The
planner uses orientation constraints to limit the bending of the balancer's
cable while the robot manipulates a tool and places it in a desired pose. The
constraints reduce entanglements and decrease the torque induced by the cable
on the robot joints. Simulation and real-world experiments show that the
constrained planner can successfully plan robot motions for the manipulation of
suspended tethered tools preventing the robot from damaging the cable or
getting its arms entangled, potentially avoiding accidents. The planner is
expected to play promising roles in manufacturing cells.
|
Liked
|
jechoi@andrew.cmu.edu
|
Arm Manipulation Planning of Tethered Tools with the Help of a Tool Balancer : Robotic manipulation of tethered tools is widely seen in robotic work cells.
They may cause excess strain on the tool's cable or undesired entanglements
with the robot's arms. This paper presents a manipulation planner with cable
orientation constraints for tethered tools suspended by tool balancers. The
planner uses orientation constraints to limit the bending of the balancer's
cable while the robot manipulates a tool and places it in a desired pose. The
constraints reduce entanglements and decrease the torque induced by the cable
on the robot joints. Simulation and real-world experiments show that the
constrained planner can successfully plan robot motions for the manipulation of
suspended tethered tools preventing the robot from damaging the cable or
getting its arms entangled, potentially avoiding accidents. The planner is
expected to play promising roles in manufacturing cells.
| 1
|
jechoi@andrew.cmu.edu [SEP] Arm Manipulation Planning of Tethered Tools with the Help of a Tool Balancer : Robotic manipulation of tethered tools is widely seen in robotic work cells.
They may cause excess strain on the tool's cable or undesired entanglements
with the robot's arms. This paper presents a manipulation planner with cable
orientation constraints for tethered tools suspended by tool balancers. The
planner uses orientation constraints to limit the bending of the balancer's
cable while the robot manipulates a tool and places it in a desired pose. The
constraints reduce entanglements and decrease the torque induced by the cable
on the robot joints. Simulation and real-world experiments show that the
constrained planner can successfully plan robot motions for the manipulation of
suspended tethered tools preventing the robot from damaging the cable or
getting its arms entangled, potentially avoiding accidents. The planner is
expected to play promising roles in manufacturing cells.
| 529
|
Lidar for Autonomous Driving: The principles, challenges, and trends for automotive lidar and perception systems
|
Autonomous vehicles rely on their perception systems to acquire information
about their immediate surroundings. It is necessary to detect the presence of
other vehicles, pedestrians and other relevant entities. Safety concerns and
the need for accurate estimations have led to the introduction of Light
Detection and Ranging (LiDAR) systems in complement to the camera or
radar-based perception systems. This article presents a review of
state-of-the-art automotive LiDAR technologies and the perception algorithms
used with those technologies. LiDAR systems are introduced first by analyzing
the main components, from laser transmitter to its beam scanning mechanism.
Advantages/disadvantages and the current status of various solutions are
introduced and compared. Then, the specific perception pipeline for LiDAR data
processing, from an autonomous vehicle perspective, is detailed. The
model-driven approaches and the emerging deep learning solutions are reviewed.
Finally, we provide an overview of the limitations, challenges and trends for
automotive LiDARs and perception systems.
|
Liked
|
zrz@andrew.cmu.edu
|
Lidar for Autonomous Driving: The principles, challenges, and trends for automotive lidar and perception systems : Autonomous vehicles rely on their perception systems to acquire information
about their immediate surroundings. It is necessary to detect the presence of
other vehicles, pedestrians and other relevant entities. Safety concerns and
the need for accurate estimations have led to the introduction of Light
Detection and Ranging (LiDAR) systems in complement to the camera or
radar-based perception systems. This article presents a review of
state-of-the-art automotive LiDAR technologies and the perception algorithms
used with those technologies. LiDAR systems are introduced first by analyzing
the main components, from laser transmitter to its beam scanning mechanism.
Advantages/disadvantages and the current status of various solutions are
introduced and compared. Then, the specific perception pipeline for LiDAR data
processing, from an autonomous vehicle perspective, is detailed. The
model-driven approaches and the emerging deep learning solutions are reviewed.
Finally, we provide an overview of the limitations, challenges and trends for
automotive LiDARs and perception systems.
| 1
|
zrz@andrew.cmu.edu [SEP] Lidar for Autonomous Driving: The principles, challenges, and trends for automotive lidar and perception systems : Autonomous vehicles rely on their perception systems to acquire information
about their immediate surroundings. It is necessary to detect the presence of
other vehicles, pedestrians and other relevant entities. Safety concerns and
the need for accurate estimations have led to the introduction of Light
Detection and Ranging (LiDAR) systems in complement to the camera or
radar-based perception systems. This article presents a review of
state-of-the-art automotive LiDAR technologies and the perception algorithms
used with those technologies. LiDAR systems are introduced first by analyzing
the main components, from laser transmitter to its beam scanning mechanism.
Advantages/disadvantages and the current status of various solutions are
introduced and compared. Then, the specific perception pipeline for LiDAR data
processing, from an autonomous vehicle perspective, is detailed. The
model-driven approaches and the emerging deep learning solutions are reviewed.
Finally, we provide an overview of the limitations, challenges and trends for
automotive LiDARs and perception systems.
| 296
|
Deep Gaussian Mixture Models
|
Deep learning is a hierarchical inference method formed by multiple successive
layers of learning that can describe complex relationships more efficiently.
In this work, Deep Gaussian Mixture Models are introduced and
discussed. A Deep Gaussian Mixture model (DGMM) is a network of multiple layers
of latent variables, where, at each layer, the variables follow a mixture of
Gaussian distributions. Thus, the deep mixture model consists of a set of
nested mixtures of linear models, which globally provide a nonlinear model able
to describe the data in a very flexible way. In order to avoid
overparameterized solutions, dimension reduction by factor models can be
applied at each layer of the architecture thus resulting in deep mixtures of
factor analysers.
|
Disliked
|
zrz@andrew.cmu.edu
|
Deep Gaussian Mixture Models : Deep learning is a hierarchical inference method formed by multiple
successive layers of learning that can describe complex relationships more
efficiently. In this work, Deep Gaussian Mixture Models are introduced and
discussed. A Deep Gaussian Mixture model (DGMM) is a network of multiple layers
of latent variables, where, at each layer, the variables follow a mixture of
Gaussian distributions. Thus, the deep mixture model consists of a set of
nested mixtures of linear models, which globally provide a nonlinear model able
to describe the data in a very flexible way. In order to avoid
overparameterized solutions, dimension reduction by factor models can be
applied at each layer of the architecture thus resulting in deep mixtures of
factor analysers.
| 0
|
zrz@andrew.cmu.edu [SEP] Deep Gaussian Mixture Models : Deep learning is a hierarchical inference method formed by
multiple successive layers of learning that can describe complex relationships
more efficiently. In this work, Deep Gaussian Mixture Models are introduced and
discussed. A Deep Gaussian Mixture model (DGMM) is a network of multiple layers
of latent variables, where, at each layer, the variables follow a mixture of
Gaussian distributions. Thus, the deep mixture model consists of a set of
nested mixtures of linear models, which globally provide a nonlinear model able
to describe the data in a very flexible way. In order to avoid
overparameterized solutions, dimension reduction by factor models can be
applied at each layer of the architecture thus resulting in deep mixtures of
factor analysers.
| 268
|
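The nested "mixtures of linear models" structure described in the DGMM entry above can be made concrete with a toy generative sketch: at every layer a mixture component is drawn and its linear-Gaussian map is applied to the latent variable from the layer below. This is only a sampling illustration with invented parameters; it is not the paper's model-fitting (e.g. factor-analytic) procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dgmm_layer(z, weights, loadings, means, noise_scale):
    """One DGMM layer: pick a mixture component, apply its linear-Gaussian map to z."""
    k = rng.choice(len(weights), p=weights)
    return means[k] + loadings[k] @ z + noise_scale[k] * rng.standard_normal(len(means[k]))

def sample_two_layer_dgmm(n, d_top=1, d_mid=2, d_obs=3):
    """Toy two-layer deep Gaussian mixture: nested mixtures of linear models."""
    # Hypothetical parameters, two components per layer.
    top = dict(weights=[0.5, 0.5],
               loadings=[rng.standard_normal((d_mid, d_top)) for _ in range(2)],
               means=[np.zeros(d_mid), np.ones(d_mid)], noise_scale=[0.1, 0.1])
    bottom = dict(weights=[0.3, 0.7],
                  loadings=[rng.standard_normal((d_obs, d_mid)) for _ in range(2)],
                  means=[np.zeros(d_obs), np.ones(d_obs)], noise_scale=[0.2, 0.2])
    samples = []
    for _ in range(n):
        z = rng.standard_normal(d_top)      # deepest latent variable
        h = sample_dgmm_layer(z, **top)     # middle layer: mixture of linear models
        x = sample_dgmm_layer(h, **bottom)  # observed layer: another nested mixture
        samples.append(x)
    return np.array(samples)

print(sample_two_layer_dgmm(5).shape)  # (5, 3)
```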
Experimental Characterization of Robot Arm Rigidity in Order to Be Used in Machining Operation
|
Attempts to install a rotating tool at the end of a poly-articulated robot
arm date back twenty years, but these robots were not designed for
that. Indeed, two essential features are necessary for machining: high rigidity
and precision in a given workspace. The experimental results presented are the
dynamic identification of a poly-articulated robot equipped with an integrated
spindle. This study aims to highlight the influence of the geometric
configuration of the robot arm on the overall stiffness of the system. The
spindle is taken into account as an additional weight on board but also as a
dynamical excitation for the robot KUKA KR_240_2. A study of the robotic
machining vibrations shows the suitable directions of movement in the milling
process.
|
Liked
|
jechoi@andrew.cmu.edu
|
Experimental Characterization of Robot Arm Rigidity in Order to Be Used in Machining Operation : Attempts to install a rotating tool at the end of a
poly-articulated robot arm date back twenty years, but these robots were not designed for
that. Indeed, two essential features are necessary for machining: high rigidity
and precision in a given workspace. The experimental results presented are the
dynamic identification of a poly-articulated robot equipped with an integrated
spindle. This study aims to highlight the influence of the geometric
configuration of the robot arm on the overall stiffness of the system. The
spindle is taken into account as an additional weight on board but also as a
dynamical excitation for the robot KUKA KR_240_2. A study of the robotic
machining vibrations shows the suitable directions of movement in the milling
process.
| 1
|
jechoi@andrew.cmu.edu [SEP] Experimental Characterization of Robot Arm Rigidity in Order to Be Used in Machining Operation : Attempts to install a rotating tool at the end of a
poly-articulated robot arm date back twenty years, but these robots were not designed for
that. Indeed, two essential features are necessary for machining: high rigidity
and precision in a given workspace. The experimental results presented are the
dynamic identification of a poly-articulated robot equipped with an integrated
spindle. This study aims to highlight the influence of the geometric
configuration of the robot arm on the overall stiffness of the system. The
spindle is taken into account as an additional weight on board but also as a
dynamical excitation for the robot KUKA KR_240_2. A study of the robotic
machining vibrations shows the suitable directions of movement in the milling
process.
| 526
|
WiCV 2020: The Seventh Women In Computer Vision Workshop
|
In this paper we present the details of the Women in Computer Vision Workshop -
WiCV 2020, organized alongside the virtual CVPR 2020. This event aims at
encouraging women researchers in the field of computer vision. It provides
a voice to a minority (female) group in the computer vision community and focuses
on increasing the visibility of these researchers, both in academia and
industry. WiCV believes that such an event can play an important role in
lowering the gender imbalance in the field of computer vision. WiCV is
organized each year and provides (a) opportunities for collaboration
between researchers, (b) mentorship for female junior researchers, (c) financial
support for presenters to ease their monetary burden, and (d) a large and diverse
choice of role models who can serve as examples to younger researchers at the
beginning of their careers. In this paper, we present a report on the workshop
program, trends over the past years, and a summary of statistics regarding
presenters, attendees, and sponsorship for the current workshop.
|
Disliked
|
zrz@andrew.cmu.edu
|
WiCV 2020: The Seventh Women In Computer Vision Workshop : In this paper we present the details of the Women in Computer Vision Workshop -
WiCV 2020, organized alongside the virtual CVPR 2020. This event aims at
encouraging women researchers in the field of computer vision. It provides a
voice to a minority (female) group in the computer vision community and focuses
on increasing the visibility of these researchers, both in academia and
industry. WiCV believes that such an event can play an important role in
lowering the gender imbalance in the field of computer vision. WiCV is
organized each year and provides a) opportunities for collaboration between
researchers, b) mentorship for female junior researchers, c) financial support
for presenters to ease the monetary burden, and d) a large and diverse choice
of role models, who can serve as examples to younger researchers at the
beginning of their careers. In this paper, we present a report on the workshop
program, trends over the past years, and a summary of statistics regarding
presenters, attendees, and sponsorship for the current workshop.
| 0
|
zrz@andrew.cmu.edu [SEP] WiCV 2020: The Seventh Women In Computer Vision Workshop : In this paper we present the details of the Women in Computer Vision Workshop -
WiCV 2020, organized alongside the virtual CVPR 2020. This event aims at
encouraging women researchers in the field of computer vision. It provides a
voice to a minority (female) group in the computer vision community and focuses
on increasing the visibility of these researchers, both in academia and
industry. WiCV believes that such an event can play an important role in
lowering the gender imbalance in the field of computer vision. WiCV is
organized each year and provides a) opportunities for collaboration between
researchers, b) mentorship for female junior researchers, c) financial support
for presenters to ease the monetary burden, and d) a large and diverse choice
of role models, who can serve as examples to younger researchers at the
beginning of their careers. In this paper, we present a report on the workshop
program, trends over the past years, and a summary of statistics regarding
presenters, attendees, and sponsorship for the current workshop.
| 373
|
Kinematic Optimization of a Robotic Arm for Automation Tasks with Human Demonstration
|
Robotic arms are highly common in various automation processes such as
manufacturing lines. However, these highly capable robots are usually degraded
to simple repetitive tasks such as pick-and-place. On the other hand, designing
an optimal robot for one specific task consumes large resources of engineering
time and costs. In this paper, we propose a novel concept for optimizing the
fitness of a robotic arm to perform a specific task based on human
demonstration. Fitness of a robot arm is a measure of its ability to follow
recorded human arm and hand paths. The optimization is conducted using a
modified variant of the Particle Swarm Optimization for the robot design
problem. In the proposed approach, we generate an optimal robot design along
with the required path to complete the task. The approach could reduce the
time-to-market of robotic arms and enable the standardization of modular
robotic parts. Novice users could easily apply a minimal robot arm to various
tasks. Two test cases of common manufacturing tasks are presented yielding
optimal designs and reduced computational effort by up to 92%.
|
Liked
|
jechoi@andrew.cmu.edu
|
Kinematic Optimization of a Robotic Arm for Automation Tasks with Human Demonstration : Robotic arms are highly common in various automation processes such as
manufacturing lines. However, these highly capable robots are usually degraded
to simple repetitive tasks such as pick-and-place. On the other hand, designing
an optimal robot for one specific task consumes large resources of engineering
time and costs. In this paper, we propose a novel concept for optimizing the
fitness of a robotic arm to perform a specific task based on human
demonstration. Fitness of a robot arm is a measure of its ability to follow
recorded human arm and hand paths. The optimization is conducted using a
modified variant of the Particle Swarm Optimization for the robot design
problem. In the proposed approach, we generate an optimal robot design along
with the required path to complete the task. The approach could reduce the
time-to-market of robotic arms and enable the standardization of modular
robotic parts. Novice users could easily apply a minimal robot arm to various
tasks. Two test cases of common manufacturing tasks are presented yielding
optimal designs and reduced computational effort by up to 92%.
| 1
|
jechoi@andrew.cmu.edu [SEP] Kinematic Optimization of a Robotic Arm for Automation Tasks with Human Demonstration : Robotic arms are highly common in various automation processes such as
manufacturing lines. However, these highly capable robots are usually degraded
to simple repetitive tasks such as pick-and-place. On the other hand, designing
an optimal robot for one specific task consumes large resources of engineering
time and costs. In this paper, we propose a novel concept for optimizing the
fitness of a robotic arm to perform a specific task based on human
demonstration. Fitness of a robot arm is a measure of its ability to follow
recorded human arm and hand paths. The optimization is conducted using a
modified variant of the Particle Swarm Optimization for the robot design
problem. In the proposed approach, we generate an optimal robot design along
with the required path to complete the task. The approach could reduce the
time-to-market of robotic arms and enable the standardization of modular
robotic parts. Novice users could easily apply a minimal robot arm to various
tasks. Two test cases of common manufacturing tasks are presented yielding
optimal designs and reduced computational effort by up to 92%.
| 390
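
As a rough illustration of the optimization idea in the abstract above, the sketch below runs a bare-bones particle swarm over the link lengths of a planar 2-link arm, scoring each candidate by how much of a recorded demonstration path lies inside its reachable workspace. The demonstration path, bounds, fitness function, and PSO constants are all invented; the paper's modified PSO variant and fitness measure are more involved.

```python
# Illustrative sketch: particle swarm search over 2-link arm lengths.
import numpy as np

rng = np.random.default_rng(1)
demo_path = np.column_stack([np.linspace(0.2, 0.8, 20),
                             0.3 + 0.1 * np.sin(np.linspace(0, np.pi, 20))])

def fitness(lengths):
    """Lower is better: mean reachability error of the demo path for a 2-link arm."""
    l1, l2 = lengths
    r = np.linalg.norm(demo_path, axis=1)
    # A point is reachable iff |l1 - l2| <= r <= l1 + l2; penalise violations.
    err = np.maximum(r - (l1 + l2), 0) + np.maximum(abs(l1 - l2) - r, 0)
    return err.mean() + 0.01 * (l1 + l2)          # small penalty on total length

n, dim, iters = 30, 2, 100
pos = rng.uniform(0.1, 1.0, (n, dim))
vel = np.zeros((n, dim))
best_p = pos.copy()
best_f = np.array([fitness(p) for p in pos])
g = best_p[best_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (best_p - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0.05, 1.0)
    f = np.array([fitness(p) for p in pos])
    improved = f < best_f
    best_p[improved], best_f[improved] = pos[improved], f[improved]
    g = best_p[best_f.argmin()].copy()

print("best link lengths:", np.round(g, 3), "fitness:", round(best_f.min(), 4))
```
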
|
Public Perceptions of Autonomous Vehicles: A Survey of Pedestrians and Cyclists in Pittsburgh
|
This study investigates how autonomous vehicle (AV) technology is perceived by
pedestrians and bicyclists in Pittsburgh. Using survey data from over 1200
respondents, the research explores the interplay between demographics, AV
interactions, infrastructural readiness, safety perceptions, and trust.
Findings highlight demographic divides, infrastructure gaps, and the crucial
role of communication and education in AV adoption.
|
Liked
|
zrz@andrew.cmu.edu
|
Public Perceptions of Autonomous Vehicles: A Survey of Pedestrians and Cyclists in Pittsburgh : This study investigates how autonomous vehicle (AV) technology is perceived by
pedestrians and bicyclists in Pittsburgh. Using survey data from over 1200
respondents, the research explores the interplay between demographics, AV
interactions, infrastructural readiness, safety perceptions, and trust.
Findings highlight demographic divides, infrastructure gaps, and the crucial
role of communication and education in AV adoption.
| 1
|
zrz@andrew.cmu.edu [SEP] Public Perceptions of Autonomous Vehicles: A Survey of Pedestrians and Cyclists in Pittsburgh : This study investigates how autonomous vehicle (AV) technology is perceived by
pedestrians and bicyclists in Pittsburgh. Using survey data from over 1200
respondents, the research explores the interplay between demographics, AV
interactions, infrastructural readiness, safety perceptions, and trust.
Findings highlight demographic divides, infrastructure gaps, and the crucial
role of communication and education in AV adoption.
| 301
|
Reflective VLM Planning for Dual-Arm Desktop Cleaning: Bridging Open-Vocabulary Perception and Precise Manipulation
|
Desktop cleaning demands open-vocabulary recognition and precise manipulation
for heterogeneous debris. We propose a hierarchical framework integrating
reflective Vision-Language Model (VLM) planning with dual-arm execution via
structured scene representation. Grounded-SAM2 facilitates open-vocabulary
detection, while a memory-augmented VLM generates, critiques, and revises
manipulation sequences. These sequences are converted into parametric
trajectories for five primitives executed by coordinated Franka arms. Evaluated
in simulated scenarios, our system achieves 87.2% task completion, a 28.8%
improvement over a static VLM baseline and 36.2% over single-arm baselines. Structured
memory integration proves crucial for robust, generalizable manipulation while
maintaining real-time control performance.
|
Liked
|
jechoi@andrew.cmu.edu
|
Reflective VLM Planning for Dual-Arm Desktop Cleaning: Bridging Open-Vocabulary Perception and Precise Manipulation : Desktop cleaning demands open-vocabulary recognition and precise manipulation
for heterogeneous debris. We propose a hierarchical framework integrating
reflective Vision-Language Model (VLM) planning with dual-arm execution via
structured scene representation. Grounded-SAM2 facilitates open-vocabulary
detection, while a memory-augmented VLM generates, critiques, and revises
manipulation sequences. These sequences are converted into parametric
trajectories for five primitives executed by coordinated Franka arms. Evaluated
in simulated scenarios, our system achieves 87.2% task completion, a 28.8%
improvement over a static VLM baseline and 36.2% over single-arm baselines. Structured
memory integration proves crucial for robust, generalizable manipulation while
maintaining real-time control performance.
| 1
|
jechoi@andrew.cmu.edu [SEP] Reflective VLM Planning for Dual-Arm Desktop Cleaning: Bridging Open-Vocabulary Perception and Precise Manipulation : Desktop cleaning demands open-vocabulary recognition and precise manipulation
for heterogeneous debris. We propose a hierarchical framework integrating
reflective Vision-Language Model (VLM) planning with dual-arm execution via
structured scene representation. Grounded-SAM2 facilitates open-vocabulary
detection, while a memory-augmented VLM generates, critiques, and revises
manipulation sequences. These sequences are converted into parametric
trajectories for five primitives executed by coordinated Franka arms. Evaluated
in simulated scenarios, our system achieves 87.2% task completion, a 28.8%
improvement over a static VLM baseline and 36.2% over single-arm baselines. Structured
memory integration proves crucial for robust, generalizable manipulation while
maintaining real-time control performance.
| 556
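
The generate-critique-revise loop with memory that this abstract describes can be sketched as follows. The vlm_generate and vlm_critique methods are placeholders standing in for real VLM calls, and the scene representation is a toy dictionary; none of this reflects the actual system's interfaces.

```python
# Illustrative sketch of a reflective (generate-critique-revise) planner
# with a simple memory buffer. The "VLM" calls are placeholders.
from dataclasses import dataclass, field

@dataclass
class ReflectivePlanner:
    memory: list = field(default_factory=list)   # past (plan, critique) pairs
    max_rounds: int = 3

    def vlm_generate(self, scene, memory):
        # Placeholder: a real system would prompt a VLM with the scene graph
        # and retrieved memory; here we return a fixed toy plan.
        return ["left_arm: pick(cup)", "right_arm: wipe(table)"]

    def vlm_critique(self, scene, plan):
        # Placeholder critique: flag steps that mention objects not in the scene.
        missing = [s for s in plan if not any(o in s for o in scene["objects"])]
        return {"ok": not missing, "issues": missing}

    def plan(self, scene):
        plan = self.vlm_generate(scene, self.memory)
        for _ in range(self.max_rounds):
            review = self.vlm_critique(scene, plan)
            self.memory.append((plan, review))
            if review["ok"]:
                break
            plan = [s for s in plan if s not in review["issues"]]  # revise
        return plan

planner = ReflectivePlanner()
print(planner.plan({"objects": ["cup", "table"]}))
```
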
|
DAG-Plan: Generating Directed Acyclic Dependency Graphs for Dual-Arm Cooperative Planning
|
Dual-arm robots offer enhanced versatility and efficiency over single-arm
counterparts by enabling concurrent manipulation of multiple objects or
cooperative execution of tasks using both arms. However, the coordination of
dual-arm systems for long-horizon tasks continues to pose significant
challenges, stemming from the intricate temporal and spatial dependencies among
sub-tasks, necessitating intelligent decisions regarding the allocation of
actions between arms and their optimal execution order. Existing task planning
methods predominantly focus on single-arm robots or rely on predefined bimanual
operations, using large language models (LLMs) to generate task sequences with
linear temporal dependencies, failing to fully leverage the capabilities of
dual-arm systems. To address this limitation, we introduce DAG-Plan, a
structured task planning framework tailored for dual-arm robots. DAG-Plan
harnesses LLMs to decompose intricate tasks into actionable sub-tasks
represented as nodes within a directed acyclic graph (DAG). Critically,
DAG-Plan dynamically assigns these sub-tasks to the appropriate arm based on
real-time environmental observations, enabling parallel and adaptive execution.
We evaluate DAG-Plan on the Dual-Arm Kitchen Benchmark, comprising 5 sequential
tasks with 44 sub-tasks. Extensive experiments demonstrate the superiority of
DAG-Plan over directly using an LLM to generate a linear task sequence, achieving
52.8% higher efficiency than single-arm task planning and a 48% higher
success rate than dual-arm task planning. Compared to iterative methods,
DAG-Plan improves execution efficiency by 84.1% owing to fewer queries. More
demos and information are available on https://sites.google.com/view/dag-plan.
|
Liked
|
jechoi@andrew.cmu.edu
|
DAG-Plan: Generating Directed Acyclic Dependency Graphs for Dual-Arm Cooperative Planning : Dual-arm robots offer enhanced versatility and efficiency over single-arm
counterparts by enabling concurrent manipulation of multiple objects or
cooperative execution of tasks using both arms. However, the coordination of
dual-arm systems for long-horizon tasks continues to pose significant
challenges, stemming from the intricate temporal and spatial dependencies among
sub-tasks, necessitating intelligent decisions regarding the allocation of
actions between arms and their optimal execution order. Existing task planning
methods predominantly focus on single-arm robots or rely on predefined bimanual
operations, using large language models (LLMs) to generate task sequences with
linear temporal dependencies, failing to fully leverage the capabilities of
dual-arm systems. To address this limitation, we introduce DAG-Plan, a
structured task planning framework tailored for dual-arm robots. DAG-Plan
harnesses LLMs to decompose intricate tasks into actionable sub-tasks
represented as nodes within a directed acyclic graph (DAG). Critically,
DAG-Plan dynamically assigns these sub-tasks to the appropriate arm based on
real-time environmental observations, enabling parallel and adaptive execution.
We evaluate DAG-Plan on the Dual-Arm Kitchen Benchmark, comprising 5 sequential
tasks with 44 sub-tasks. Extensive experiments demonstrate the superiority of
DAG-Plan over directly using an LLM to generate a linear task sequence, achieving
52.8% higher efficiency than single-arm task planning and a 48% higher
success rate than dual-arm task planning. Compared to iterative methods,
DAG-Plan improves execution efficiency by 84.1% owing to fewer queries. More
demos and information are available on https://sites.google.com/view/dag-plan.
| 1
|
jechoi@andrew.cmu.edu [SEP] DAG-Plan: Generating Directed Acyclic Dependency Graphs for Dual-Arm Cooperative Planning : Dual-arm robots offer enhanced versatility and efficiency over single-arm
counterparts by enabling concurrent manipulation of multiple objects or
cooperative execution of tasks using both arms. However, the coordination of
dual-arm systems for long-horizon tasks continues to pose significant
challenges, stemming from the intricate temporal and spatial dependencies among
sub-tasks, necessitating intelligent decisions regarding the allocation of
actions between arms and their optimal execution order. Existing task planning
methods predominantly focus on single-arm robots or rely on predefined bimanual
operations, using large language models (LLMs) to generate task sequences with
linear temporal dependencies, failing to fully leverage the capabilities of
dual-arm systems. To address this limitation, we introduce DAG-Plan, a
structured task planning framework tailored for dual-arm robots. DAG-Plan
harnesses LLMs to decompose intricate tasks into actionable sub-tasks
represented as nodes within a directed acyclic graph (DAG). Critically,
DAG-Plan dynamically assigns these sub-tasks to the appropriate arm based on
real-time environmental observations, enabling parallel and adaptive execution.
We evaluate DAG-Plan on the Dual-Arm Kitchen Benchmark, comprising 5 sequential
tasks with 44 sub-tasks. Extensive experiments demonstrate the superiority of
DAG-Plan over directly using an LLM to generate a linear task sequence, achieving
52.8% higher efficiency than single-arm task planning and a 48% higher
success rate than dual-arm task planning. Compared to iterative methods,
DAG-Plan improves execution efficiency by 84.1% owing to fewer queries. More
demos and information are available on https://sites.google.com/view/dag-plan.
| 487
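
The core scheduling idea above, dispatching sub-tasks whose DAG dependencies are satisfied to whichever arm is free, can be sketched in a few lines. The toy task graph and greedy two-arm dispatch below are illustrative assumptions; in DAG-Plan the nodes, dependencies, and arm assignment come from an LLM plus real-time observations.

```python
# Illustrative sketch: dispatching DAG-dependent sub-tasks to two arms.
deps = {                       # sub-task -> set of prerequisite sub-tasks
    "open_drawer":   set(),
    "pick_spoon":    {"open_drawer"},
    "pick_bowl":     set(),
    "pour_cereal":   {"pick_bowl"},
    "stir":          {"pick_spoon", "pour_cereal"},
}

def schedule(deps, n_arms=2):
    done, steps = set(), []
    remaining = dict(deps)
    while remaining:
        ready = [t for t, pre in remaining.items() if pre <= done]
        if not ready:
            raise ValueError("dependency cycle detected")
        batch = ready[:n_arms]              # run up to one sub-task per arm
        steps.append({f"arm_{i}": t for i, t in enumerate(batch)})
        done.update(batch)
        for t in batch:
            remaining.pop(t)
    return steps

for step in schedule(deps):
    print(step)
```
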
|
Pen and Paper Exercises in Machine Learning
|
This is a collection of (mostly) pen-and-paper exercises in machine learning.
The exercises are on the following topics: linear algebra, optimisation,
directed graphical models, undirected graphical models, expressive power of
graphical models, factor graphs and message passing, inference for hidden
Markov models, model-based learning (including ICA and unnormalised models),
sampling and Monte-Carlo integration, and variational inference.
|
Disliked
|
zrz@andrew.cmu.edu
|
Pen and Paper Exercises in Machine Learning : This is a collection of (mostly) pen-and-paper exercises in machine learning.
The exercises are on the following topics: linear algebra, optimisation,
directed graphical models, undirected graphical models, expressive power of
graphical models, factor graphs and message passing, inference for hidden
Markov models, model-based learning (including ICA and unnormalised models),
sampling and Monte-Carlo integration, and variational inference.
| 0
|
zrz@andrew.cmu.edu [SEP] Pen and Paper Exercises in Machine Learning : This is a collection of (mostly) pen-and-paper exercises in machine learning.
The exercises are on the following topics: linear algebra, optimisation,
directed graphical models, undirected graphical models, expressive power of
graphical models, factor graphs and message passing, inference for hidden
Markov models, model-based learning (including ICA and unnormalised models),
sampling and Monte-Carlo integration, and variational inference.
| 122
|
How to deal with glare for improved perception of Autonomous Vehicles
|
Vision sensors are versatile and can capture a wide range of visual cues,
such as color, texture, shape, and depth. This versatility, along with the
relatively inexpensive availability of machine vision cameras, played an
important role in adopting vision-based environment perception systems in
autonomous vehicles (AVs). However, vision-based perception systems can be
easily affected by glare in the presence of a bright source of light, such as
the sun or the headlights of the oncoming vehicle at night or simply by light
reflecting off snow or ice-covered surfaces; scenarios encountered frequently
during driving. In this paper, we investigate various glare reduction
techniques, including the proposed saturated pixel-aware glare reduction
technique for improved performance of the computer vision (CV) tasks employed
by the perception layer of AVs. We evaluate these glare reduction methods based
on various performance metrics of the CV algorithms used by the perception
layer. Specifically, we considered object detection, object recognition, object
tracking, depth estimation, and lane detection, which are crucial for autonomous
driving. The experimental findings validate the efficacy of the proposed glare
reduction approach, showcasing enhanced performance across diverse perception
tasks and remarkable resilience against varying levels of glare.
|
Liked
|
zrz@andrew.cmu.edu
|
How to deal with glare for improved perception of Autonomous Vehicles : Vision sensors are versatile and can capture a wide range of visual cues,
such as color, texture, shape, and depth. This versatility, along with the
relatively inexpensive availability of machine vision cameras, played an
important role in adopting vision-based environment perception systems in
autonomous vehicles (AVs). However, vision-based perception systems can be
easily affected by glare in the presence of a bright source of light, such as
the sun or the headlights of the oncoming vehicle at night or simply by light
reflecting off snow or ice-covered surfaces; scenarios encountered frequently
during driving. In this paper, we investigate various glare reduction
techniques, including the proposed saturated pixel-aware glare reduction
technique for improved performance of the computer vision (CV) tasks employed
by the perception layer of AVs. We evaluate these glare reduction methods based
on various performance metrics of the CV algorithms used by the perception
layer. Specifically, we considered object detection, object recognition, object
tracking, depth estimation, and lane detection, which are crucial for autonomous
driving. The experimental findings validate the efficacy of the proposed glare
reduction approach, showcasing enhanced performance across diverse perception
tasks and remarkable resilience against varying levels of glare.
| 1
|
zrz@andrew.cmu.edu [SEP] How to deal with glare for improved perception of Autonomous Vehicles : Vision sensors are versatile and can capture a wide range of visual cues,
such as color, texture, shape, and depth. This versatility, along with the
relatively inexpensive availability of machine vision cameras, played an
important role in adopting vision-based environment perception systems in
autonomous vehicles (AVs). However, vision-based perception systems can be
easily affected by glare in the presence of a bright source of light, such as
the sun or the headlights of the oncoming vehicle at night or simply by light
reflecting off snow or ice-covered surfaces; scenarios encountered frequently
during driving. In this paper, we investigate various glare reduction
techniques, including the proposed saturated pixel-aware glare reduction
technique for improved performance of the computer vision (CV) tasks employed
by the perception layer of AVs. We evaluate these glare reduction methods based
on various performance metrics of the CV algorithms used by the perception
layer. Specifically, we considered object detection, object recognition, object
tracking, depth estimation, and lane detection, which are crucial for autonomous
driving. The experimental findings validate the efficacy of the proposed glare
reduction approach, showcasing enhanced performance across diverse perception
tasks and remarkable resilience against varying levels of glare.
| 300
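
One simple stand-in for the saturated-pixel-aware idea mentioned in the abstract above is to mask near-saturated pixels and inpaint them from their surroundings. The threshold, dilation kernel, and file names below are arbitrary assumptions, and plain inpainting is only a crude approximation of the proposed technique.

```python
# Illustrative sketch: saturated-pixel masking plus inpainting with OpenCV.
import cv2
import numpy as np

img = cv2.imread("road_scene.jpg")                     # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Mark near-saturated pixels as glare candidates and dilate to cover halos.
_, mask = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY)
mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)

# Fill the masked region from the surrounding pixels.
deglared = cv2.inpaint(img, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("road_scene_deglared.jpg", deglared)
```
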
|
Multiband NFC for High-Throughput Wireless Computer Vision Sensor Network
|
Vision sensors lie in the heart of computer vision. In many computer vision
applications, such as AR/VR, non-contacting near-field communication (NFC) with
high throughput is required to transfer information to algorithms. In this
work, we proposed a novel NFC system which utilizes multiple frequency bands to
achieve high throughput.
|
Liked
|
zrz@andrew.cmu.edu
|
Multiband NFC for High-Throughput Wireless Computer Vision Sensor Network : Vision sensors lie in the heart of computer vision. In many computer vision
applications, such as AR/VR, non-contacting near-field communication (NFC) with
high throughput is required to transfer information to algorithms. In this
work, we proposed a novel NFC system which utilizes multiple frequency bands to
achieve high throughput.
| 1
|
zrz@andrew.cmu.edu [SEP] Multiband NFC for High-Throughput Wireless Computer Vision Sensor Network : Vision sensors lie in the heart of computer vision. In many computer vision
applications, such as AR/VR, non-contacting near-field communication (NFC) with
high throughput is required to transfer information to algorithms. In this
work, we proposed a novel NFC system which utilizes multiple frequency bands to
achieve high throughput.
| 337
|
PACC: A Passive-Arm Approach for High-Payload Collaborative Carrying with Quadruped Robots Using Model Predictive Control
|
In this paper, we introduce the concept of using passive arm structures with
intrinsic impedance for robot-robot and human-robot collaborative carrying with
quadruped robots. The concept is meant for a leader-follower task and takes a
minimalist approach that focuses on exploiting the robots' payload capabilities
and reducing energy consumption, without compromising the robot locomotion
capabilities. We introduce a preliminary arm mechanical design and describe how
to use its joint displacements to guide the robot's motion. To control the
robot's locomotion, we propose a decentralized Model Predictive Controller that
incorporates an approximation of the arm dynamics and the estimation of the
external forces from the collaborative carrying. We validate the overall system
experimentally by performing both robot-robot and human-robot collaborative
carrying on a stair-like obstacle and on rough terrain.
|
Disliked
|
jechoi@andrew.cmu.edu
|
PACC: A Passive-Arm Approach for High-Payload Collaborative Carrying with Quadruped Robots Using Model Predictive Control : In this paper, we introduce the concept of using passive arm structures with
intrinsic impedance for robot-robot and human-robot collaborative carrying with
quadruped robots. The concept is meant for a leader-follower task and takes a
minimalist approach that focuses on exploiting the robots' payload capabilities
and reducing energy consumption, without compromising the robot locomotion
capabilities. We introduce a preliminary arm mechanical design and describe how
to use its joint displacements to guide the robot's motion. To control the
robot's locomotion, we propose a decentralized Model Predictive Controller that
incorporates an approximation of the arm dynamics and the estimation of the
external forces from the collaborative carrying. We validate the overall system
experimentally by performing both robot-robot and human-robot collaborative
carrying on a stair-like obstacle and on rough terrain.
| 0
|
jechoi@andrew.cmu.edu [SEP] PACC: A Passive-Arm Approach for High-Payload Collaborative Carrying with Quadruped Robots Using Model Predictive Control : In this paper, we introduce the concept of using passive arm structures with
intrinsic impedance for robot-robot and human-robot collaborative carrying with
quadruped robots. The concept is meant for a leader-follower task and takes a
minimalist approach that focuses on exploiting the robots' payload capabilities
and reducing energy consumption, without compromising the robot locomotion
capabilities. We introduce a preliminary arm mechanical design and describe how
to use its joint displacements to guide the robot's motion. To control the
robot's locomotion, we propose a decentralized Model Predictive Controller that
incorporates an approximation of the arm dynamics and the estimation of the
external forces from the collaborative carrying. We validate the overall system
experimentally by performing both robot-robot and human-robot collaborative
carrying on a stair-like obstacle and on rough terrain.
| 23
|
Tuning Learning Rates with the Cumulative-Learning Constant
|
This paper introduces a novel method for optimizing learning rates in machine
learning. A previously unrecognized proportionality between learning rates and
dataset sizes is discovered, providing valuable insights into how dataset scale
influences training dynamics. Additionally, a cumulative learning constant is
identified, offering a framework for designing and optimizing advanced learning
rate schedules. These findings have the potential to enhance training
efficiency and performance across a wide range of machine learning
applications.
|
Liked
|
zrz@andrew.cmu.edu
|
Tuning Learning Rates with the Cumulative-Learning Constant : This paper introduces a novel method for optimizing learning rates in machine
learning. A previously unrecognized proportionality between learning rates and
dataset sizes is discovered, providing valuable insights into how dataset scale
influences training dynamics. Additionally, a cumulative learning constant is
identified, offering a framework for designing and optimizing advanced learning
rate schedules. These findings have the potential to enhance training
efficiency and performance across a wide range of machine learning
applications.
| 1
|
zrz@andrew.cmu.edu [SEP] Tuning Learning Rates with the Cumulative-Learning Constant : This paper introduces a novel method for optimizing learning rates in machine
learning. A previously unrecognized proportionality between learning rates and
dataset sizes is discovered, providing valuable insights into how dataset scale
influences training dynamics. Additionally, a cumulative learning constant is
identified, offering a framework for designing and optimizing advanced learning
rate schedules. These findings have the potential to enhance training
efficiency and performance across a wide range of machine learning
applications.
| 43
|
Multi-agent Collaborative Perception for Robotic Fleet: A Systematic Review
|
Collaborative perception is a way to harness the collective power of
multi-robot fleets. Collaborative perception refers to the
collective ability of multiple entities or agents to share and integrate their
sensory information for a more comprehensive understanding of their
environment. In other words, it involves the collaboration and fusion of data
from various sensors or sources to enhance perception and decision-making
capabilities. By combining data from diverse sources, such as cameras, lidar,
radar, or other sensors, the system can create a more accurate and robust
representation of the environment. In this review paper, we have summarized
findings from 20+ research papers on collaborative perception. Moreover, we
discuss testing and evaluation frameworks commonly accepted in academia and
industry for autonomous vehicles and autonomous mobile robots. Our experiments
with the trivial perception module show an improvement of over 200% with
collaborative perception compared to individual robot perception. Here's our
GitHub repository that shows the benefits of collaborative perception:
https://github.com/synapsemobility/synapseBEV
|
Liked
|
zrz@andrew.cmu.edu
|
Multi-agent Collaborative Perception for Robotic Fleet: A Systematic Review : Collaborative perception is a way to harness the collective power of
multi-robot fleets. Collaborative perception refers to the
collective ability of multiple entities or agents to share and integrate their
sensory information for a more comprehensive understanding of their
environment. In other words, it involves the collaboration and fusion of data
from various sensors or sources to enhance perception and decision-making
capabilities. By combining data from diverse sources, such as cameras, lidar,
radar, or other sensors, the system can create a more accurate and robust
representation of the environment. In this review paper, we have summarized
findings from 20+ research papers on collaborative perception. Moreover, we
discuss testing and evaluation frameworks commonly accepted in academia and
industry for autonomous vehicles and autonomous mobile robots. Our experiments
with the trivial perception module show an improvement of over 200% with
collaborative perception compared to individual robot perception. Here's our
GitHub repository that shows the benefits of collaborative perception:
https://github.com/synapsemobility/synapseBEV
| 1
|
zrz@andrew.cmu.edu [SEP] Multi-agent Collaborative Perception for Robotic Fleet: A Systematic Review : Collaborative perception is a way to harness the collective power of
multi-robot fleets. Collaborative perception refers to the
collective ability of multiple entities or agents to share and integrate their
sensory information for a more comprehensive understanding of their
environment. In other words, it involves the collaboration and fusion of data
from various sensors or sources to enhance perception and decision-making
capabilities. By combining data from diverse sources, such as cameras, lidar,
radar, or other sensors, the system can create a more accurate and robust
representation of the environment. In this review paper, we have summarized
findings from 20+ research papers on collaborative perception. Moreover, we
discuss testing and evaluation frameworks commonly accepted in academia and
industry for autonomous vehicles and autonomous mobile robots. Our experiments
with the trivial perception module show an improvement of over 200% with
collaborative perception compared to individual robot perception. Here's our
GitHub repository that shows the benefits of collaborative perception:
https://github.com/synapsemobility/synapseBEV
| 279
|
V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects
|
Manipulating articulated objects requires multiple robot arms in general. It
is challenging to enable multiple robot arms to collaboratively complete
manipulation tasks on articulated objects. In this paper, we present
$\textbf{V-MAO}$, a framework for learning multi-arm manipulation of
articulated objects. Our framework includes a variational generative model that
learns contact point distribution over object rigid parts for each robot arm.
The training signal is obtained from interaction with the simulation
environment which is enabled by planning and a novel formulation of
object-centric control for articulated objects. We deploy our framework in a
customized MuJoCo simulation environment and demonstrate that our framework
achieves a high success rate on six different objects and two different robots.
We also show that generative modeling can effectively learn the contact point
distribution on articulated objects.
|
Liked
|
jechoi@andrew.cmu.edu
|
V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects : Manipulating articulated objects requires multiple robot arms in general. It
is challenging to enable multiple robot arms to collaboratively complete
manipulation tasks on articulated objects. In this paper, we present
$\textbf{V-MAO}$, a framework for learning multi-arm manipulation of
articulated objects. Our framework includes a variational generative model that
learns contact point distribution over object rigid parts for each robot arm.
The training signal is obtained from interaction with the simulation
environment which is enabled by planning and a novel formulation of
object-centric control for articulated objects. We deploy our framework in a
customized MuJoCo simulation environment and demonstrate that our framework
achieves a high success rate on six different objects and two different robots.
We also show that generative modeling can effectively learn the contact point
distribution on articulated objects.
| 1
|
jechoi@andrew.cmu.edu [SEP] V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects : Manipulating articulated objects requires multiple robot arms in general. It
is challenging to enable multiple robot arms to collaboratively complete
manipulation tasks on articulated objects. In this paper, we present
$\textbf{V-MAO}$, a framework for learning multi-arm manipulation of
articulated objects. Our framework includes a variational generative model that
learns contact point distribution over object rigid parts for each robot arm.
The training signal is obtained from interaction with the simulation
environment which is enabled by planning and a novel formulation of
object-centric control for articulated objects. We deploy our framework in a
customized MuJoCo simulation environment and demonstrate that our framework
achieves a high success rate on six different objects and two different robots.
We also show that generative modeling can effectively learn the contact point
distribution on articulated objects.
| 498
|
Deep Embedding Kernel
|
In this paper, we propose a novel supervised learning method that is called
Deep Embedding Kernel (DEK). DEK combines the advantages of deep learning and
kernel methods in a unified framework. More specifically, DEK is a learnable
kernel represented by a newly designed deep architecture. Compared with
pre-defined kernels, this kernel can be explicitly trained to map data to an
optimized high-level feature space where data may have favorable features
toward the application. Compared with typical deep learning using SoftMax or
logistic regression as the top layer, DEK is expected to be more generalizable
to new data. Experimental results show that DEK has superior performance to
typical machine learning methods in identity detection, classification,
regression, dimension reduction, and transfer learning.
|
Disliked
|
zrz@andrew.cmu.edu
|
Deep Embedding Kernel : In this paper, we propose a novel supervised learning method that is called
Deep Embedding Kernel (DEK). DEK combines the advantages of deep learning and
kernel methods in a unified framework. More specifically, DEK is a learnable
kernel represented by a newly designed deep architecture. Compared with
pre-defined kernels, this kernel can be explicitly trained to map data to an
optimized high-level feature space where data may have favorable features
toward the application. Compared with typical deep learning using SoftMax or
logistic regression as the top layer, DEK is expected to be more generalizable
to new data. Experimental results show that DEK has superior performance to
typical machine learning methods in identity detection, classification,
regression, dimension reduction, and transfer learning.
| 0
|
zrz@andrew.cmu.edu [SEP] Deep Embedding Kernel : In this paper, we propose a novel supervised learning method that is called
Deep Embedding Kernel (DEK). DEK combines the advantages of deep learning and
kernel methods in a unified framework. More specifically, DEK is a learnable
kernel represented by a newly designed deep architecture. Compared with
pre-defined kernels, this kernel can be explicitly trained to map data to an
optimized high-level feature space where data may have favorable features
toward the application. Compared with typical deep learning using SoftMax or
logistic regression as the top layer, DEK is expected to be more generalizable
to new data. Experimental results show that DEK has superior performance to
typical machine learning methods in identity detection, classification,
regression, dimension reduction, and transfer learning.
| 215
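
The learnable-kernel idea can be sketched as an embedding network feeding a small pairwise network that outputs a kernel value in (0, 1). Layer sizes, the symmetric pairing scheme, and the toy same-class training signal below are assumptions for illustration and do not reproduce the actual DEK architecture or training objective.

```python
# Illustrative sketch of a learnable deep kernel over pairs of inputs.
import torch
import torch.nn as nn

class DeepEmbeddingKernel(nn.Module):
    def __init__(self, in_dim, emb_dim=16):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                   nn.Linear(32, emb_dim), nn.ReLU())
        # Kernel network consumes a symmetric combination of the two embeddings.
        self.kernel = nn.Sequential(nn.Linear(2 * emb_dim, 16), nn.ReLU(),
                                    nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, x1, x2):
        e1, e2 = self.embed(x1), self.embed(x2)
        pair = torch.cat([e1 * e2, (e1 - e2).abs()], dim=-1)  # order-invariant
        return self.kernel(pair).squeeze(-1)

# Toy training signal: k(x1, x2) should be 1 for same-class pairs, 0 otherwise.
model = DeepEmbeddingKernel(in_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x1, x2 = torch.randn(64, 8), torch.randn(64, 8)
same = torch.randint(0, 2, (64,)).float()
loss = nn.functional.binary_cross_entropy(model(x1, x2), same)
loss.backward()
opt.step()
```
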
|
Development of an Intuitive Foot-Machine Interface for Robotic Surgery
|
The human-machine interface is of critical importance for master-slave
control of the robotic system for surgery, in which current systems offer the
control of two robotic arms teleoperated by the surgeon's hands. To relax the
need for surgical assistants and augment dexterity in surgery, it has been
recently proposed to use a robot like a third arm that can be controlled
seamlessly, independently from the natural arms, and work together with them.
This report will develop and investigate this concept by implementing foot
control of a robotic surgical arm. A novel passive haptic foot-machine
interface system and an analysis of its performance are introduced in this
report. This interface uses a parallel-serial hybrid structure with springs
and force sensors, which allows intuitive control of a slave robotic arm with
four degrees of freedom (dof). The elastic isometric design enables a user to
control the interface system accurately and adaptively, with an enlarged
sensing range that breaks the physical restriction of the pedal size. A
subject-specific independent component analysis (ICA) model is identified to map the
surgeon's foot movements into kinematic parameters of the slave robotic arm. To
validate the system and assess the performance it allows, 10 subjects carried
out experiments to manipulate the foot-machine interface system in various
movements. With these experimental data, the mapping models were built and
verified. A comparison between the different mapping models was made and
analyzed, showing that the ICA algorithm clearly outperforms the other methods.
|
Liked
|
jechoi@andrew.cmu.edu
|
Development of an Intuitive Foot-Machine Interface for Robotic Surgery : The human-machine interface is of critical importance for master-slave
control of the robotic system for surgery, in which current systems offer the
control of two robotic arms teleoperated by the surgeon's hands. To relax the
need for surgical assistants and augment dexterity in surgery, it has been
recently proposed to use a robot like a third arm that can be controlled
seamlessly, independently from the natural arms, and work together with them.
This report will develop and investigate this concept by implementing foot
control of a robotic surgical arm. A novel passive haptic foot-machine
interface system and an analysis of its performance are introduced in this
report. This interface uses a parallel-serial hybrid structure with springs
and force sensors, which allows intuitive control of a slave robotic arm with
four degrees of freedom (dof). The elastic isometric design enables a user to
control the interface system accurately and adaptively, with an enlarged
sensing range that breaks the physical restriction of the pedal size. A
subject-specific independent component analysis (ICA) model is identified to map the
surgeon's foot movements into kinematic parameters of the slave robotic arm. To
validate the system and assess the performance it allows, 10 subjects carried
out experiments to manipulate the foot-machine interface system in various
movements. With these experimental data, the mapping models were built and
verified. A comparison between the different mapping models was made and
analyzed, showing that the ICA algorithm clearly outperforms the other methods.
| 1
|
jechoi@andrew.cmu.edu [SEP] Development of an Intuitive Foot-Machine Interface for Robotic Surgery : The human-machine interface is of critical importance for master-slave
control of the robotic system for surgery, in which current systems offer the
control of two robotic arms teleoperated by the surgeon's hands. To relax the
need for surgical assistants and augment dexterity in surgery, it has been
recently proposed to use a robot like a third arm that can be controlled
seamlessly, independently from the natural arms, and work together with them.
This report will develop and investigate this concept by implementing foot
control of a robotic surgical arm. A novel passive haptic foot-machine
interface system and an analysis of its performance are introduced in this
report. This interface uses a parallel-serial hybrid structure with springs
and force sensors, which allows intuitive control of a slave robotic arm with
four degrees of freedom (dof). The elastic isometric design enables a user to
control the interface system accurately and adaptively, with an enlarged
sensing range that breaks the physical restriction of the pedal size. A
subject-specific independent component analysis (ICA) model is identified to map the
surgeon's foot movements into kinematic parameters of the slave robotic arm. To
validate the system and assess the performance it allows, 10 subjects carried
out experiments to manipulate the foot-machine interface system in various
movements. With these experimental data, the mapping models were built and
verified. A comparison between the different mapping models was made and
analyzed, showing that the ICA algorithm clearly outperforms the other methods.
| 483
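
A minimal sketch of a subject-specific ICA mapping, assuming synthetic foot-sensor signals and a simple linear read-out to four arm parameters (both invented here), could look like the following; the paper identifies its mapping from recorded user data rather than simulated sources.

```python
# Illustrative sketch: ICA on mixed foot-interface channels, then a linear
# read-out to 4-DoF arm commands. Signals and targets are synthetic.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 500)
sources = np.column_stack([np.sin(2 * t), np.sign(np.sin(3 * t)),
                           np.cos(0.5 * t), rng.standard_normal(t.size)])
foot_signals = sources @ rng.standard_normal((4, 6))   # 6 mixed sensor channels

# Recover independent components from the foot sensors ...
ica = FastICA(n_components=4, random_state=0)
components = ica.fit_transform(foot_signals)            # shape (500, 4)

# ... and fit a linear read-out to 4-DoF arm commands (fake targets here).
arm_targets = sources[:, :4] * np.array([0.2, 0.1, 0.3, 0.05])
readout = LinearRegression().fit(components, arm_targets)
print("read-out R^2:", round(readout.score(components, arm_targets), 3))
```
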
|
Domain Knowledge in Artificial Intelligence: Using Conceptual Modeling to Increase Machine Learning Accuracy and Explainability
|
Machine learning enables the extraction of useful information from large,
diverse datasets. However, despite many successful applications, machine
learning continues to suffer from performance and transparency issues. These
challenges can be partially attributed to the limited use of domain knowledge
by machine learning models. This research proposes using the domain knowledge
represented in conceptual models to improve the preparation of the data used to
train machine learning models. We develop and demonstrate a method, called the
Conceptual Modeling for Machine Learning (CMML), which is comprised of
guidelines for data preparation in machine learning and based on conceptual
modeling constructs and principles. To assess the impact of CMML on machine
learning outcomes, we first applied it to two real-world problems to evaluate
its impact on model performance. We then solicited an assessment by data
scientists on the applicability of the method. These results demonstrate the
value of CMML for improving machine learning outcomes.
|
Liked
|
zrz@andrew.cmu.edu
|
Domain Knowledge in Artificial Intelligence: Using Conceptual Modeling to Increase Machine Learning Accuracy and Explainability : Machine learning enables the extraction of useful information from large,
diverse datasets. However, despite many successful applications, machine
learning continues to suffer from performance and transparency issues. These
challenges can be partially attributed to the limited use of domain knowledge
by machine learning models. This research proposes using the domain knowledge
represented in conceptual models to improve the preparation of the data used to
train machine learning models. We develop and demonstrate a method, called the
Conceptual Modeling for Machine Learning (CMML), which is comprised of
guidelines for data preparation in machine learning and based on conceptual
modeling constructs and principles. To assess the impact of CMML on machine
learning outcomes, we first applied it to two real-world problems to evaluate
its impact on model performance. We then solicited an assessment by data
scientists on the applicability of the method. These results demonstrate the
value of CMML for improving machine learning outcomes.
| 1
|
zrz@andrew.cmu.edu [SEP] Domain Knowledge in Artificial Intelligence: Using Conceptual Modeling to Increase Machine Learning Accuracy and Explainability : Machine learning enables the extraction of useful information from large,
diverse datasets. However, despite many successful applications, machine
learning continues to suffer from performance and transparency issues. These
challenges can be partially attributed to the limited use of domain knowledge
by machine learning models. This research proposes using the domain knowledge
represented in conceptual models to improve the preparation of the data used to
train machine learning models. We develop and demonstrate a method, called the
Conceptual Modeling for Machine Learning (CMML), which is comprised of
guidelines for data preparation in machine learning and based on conceptual
modeling constructs and principles. To assess the impact of CMML on machine
learning outcomes, we first applied it to two real-world problems to evaluate
its impact on model performance. We then solicited an assessment by data
scientists on the applicability of the method. These results demonstrate the
value of CMML for improving machine learning outcomes.
| 116
|
Neurofeedback-Driven 6-DOF Robotic Arm: Integration of Brain-Computer Interface with Arduino for Advanced Control
|
Brain-computer interface (BCI) applications in robotics are becoming
increasingly popular. People with disabilities face everyday difficulties with
simple activities such as grasping and handshaking; to help with this problem,
the use of brain signals to control actuators is of great importance. The
Emotive Insight, a Brain-Computer Interface (BCI) device, is utilized in this
project to collect brain signals and transform them into commands for
controlling a robotic arm through an Arduino controller. The Emotive Insight
captures brain signals, which are subsequently analyzed using the Emotive
software and connected with the Arduino code. The HITI Brain software
integrates these devices, allowing for smooth communication between brain
activity and the robotic arm. This system demonstrates how brain impulses may
be used to control external devices directly. The results showed that the
system can be applied efficiently to robotic arms as well as prosthetic arms
with multiple degrees of freedom. In addition, the system can be used for other
actuators such as bikes, mobile robots, and wheelchairs.
|
Liked
|
jechoi@andrew.cmu.edu
|
Neurofeedback-Driven 6-DOF Robotic Arm: Integration of Brain-Computer Interface with Arduino for Advanced Control : Brain-computer interface (BCI) applications in robotics are becoming
increasingly popular. People with disabilities face everyday difficulties with
simple activities such as grasping and handshaking; to help with this problem,
the use of brain signals to control actuators is of great importance. The
Emotive Insight, a Brain-Computer Interface (BCI) device, is utilized in this
project to collect brain signals and transform them into commands for
controlling a robotic arm through an Arduino controller. The Emotive Insight
captures brain signals, which are subsequently analyzed using the Emotive
software and connected with the Arduino code. The HITI Brain software
integrates these devices, allowing for smooth communication between brain
activity and the robotic arm. This system demonstrates how brain impulses may
be used to control external devices directly. The results showed that the
system can be applied efficiently to robotic arms as well as prosthetic arms
with multiple degrees of freedom. In addition, the system can be used for other
actuators such as bikes, mobile robots, and wheelchairs.
| 1
|
jechoi@andrew.cmu.edu [SEP] Neurofeedback-Driven 6-DOF Robotic Arm: Integration of Brain-Computer Interface with Arduino for Advanced Control : Brain-computer interface (BCI) applications in robotics are becoming
increasingly popular. People with disabilities face everyday difficulties with
simple activities such as grasping and handshaking; to help with this problem,
the use of brain signals to control actuators is of great importance. The
Emotive Insight, a Brain-Computer Interface (BCI) device, is utilized in this
project to collect brain signals and transform them into commands for
controlling a robotic arm through an Arduino controller. The Emotive Insight
captures brain signals, which are subsequently analyzed using the Emotive
software and connected with the Arduino code. The HITI Brain software
integrates these devices, allowing for smooth communication between brain
activity and the robotic arm. This system demonstrates how brain impulses may
be used to control external devices directly. The results showed that the
system can be applied efficiently to robotic arms as well as prosthetic arms
with multiple degrees of freedom. In addition, the system can be used for other
actuators such as bikes, mobile robots, and wheelchairs.
| 448
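
As an illustration of the signal-to-actuator link described above, the sketch below forwards a classified mental-command label to an Arduino over a serial link using pyserial. The port name, baud rate, and one-byte command protocol are assumptions; the actual project relies on the Emotive and HITI Brain tool chain rather than this hand-rolled mapping.

```python
# Illustrative sketch: send a classified "mental command" to an Arduino-driven arm.
import time
import serial  # pyserial

COMMANDS = {"grasp": b"G", "release": b"R", "neutral": b"N"}

def send_command(port, label):
    """Map a classified brain-signal label to a one-byte serial command."""
    with serial.Serial(port, 9600, timeout=1) as link:
        time.sleep(2)                      # allow the Arduino to reset
        link.write(COMMANDS.get(label, b"N"))

send_command("/dev/ttyACM0", "grasp")      # hypothetical port and label
```
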
|
One-Shot Dual-Arm Imitation Learning
|
We introduce One-Shot Dual-Arm Imitation Learning (ODIL), which enables
dual-arm robots to learn precise and coordinated everyday tasks from just a
single demonstration of the task. ODIL uses a new three-stage visual servoing
(3-VS) method for precise alignment between the end-effector and target object,
after which replay of the demonstration trajectory is sufficient to perform the
task. This is achieved without requiring prior task or object knowledge, or
additional data collection and training following the single demonstration.
Furthermore, we propose a new dual-arm coordination paradigm for learning
dual-arm tasks from a single demonstration. ODIL was tested on a real-world
dual-arm robot, demonstrating state-of-the-art performance across six precise
and coordinated tasks in both 4-DoF and 6-DoF settings, and showing robustness
in the presence of distractor objects and partial occlusions. Videos are
available at: https://www.robot-learning.uk/one-shot-dual-arm.
|
Liked
|
jechoi@andrew.cmu.edu
|
One-Shot Dual-Arm Imitation Learning : We introduce One-Shot Dual-Arm Imitation Learning (ODIL), which enables
dual-arm robots to learn precise and coordinated everyday tasks from just a
single demonstration of the task. ODIL uses a new three-stage visual servoing
(3-VS) method for precise alignment between the end-effector and target object,
after which replay of the demonstration trajectory is sufficient to perform the
task. This is achieved without requiring prior task or object knowledge, or
additional data collection and training following the single demonstration.
Furthermore, we propose a new dual-arm coordination paradigm for learning
dual-arm tasks from a single demonstration. ODIL was tested on a real-world
dual-arm robot, demonstrating state-of-the-art performance across six precise
and coordinated tasks in both 4-DoF and 6-DoF settings, and showing robustness
in the presence of distractor objects and partial occlusions. Videos are
available at: https://www.robot-learning.uk/one-shot-dual-arm.
| 1
|
jechoi@andrew.cmu.edu [SEP] One-Shot Dual-Arm Imitation Learning : We introduce One-Shot Dual-Arm Imitation Learning (ODIL), which enables
dual-arm robots to learn precise and coordinated everyday tasks from just a
single demonstration of the task. ODIL uses a new three-stage visual servoing
(3-VS) method for precise alignment between the end-effector and target object,
after which replay of the demonstration trajectory is sufficient to perform the
task. This is achieved without requiring prior task or object knowledge, or
additional data collection and training following the single demonstration.
Furthermore, we propose a new dual-arm coordination paradigm for learning
dual-arm tasks from a single demonstration. ODIL was tested on a real-world
dual-arm robot, demonstrating state-of-the-art performance across six precise
and coordinated tasks in both 4-DoF and 6-DoF settings, and showing robustness
in the presence of distractor objects and partial occlusions. Videos are
available at: https://www.robot-learning.uk/one-shot-dual-arm.
| 443
|
Harnessing The Multi-Stability Of Kresling Origami For Reconfigurable Articulation In Soft Robotic Arms
|
This study examines a biology-inspired approach of using reconfigurable
articulation to reduce the control requirement for soft robotic arms. We
construct a robotic arm by assembling Kresling origami modules that exhibit
predictable bistability. Via switching between their two stable states, these
origami modules can behave either like a flexible joint with low bending
stiffness or like a stiff link with high stiffness, without requiring any
continuous power supply. In this way, the robotic arm can exhibit
pseudo-linkage kinematics with lower control requirements and improved motion
accuracy. A unique advantage of using origami as the robotic arm skeleton is
that its bending stiffness ratio between stable states is directly related to
the underlying Kresling design. Therefore, we conduct extensive parametric
analyses and experimental validations to identify the optimized Kresling
pattern for articulation. The results indicate that a higher angle ratio, a
smaller resting length at contracted stable state, and a large number of
polygon sides can offer more significant and robust bending stiffness tuning.
Based on this insight, we construct a proof-of-concept, tendon-driven robotic
arm consisting of three modules, and show that it can exhibit the desired
reconfigurable articulation behavior. Moreover, the deformations of this
manipulator are consistent with kinematic model predictions, which validate the
possibility of using simple controllers for such compliant robotic systems.
|
Liked
|
jechoi@andrew.cmu.edu
|
Harnessing The Multi-Stability Of Kresling Origami For Reconfigurable Articulation In Soft Robotic Arms : This study examines a biology-inspired approach of using reconfigurable
articulation to reduce the control requirement for soft robotic arms. We
construct a robotic arm by assembling Kresling origami modules that exhibit
predictable bistability. Via switching between their two stable states, these
origami modules can behave either like a flexible joint with low bending
stiffness or like a stiff link with high stiffness, without requiring any
continuous power supply. In this way, the robotic arm can exhibit
pseudo-linkage kinematics with lower control requirements and improved motion
accuracy. A unique advantage of using origami as the robotic arm skeleton is
that its bending stiffness ratio between stable states is directly related to
the underlying Kresling design. Therefore, we conduct extensive parametric
analyses and experimental validations to identify the optimized Kresling
pattern for articulation. The results indicate that a higher angle ratio, a
smaller resting length at the contracted stable state, and a large number of
polygon sides can offer more significant and robust bending stiffness tuning.
Based on this insight, we construct a proof-of-concept, tendon-driven robotic
arm consisting of three modules, and show that it can exhibit the desired
reconfigurable articulation behavior. Moreover, the deformations of this
manipulator are consistent with kinematic model predictions, which validate the
possibility of using simple controllers for such compliant robotic systems.
| 1
|
jechoi@andrew.cmu.edu [SEP] Harnessing The Multi-Stability Of Kresling Origami For Reconfigurable Articulation In Soft Robotic Arms : This study examines a biology-inspired approach of using reconfigurable
articulation to reduce the control requirement for soft robotic arms. We
construct a robotic arm by assembling Kresling origami modules that exhibit
predictable bistability. Via switching between their two stable states, these
origami modules can behave either like a flexible joint with low bending
stiffness or like a stiff link with high stiffness, without requiring any
continuous power supply. In this way, the robotic arm can exhibit
pseudo-linkage kinematics with lower control requirements and improved motion
accuracy. A unique advantage of using origami as the robotic arm skeleton is
that its bending stiffness ratio between stable states is directly related to
the underlying Kresling design. Therefore, we conduct extensive parametric
analyses and experimental validations to identify the optimized Kresling
pattern for articulation. The results indicate that a higher angle ratio, a
smaller resting length at the contracted stable state, and a large number of
polygon sides can offer more significant and robust bending stiffness tuning.
Based on this insight, we construct a proof-of-concept, tendon-driven robotic
arm consisting of three modules, and show that it can exhibit the desired
reconfigurable articulation behavior. Moreover, the deformations of this
manipulator are consistent with kinematic model predictions, which validate the
possibility of using simple controllers for such compliant robotic systems.
| 484
|
Survey on LiDAR Perception in Adverse Weather Conditions
|
Autonomous vehicles rely on a variety of sensors to gather information about
their surroundings. The vehicle's behavior is planned based on the environment
perception, making its reliability crucial for safety reasons. The active LiDAR
sensor is able to create an accurate 3D representation of a scene, making it a
valuable addition for environment perception for autonomous vehicles. Due to
light scattering and occlusion, the LiDAR's performance changes under adverse
weather conditions like fog, snow or rain. This limitation recently fostered a
large body of research on approaches to alleviate the decrease in perception
performance. In this survey, we gathered, analyzed, and discussed different
aspects of dealing with adverse weather conditions in LiDAR-based environment
perception. We address topics such as the availability of appropriate data, raw
point cloud processing and denoising, robust perception algorithms and sensor
fusion to mitigate adverse weather induced shortcomings. We furthermore
identify the most pressing gaps in the current literature and pinpoint
promising research directions.
|
Disliked
|
zrz@andrew.cmu.edu
|
Survey on LiDAR Perception in Adverse Weather Conditions : Autonomous vehicles rely on a variety of sensors to gather information about
their surroundings. The vehicle's behavior is planned based on the environment
perception, making its reliability crucial for safety reasons. The active LiDAR
sensor is able to create an accurate 3D representation of a scene, making it a
valuable addition for environment perception for autonomous vehicles. Due to
light scattering and occlusion, the LiDAR's performance changes under adverse
weather conditions like fog, snow or rain. This limitation recently fostered a
large body of research on approaches to alleviate the decrease in perception
performance. In this survey, we gathered, analyzed, and discussed different
aspects of dealing with adverse weather conditions in LiDAR-based environment
perception. We address topics such as the availability of appropriate data, raw
point cloud processing and denoising, robust perception algorithms and sensor
fusion to mitigate adverse weather induced shortcomings. We furthermore
identify the most pressing gaps in the current literature and pinpoint
promising research directions.
| 0
|
zrz@andrew.cmu.edu [SEP] Survey on LiDAR Perception in Adverse Weather Conditions : Autonomous vehicles rely on a variety of sensors to gather information about
their surroundings. The vehicle's behavior is planned based on the environment
perception, making its reliability crucial for safety reasons. The active LiDAR
sensor is able to create an accurate 3D representation of a scene, making it a
valuable addition for environment perception for autonomous vehicles. Due to
light scattering and occlusion, the LiDAR's performance changes under adverse
weather conditions like fog, snow or rain. This limitation recently fostered a
large body of research on approaches to alleviate the decrease in perception
performance. In this survey, we gathered, analyzed, and discussed different
aspects of dealing with adverse weather conditions in LiDAR-based environment
perception. We address topics such as the availability of appropriate data, raw
point cloud processing and denoising, robust perception algorithms and sensor
fusion to mitigate adverse weather induced shortcomings. We furthermore
identify the most pressing gaps in the current literature and pinpoint
promising research directions.
| 334
|
Interruption-Aware Cooperative Perception for V2X Communication-Aided Autonomous Driving
|
Cooperative perception can significantly improve the perception performance
of autonomous vehicles beyond the limited perception ability of individual
vehicles by exchanging information with neighbor agents through V2X
communication. However, most existing work assumes ideal communication among
agents, ignoring the significant and common \textit{interruption issues} caused
by imperfect V2X communication, where cooperating agents cannot receive
cooperative messages successfully and thus fail to achieve cooperative
perception, leading to safety risks. To fully reap the benefits of cooperative
perception in practice, we propose V2X communication INterruption-aware
COoperative Perception (V2X-INCOP), a cooperative perception system robust to
communication interruption for V2X communication-aided autonomous driving,
which leverages historical cooperation information to recover missing
information due to the interruptions and alleviate the impact of the
interruption issue. To achieve comprehensive recovery, we design a
communication-adaptive multi-scale spatial-temporal prediction model to extract
multi-scale spatial-temporal features based on V2X communication conditions and
capture the most significant information for the prediction of the missing
information. To further improve recovery performance, we adopt a knowledge
distillation framework to give explicit and direct supervision to the
prediction model and a curriculum learning strategy to stabilize the training
of the model. Experiments on three public cooperative perception datasets
demonstrate that the proposed method is effective in alleviating the impacts of
communication interruption on cooperative perception.
|
Liked
|
zrz@andrew.cmu.edu
|
Interruption-Aware Cooperative Perception for V2X Communication-Aided Autonomous Driving : Cooperative perception can significantly improve the perception performance
of autonomous vehicles beyond the limited perception ability of individual
vehicles by exchanging information with neighbor agents through V2X
communication. However, most existing work assumes ideal communication among
agents, ignoring the significant and common \textit{interruption issues} caused
by imperfect V2X communication, where cooperating agents cannot receive
cooperative messages successfully and thus fail to achieve cooperative
perception, leading to safety risks. To fully reap the benefits of cooperative
perception in practice, we propose V2X communication INterruption-aware
COoperative Perception (V2X-INCOP), a cooperative perception system robust to
communication interruption for V2X communication-aided autonomous driving,
which leverages historical cooperation information to recover missing
information due to the interruptions and alleviate the impact of the
interruption issue. To achieve comprehensive recovery, we design a
communication-adaptive multi-scale spatial-temporal prediction model to extract
multi-scale spatial-temporal features based on V2X communication conditions and
capture the most significant information for the prediction of the missing
information. To further improve recovery performance, we adopt a knowledge
distillation framework to give explicit and direct supervision to the
prediction model and a curriculum learning strategy to stabilize the training
of the model. Experiments on three public cooperative perception datasets
demonstrate that the proposed method is effective in alleviating the impacts of
communication interruption on cooperative perception.
| 1
|
zrz@andrew.cmu.edu [SEP] Interruption-Aware Cooperative Perception for V2X Communication-Aided Autonomous Driving : Cooperative perception can significantly improve the perception performance
of autonomous vehicles beyond the limited perception ability of individual
vehicles by exchanging information with neighbor agents through V2X
communication. However, most existing work assumes ideal communication among
agents, ignoring the significant and common \textit{interruption issues} caused
by imperfect V2X communication, where cooperating agents cannot receive
cooperative messages successfully and thus fail to achieve cooperative
perception, leading to safety risks. To fully reap the benefits of cooperative
perception in practice, we propose V2X communication INterruption-aware
COoperative Perception (V2X-INCOP), a cooperative perception system robust to
communication interruption for V2X communication-aided autonomous driving,
which leverages historical cooperation information to recover missing
information due to the interruptions and alleviate the impact of the
interruption issue. To achieve comprehensive recovery, we design a
communication-adaptive multi-scale spatial-temporal prediction model to extract
multi-scale spatial-temporal features based on V2X communication conditions and
capture the most significant information for the prediction of the missing
information. To further improve recovery performance, we adopt a knowledge
distillation framework to give explicit and direct supervision to the
prediction model and a curriculum learning strategy to stabilize the training
of the model. Experiments on three public cooperative perception datasets
demonstrate that the proposed method is effective in alleviating the impacts of
communication interruption on cooperative perception.
| 285
|
Bridging Hard and Soft: Mechanical Metamaterials Enable Rigid Torque Transmission in Soft Robots
|
Torque and continuous rotation are fundamental methods of actuation and
manipulation in rigid robots. Soft robot arms use soft materials and structures
to mimic the passive compliance of biological arms that bend and extend. This
use of compliance prevents soft arms from continuously transmitting and
exerting torques to interact with their environment. Here, we show how relying
on patterning structures instead of inherent material properties allows soft
robotic arms to remain compliant while continuously transmitting torque to
their environment. We demonstrate a soft robotic arm made from a pair of
mechanical metamaterials that act as compliant constant-velocity joints. The
joints are up to 52 times stiffer in torsion than bending and can bend up to
45{\deg}. This robot arm can continuously transmit torque while deforming in
all other directions. The arm's mechanical design achieves high motion
repeatability (0.4 mm and 0.1{\deg}) when tracking trajectories. We then
trained a neural network to learn the inverse kinematics, enabling us to
program the arm to complete tasks that are challenging for existing soft robots
such as installing light bulbs, fastening bolts, and turning valves. The arm's
passive compliance makes it safe around humans and provides a source of
mechanical intelligence, enabling it to adapt to misalignment when manipulating
objects. This work will bridge the gap between hard and soft robotics with
applications in human assistance, warehouse automation, and extreme
environments.
|
Liked
|
jechoi@andrew.cmu.edu
|
Bridging Hard and Soft: Mechanical Metamaterials Enable Rigid Torque Transmission in Soft Robots : Torque and continuous rotation are fundamental methods of actuation and
manipulation in rigid robots. Soft robot arms use soft materials and structures
to mimic the passive compliance of biological arms that bend and extend. This
use of compliance prevents soft arms from continuously transmitting and
exerting torques to interact with their environment. Here, we show how relying
on patterning structures instead of inherent material properties allows soft
robotic arms to remain compliant while continuously transmitting torque to
their environment. We demonstrate a soft robotic arm made from a pair of
mechanical metamaterials that act as compliant constant-velocity joints. The
joints are up to 52 times stiffer in torsion than bending and can bend up to
45{\deg}. This robot arm can continuously transmit torque while deforming in
all other directions. The arm's mechanical design achieves high motion
repeatability (0.4 mm and 0.1{\deg}) when tracking trajectories. We then
trained a neural network to learn the inverse kinematics, enabling us to
program the arm to complete tasks that are challenging for existing soft robots
such as installing light bulbs, fastening bolts, and turning valves. The arm's
passive compliance makes it safe around humans and provides a source of
mechanical intelligence, enabling it to adapt to misalignment when manipulating
objects. This work will bridge the gap between hard and soft robotics with
applications in human assistance, warehouse automation, and extreme
environments.
| 1
|
jechoi@andrew.cmu.edu [SEP] Bridging Hard and Soft: Mechanical Metamaterials Enable Rigid Torque Transmission in Soft Robots : Torque and continuous rotation are fundamental methods of actuation and
manipulation in rigid robots. Soft robot arms use soft materials and structures
to mimic the passive compliance of biological arms that bend and extend. This
use of compliance prevents soft arms from continuously transmitting and
exerting torques to interact with their environment. Here, we show how relying
on patterning structures instead of inherent material properties allows soft
robotic arms to remain compliant while continuously transmitting torque to
their environment. We demonstrate a soft robotic arm made from a pair of
mechanical metamaterials that act as compliant constant-velocity joints. The
joints are up to 52 times stiffer in torsion than bending and can bend up to
45{\deg}. This robot arm can continuously transmit torque while deforming in
all other directions. The arm's mechanical design achieves high motion
repeatability (0.4 mm and 0.1{\deg}) when tracking trajectories. We then
trained a neural network to learn the inverse kinematics, enabling us to
program the arm to complete tasks that are challenging for existing soft robots
such as installing light bulbs, fastening bolts, and turning valves. The arm's
passive compliance makes it safe around humans and provides a source of
mechanical intelligence, enabling it to adapt to misalignment when manipulating
objects. This work will bridge the gap between hard and soft robotics with
applications in human assistance, warehouse automation, and extreme
environments.
| 407
|
Autonomous Soil Collection in Environments With Heterogeneous Terrain
|
To autonomously collect soil in uncultivated terrain, robotic arms must
distinguish between different amorphous materials and submerge themselves into
the correct material. We develop a prototype that collects soil in
heterogeneous terrain. If mounted to a mobile robot, it can be used to perform
soil collection and analysis without human intervention. Unique among soil
sampling robots, we use a general-purpose robotic arm rather than a soil core
sampler.
|
Liked
|
jechoi@andrew.cmu.edu
|
Autonomous Soil Collection in Environments With Heterogeneous Terrain : To autonomously collect soil in uncultivated terrain, robotic arms must
distinguish between different amorphous materials and submerge themselves into
the correct material. We develop a prototype that collects soil in
heterogeneous terrain. If mounted to a mobile robot, it can be used to perform
soil collection and analysis without human intervention. Unique among soil
sampling robots, we use a general-purpose robotic arm rather than a soil core
sampler.
| 1
|
jechoi@andrew.cmu.edu [SEP] Autonomous Soil Collection in Environments With Heterogeneous Terrain : To autonomously collect soil in uncultivated terrain, robotic arms must
distinguish between different amorphous materials and submerge themselves into
the correct material. We develop a prototype that collects soil in
heterogeneous terrain. If mounted to a mobile robot, it can be used to perform
soil collection and analysis without human intervention. Unique among soil
sampling robots, we use a general-purpose robotic arm rather than a soil core
sampler.
| 519
|
Open Arms: Open-Source Arms, Hands & Control
|
Open Arms is a novel open-source platform of realistic human-like robotic
hands and arms hardware with 28 Degrees of Freedom (DoF), designed to extend the
capabilities and accessibility of humanoid robotic grasping and manipulation.
The Open Arms framework includes an open SDK and development environment,
simulation tools, and application development tools to build and operate Open
Arms. This paper describes these hands' controls, sensing, mechanisms, aesthetic
design, and manufacturing and their real-world applications with a teleoperated
nursing robot. From 2015 to 2022, the authors have designed and established the
manufacturing of Open Arms as a low-cost, high functionality robotic arms
hardware and software framework to serve both humanoid robot applications and
the urgent demand for low-cost prosthetics, as part of the Hanson Robotics
Sophia Robot platform. Using the techniques of consumer product manufacturing,
we set out to define modular, low-cost techniques for approximating the
dexterity and sensitivity of human hands. To demonstrate the dexterity and
control of our hands, we present a Generative Grasping Residual CNN (GGR-CNN)
model that can generate robust antipodal grasps from input images of various
objects at real-time speeds (22 ms). We achieved state-of-the-art accuracy of
92.4% using our model architecture on a standard Cornell Grasping Dataset,
which contains a diverse set of household objects.
|
Liked
|
jechoi@andrew.cmu.edu
|
Open Arms: Open-Source Arms, Hands & Control : Open Arms is a novel open-source platform of realistic human-like robotic
hands and arms hardware with 28 Degrees of Freedom (DoF), designed to extend the
capabilities and accessibility of humanoid robotic grasping and manipulation.
The Open Arms framework includes an open SDK and development environment,
simulation tools, and application development tools to build and operate Open
Arms. This paper describes these hands' controls, sensing, mechanisms, aesthetic
design, and manufacturing and their real-world applications with a teleoperated
nursing robot. From 2015 to 2022, the authors have designed and established the
manufacturing of Open Arms as a low-cost, high functionality robotic arms
hardware and software framework to serve both humanoid robot applications and
the urgent demand for low-cost prosthetics, as part of the Hanson Robotics
Sophia Robot platform. Using the techniques of consumer product manufacturing,
we set out to define modular, low-cost techniques for approximating the
dexterity and sensitivity of human hands. To demonstrate the dexterity and
control of our hands, we present a Generative Grasping Residual CNN (GGR-CNN)
model that can generate robust antipodal grasps from input images of various
objects at real-time speeds (22 ms). We achieved state-of-the-art accuracy of
92.4% using our model architecture on a standard Cornell Grasping Dataset,
which contains a diverse set of household objects.
| 1
|
jechoi@andrew.cmu.edu [SEP] Open Arms: Open-Source Arms, Hands & Control : Open Arms is a novel open-source platform of realistic human-like robotic
hands and arms hardware with 28 Degrees of Freedom (DoF), designed to extend the
capabilities and accessibility of humanoid robotic grasping and manipulation.
The Open Arms framework includes an open SDK and development environment,
simulation tools, and application development tools to build and operate Open
Arms. This paper describes these hands' controls, sensing, mechanisms, aesthetic
design, and manufacturing and their real-world applications with a teleoperated
nursing robot. From 2015 to 2022, the authors have designed and established the
manufacturing of Open Arms as a low-cost, high functionality robotic arms
hardware and software framework to serve both humanoid robot applications and
the urgent demand for low-cost prosthetics, as part of the Hanson Robotics
Sophia Robot platform. Using the techniques of consumer product manufacturing,
we set out to define modular, low-cost techniques for approximating the
dexterity and sensitivity of human hands. To demonstrate the dexterity and
control of our hands, we present a Generative Grasping Residual CNN (GGR-CNN)
model that can generate robust antipodal grasps from input images of various
objects at real-time speeds (22 ms). We achieved state-of-the-art accuracy of
92.4% using our model architecture on a standard Cornell Grasping Dataset,
which contains a diverse set of household objects.
| 434
|
Deep frequency principle towards understanding why deeper learning is faster
|
Understanding the effect of depth in deep learning is a critical problem. In
this work, we utilize Fourier analysis to empirically provide a promising
mechanism to understand why feedforward deeper learning is faster. To this end,
we separate a deep neural network, trained by normal stochastic gradient
descent, into two parts during analysis, i.e., a pre-condition component and a
learning component, in which the output of the pre-condition one is the input
of the learning one. We use a filtering method to characterize the frequency
distribution of a high-dimensional function. Based on experiments with deep
networks and real datasets, we propose a deep frequency principle, that is, the
effective target function for a deeper hidden layer biases towards lower
frequency during the training. Therefore, the learning component effectively
learns a lower frequency function if the pre-condition component has more
layers. Due to the well-studied frequency principle, i.e., deep neural networks
learn lower frequency functions faster, the deep frequency principle provides a
reasonable explanation to why deeper learning is faster. We believe these
empirical studies would be valuable for future theoretical studies of the
effect of depth in deep learning.
|
Liked
|
zrz@andrew.cmu.edu
|
Deep frequency principle towards understanding why deeper learning is faster : Understanding the effect of depth in deep learning is a critical problem. In
this work, we utilize Fourier analysis to empirically provide a promising
mechanism to understand why feedforward deeper learning is faster. To this end,
we separate a deep neural network, trained by normal stochastic gradient
descent, into two parts during analysis, i.e., a pre-condition component and a
learning component, in which the output of the pre-condition one is the input
of the learning one. We use a filtering method to characterize the frequency
distribution of a high-dimensional function. Based on experiments with deep
networks and real datasets, we propose a deep frequency principle, that is, the
effective target function for a deeper hidden layer biases towards lower
frequency during the training. Therefore, the learning component effectively
learns a lower frequency function if the pre-condition component has more
layers. Due to the well-studied frequency principle, i.e., deep neural networks
learn lower frequency functions faster, the deep frequency principle provides a
reasonable explanation to why deeper learning is faster. We believe these
empirical studies would be valuable for future theoretical studies of the
effect of depth in deep learning.
| 1
|
zrz@andrew.cmu.edu [SEP] Deep frequency principle towards understanding why deeper learning is faster : Understanding the effect of depth in deep learning is a critical problem. In
this work, we utilize Fourier analysis to empirically provide a promising
mechanism to understand why feedforward deeper learning is faster. To this end,
we separate a deep neural network, trained by normal stochastic gradient
descent, into two parts during analysis, i.e., a pre-condition component and a
learning component, in which the output of the pre-condition one is the input
of the learning one. We use a filtering method to characterize the frequency
distribution of a high-dimensional function. Based on experiments with deep
networks and real datasets, we propose a deep frequency principle, that is, the
effective target function for a deeper hidden layer biases towards lower
frequency during the training. Therefore, the learning component effectively
learns a lower frequency function if the pre-condition component has more
layers. Due to the well-studied frequency principle, i.e., deep neural networks
learn lower frequency functions faster, the deep frequency principle provides a
reasonable explanation to why deeper learning is faster. We believe these
empirical studies would be valuable for future theoretical studies of the
effect of depth in deep learning.
| 194
|
RoboTwin: Dual-Arm Robot Benchmark with Generative Digital Twins
|
In the rapidly advancing field of robotics, dual-arm coordination and complex
object manipulation are essential capabilities for developing advanced
autonomous systems. However, the scarcity of diverse, high-quality
demonstration data and real-world-aligned evaluation benchmarks severely limits
such development. To address this, we introduce RoboTwin, a generative digital
twin framework that uses 3D generative foundation models and large language
models to produce diverse expert datasets and provide a real-world-aligned
evaluation platform for dual-arm robotic tasks. Specifically, RoboTwin creates
varied digital twins of objects from single 2D images, generating realistic and
interactive scenarios. It also introduces a spatial relation-aware code
generation framework that combines object annotations with large language
models to break down tasks, determine spatial constraints, and generate precise
robotic movement code. Our framework offers a comprehensive benchmark with both
simulated and real-world data, enabling standardized evaluation and better
alignment between simulated training and real-world performance. We validated
our approach using the open-source COBOT Magic Robot platform. Policies
pre-trained on RoboTwin-generated data and fine-tuned with limited real-world
samples demonstrate significant potential for enhancing dual-arm robotic
manipulation systems by improving success rates by over 70% for single-arm
tasks and over 40% for dual-arm tasks compared to models trained solely on
real-world data.
|
Liked
|
jechoi@andrew.cmu.edu
|
RoboTwin: Dual-Arm Robot Benchmark with Generative Digital Twins : In the rapidly advancing field of robotics, dual-arm coordination and complex
object manipulation are essential capabilities for developing advanced
autonomous systems. However, the scarcity of diverse, high-quality
demonstration data and real-world-aligned evaluation benchmarks severely limits
such development. To address this, we introduce RoboTwin, a generative digital
twin framework that uses 3D generative foundation models and large language
models to produce diverse expert datasets and provide a real-world-aligned
evaluation platform for dual-arm robotic tasks. Specifically, RoboTwin creates
varied digital twins of objects from single 2D images, generating realistic and
interactive scenarios. It also introduces a spatial relation-aware code
generation framework that combines object annotations with large language
models to break down tasks, determine spatial constraints, and generate precise
robotic movement code. Our framework offers a comprehensive benchmark with both
simulated and real-world data, enabling standardized evaluation and better
alignment between simulated training and real-world performance. We validated
our approach using the open-source COBOT Magic Robot platform. Policies
pre-trained on RoboTwin-generated data and fine-tuned with limited real-world
samples demonstrate significant potential for enhancing dual-arm robotic
manipulation systems by improving success rates by over 70% for single-arm
tasks and over 40% for dual-arm tasks compared to models trained solely on
real-world data.
| 1
|
jechoi@andrew.cmu.edu [SEP] RoboTwin: Dual-Arm Robot Benchmark with Generative Digital Twins : In the rapidly advancing field of robotics, dual-arm coordination and complex
object manipulation are essential capabilities for developing advanced
autonomous systems. However, the scarcity of diverse, high-quality
demonstration data and real-world-aligned evaluation benchmarks severely limits
such development. To address this, we introduce RoboTwin, a generative digital
twin framework that uses 3D generative foundation models and large language
models to produce diverse expert datasets and provide a real-world-aligned
evaluation platform for dual-arm robotic tasks. Specifically, RoboTwin creates
varied digital twins of objects from single 2D images, generating realistic and
interactive scenarios. It also introduces a spatial relation-aware code
generation framework that combines object annotations with large language
models to break down tasks, determine spatial constraints, and generate precise
robotic movement code. Our framework offers a comprehensive benchmark with both
simulated and real-world data, enabling standardized evaluation and better
alignment between simulated training and real-world performance. We validated
our approach using the open-source COBOT Magic Robot platform. Policies
pre-trained on RoboTwin-generated data and fine-tuned with limited real-world
samples demonstrate significant potential for enhancing dual-arm robotic
manipulation systems by improving success rates by over 70% for single-arm
tasks and over 40% for dual-arm tasks compared to models trained solely on
real-world data.
| 511
|
Planning to Build Soma Blocks Using a Dual-arm Robot
|
This paper presents a planner that can automatically find an optimal assembly
sequence for a dual-arm robot to assemble the soma blocks. The planner uses the
mesh model of objects and the final state of the assembly to generate all
possible assembly sequences and evaluate the optimal assembly sequence by
considering the stability, graspability, assemblability, as well as the need
for a second arm. In particular, the need for a second arm is considered when
supports from worktables and other workpieces are not enough to produce a
stable assembly. The planner will refer to an assisting grasp to additionally
hold and support the unstable components so that the robot can further assemble
new workpieces and finally reach a stable state. The output of the planner is
the optimal assembly orders, candidate grasps, assembly directions, and the
assisting grasps if any. The output of the planner can be used to guide a
dual-arm robot to perform the assembly task. The planner is verified in both
simulations and real-world executions.
|
Liked
|
jechoi@andrew.cmu.edu
|
Planning to Build Soma Blocks Using a Dual-arm Robot : This paper presents a planner that can automatically find an optimal assembly
sequence for a dual-arm robot to assemble the soma blocks. The planner uses the
mesh model of objects and the final state of the assembly to generate all
possible assembly sequences and evaluate the optimal assembly sequence by
considering the stability, graspability, assemblability, as well as the need
for a second arm. In particular, the need for a second arm is considered when
supports from worktables and other workpieces are not enough to produce a
stable assembly. The planner will refer to an assisting grasp to additionally
hold and support the unstable components so that the robot can further assemble
new workpieces and finally reach a stable state. The output of the planner is
the optimal assembly orders, candidate grasps, assembly directions, and the
assisting grasps if any. The output of the planner can be used to guide a
dual-arm robot to perform the assembly task. The planner is verified in both
simulations and real-world executions.
| 1
|
jechoi@andrew.cmu.edu [SEP] Planning to Build Soma Blocks Using a Dual-arm Robot : This paper presents a planner that can automatically find an optimal assembly
sequence for a dual-arm robot to assemble the soma blocks. The planner uses the
mesh model of objects and the final state of the assembly to generate all
possible assembly sequences and evaluate the optimal assembly sequence by
considering the stability, graspability, assemblability, as well as the need
for a second arm. In particular, the need for a second arm is considered when
supports from worktables and other workpieces are not enough to produce a
stable assembly. The planner will refer to an assisting grasp to additionally
hold and support the unstable components so that the robot can further assemble
new workpieces and finally reach a stable state. The output of the planner is
the optimal assembly orders, candidate grasps, assembly directions, and the
assisting grasps if any. The output of the planner can be used to guide a
dual-arm robot to perform the assembly task. The planner is verified in both
simulations and real-world executions.
| 497
|
Cybathlon -- Legged Mobile Assistance for Quadriplegics
|
Assistance robots are the future for people who need daily care due to
limited mobility or being wheelchair-bound. Current solutions of attaching
robotic arms to motorized wheelchairs only provide limited additional mobility
at the cost of increased size. We present a mouth joystick control interface,
augmented with voice commands, for an independent quadrupedal assistance robot
with an arm. We validate and showcase our system in the Cybathlon Challenges
February 2024 Assistance Robot Race, where we solve four everyday tasks in
record time, winning first place. Our system remains generic and sets the basis
for a platform that could help and provide independence in the everyday lives
of people in wheelchairs.
|
Disliked
|
jechoi@andrew.cmu.edu
|
Cybathlon -- Legged Mobile Assistance for Quadriplegics : Assistance robots are the future for people who need daily care due to
limited mobility or being wheelchair-bound. Current solutions of attaching
robotic arms to motorized wheelchairs only provide limited additional mobility
at the cost of increased size. We present a mouth joystick control interface,
augmented with voice commands, for an independent quadrupedal assistance robot
with an arm. We validate and showcase our system in the Cybathlon Challenges
February 2024 Assistance Robot Race, where we solve four everyday tasks in
record time, winning first place. Our system remains generic and sets the basis
for a platform that could help and provide independence in the everyday lives
of people in wheelchairs.
| 0
|
jechoi@andrew.cmu.edu [SEP] Cybathlon -- Legged Mobile Assistance for Quadriplegics : Assistance robots are the future for people who need daily care due to
limited mobility or being wheelchair-bound. Current solutions of attaching
robotic arms to motorized wheelchairs only provide limited additional mobility
at the cost of increased size. We present a mouth joystick control interface,
augmented with voice commands, for an independent quadrupedal assistance robot
with an arm. We validate and showcase our system in the Cybathlon Challenges
February 2024 Assistance Robot Race, where we solve four everyday tasks in
record time, winning first place. Our system remains generic and sets the basis
for a platform that could help and provide independence in the everyday lives
of people in wheelchairs.
| 524
|
RGB-D Robotic Pose Estimation For a Servicing Robotic Arm
|
A large number of robotic and human-assisted missions to the Moon and Mars
are forecast. NASA's efforts to learn about the geology and makeup of these
celestial bodies rely heavily on the use of robotic arms. The safety and
redundancy aspects will be crucial when humans will be working alongside the
robotic explorers. Additionally, robotic arms are crucial to satellite
servicing and planned orbit debris mitigation missions. The goal of this work
is to create a custom Computer Vision (CV) based Artificial Neural Network
(ANN) that would be able to rapidly identify the posture of a 7 Degree of
Freedom (DoF) robotic arm from a single (RGB-D) image - just like humans can
easily identify if an arm is pointing in some general direction. The Sawyer
robotic arm is used for developing and training this intelligent algorithm.
Since Sawyer's joint space spans 7 dimensions, it is an insurmountable task to
cover the entire joint configuration space. In this work, orthogonal arrays are
used, similar to the Taguchi method, to efficiently span the joint space with
the minimal number of training images. This ``optimally'' generated database is
used to train the custom ANN and its degree of accuracy is on average equal to
twice the smallest joint displacement step used for database generation. A
pre-trained ANN will be useful for estimating the postures of robotic
manipulators used on space stations, spacecraft, and rovers as an auxiliary
tool or for contingency plans.
|
Liked
|
jechoi@andrew.cmu.edu
|
RGB-D Robotic Pose Estimation For a Servicing Robotic Arm : A large number of robotic and human-assisted missions to the Moon and Mars
are forecast. NASA's efforts to learn about the geology and makeup of these
celestial bodies rely heavily on the use of robotic arms. The safety and
redundancy aspects will be crucial when humans will be working alongside the
robotic explorers. Additionally, robotic arms are crucial to satellite
servicing and planned orbit debris mitigation missions. The goal of this work
is to create a custom Computer Vision (CV) based Artificial Neural Network
(ANN) that would be able to rapidly identify the posture of a 7 Degree of
Freedom (DoF) robotic arm from a single (RGB-D) image - just like humans can
easily identify if an arm is pointing in some general direction. The Sawyer
robotic arm is used for developing and training this intelligent algorithm.
Since Sawyer's joint space spans 7 dimensions, it is an insurmountable task to
cover the entire joint configuration space. In this work, orthogonal arrays are
used, similar to the Taguchi method, to efficiently span the joint space with
the minimal number of training images. This ``optimally'' generated database is
used to train the custom ANN and its degree of accuracy is on average equal to
twice the smallest joint displacement step used for database generation. A
pre-trained ANN will be useful for estimating the postures of robotic
manipulators used on space stations, spacecraft, and rovers as an auxiliary
tool or for contingency plans.
| 1
|
jechoi@andrew.cmu.edu [SEP] RGB-D Robotic Pose Estimation For a Servicing Robotic Arm : A large number of robotic and human-assisted missions to the Moon and Mars
are forecast. NASA's efforts to learn about the geology and makeup of these
celestial bodies rely heavily on the use of robotic arms. The safety and
redundancy aspects will be crucial when humans will be working alongside the
robotic explorers. Additionally, robotic arms are crucial to satellite
servicing and planned orbit debris mitigation missions. The goal of this work
is to create a custom Computer Vision (CV) based Artificial Neural Network
(ANN) that would be able to rapidly identify the posture of a 7 Degree of
Freedom (DoF) robotic arm from a single (RGB-D) image - just like humans can
easily identify if an arm is pointing in some general direction. The Sawyer
robotic arm is used for developing and training this intelligent algorithm.
Since Sawyer's joint space spans 7 dimensions, it is an insurmountable task to
cover the entire joint configuration space. In this work, orthogonal arrays are
used, similar to the Taguchi method, to efficiently span the joint space with
the minimal number of training images. This ``optimally'' generated database is
used to train the custom ANN and its degree of accuracy is on average equal to
twice the smallest joint displacement step used for database generation. A
pre-trained ANN will be useful for estimating the postures of robotic
manipulators used on space stations, spacecraft, and rovers as an auxiliary
tool or for contingency plans.
| 449
|
Optimal Multi-Manipulator Arm Placement for Maximal Dexterity during Robotics Surgery
|
Robot arm placements are oftentimes a limitation in surgical preoperative
procedures, relying on trained staff to evaluate and decide on the optimal
positions for the arms. Given new and different patient anatomies, it can be
challenging to make an informed choice, leading to more frequently colliding
arms or limited manipulator workspaces. In this paper, we develop a method to
generate the optimal manipulator base positions for the multi-port da Vinci
surgical system that minimizes self-collision and environment-collision, and
maximizes the surgeon's reachability inside the patient. Scoring functions are
defined for each criterion so that they may be optimized over. Since for
multi-manipulator setups, a large number of free parameters are available to
adjust the base positioning of each arm, a challenge becomes how one can
expediently assess possible setups. We thus also propose methods that perform
fast queries of each measure with the use of a proxy collision-checker. We then
develop an optimization method to determine the optimal position using the
scoring functions. We evaluate the optimality of the base positions for the
robot arms on canonical trajectories, and show that the solution yielded by the
optimization program can satisfy each criterion. The metrics and optimization
strategy are generalizable to other surgical robotic platforms so that
patient-side manipulator positioning may be optimized and solved.
|
Liked
|
jechoi@andrew.cmu.edu
|
Optimal Multi-Manipulator Arm Placement for Maximal Dexterity during Robotics Surgery : Robot arm placements are oftentimes a limitation in surgical preoperative
procedures, relying on trained staff to evaluate and decide on the optimal
positions for the arms. Given new and different patient anatomies, it can be
challenging to make an informed choice, leading to more frequently colliding
arms or limited manipulator workspaces. In this paper, we develop a method to
generate the optimal manipulator base positions for the multi-port da Vinci
surgical system that minimizes self-collision and environment-collision, and
maximizes the surgeon's reachability inside the patient. Scoring functions are
defined for each criterion so that they may be optimized over. Since for
multi-manipulator setups, a large number of free parameters are available to
adjust the base positioning of each arm, a challenge becomes how one can
expediently assess possible setups. We thus also propose methods that perform
fast queries of each measure with the use of a proxy collision-checker. We then
develop an optimization method to determine the optimal position using the
scoring functions. We evaluate the optimality of the base positions for the
robot arms on canonical trajectories, and show that the solution yielded by the
optimization program can satisfy each criterion. The metrics and optimization
strategy are generalizable to other surgical robotic platforms so that
patient-side manipulator positioning may be optimized and solved.
| 1
|
jechoi@andrew.cmu.edu [SEP] Optimal Multi-Manipulator Arm Placement for Maximal Dexterity during Robotics Surgery : Robot arm placements are oftentimes a limitation in surgical preoperative
procedures, relying on trained staff to evaluate and decide on the optimal
positions for the arms. Given new and different patient anatomies, it can be
challenging to make an informed choice, leading to more frequently colliding
arms or limited manipulator workspaces. In this paper, we develop a method to
generate the optimal manipulator base positions for the multi-port da Vinci
surgical system that minimizes self-collision and environment-collision, and
maximizes the surgeon's reachability inside the patient. Scoring functions are
defined for each criterion so that they may be optimized over. Since for
multi-manipulator setups, a large number of free parameters are available to
adjust the base positioning of each arm, a challenge becomes how one can
expediently assess possible setups. We thus also propose methods that perform
fast queries of each measure with the use of a proxy collision-checker. We then
develop an optimization method to determine the optimal position using the
scoring functions. We evaluate the optimality of the base positions for the
robot arms on canonical trajectories, and show that the solution yielded by the
optimization program can satisfy each criterion. The metrics and optimization
strategy are generalizable to other surgical robotic platforms so that
patient-side manipulator positioning may be optimized and solved.
| 553
|
Developing and Comparing Single-arm and Dual-arm Regrasp
|
The goal of this paper is to develop efficient regrasp algorithms for
single-arm and dual-arm regrasp and to compare the performance of single-arm and
dual-arm regrasp by running the two algorithms thousands of times. We focus on
pick-and-place regrasp which reorients an object from one placement to another
by using a sequence of pick-ups and place-downs. After analyzing the simulation
results, we find dual-arm regrasp is not necessarily better than single-arm
regrasp: Dual-arm regrasp is flexible. When the two hands can grasp the object
with good clearance, dual-arm regrasp is better and has a higher success rate
than single-arm regrasp. However, dual-arm regrasp suffers from geometric
constraints caused by the two arms. When the grasps overlap, dual-arm regrasp
is bad. Developers need to sample grasps with high density to reduce
overlapping. This leads to exploded combinatorics in previous methods, but is
possible with the algorithms presented in this paper. Following the results,
practitioners may choose single-arm or dual-arm robots by considering the
object shapes and grasps. Meanwhile, they can reduce overlapping and implement
practical dual-arm regrasp by using the presented algorithms.
|
Liked
|
jechoi@andrew.cmu.edu
|
Developing and Comparing Single-arm and Dual-arm Regrasp : The goal of this paper is to develop efficient regrasp algorithms for
single-arm and dual-arm regrasp and to compare the performance of single-arm and
dual-arm regrasp by running the two algorithms thousands of times. We focus on
pick-and-place regrasp which reorients an object from one placement to another
by using a sequence of pick-ups and place-downs. After analyzing the simulation
results, we find dual-arm regrasp is not necessarily better than single-arm
regrasp: Dual-arm regrasp is flexible. When the two hands can grasp the object
with good clearance, dual-arm regrasp is better and has a higher success rate
than single-arm regrasp. However, dual-arm regrasp suffers from geometric
constraints caused by the two arms. When the grasps overlap, dual-arm regrasp
is bad. Developers need to sample grasps with high density to reduce
overlapping. This leads to exploded combinatorics in previous methods, but is
possible with the algorithms presented in this paper. Following the results,
practitioners may choose single-arm or dual-arm robots by considering the
object shapes and grasps. Meanwhile, they can reduce overlapping and implement
practical dual-arm regrasp by using the presented algorithms.
| 1
|
jechoi@andrew.cmu.edu [SEP] Developing and Comparing Single-arm and Dual-arm Regrasp : The goal of this paper is to develop efficient regrasp algorithms for
single-arm and dual-arm regrasp and to compare the performance of single-arm and
dual-arm regrasp by running the two algorithms thousands of times. We focus on
pick-and-place regrasp which reorients an object from one placement to another
by using a sequence of pick-ups and place-downs. After analyzing the simulation
results, we find dual-arm regrasp is not necessarily better than single-arm
regrasp: Dual-arm regrasp is flexible. When the two hands can grasp the object
with good clearance, dual-arm regrasp is better and has a higher success rate
than single-arm regrasp. However, dual-arm regrasp suffers from geometric
constraints caused by the two arms. When the grasps overlap, dual-arm regrasp
is bad. Developers need to sample grasps with high density to reduce
overlapping. This leads to exploded combinatorics in previous methods, but is
possible with the algorithms presented in this paper. Following the results,
practitioners may choose single-arm or dual-arm robots by considering the
object shapes and grasps. Meanwhile, they can reduce overlapping and implement
practical dual-arm regrasp by using the presented algorithms.
| 8
|
Words or Vision: Do Vision-Language Models Have Blind Faith in Text?
|
Vision-Language Models (VLMs) excel in integrating visual and textual
information for vision-centric tasks, but their handling of inconsistencies
between modalities is underexplored. We investigate VLMs' modality preferences
when faced with visual data and varied textual inputs in vision-centered
settings. By introducing textual variations to four vision-centric tasks and
evaluating ten Vision-Language Models (VLMs), we discover a \emph{``blind faith
in text''} phenomenon: VLMs disproportionately trust textual data over visual
data when inconsistencies arise, leading to significant performance drops under
corrupted text and raising safety concerns. We analyze factors influencing this
text bias, including instruction prompts, language model size, text relevance,
token order, and the interplay between visual and textual certainty. While
certain factors, such as scaling up the language model size, slightly mitigate
text bias, others like token order can exacerbate it due to positional biases
inherited from language models. To address this issue, we explore supervised
fine-tuning with text augmentation and demonstrate its effectiveness in
reducing text bias. Additionally, we provide a theoretical analysis suggesting
that the blind faith in text phenomenon may stem from an imbalance of pure text
and multi-modal data during training. Our findings highlight the need for
balanced training and careful consideration of modality interactions in VLMs to
enhance their robustness and reliability in handling multi-modal data
inconsistencies.
|
Disliked
|
zrz@andrew.cmu.edu
|
Words or Vision: Do Vision-Language Models Have Blind Faith in Text? : Vision-Language Models (VLMs) excel in integrating visual and textual
information for vision-centric tasks, but their handling of inconsistencies
between modalities is underexplored. We investigate VLMs' modality preferences
when faced with visual data and varied textual inputs in vision-centered
settings. By introducing textual variations to four vision-centric tasks and
evaluating ten Vision-Language Models (VLMs), we discover a \emph{``blind faith
in text''} phenomenon: VLMs disproportionately trust textual data over visual
data when inconsistencies arise, leading to significant performance drops under
corrupted text and raising safety concerns. We analyze factors influencing this
text bias, including instruction prompts, language model size, text relevance,
token order, and the interplay between visual and textual certainty. While
certain factors, such as scaling up the language model size, slightly mitigate
text bias, others like token order can exacerbate it due to positional biases
inherited from language models. To address this issue, we explore supervised
fine-tuning with text augmentation and demonstrate its effectiveness in
reducing text bias. Additionally, we provide a theoretical analysis suggesting
that the blind faith in text phenomenon may stem from an imbalance of pure text
and multi-modal data during training. Our findings highlight the need for
balanced training and careful consideration of modality interactions in VLMs to
enhance their robustness and reliability in handling multi-modal data
inconsistencies.
| 0
|
zrz@andrew.cmu.edu [SEP] Words or Vision: Do Vision-Language Models Have Blind Faith in Text? : Vision-Language Models (VLMs) excel in integrating visual and textual
information for vision-centric tasks, but their handling of inconsistencies
between modalities is underexplored. We investigate VLMs' modality preferences
when faced with visual data and varied textual inputs in vision-centered
settings. By introducing textual variations to four vision-centric tasks and
evaluating ten Vision-Language Models (VLMs), we discover a \emph{``blind faith
in text''} phenomenon: VLMs disproportionately trust textual data over visual
data when inconsistencies arise, leading to significant performance drops under
corrupted text and raising safety concerns. We analyze factors influencing this
text bias, including instruction prompts, language model size, text relevance,
token order, and the interplay between visual and textual certainty. While
certain factors, such as scaling up the language model size, slightly mitigate
text bias, others like token order can exacerbate it due to positional biases
inherited from language models. To address this issue, we explore supervised
fine-tuning with text augmentation and demonstrate its effectiveness in
reducing text bias. Additionally, we provide a theoretical analysis suggesting
that the blind faith in text phenomenon may stem from an imbalance of pure text
and multi-modal data during training. Our findings highlight the need for
balanced training and careful consideration of modality interactions in VLMs to
enhance their robustness and reliability in handling multi-modal data
inconsistencies.
| 351
|
3D Hand-Eye Calibration for Collaborative Robot Arm: Look at Robot Base Once
|
Hand-eye calibration is a common problem in the field of collaborative
robotics, involving the determination of the transformation matrix between the
visual sensor and the robot flange to enable vision-based robotic tasks.
However, this process typically requires multiple movements of the robot arm
and an external calibration object, making it both time-consuming and
inconvenient, especially in scenarios where frequent recalibration is
necessary. In this work, we extend our previous method which eliminates the
need for external calibration objects such as a chessboard. We propose a
generic dataset generation approach for point cloud registration, focusing on
aligning the robot base point cloud with the scanned data. Furthermore, a more
detailed simulation study is conducted involving several different
collaborative robot arms, followed by real-world experiments in an industrial
setting. Our improved method is simulated and evaluated using a total of 14
robotic arms from 9 different brands, including KUKA, Universal Robots,
UFACTORY, and Franka Emika, all of which are widely used in the field of
collaborative robotics. Physical experiments demonstrate that our extended
approach achieves performance comparable to existing commercial hand-eye
calibration solutions, while completing the entire calibration procedure in
just a few seconds. In addition, we provide a user-friendly hand-eye
calibration solution, with the code publicly available at
github.com/leihui6/LRBO.
|
Liked
|
jechoi@andrew.cmu.edu
|
3D Hand-Eye Calibration for Collaborative Robot Arm: Look at Robot Base Once : Hand-eye calibration is a common problem in the field of collaborative
robotics, involving the determination of the transformation matrix between the
visual sensor and the robot flange to enable vision-based robotic tasks.
However, this process typically requires multiple movements of the robot arm
and an external calibration object, making it both time-consuming and
inconvenient, especially in scenarios where frequent recalibration is
necessary. In this work, we extend our previous method which eliminates the
need for external calibration objects such as a chessboard. We propose a
generic dataset generation approach for point cloud registration, focusing on
aligning the robot base point cloud with the scanned data. Furthermore, a more
detailed simulation study is conducted involving several different
collaborative robot arms, followed by real-world experiments in an industrial
setting. Our improved method is simulated and evaluated using a total of 14
robotic arms from 9 different brands, including KUKA, Universal Robots,
UFACTORY, and Franka Emika, all of which are widely used in the field of
collaborative robotics. Physical experiments demonstrate that our extended
approach achieves performance comparable to existing commercial hand-eye
calibration solutions, while completing the entire calibration procedure in
just a few seconds. In addition, we provide a user-friendly hand-eye
calibration solution, with the code publicly available at
github.com/leihui6/LRBO.
| 1
|
jechoi@andrew.cmu.edu [SEP] 3D Hand-Eye Calibration for Collaborative Robot Arm: Look at Robot Base Once : Hand-eye calibration is a common problem in the field of collaborative
robotics, involving the determination of the transformation matrix between the
visual sensor and the robot flange to enable vision-based robotic tasks.
However, this process typically requires multiple movements of the robot arm
and an external calibration object, making it both time-consuming and
inconvenient, especially in scenarios where frequent recalibration is
necessary. In this work, we extend our previous method which eliminates the
need for external calibration objects such as a chessboard. We propose a
generic dataset generation approach for point cloud registration, focusing on
aligning the robot base point cloud with the scanned data. Furthermore, a more
detailed simulation study is conducted involving several different
collaborative robot arms, followed by real-world experiments in an industrial
setting. Our improved method is simulated and evaluated using a total of 14
robotic arms from 9 different brands, including KUKA, Universal Robots,
UFACTORY, and Franka Emika, all of which are widely used in the field of
collaborative robotics. Physical experiments demonstrate that our extended
approach achieves performance comparable to existing commercial hand-eye
calibration solutions, while completing the entire calibration procedure in
just a few seconds. In addition, we provide a user-friendly hand-eye
calibration solution, with the code publicly available at
github.com/leihui6/LRBO.
| 495
|
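The abstract above centers on aligning a robot-base point cloud with scanned data. As a point of reference only, the sketch below shows the classic SVD-based (Kabsch) rigid alignment between two point clouds with known correspondences; it is not the paper's learned registration pipeline, and all names, sizes, and noise levels are illustrative assumptions.

```python
import numpy as np

def kabsch_align(source: np.ndarray, target: np.ndarray):
    """Estimate the rigid transform (R, t) mapping `source` onto `target`.

    Both arrays are (N, 3) point clouds with known one-to-one correspondences;
    the result satisfies target ~= source @ R.T + t.
    """
    src_centroid, tgt_centroid = source.mean(axis=0), target.mean(axis=0)
    src_centered, tgt_centered = source - src_centroid, target - tgt_centroid
    # Cross-covariance followed by SVD (Kabsch algorithm).
    H = src_centered.T @ tgt_centered
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

# Toy usage: recover a known rotation and translation from noisy points.
rng = np.random.default_rng(0)
base_points = rng.uniform(-0.5, 0.5, size=(200, 3))  # stand-in "robot base" cloud
theta = 0.7
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,            1.0]])
true_t = np.array([0.1, -0.2, 0.3])
scan = base_points @ true_R.T + true_t + rng.normal(scale=1e-3, size=(200, 3))
R_est, t_est = kabsch_align(base_points, scan)
print(np.allclose(R_est, true_R, atol=1e-2), np.round(t_est, 3))
```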
Beneficial and Harmful Explanatory Machine Learning
|
Given the recent successes of Deep Learning in AI there has been increased
interest in the role and need for explanations in machine learned theories. A
distinct notion in this context is that of Michie's definition of Ultra-Strong
Machine Learning (USML). USML is demonstrated by a measurable increase in human
performance of a task following provision to the human of a symbolic machine
learned theory for task performance. A recent paper demonstrates the beneficial
effect of a machine learned logic theory for a classification task, yet no
existing work to our knowledge has examined the potential harmfulness of
a machine's involvement in human comprehension during learning. This paper
investigates the explanatory effects of a machine learned theory in the context
of simple two person games and proposes a framework for identifying the
harmfulness of machine explanations based on the Cognitive Science literature.
The approach involves a cognitive window consisting of two quantifiable bounds
and it is supported by empirical evidence collected from human trials. Our
quantitative and qualitative results indicate that human learning aided by a
symbolic machine learned theory which satisfies a cognitive window has achieved
significantly higher performance than human self learning. Results also
demonstrate that human learning aided by a symbolic machine learned theory that
fails to satisfy this window leads to significantly worse performance than
unaided human learning.
|
Liked
|
zrz@andrew.cmu.edu
|
Beneficial and Harmful Explanatory Machine Learning : Given the recent successes of Deep Learning in AI there has been increased
interest in the role and need for explanations in machine learned theories. A
distinct notion in this context is that of Michie's definition of Ultra-Strong
Machine Learning (USML). USML is demonstrated by a measurable increase in human
performance of a task following provision to the human of a symbolic machine
learned theory for task performance. A recent paper demonstrates the beneficial
effect of a machine learned logic theory for a classification task, yet no
existing work to our knowledge has examined the potential harmfulness of
a machine's involvement in human comprehension during learning. This paper
investigates the explanatory effects of a machine learned theory in the context
of simple two person games and proposes a framework for identifying the
harmfulness of machine explanations based on the Cognitive Science literature.
The approach involves a cognitive window consisting of two quantifiable bounds
and it is supported by empirical evidence collected from human trials. Our
quantitative and qualitative results indicate that human learning aided by a
symbolic machine learned theory which satisfies a cognitive window has achieved
significantly higher performance than human self learning. Results also
demonstrate that human learning aided by a symbolic machine learned theory that
fails to satisfy this window leads to significantly worse performance than
unaided human learning.
| 1
|
zrz@andrew.cmu.edu [SEP] Beneficial and Harmful Explanatory Machine Learning : Given the recent successes of Deep Learning in AI there has been increased
interest in the role and need for explanations in machine learned theories. A
distinct notion in this context is that of Michie's definition of Ultra-Strong
Machine Learning (USML). USML is demonstrated by a measurable increase in human
performance of a task following provision to the human of a symbolic machine
learned theory for task performance. A recent paper demonstrates the beneficial
effect of a machine learned logic theory for a classification task, yet no
existing work to our knowledge has examined the potential harmfulness of
a machine's involvement in human comprehension during learning. This paper
investigates the explanatory effects of a machine learned theory in the context
of simple two person games and proposes a framework for identifying the
harmfulness of machine explanations based on the Cognitive Science literature.
The approach involves a cognitive window consisting of two quantifiable bounds
and it is supported by empirical evidence collected from human trials. Our
quantitative and qualitative results indicate that human learning aided by a
symbolic machine learned theory which satisfies a cognitive window has achieved
significantly higher performance than human self learning. Results also
demonstrate that human learning aided by a symbolic machine learned theory that
fails to satisfy this window leads to significantly worse performance than
unaided human learning.
| 126
|
Optimal Trajectory Planning for Flexible Robots with Large Deformation
|
Robot arms with lighter weight can reduce unnecessary energy consumption,
which is desirable in the robotics industry. However, lightweight arms undergo
undesirable elastic deformation. In this paper, the planar motion of a
lightweight flexible arm is investigated. In order to obtain a precise
mathematical model, the axial displacement and nonlinear curvature of the
flexible arm arising from large bending deformation are taken into
consideration. An in-extensional condition, in which the axial displacement is
related to the transverse displacement of the flexible beam, is applied. This
leads to a robotic model with three rigid modes and one elastic mode. The
elastic mode depends on time and position. An assumed mode method is used to
remove the spatial dependence. The governing equations are derived using the
Lagrange method. The effects of nonlinear terms due to the large deformation,
gravity, and tip mass are considered. Control inputs include the forces and
moment exerted at the joint between the slider and the arm (see Fig. 1).
Conventional computed torque control laws cannot stabilize the system, since
there are not as many control inputs as states of the system. A Particle Swarm
Optimization (PSO) technique is then used to obtain a suitable trajectory with
the aim of minimizing excitations of the elastic mode. Two methods are
considered for generating the trajectory function: a three-layer Artificial
Neural Network (ANN) or spline interpolation. A sliding mode control strategy
is proposed in which the sliding surfaces include the elastic mode in order to
guarantee robustness. The simulations show that the three-layer ANN technique
provides arbitrarily small settling time, and the optimization algorithm also
converges faster and generates smoother trajectories than the spline function
technique.
|
Liked
|
jechoi@andrew.cmu.edu
|
Optimal Trajectory Planning for Flexible Robots with Large Deformation : Robot arms with lighter weight can reduce unnecessary energy consumption,
which is desirable in the robotics industry. However, lightweight arms undergo
undesirable elastic deformation. In this paper, the planar motion of a
lightweight flexible arm is investigated. In order to obtain a precise
mathematical model, the axial displacement and nonlinear curvature of the
flexible arm arising from large bending deformation are taken into
consideration. An in-extensional condition, in which the axial displacement is
related to the transverse displacement of the flexible beam, is applied. This
leads to a robotic model with three rigid modes and one elastic mode. The
elastic mode depends on time and position. An assumed mode method is used to
remove the spatial dependence. The governing equations are derived using the
Lagrange method. The effects of nonlinear terms due to the large deformation,
gravity, and tip mass are considered. Control inputs include the forces and
moment exerted at the joint between the slider and the arm (see Fig. 1).
Conventional computed torque control laws cannot stabilize the system, since
there are not as many control inputs as states of the system. A Particle Swarm
Optimization (PSO) technique is then used to obtain a suitable trajectory with
the aim of minimizing excitations of the elastic mode. Two methods are
considered for generating the trajectory function: a three-layer Artificial
Neural Network (ANN) or spline interpolation. A sliding mode control strategy
is proposed in which the sliding surfaces include the elastic mode in order to
guarantee robustness. The simulations show that the three-layer ANN technique
provides arbitrarily small settling time, and the optimization algorithm also
converges faster and generates smoother trajectories than the spline function
technique.
| 1
|
jechoi@andrew.cmu.edu [SEP] Optimal Trajectory Planning for Flexible Robots with Large Deformation : Robot arms with lighter weight can reduce unnecessary energy consumption,
which is desirable in the robotics industry. However, lightweight arms undergo
undesirable elastic deformation. In this paper, the planar motion of a
lightweight flexible arm is investigated. In order to obtain a precise
mathematical model, the axial displacement and nonlinear curvature of the
flexible arm arising from large bending deformation are taken into
consideration. An in-extensional condition, in which the axial displacement is
related to the transverse displacement of the flexible beam, is applied. This
leads to a robotic model with three rigid modes and one elastic mode. The
elastic mode depends on time and position. An assumed mode method is used to
remove the spatial dependence. The governing equations are derived using the
Lagrange method. The effects of nonlinear terms due to the large deformation,
gravity, and tip mass are considered. Control inputs include the forces and
moment exerted at the joint between the slider and the arm (see Fig. 1).
Conventional computed torque control laws cannot stabilize the system, since
there are not as many control inputs as states of the system. A Particle Swarm
Optimization (PSO) technique is then used to obtain a suitable trajectory with
the aim of minimizing excitations of the elastic mode. Two methods are
considered for generating the trajectory function: a three-layer Artificial
Neural Network (ANN) or spline interpolation. A sliding mode control strategy
is proposed in which the sliding surfaces include the elastic mode in order to
guarantee robustness. The simulations show that the three-layer ANN technique
provides arbitrarily small settling time, and the optimization algorithm also
converges faster and generates smoother trajectories than the spline function
technique.
| 572
|
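The abstract above uses Particle Swarm Optimization to shape a joint trajectory so that the elastic mode is barely excited. The snippet below is a minimal, generic PSO loop over trajectory waypoints with a stand-in smoothness cost; the cost function, bounds, and hyperparameters are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200, bounds=(-1.0, 2.0), seed=0):
    """Minimal particle swarm optimization over a box-bounded search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()        # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                          # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, pbest_cost.min()

# Stand-in cost: squared jerk of a joint trajectory defined by free waypoints
# between fixed endpoints, plus a mild pull toward a straight-line reference --
# a crude proxy for "reach the goal without exciting the elastic mode".
goal, n_waypoints = 1.0, 50
def trajectory_cost(waypoints):
    q = np.concatenate(([0.0], waypoints, [goal]))
    reference = np.linspace(0.0, goal, q.size)
    return np.sum(np.diff(q, n=3) ** 2) + 1e-3 * np.sum((q - reference) ** 2)

best_waypoints, best_cost = pso_minimize(trajectory_cost, dim=n_waypoints)
print("best cost:", round(float(best_cost), 6))
```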
Application of deep reinforcement learning for Indian stock trading automation
|
In stock trading, feature extraction and trading strategy design are the two
important tasks for achieving long-term benefits using machine learning
techniques. Several methods have been proposed to design trading strategies by
acquiring trading signals to maximize the rewards. In the present paper, the
theory of deep reinforcement learning is applied to stock trading strategy and
investment decisions in Indian markets. The experiments are performed
systematically with three classical deep reinforcement learning models, Deep
Q-Network, Double Deep Q-Network, and Dueling Double Deep Q-Network, on ten
Indian stock datasets. The performance of the models is evaluated and a
comparison is made.
|
Liked
|
zrz@andrew.cmu.edu
|
Application of deep reinforcement learning for Indian stock trading automation : In stock trading, feature extraction and trading strategy design are the two
important tasks for achieving long-term benefits using machine learning
techniques. Several methods have been proposed to design trading strategies by
acquiring trading signals to maximize the rewards. In the present paper, the
theory of deep reinforcement learning is applied to stock trading strategy and
investment decisions in Indian markets. The experiments are performed
systematically with three classical deep reinforcement learning models, Deep
Q-Network, Double Deep Q-Network, and Dueling Double Deep Q-Network, on ten
Indian stock datasets. The performance of the models is evaluated and a
comparison is made.
| 1
|
zrz@andrew.cmu.edu [SEP] Application of deep reinforcement learning for Indian stock trading automation : In stock trading, feature extraction and trading strategy design are the two
important tasks for achieving long-term benefits using machine learning
techniques. Several methods have been proposed to design trading strategies by
acquiring trading signals to maximize the rewards. In the present paper, the
theory of deep reinforcement learning is applied to stock trading strategy and
investment decisions in Indian markets. The experiments are performed
systematically with three classical deep reinforcement learning models, Deep
Q-Network, Double Deep Q-Network, and Dueling Double Deep Q-Network, on ten
Indian stock datasets. The performance of the models is evaluated and a
comparison is made.
| 219
|
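The abstract above evaluates Deep Q-Network variants on stock data. Below is a minimal, generic DQN temporal-difference update in PyTorch for a discrete hold/buy/sell action space; the state dimension, network sizes, and the dummy batch are illustrative assumptions rather than the paper's setup.

```python
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 16, 3, 0.99  # e.g. price features and {hold, buy, sell}

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(states, actions, rewards, next_states, dones):
    """One TD update on a batch of transitions (vanilla DQN)."""
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Double DQN would pick the argmax with q_net and evaluate it with target_net.
        next_q = target_net(next_states).max(dim=1).values
        target = rewards + GAMMA * (1.0 - dones) * next_q
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch just to show the call shapes.
batch = 32
loss = dqn_update(
    torch.randn(batch, STATE_DIM),
    torch.randint(0, N_ACTIONS, (batch,)),
    torch.randn(batch),
    torch.randn(batch, STATE_DIM),
    torch.zeros(batch),
)
print("TD loss:", loss)
```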
Dynamic Movement Primitive based Motion Retargeting for Dual-Arm Sign Language Motions
|
We aim to develop an efficient programming method for equipping service
robots with the skill of performing sign language motions. This paper addresses
the problem of transferring complex dual-arm sign language motions
characterized by the coordination among arms and hands from human to robot,
which is seldom considered in previous studies of motion retargeting
techniques. In this paper, we propose a novel motion retargeting method that
leverages graph optimization and Dynamic Movement Primitives (DMPs) for this
problem. We employ DMPs in a leader-follower manner to parameterize the
original trajectories while preserving motion rhythm and relative movements
between human body parts, and adopt a three-step optimization procedure to find
deformed trajectories for robot motion planning while ensuring feasibility for
robot execution. Several Chinese Sign Language (CSL) motions have been
successfully performed in experiments on ABB's YuMi dual-arm collaborative
robot (14-DOF) with two 6-DOF Inspire-Robotics' multi-fingered hands, a system
with 26 DOFs in total.
|
Liked
|
jechoi@andrew.cmu.edu
|
Dynamic Movement Primitive based Motion Retargeting for Dual-Arm Sign Language Motions : We aim to develop an efficient programming method for equipping service
robots with the skill of performing sign language motions. This paper addresses
the problem of transferring complex dual-arm sign language motions
characterized by the coordination among arms and hands from human to robot,
which is seldom considered in previous studies of motion retargeting
techniques. In this paper, we propose a novel motion retargeting method that
leverages graph optimization and Dynamic Movement Primitives (DMPs) for this
problem. We employ DMPs in a leader-follower manner to parameterize the
original trajectories while preserving motion rhythm and relative movements
between human body parts, and adopt a three-step optimization procedure to find
deformed trajectories for robot motion planning while ensuring feasibility for
robot execution. Several Chinese Sign Language (CSL) motions have been
successfully performed in experiments on ABB's YuMi dual-arm collaborative
robot (14-DOF) with two 6-DOF Inspire-Robotics' multi-fingered hands, a system
with 26 DOFs in total.
| 1
|
jechoi@andrew.cmu.edu [SEP] Dynamic Movement Primitive based Motion Retargeting for Dual-Arm Sign Language Motions : We aim to develop an efficient programming method for equipping service
robots with the skill of performing sign language motions. This paper addresses
the problem of transferring complex dual-arm sign language motions
characterized by the coordination among arms and hands from human to robot,
which is seldom considered in previous studies of motion retargeting
techniques. In this paper, we propose a novel motion retargeting method that
leverages graph optimization and Dynamic Movement Primitives (DMPs) for this
problem. We employ DMPs in a leader-follower manner to parameterize the
original trajectories while preserving motion rhythm and relative movements
between human body parts, and adopt a three-step optimization procedure to find
deformed trajectories for robot motion planning while ensuring feasibility for
robot execution. Several Chinese Sign Language (CSL) motions have been
successfully performed in experiments on ABB's YuMi dual-arm collaborative
robot (14-DOF) with two 6-DOF Inspire-Robotics' multi-fingered hands, a system
with 26 DOFs in total.
| 489
|
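The abstract above builds on Dynamic Movement Primitives (DMPs) to parameterize demonstrated trajectories. The sketch below is a standard one-dimensional discrete DMP, fitting a forcing term to a demonstration and rolling it out toward a new goal; it is only a generic illustration of the DMP machinery, not the leader-follower, graph-optimized retargeting pipeline of the paper, and the constants are typical textbook choices.

```python
import numpy as np

class DMP1D:
    """Minimal one-dimensional discrete Dynamic Movement Primitive."""

    def __init__(self, n_basis=30, alpha=25.0, beta=6.25, alpha_x=3.0):
        self.n, self.alpha, self.beta, self.alpha_x = n_basis, alpha, beta, alpha_x
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))   # basis centers in phase x
        self.h = 1.0 / np.diff(self.c, append=self.c[-1] * 0.5) ** 2
        self.w = np.zeros(n_basis)

    def _features(self, x):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return psi * x / (psi.sum() + 1e-10)

    def fit(self, y_demo, dt):
        """Learn forcing-term weights that reproduce the demonstration."""
        yd, ydd = np.gradient(y_demo, dt), np.gradient(np.gradient(y_demo, dt), dt)
        self.y0, self.g = y_demo[0], y_demo[-1]
        x = np.exp(-self.alpha_x * dt * np.arange(len(y_demo)))  # canonical phase
        f_target = ydd - self.alpha * (self.beta * (self.g - y_demo) - yd)
        Phi = np.stack([self._features(xi) for xi in x])
        self.w, *_ = np.linalg.lstsq(Phi, f_target, rcond=None)

    def rollout(self, dt, steps, goal=None):
        """Integrate the transformation system toward `goal` (Euler steps)."""
        g = self.g if goal is None else goal
        y, v, x, traj = self.y0, 0.0, 1.0, []
        for _ in range(steps):
            f = self._features(x) @ self.w
            v += dt * (self.alpha * (self.beta * (g - y) - v) + f)
            y += dt * v
            x += dt * (-self.alpha_x * x)
            traj.append(y)
        return np.array(traj)

# Demonstration: a smooth reach from 0 to 1; then reproduce it toward a new goal.
dt, T = 0.01, 100
demo = 0.5 * (1 - np.cos(np.pi * np.linspace(0, 1, T)))
dmp = DMP1D()
dmp.fit(demo, dt)
print("end of rollout toward goal 1.5:", round(dmp.rollout(dt, T, goal=1.5)[-1], 3))
```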
An Overview of Deep Semi-Supervised Learning
|
Deep neural networks have demonstrated their ability to provide remarkable
performance on a wide range of supervised learning tasks (e.g., image
classification) when trained on extensive collections of labeled data (e.g.,
ImageNet). However, creating such large datasets requires a considerable amount
of resources, time, and effort. Such resources may not be available in many
practical cases, limiting the adoption and the application of many deep
learning methods. In a search for more data-efficient deep learning methods to
overcome the need for large annotated datasets, there is a rising research
interest in semi-supervised learning and its applications to deep neural
networks to reduce the amount of labeled data required, by either developing
novel methods or adopting existing semi-supervised learning frameworks for a
deep learning setting. In this paper, we provide a comprehensive overview of
deep semi-supervised learning, starting with an introduction to the field,
followed by a summarization of the dominant semi-supervised approaches in deep
learning.
|
Liked
|
zrz@andrew.cmu.edu
|
An Overview of Deep Semi-Supervised Learning : Deep neural networks have demonstrated their ability to provide remarkable
performance on a wide range of supervised learning tasks (e.g., image
classification) when trained on extensive collections of labeled data (e.g.,
ImageNet). However, creating such large datasets requires a considerable amount
of resources, time, and effort. Such resources may not be available in many
practical cases, limiting the adoption and the application of many deep
learning methods. In a search for more data-efficient deep learning methods to
overcome the need for large annotated datasets, there is a rising research
interest in semi-supervised learning and its applications to deep neural
networks to reduce the amount of labeled data required, by either developing
novel methods or adopting existing semi-supervised learning frameworks for a
deep learning setting. In this paper, we provide a comprehensive overview of
deep semi-supervised learning, starting with an introduction to the field,
followed by a summarization of the dominant semi-supervised approaches in deep
learning.
| 1
|
zrz@andrew.cmu.edu [SEP] An Overview of Deep Semi-Supervised Learning : Deep neural networks have demonstrated their ability to provide remarkable
performance on a wide range of supervised learning tasks (e.g., image
classification) when trained on extensive collections of labeled data (e.g.,
ImageNet). However, creating such large datasets requires a considerable amount
of resources, time, and effort. Such resources may not be available in many
practical cases, limiting the adoption and the application of many deep
learning methods. In a search for more data-efficient deep learning methods to
overcome the need for large annotated datasets, there is a rising research
interest in semi-supervised learning and its applications to deep neural
networks to reduce the amount of labeled data required, by either developing
novel methods or adopting existing semi-supervised learning frameworks for a
deep learning setting. In this paper, we provide a comprehensive overview of
deep semi-supervised learning, starting with an introduction to the field,
followed by a summarization of the dominant semi-supervised approaches in deep
learning.
| 200
|
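As a concrete instance of one of the dominant recipes such an overview covers, the snippet below sketches a pseudo-labeling training step: a supervised loss on labeled data plus a thresholded cross-entropy on confidently predicted unlabeled data. The model, batch shapes, confidence threshold, and weighting are illustrative placeholders, not a prescription from the survey.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def semi_supervised_step(x_lab, y_lab, x_unlab, threshold=0.95, lam=1.0):
    """One pseudo-labeling update: supervised loss + confident unlabeled loss."""
    loss = nn.functional.cross_entropy(model(x_lab), y_lab)
    with torch.no_grad():
        probs = torch.softmax(model(x_unlab), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = conf >= threshold                  # keep only confident guesses
    if mask.any():
        loss = loss + lam * nn.functional.cross_entropy(
            model(x_unlab[mask]), pseudo_y[mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy labeled and unlabeled batches just to show the call shapes.
loss = semi_supervised_step(torch.randn(16, 20), torch.randint(0, 5, (16,)),
                            torch.randn(64, 20))
print("combined loss:", round(loss, 4))
```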
Task Oriented Video Coding: A Survey
|
Video coding technology has been continuously improved for higher compression
ratio with higher resolution. However, the state-of-the-art video coding
standards, such as H.265/HEVC and Versatile Video Coding, are still designed
with the assumption that the compressed video will be watched by humans. With the
tremendous advance and maturation of deep neural networks in solving computer
vision tasks, more and more videos are directly analyzed by deep neural
networks without humans' involvement. Such a conventional design for video
coding standard is not optimal when the compressed video is used by computer
vision applications. While the human visual system is consistently sensitive to
the content with high contrast, the impact of pixels on computer vision
algorithms is driven by specific computer vision tasks. In this paper, we
explore and summarize recent progress on computer vision task oriented video
coding and emerging video coding standard, Video Coding for Machines.
|
Disliked
|
zrz@andrew.cmu.edu
|
Task Oriented Video Coding: A Survey : Video coding technology has been continuously improved for higher compression
ratio with higher resolution. However, the state-of-the-art video coding
standards, such as H.265/HEVC and Versatile Video Coding, are still designed
with the assumption that the compressed video will be watched by humans. With the
tremendous advance and maturation of deep neural networks in solving computer
vision tasks, more and more videos are directly analyzed by deep neural
networks without humans' involvement. Such a conventional design for video
coding standard is not optimal when the compressed video is used by computer
vision applications. While the human visual system is consistently sensitive to
the content with high contrast, the impact of pixels on computer vision
algorithms is driven by specific computer vision tasks. In this paper, we
explore and summarize recent progress on computer vision task oriented video
coding and emerging video coding standard, Video Coding for Machines.
| 0
|
zrz@andrew.cmu.edu [SEP] Task Oriented Video Coding: A Survey : Video coding technology has been continuously improved for higher compression
ratio with higher resolution. However, the state-of-the-art video coding
standards, such as H.265/HEVC and Versatile Video Coding, are still designed
with the assumption that the compressed video will be watched by humans. With the
tremendous advance and maturation of deep neural networks in solving computer
vision tasks, more and more videos are directly analyzed by deep neural
networks without humans' involvement. Such a conventional design for video
coding standard is not optimal when the compressed video is used by computer
vision applications. While the human visual system is consistently sensitive to
the content with high contrast, the impact of pixels on computer vision
algorithms is driven by specific computer vision tasks. In this paper, we
explore and summarize recent progress on computer vision task oriented video
coding and emerging video coding standard, Video Coding for Machines.
| 354
|
How Developers Iterate on Machine Learning Workflows -- A Survey of the Applied Machine Learning Literature
|
Machine learning workflow development is anecdotally regarded to be an
iterative process of trial-and-error with humans-in-the-loop. However, we are
not aware of quantitative evidence corroborating this popular belief. A
quantitative characterization of iteration can serve as a benchmark for machine
learning workflow development in practice, and can aid the development of
human-in-the-loop machine learning systems. To this end, we conduct a
small-scale survey of the applied machine learning literature from five
distinct application domains. We collect and distill statistics on the role of
iteration within machine learning workflow development, and report preliminary
trends and insights from our investigation, as a starting point towards this
benchmark. Based on our findings, we finally describe desiderata for effective
and versatile human-in-the-loop machine learning systems that can cater to
users in diverse domains.
|
Liked
|
zrz@andrew.cmu.edu
|
How Developers Iterate on Machine Learning Workflows -- A Survey of the Applied Machine Learning Literature : Machine learning workflow development is anecdotally regarded to be an
iterative process of trial-and-error with humans-in-the-loop. However, we are
not aware of quantitative evidence corroborating this popular belief. A
quantitative characterization of iteration can serve as a benchmark for machine
learning workflow development in practice, and can aid the development of
human-in-the-loop machine learning systems. To this end, we conduct a
small-scale survey of the applied machine learning literature from five
distinct application domains. We collect and distill statistics on the role of
iteration within machine learning workflow development, and report preliminary
trends and insights from our investigation, as a starting point towards this
benchmark. Based on our findings, we finally describe desiderata for effective
and versatile human-in-the-loop machine learning systems that can cater to
users in diverse domains.
| 1
|
zrz@andrew.cmu.edu [SEP] How Developers Iterate on Machine Learning Workflows -- A Survey of the Applied Machine Learning Literature : Machine learning workflow development is anecdotally regarded to be an
iterative process of trial-and-error with humans-in-the-loop. However, we are
not aware of quantitative evidence corroborating this popular belief. A
quantitative characterization of iteration can serve as a benchmark for machine
learning workflow development in practice, and can aid the development of
human-in-the-loop machine learning systems. To this end, we conduct a
small-scale survey of the applied machine learning literature from five
distinct application domains. We collect and distill statistics on the role of
iteration within machine learning workflow development, and report preliminary
trends and insights from our investigation, as a starting point towards this
benchmark. Based on our findings, we finally describe desiderata for effective
and versatile human-in-the-loop machine learning systems that can cater to
users in diverse domains.
| 108
|
Learning Task-aware Robust Deep Learning Systems
|
Many works demonstrate that deep learning systems are vulnerable to
adversarial attacks. A deep learning system consists of two parts: the deep
learning task and the deep model. Nowadays, most existing works investigate the
impact of the deep model on the robustness of deep learning systems, ignoring
the impact of the learning task. In this paper, we adopt the binary and
interval label encoding strategy to redefine the classification task and design
a corresponding loss to improve the robustness of the deep learning system. Our
method can be viewed as improving the robustness of deep learning systems from
the perspectives of both the learning task and the deep model. Experimental
results demonstrate that our learning task-aware method is much more robust
than traditional classification while retaining accuracy.
|
Liked
|
zrz@andrew.cmu.edu
|
Learning Task-aware Robust Deep Learning Systems : Many works demonstrate that deep learning systems are vulnerable to
adversarial attacks. A deep learning system consists of two parts: the deep
learning task and the deep model. Nowadays, most existing works investigate the
impact of the deep model on the robustness of deep learning systems, ignoring
the impact of the learning task. In this paper, we adopt the binary and
interval label encoding strategy to redefine the classification task and design
a corresponding loss to improve the robustness of the deep learning system. Our
method can be viewed as improving the robustness of deep learning systems from
the perspectives of both the learning task and the deep model. Experimental
results demonstrate that our learning task-aware method is much more robust
than traditional classification while retaining accuracy.
| 1
|
zrz@andrew.cmu.edu [SEP] Learning Task-aware Robust Deep Learning Systems : Many works demonstrate that deep learning systems are vulnerable to
adversarial attacks. A deep learning system consists of two parts: the deep
learning task and the deep model. Nowadays, most existing works investigate the
impact of the deep model on the robustness of deep learning systems, ignoring
the impact of the learning task. In this paper, we adopt the binary and
interval label encoding strategy to redefine the classification task and design
a corresponding loss to improve the robustness of the deep learning system. Our
method can be viewed as improving the robustness of deep learning systems from
the perspectives of both the learning task and the deep model. Experimental
results demonstrate that our learning task-aware method is much more robust
than traditional classification while retaining accuracy.
| 162
|
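The abstract above defends against adversarial attacks by redesigning the learning task. For context, the sketch below shows the fast gradient sign method (FGSM), a standard baseline attack that such defenses are typically evaluated against; it is not the paper's label-encoding method, and the model and batch are random placeholders.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of x under an L-infinity budget."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()   # step along the gradient sign
        x_adv = x_adv.clamp(0.0, 1.0)                 # keep pixels in a valid range
    return x_adv.detach()

# Placeholder model and batch just to show the call.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
x_adv = fgsm_attack(model, x, y)
clean_acc = (model(x).argmax(1) == y).float().mean().item()
adv_acc = (model(x_adv).argmax(1) == y).float().mean().item()
print(f"clean acc: {clean_acc:.2f}, adversarial acc: {adv_acc:.2f}")
```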
To New Beginnings: A Survey of Unified Perception in Autonomous Vehicle Software
|
Autonomous vehicle perception typically relies on modular pipelines that
decompose the task into detection, tracking, and prediction. While
interpretable, these pipelines suffer from error accumulation and limited
inter-task synergy. Unified perception has emerged as a promising paradigm that
integrates these sub-tasks within a shared architecture, potentially improving
robustness, contextual reasoning, and efficiency while retaining interpretable
outputs. In this survey, we provide a comprehensive overview of unified
perception, introducing a holistic and systemic taxonomy that categorizes
methods along task integration, tracking formulation, and representation flow.
We define three paradigms -Early, Late, and Full Unified Perception- and
systematically review existing methods, their architectures, training
strategies, datasets used, and open-source availability, while highlighting
future research directions. This work establishes the first comprehensive
framework for understanding and advancing unified perception, consolidates
fragmented efforts, and guides future research toward more robust,
generalizable, and interpretable perception.
|
Disliked
|
zrz@andrew.cmu.edu
|
To New Beginnings: A Survey of Unified Perception in Autonomous Vehicle Software : Autonomous vehicle perception typically relies on modular pipelines that
decompose the task into detection, tracking, and prediction. While
interpretable, these pipelines suffer from error accumulation and limited
inter-task synergy. Unified perception has emerged as a promising paradigm that
integrates these sub-tasks within a shared architecture, potentially improving
robustness, contextual reasoning, and efficiency while retaining interpretable
outputs. In this survey, we provide a comprehensive overview of unified
perception, introducing a holistic and systemic taxonomy that categorizes
methods along task integration, tracking formulation, and representation flow.
We define three paradigms -Early, Late, and Full Unified Perception- and
systematically review existing methods, their architectures, training
strategies, datasets used, and open-source availability, while highlighting
future research directions. This work establishes the first comprehensive
framework for understanding and advancing unified perception, consolidates
fragmented efforts, and guides future research toward more robust,
generalizable, and interpretable perception.
| 0
|
zrz@andrew.cmu.edu [SEP] To New Beginnings: A Survey of Unified Perception in Autonomous Vehicle Software : Autonomous vehicle perception typically relies on modular pipelines that
decompose the task into detection, tracking, and prediction. While
interpretable, these pipelines suffer from error accumulation and limited
inter-task synergy. Unified perception has emerged as a promising paradigm that
integrates these sub-tasks within a shared architecture, potentially improving
robustness, contextual reasoning, and efficiency while retaining interpretable
outputs. In this survey, we provide a comprehensive overview of unified
perception, introducing a holistic and systemic taxonomy that categorizes
methods along task integration, tracking formulation, and representation flow.
We define three paradigms -Early, Late, and Full Unified Perception- and
systematically review existing methods, their architectures, training
strategies, datasets used, and open-source availability, while highlighting
future research directions. This work establishes the first comprehensive
framework for understanding and advancing unified perception, consolidates
fragmented efforts, and guides future research toward more robust,
generalizable, and interpretable perception.
| 335
|
Advances in Hybrid Modular Climbing Robots: Design Principles and Refinement Strategies
|
This paper explores the design strategies for hybrid pole- or trunk-climbing
robots, focusing on methods to inform design decisions and assess metrics such
as adaptability and performance. A wheeled-grasping hybrid robot with modular,
tendon-driven grasping arms and a wheeled drive system mounted on a turret was
developed to climb columns of varying diameters. Here, the key innovation is
the underactuated arms that can be adjusted to different column sizes by adding
or removing modular linkages, though the robot also features capabilities like
self-locking (the ability of the robot to stay on the column by friction
without power), autonomous grasping, and rotation around the column axis.
Mathematical models describe conditions for self-locking and vertical climbing.
Experimental results demonstrate the robot's efficacy in climbing and
self-locking, validating the proposed models and highlighting the potential for
fully automated solutions in industrial applications. This work provides a
comprehensive framework for evaluating and designing hybrid climbing robots,
contributing to advancements in autonomous robotics for environments where
climbing tall structures is critical.
|
Liked
|
jechoi@andrew.cmu.edu
|
Advances in Hybrid Modular Climbing Robots: Design Principles and Refinement Strategies : This paper explores the design strategies for hybrid pole- or trunk-climbing
robots, focusing on methods to inform design decisions and assess metrics such
as adaptability and performance. A wheeled-grasping hybrid robot with modular,
tendon-driven grasping arms and a wheeled drive system mounted on a turret was
developed to climb columns of varying diameters. Here, the key innovation is
the underactuated arms that can be adjusted to different column sizes by adding
or removing modular linkages, though the robot also features capabilities like
self-locking (the ability of the robot to stay on the column by friction
without power), autonomous grasping, and rotation around the column axis.
Mathematical models describe conditions for self-locking and vertical climbing.
Experimental results demonstrate the robot's efficacy in climbing and
self-locking, validating the proposed models and highlighting the potential for
fully automated solutions in industrial applications. This work provides a
comprehensive framework for evaluating and designing hybrid climbing robots,
contributing to advancements in autonomous robotics for environments where
climbing tall structures is critical.
| 1
|
jechoi@andrew.cmu.edu [SEP] Advances in Hybrid Modular Climbing Robots: Design Principles and Refinement Strategies : This paper explores the design strategies for hybrid pole- or trunk-climbing
robots, focusing on methods to inform design decisions and assess metrics such
as adaptability and performance. A wheeled-grasping hybrid robot with modular,
tendon-driven grasping arms and a wheeled drive system mounted on a turret was
developed to climb columns of varying diameters. Here, the key innovation is
the underactuated arms that can be adjusted to different column sizes by adding
or removing modular linkages, though the robot also features capabilities like
self-locking (the ability of the robot to stay on the column by friction
without power), autonomous grasping, and rotation around the column axis.
Mathematical models describe conditions for self-locking and vertical climbing.
Experimental results demonstrate the robot's efficacy in climbing and
self-locking, validating the proposed models and highlighting the potential for
fully automated solutions in industrial applications. This work provides a
comprehensive framework for evaluating and designing hybrid climbing robots,
contributing to advancements in autonomous robotics for environments where
climbing tall structures is critical.
| 564
|
Distributed Deep Reinforcement Learning: A Survey and A Multi-Player Multi-Agent Learning Toolbox
|
With the breakthrough of AlphaGo, deep reinforcement learning has become a
recognized technique for solving sequential decision-making problems. Despite
its reputation, the data inefficiency caused by its trial-and-error learning
mechanism makes deep reinforcement learning hard to apply in practice across a
wide range of areas. Plenty of methods have been developed for sample-efficient
deep reinforcement learning, such as environment modeling, experience transfer,
and distributed modifications, among which distributed deep reinforcement
learning has shown its potential in various applications, such as
human-computer gaming and intelligent transportation. In this paper, we
summarize the state of this exciting field by comparing the classical
distributed deep reinforcement learning methods and studying the important
components needed to achieve efficient distributed learning, covering the range
from single-player, single-agent distributed deep reinforcement learning to the
most complex multi-player, multi-agent setting. Furthermore, we review recently
released toolboxes that help to realize distributed deep reinforcement learning
without many modifications of their non-distributed versions. By analyzing
their strengths and weaknesses, a multi-player multi-agent distributed deep
reinforcement learning toolbox is developed and released, which is further
validated on Wargame, a complex environment, showing the usability of the
proposed toolbox for multi-player, multi-agent distributed deep reinforcement
learning in complex games. Finally, we point out challenges and future trends,
hoping this brief review can provide a guide or a spark for researchers who are
interested in distributed deep reinforcement learning.
|
Disliked
|
zrz@andrew.cmu.edu
|
Distributed Deep Reinforcement Learning: A Survey and A Multi-Player Multi-Agent Learning Toolbox : With the breakthrough of AlphaGo, deep reinforcement learning has become a
recognized technique for solving sequential decision-making problems. Despite
its reputation, the data inefficiency caused by its trial-and-error learning
mechanism makes deep reinforcement learning hard to apply in practice across a
wide range of areas. Plenty of methods have been developed for sample-efficient
deep reinforcement learning, such as environment modeling, experience transfer,
and distributed modifications, among which distributed deep reinforcement
learning has shown its potential in various applications, such as
human-computer gaming and intelligent transportation. In this paper, we
summarize the state of this exciting field by comparing the classical
distributed deep reinforcement learning methods and studying the important
components needed to achieve efficient distributed learning, covering the range
from single-player, single-agent distributed deep reinforcement learning to the
most complex multi-player, multi-agent setting. Furthermore, we review recently
released toolboxes that help to realize distributed deep reinforcement learning
without many modifications of their non-distributed versions. By analyzing
their strengths and weaknesses, a multi-player multi-agent distributed deep
reinforcement learning toolbox is developed and released, which is further
validated on Wargame, a complex environment, showing the usability of the
proposed toolbox for multi-player, multi-agent distributed deep reinforcement
learning in complex games. Finally, we point out challenges and future trends,
hoping this brief review can provide a guide or a spark for researchers who are
interested in distributed deep reinforcement learning.
| 0
|
zrz@andrew.cmu.edu [SEP] Distributed Deep Reinforcement Learning: A Survey and A Multi-Player Multi-Agent Learning Toolbox : With the breakthrough of AlphaGo, deep reinforcement learning has become a
recognized technique for solving sequential decision-making problems. Despite
its reputation, the data inefficiency caused by its trial-and-error learning
mechanism makes deep reinforcement learning hard to apply in practice across a
wide range of areas. Plenty of methods have been developed for sample-efficient
deep reinforcement learning, such as environment modeling, experience transfer,
and distributed modifications, among which distributed deep reinforcement
learning has shown its potential in various applications, such as
human-computer gaming and intelligent transportation. In this paper, we
summarize the state of this exciting field by comparing the classical
distributed deep reinforcement learning methods and studying the important
components needed to achieve efficient distributed learning, covering the range
from single-player, single-agent distributed deep reinforcement learning to the
most complex multi-player, multi-agent setting. Furthermore, we review recently
released toolboxes that help to realize distributed deep reinforcement learning
without many modifications of their non-distributed versions. By analyzing
their strengths and weaknesses, a multi-player multi-agent distributed deep
reinforcement learning toolbox is developed and released, which is further
validated on Wargame, a complex environment, showing the usability of the
proposed toolbox for multi-player, multi-agent distributed deep reinforcement
learning in complex games. Finally, we point out challenges and future trends,
hoping this brief review can provide a guide or a spark for researchers who are
interested in distributed deep reinforcement learning.
| 179
|
Evaluation Challenges for Geospatial ML
|
As geospatial machine learning models and maps derived from their predictions
are increasingly used for downstream analyses in science and policy, it is
imperative to evaluate their accuracy and applicability. Geospatial machine
learning has key distinctions from other learning paradigms, and as such, the
correct way to measure performance of spatial machine learning outputs has been
a topic of debate. In this paper, I delineate unique challenges of model
evaluation for geospatial machine learning with global or remotely sensed
datasets, culminating in concrete takeaways to improve evaluations of
geospatial model performance.
|
Disliked
|
zrz@andrew.cmu.edu
|
Evaluation Challenges for Geospatial ML : As geospatial machine learning models and maps derived from their predictions
are increasingly used for downstream analyses in science and policy, it is
imperative to evaluate their accuracy and applicability. Geospatial machine
learning has key distinctions from other learning paradigms, and as such, the
correct way to measure performance of spatial machine learning outputs has been
a topic of debate. In this paper, I delineate unique challenges of model
evaluation for geospatial machine learning with global or remotely sensed
datasets, culminating in concrete takeaways to improve evaluations of
geospatial model performance.
| 0
|
zrz@andrew.cmu.edu [SEP] Evaluation Challenges for Geospatial ML : As geospatial machine learning models and maps derived from their predictions
are increasingly used for downstream analyses in science and policy, it is
imperative to evaluate their accuracy and applicability. Geospatial machine
learning has key distinctions from other learning paradigms, and as such, the
correct way to measure performance of spatial machine learning outputs has been
a topic of debate. In this paper, I delineate unique challenges of model
evaluation for geospatial machine learning with global or remotely sensed
datasets, culminating in concrete takeaways to improve evaluations of
geospatial model performance.
| 52
|
A Mobile Quad-Arm Robot ARMS: Wheeled-Legged Tripedal Locomotion and Quad-Arm Loco-Manipulation
|
This article proposes a mobile quad-arm robot: ARMS, which unifies
wheeled-legged tripedal locomotion, wheeled locomotion, and quad-arm
loco-manipulation. ARMS's four arms have different mechanisms and are partially
designed to be general-purpose arms for the hybrid locomotion and
loco-manipulation. One three-degree-of-freedom (DOF) arm has an active wheel,
which is used for wheeled-legged tripedal walking and wheeled driving with
passive wheels attached to the torso. Two three-DOF general-purpose arms are
series elastic and used for wheeled-legged tripedal walking, object grasping,
and manipulation. The upper two-DOF arm is used for manipulation only; its
position and orientation are determined by coordinating all arms. Each motor is
controlled by an angle controller and trajectory modification with angle,
angular velocity, angular acceleration, and torque constraints. ARMS was
verified with seven experiments involving joint control, wheeled-legged
locomotion, wheeled locomotion and grasping, slope locomotion, block terrain
locomotion, carrying a bag, and outdoor locomotion.
|
Disliked
|
jechoi@andrew.cmu.edu
|
A Mobile Quad-Arm Robot ARMS: Wheeled-Legged Tripedal Locomotion and Quad-Arm Loco-Manipulation : This article proposes a mobile quad-arm robot: ARMS, which unifies
wheeled-legged tripedal locomotion, wheeled locomotion, and quad-arm
loco-manipulation. ARMS's four arms have different mechanisms and are partially
designed to be general-purpose arms for the hybrid locomotion and
loco-manipulation. One three-degree-of-freedom (DOF) arm has an active wheel,
which is used for wheeled-legged tripedal walking and wheeled driving with
passive wheels attached to the torso. Two three-DOF general-purpose arms are
series elastic and used for wheeled-legged tripedal walking, object grasping,
and manipulation. The upper two-DOF arm is used for manipulation only; its
position and orientation are determined by coordinating all arms. Each motor is
controlled by an angle controller and trajectory modification with angle,
angular velocity, angular acceleration, and torque constraints. ARMS was
verified with seven experiments involving joint control, wheeled-legged
locomotion, wheeled locomotion and grasping, slope locomotion, block terrain
locomotion, carrying a bag, and outdoor locomotion.
| 0
|
jechoi@andrew.cmu.edu [SEP] A Mobile Quad-Arm Robot ARMS: Wheeled-Legged Tripedal Locomotion and Quad-Arm Loco-Manipulation : This article proposes a mobile quad-arm robot: ARMS, which unifies
wheeled-legged tripedal locomotion, wheeled locomotion, and quad-arm
loco-manipulation. ARMS's four arms have different mechanisms and are partially
designed to be general-purpose arms for the hybrid locomotion and
loco-manipulation. One three-degree-of-freedom (DOF) arm has an active wheel,
which is used for wheeled-legged tripedal walking and wheeled driving with
passive wheels attached to the torso. Two three-DOF general-purpose arms are
series elastic and used for wheeled-legged tripedal walking, object grasping,
and manipulation. The upper two-DOF arm is used for manipulation only; its
position and orientation are determined by coordinating all arms. Each motor is
controlled by an angle controller and trajectory modification with angle,
angular velocity, angular acceleration, and torque constraints. ARMS was
verified with seven experiments involving joint control, wheeled-legged
locomotion, wheeled locomotion and grasping, slope locomotion, block terrain
locomotion, carrying a bag, and outdoor locomotion.
| 15
|
Highly dynamic locomotion control of biped robot enhanced by swing arms
|
From the viewpoint of biomechanics, swing arms play an irreplaceable role in
promoting highly dynamic locomotion on bipedal robots by providing a larger
angular momentum control space. Few bipedal robots utilize swing arms and their
redundant degrees of freedom, due to the lack of appropriate locomotion control
strategies that tightly integrate modeling and control. This paper presents a
control strategy that models the bipedal robot as a flywheel-spring loaded
inverted pendulum (F-SLIP) to extract the characteristics of swing arms and
uses a whole-body controller (WBC) to realize these characteristics; it also
proposes an evaluation system covering three aspects, agility (as defined by
us), stability, and energy consumption, for the highly dynamic locomotion of
bipedal robots. We design several sets of simulation experiments and analyze
the effects of swing arms according to the evaluation system during the jumping
motion of Purple (Purple energy rises in the east) V1.0, a bipedal robot
designed to test highly explosive locomotion. Results show that Purple's
agility is increased by more than 10 percent, stabilization time is reduced by
a factor of two, and energy consumption is reduced by more than 20 percent
after introducing swing arms.
|
Disliked
|
jechoi@andrew.cmu.edu
|
Highly dynamic locomotion control of biped robot enhanced by swing arms : From the viewpoint of biomechanics, swing arms play an irreplaceable role in
promoting highly dynamic locomotion on bipedal robots by providing a larger
angular momentum control space. Few bipedal robots utilize swing arms and their
redundant degrees of freedom, due to the lack of appropriate locomotion control
strategies that tightly integrate modeling and control. This paper presents a
control strategy that models the bipedal robot as a flywheel-spring loaded
inverted pendulum (F-SLIP) to extract the characteristics of swing arms and
uses a whole-body controller (WBC) to realize these characteristics; it also
proposes an evaluation system covering three aspects, agility (as defined by
us), stability, and energy consumption, for the highly dynamic locomotion of
bipedal robots. We design several sets of simulation experiments and analyze
the effects of swing arms according to the evaluation system during the jumping
motion of Purple (Purple energy rises in the east) V1.0, a bipedal robot
designed to test highly explosive locomotion. Results show that Purple's
agility is increased by more than 10 percent, stabilization time is reduced by
a factor of two, and energy consumption is reduced by more than 20 percent
after introducing swing arms.
| 0
|
jechoi@andrew.cmu.edu [SEP] Highly dynamic locomotion control of biped robot enhanced by swing arms : From the viewpoint of biomechanics, swing arms play an irreplaceable role in
promoting highly dynamic locomotion on bipedal robots by providing a larger
angular momentum control space. Few bipedal robots utilize swing arms and their
redundant degrees of freedom, due to the lack of appropriate locomotion control
strategies that tightly integrate modeling and control. This paper presents a
control strategy that models the bipedal robot as a flywheel-spring loaded
inverted pendulum (F-SLIP) to extract the characteristics of swing arms and
uses a whole-body controller (WBC) to realize these characteristics; it also
proposes an evaluation system covering three aspects, agility (as defined by
us), stability, and energy consumption, for the highly dynamic locomotion of
bipedal robots. We design several sets of simulation experiments and analyze
the effects of swing arms according to the evaluation system during the jumping
motion of Purple (Purple energy rises in the east) V1.0, a bipedal robot
designed to test highly explosive locomotion. Results show that Purple's
agility is increased by more than 10 percent, stabilization time is reduced by
a factor of two, and energy consumption is reduced by more than 20 percent
after introducing swing arms.
| 425
|
Geometrization of deep networks for the interpretability of deep learning systems
|
How to understand deep learning systems remains an open problem. In this
paper we propose that the answer may lie in the geometrization of deep
networks. Geometrization is a bridge to connect physics, geometry, deep network
and quantum computation and this may result in a new scheme to reveal the rule
of the physical world. By comparing the geometry of image matching and deep
networks, we show that geometrization of deep networks can be used to
understand existing deep learning systems and it may also help to solve the
interpretability problem of deep learning systems.
|
Disliked
|
zrz@andrew.cmu.edu
|
Geometrization of deep networks for the interpretability of deep learning systems : How to understand deep learning systems remains an open problem. In this
paper we propose that the answer may lie in the geometrization of deep
networks. Geometrization is a bridge to connect physics, geometry, deep network
and quantum computation and this may result in a new scheme to reveal the rule
of the physical world. By comparing the geometry of image matching and deep
networks, we show that geometrization of deep networks can be used to
understand existing deep learning systems and it may also help to solve the
interpretability problem of deep learning systems.
| 0
|
zrz@andrew.cmu.edu [SEP] Geometrization of deep networks for the interpretability of deep learning systems : How to understand deep learning systems remains an open problem. In this
paper we propose that the answer may lie in the geometrization of deep
networks. Geometrization is a bridge connecting physics, geometry, deep networks,
and quantum computation, and it may lead to a new scheme for revealing the rules
of the physical world. By comparing the geometry of image matching and deep
networks, we show that geometrization of deep networks can be used to
understand existing deep learning systems and it may also help to solve the
interpretability problem of deep learning systems.
| 160
|
Position Paper: Towards Transparent Machine Learning
|
Transparent machine learning is introduced as an alternative form of machine
learning, where both the model and the learning system are represented in
source code form. The goal of this project is to enable direct human
understanding of machine learning models, giving us the ability to learn,
verify, and refine them as programs. If solved, this technology could represent
a best-case scenario for the safety and security of AI systems going forward.
|
Liked
|
zrz@andrew.cmu.edu
|
Position Paper: Towards Transparent Machine Learning : Transparent machine learning is introduced as an alternative form of machine
learning, where both the model and the learning system are represented in
source code form. The goal of this project is to enable direct human
understanding of machine learning models, giving us the ability to learn,
verify, and refine them as programs. If solved, this technology could represent
a best-case scenario for the safety and security of AI systems going forward.
| 1
|
zrz@andrew.cmu.edu [SEP] Position Paper: Towards Transparent Machine Learning : Transparent machine learning is introduced as an alternative form of machine
learning, where both the model and the learning system are represented in
source code form. The goal of this project is to enable direct human
understanding of machine learning models, giving us the ability to learn,
verify, and refine them as programs. If solved, this technology could represent
a best-case scenario for the safety and security of AI systems going forward.
| 31
|
Deep Learning: From Basics to Building Deep Neural Networks with Python
|
This book is intended for beginners who have no familiarity with deep
learning. Our only expectation from readers is that they already have the basic
programming skills in Python.
|
Disliked
|
zrz@andrew.cmu.edu
|
Deep Learning: From Basics to Building Deep Neural Networks with Python : This book is intended for beginners who have no familiarity with deep
learning. Our only expectation from readers is that they already have the basic
programming skills in Python.
| 0
|
zrz@andrew.cmu.edu [SEP] Deep Learning: From Basics to Building Deep Neural Networks with Python : This book is intended for beginners who have no familiarity with deep
learning. Our only expectation from readers is that they already have the basic
programming skills in Python.
| 202
|
Automating the Learning of Inverse Kinematics for Robotic Arms with Redundant DoFs
|
Inverse Kinematics (IK) solves the problem of mapping from the Cartesian
space to the joint configuration space of a robotic arm. It has a wide range of
applications in areas such as computer graphics, protein structure prediction,
and robotics. With the vast advances of artificial neural networks (NNs), many
researchers recently turned to data-driven approaches to solving the IK
problem. Unfortunately, NNs become inadequate for robotic arms with redundant
Degrees-of-Freedom (DoFs). This is because such arms may have multiple angle
solutions to reach the same desired pose, while typical NNs only implement
one-to-one mapping functions, which associate just one consistent output for a
given input. In order to train usable NNs to solve the IK problem, most
existing works employ customized training datasets, in which every desired pose
only has one angle solution. This inevitably limits the generalization and
automation of the proposed approaches. This paper breaks through on two fronts:
(1) a systematic and mechanical approach to training data collection that
covers the entire working space of the robotic arm, and can be fully automated
and done only once after the arm is developed; and (2) a novel NN-based
framework that can leverage the redundant DoFs to produce multiple angle
solutions to any given desired pose of the robotic arm. The latter is
especially useful for robotic applications such as obstacle avoidance and
posture imitation.
|
Liked
|
jechoi@andrew.cmu.edu
|
Automating the Learning of Inverse Kinematics for Robotic Arms with Redundant DoFs : Inverse Kinematics (IK) solves the problem of mapping from the Cartesian
space to the joint configuration space of a robotic arm. It has a wide range of
applications in areas such as computer graphics, protein structure prediction,
and robotics. With the vast advances of artificial neural networks (NNs), many
researchers recently turned to data-driven approaches to solving the IK
problem. Unfortunately, NNs become inadequate for robotic arms with redundant
Degrees-of-Freedom (DoFs). This is because such arms may have multiple angle
solutions to reach the same desired pose, while typical NNs only implement
one-to-one mapping functions, which associate just one consistent output for a
given input. In order to train usable NNs to solve the IK problem, most
existing works employ customized training datasets, in which every desired pose
only has one angle solution. This inevitably limits the generalization and
automation of the proposed approaches. This paper breaks through on two fronts:
(1) a systematic and mechanical approach to training data collection that
covers the entire working space of the robotic arm, and can be fully automated
and done only once after the arm is developed; and (2) a novel NN-based
framework that can leverage the redundant DoFs to produce multiple angle
solutions to any given desired pose of the robotic arm. The latter is
especially useful for robotic applications such as obstacle avoidance and
posture imitation.
| 1
|
jechoi@andrew.cmu.edu [SEP] Automating the Learning of Inverse Kinematics for Robotic Arms with Redundant DoFs : Inverse Kinematics (IK) solves the problem of mapping from the Cartesian
space to the joint configuration space of a robotic arm. It has a wide range of
applications in areas such as computer graphics, protein structure prediction,
and robotics. With the vast advances of artificial neural networks (NNs), many
researchers recently turned to data-driven approaches to solving the IK
problem. Unfortunately, NNs become inadequate for robotic arms with redundant
Degrees-of-Freedom (DoFs). This is because such arms may have multiple angle
solutions to reach the same desired pose, while typical NNs only implement
one-to-one mapping functions, which associate just one consistent output for a
given input. In order to train usable NNs to solve the IK problem, most
existing works employ customized training datasets, in which every desired pose
only has one angle solution. This inevitably limits the generalization and
automation of the proposed approaches. This paper breaks through on two fronts:
(1) a systematic and mechanical approach to training data collection that
covers the entire working space of the robotic arm, and can be fully automated
and done only once after the arm is developed; and (2) a novel NN-based
framework that can leverage the redundant DoFs to produce multiple angle
solutions to any given desired pose of the robotic arm. The latter is
especially useful for robotic applications such as obstacle avoidance and
posture imitation.
| 455
|
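The abstract above argues that one-to-one networks cannot represent the many joint solutions of a redundant arm. Below is a minimal PyTorch sketch of one way to expose that redundancy, using a made-up 3-link planar arm, an extra "redundancy" input, and a forward-kinematics reconstruction loss; it is an illustrative stand-in, not the paper's framework, and the network is free to (but not forced to) map different redundancy values to different valid joint configurations.

```python
# Sketch: map a desired 2D end-effector position plus a free redundancy input
# to joint angles of a 3-link planar arm, trained through a differentiable
# forward-kinematics reconstruction loss.
import torch
import torch.nn as nn

LINKS = torch.tensor([0.5, 0.4, 0.3])  # made-up link lengths (m)

def forward_kinematics(q):
    """End-effector (x, y) of a 3-link planar arm for joint angles q [B, 3]."""
    angles = torch.cumsum(q, dim=1)                  # absolute link angles
    x = (LINKS * torch.cos(angles)).sum(dim=1)
    y = (LINKS * torch.sin(angles)).sum(dim=1)
    return torch.stack([x, y], dim=1)

class RedundantIKNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 3),
        )

    def forward(self, target_xy, redundancy):
        # redundancy is a scalar per sample the net may use to select among
        # the many joint-space solutions reaching the same target.
        return self.net(torch.cat([target_xy, redundancy], dim=1))

model = RedundantIKNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    q_sample = (torch.rand(256, 3) - 0.5) * 3.0      # random reachable poses
    target = forward_kinematics(q_sample)
    redundancy = torch.rand(256, 1)
    q_pred = model(target, redundancy)
    loss = ((forward_kinematics(q_pred) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Two different redundancy values for the same target pose.
tgt = torch.tensor([[0.6, 0.4]])
print(model(tgt, torch.tensor([[0.1]])))
print(model(tgt, torch.tensor([[0.9]])))
```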
Classic machine learning methods
|
In this chapter, we present the main classic machine learning methods. A
large part of the chapter is devoted to supervised learning techniques for
classification and regression, including nearest-neighbor methods, linear and
logistic regressions, support vector machines and tree-based algorithms. We
also describe the problem of overfitting as well as strategies to overcome it.
We finally provide a brief overview of unsupervised learning methods, namely
for clustering and dimensionality reduction.
|
Disliked
|
zrz@andrew.cmu.edu
|
Classic machine learning methods : In this chapter, we present the main classic machine learning methods. A
large part of the chapter is devoted to supervised learning techniques for
classification and regression, including nearest-neighbor methods, linear and
logistic regressions, support vector machines and tree-based algorithms. We
also describe the problem of overfitting as well as strategies to overcome it.
We finally provide a brief overview of unsupervised learning methods, namely
for clustering and dimensionality reduction.
| 0
|
zrz@andrew.cmu.edu [SEP] Classic machine learning methods : In this chapter, we present the main classic machine learning methods. A
large part of the chapter is devoted to supervised learning techniques for
classification and regression, including nearest-neighbor methods, linear and
logistic regressions, support vector machines and tree-based algorithms. We
also describe the problem of overfitting as well as strategies to overcome it.
We finally provide a brief overview of unsupervised learning methods, namely
for clustering and dimensionality reduction.
| 123
|
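As a concrete companion to the chapter summary above, the short scikit-learn example below (assuming scikit-learn is installed; the data are synthetic) contrasts nearest-neighbour and logistic-regression classifiers and uses a held-out split to expose overfitting.

```python
# Two classic supervised methods on synthetic data, with a train/test split
# used to detect overfitting (1-NN memorizes the training set).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [
    ("1-NN (prone to overfit)", KNeighborsClassifier(n_neighbors=1)),
    ("15-NN", KNeighborsClassifier(n_neighbors=15)),
    ("logistic regression", LogisticRegression(max_iter=1000)),
]:
    clf.fit(X_tr, y_tr)
    print(f"{name:25s} train={clf.score(X_tr, y_tr):.2f} "
          f"test={clf.score(X_te, y_te):.2f}")
```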
Rope3D: The Roadside Perception Dataset for Autonomous Driving and Monocular 3D Object Detection Task
|
Concurrent perception datasets for autonomous driving are mainly limited to
frontal view with sensors mounted on the vehicle. None of them is designed for
the overlooked roadside perception tasks. On the other hand, the data captured
from roadside cameras have strengths over frontal-view data, which is believed
to facilitate a safer and more intelligent autonomous driving system. To
accelerate the progress of roadside perception, we present the first
high-diversity challenging Roadside Perception 3D dataset - Rope3D from a novel
view. The dataset consists of 50k images and over 1.5M 3D objects in various
scenes, which are captured under different settings including various cameras
with ambiguous mounting positions, camera specifications, viewpoints, and
different environmental conditions. We conduct strict 2D-3D joint annotation
and comprehensive data analysis, as well as set up a new 3D roadside perception
benchmark with metrics and evaluation devkit. Furthermore, we tailor the
existing frontal-view monocular 3D object detection approaches and propose to
leverage the geometry constraint to solve the inherent ambiguities caused by
various sensors and viewpoints. Our dataset is available at
https://thudair.baai.ac.cn/rope.
|
Disliked
|
zrz@andrew.cmu.edu
|
Rope3D: The Roadside Perception Dataset for Autonomous Driving and Monocular 3D Object Detection Task : Concurrent perception datasets for autonomous driving are mainly limited to
frontal view with sensors mounted on the vehicle. None of them is designed for
the overlooked roadside perception tasks. On the other hand, the data captured
from roadside cameras have strengths over frontal-view data, which is believed
to facilitate a safer and more intelligent autonomous driving system. To
accelerate the progress of roadside perception, we present the first
high-diversity challenging Roadside Perception 3D dataset - Rope3D from a novel
view. The dataset consists of 50k images and over 1.5M 3D objects in various
scenes, which are captured under different settings including various cameras
with ambiguous mounting positions, camera specifications, viewpoints, and
different environmental conditions. We conduct strict 2D-3D joint annotation
and comprehensive data analysis, as well as set up a new 3D roadside perception
benchmark with metrics and evaluation devkit. Furthermore, we tailor the
existing frontal-view monocular 3D object detection approaches and propose to
leverage the geometry constraint to solve the inherent ambiguities caused by
various sensors and viewpoints. Our dataset is available at
https://thudair.baai.ac.cn/rope.
| 0
|
zrz@andrew.cmu.edu [SEP] Rope3D: The Roadside Perception Dataset for Autonomous Driving and Monocular 3D Object Detection Task : Concurrent perception datasets for autonomous driving are mainly limited to
frontal view with sensors mounted on the vehicle. None of them is designed for
the overlooked roadside perception tasks. On the other hand, the data captured
from roadside cameras have strengths over frontal-view data, which is believed
to facilitate a safer and more intelligent autonomous driving system. To
accelerate the progress of roadside perception, we present the first
high-diversity challenging Roadside Perception 3D dataset - Rope3D from a novel
view. The dataset consists of 50k images and over 1.5M 3D objects in various
scenes, which are captured under different settings including various cameras
with ambiguous mounting positions, camera specifications, viewpoints, and
different environmental conditions. We conduct strict 2D-3D joint annotation
and comprehensive data analysis, as well as set up a new 3D roadside perception
benchmark with metrics and evaluation devkit. Furthermore, we tailor the
existing frontal-view monocular 3D object detection approaches and propose to
leverage the geometry constraint to solve the inherent ambiguities caused by
various sensors and viewpoints. Our dataset is available at
https://thudair.baai.ac.cn/rope.
| 304
|
A metric for characterizing the arm nonuse workspace in poststroke individuals using a robot arm
|
An over-reliance on the less-affected limb for functional tasks at the
expense of the paretic limb and in spite of recovered capacity is an
often-observed phenomenon in survivors of hemispheric stroke. The difference
between capacity for use and actual spontaneous use is referred to as arm
nonuse. Obtaining an ecologically valid evaluation of arm nonuse is challenging
because it requires the observation of spontaneous arm choice for different
tasks, which can easily be influenced by instructions, presumed expectations,
and awareness that one is being tested. To better quantify arm nonuse, we
developed the Bimanual Arm Reaching Test with a Robot (BARTR) for
quantitatively assessing arm nonuse in chronic stroke survivors. The BARTR is
an instrument that utilizes a robot arm as a means of remote and unbiased data
collection of nuanced spatial data for clinical evaluations of arm nonuse. This
approach shows promise for determining the efficacy of interventions designed
to reduce paretic arm nonuse and enhance functional recovery after stroke. We
show that the BARTR satisfies the criteria of an appropriate metric for
neurorehabilitative contexts: it is valid, reliable, and simple to use.
|
Liked
|
jechoi@andrew.cmu.edu
|
A metric for characterizing the arm nonuse workspace in poststroke individuals using a robot arm : An over-reliance on the less-affected limb for functional tasks at the
expense of the paretic limb and in spite of recovered capacity is an
often-observed phenomenon in survivors of hemispheric stroke. The difference
between capacity for use and actual spontaneous use is referred to as arm
nonuse. Obtaining an ecologically valid evaluation of arm nonuse is challenging
because it requires the observation of spontaneous arm choice for different
tasks, which can easily be influenced by instructions, presumed expectations,
and awareness that one is being tested. To better quantify arm nonuse, we
developed the Bimanual Arm Reaching Test with a Robot (BARTR) for
quantitatively assessing arm nonuse in chronic stroke survivors. The BARTR is
an instrument that utilizes a robot arm as a means of remote and unbiased data
collection of nuanced spatial data for clinical evaluations of arm nonuse. This
approach shows promise for determining the efficacy of interventions designed
to reduce paretic arm nonuse and enhance functional recovery after stroke. We
show that the BARTR satisfies the criteria of an appropriate metric for
neurorehabilitative contexts: it is valid, reliable, and simple to use.
| 1
|
jechoi@andrew.cmu.edu [SEP] A metric for characterizing the arm nonuse workspace in poststroke individuals using a robot arm : An over-reliance on the less-affected limb for functional tasks at the
expense of the paretic limb and in spite of recovered capacity is an
often-observed phenomenon in survivors of hemispheric stroke. The difference
between capacity for use and actual spontaneous use is referred to as arm
nonuse. Obtaining an ecologically valid evaluation of arm nonuse is challenging
because it requires the observation of spontaneous arm choice for different
tasks, which can easily be influenced by instructions, presumed expectations,
and awareness that one is being tested. To better quantify arm nonuse, we
developed the Bimanual Arm Reaching Test with a Robot (BARTR) for
quantitatively assessing arm nonuse in chronic stroke survivors. The BARTR is
an instrument that utilizes a robot arm as a means of remote and unbiased data
collection of nuanced spatial data for clinical evaluations of arm nonuse. This
approach shows promise for determining the efficacy of interventions designed
to reduce paretic arm nonuse and enhance functional recovery after stroke. We
show that the BARTR satisfies the criteria of an appropriate metric for
neurorehabilitative contexts: it is valid, reliable, and simple to use.
| 410
|
Probabilistic Deep Learning with Probabilistic Neural Networks and Deep Probabilistic Models
|
Probabilistic deep learning is deep learning that accounts for uncertainty,
both model uncertainty and data uncertainty. It is based on the use of
probabilistic models and deep neural networks. We distinguish two approaches to
probabilistic deep learning: probabilistic neural networks and deep
probabilistic models. The former employs deep neural networks that utilize
probabilistic layers which can represent and process uncertainty; the latter
uses probabilistic models that incorporate deep neural network components which
capture complex non-linear stochastic relationships between the random
variables. We discuss some major examples of each approach including Bayesian
neural networks and mixture density networks (for probabilistic neural
networks), and variational autoencoders, deep Gaussian processes and deep mixed
effects models (for deep probabilistic models). TensorFlow Probability is a
library for probabilistic modeling and inference which can be used for both
approaches of probabilistic deep learning. We include its code examples for
illustration.
|
Liked
|
zrz@andrew.cmu.edu
|
Probabilistic Deep Learning with Probabilistic Neural Networks and Deep Probabilistic Models : Probabilistic deep learning is deep learning that accounts for uncertainty,
both model uncertainty and data uncertainty. It is based on the use of
probabilistic models and deep neural networks. We distinguish two approaches to
probabilistic deep learning: probabilistic neural networks and deep
probabilistic models. The former employs deep neural networks that utilize
probabilistic layers which can represent and process uncertainty; the latter
uses probabilistic models that incorporate deep neural network components which
capture complex non-linear stochastic relationships between the random
variables. We discuss some major examples of each approach including Bayesian
neural networks and mixture density networks (for probabilistic neural
networks), and variational autoencoders, deep Gaussian processes and deep mixed
effects models (for deep probabilistic models). TensorFlow Probability is a
library for probabilistic modeling and inference which can be used for both
approaches of probabilistic deep learning. We include its code examples for
illustration.
| 1
|
zrz@andrew.cmu.edu [SEP] Probabilistic Deep Learning with Probabilistic Neural Networks and Deep Probabilistic Models : Probabilistic deep learning is deep learning that accounts for uncertainty,
both model uncertainty and data uncertainty. It is based on the use of
probabilistic models and deep neural networks. We distinguish two approaches to
probabilistic deep learning: probabilistic neural networks and deep
probabilistic models. The former employs deep neural networks that utilize
probabilistic layers which can represent and process uncertainty; the latter
uses probabilistic models that incorporate deep neural network components which
capture complex non-linear stochastic relationships between the random
variables. We discuss some major examples of each approach including Bayesian
neural networks and mixture density networks (for probabilistic neural
networks), and variational autoencoders, deep Gaussian processes and deep mixed
effects models (for deep probabilistic models). TensorFlow Probability is a
library for probabilistic modeling and inference which can be used for both
approaches of probabilistic deep learning. We include its code examples for
illustration.
| 169
|
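To make the "probabilistic neural network" branch described above concrete, here is a minimal mixture density network sketch. The abstract points to TensorFlow Probability; PyTorch is used here purely for illustration, and the toy data, network sizes, and training schedule are arbitrary assumptions.

```python
# Minimal mixture density network: the net outputs the parameters of a Gaussian
# mixture over y given x, so predictive (data) uncertainty is modeled explicitly.
import torch
import torch.nn as nn

K = 5  # number of mixture components

class MDN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(1, hidden), nn.Tanh())
        self.logits = nn.Linear(hidden, K)      # mixture weights (pre-softmax)
        self.means = nn.Linear(hidden, K)
        self.log_sigmas = nn.Linear(hidden, K)

    def forward(self, x):
        h = self.body(x)
        return self.logits(h), self.means(h), self.log_sigmas(h)

def mdn_nll(logits, means, log_sigmas, y):
    """Negative log-likelihood of y under the predicted Gaussian mixture."""
    log_pi = torch.log_softmax(logits, dim=1)
    comp = torch.distributions.Normal(means, log_sigmas.exp())
    log_prob = comp.log_prob(y) + log_pi          # [B, K]
    return -torch.logsumexp(log_prob, dim=1).mean()

# Toy multi-valued data: y has two branches for the same x.
x = torch.rand(2048, 1) * 4 - 2
y = torch.where(torch.rand_like(x) < 0.5, torch.sin(x), -torch.sin(x))
y = y + 0.05 * torch.randn_like(x)

model = MDN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(3000):
    nll = mdn_nll(*model(x), y)
    opt.zero_grad(); nll.backward(); opt.step()
print(f"final NLL: {nll.item():.3f}")
```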
A Survey on Resilient Machine Learning
|
Machine learning based systems are increasingly being used for sensitive tasks
such as security surveillance, guiding autonomous vehicles, taking investment
decisions, detecting and blocking network intrusions and malware, etc. However,
recent research has shown that machine learning models are vulnerable to attacks
by adversaries at all phases of machine learning (e.g., training data collection,
training, operation). All model classes of machine learning systems can be
misled by providing carefully crafted inputs making them wrongly classify
inputs. Maliciously created input samples can affect the learning process of an
ML system by slowing down the learning process, affecting the
performance of the learned model, or causing the system to make errors only in
the attacker's planned scenario. Because of these developments, understanding
security of machine learning algorithms and systems is emerging as an important
research area among computer security and machine learning researchers and
practitioners. We present a survey of this emerging area in machine learning.
|
Liked
|
zrz@andrew.cmu.edu
|
A Survey on Resilient Machine Learning : Machine learning based systems are increasingly being used for sensitive tasks
such as security surveillance, guiding autonomous vehicles, taking investment
decisions, detecting and blocking network intrusions and malware, etc. However,
recent research has shown that machine learning models are vulnerable to attacks
by adversaries at all phases of machine learning (e.g., training data collection,
training, operation). All model classes of machine learning systems can be
misled by providing carefully crafted inputs making them wrongly classify
inputs. Maliciously created input samples can affect the learning process of an
ML system by slowing down the learning process, affecting the
performance of the learned model, or causing the system to make errors only in
the attacker's planned scenario. Because of these developments, understanding
security of machine learning algorithms and systems is emerging as an important
research area among computer security and machine learning researchers and
practitioners. We present a survey of this emerging area in machine learning.
| 1
|
zrz@andrew.cmu.edu [SEP] A Survey on Resilient Machine Learning : Machine learning based systems are increasingly being used for sensitive tasks
such as security surveillance, guiding autonomous vehicles, taking investment
decisions, detecting and blocking network intrusions and malware, etc. However,
recent research has shown that machine learning models are vulnerable to attacks
by adversaries at all phases of machine learning (e.g., training data collection,
training, operation). All model classes of machine learning systems can be
misled by providing carefully crafted inputs making them wrongly classify
inputs. Maliciously created input samples can affect the learning process of an
ML system by slowing down the learning process, affecting the
performance of the learned model, or causing the system to make errors only in
the attacker's planned scenario. Because of these developments, understanding
security of machine learning algorithms and systems is emerging as an important
research area among computer security and machine learning researchers and
practitioners. We present a survey of this emerging area in machine learning.
| 96
|
Security of Deep Learning Methodologies: Challenges and Opportunities
|
Despite the plethora of studies about security vulnerabilities and defenses
of deep learning models, security aspects of deep learning methodologies, such
as transfer learning, have been rarely studied. In this article, we highlight
the security challenges and research opportunities of these methodologies,
focusing on vulnerabilities and attacks unique to them.
|
Liked
|
zrz@andrew.cmu.edu
|
Security of Deep Learning Methodologies: Challenges and Opportunities : Despite the plethora of studies about security vulnerabilities and defenses
of deep learning models, security aspects of deep learning methodologies, such
as transfer learning, have been rarely studied. In this article, we highlight
the security challenges and research opportunities of these methodologies,
focusing on vulnerabilities and attacks unique to them.
| 1
|
zrz@andrew.cmu.edu [SEP] Security of Deep Learning Methodologies: Challenges and Opportunities : Despite the plethora of studies about security vulnerabilities and defenses
of deep learning models, security aspects of deep learning methodologies, such
as transfer learning, have been rarely studied. In this article, we highlight
the security challenges and research opportunities of these methodologies,
focusing on vulnerabilities and attacks unique to them.
| 265
|
Expanding the Boundaries of Vision Prior Knowledge in Multi-modal Large Language Models
|
Does the prior knowledge of the vision encoder constrain the capability
boundary of Multi-modal Large Language Models (MLLMs)? While most existing
research treats MLLMs as unified systems optimized through end-to-end training,
the impact of vision encoder's prior knowledge is seldom investigated. In this
work, we introduce a novel metric, $Rank_e$, to quantify the effect of prior
knowledge of the vision encoder on MLLM performance. Our analysis reveals a
positive correlation between prior knowledge and MLLM performance. Moreover, we
find that domain-specific fine-tuning using solely end-to-end visual question
answering (VQA) data is insufficient, particularly for entities with low
inherent visual prior knowledge. To address this issue, we propose VisPRE
(Vision Prior Remediation), a two-stage training framework that explicitly
incorporates prior knowledge at the vision encoder level. Experimental results
demonstrate that augmenting vision encoder's prior knowledge substantially
boosts the visual understanding capabilities of MLLMs, offering a novel and
effective strategy for improving performance, especially in scenarios involving
uncommon visual entities.
|
Liked
|
zrz@andrew.cmu.edu
|
Expanding the Boundaries of Vision Prior Knowledge in Multi-modal Large Language Models : Does the prior knowledge of the vision encoder constrain the capability
boundary of Multi-modal Large Language Models (MLLMs)? While most existing
research treats MLLMs as unified systems optimized through end-to-end training,
the impact of vision encoder's prior knowledge is seldom investigated. In this
work, we introduce a novel metric, $Rank_e$, to quantify the effect of prior
knowledge of the vision encoder on MLLM performance. Our analysis reveals a
positive correlation between prior knowledge and MLLM performance. Moreover, we
find that domain-specific fine-tuning using solely end-to-end visual question
answering (VQA) data is insufficient, particularly for entities with low
inherent visual prior knowledge. To address this issue, we propose VisPRE
(Vision Prior Remediation), a two-stage training framework that explicitly
incorporates prior knowledge at the vision encoder level. Experimental results
demonstrate that augmenting vision encoder's prior knowledge substantially
boosts the visual understanding capabilities of MLLMs, offering a novel and
effective strategy for improving performance, especially in scenarios involving
uncommon visual entities.
| 1
|
zrz@andrew.cmu.edu [SEP] Expanding the Boundaries of Vision Prior Knowledge in Multi-modal Large Language Models : Does the prior knowledge of the vision encoder constrain the capability
boundary of Multi-modal Large Language Models (MLLMs)? While most existing
research treats MLLMs as unified systems optimized through end-to-end training,
the impact of vision encoder's prior knowledge is seldom investigated. In this
work, we introduce a novel metric, $Rank_e$, to quantify the effect of prior
knowledge of the vision encoder on MLLM performance. Our analysis reveals a
positive correlation between prior knowledge and MLLM performance. Moreover, we
find that domain-specific fine-tuning using solely end-to-end visual question
answering (VQA) data is insufficient, particularly for entities with low
inherent visual prior knowledge. To address this issue, we propose VisPRE
(Vision Prior Remediation), a two-stage training framework that explicitly
incorporates prior knowledge at the vision encoder level. Experimental results
demonstrate that augmenting vision encoder's prior knowledge substantially
boosts the visual understanding capabilities of MLLMs, offering a novel and
effective strategy for improving performance, especially in scenarios involving
uncommon visual entities.
| 348
|
Neural Models and Algorithms for Sensorimotor Control of an Octopus Arm
|
In this article, a biophysically realistic model of a soft octopus arm with
internal musculature is presented. The modeling is motivated by experimental
observations of sensorimotor control where an arm localizes and reaches a
target. Major contributions of this article are: (i) development of models to
capture the mechanical properties of arm musculature, the electrical properties
of the arm peripheral nervous system (PNS), and the coupling of PNS with
muscular contractions; (ii) modeling the arm sensory system, including
chemosensing and proprioception; and (iii) algorithms for sensorimotor control,
which include a novel feedback neural motor control law for mimicking
target-oriented arm reaching motions, and a novel consensus algorithm for
solving sensing problems such as locating a food source from local chemical
sensory information (exogenous) and arm deformation information (endogenous).
Several analytical results, including rest-state characterization and stability
properties of the proposed sensing and motor control algorithms, are provided.
Numerical simulations demonstrate the efficacy of our approach. Qualitative
comparisons against observed arm rest shapes and target-oriented reaching
motions are also reported.
|
Liked
|
jechoi@andrew.cmu.edu
|
Neural Models and Algorithms for Sensorimotor Control of an Octopus Arm : In this article, a biophysically realistic model of a soft octopus arm with
internal musculature is presented. The modeling is motivated by experimental
observations of sensorimotor control where an arm localizes and reaches a
target. Major contributions of this article are: (i) development of models to
capture the mechanical properties of arm musculature, the electrical properties
of the arm peripheral nervous system (PNS), and the coupling of PNS with
muscular contractions; (ii) modeling the arm sensory system, including
chemosensing and proprioception; and (iii) algorithms for sensorimotor control,
which include a novel feedback neural motor control law for mimicking
target-oriented arm reaching motions, and a novel consensus algorithm for
solving sensing problems such as locating a food source from local chemical
sensory information (exogenous) and arm deformation information (endogenous).
Several analytical results, including rest-state characterization and stability
properties of the proposed sensing and motor control algorithms, are provided.
Numerical simulations demonstrate the efficacy of our approach. Qualitative
comparisons against observed arm rest shapes and target-oriented reaching
motions are also reported.
| 1
|
jechoi@andrew.cmu.edu [SEP] Neural Models and Algorithms for Sensorimotor Control of an Octopus Arm : In this article, a biophysically realistic model of a soft octopus arm with
internal musculature is presented. The modeling is motivated by experimental
observations of sensorimotor control where an arm localizes and reaches a
target. Major contributions of this article are: (i) development of models to
capture the mechanical properties of arm musculature, the electrical properties
of the arm peripheral nervous system (PNS), and the coupling of PNS with
muscular contractions; (ii) modeling the arm sensory system, including
chemosensing and proprioception; and (iii) algorithms for sensorimotor control,
which include a novel feedback neural motor control law for mimicking
target-oriented arm reaching motions, and a novel consensus algorithm for
solving sensing problems such as locating a food source from local chemical
sensory information (exogenous) and arm deformation information (endogenous).
Several analytical results, including rest-state characterization and stability
properties of the proposed sensing and motor control algorithms, are provided.
Numerical simulations demonstrate the efficacy of our approach. Qualitative
comparisons against observed arm rest shapes and target-oriented reaching
motions are also reported.
| 493
|
PyKale: Knowledge-Aware Machine Learning from Multiple Sources in Python
|
Machine learning is a general-purpose technology holding promises for many
interdisciplinary research problems. However, significant barriers exist in
crossing disciplinary boundaries when most machine learning tools are developed
in different areas separately. We present PyKale - a Python library for
knowledge-aware machine learning on graphs, images, texts, and videos to enable
and accelerate interdisciplinary research. We formulate new green machine
learning guidelines based on standard software engineering practices and
propose a novel pipeline-based application programming interface (API). PyKale
focuses on leveraging knowledge from multiple sources for accurate and
interpretable prediction, thus supporting multimodal learning and transfer
learning (particularly domain adaptation) with latest deep learning and
dimensionality reduction models. We build PyKale on PyTorch and leverage the
rich PyTorch ecosystem. Our pipeline-based API design enforces standardization
and minimalism, embracing green machine learning concepts via reducing
repetitions and redundancy, reusing existing resources, and recycling learning
models across areas. We demonstrate its interdisciplinary nature via examples
in bioinformatics, knowledge graph, image/video recognition, and medical
imaging.
|
Disliked
|
zrz@andrew.cmu.edu
|
PyKale: Knowledge-Aware Machine Learning from Multiple Sources in Python : Machine learning is a general-purpose technology holding promises for many
interdisciplinary research problems. However, significant barriers exist in
crossing disciplinary boundaries when most machine learning tools are developed
in different areas separately. We present PyKale - a Python library for
knowledge-aware machine learning on graphs, images, texts, and videos to enable
and accelerate interdisciplinary research. We formulate new green machine
learning guidelines based on standard software engineering practices and
propose a novel pipeline-based application programming interface (API). PyKale
focuses on leveraging knowledge from multiple sources for accurate and
interpretable prediction, thus supporting multimodal learning and transfer
learning (particularly domain adaptation) with latest deep learning and
dimensionality reduction models. We build PyKale on PyTorch and leverage the
rich PyTorch ecosystem. Our pipeline-based API design enforces standardization
and minimalism, embracing green machine learning concepts via reducing
repetitions and redundancy, reusing existing resources, and recycling learning
models across areas. We demonstrate its interdisciplinary nature via examples
in bioinformatics, knowledge graph, image/video recognition, and medical
imaging.
| 0
|
zrz@andrew.cmu.edu [SEP] PyKale: Knowledge-Aware Machine Learning from Multiple Sources in Python : Machine learning is a general-purpose technology holding promises for many
interdisciplinary research problems. However, significant barriers exist in
crossing disciplinary boundaries when most machine learning tools are developed
in different areas separately. We present PyKale - a Python library for
knowledge-aware machine learning on graphs, images, texts, and videos to enable
and accelerate interdisciplinary research. We formulate new green machine
learning guidelines based on standard software engineering practices and
propose a novel pipeline-based application programming interface (API). PyKale
focuses on leveraging knowledge from multiple sources for accurate and
interpretable prediction, thus supporting multimodal learning and transfer
learning (particularly domain adaptation) with latest deep learning and
dimensionality reduction models. We build PyKale on PyTorch and leverage the
rich PyTorch ecosystem. Our pipeline-based API design enforces standardization
and minimalism, embracing green machine learning concepts via reducing
repetitions and redundancy, reusing existing resources, and recycling learning
models across areas. We demonstrate its interdisciplinary nature via examples
in bioinformatics, knowledge graph, image/video recognition, and medical
imaging.
| 136
|
Improving Cancer Imaging Diagnosis with Bayesian Networks and Deep Learning: A Bayesian Deep Learning Approach
|
With recent advancements in the development of artificial intelligence
applications using theories and algorithms in machine learning, many accurate
models can be created to train and predict on given datasets. With the
realization of the importance of imaging interpretation in cancer diagnosis,
this article aims to investigate the theory behind Deep Learning and Bayesian
Network prediction models. Based on the advantages and drawbacks of each model,
different approaches will be used to construct a Bayesian Deep Learning Model,
combining the strengths while minimizing the weaknesses. Finally, the
applications and accuracy of the resulting Bayesian Deep Learning approach in
the health industry in classifying images will be analyzed.
|
Liked
|
zrz@andrew.cmu.edu
|
Improving Cancer Imaging Diagnosis with Bayesian Networks and Deep Learning: A Bayesian Deep Learning Approach : With recent advancements in the development of artificial intelligence
applications using theories and algorithms in machine learning, many accurate
models can be created to train and predict on given datasets. With the
realization of the importance of imaging interpretation in cancer diagnosis,
this article aims to investigate the theory behind Deep Learning and Bayesian
Network prediction models. Based on the advantages and drawbacks of each model,
different approaches will be used to construct a Bayesian Deep Learning Model,
combining the strengths while minimizing the weaknesses. Finally, the
applications and accuracy of the resulting Bayesian Deep Learning approach in
the health industry in classifying images will be analyzed.
| 1
|
zrz@andrew.cmu.edu [SEP] Improving Cancer Imaging Diagnosis with Bayesian Networks and Deep Learning: A Bayesian Deep Learning Approach : With recent advancements in the development of artificial intelligence
applications using theories and algorithms in machine learning, many accurate
models can be created to train and predict on given datasets. With the
realization of the importance of imaging interpretation in cancer diagnosis,
this article aims to investigate the theory behind Deep Learning and Bayesian
Network prediction models. Based on the advantages and drawbacks of each model,
different approaches will be used to construct a Bayesian Deep Learning Model,
combining the strengths while minimizing the weaknesses. Finally, the
applications and accuracy of the resulting Bayesian Deep Learning approach in
the health industry in classifying images will be analyzed.
| 217
|
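The abstract above does not specify how the Bayesian and deep-learning parts are combined. One common, lightweight way to attach Bayesian-style model uncertainty to a deep image classifier is Monte Carlo dropout, sketched below with invented layer sizes and dummy inputs; this is an illustrative stand-in, not the article's model.

```python
# Illustrative Monte Carlo dropout: keep dropout active at test time and treat
# the spread of repeated stochastic forward passes as a rough uncertainty
# estimate for an image classifier.
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    def __init__(self, n_classes=2, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Dropout(p_drop), nn.Linear(16 * 16, 64), nn.ReLU(),
            nn.Dropout(p_drop), nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    model.train()  # keep dropout stochastic at inference time
    probs = torch.stack(
        [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
    )
    return probs.mean(0), probs.std(0)   # predictive mean and spread

model = SmallClassifier()
dummy_scan = torch.randn(8, 1, 32, 32)   # stand-in for a batch of image patches
mean_prob, spread = mc_dropout_predict(model, dummy_scan)
print(mean_prob.shape, spread.shape)
```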
Pre-training with Non-expert Human Demonstration for Deep Reinforcement Learning
|
Deep reinforcement learning (deep RL) has achieved superior performance in
complex sequential tasks by using deep neural networks as function
approximators to learn directly from raw input images. However, learning
directly from raw images is data inefficient. The agent must learn feature
representation of complex states in addition to learning a policy. As a result,
deep RL typically suffers from slow learning speeds and often requires a
prohibitively large amount of training time and data to reach reasonable
performance, making it inapplicable to real-world settings where data is
expensive. In this work, we improve data efficiency in deep RL by addressing
one of the two learning goals, feature learning. We leverage supervised
learning to pre-train on a small set of non-expert human demonstrations and
empirically evaluate our approach using the asynchronous advantage actor-critic
algorithm (A3C) in the Atari domain. Our results show significant improvements
in learning speed, even when the provided demonstration is noisy and of low
quality.
|
Liked
|
zrz@andrew.cmu.edu
|
Pre-training with Non-expert Human Demonstration for Deep Reinforcement Learning : Deep reinforcement learning (deep RL) has achieved superior performance in
complex sequential tasks by using deep neural networks as function
approximators to learn directly from raw input images. However, learning
directly from raw images is data inefficient. The agent must learn feature
representation of complex states in addition to learning a policy. As a result,
deep RL typically suffers from slow learning speeds and often requires a
prohibitively large amount of training time and data to reach reasonable
performance, making it inapplicable to real-world settings where data is
expensive. In this work, we improve data efficiency in deep RL by addressing
one of the two learning goals, feature learning. We leverage supervised
learning to pre-train on a small set of non-expert human demonstrations and
empirically evaluate our approach using the asynchronous advantage actor-critic
algorithm (A3C) in the Atari domain. Our results show significant improvements
in learning speed, even when the provided demonstration is noisy and of low
quality.
| 1
|
zrz@andrew.cmu.edu [SEP] Pre-training with Non-expert Human Demonstration for Deep Reinforcement Learning : Deep reinforcement learning (deep RL) has achieved superior performance in
complex sequential tasks by using deep neural networks as function
approximators to learn directly from raw input images. However, learning
directly from raw images is data inefficient. The agent must learn feature
representation of complex states in addition to learning a policy. As a result,
deep RL typically suffers from slow learning speeds and often requires a
prohibitively large amount of training time and data to reach reasonable
performance, making it inapplicable to real-world settings where data is
expensive. In this work, we improve data efficiency in deep RL by addressing
one of the two learning goals, feature learning. We leverage supervised
learning to pre-train on a small set of non-expert human demonstrations and
empirically evaluate our approach using the asynchronous advantage actor-critic
algorithm (A3C) in the Atari domain. Our results show significant improvements
in learning speed, even when the provided demonstration is noisy and of low
quality.
| 266
|
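The pre-training idea above amounts to behaviour cloning a feature encoder on a small demonstration set before reinforcement learning begins. The PyTorch sketch below illustrates that step with dummy Atari-sized frames; the encoder shape, the data, and the hand-off to an A3C learner are assumptions for illustration, not the paper's exact setup.

```python
# Sketch: pre-train a convolutional encoder by predicting demonstrated actions
# from frames, then reuse the encoder to initialize an RL policy. Dummy random
# data stands in for real (noisy) human demonstrations.
import torch
import torch.nn as nn

N_ACTIONS = 6

encoder = nn.Sequential(                       # shared feature extractor
    nn.Conv2d(4, 16, 8, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
    nn.Flatten(),
)
with torch.no_grad():
    feat_dim = encoder(torch.zeros(1, 4, 84, 84)).shape[1]
clf_head = nn.Linear(feat_dim, N_ACTIONS)      # action prediction head

frames = torch.randn(512, 4, 84, 84)           # stand-in demonstration frames
actions = torch.randint(0, N_ACTIONS, (512,))  # stand-in demonstrated actions

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(clf_head.parameters()), lr=1e-4
)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):
    logits = clf_head(encoder(frames))
    loss = loss_fn(logits, actions)
    opt.zero_grad(); loss.backward(); opt.step()

# The pre-trained encoder weights would then initialize the policy/value
# networks of the RL learner (e.g., A3C) instead of random weights.
torch.save(encoder.state_dict(), "pretrained_encoder.pt")
```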
Generating and Customizing Robotic Arm Trajectories using Neural Networks
|
We introduce a neural network approach for generating and customizing the
trajectory of a robotic arm that guarantees precision and repeatability. To
highlight the potential of this novel method, we describe the design and
implementation of the technique and show its application in an experimental
setting of cognitive robotics. In this scenario, the NICO robot was
characterized by the ability to point to specific points in space with precise
linear movements, increasing the predictability of the robotic action during
its interaction with humans. To achieve this goal, the neural network computes
the forward kinematics of the robot arm. By integrating it with a generator of
joint angles, another neural network was developed and trained on an artificial
dataset created from suitable start and end poses of the robotic arm. Through
the computation of angular velocities, the robot was characterized by its
ability to perform the movement, and the quality of its action was evaluated in
terms of shape and accuracy. Thanks to its broad applicability, our approach
successfully generates precise trajectories that could be customized in their
shape and adapted to different settings.
|
Liked
|
jechoi@andrew.cmu.edu
|
Generating and Customizing Robotic Arm Trajectories using Neural Networks : We introduce a neural network approach for generating and customizing the
trajectory of a robotic arm that guarantees precision and repeatability. To
highlight the potential of this novel method, we describe the design and
implementation of the technique and show its application in an experimental
setting of cognitive robotics. In this scenario, the NICO robot was
characterized by the ability to point to specific points in space with precise
linear movements, increasing the predictability of the robotic action during
its interaction with humans. To achieve this goal, the neural network computes
the forward kinematics of the robot arm. By integrating it with a generator of
joint angles, another neural network was developed and trained on an artificial
dataset created from suitable start and end poses of the robotic arm. Through
the computation of angular velocities, the robot was characterized by its
ability to perform the movement, and the quality of its action was evaluated in
terms of shape and accuracy. Thanks to its broad applicability, our approach
successfully generates precise trajectories that could be customized in their
shape and adapted to different settings.
| 1
|
jechoi@andrew.cmu.edu [SEP] Generating and Customizing Robotic Arm Trajectories using Neural Networks : We introduce a neural network approach for generating and customizing the
trajectory of a robotic arm that guarantees precision and repeatability. To
highlight the potential of this novel method, we describe the design and
implementation of the technique and show its application in an experimental
setting of cognitive robotics. In this scenario, the NICO robot was
characterized by the ability to point to specific points in space with precise
linear movements, increasing the predictability of the robotic action during
its interaction with humans. To achieve this goal, the neural network computes
the forward kinematics of the robot arm. By integrating it with a generator of
joint angles, another neural network was developed and trained on an artificial
dataset created from suitable start and end poses of the robotic arm. Through
the computation of angular velocities, the robot was characterized by its
ability to perform the movement, and the quality of its action was evaluated in
terms of shape and accuracy. Thanks to its broad applicability, our approach
successfully generates precise trajectories that could be customized in their
shape and adapted to different settings.
| 454
|
Pedipulate: Enabling Manipulation Skills using a Quadruped Robot's Leg
|
Legged robots have the potential to become vital in maintenance, home
support, and exploration scenarios. In order to interact with and manipulate
their environments, most legged robots are equipped with a dedicated robot arm,
which means additional mass and mechanical complexity compared to standard
legged robots. In this work, we explore pedipulation - using the legs of a
legged robot for manipulation. By training a reinforcement learning policy that
tracks position targets for one foot, we enable a dedicated pedipulation
controller that is robust to disturbances, has a large workspace through
whole-body behaviors, and can reach far-away targets with gait emergence,
enabling loco-pedipulation. By deploying our controller on a quadrupedal robot
using teleoperation, we demonstrate various real-world tasks such as door
opening, sample collection, and pushing obstacles. We demonstrate load carrying
of more than 2.0 kg at the foot. Additionally, the controller is robust to
interaction forces at the foot, disturbances at the base, and slippery contact
surfaces. Videos of the experiments are available at
https://sites.google.com/leggedrobotics.com/pedipulate.
|
Liked
|
jechoi@andrew.cmu.edu
|
Pedipulate: Enabling Manipulation Skills using a Quadruped Robot's Leg : Legged robots have the potential to become vital in maintenance, home
support, and exploration scenarios. In order to interact with and manipulate
their environments, most legged robots are equipped with a dedicated robot arm,
which means additional mass and mechanical complexity compared to standard
legged robots. In this work, we explore pedipulation - using the legs of a
legged robot for manipulation. By training a reinforcement learning policy that
tracks position targets for one foot, we enable a dedicated pedipulation
controller that is robust to disturbances, has a large workspace through
whole-body behaviors, and can reach far-away targets with gait emergence,
enabling loco-pedipulation. By deploying our controller on a quadrupedal robot
using teleoperation, we demonstrate various real-world tasks such as door
opening, sample collection, and pushing obstacles. We demonstrate load carrying
of more than 2.0 kg at the foot. Additionally, the controller is robust to
interaction forces at the foot, disturbances at the base, and slippery contact
surfaces. Videos of the experiments are available at
https://sites.google.com/leggedrobotics.com/pedipulate.
| 1
|
jechoi@andrew.cmu.edu [SEP] Pedipulate: Enabling Manipulation Skills using a Quadruped Robot's Leg : Legged robots have the potential to become vital in maintenance, home
support, and exploration scenarios. In order to interact with and manipulate
their environments, most legged robots are equipped with a dedicated robot arm,
which means additional mass and mechanical complexity compared to standard
legged robots. In this work, we explore pedipulation - using the legs of a
legged robot for manipulation. By training a reinforcement learning policy that
tracks position targets for one foot, we enable a dedicated pedipulation
controller that is robust to disturbances, has a large workspace through
whole-body behaviors, and can reach far-away targets with gait emergence,
enabling loco-pedipulation. By deploying our controller on a quadrupedal robot
using teleoperation, we demonstrate various real-world tasks such as door
opening, sample collection, and pushing obstacles. We demonstrate load carrying
of more than 2.0 kg at the foot. Additionally, the controller is robust to
interaction forces at the foot, disturbances at the base, and slippery contact
surfaces. Videos of the experiments are available at
https://sites.google.com/leggedrobotics.com/pedipulate.
| 533
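The pedipulation policy above is trained to track position targets for one foot. The plain-NumPy function below sketches what such a tracking reward could look like, with invented weights and only two terms; the paper's actual reward design is not specified here.

```python
# Illustrative foot-position-tracking reward for an RL policy: exponential
# tracking term plus a small penalty on joint velocities. Weights are made up.
import numpy as np

def pedipulation_reward(foot_pos, target_pos, joint_vel,
                        tracking_scale=0.25, vel_penalty=0.002):
    """Reward tracking of a commanded foot position target.

    foot_pos, target_pos: 3D positions in the robot's base frame (meters).
    joint_vel: vector of joint velocities (rad/s).
    """
    tracking_error = np.linalg.norm(target_pos - foot_pos)
    tracking_reward = np.exp(-tracking_error / tracking_scale)
    effort_penalty = vel_penalty * float(np.sum(np.square(joint_vel)))
    return tracking_reward - effort_penalty

# Example call with made-up numbers.
r = pedipulation_reward(
    foot_pos=np.array([0.45, 0.10, 0.30]),
    target_pos=np.array([0.50, 0.05, 0.35]),
    joint_vel=np.zeros(12),
)
print(round(r, 3))
```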
|
Augmented Reality Remote Operation of Dual Arm Manipulators in Hot Boxes
|
In nuclear isotope and chemistry laboratories, hot cells and gloveboxes
provide scientists with a controlled and safe environment to perform
experiments. Working on experiments in these isolated containment cells
requires scientists to be physically present. For hot cell work today,
scientists manipulate equipment and radioactive material inside through a
bilateral mechanical control mechanism. Motions produced outside the cell with
the master control levers are mechanically transferred to the internal grippers
inside the shielded containment cell. There is a growing need to have the
capability to conduct experiments within these cells remotely. A simple method
to enable remote manipulations within hot cell and glovebox cells is to mount
two robotic arms inside a box to mimic the motions of human hands. An AR
application was built in this work to allow a user wearing a Microsoft HoloLens
2 headset to teleoperate dual arm manipulators by grasping robotic end-effector
digital replicas in AR from a remote location. In addition to the real-time
replica of the physical robotic arms in AR, the application enables users to
view a live video stream attached to the robotic arms and parse a 3D point
cloud of 3D objects in their remote AR environment for better situational
awareness. This work also provides users with virtual fixtures to assist in
manipulation and other teleoperation tasks.
|
Liked
|
jechoi@andrew.cmu.edu
|
Augmented Reality Remote Operation of Dual Arm Manipulators in Hot Boxes : In nuclear isotope and chemistry laboratories, hot cells and gloveboxes
provide scientists with a controlled and safe environment to perform
experiments. Working on experiments in these isolated containment cells
requires scientists to be physically present. For hot cell work today,
scientists manipulate equipment and radioactive material inside through a
bilateral mechanical control mechanism. Motions produced outside the cell with
the master control levers are mechanically transferred to the internal grippers
inside the shielded containment cell. There is a growing need to have the
capability to conduct experiments within these cells remotely. A simple method
to enable remote manipulations within hot cell and glovebox cells is to mount
two robotic arms inside a box to mimic the motions of human hands. An AR
application was built in this work to allow a user wearing a Microsoft HoloLens
2 headset to teleoperate dual arm manipulators by grasping robotic end-effector
digital replicas in AR from a remote location. In addition to the real-time
replica of the physical robotic arms in AR, the application enables users to
view a live video stream attached to the robotic arms and parse a 3D point
cloud of 3D objects in their remote AR environment for better situational
awareness. This work also provides users with virtual fixtures to assist in
manipulation and other teleoperation tasks.
| 1
|
jechoi@andrew.cmu.edu [SEP] Augmented Reality Remote Operation of Dual Arm Manipulators in Hot Boxes : In nuclear isotope and chemistry laboratories, hot cells and gloveboxes
provide scientists with a controlled and safe environment to perform
experiments. Working on experiments in these isolated containment cells
requires scientists to be physically present. For hot cell work today,
scientists manipulate equipment and radioactive material inside through a
bilateral mechanical control mechanism. Motions produced outside the cell with
the master control levers are mechanically transferred to the internal grippers
inside the shielded containment cell. There is a growing need to have the
capability to conduct experiments within these cells remotely. A simple method
to enable remote manipulations within hot cell and glovebox cells is to mount
two robotic arms inside a box to mimic the motions of human hands. An AR
application was built in this work to allow a user wearing a Microsoft HoloLens
2 headset to teleoperate dual arm manipulators by grasping robotic end-effector
digital replicas in AR from a remote location. In addition to the real-time
replica of the physical robotic arms in AR, the application enables users to
view a live video stream attached to the robotic arms and parse a 3D point
cloud of 3D objects in their remote AR environment for better situational
awareness. This work also provides users with virtual fixtures to assist in
manipulation and other teleoperation tasks.
| 521
|
Communicating Robot Arm Motion Intent Through Mixed Reality Head-mounted Displays
|
Efficient motion intent communication is necessary for safe and collaborative
work environments with collocated humans and robots. Humans efficiently
communicate their motion intent to other humans through gestures, gaze, and
social cues. However, robots often have difficulty efficiently communicating
their motion intent to humans via these methods. Many existing methods for
robot motion intent communication rely on 2D displays, which require the human
to continually pause their work and check a visualization. We propose a mixed
reality head-mounted display visualization of the proposed robot motion over
the wearer's real-world view of the robot and its environment. To evaluate the
effectiveness of this system against a 2D display visualization and against no
visualization, we asked 32 participants to label different robot arm motions
as either colliding or non-colliding with blocks on a table. We found a 16%
increase in accuracy with a 62% decrease in the time it took to complete the
task compared to the next best system. This demonstrates that a mixed-reality
HMD allows a human to more quickly and accurately tell where the robot is going
to move than the compared baselines.
|
Liked
|
jechoi@andrew.cmu.edu
|
Communicating Robot Arm Motion Intent Through Mixed Reality Head-mounted Displays : Efficient motion intent communication is necessary for safe and collaborative
work environments with collocated humans and robots. Humans efficiently
communicate their motion intent to other humans through gestures, gaze, and
social cues. However, robots often have difficulty efficiently communicating
their motion intent to humans via these methods. Many existing methods for
robot motion intent communication rely on 2D displays, which require the human
to continually pause their work and check a visualization. We propose a mixed
reality head-mounted display visualization of the proposed robot motion over
the wearer's real-world view of the robot and its environment. To evaluate the
effectiveness of this system against a 2D display visualization and against no
visualization, we asked 32 participants to label different robot arm motions
as either colliding or non-colliding with blocks on a table. We found a 16%
increase in accuracy with a 62% decrease in the time it took to complete the
task compared to the next best system. This demonstrates that a mixed-reality
HMD allows a human to more quickly and accurately tell where the robot is going
to move than the compared baselines.
| 1
|
jechoi@andrew.cmu.edu [SEP] Communicating Robot Arm Motion Intent Through Mixed Reality Head-mounted Displays : Efficient motion intent communication is necessary for safe and collaborative
work environments with collocated humans and robots. Humans efficiently
communicate their motion intent to other humans through gestures, gaze, and
social cues. However, robots often have difficulty efficiently communicating
their motion intent to humans via these methods. Many existing methods for
robot motion intent communication rely on 2D displays, which require the human
to continually pause their work and check a visualization. We propose a mixed
reality head-mounted display visualization of the proposed robot motion over
the wearer's real-world view of the robot and its environment. To evaluate the
effectiveness of this system against a 2D display visualization and against no
visualization, we asked 32 participants to label different robot arm motions
as either colliding or non-colliding with blocks on a table. We found a 16%
increase in accuracy with a 62% decrease in the time it took to complete the
task compared to the next best system. This demonstrates that a mixed-reality
HMD allows a human to more quickly and accurately tell where the robot is going
to move than the compared baselines.
| 561
|
The ART of Transfer Learning: An Adaptive and Robust Pipeline
|
Transfer learning is an essential tool for improving the performance of
primary tasks by leveraging information from auxiliary data resources. In this
work, we propose Adaptive Robust Transfer Learning (ART), a flexible pipeline
for performing transfer learning with generic machine learning algorithms. We
establish the non-asymptotic learning theory of ART, providing a provable
theoretical guarantee for achieving adaptive transfer while preventing negative
transfer. Additionally, we introduce an ART-integrated-aggregating machine that
produces a single final model when multiple candidate algorithms are
considered. We demonstrate the promising performance of ART through extensive
empirical studies on regression, classification, and sparse learning. We
further present a real-data analysis for a mortality study.
|
Liked
|
zrz@andrew.cmu.edu
|
The ART of Transfer Learning: An Adaptive and Robust Pipeline : Transfer learning is an essential tool for improving the performance of
primary tasks by leveraging information from auxiliary data resources. In this
work, we propose Adaptive Robust Transfer Learning (ART), a flexible pipeline
for performing transfer learning with generic machine learning algorithms. We
establish the non-asymptotic learning theory of ART, providing a provable
theoretical guarantee for achieving adaptive transfer while preventing negative
transfer. Additionally, we introduce an ART-integrated-aggregating machine that
produces a single final model when multiple candidate algorithms are
considered. We demonstrate the promising performance of ART through extensive
empirical studies on regression, classification, and sparse learning. We
further present a real-data analysis for a mortality study.
| 1
|
zrz@andrew.cmu.edu [SEP] The ART of Transfer Learning: An Adaptive and Robust Pipeline : Transfer learning is an essential tool for improving the performance of
primary tasks by leveraging information from auxiliary data resources. In this
work, we propose Adaptive Robust Transfer Learning (ART), a flexible pipeline
for performing transfer learning with generic machine learning algorithms. We
establish the non-asymptotic learning theory of ART, providing a provable
theoretical guarantee for achieving adaptive transfer while preventing negative
transfer. Additionally, we introduce an ART-integrated-aggregating machine that
produces a single final model when multiple candidate algorithms are
considered. We demonstrate the promising performance of ART through extensive
empirical studies on regression, classification, and sparse learning. We
further present a real-data analysis for a mortality study.
| 151
|
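The ART abstract above describes aggregating multiple candidate algorithms into a single final model while guarding against negative transfer. The paper's own procedure is not reproduced here; the sketch below is only a toy stand-in for that idea: each candidate is fit on target-only and on pooled source-plus-target data, and fits that perform poorly on held-out target data are down-weighted. The function name, candidate list, and exponential-weighting temperature are assumptions for illustration.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def aggregate_transfer(X_src, y_src, X_tgt, y_tgt, candidates, temperature=1.0):
    """Toy adaptive aggregation over candidate learners.

    Each candidate is fit twice (target-only and pooled source+target); all fits
    are scored on a held-out target split and combined with exponential weights,
    so source data that hurts target performance is automatically down-weighted.
    """
    X_tr, X_val, y_tr, y_val = train_test_split(X_tgt, y_tgt, test_size=0.3, random_state=0)
    fits, losses = [], []
    for make_model in candidates:
        for X_fit, y_fit in [(X_tr, y_tr),
                             (np.vstack([X_src, X_tr]), np.concatenate([y_src, y_tr]))]:
            model = make_model().fit(X_fit, y_fit)
            fits.append(model)
            losses.append(mean_squared_error(y_val, model.predict(X_val)))
    losses = np.array(losses)
    weights = np.exp(-losses / temperature)
    weights /= weights.sum()
    return lambda X: sum(w * m.predict(X) for w, m in zip(weights, fits))

# Candidates are factories so each fit starts from a fresh estimator.
candidates = [lambda: Ridge(alpha=1.0),
              lambda: RandomForestRegressor(n_estimators=50, random_state=0)]

Weighting by held-out target loss is only a crude proxy for ART's adaptivity guarantee: a harmful source shows up as a large validation loss and receives a near-zero weight.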
APEX: Ambidextrous Dual-Arm Robotic Manipulation Using Collision-Free Generative Diffusion Models
|
Dexterous manipulation, particularly adept coordination and grasping,
constitutes a fundamental and indispensable capability for robots, facilitating
the emulation of human-like behaviors. Integrating this capability into robots
empowers them to supplement and even supplant humans in undertaking
increasingly intricate tasks in both daily life and industrial settings.
Unfortunately, contemporary methodologies encounter serious challenges in
devising manipulation trajectories owing to the intricacies of tasks, the
expansive robotic manipulation space, and dynamic obstacles. We propose a novel
approach, APEX, to address all these difficulties by introducing a
collision-free latent diffusion model for both robotic motion planning and
manipulation. Firstly, we simplify the complexity of real-life ambidextrous
dual-arm robotic manipulation tasks by abstracting them as aligning two
vectors. Secondly, we devise latent diffusion models to produce a variety of
robotic manipulation trajectories. Furthermore, we integrate obstacle
information utilizing a classifier-guidance technique, thereby guaranteeing
both the feasibility and safety of the generated manipulation trajectories.
Lastly, we validate our proposed algorithm through extensive experiments
conducted on the hardware platform of ambidextrous dual-arm robots. Our
algorithm consistently generates successful and seamless trajectories across
diverse tasks, surpassing conventional robotic motion planning algorithms.
These results carry significant implications for the future design of diffusion
robots, enhancing their capability to tackle more intricate robotic
manipulation tasks with increased efficiency and safety. Complete video
demonstrations of our experiments can be found at
https://sites.google.com/view/apex-dual-arm/home.
|
Liked
|
jechoi@andrew.cmu.edu
|
APEX: Ambidextrous Dual-Arm Robotic Manipulation Using Collision-Free Generative Diffusion Models : Dexterous manipulation, particularly adept coordination and grasping,
constitutes a fundamental and indispensable capability for robots, facilitating
the emulation of human-like behaviors. Integrating this capability into robots
empowers them to supplement and even supplant humans in undertaking
increasingly intricate tasks in both daily life and industrial settings.
Unfortunately, contemporary methodologies encounter serious challenges in
devising manipulation trajectories owing to the intricacies of tasks, the
expansive robotic manipulation space, and dynamic obstacles. We propose a novel
approach, APEX, to address all these difficulties by introducing a
collision-free latent diffusion model for both robotic motion planning and
manipulation. Firstly, we simplify the complexity of real-life ambidextrous
dual-arm robotic manipulation tasks by abstracting them as aligning two
vectors. Secondly, we devise latent diffusion models to produce a variety of
robotic manipulation trajectories. Furthermore, we integrate obstacle
information utilizing a classifier-guidance technique, thereby guaranteeing
both the feasibility and safety of the generated manipulation trajectories.
Lastly, we validate our proposed algorithm through extensive experiments
conducted on the hardware platform of ambidextrous dual-arm robots. Our
algorithm consistently generates successful and seamless trajectories across
diverse tasks, surpassing conventional robotic motion planning algorithms.
These results carry significant implications for the future design of diffusion
robots, enhancing their capability to tackle more intricate robotic
manipulation tasks with increased efficiency and safety. Complete video
demonstrations of our experiments can be found at
https://sites.google.com/view/apex-dual-arm/home.
| 1
|
jechoi@andrew.cmu.edu [SEP] APEX: Ambidextrous Dual-Arm Robotic Manipulation Using Collision-Free Generative Diffusion Models : Dexterous manipulation, particularly adept coordination and grasping,
constitutes a fundamental and indispensable capability for robots, facilitating
the emulation of human-like behaviors. Integrating this capability into robots
empowers them to supplement and even supplant humans in undertaking
increasingly intricate tasks in both daily life and industrial settings.
Unfortunately, contemporary methodologies encounter serious challenges in
devising manipulation trajectories owing to the intricacies of tasks, the
expansive robotic manipulation space, and dynamic obstacles. We propose a novel
approach, APEX, to address all these difficulties by introducing a
collision-free latent diffusion model for both robotic motion planning and
manipulation. Firstly, we simplify the complexity of real-life ambidextrous
dual-arm robotic manipulation tasks by abstracting them as aligning two
vectors. Secondly, we devise latent diffusion models to produce a variety of
robotic manipulation trajectories. Furthermore, we integrate obstacle
information utilizing a classifier-guidance technique, thereby guaranteeing
both the feasibility and safety of the generated manipulation trajectories.
Lastly, we validate our proposed algorithm through extensive experiments
conducted on the hardware platform of ambidextrous dual-arm robots. Our
algorithm consistently generates successful and seamless trajectories across
diverse tasks, surpassing conventional robotic motion planning algorithms.
These results carry significant implications for the future design of diffusion
robots, enhancing their capability to tackle more intricate robotic
manipulation tasks with increased efficiency and safety. Complete video
demonstrations of our experiments can be found at
https://sites.google.com/view/apex-dual-arm/home.
| 494
|
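The APEX abstract above attributes obstacle avoidance to classifier guidance on a latent diffusion model. The sketch below shows the generic classifier-guidance mechanism in the style of guided DDPM sampling, not APEX's actual code: the denoiser, collision classifier, noise-schedule terms, and guidance scale are placeholders, and real trajectories would live in the model's latent space rather than joint space.

import torch

def guided_denoise_step(x_t, t, denoiser, collision_clf,
                        alpha_t, alpha_bar_t, sigma_t, scale=1.0):
    """One reverse-diffusion step nudged toward collision-free trajectories.

    x_t           : (B, T, D) noisy batch of trajectories at diffusion step t.
    denoiser      : callable predicting the noise eps(x_t, t).
    collision_clf : callable returning log p(collision-free | x_t, t) per sample.
    alpha_t, alpha_bar_t, sigma_t : scalar noise-schedule terms for step t.
    scale         : guidance strength; larger values push harder away from obstacles.
    """
    eps = denoiser(x_t, t)
    # Gradient of the classifier's log-probability w.r.t. the noisy trajectory.
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        log_p_free = collision_clf(x_in, t).sum()
        grad = torch.autograd.grad(log_p_free, x_in)[0]
    # Standard DDPM posterior mean, then shifted along the classifier gradient.
    mean = (x_t - (1 - alpha_t) / (1 - alpha_bar_t) ** 0.5 * eps) / alpha_t ** 0.5
    mean = mean + scale * (sigma_t ** 2) * grad
    return mean + sigma_t * torch.randn_like(x_t)

The key step is shifting the posterior mean along the gradient of the classifier's log-probability, so samples drift toward trajectories the classifier judges collision-free.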