arXiv:2406.10300v1 [cs.SE] 13 Jun 2024
Large Language Models as Software Components:
A Taxonomy for LLM-Integrated Applications
Irene Weber
Kempten University of Applied Sciences, Germany
irene.weber@hs-kempten.de
Abstract
Large Language Models (LLMs) have become widely adopted recently. Research explores their use both
as autonomous agents and as tools for software engineering. LLM-integrated applications, on the other
hand, are software systems that leverage an LLM to perform tasks that would otherwise be impossible or
require significant coding effort. While LLM-integrated application engineering is emerging as a new discipline,
its terminology, concepts and methods need to be established. This study provides a taxonomy for LLM-
integrated applications, offering a framework for analyzing and describing these systems. It also demonstrates
various ways to utilize LLMs in applications, as well as options for implementing such integrations.
Following established methods, we analyze a sample of recent LLM-integrated applications to identify rel-
evant dimensions. We evaluate the taxonomy by applying it to additional cases. This review shows that
applications integrate LLMs in numerous ways for various purposes. Frequently, they comprise multiple
LLM integrations, which we term “LLM components”. To gain a clear understanding of an application’s
architecture, we examine each LLM component separately. We identify thirteen dimensions along which to
characterize an LLM component, including the LLM skills leveraged, the format of the output, and more.
LLM-integrated applications are described as combinations of their LLM components. We suggest a concise
representation using feature vectors for visualization.
The taxonomy is effective for describing LLM-integrated applications. It can contribute to theory building in
the nascent field of LLM-integrated application engineering and aid in developing such systems. Researchers
and practitioners explore numerous creative ways to leverage LLMs in applications. Though challenges
persist, integrating LLMs may revolutionize the way software systems are built.
Keywords: large language model, LLM-integrated, taxonomy, copilot, architecture, AI agent, LLM component
1. Introduction
Large Language Models (LLMs) have significantly impacted various sectors of economy and society [47]. Due to their proficiency in text understanding, creative work, communication, knowledge work, and code writing, they have been adopted in numerous fields, such as medicine, law, marketing, education, human resources, etc.
Public discussions often focus on the ethical aspects
and societal consequences of these systems [36, 39].
Meanwhile, research investigates Artificial General
Intelligences and autonomous AI agents that can use
services, data sources, and other tools, and collabo-
rate to solve complex tasks [11, 62, 57, 21]. In addi-
tion, LLMs offer many opportunities to enhance soft-
ware systems. They enable natural language interac-
tion [59], automate complex tasks [19], and provide
supportive collaboration, as seen with recent LLM-based assistant products often branded as “copilots”.¹
This paper addresses the potential of LLMs for soft-
ware development by integrating their capabilities as
components into software systems. This contrasts
with current software engineering research, which
views LLMs as tools for software development rather
than as software components [14, 22], and with the
considerable body of research examining LLMs as au-
tonomous agents within multiagent systems [21].
Software systems that invoke an LLM and process
its output are referred to as “LLM-integrated appli-
cations”, “LLM-integrated systems”, “LLM-based ap-
plications”, etc. [32, 13, 57]. LLMs are versatile, mul-
tipurpose tools capable of providing functionalities
that would otherwise be unfeasible or require sub-
stantial development efforts [15, 24]. By significantly
expediting system development, they have the poten-
tial to revolutionize not only the way users interact
with technology, but also the fundamental processes
of software development.
LLM-integrated application engineering is emerging as a research field. E.g., [10] proposes LLM Systems Engineering (LLM-SE) as a novel discipline, and [44, 8, 7] discuss experiences and challenges that developers of such systems encounter in practice.
This study develops a taxonomy that provides a
structured framework for categorizing and analyzing
LLM-integrated applications across various domains.
To develop and evaluate the taxonomy, we collected
a sample of LLM-integrated applications, concentrat-
ing on technical and industrial domains. These ap-
plications showcase a broad range of opportunities
to leverage LLMs, often integrating LLMs in mul-
tiple ways for distinct purposes.
In developing the
taxonomy, we found that examining each of these in-
tegrations, termed “LLM components”, separately is
crucial for a clear understanding of an application’s
architecture.
The taxonomy adopts an original architectural per-
spective, focusing on how the application interacts
with the LLM while abstracting from the specifics
of application domains. For researchers, the taxon-
omy contributes to shape a common understanding
and terminology, thus aiding theory building in this
emerging domain [29, 50, 18]. For practitioners, the
taxonomy provides inspiration for potential uses of
LLMs in applications, presents design options, and
helps identify challenges and approaches to address
them.
Objectives. In this study, a taxonomy is understood
as a set of dimensions divided into characteristics.
The objective is to identify dimensions that are useful
for categorizing the integration of LLMs in applica-
tions from an architectural perspective. To be most
effective, the taxonomy should be easy to understand
and apply, yet distinctive enough to uncover the es-
sential aspects. Additionally, we aim to develop a
visual representation tailored to the taxonomy’s in-
tended purposes.
Overview. The following section 2 provides back-
ground on LLMs and introduces relevant concepts.
Section 3 presents an overview of related work. The
study design adheres to a Design Science Research
approach [46]. We apply established methods for tax-
onomy design [42, 48] as described in Section 4. This
section also presents the sample of LLM-integrated
applications used for this study. The developed tax-
onomy is presented, demonstrated and formally eval-
uated in section 5. In section 6, we discuss its usabil-
ity and usefulness. Section 7 summarizes the contri-
butions, addresses limitations, and concludes.
2. Large Language Models
2.1. Background
¹ E.g., https://docs.github.com/en/copilot,
https://copilot.cloud.microsoft/en-us/copilot-excel,
https://www.salesforce.com/einsteincopilot
State-of-the-art LLMs such as GPT-3.5, GPT-4, Llama, PALM2, etc., are artificial neural networks consisting of neurons, i.e., very simple processing units, that are organized in layers and connected by
weighted links. Training a neural network means
adapting these weights such that the neural network
shows a certain desired behavior. Specifically, an
LLM is trained to predict the likelihoods of pieces of text, termed tokens, occurring as continuations of a given text presented as input to the LLM. This input is referred to as the prompt. The prompt combined
with the produced output constitutes the context of
an LLM. It may comprise more than 100k tokens in
state-of-the-art LLMs.² Still, its length is limited and
determines the maximum size of prompts and outputs
that an LLM is capable of processing and generating
at a time.
Training of an LLM optimizes its parameters such
that its computed likelihoods align with real text ex-
amples. The training data is a vast body of text snip-
pets extracted, processed, and curated from sources
such as Wikipedia, Github code repositories, common
websites, books, or news archives. An LLM trained
on massive examples is termed a foundation model
or pre-trained model. During training, an LLM not
only learns to produce correct language but also ab-
sorbs and stores information and factual knowledge.
However, it is well known that LLMs frequently pick
up biases, leading to ethical problems. They may
also produce factually incorrect outputs that sound
plausible and convincing, termed hallucinations.
Recent findings show that LLMs can be applied to
a wide range of tasks by appropriately formulating
prompts. Different prompt patterns succeed in dif-
ferent tasks. Basic approaches rely on instructing
the LLM to solve a task described or explained in
the prompt. In few-shot prompting (also known as
few-shot learning), the prompt is augmented with ex-
ample input-output pairs illustrating how to solve the
task, e.g., the requested output format. The number
of examples can vary. Prompting with one example is
called one-shot prompting, while prompting without
any examples is called zero-shot prompting. One-shot
and few-shot prompting fall under the broader cat-
egory of in-context learning. Prompt patterns such
2https://platform.openai.com/docs/models
as chain-of-thought and thinking-aloud aim to elicit
advanced reasoning capabilities from LLMs.
As effective prompts are crucial for unlocking the di-
verse capabilities of an LLM, the discipline of prompt
engineering is evolving, focusing on the systematic
design and management of prompts [66, 9, 53, 31].
2.2. Definitions
Invoking an LLM results in an input-processing-
output sequence: Upon receiving a prompt, the LLM
processes it and generates an output. We refer to an
individual sequence of input-processing-output per-
formed by the LLM as LLM invocation, and define
an LLM-integrated application as a system in which
the software generates the prompt for the LLM and
processes its output. The concept of an application
is broad, encompassing service-oriented architectures
and systems with components loosely coupled via
API calls.
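Under these definitions, an LLM component can be sketched as a thin wrapper that owns prompt generation and output processing. The `llm` callable and the template below are placeholders for any concrete model API, not a prescribed design:

```python
from typing import Callable

class LLMComponent:
    """One LLM integration: generates the prompt, performs the LLM
    invocation, and processes the output (illustrative sketch)."""

    def __init__(self, llm: Callable[[str], str], template: str):
        self.llm = llm            # stand-in for any concrete LLM API
        self.template = template  # application-specific prompt template

    def invoke(self, task_input: str) -> str:
        prompt = self.template.format(input=task_input)  # prompt generation
        raw = self.llm(prompt)                           # LLM invocation
        return raw.strip()                               # output processing

# An application may comprise several such components, each with its own
# template and post-processing. A trivial stand-in LLM for demonstration:
echo_llm = lambda p: "  OK  "
component = LLMComponent(echo_llm, "Summarize: {input}")
result = component.invoke("hello")
```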
Given an LLM’s versatility, an application can uti-
lize it for different tasks, each demanding a specific
approach to create the prompt and handle the re-
sult. This paper defines a particular software compo-
nent that accomplishes this as an LLM-based software
component or, simply, LLM component. An LLM-
integrated application can comprise several LLM
components. The study develops a taxonomy for
LLM components. LLM-integrated applications are
described as combinations of their LLM components.
3. Related Work
With the recent progress in generative AI and LLMs,
the interest in these techniques has increased, and
numerous surveys have been published, providing an
extensive overview of technical aspects of LLMs [72],
reviewing LLMs as tools for software engineering [22],
and discussing the technical challenges of applying
LLMs across various fields [25]. Further studies ad-
dress the regulatory and ethical aspects of Genera-
tive AI and ChatGPT, with a particular focus on
AI-human collaboration [41], and Augmented Lan-
guage Models (ALMs), which are LLMs that enhance
their capabilities by querying tools such as APIs,
databases, and web search engines [38].
Taxonomies related to LLMs include a taxonomy for
prompts designed to solve complex tasks [49] and a
taxonomy of methods for cost-effectively invoking a
remote LLM [60]. A comparative analysis of stud-
ies on applications of ChatGPT is provided by [27],
whereas LLMs are compared based on their applica-
tion domains and the tasks they solve in [20]. Most
closely related to the taxonomy developed here is a
taxonomy for LLM-powered multiagent architectures
[21] which focuses on autonomous agents with less
technical detail. Taxonomies of applications of AI in
enterprises [48] and applications of generative AI, in-
cluding but not limited to LLMs [52], are developed
using methods similar to those in our study.
Several taxonomies in the field of conversational
agents and task-oriented dialog (TOD) systems ad-
dress system architecture [1, 40, 12, 3]. However, they
omit detailed coverage of the integration of generative
language models.
4. Methods
We constructed the taxonomy following established
guidelines [42, 48, 29], drawing from a sample of
LLM-integrated applications. These applications are
detailed in section 4.2.
4.1. Development
Taxonomy. We derived an initial taxonomy from the
standard architecture of conversational assistants de-
scribed in [3], guided by the idea that conversational
assistants are essentially “chatbots with tools”, i.e.,
language-operated user interfaces that interact with
external systems. This approach proved unsuccessful.
The second version was based on the classical three-
tier software architecture, and then extended over
several development cycles. By repeatedly apply-
ing the evolving taxonomy to the example instances,
we identified dimensions and characteristics using an
“empirical-to-conceptual” approach. When new di-
mensions emerged, additional characteristics were de-
rived in a “conceptual-to-empirical” manner. After
five major refinement cycles, the set of dimensions
and characteristics solidified. In the subsequent eval-
uation phase, we applied the taxonomy to a new set
of example instances that were not considered while
constructing the taxonomy. As the dimensions and
characteristics remained stable, the taxonomy was
considered complete. In the final phase, we refined
the wording and visual format of the taxonomy.
Visualization. Developing a taxonomy involves cre-
ating a representation that effectively supports its
intended purpose [29]. Taxonomies can be repre-
sented in various formats, with morphological boxes
[54, 55] or radar charts [21] being well-established
approaches. We evaluated morphological boxes, be-
cause they effectively position categorized instances
within the design space. However, we found that they
make it difficult to perceive a group of categorized in-
stances as a whole since they occupy a large display
area. This drawback is significant for our purposes,
as LLM-integrated applications often comprise mul-
tiple LLM components. Therefore, we developed a
more condensed visualization of the taxonomy based
on feature vectors.
Example instances. We searched for instances of
LLM-integrated applications for taxonomy develop-
ment that should meet the following criteria:
• The application aims for real-world use rather
than focusing on research only (such as testbeds
for experiments or proofs-of-concept). It demon-
strates efforts towards practical usability and ad-
dresses challenges encountered in real-world sce-
narios.
• The application’s architecture, particularly its
LLM components, is described in sufficient de-
tail for analysis.
• The sample of instances covers a diverse range
of architectures.
• The example instances are situated within indus-
trial or technical domains, as we aim to focus on
LLM-integrated applications beyond well-known
fields like law, medicine, marketing, human re-
sources, and education.
The search revealed a predominance of theoretical re-
search on LLM-integrated applications while papers
focusing on practically applied systems were scarce.
Searching non-scientific websites uncovered commer-
cially advertised AI-powered applications, but their
internal workings were typically undisclosed, and reli-
able evaluations were lacking. Furthermore, the heterogeneous terminology and concepts in this emerging field make a comprehensive formal literature search unfeasible. Instead, by repeatedly searching Google Scholar and non-scientific websites using the terms “LLM-integrated applications”, “LLM-powered applications”, “LLM-enhanced system”, “LLM” and “tools”, along with similar variants, we selected six suitable
instances. Some of them integrate LLMs in multiple
ways, totaling eleven distinct LLM components.
For a thorough evaluation, we selected new instances
using relaxed criteria, including those intended for
research. Additionally, we included a real-world ex-
ample lacking explicit documentation to broaden the
diversity of our sample and assess the taxonomy’s
coverage. Within the five selected instances, we iden-
tified ten LLM components.
4.2. Sample of LLM-integrated applications
Table 1 gives an overview of the sample. Names of ap-
plications and LLM components are uniformly writ-
ten as one CamelCase word and typeset in small caps,
deviating from the format chosen by the respective
authors.
LowCode. LowCode is a web-based application
consisting of a prompt-definition section and a di-
alogue section. The prompt-definition section sup-
ports the design of prompts for complex tasks, such
as composing extensive essays, writing resumes for
job applications or acting as a hotel service chatbot
[5]. In the dialogue section, users converse with an
LLM to complete the complex task based on the de-
fined prompt.
LowCode comprises two LLM components termed
Planning and Executing. Planning operates in
the prompt-definition section, where a user roughly
describes a complex task, and Planning designs a
workflow for solving it. The prompt-definition section
offers a low-code development environment where the
LLM-generated workflow is visualized as a graphi-
cal flowchart, allowing a user to edit and adjust the
logic of the flow and the contents of its steps. For
instance, in essay-writing scenarios, this involves in-
serting additional sections, rearranging sections, and
refining the contents of sections. Once approved by
the user, LowCode translates the modified work-
flow back into natural language and incorporates it
into a prompt for Executing. In the dialogue sec-
tion, users converse in interactive, multi-turn dia-
logues with Executing. As defined in the prompt, it
acts as an assistant for tasks such as writing an essay
or resume, or as a hotel service chatbot. While the
idea of the LLM planning a workflow might suggest
using the LLM for application control, LowCode
Planning actually serves as a prompt generator that
supports developing prompts for complex tasks.
Honeycomb. Honeycomb is an observability plat-
form collecting data from software applications in
distributed environments for monitoring.
Users
define queries to retrieve information about the
observed software systems through Honeycomb’s
Query Builder UI. The recently added LLM-based
QueryAssistant allows users to articulate inquiries
in plain English, such as “slow endpoints by status
code” or “which service has the highest latency?”
The QueryAssistant converts these into queries in
Honeycomb’s format, which users can execute and
manually refine [7, 8].
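Honeycomb's implementation is only partly documented, but the pattern itself, a free-text inquiry in, a structured query out, which the user can then execute and refine, can be sketched as follows. The prompt wording and the JSON query schema are invented for illustration; Honeycomb's actual query format differs:

```python
import json

def nl_to_query(user_text: str, llm) -> dict:
    """Translate a plain-English inquiry into a structured query.
    The schema here is invented; Honeycomb's query format differs."""
    prompt = (
        'Translate the inquiry into a JSON query with keys '
        '"calculation" and "group_by".\n'
        f"Inquiry: {user_text}\nJSON:"
    )
    try:
        return json.loads(llm(prompt))  # user can execute and refine this
    except json.JSONDecodeError:
        return {"error": "unparseable LLM output"}

# Canned stand-in LLM for demonstration.
fake_llm = lambda p: '{"calculation": "MAX(latency)", "group_by": "service"}'
query = nl_to_query("which service has the highest latency?", fake_llm)
```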
MyCrunchGpt. MyCrunchGpt acts as an ex-
pert system within the engineering domain, specif-
ically for airfoil design and calculations in fluid me-
chanics. These tasks require complex workflows com-
prising several steps such as preparing data, param-
eterizing tools, and evaluating results, using vari-
ous software systems and tools. The aim of My-
CrunchGpt is to facilitate the definition of these
workflows and automate their execution [28].
MyCrunchGpt offers a web interface featuring a
dialogue window for inputting commands in plain
English, along with separate windows displaying the
Table 1: Example instances selected for development (top 6) and evaluation (bottom 5)

Application       | References | LLM components
Honeycomb         | [7, 8]     | QueryAssistant
LowCode           | [5], [35]  | Planning, Executing
MyCrunchGpt       | [28]       | DesignAssistant, SettingsEditor, DomainExpert
MatrixProduction  | [69]       | Manager, Operator
WorkplaceRobot    | [37]       | TaskPlanning
AutoDroid         | [64]       | TaskExecutor, MemoryGenerator
ProgPrompt        | [51]       | ActionPlanning, ScenarioFeedback
FactoryAssistants | [26]       | QuestionAnswering
SgpTod            | [71]       | DstPrompter, PolicyPrompter
TruckPlatoon      | [70]       | Reporting
ExcelCopilot      | [16, 44]   | ActionExecutor, Advisor, IntentDetector, Explainer
output and results of software tools invoked by MyCrunchGpt in the backend. MyCrunchGpt relies
on predefined workflows, not supporting deviations
or cycles. By appending a specific instruction to the
dialogue history in the prompt for each step of the
workflow, it uses the LLM as a smart parser to ex-
tract parameters for APIs and backend tools from
user input. APIs and tools are called in the prede-
fined order [28, p. 56].
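A minimal sketch of this smart-parser pattern follows; the step instruction, the parameter name, and the JSON output format are assumptions, since [28] does not fully detail the implementation:

```python
import json

def extract_parameters(dialogue_history, step_instruction, llm):
    """Append a step-specific instruction to the dialogue history and
    let the LLM extract tool/API parameters from free-text input."""
    prompt = "\n".join(dialogue_history) + "\n" + step_instruction
    return json.loads(llm(prompt))  # parameters for the predefined next step

history = ["User: analyze an airfoil with chord length 1.2 m"]
instruction = 'Extract the parameters as JSON with key "chord_length".'
# Stand-in LLM; a real model would parse the dialogue text.
fake_llm = lambda p: '{"chord_length": 1.2}'
params = extract_parameters(history, instruction, fake_llm)
```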
MyCrunchGpt is still in development. The paper
[28] explains the domain as well as the integration of
the LLM, but does not fully detail the implementa-
tion of the latter. Still, MyCrunchGpt illustrates
innovative applications of an LLM in a technical do-
main. We categorize three LLM components solving
tasks within MyCrunchGpt: a DesignAssistant
guiding users through workflows and requesting pa-
rameters for function and API calls; a SettingsEd-
itor updating a JSON file with settings for a back-
end software tool; and a DomainExpert which helps
evaluating results by comparing them to related re-
sults, e.g., existing airfoil designs, which it derives
from its trained knowledge.
MatrixProduction. MatrixProduction employs an LLM for controlling a matrix production system [69]. While in a classical line production
setup, workstations are arranged linearly and the
manufacturing steps follow a fixed sequence, matrix
production is oriented towards greater flexibility. Autonomous transport vehicles carry materials
and intermediate products to workstations, termed
automation modules, each offering a spectrum of
manufacturing skills that it can contribute to the
production process. Compared to line production,
matrix production is highly adaptable and can
manufacture a variety of personalized products with
full automation. This requires intelligent production
management to (a) create workplans that orchestrate
and schedule the automation modules’ skills, and (b)
program the involved automation modules such that
they execute the required processing steps.
MatrixProduction incorporates two LLM compo-
nents: Manager creates workplans as sequences of
skills (a), while Operator generates programs for
the involved automation modules (b).
MatrixProduction prompts Manager and Op-
erator to provide textual explanations in addition
to the required sequences of skills or automation
module programs. The LLM output is processed
by a parser before being used to control the physi-
cal systems. Manager relies on built-in production-
specific knowledge of the LLM such as “a hole is pro-
duced by drilling”.
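Such output parsing can be sketched as follows; the SKILLS/EXPLANATION layout is an assumed example format, not the one used in [69]:

```python
def parse_manager_output(raw: str):
    """Split an LLM response into a machine-usable skill sequence and a
    human-readable explanation. The layout is an assumed example format."""
    skills_part, _, explanation = raw.partition("EXPLANATION:")
    skills = [s.strip()
              for s in skills_part.replace("SKILLS:", "").split(",")
              if s.strip()]
    return skills, explanation.strip()

raw = "SKILLS: drill, deburr, inspect EXPLANATION: a hole is produced by drilling"
skills, explanation = parse_manager_output(raw)
# Only `skills` would be forwarded to control the physical system.
```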
Noteworthy in this approach is its tight integra-
tion into the system landscape of Industry 4.0.
The few-shot Manager and Operator prompts
are generated automatically using Asset Adminis-
tration Shells, which are standardized, technology-independent data repositories storing digital twins of
manufacturing assets for use in Industry 4.0 [2].
WorkplaceRobot. An experimental robot system
is enhanced with LLM-based task planning in [37].
The robot operates in a workplace environment fea-
turing a desk and several objects. It has previously
been trained to execute basic operations expressed
in natural language such as “open the drawer” or
“take the pink object and place it in the drawer”.
LLM-based task planning enables the robot to per-
form more complex orders like “tidy up the work area
and turn off all the lights”. To this end, an LLM is
prompted to generate a sequence of basic operations
that accomplish the complex order.
Although the robot expects operations phrased in natural language, the LLM is prompted with a Python coding task. For instance, the basic opera-
tion “turn on the green light” corresponds to a Python
command push_button(’green’). The prompt for
the LLM includes several examples each consisting
of a description of an environment state, a complex
order formatted as a comment, and a sequence of
Python robot commands that accomplish the com-
plex order. When invoking the LLM to generate the
Python program for a new order, the prompt is aug-
mented with a description of the environment’s cur-
rent state and the new order as a comment.
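The described prompt layout can be sketched as follows; the environment encoding and command names are simplified placeholders modeled on the paper's description:

```python
def build_planning_prompt(examples, env_state: str, new_order: str) -> str:
    """Assemble a code-completion prompt: worked examples first, then the
    current environment state with the new order as a Python comment."""
    parts = []
    for ex_env, ex_order, ex_commands in examples:
        parts.append(f"{ex_env}\n# {ex_order}\n" + "\n".join(ex_commands))
    parts.append(f"{env_state}\n# {new_order}")  # the LLM completes from here
    return "\n\n".join(parts)

examples = [("objects = ['green light']", "turn on the green light",
             ["push_button('green')"])]
prompt = build_planning_prompt(examples,
                               "objects = ['drawer', 'pink object']",
                               "tidy up the work area")
```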
The Python code produced by the LLM is trans-
lated back to a sequence of basic operations in nat-
ural language. When the robot executes these oper-
ations, there is no feedback about successful comple-
tion. Rather, the system assumes that all basic op-
erations require a fixed number of timesteps to com-
plete.
AutoDroid. The goal of mobile task automation is
hands-free user interaction for smartphones through
voice commands. AutoDroid is a voice control sys-
tem for smartphones that can automatically execute
complex orders such as “remind me to do laundry on
May 11th” or “delete the last photo I took” [64, 65].
Such complex orders are fulfilled by performing sequences of basic operations in an Android app, such as “scroll down, then press button x” in the calendar app. AutoDroid employs an LLM component
TaskExecutor to plan these sequences of opera-
tions. The challenge is that the next operation to ex-
ecute depends on the current state of the Android app
which continuously changes as the app is operated.
AutoDroid solves this by invoking the TaskEx-
ecutor repeatedly after each app operation with the
prompt comprising the updated state of the Graph-
ical User Interface (GUI) along with the user’s com-
plex order.
Before executing irrevocable operations, such as per-
manently deleting data or calling a contact, Auto-
Droid prompts the user to confirm or adjust the op-
eration. TaskExecutor is instructed to include a
“confirmation needed” hint in its output for such op-
erations.
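The resulting control loop can be sketched as follows; the GUI-state encoding, the operation strings, and the confirmation-hint format are assumptions:

```python
def run_task(complex_order: str, gui, llm, confirm) -> None:
    """Re-invoke the LLM after every app operation with the updated GUI
    state; ask the user before irrevocable operations (sketch)."""
    while True:
        prompt = f"GUI state:\n{gui.state()}\nOrder: {complex_order}\nNext:"
        operation = llm(prompt)
        if operation == "DONE":
            break
        if "confirmation needed" in operation and not confirm(operation):
            continue  # user rejected an irrevocable operation
        gui.execute(operation)

class FakeGui:  # toy stand-in for an Android app, for demonstration
    def __init__(self):
        self.log = []
        self.plan = iter(
            ["scroll down", "press delete [confirmation needed]", "DONE"])
    def state(self):
        return f"{len(self.log)} operations performed"
    def execute(self, op):
        self.log.append(op)

gui = FakeGui()
run_task("delete the last photo", gui,
         llm=lambda p: next(gui.plan),  # canned plan instead of a real LLM
         confirm=lambda op: True)
```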
The prompt for TaskExecutor comprises an ex-
tract from a knowledge base which is built automati-
cally in an offline learning phase as follows: In a first
step, a “UI Automator” (which is not an LLM com-
ponent) automatically and randomly operates the
GUI elements of an Android app to generate a UI
Transition Graph (UTG). The UTG has GUI states
as nodes and the possible transitions between GUI
states as edges. As next steps, AutoDroid invokes
two LLM components referred to as MemoryGen-
erators to analyze the UTG.
The first MemoryGenerator is prompted repeat-
edly for each GUI state in the UTG. Its task is to
explain the functionality of the GUI elements. Be-
sides instructions and examples of the table format
desired as output, its prompt includes an HTML rep-
resentation of the GUI state, the GUI actions preced-
ing this state, and the GUI element operated next.
Its output consists of tuples explaining the function-
ality of a GUI element by naming the derived func-
tionality (e.g., “delete all the events in the calendar
app”) and the GUI states and GUI element actions in-
volved. Similarly, the second MemoryGenerator
is prompted to output a table listing GUI states and
explanations of their functions. These tables consti-
tute AutoDroid’s knowledge base.
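The UTG itself can be sketched as a plain directed graph with labeled edges; the representation below is illustrative, not AutoDroid's actual data structure:

```python
class UITransitionGraph:
    """GUI states as nodes, GUI element actions as labeled edges (sketch)."""

    def __init__(self):
        self.edges = {}  # state -> list of (action, next_state) pairs

    def add_transition(self, state, action, next_state):
        self.edges.setdefault(state, []).append((action, next_state))

    def transitions(self, state):
        return self.edges.get(state, [])

utg = UITransitionGraph()
utg.add_transition("calendar_main", "tap 'delete all'", "confirm_dialog")
utg.add_transition("confirm_dialog", "tap 'yes'", "calendar_main")
```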
ProgPrompt. ProgPrompt [51] is an approach to LLM-based robot task planning similar to WorkplaceRobot. Its robot is controlled by Python code and works in a real and a simulated household environment.
ProgPrompt comprises two LLM components. ActionPlanning generates Python scripts for tasks such as “microwave salmon” using basic operations like grab('salmon'), open('microwave'), and putin('salmon', 'microwave'), notably without considering the current state of the environment.
To establish a feedback loop with the environment,
ActionPlanning adds assert statements. These
statements verify the preconditions of basic opera-
tions and trigger remedial actions when preconditions
are not met. For instance, a script for “microwave
salmon” comprises the following code fragment:

    assert('microwave' is 'opened')
    else: open('microwave')
    putin('salmon', 'microwave')
When operating in the simulated environment,
ProgPrompt can verify an assert statement
through its second LLM component, Scenario-
Feedback. Prompted with the current state of the
environment and the assert statement, Scenario-
Feedback evaluates it and outputs True or False.
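This check can be sketched as follows; the prompt wording and the state encoding are assumptions:

```python
def check_assertion(env_state: str, assertion: str, llm) -> bool:
    """Ask an LLM whether an assert statement holds in the simulated
    environment; expects a literal True/False answer (sketch)."""
    prompt = (f"Environment state: {env_state}\n"
              f"Does the following hold? {assertion}\n"
              "Answer True or False:")
    return llm(prompt).strip() == "True"

# Stand-in LLM answering from a toy state string, for demonstration.
fake_llm = lambda p: "True" if "microwave: opened" in p else "False"
ok = check_assertion("microwave: opened, salmon: on counter",
                     "'microwave' is 'opened'", fake_llm)
# If the check fails, the script's else-branch triggers the remedial action.
```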
FactoryAssistants. FactoryAssistants advise
workers on troubleshooting production line issues in
two manufacturing domains: detergent production
and textile production [26]. The assistants leverage
domain knowledge from FAQs and documented prob-
lem cases to answer user queries. The required do-
main knowledge is provided as a part of the prompt.
SgpTod. SgpTod employs an LLM to implement a
chatbot, specifically, a task-oriented dialogue (TOD)
system [71]. TOD systems are also known as conver-
sational assistants. In contrast to open-domain dia-
logue (ODD) systems, which engage users in goalless
conversations, they are designed for assisting users in
specific tasks.
In general, TOD systems require the following
components [3]: Natural Language Understanding
(NLU), analyzing the user’s input to classify intents
and extract entities; Dialogue Management (DM) for
deciding on a system action that is appropriate in
a given dialogue state (e.g., ask for more informa-
tion or invoke a hotel booking service); and Natu-
ral Language Generation (NLG) for producing a re-
sponse that the TOD system can present to the user.
Intent classification, also known as intent detection,
matches free-text user input to one of several tasks a
TOD system can perform (e.g., book a hotel). Entity
extraction isolates situational values, called entities,
from the user input (e.g., the town and the date of
the hotel booking). The TOD system may require
several dialogue turns to elicit all necessary entities
from the user.
In TOD research, the system’s in-
ternal representation of the user’s intentions and the
entity values is commonly referred to as its “belief
state”. For example, in the restaurant search domain,
the belief state may include attribute-value pairs like
cuisine:Indian and pricerange:medium.
SgpTod is a multi-domain TOD system, concur-
rently handling multiple task domains found in stan-
dard TOD evaluation datasets, such as recommend-
ing restaurants or finding taxis. Similar to other ex-
perimental TOD systems [23], SgpTod accesses a
database that stores information from the task do-
mains, such as available hotels and restaurants.
SgpTod comprises two LLM components, called
DstPrompter and PolicyPrompter, that are
both invoked in every dialogue turn between SgpTod
and the user. The DstPrompter handles the NLU
aspect, analyzing the user’s input and populating the
system’s belief state.
It outputs an SQL query
suited to extract the database entries that match the
current belief state. Upon retrieving the database en-
tries, SgpTod invokes its PolicyPrompter which
covers both DM and NLG. Prompted with the dia-
logue history and the database entries retrieved, it
produces a two-part output: a natural language re-
sponse for NLG and a system action for DM.
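One dialogue turn thus chains the two components. In the sketch below, the SQL, the database, and the two-part output format are simplified placeholders:

```python
def dialogue_turn(history, user_input, db_query, dst_llm, policy_llm):
    """One SgpTod-style turn: DstPrompter produces an SQL query encoding
    the belief state, PolicyPrompter produces action plus response."""
    history = history + [f"User: {user_input}"]
    sql = dst_llm("\n".join(history))         # NLU: belief state as SQL
    entries = db_query(sql)                   # retrieve matching DB entries
    out = policy_llm("\n".join(history) + f"\nDB: {entries}")
    action, _, response = out.partition("|")  # assumed two-part format
    return action.strip(), response.strip()

# Canned stand-ins for demonstration.
dst = lambda p: "SELECT * FROM restaurants WHERE cuisine='Indian'"
policy = lambda p: "offer_restaurant | I found 2 Indian restaurants."
db = lambda sql: ["Curry House", "Taj Palace"]
action, response = dialogue_turn([], "an Indian place, please", db, dst, policy)
```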
TruckPlatoon. The concept of truck platooning
means that trucks travel closely together for bet-
ter fuel efficiency and traffic flow. TruckPla-
toon comprises an algorithmic control loop which
autonomously maintains a consistent distance be-
tween trucks. It invokes an LLM to generate natural-
language reports on the platoon’s performance and
stability from measurements tracked by the control
algorithm, providing easily understandable informa-
tion for engineers involved in monitoring and opti-
mizing the truck platooning system.
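This reporting step can be sketched as follows; the measurement names and prompt wording are invented for illustration:

```python
def report_platoon(measurements: dict, llm) -> str:
    """Turn numeric control-loop measurements into a natural-language
    report via an LLM (sketch; field names are illustrative)."""
    lines = [f"{name}: {value}" for name, value in measurements.items()]
    prompt = ("Write a short report on platoon performance and stability "
              "based on these measurements:\n" + "\n".join(lines))
    return llm(prompt)

# Stand-in LLM for demonstration.
fake_llm = lambda p: "The platoon held a stable 12 m gap throughout."
report = report_platoon({"mean_gap_m": 12.0, "gap_std_m": 0.4}, fake_llm)
```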
ExcelCopilot. ExcelCopilot is an example of
a recent trend where software companies integrate
LLM-based assistants, often termed “copilots”, into
their products [44]. These copilots not only provide
textual guidance but also perform actions within the
software environment, constituting a distinctive type
of LLM-integrated application. We chose Excel-
Copilot as an example for evaluating our taxonomy.
Since its implementation is undisclosed, we infer its
architecture from indirect sources, including a screen-
cast and a report on insights and experiences from
copilot developers [16, 44]. This inferred architecture
may deviate from the actual implementation.
ExcelCopilot is accessible in a task bar along-
side the Excel worksheet.
It features buttons with
context-dependent suggestions of actions and a text
box for users to type in commands in natural lan-
guage. ExcelCopilot only works with data tables,
so its initial suggestion is to convert the active work-
sheet’s data into a data table. Copilot functions ac-
tivate when a data table or part of it is selected. It
then presents buttons for four top-level tasks: “add
formula columns”, “highlight”, “sort and filter”, and
“analyze”. The “analyze” button triggers the copilot
to display more buttons, e.g., one that generates a
pivot chart from the selected data. ExcelCopilot
can also add a formula column to the data table and
explain the formula in plain language.
When a user inputs a free-text command, Excel-
Copilot may communicate its inability to fulfill
it. This consistently occurs with commands requiring
multiple steps, indicating that ExcelCopilot lacks
a planning LLM component as seen in, for example,
MatrixProduction. This observation, along with
its mention in [44], suggests that ExcelCopilot em-
ploys an intent detection-skill routing architecture.
This architecture includes an LLM component that
maps free-text user commands to potential intents
and then delegates to other LLM components tasked
with generating actions to fulfill those intents. Ac-
cordingly, ExcelCopilot comprises several types of
LLM components:
• Several distinct Action Executors generate
code for specific application actions, such as cre-
ating a pivot table, designing a worksheet for-
mula, inserting a diagram, and so on.
• An Advisor suggests meaningful next actions.
Its outputs serve to derive button captions and
prompts for ActionExecutors.
• When a user inputs a free-text command, the
IntentDetector is invoked to determine and
trigger a suitable ActionExecutor. The In-
tentDetector communicates its actions to
users and informs them when it cannot devise
a suitable action.
• The Explainer generates natural language ex-
planations of formulae designed by ExcelCopi-
lot. It is unclear whether under the hood, the
ActionExecutor is generating both the for-
mula and the explanation, or if two separate
LLM components are being invoked. We assume
the latter, i.e., that a separate Explainer LLM
component exists.
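The inferred intent detection-skill routing flow can be sketched as follows. This sketch is purely speculative: intent names and executor outputs are invented, and both LLM components are replaced by deterministic stubs.

```python
from typing import Callable

def detect_intent(command: str) -> str:
    """Stand-in for the IntentDetector LLM component."""
    if "chart" in command.lower():
        return "create_pivot_chart"
    if "formula" in command.lower():
        return "add_formula_column"
    return "unknown"

def pivot_chart_executor(command: str) -> str:
    """Stand-in for an ActionExecutor that generates application code."""
    return "CODE: insert_pivot_chart(selection)"

def formula_executor(command: str) -> str:
    return "CODE: add_column(formula='=SUM(A:A)')"

EXECUTORS: dict[str, Callable[[str], str]] = {
    "create_pivot_chart": pivot_chart_executor,
    "add_formula_column": formula_executor,
}

def handle_command(command: str) -> str:
    intent = detect_intent(command)
    executor = EXECUTORS.get(intent)
    if executor is None:
        return "Sorry, I cannot do that."  # copilot communicates inability
    return executor(command)

print(handle_command("Add a chart of monthly sales"))
print(handle_command("Translate this sheet"))
```

The `None` branch corresponds to the observed behavior of communicating inability to fulfill a command.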
While users interact repeatedly with ExcelCopi-
lot, each interaction adheres to a single-turn pat-
tern, with the user providing a command and Ex-
celCopilot executing it [44].
5. A Taxonomy for LLM Components and
LLM-Integrated Applications
When developing the taxonomy, it emerged that an-
alyzing an LLM-integrated application should begin
with identifying and describing its distinct LLM com-
ponents. Analyzing each LLM component separately
helps capture details and provides a clear understand-
ing of how the application utilizes LLM capabili-
ties. The LLM-integrated application can then be
described as a combination of the LLM components
it employs.
Table 2: Dimensions and characteristics of the taxonomy. Codes of characteristics are printed in uppercase. "Meta" means "metadimension". "MuEx" means "mutual exclusiveness".

Meta        Dimension     Characteristics                                   MuEx
Invocation  Interaction   App, Command, Dialog                              enforced
Invocation  Frequency     Single, Iterative                                 yes
Function    Logic         cAlculate, Control                                yes
Function    UI            none, Input, Output, Both                         yes
Function    Data          none, Read, Write, Both                           yes
Prompt      Instruction   none, User, LLM, Program                          enforced
Prompt      State         none, User, LLM, Program                          enforced
Prompt      Task          none, User, LLM, Program                          yes
Prompt      Check         none, User, LLM, Program                          enforced
Skills      Skills        reWrite, Create, conVerse, Inform, Reason, Plan   no
Output      Format        FreeText, Item, Code, Structure                   no
Output      Revision      none, User, LLM, Program                          enforced
Output      Consumer      User, LLM, Program, Engine                        enforced
5.1. Overview and demonstration
The taxonomy identifies 13 dimensions for LLM com-
ponents, grouped into five metadimensions as shown
in table 2. It comprises both dimensions with gen-
uinely mutually exclusive characteristics and those
with non-exclusive characteristics. For dimensions
related to the technical integration of LLMs within
applications, mutual exclusiveness is enforced. Given
the open nature of software architecture, the inte-
gration of LLMs allows for significant diversity.
In
practice, LLM components may show multiple char-
acteristics within these dimensions. Nonetheless, the
taxonomy requires categorizing each component with
a predominant characteristic, enforcing a necessary
level of abstraction to effectively organize and struc-
ture the domain.
We applied the taxonomy to categorize each of the
example instances described in section 4.2. The re-
sults are depicted in figure 1. The dimensions and
their characteristics are detailed and illustrated with
examples in section 5.2.
The taxonomy visualizes an LLM component by a
feature vector comprising binary as well as multi-
valued features. Non-mutually exclusive dimensions
are represented by a set of binary features. The re-
maining dimensions are encoded as n-valued features
where n denotes the number of characteristics. For
compactness, we use one-letter codes of the charac-
teristics as feature values in the visualizations.
In
table 2, these codes are printed in upper case in the
respective characteristic’s name.
A feature vector representing an LLM component
is visualized in one line. For dimensions with non-
mutually exclusive characteristics, all possible codes
are listed, with the applicable ones marked. The re-
maining dimensions are represented by the code of
the applicable characteristic, with the characteris-
tic none shown as an empty cell. We shade feature
values with different tones to support visual percep-
tion. LLM components within the same application
are grouped together, visualizing an LLM-integrating
application in a tabular format.
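The encoding can be sketched in code. The values below are for the TruckPlatoon component as we read them from figure 1 (codes per table 2); the dimension order and the rendering are our own illustration.

```python
# Feature vector for one LLM component: mutually exclusive dimensions are
# single-valued features, non-exclusive ones (Skills, Format) binary flags.
truckplatoon = {
    "Interaction": "A",   # App
    "Frequency": "S",     # Single
    "Logic": "A",         # cAlculate
    "UI": "O",            # Output
    "Data": "",           # none -> empty cell
    "Instruction": "P", "State": "P", "Task": "P", "Check": "",
    "Skills": {"W"},      # reWrite
    "Format": {"F"},      # FreeText
    "Revision": "",
    "Consumer": "U",      # User
}

def to_row(component: dict, skill_codes="WCVIRP", format_codes="FICS") -> str:
    """Render one component as the one-line feature vector used in figure 1."""
    cells = [component[d] for d in
             ("Interaction", "Frequency", "Logic", "UI", "Data",
              "Instruction", "State", "Task", "Check")]
    cells += [c if c in component["Skills"] else " " for c in skill_codes]
    cells += [c if c in component["Format"] else " " for c in format_codes]
    cells += [component["Revision"], component["Consumer"]]
    return "|".join(c or " " for c in cells)

print(to_row(truckplatoon))
```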
5.2. Dimensions and characteristics
5.2.1. Invocation dimensions
Two Invocation dimensions address the way the LLM
is invoked within the application.
Interaction describes how the user interacts with the
LLM with three characteristics:
App: Users never converse with the LLM directly
in natural language, rather the application invokes
the LLM automatically. E.g., users do not interact
directly with ExcelCopilot ActionExecutor or
with MatrixProduction Operator.

Figure 1: Categorized example instances. See table 2 for a legend. ∗, 2: multiple LLM components. [Figure: tabular feature-vector visualization of the 21 example LLM components (Honeycomb QueryAssistant; LowCode Planning and Executing; MyCrunchGpt DesignAssistant, SettingsEditor, and DomainExpert; MatrixProduction Manager and Operator; WorkplaceRobot; AutoDroid Executor and MemoryGenerator2; ProgPrompt ActionPlanning and ScenarioFeedback; FactoryAssistant; SgpTod DstPrompter and PolicyPrompter; TruckPlatoon; ExcelCopilot ActionExecutor∗, Advisor, IntentDetector, and Explainer) with their characteristic codes; cell values not reproduced here.]
Command: Users input single natural
language
commands. E.g., users interact with AutoDroid
TaskExecutor through single natural
language
commands.
Dialog: Users engage in multi-turn dialogues with the
LLM component to achieve a use goal. E.g., users
repeatedly prompt LowCode Executing or My-
CrunchGpt DesignAssistant in multi-turn dia-
logues to obtain an essay or an airfoil design, respec-
tively.
Frequency addresses how often the application in-
vokes a specific LLM component to fulfill a goal:
Single: A single invocation of an LLM component
is sufficient to produce the result. E.g.,
in My-
CrunchGpt, the application internally invokes dis-
tinct LLM components once for each user input by
injecting varying prompt instructions.
Iterative: The LLM component is invoked repeatedly
to produce the result. E.g., AutoDroid TaskEx-
ecutor is invoked multiple times to fulfill a com-
mand with an updated environment description in
the State prompt; LowCode Executing is repeat-
edly prompted by the user to achieve the use goal
while the application updates the dialogue history.
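A minimal sketch of the Iterative pattern follows: the component is re-invoked with an updated State until the goal is reached. The environment, states, and actions are invented stubs.

```python
def llm_next_action(state: str) -> str:
    """Stand-in for an LLM component proposing the next action
    given the current environment description (the State prompt part)."""
    return {"home": "open_settings", "settings": "toggle_wifi"}.get(state, "done")

def run(goal: str) -> list[str]:
    state, actions = "home", []
    while True:
        action = llm_next_action(state)  # one LLM invocation per step
        if action == "done":
            return actions
        actions.append(action)
        # the environment transition updates the State for the next invocation
        state = {"open_settings": "settings", "toggle_wifi": "wifi_off"}[action]

print(run("turn off wifi"))
```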
5.2.2. Function dimensions
The Function dimensions are derived from the classi-
cal three-tier software architecture model which seg-
regates an application into three distinct layers: pre-
sentation, logic and data [17]. The presentation layer
implements the UI. On the input side, it allows users
to enter data and commands that control the appli-
cation. On the output side, it presents information
and provides feedback on the execution of commands.
The logic layer holds the code that directly realizes
the core objectives and processes of an application
such as processing data, performing calculations, and
making decisions. The data layer of an application
manages the reading and writing of data from and
to persistent data storage. Due to its versatility, an
LLM component can simultaneously implement func-
tionality for all three layers. The taxonomy addresses
this with three Function dimensions.
UI indicates whether an LLM component contributes
significantly to the user interface of an application,
avoiding the need to implement graphical UI controls
or display elements:
none: No UI functionality is realized by the LLM.
E.g., in ExcelCopilot, the LLM does not replace
any UI elements.
Input: Input UI is (partially) implemented by
the LLM. E.g., in MatrixProduction Manager,
users input their order in natural language, obviating
a product configuration GUI.
Output: Output UI is (partially) implemented by the
LLM. E.g., in TruckPlatoon, the output gener-
ated by the LLM component can replace a data cock-
pit with gauges and other visuals displaying numeri-
cal data.
Both: Input and output UI are (partially) imple-
mented by the LLM. E.g., in MyCrunchGpt, the
DesignAssistant provides a convenient conversa-
tional interface for parameterization of APIs and
tools and feedback on missing values, which other-
wise might require a complex GUI.
Logic indicates whether the LLM component deter-
mines the control flow of the application. It discerns
two characteristics:
cAlculate: The output does not significantly impact
the control flow of the application, i.e., the output
is processed like data. E.g., MyCrunchGpt Set-
tingsEditor modifies a JSON file, replacing a pro-
grammed function; MyCrunchGpt DesignAssis-
tant asks the user for parameters, but the sequence
of calling APIs and tools follows a predefined work-
flow; the workflow computed by LowCode Plan-
ning is displayed without influencing the applica-
tion’s control flow.
Control: The output of the LLM is used for con-
trolling the application. E.g., the plans generated
by MatrixProduction Manager serve to sched-
ule and activate production modules; the actions pro-
posed by AutoDroid TaskExecutor are actually
executed and determine how the control flow of the
app proceeds.
Since an LLM invocation always computes a result,
cAlculate is interpreted as “calculate only”, making
cAlculate and Control mutually exclusive.
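The contrast can be illustrated with toy stubs: under cAlculate the output is handled as data, while under Control it selects the application's next step. The `llm` function and the action names are invented.

```python
def llm(prompt: str) -> str:
    """Stand-in for any LLM invocation."""
    return "pick_up(screwdriver)" if "next action" in prompt else "a summary"

# cAlculate: the output is displayed or stored; control flow is unchanged
summary = llm("Summarize the log")
report = f"Report: {summary}"

# Control: the output determines what the application does next
ACTIONS = {"pick_up(screwdriver)": lambda: "robot picks up screwdriver"}
action = llm("Given the scene, output the next action")
result = ACTIONS[action]()  # LLM output drives execution
print(report)
print(result)
```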
Data addresses whether the LLM contributes to read-
ing or writing persistent data:
none: The LLM does not contribute to reading or
writing persistent data. This characteristic applies
to most sample instances.
Read : The LLM is applied for reading from persistent
data store. E.g., SgpTod DstPrompter generates
SQL queries which the application executes; Honey-
comb QueryAssistant devises analytical database
queries.
Write and Both: No LLM component among the
samples generates database queries for creating or
updating persistent data.
5.2.3. Prompt-related dimensions
Integrating an LLM into an application poses spe-
cific requirements for prompts, such as the need for
prompts to reliably elicit output in the requested
form [68]. While a broad range of prompt patterns
have been identified and investigated [66], there is
still a lack of research on successful prompt pat-
terns specifically for LLM-integrated applications, on
which this taxonomy could build. Developing prompt
taxonomies is a challenging research endeavor in itself
[49] and is beyond the scope of this research. There-
fore, the taxonomy does not define a dimension with
specific prompt patterns as characteristics, but rather
focuses on how the application generates the prompt
for an LLM component from a technical perspective.
Prompts generally consist of several parts with dis-
tinct purposes, generated by different mechanisms.
Although many authors explore the concepts, a com-
mon terminology has yet to be established. This is
illustrated in table 3, showing terms from an ad-hoc
selection of recent papers addressing prompt gener-
ation in applications. In the table, italics indicate
that the authors refrain from introducing an abstract
term and instead use a domain-specific description.
The term “examples” indicates a one-shot or few-shot
prompt pattern. The terms that are adopted for the
taxonomy are underlined.
The taxonomy distinguishes three prompt parts re-
ferred to as Prompt Instruction, Prompt State, and
Prompt Task. These parts can occur in any order,
potentially interleaved, and some parts may be ab-
sent.
• Instruction is the part of a prompt that outlines
how to solve the task. Defined during LLM com-
ponent development, it remains static through-
out an application’s lifespan.
• State is the situation-dependent part of the
prompt that is created dynamically every time
the LLM is invoked. The taxonomy opts for the
term State instead of “context” in order to avoid
confusion with the “LLM context” as explained
in section 2. The State may include the current
dialogue history, an extract of a knowledge base
needed specifically for the current LLM invoca-
tion, or a state or scene description, etc.
• Task is the part of the prompt conveying the
task to solve in a specific invocation.
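As an illustration, assembling a prompt from these three parts might look as follows. This is an AutoDroid-style example with invented content; in AutoDroid both State and Task are generated by program code.

```python
# Instruction: fixed at development time, static over the application's lifespan
INSTRUCTION = (
    "You control an Android app. Given the current GUI state and a task, "
    "reply with the next action."
)

def build_state(gui_elements: list[str], history: list[str]) -> str:
    """State: situation-dependent part, created anew for every invocation."""
    return ("Current screen elements: " + ", ".join(gui_elements) +
            "\nActions so far: " + ("; ".join(history) or "none"))

def build_prompt(instruction: str, state: str, task: str) -> str:
    return f"{instruction}\n\n{state}\n\nTask: {task}"

prompt = build_prompt(
    INSTRUCTION,
    build_state(["Settings button", "Search bar"], []),
    "Turn off Wi-Fi",  # Task: conveys what to solve in this invocation
)
print(prompt)
```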
Prompt Instruction, State and Task describe the ori-
gins of the prompt parts by uniform characteristics:
none: The prompt part is not present. E.g., Prog-
Prompt ActionPlanning has no State prompt,
nor does LowCode Planning (except the dialogue
history when planning a subprocess).
Instruction
and Task prompt parts are present in all sample in-
stances.
User: The user phrases the prompt part. E.g., the
Task for ExcelCopilot IntentDetector or for
LowCode Planning is phrased by the user. There
are no sample instances where the user provides the
Instruction or State prompt parts.
LLM: The prompt part is generated by an LLM. E.g.,
LowCode Planning generates the State for Low-
Code Executing and ExcelCopilot IntentDe-
tector generates the Task for ExcelCopilot Ac-
tionExecutors.
Program: Application code generates the prompt
part. E.g., AutoDroid programmatically generates
the State and the Task parts for its MemoryGen-
erators in the knowledge base building phase.
The Prompt Instruction dimension is always gener-
ated by Program. While a user and possibly an LLM
have defined this prompt part during application de-
velopment, this falls outside the scope of this taxon-
omy. Therefore, the Prompt Instruction dimension is
not discriminating and categorizes all cases as Pro-
gram. It is retained in the taxonomy for completeness
and better understandability.
Prompt Check describes whether the application em-
ploys a review mechanism to control and modify the
prompt before invoking the LLM. The same charac-
teristics as for the prompt parts are applicable:
none: The prompt is used without check.
User: The user checks and revises the prompt.
LLM: Another LLM component checks or revises the
prompt.
Program: The application comprises code to check
or revise the prompt. E.g., AutoDroid removes
personal data, such as names, to ensure privacy
before invoking the TaskExecutor; Honeycomb
QueryAssistant incorporates a coded mechanism
against prompt injection attacks.
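A simplified sketch of such a programmatic Prompt Check follows; the redaction pattern and the injection markers are toy stand-ins for the mechanisms described above.

```python
import re

NAME = re.compile(r"\b(Alice|Bob|Carol)\b")  # toy personal-data list
INJECTION_MARKERS = ("ignore previous instructions", "system prompt")

def check_prompt(prompt: str) -> str:
    """Programmatic Prompt Check: scrub personal data, reject
    prompts that look like injection attempts."""
    redacted = NAME.sub("<NAME>", prompt)
    if any(marker in redacted.lower() for marker in INJECTION_MARKERS):
        raise ValueError("possible prompt injection")
    return redacted

print(check_prompt("Send a message to Alice about the meeting."))
```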
Table 3: Terms used for prompt parts. Expressions specific to a domain are printed in italics, "examples" indicates a one-shot or few-shot prompt pattern. Terms adopted for the taxonomy are underlined.

Source  Instruction
[72]    task description + examples
[34]    instruction prompt
[32]    predefined prompt
[45]    prompt template + examples
[45]    examples
[37]    prompt context, i.e., examples
[5]     education prompt
[5]     education prompt
[69]    role and goal + instruction + examples
[26]    predefined system instruction + domain-specific information

State (entries across sources): DB schema; environment state, scene description; dialogue history; dialogue history + provided workflow; context; query results from knowledge graph

Task (entries across sources): test instance; data prompt; user prompt; user input question; SQL query result; input task commands; user input task prompt; (circumscribed); current task; the user's request
Most example instances omit prompt checks. There
are no examples where a Check is performed by a
User or an LLM.
5.2.4. Skills dimensions
The Skills dimension captures the types of LLM ca-
pabilities that an application utilizes. It is designed
as a dimension with six non-mutually exclusive char-
acteristics.
Skills is decomposed into six specific capabilities:
reWrite: The LLM edits or transforms data or
text, such as rephrasing, summarizing, reformat-
ting, correcting, or replacing values. E.g., My-
CrunchGpt SettingsEditor replaces values in
JSON files; TruckPlatoon converts measurements
into textual explanations.
Create: The LLM generates novel output. E.g.,
LowCode Executing generates substantial bodies
of text for tasks like essay writing.
conVerse: The application relies on the LLM’s capa-
bility to engage in purposeful dialogues with humans.
E.g., MyCrunchGpt DesignAssistant asks users
for missing parameters; SgpTod PolicyPrompter
decides how to react to user inputs and formulates
chatbot responses.
Inform: The application depends on knowledge that
the LLM has acquired during its training, unlike
applications that provide all necessary information
within the prompt. E.g., MyCrunchGpt Domain-
Expert provides expert knowledge on airfoil designs;
MatrixProduction relies on built-in knowledge of
production processes, such as “a hole is produced
by drilling”; LowCode Executing uses its learned
knowledge for tasks like essay writing.
Reason: The LLM draws conclusions or makes log-
ical inferences. E.g., FormulaExplainer in Ex-
celCopilot explains the effects of Excel functions
in formulas; AutoDroid MemoryGenerators ex-
plain the effects of GUI elements in Android apps.
Plan: The LLM designs a detailed method or course
of action to achieve a specific goal. E.g., Au-
toDroid TaskExecutor and WorkplaceRobot
TaskPlanning devise action plans to achieve goals.
The Plan and Reason characteristics are interrelated,
as planning also requires reasoning. The intended
handling of these characteristics is to categorize an
LLM component as Plan only and understand Plan
as implicitly subsuming Reason.
The effectiveness of LLMs as components of software
applications relies on their commonsense knowledge
and their ability to correctly interpret and handle a
broad variety of text inputs, including instructions,
examples, and code. It is reasonable to assume that a
fundamental capability, which might be termed Un-
terstand, is leveraged by every LLM component. As
it is not distinctive, the taxonomy does not list it
explicitly in the Skills dimension.
Applying this taxonomy dimension requires users to
determine which skills are most relevant and worth
highlighting in an LLM component. Given the versa-
tility of LLMs, reducing the focus to few predominant
skills is necessary to make categorizations distinctive
and expressive.
5.2.5. Output-related dimensions
Output Format characterizes the format of the LLM’s
output. As an output may consist of several parts in
diverse formats, this dimension is designed as non-
mutually exclusive, same as the Skills dimension. It
distinguishes four characteristics that are distinctive
and well discernible:
FreeText: unstructured natural language text out-
put. E.g., TruckPlatoon and MyCrunchGpt
DomainExpert generate text output in natural lan-
guage; MatrixProduction Manager and Ma-
trixProduction Operator produce FreeText ex-
planations complementing output in custom formats
to be parsed by the application.
Item: a single text item from a predefined set of
items, such as a class in a classification task. E.g.,
ProgPrompt ScenarioFeedback outputs either
True or False.
Code: source code or other highly formalized output
that the LLM has learned during its training, such
as a programming language, XML, or JSON. E.g.,
AutoDroid TaskExecutor produces code to steer
an Android app; MyCrunchGpt SettingsEditor
outputs JSON.
Structure: structured, formalized output adhering to
a custom format. E.g., LowCode Planning out-
puts text in a format that can be displayed as a flow
chart; MatrixProduction Manager and Oper-
ator produce output in custom formats combined
with FreeText explanations.
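The four characteristics differ in how the application consumes the output, e.g. as follows (all outputs invented for illustration):

```python
import json

free_text = "The platoon kept a stable gap."   # FreeText: presented as-is
item = "True"                                  # Item: one of a fixed set
assert item in {"True", "False"}
code = '{"setting": "angle", "value": 12}'     # Code: a learned standard format
parsed = json.loads(code)                      # parsed with a stock parser
structure = "step1 -> step2 -> step3"          # Structure: custom format
steps = structure.split(" -> ")                # parsed with custom code
print(free_text, parsed["value"], steps)
```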
Output Revision indicates whether the application
checks or revises the LLM-generated output before
utilization. These characteristics and their interpre-
tations mirror those in the Prompt Check dimension:
none: There is no revision of the LLM output.
User: The user revises the LLM output. E.g.,
the user improves the plan generated by LowCode
Planning.
LLM: A further LLM component checks or revises
the output of the LLM component under considera-
tion.
Program: Programmed code checks or revises the
LLM output. E.g., Honeycomb QueryAssistant
corrects the query produced by the LLM before exe-
cuting it [7].
There are no instances in the sample set where an-
other LLM revises or checks the output of the LLM.
Most sample applications do not check or revise the
LLM’s output, though several of them parse and
transform it. The purpose of the Output Revision
dimension is to indicate whether the application in-
cludes control or correction mechanisms, rather than
just parsing it.
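A sketch of a programmatic Output Revision step, loosely modeled on correcting an LLM-generated query before execution; the schema and the query format are invented.

```python
ALLOWED_COLUMNS = {"duration_ms", "status_code", "service"}

def revise_query(query: dict) -> dict:
    """Drop filter clauses that reference unknown columns and
    default the time range if the LLM omitted it."""
    revised = dict(query)
    revised["filters"] = [f for f in query.get("filters", [])
                          if f["column"] in ALLOWED_COLUMNS]
    revised.setdefault("time_range_s", 3600)
    return revised

llm_output = {"filters": [{"column": "duration_ms", "op": ">", "value": 500},
                          {"column": "made_up_col", "op": "=", "value": 1}]}
print(revise_query(llm_output))
```

Unlike mere parsing, this step actively corrects the output before it reaches its consumer.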
Output Consumer addresses the way of utilizing the
LLM output:
User signifies that the LLM output is presented to
a human user. E.g., the text output of TruckPla-
toon is intended for humans, as well as the output
of MyCrunchGPT DomainExpert.
LLM indicates that the output serves as a prompt
part in a further LLM invocation. E.g., the knowl-
edge base entries generated by an AutoDroid Mem-
oryGenerator become part of the prompt for
AutoDroid TaskExecutor; the plan output by
LowCode Planning serves as a part of the prompt
for LowCode Executing.
Program describes instances where the LLM output
is consumed and processed further by a software com-
ponent of the application. E.g., the output of Ma-
trixProduction Manager is handled by software
systems (including a Manufacturing Execution Sys-
tem) which use it to compute prompts for other LLM
components.
Engine covers scenarios where the LLM output is in-
tended for execution on a runtime engine. E.g., the
SQL query generated by SgpTod DstPrompter is
processed by a SQL interpreter; a part of the output
of MatrixProduction Operator is executed by
automation modules.
Although applications may parse and transform the
LLM output before use, the Output Consumer di-
mension is meant to identify the ultimate consumer,
such as an execution engine, rather than an interme-
diary parser or transformation code. When applica-
tions divide the LLM output into parts for different
consumers, users applying the taxonomy need to de-
termine which consumer is most relevant, since this
dimension is designed to be mutually exclusive.
5.3. Evaluation
Figure 2 displays the number of occurrences of char-
acteristics within the example instances. It must
be noted, however, that these do not reflect actual
frequencies, as similar LLM components within the
same application are aggregated together, indicated
by symbols ∗ and 2 in figure 1. Furthermore, Ex-
celCopilot likely includes occurrences of Prompt
Check and Output Revision which are not counted
due to insufficient system documentation.
We evaluate the taxonomy against commonly ac-
cepted quality criteria: comprehensiveness, robust-
ness, conciseness, mutual exclusiveness, explanatory
power, and extensibility [58, 42]. The taxonomy
encompasses all example instances including those
that were not considered during its development.
This demonstrates comprehensiveness. As figure 1
shows, all example instances have unique categoriza-
tions, supporting the taxonomy’s robustness. This
not only indicates that the dimensions and charac-
teristics are distinctive for the domain, but also high-
lights the wide variety possible in this field. Concise-
ness demands that the taxonomy uses the minimum
number of dimensions and characteristics. The tax-
onomy gains conciseness by identifying relatively few
and abstract characteristics within each dimension.
However, it does not adhere to the related subcri-
terion that each characteristic must be present in at
least one investigated instance [54]. Unoccupied char-
acteristics are retained for dimensions whose char-
acteristics were derived conceptually, specifically, for
the Prompt dimensions, the Output Revision dimen-
sion, and the Data Function dimension, enhancing
the taxonomy’s ability to illustrate design options
and inspire novel uses for LLM integrations in ap-
plications. Some dimensions are constructed in par-
allel, sharing common sets of characteristics. While
this affects conciseness, it makes the taxonomy easier
to understand and apply. As is often seen in tax-
onomy development [54], we deliberately waived the
requirement for mutual exclusiveness for some di-
mensions, specifically the Output Format and Skills
dimensions. In the context of this taxonomy, these
can equivalently be understood as sets of four
and six binary dimensions respectively, each divided
into characteristics “yes” and “no”. However, framing
them as a single dimension with non-mutually exclu-
sive characteristics seems more intuitive.
Metadimensions structure the taxonomy, and most
of the characteristics are illustrated through exam-
ples. These measures are recognized for enhancing
the explanatory power of a taxonomy [58]. The
taxonomy’s flat structure allows for the easy addition
of dimensions and characteristics, indicating that its
extensibility is good. Potential extensions and fur-
ther aspects of the taxonomy, including its usefulness
and ease of use, are discussed in section 6.
We visualize the taxonomy (or, strictly speaking, cat-
egorized instances) in a compact form using feature
vectors with characteristics abbreviated to single-
letter codes. This approach has a drawback, as
it requires referencing a legend. Additionally, non-
applicable characteristics in mutually exclusive di-
mensions are not visible, which means the design
space is not completely shown. However, the com-
pactness of the representation allows LLM compo-
nents within a common application to be grouped
closely, so that an LLM-integrated application can
be perceived as a unit without appearing convoluted.
This is a significant advantage for our purposes.
6. Discussion
The discussion first focuses on the taxonomy’s appli-
cability and ease of use before considering its overall
usefulness.
Figure 2: Occurrences of characteristics in the sample set of LLM-integrated applications.

Interaction:  App 8, Command 9, Dialog 4
Frequency:    Iterative 5, Single 16
Logic:        Control 8, cAlculate 13
UI:           Input 5, Output 2, Both 2
Data:         Read 2, Write 0, Both 0
Instruction:  User 0, LLM 0, Program 21
State:        User 0, LLM 2, Program 17
Task:         User 11, LLM 3, Program 7
Check:        User 0, LLM 0, Program 2
Skills:       reWrite 3, Create 1, conVerse 4, Inform 4, Reason 7, Plan 8
Format:       FreeText 10, Item 4, Code 6, Structure 8
Revision:     User 1, LLM 0, Program 1
Consumer:     User 5, LLM 3, Program 3, Engine 10
6.1. Applicability and ease of use
The taxonomy was effectively applied to LLM-
integrated applications based on research papers,
source code blog posts, recorded software demonstra-
tions, and developer experiences. The analysis of
LowCode revealed it to be a prompt definition tool
combined with an LLM-based chatbot, which devi-
ates from the strict definition of an LLM-integrated
application. Still, the taxonomy provided an effective
categorization and led to a clear understanding of the
system’s architecture.
Obviously, the ease of categorization depends on the
clarity and comprehensiveness of the available infor-
mation, which varies across analyzed systems. An-
alyzing applications of LLMs in novel and uncom-
mon domains can be challenging. While these papers
present inspiring and innovative ideas for LLM inte-
gration, such as MyCrunchGpt and TruckPla-
toon, they may prioritize explaining the application
area and struggle to detail the technical aspects of the
LLM integration. A taxonomy for LLM-integrated
applications can guide and facilitate the writing pro-
cess and lead to more standardized and comparable
descriptions.
Applying the taxonomy is often more straightforward
for research-focused systems. Omitting the com-
plexities required for real-world applications, such as
prompt checks and output revisions, their architec-
tures are simpler and easier to describe. A taxonomy
can point out such omissions.
A fundamental challenge in applying the taxonomy
arises from the inherent versatility of LLMs, which
allows to define LLM components serving multiple
purposes. This is exemplified by SgpTod Poli-
cyPrompter, where the prompt is designed to pro-
duce a structure with two distinct outcomes (a class
label and a chatbot response), and similarly by Ma-
trixProduction, as detailed in section 4.2. Draw-
ing an analogy to “function overloading” in classical
programming, such LLM components can be termed
“overloaded LLM components”.
A taxonomy can handle overloaded LLM components
in several ways: (1) define more dimensions as non-
mutually exclusive, (2) label overloaded LLM compo-
nents as “overloaded” without a more detailed catego-
rization, or (3) categorize them by their predominant
purpose or output. While the first approach allows
for the most precise categorization, it complicates the
taxonomy. Moreover, it will likely result in nearly all
characteristics being marked for some LLM compo-
nents, which is ultimately not helpful. The second
approach simplifies categorization but sacrifices much
detail. Our taxonomy adopts the third approach, en-
forcing simplification and abstraction in descriptions
of overloaded LLM components while retaining es-
sential detail. The taxonomy can easily be extended
to include approach (2) as an additional binary di-
mension.
6.2. Usefulness
The search for instances of LLM-integrated appli-
cations uncovered activities across various domains.
Substantial research involving LLM integrations, of-
ten driven by theoretical interests, is notable in robot
task planning [37, 51, 61, 33, 63] and in the TOD
field [23, 71, 4, 6, 56]. Research exploring LLM po-
tentials from a more practical perspective can be
found in novel domains, such as industrial produc-
tion [69, 26] and other technical areas [28, 70]. Fur-
thermore, developers of commercial LLM-based ap-
plications are beginning to communicate their efforts
and challenges [44, 7]. The taxonomy has been ap-
plied to example instances from these and additional
areas. This demonstrates its potential as a common,
unified framework for describing LLM-integrated ap-
plications, facilitating the comparison and sharing
of development knowledge between researchers and
practitioners across various domains.
When applying the taxonomy to the example in-
stances, it proved to be effective and useful as an
analytical lens. Descriptions of LLM-integrated ap-
plications commonly explain background information
and details of the application domain in addition to
its LLM integration. When used as an analytical
lens, the taxonomy quickly directs the analysis to-
wards the aspects of LLM integration, abstracting
from the specificities of the domain.
The taxonomy describes how LLM capabilities can be
leveraged in software systems, offers inspiration for
LLM-based functions, and outlines options for their
implementation as follows. The Skills dimension out-
lines the range of capabilities an LLM can contribute
to an application through a concise set of characteris-
tics, while the Function dimension suggests potential
uses, further supported by the Interaction dimension.
The Output Type dimension indicates options for en-
coding the output of an LLM in formats beyond plain
text, making it processable by software. The Output
Consumer dimension illustrates the diverse ways to
utilize or act upon LLM output. Thus, the taxonomy,
as intended, spans a design space for LLM integra-
tions.
The sampled LLM-integrated applications showcase
the creativity of researchers and developers in ap-
plying and exploiting the potentials of LLMs, rang-
ing from straightforward solutions (e.g., TruckPla-
toon) to highly sophisticated and technically com-
plex ones (e.g., AutoDroid). When using the tax-
onomy to inspire innovative uses of LLMs, we recom-
mend supplementing it with descriptions of example
applications to enhance its illustrativeness. The char-
acteristics of the Skills dimension are derived prag-
matically from the investigated example instances.
While they do not claim to be exhaustive or deeply
rooted in LLM theory or cognitive science, they add
relevant details to the categorizations and illustrate
design options and potentials for using LLMs as soft-
ware components.
It emerged as a key insight of this research that,
rather than analyzing an LLM-integrated application
in whole, analysis should start with the identifica-
tion and description of its distinct LLM components.
This is essential for gaining a clear understanding of
how the application utilizes the capabilities of LLMs.
The LLM-integrated application then manifests as a
combination of its LLM components. As shown in fig-
ure 1, the visualization effectively displays both the
quantity and the variety of LLM components in an
LLM-integrated application.
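The component-wise description advocated here can be sketched as a simple data structure. A minimal sketch, assuming illustrative dimension names and example values (`Function`, `Skills`, etc. are drawn from the taxonomy, but the concrete components and their feature values are hypothetical):

```python
# Sketch: an LLM-integrated application as a set of LLM components,
# each described by a feature vector over taxonomy dimensions.
# Dimension names and example values are illustrative, not the full taxonomy.

LLM_COMPONENT_DIMENSIONS = [
    "Function", "Skills", "Output Type", "Output Consumer", "Interaction"
]

def make_component(name, **features):
    """Create a feature vector (as a dict) for one LLM component."""
    unknown = set(features) - set(LLM_COMPONENT_DIMENSIONS)
    if unknown:
        raise ValueError(f"Unknown dimensions: {unknown}")
    return {"name": name, **features}

app = {
    "application": "ExampleApp",  # hypothetical application
    "components": [
        make_component("Planner", Function="task planning",
                       Skills="reasoning", **{"Output Type": "JSON"}),
        make_component("Responder", Function="chat response",
                       Skills="rewording", **{"Output Type": "text"}),
    ],
}

# Tabular visualization: one row per LLM component, as in figure 1.
for c in app["components"]:
    row = [c["name"]] + [str(c.get(d, "-")) for d in LLM_COMPONENT_DIMENSIONS]
    print(" | ".join(row))
```

The tabular print mirrors the feature-vector visualization: one visual entity per application, with LLM components as rows.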
LLM components interact through prompt chaining,
where one LLM component’s output feeds into an-
other’s input [67]. When an LLM-integrated applica-
tion involves such an interaction, the taxonomy rep-
resents it as an LLM characteristic within a Prompt
dimension. The taxonomy can capture the variance
in these interactions. For instance, in AutoDroid
TaskExecutor and LowCode Executing, the
LLM characteristic appears in the Prompt State di-
mension, because their prompt components (knowl-
edge base excerpts and prompt definition, respec-
tively) are generated by other LLM components in a
preparatory stage. In contrast, the LLM character-
istic appears in the Prompt Task dimension for Ma-
trixProduction Operator, because its prompt
part is generated individually by the MatrixPro-
duction Manager almost immediately before use.
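The two-stage interaction described above can be sketched as follows. `call_llm` is a stand-in for a real model invocation, and the function names are assumptions for illustration; the structure (a preparatory component whose output becomes part of a second component's prompt) mirrors the pattern described in the text:

```python
# Sketch of prompt chaining: one LLM component's output becomes part of
# another component's prompt. `call_llm` is a placeholder for a real API call.

def call_llm(prompt: str) -> str:
    # Stand-in: a real implementation would invoke an LLM service here.
    return f"<llm-output for: {prompt[:30]}...>"

def preparatory_component(document: str) -> str:
    """Stage 1: e.g., distill a knowledge-base excerpt from a document."""
    return call_llm(f"Summarize the key facts:\n{document}")

def executing_component(task: str, knowledge: str) -> str:
    """Stage 2: the prompt embeds the first component's output (Prompt State)."""
    prompt = f"Context:\n{knowledge}\n\nTask: {task}"
    return call_llm(prompt)

knowledge = preparatory_component("...app UI description...")
result = executing_component("open the settings screen", knowledge)
print(result)
```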
Taxonomy dimensions that cover entire LLM-integrated applications may be useful. Given their
complexity, these dimensions should be designed
based on a broader range of examples, which will only
become available as more LLM-integrated applica-
tions are developed and their architectures disclosed
in the future. Extensions to the taxonomy could
also include dimensions for describing the structure
of prompts in more detail, as well as dimensions ad-
dressing characteristics of the language models used.
Table 4: LLM usage in the sample instances. “Evals” indicates evaluations of various LLMs.

Application       | Used or best LLM | Evals | Comments
Honeycomb         | GPT-3.5          | yes   | GPT-4 far too slow
LowCode           | GPT-3.5-turbo    |       |
MyCrunchGpt       | GPT-3.5          |       | then awaiting the publication of GPT-4
MatrixProduction  | text-davinci-003 |       |
WorkplaceRobot    | GPT-3            |       |
AutoDroid         | GPT-4            | yes   | GPT-4 best for tasks requiring many steps
ProgPrompt        | GPT-3            |       | CODEX better, but access limits prohibitive
FactoryAssistants | GPT-3.5          |       |
SgpTod            | GPT-3.5          | yes   | GPT-3.5 best more often than others combined
TruckPlatoon      | GPT-3.5-turbo    |       |
ExcelCopilot      | N/A              |       | combined LLMs in Copilot for Microsoft 365 [43]
7. Conclusion
This paper investigates the use of LLMs as soft-
ware components.
Its perspective differs from cur-
rent software engineering research, which investigates
LLMs as tools for software development [14, 22] and
from research examining LLMs as autonomous agents
[11, 62, 57, 21]. This paper defines the concept of an
LLM component as a software component that re-
alizes its functionality by invoking an LLM. While
LLM components implicitly appear in various works,
termed, for example, “prompters”, “prompted LLM”,
“prompt module”, or “module” [30, 71, 6, 7], to our
knowledge, this concept has not yet been formalized
or systematically investigated.
The main contribution of this study is a taxonomy
for the analysis and description of LLM components,
extending to LLM-integrated applications by charac-
terizing them as combinations of LLM components.
In addition to the dimensions and characteristics of
the taxonomy, the study contributes a taxonomy vi-
sualization based on feature vectors, which is more
compact than the established visualizations such as
morphological boxes [55] or radar charts.
It repre-
sents an LLM-integrated application as one visual en-
tity in a tabular format, with its LLM components
displayed as rows.
The taxonomy was constructed using established
methods, based on a set of example instances, and
evaluated with a new set of example instances. The
combined samples exhibit broad variation along the
identified dimensions. For some instances, informa-
tion was not available, necessitating speculative in-
terpretation. However, since the sample is used for
identifying options rather than quantitative analysis,
this issue and the representativeness of the sample
are not primary concerns. The evaluation was con-
ducted by the developer of the taxonomy, consistent
with recent related work [21, 52, 48]. Using a new
sample for evaluation strengthens the validity of the
results.
A further significant contribution of the paper is a
systematic overview of a sample of LLM-integrated
applications across various industrial and technical
domains, illustrating a spectrum of conceptual ideas
and implementation options.
As the examples show, LLM components can re-
place traditionally coded functions in software sys-
tems and enable novel use cases. However, practi-
cal challenges persist. Developers report that new
software engineering methods are required, e.g., for
managing prompts as software assets and for test-
ing and monitoring applications. For instance, the
costs of LLM invocations prohibit the extensive au-
tomated testing that is standard in software devel-
opment practice [44, 7]. Challenges also arise from
the inherent indeterminism and uncontrollability of
LLMs. Small variations in prompts can lead to differ-
ences in outputs, while automated output processing
in LLM-integrated applications requires the output
to adhere to a specified format.
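A common mitigation for this format-adherence problem is to validate LLM output before downstream processing and let the application retry or fall back on failure. A minimal sketch, assuming JSON output with an expected set of keys (the schema and error handling are illustrative, not a prescribed implementation):

```python
import json

def parse_llm_output(raw: str, required_keys: set) -> dict:
    """Validate that LLM output is JSON containing the required keys.
    Raises ValueError so the caller can retry or fall back."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"Output is not valid JSON: {e}") from e
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"Missing keys: {missing}")
    return data

# Well-formed output is accepted...
ok = parse_llm_output('{"label": "greeting", "response": "Hi!"}',
                      {"label", "response"})
# ...while malformed output surfaces as an error the application can handle.
try:
    parse_llm_output("Sure! Here is the answer...", {"label"})
except ValueError as err:
    print("retrying after:", err)
```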
Furthermore,
the deployment mode of LLMs,
whether local (on the same hardware as the ap-
plication) or remote, managed privately or offered
as Language-Models-as-a-Service (LMaaS), has an impact on performance and usability. Table 4 gives an
overview of the LLMs used in our sample of appli-
cations. Where papers report evaluations of mul-
tiple LLMs, the table displays the chosen or best-
performing LLM. Although not representative, the
table provides some insights. LMaaS dominates,
likely due to its convenience, but more importantly,
due to the superior performance of the provided
LLMs.
Concerns regarding LMaaS include privacy, as sensi-
tive data might be transmitted to the LLM through
the prompt [64], and service quality, i.e., reliability,
availability, and costs. Costs typically depend on the
quantity of processed tokens. This quantity also af-
fects latency, which denotes the processing time of
an LLM invocation. A further important factor for
latency is the size of the LLM, with larger models
being slower [7].
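Since both cost and latency typically scale with the number of processed tokens, applications can compute a rough estimate before invocation. The per-token prices and processing speed below are placeholders for illustration, not actual vendor rates:

```python
def estimate_invocation(prompt_tokens: int, completion_tokens: int,
                        price_per_1k_prompt: float = 0.5,
                        price_per_1k_completion: float = 1.5,
                        seconds_per_1k_tokens: float = 2.0):
    """Rough cost/latency estimate for one LLM call.
    Prices and speed are illustrative placeholders, not real vendor rates."""
    cost = (prompt_tokens / 1000) * price_per_1k_prompt \
         + (completion_tokens / 1000) * price_per_1k_completion
    latency = (prompt_tokens + completion_tokens) / 1000 * seconds_per_1k_tokens
    return cost, latency

cost, latency = estimate_invocation(prompt_tokens=2000, completion_tokens=500)
print(f"~${cost:.3f}, ~{latency:.1f}s")
```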
When building LLM-based applications for real-
world use, the reliability and availability of an LMaaS
are crucial. Availability depends not only on the
technical stability of the service, but also on factors
such as increased latency during high usage periods
or usage restrictions imposed by the provider of an
LMaaS, as reported for ProgPrompt [51]. Beyond
technical aspects, the reliability of an LMaaS also en-
compasses its behavior. For instance, providers might
modify a model to enhance its security, potentially
impacting applications that rely on it.
Despite practical challenges, integrating LLMs into
systems has the potential to alter the way software
is constructed and the types of systems that can be
realized. Prompts are central to the functioning of
LLM components, which pose specific requirements
such as strict format adherence. Therefore, an im-
portant direction for future research will be prompt
engineering specifically tailored for LLM-integrated
applications.
In future work, the taxonomy will be extended to
distinguish finer-grained parts of prompts, allowing a
more detailed description and comparison of prompts
and related experimental results. Initial studies share
results on the format-following behavior of LLMs [68]
as a subtopic of instruction-following [73], derived
with synthetic benchmark data.
It is necessary to
complement their results with experiments using data
and tasks from real application development projects
because, in the early stages of this field, synthetic
benchmarks may fail to cover relevant aspects within
the wide range of possible options. Another crucial
research direction involves exploring how LLM char-
acteristics correspond to specific tasks, such as de-
termining the optimal LLM size for intent detection
tasks. The taxonomy developed in this study can sys-
tematize such experiments and their outcomes. Ad-
ditionally, it provides a structured framework for de-
lineating design choices in LLM components, making
it a valuable addition to future training materials.
Acknowledgements
Special thanks to Antonia Weber and Constantin We-
ber for proofreading and providing insightful and con-
structive comments.
References
[1] Eleni Adamopoulou and Lefteris Moussiades. An
Overview of Chatbot Technology. In Ilias Ma-
glogiannis, Lazaros Iliadis, and Elias Pimeni-
dis, editors, Artificial Intelligence Applications
and Innovations, IFIP Advances in Information
and Communication Technology, pages 373–383,
Cham, 2020. Springer International Publishing.
doi:10.1007/978-3-030-49186-4_31.
[2] Sebastian Bader, Erich Barnstedt, Heinz Be-
denbender, Bernd Berres, Meik Billmann, and
Marko Ristin. Details of the asset adminis-
tration shell-part 1: The exchange of informa-
tion between partners in the value chain of in-
dustrie 4.0 (version 3.0 rc02). Working Paper,
Berlin: Federal Ministry for Economic Affairs
and Climate Action (BMWK), 2022. doi.org/
10.21256/zhaw-27075.
[3] Marcos Baez, Florian Daniel, Fabio Casati, and
Boualem Benatallah. Chatbot integration in few
patterns. IEEE Internet Computing, pages 1–1,
2020. doi:10.1109/MIC.2020.3024605.
[4] Tom Bocklisch, Thomas Werkmeister, Daksh Varshneya, and Alan Nichol. Task-Oriented Dialogue with In-Context Learning. (arXiv:2402.12234), February 2024. doi:10.48550/arXiv.2402.12234.
[5] Yuzhe Cai, Shaoguang Mao, Wenshan Wu, Ze-
hua Wang, Yaobo Liang, Tao Ge, Chenfei Wu,
Wang You, Ting Song, Yan Xia, Jonathan Tien,
and Nan Duan. Low-code LLM: Visual Pro-
gramming over LLMs. (arXiv:2304.08103), April
2023. doi:10.48550/arXiv.2304.08103.
[6] Lang Cao. DiagGPT: An LLM-based Chatbot
with Automatic Topic Management for Task-
Oriented Dialogue. (arXiv:2308.08043), August
2023. doi:10.48550/arXiv.2308.08043.
[7] Phillip Carter.
All
the Hard Stuff No-
body Talks About When Building Prod-
ucts with LLMs.
Honeycomb, May
2023.
https://www.honeycomb.io/blog/
hard-stuff-nobody-talks-about-llm.
[8] Phillip Carter. So We Shipped an AI Product. Did It Work? Honeycomb, October 2023. https://www.honeycomb.io/blog/we-shipped-ai-product.
[9] Banghao Chen, Zhaofeng Zhang, Nicolas Langrené, and Shengxin Zhu. Unleashing the potential of prompt engineering in Large Language Models: A comprehensive review. (arXiv:2310.14735), October 2023. doi:10.48550/arXiv.2310.14735.
[10] Wang Chen, Yan-yi Liu, Tie-zheng Guo, Da-peng Li, Tao He, Li Zhi, Qing-wen Yang, Hui-han Wang, and Ying-you Wen. Systems engineering issues for industry applications of large language model. Applied Soft Computing, 151:111165, January 2024. doi:10.1016/j.asoc.2023.111165.
[11] Yuheng Cheng, Ceyao Zhang, Zhengwen Zhang,
Xiangrui Meng, Sirui Hong, Wenhao Li, Zihao
Wang, Zekai Wang, Feng Yin, Junhua Zhao, and
Xiuqiang He. Exploring Large Language Model
based Intelligent Agents: Definitions, Methods,
and Prospects.
(arXiv:2401.03428), January
2024. doi:10.48550/arXiv.2401.03428.
[12] Silvia Colabianchi, Andrea Tedeschi,
and
Francesco Costantino. Human-technology in-
tegration with industrial conversational agents:
A conceptual architecture and a taxonomy for
manufacturing.
Journal of Industrial Infor-
mation Integration, 35:100510, October 2023.
doi:10.1016/j.jii.2023.100510.
[13] Jonathan Evertz, Merlin Chlosta, Lea Schön-
herr, and Thorsten Eisenhofer. Whispers in
the Machine: Confidentiality in LLM-integrated
Systems.
(arXiv:2402.06922), February 2024.
doi:10.48550/arXiv.2402.06922.
[14] Angela Fan, Beliz Gokkaya, Mark Harman,
Mitya Lyubarskiy, Shubho Sengupta, Shin Yoo,
and Jie M. Zhang. Large Language Models
for Software Engineering: Survey and Open
Problems. (arXiv:2310.03533), November 2023.
doi:10.48550/arXiv.2310.03533.
[15] Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing
Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei
Wang, Xiangyu Zhao, Jiliang Tang, and Qing
Li. Recommender Systems in the Era of Large
Language Models (LLMs). (arXiv:2307.02046),
August 2023. doi:10.48550/arXiv.2307.02046.
[16] David Fortin. Microsoft Copilot
in Excel:
What It Can and Can’t Do. YouTube, Jan-
uary 2024. https://www.youtube.com/watch?
v=-fsu9IXMZvo.
[17] Martin Fowler. Patterns of Enterprise Applica-
tion Architecture. 2002. ISBN 978-0-321-12742-
6.
[18] Shirley Gregor. The nature of theory in infor-
mation systems. MIS quarterly, pages 611–642,
2006. doi:10.2307/25148742.
[19] Yanchu Guan, Dong Wang, Zhixuan Chu, Shiyu
Wang, Feiyue Ni, Ruihua Song, Longfei Li, Jin-
jie Gu, and Chenyi Zhuang.
Intelligent Vir-
tual Assistants with LLM-based Process Au-
tomation. (arXiv:2312.06677), December 2023.
doi:10.48550/arXiv.2312.06677.
[20] Muhammad Usman Hadi, Qasem Al Tashi,
Rizwan Qureshi, Abbas Shah, Amgad Muneer,
Muhammad Irfan, Anas Zafar, Muhammad Bi-
lal Shaikh, Naveed Akhtar, Jia Wu, and Seyedali
Mirjalili. Large Language Models: A Compre-
hensive Survey of its Applications, Challenges,
Limitations, and Future Prospects, September
2023. doi:10.36227/techrxiv.23589741.v3.
[21] Thorsten Händler.
A Taxonomy for Au-
tonomous LLM-Powered Multi-Agent Architec-
tures.
In Proceedings of the 15th Interna-
tional Joint Conference on Knowledge Discov-
ery, Knowledge Engineering and Knowledge
Management, pages 85–98, Rome, Italy, 2023.
SCITEPRESS - Science and Technology Publi-
cations. doi:10.5220/0012239100003598.
[22] Xinyi Hou, Yanjie Zhao, Yue Liu, Zhou Yang,
Kailong Wang, Li Li, Xiapu Luo, David Lo, John
Grundy, and Haoyu Wang. Large Language
Models for Software Engineering: A Systematic
Literature Review. (arXiv:2308.10620), Septem-
ber 2023. doi:10.48550/arXiv.2308.10620.
[23] Vojtěch Hudeček and Ondrej Dusek. Are Large Language Models All You Need for Task-Oriented Dialogue? In Svetlana Stoyanchev, Shafiq Joty, David Schlangen, Ondrej Dusek, Casey Kennington, and Malihe Alikhani, editors, Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 216–228, Prague, Czechia, September 2023. Association for Computational Linguistics. doi:10.18653/v1/2023.sigdial-1.21.
[24] Kevin Maik Jablonka, Qianxiang Ai, Alexander
Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly,
Andres M. Bran, Stefan Bringuier, Catherine L.
Brinson, Kamal Choudhary, Defne Circi, Sam
Cox, Wibe A. de Jong, Matthew L. Evans, Nico-
las Gastellu, Jerome Genzling, María Victoria
Gil, Ankur K. Gupta, Zhi Hong, Alishba Im-
ran, Sabine Kruschwitz, Anne Labarre, Jakub
Lála, Tao Liu, Steven Ma, Sauradeep Majum-
dar, Garrett W. Merz, Nicolas Moitessier, Elias
Moubarak, Beatriz Mouriño, Brenden Pelkie,
Michael Pieler, Mayk Caldas Ramos, Bojana
Ranković, Samuel Rodriques, Jacob Sanders,
Philippe Schwaller, Marcus Schwarting, Jiale
Shi, Berend Smit, Ben Smith, Joren Van Herck,
Christoph Völker, Logan Ward, Sean War-
ren, Benjamin Weiser, Sylvester Zhang, Xiaoqi
Zhang, Ghezal Ahmad Zia, Aristana Scour-
tas, K. Schmidt, Ian Foster, Andrew White,
and Ben Blaiszik. 14 examples of how LLMs
can transform materials science and chem-
istry: A reflection on a large language model
hackathon. Digital Discovery, 2(5):1233–1250,
2023. doi:10.1039/D3DD00113J.
[25] Jean Kaddour,
Joshua Harris, Maximilian
Mozes, Herbie Bradley, Roberta Raileanu, and
Robert McHardy.
Challenges and Applica-
tions of Large Language Models, July 2023.
doi:10.48550/arXiv.2307.10169.
[26] Samuel Kernan Freire, Mina Foosherian, Chao-
fan Wang, and Evangelos Niforatos. Harnessing
Large Language Models for Cognitive Assistants
in Factories. In Proceedings of the 5th Interna-
tional Conference on Conversational User Inter-
faces, CUI ’23, pages 1–6, New York, NY, USA,
July 2023. Association for Computing Machin-
ery. doi:10.1145/3571884.3604313.
[27] Anis Koubaa, Wadii Boulila, Lahouari Ghouti,
Ayyub Alzahem, and Shahid Latif. Explor-
ing ChatGPT Capabilities and Limitations: A
Survey. IEEE Access, 11:118698–118721, 2023.
doi:10.1109/ACCESS.2023.3326474.
[28] Varun Kumar, Leonard Gleyzer, Adar Ka-
hana, Khemraj Shukla, and George Em Karni-
adakis. MyCrunchGPT: A LLM Assisted Frame-
work for Scientific Machine Learning.
Jour-
nal of Machine Learning for Modeling and
Computing, 4(4), 2023.
doi.org/10.1615/
JMachLearnModelComput.2023049518.
[29] Dennis
Kundisch, Jan Muntermann,
Anna Maria Oberländer, Daniel Rau, Maximilian Röglinger, Thorsten Schoormann, and Daniel Szopinski. An Update for Taxonomy Designers. Business & Information Systems Engineering, 64(4):421–439, August 2022. doi:10.1007/s12599-021-00723-x.
[30] Gibbeum Lee, Volker Hartmann, Jongho Park, Dimitris Papailiopoulos, and Kangwook Lee. Prompted LLMs as chatbot modules for long open-domain conversation. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Findings of the Association for Computational Linguistics: ACL 2023, pages 4536–4554, Toronto, Canada, July 2023. Association for Computational Linguistics. doi:10.18653/v1/2023.findings-acl.277.
[31] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zheng-
bao Jiang, Hiroaki Hayashi, and Graham Neu-
big. Pre-train, Prompt, and Predict: A Sys-
tematic Survey of Prompting Methods in Nat-
ural Language Processing.
ACM Comput-
ing Surveys, 55(9):195:1–195:35, January 2023.
doi:10.1145/3560815.
[32] Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang,
Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan
Zheng, and Yang Liu. Prompt Injection at-
tack against LLM-integrated Applications, June
2023. doi:10.48550/arXiv.2306.05499.
[33] Yuchen
Liu, Luigi Palmieri, Sebastian Koch, Ilche Georgievski, and Marco Aiello.
DELTA: Decomposed Efficient Long-Term
Robot Task Planning using Large Language
Models.
(arXiv:2404.03275), April 2024.
doi:10.48550/arXiv.2404.03275.
[34] Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan
Jia, and Neil Zhenqiang Gong. Prompt Injec-
tion Attacks and Defenses in LLM-Integrated
Applications. (arXiv:2310.12815), October 2023.
doi:10.48550/arXiv.2310.12815.
[35] Shaoguang Mao, Qiufeng Yin, Yuzhe Cai, and Dan Qiao. LowCodeLLM. https://github.com/chenfei-wu/TaskMatrix/tree/main/LowCodeLLM, May 2023.
[36] Scott McLean, Gemma J. M. Read, Jason
Thompson, Chris Baber, Neville A. Stanton, and
Paul M. Salmon. The risks associated with Ar-
tificial General Intelligence: A systematic re-
view. Journal of Experimental & Theoretical
Artificial Intelligence, 35(5):649–663, July 2023.
doi:10.1080/0952813X.2021.1964003.
[37] Oier Mees, Jessica Borja-Diaz, and Wolfram
Burgard. Grounding Language with Visual Af-
In 2023
fordances over Unstructured Data.
IEEE International Conference on Robotics
and Automation (ICRA), pages 11576–11582,
London, United Kingdom, May 2023. IEEE.
doi:10.1109/ICRA48891.2023.10160396.
[38] Grégoire Mialon, Roberto Dessì, Maria
Lomeli, Christoforos Nalmpantis, Ram Pa-
sunuru, Roberta Raileanu, Baptiste Rozière,
Timo Schick,
Jane Dwivedi-Yu, Asli Ce-
likyilmaz, Edouard Grave, Yann LeCun,
and Thomas Scialom.
Augmented Lan-
guage Models: A Survey, February 2023.
doi:10.48550/arXiv.2302.07842.
[39] Melanie Mitchell. Debates on the nature of artificial general intelligence. Science, 383(6689):eado7069, March 2024. doi:10.1126/science.ado7069.
[40] Quim Motger, Xavier Franch, and Jordi Marco. Software-Based Dialogue Systems: Survey, Taxonomy, and Challenges. ACM Computing Surveys, 55(5):91:1–91:42, December 2022. doi:10.1145/3527450.
[41] Fiona Fui-Hoon Nah, Ruilin Zheng, Jingyuan
Cai, Keng Siau, and Langtao Chen. Gen-
erative AI and ChatGPT: Applications, chal-
lenges, and AI-human collaboration.
Jour-
nal of Information Technology Case and Ap-
plication Research, 25(3):277–304, July 2023.
doi:10.1080/15228053.2023.2233814.
[42] Robert C Nickerson, Upkar Varshney, and Jan Muntermann. A method for taxonomy development and its application in information systems. European Journal of Information Systems, 22(3):336–359, May 2013. doi:10.1057/ejis.2012.26.
[43] Camille Pack, Cern McAtee, Samantha Robert-
son, Dan Brown, Aditi Srivastava, and Kweku
Ako-Adjei. Microsoft Copilot for Microsoft
365 overview.
https://learn.microsoft.
com/en-us/copilot/microsoft-365/
microsoft-365-copilot-overview,
2024.
March
[44] Chris Parnin, Gustavo Soares, Rahul Pandita, Sumit Gulwani, Jessica Rich, and Austin Z. Henley. Building Your Own Product Copilot: Challenges, Opportunities, and Needs. (arXiv:2312.14231), December 2023. doi:10.48550/arXiv.2312.14231.
[45] Rodrigo Pedro, Daniel Castro, Paulo Car-
From Prompt In-
reira, and Nuno Santos.
jections to SQL Injection Attacks: How Pro-
tected is Your LLM-Integrated Web Appli-
cation?
(arXiv:2308.01990), August 2023.
doi:10.48550/arXiv.2308.01990.
[46] Ken Peffers, Tuure Tuunanen, Marcus A.
Rothenberger, and Samir Chatterjee. A De-
sign Science Research Methodology for Infor-
mation Systems Research.
Journal of Man-
agement Information Systems, 24(3):45–77, De-
cember 2007.
ISSN 0742-1222, 1557-928X.
doi:10.2753/MIS0742-1222240302.
[47] Mohaimenul Azam Khan Raiaan, Md. Saddam Hossain Mukta, Kaniz Fatema, Nur Mohammad Fahad, Sadman Sakib, Most Marufatul Jannat Mim, Jubaer Ahmad, Mohammed Eunus Ali, and Sami Azam. A Review on Large Language Models: Architectures, Applications, Taxonomies, Open Issues and Challenges. IEEE Access, 12:26839–26874, 2024. doi:10.1109/ACCESS.2024.3365742.
[48] Jack Daniel Rittelmeyer and Kurt Sandkuhl.
Morphological Box for AI Solutions: Evalua-
tion and Refinement with a Taxonomy Develop-
ment Method. In Knut Hinkelmann, Francisco J.
López-Pellicer, and Andrea Polini, editors, Per-
spectives in Business Informatics Research, Lec-
ture Notes in Business Information Process-
ing, pages 145–157, Cham, 2023. Springer Na-
ture Switzerland. doi:10.1007/978-3-031-43126-
5_11.
[49] Shubhra Kanti Karmaker Santu and Dongji Feng. TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks. (arXiv:2305.11430), October 2023. doi:10.48550/arXiv.2305.11430.
[50] Thorsten Schoormann, Frederik Möller, and Daniel Szopinski. Exploring Purposes of Using Taxonomies. In Proceedings of the International Conference on Wirtschaftsinformatik (WI), Nuernberg, Germany, February 2022.
[51] Ishika Singh, Valts Blukis, Arsalan Mousa-
vian, Ankit Goyal, Danfei Xu, Jonathan Trem-
blay, Dieter Fox, Jesse Thomason, and Ani-
mesh Garg. ProgPrompt: Generating Situated
Robot Task Plans using Large Language Mod-
els. In 2023 IEEE International Conference on
Robotics and Automation (ICRA), pages 11523–
11530, London, United Kingdom, May 2023.
IEEE. doi:10.1109/ICRA48891.2023.10161317.
[52] Gero Strobel, Leonardo Banh, Frederik Möller,
and Thorsten Schoormann. Exploring Gener-
ative Artificial Intelligence: A Taxonomy and
Types. In Proceedings of the 57th Hawaii Inter-
national Conference on System Sciences, Hon-
olulu, Hawaii, January 2024.
https://hdl.
handle.net/10125/106930.
[53] Hendrik Strobelt, Albert Webson, Victor Sanh,
Benjamin Hoover, Johanna Beyer, Hanspeter
Pfister, and Alexander M. Rush.
Interac-
tive and Visual Prompt Engineering for Ad-
hoc Task Adaptation With Large Language
Models.
IEEE Transactions on Visualization
and Computer Graphics, pages 1–11, 2022.
doi:10.1109/TVCG.2022.3209479.
[54] Daniel Szopinski, Thorsten Schoormann, and
Dennis Kundisch. Criteria as a Prelude for Guid-
ing Taxonomy Evaluation. In Proceedings of the
53rd Hawaii International Conference on Sys-
tem Sciences, 2020. https://hdl.handle.net/
10125/64364.
[55] Daniel Szopinski, Thorsten Schoormann, and Dennis Kundisch. Visualize different: Towards researching the fit between taxonomy visualizations and taxonomy tasks. In Tagungsband der 15. Internationalen Tagung Wirtschaftsinformatik (WI 2020), Potsdam, 2020. doi:10.30844/wi_2020_k9-szopinski.
[56] Manisha Thakkar and Nitin Pise. Unified Ap-
proach for Scalable Task-Oriented Dialogue Sys-
tem.
International Journal of Advanced Com-
puter Science and Applications, 15(4), 2024.
doi:10.14569/IJACSA.2024.01504108.
[57] Oguzhan Topsakal and Tahir Cetin Akinci. Cre-
ating Large Language Model Applications Uti-
lizing Langchain: A Primer on Developing LLM
Apps Fast.
In International Conference on
Applied Engineering and Natural Sciences, vol-
ume 1, pages 1050–1056, 2023.
[58] Michael Unterkalmsteiner and Waleed Adbeen.
A compendium and evaluation of taxonomy
quality attributes.
Expert Systems, 40(1):
e13098, 2023. doi:10.1111/exsy.13098.
[59] Bryan Wang, Gang Li, and Yang Li. Enabling Conversational Interaction with Mobile UI using Large Language Models. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI '23, pages 1–17, New York, NY, USA, April 2023. Association for Computing Machinery. doi:10.1145/3544548.3580895.
[61] Jun Wang, Guocheng He, and Yiannis Kantaros. Safe Task Planning for Language-Instructed Multi-Robot Systems using Conformal Prediction. (arXiv:2402.15368), February 2024. doi:10.48550/arXiv.2402.15368.
[62] Lei Wang, Chen Ma, Xueyang Feng, Zeyu
Zhang, Hao Yang, Jingsen Zhang, Zhiyuan
Chen, Jiakai Tang, Xu Chen, Yankai Lin,
Wayne Xin Zhao, Zhewei Wei, and Jirong
Wen.
A survey on large language model
based autonomous agents. Frontiers of Com-
puter Science,
18(6):186345, March 2024.
doi:10.1007/s11704-024-40231-1.
[63] Shu Wang, Muzhi Han, Ziyuan Jiao, Zeyu
Zhang, Ying Nian Wu, Song-Chun Zhu, and
Hangxin Liu. LLM3:Large Language Model-
based Task and Motion Planning with Motion
Failure Reasoning.
(arXiv:2403.11552), March
2024. doi:10.48550/arXiv.2403.11552.
[64] Hao Wen, Yuanchun Li, Guohong Liu, Shan-
hui Zhao, Tao Yu, Toby Jia-Jun Li, Shiqi Jiang,
Yunhao Liu, Yaqin Zhang, and Yunxin Liu. Em-
powering LLM to use Smartphone for Intelligent
Task Automation. (arXiv:2308.15272), Septem-
ber 2023. doi:10.48550/arXiv.2308.15272.
[65] Hao Wen, Yuanchun Li, and Sean KiteFly-
Kid. MobileLLM/AutoDroid. Mobile LLM, Jan-
uary 2024. https://github.com/MobileLLM/
AutoDroid.
[66] Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, and Douglas C. Schmidt. A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. (arXiv:2302.11382), February 2023. doi:10.48550/arXiv.2302.11382.
[60] Can Wang, Bolin Zhang, Dianbo Sui, Zhiying Tu, Xiaoyu Liu, and Jiabao Kang. A Survey on Effective Invocation Methods of Massive LLM Services. (arXiv:2402.03408), February 2024. doi:10.48550/arXiv.2402.03408.
[67] Tongshuang Wu, Michael Terry, and Carrie Jun Cai. AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22, pages 1–22, New York, NY, USA, April 2022. Association for Computing Machinery. doi:10.1145/3491102.3517582.
[68] Congying Xia, Chen Xing, Jiangshu Du, Xinyi
Yang, Yihao Feng, Ran Xu, Wenpeng Yin,
and Caiming Xiong.
FOFO: A Benchmark
to Evaluate LLMs’ Format-Following Capa-
bility.
(arXiv:2402.18667), February 2024.
doi:10.48550/arXiv.2402.18667.
[69] Yuchen Xia, Manthan Shenoy, Nasser Jazdi, and Michael Weyrich. Towards autonomous system: Flexible modular production system enhanced with large language model agents. In 2023 IEEE 28th International Conference on Emerging Technologies and Factory Automation (ETFA), pages 1–8, 2023. doi:10.1109/ETFA54631.2023.10275362.
[70] I. de Zarzà, J. de Curtò, Gemma Roig,
and Carlos T. Calafate.
LLM Adaptive
PID Control for B5G Truck Platooning Sys-
tems.
Sensors, 23(13):5899, January 2023.
doi:10.3390/s23135899.
[71] Xiaoying Zhang, Baolin Peng, Kun Li, Jingyan Zhou, and Helen Meng. SGP-TOD: Building Task Bots Effortlessly via Schema-Guided LLM Prompting. (arXiv:2305.09067), May 2023. doi:10.48550/arXiv.2305.09067.
[72] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi
Tang, Xiaolei Wang, Yupeng Hou, Yingqian
Min, Beichen Zhang, Junjie Zhang, Zican Dong,
Yifan Du, Chen Yang, Yushuo Chen, Zhipeng
Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li,
Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun
Nie, and Ji-Rong Wen. A Survey of Large Lan-
guage Models.
(arXiv:2303.18223), May 2023.
doi:10.48550/arXiv.2303.18223.
[73] Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, and Le Hou. Instruction-Following Evaluation for Large Language Models. (arXiv:2311.07911), November 2023. doi:10.48550/arXiv.2311.07911.
|
synthetic_cpt | 1 | Role_of_Data_Augmentation_Strategies_in_Knowledge_Distillation_for_Wearable_Sensor_Data.pdf | IEEE INTERNET OF THINGS JOURNAL, VOL. 0, NO. 0, JANUARY 2022
1
Role of Data Augmentation Strategies in
Knowledge Distillation for Wearable Sensor Data
Eun Som Jeon, Student Member, IEEE, Anirudh Som, Ankita Shukla, Kristina Hasanaj, Matthew P. Buman,
and Pavan Turaga, Senior Member, IEEE
2
2
0
2
n
a
J
1
]
G
L
.
s
c
[
1
v
1
1
1
0
0
.
1
0
2
2
:
v
i
X
r
a
Abstract—Deep neural networks are parametrized by several
thousands or millions of parameters, and have shown tremendous
success in many classification problems. However, the large
number of parameters makes it difficult to integrate these models
into edge devices such as smartphones and wearable devices. To
address this problem, knowledge distillation (KD) has been widely
employed, that uses a pre-trained high capacity network to train
a much smaller network, suitable for edge devices. In this paper,
for the first time, we study the applicability and challenges of
using KD for time-series data for wearable devices. Successful
application of KD requires specific choices of data augmentation
methods during training. However, it is not yet known if there
exists a coherent strategy for choosing an augmentation approach
during KD. In this paper, we report the results of a detailed study
that compares and contrasts various common choices and some
hybrid data augmentation strategies in KD based human activity
analysis. Research in this area is often limited, as few comprehensive
wearable-device databases are available in the public domain.
Our study considers databases ranging from small-scale, publicly
available ones to one derived from a large-scale interventional
study into human activity and sedentary behavior.
We find that the choice of data augmentation techniques during
KD has a variable level of impact on end performance, and that
the optimal network choice as well as data augmentation
strategies are specific to the dataset at hand. However, we also
conclude with a general set of recommendations that can provide
a strong baseline performance across databases.
Index Terms—Knowledge Distillation, Data Augmentation,
time-series, Wearable Sensor Data.
I. INTRODUCTION
DEEP LEARNING has achieved state-of-the-art performance
in various fields, including computer vision [1],
[2], [3], [4], speech recognition [5], [6], and wearable sensors
analysis [7], [8]. In general, stacking more layers or increasing
the number of learnable parameters causes deep networks
to exhibit improved performance [2], [3], [4], [8], [9], [10].
However, this makes the model large, resulting in additional
compute and power requirements for training, storage, and
deployment. These challenges can hinder the
ability to incorporate such models into edge devices. Many
studies have explored techniques such as network pruning [11],
E. Jeon, A. Shukla and P. Turaga are with the School of Arts, Media and
Engineering and School of Electrical, Computer and Energy Engineering,
Arizona State University, Tempe, AZ 85281 USA email: (ejeon6@asu.edu;
Ankita.Shukla@asu.edu; pturaga@asu.edu).
A. Som is with the Center for Vision Technologies Group at SRI Interna-
tional, Princeton, NJ 08540 USA email: Anirudh.Som@sri.com
K. Hasanaj and M. P. Buman are with the College of Health Solutions,
Arizona State University, Phoenix, AZ 85004 USA email: (khasanaj@asu.edu;
mbuman@asu.edu).
This has been accepted at the IEEE Internet of Things Journal.
[12], quantization [12], [13], low-rank factorization [14], and
Knowledge Distillation (KD) [15] to compress deep learning
models. At the cost of lower classification accuracy, some of
these methods help to make the deep learning model smaller
and increase the speed of inference on the edge devices. Post-
training or fine-tuning strategies can be applied to recover
the lost classification performance [12], [13]. In contrast,
KD requires neither fine-tuning nor any post-training
processing.
KD is a simple and popular technique used to develop
smaller, efficient models by distilling the learnt knowledge/weights
from a larger and more complex model. The
smaller and larger models are referred to as student and teacher
models, respectively. KD allows the student model to retain
the classification performance of the larger teacher model.
Recently, different variants of KD have been proposed [16],
[17]. These variations rely on different choices of network
architectures, teacher models, and various features used to train
the student model. In addition, teacher models trained with early
stopping for KD (ESKD) have been explored and have
helped improve the efficacy of KD [18]. However, to the
best of our knowledge, there is no previous study that explores
the effects, challenges, and benefits of KD for human activity
recognition using wearable sensor data.
In this paper, we first study KD for human activity recognition
from time-series data collected from wearable sensors.
Second, we evaluate the role of data augmentation
techniques in KD, using several time-domain data augmentation
strategies during both the training and the testing
phase. The key highlights and findings from our
study are summarized below:
• We compare and contrast several KD approaches for
time-series data and conclude that ESKD performs better
than other techniques.
• We perform KD on time-series data with different sizes
of teacher and student networks. We corroborate results
from previous studies that suggest that the performance of
a higher capacity teacher model is not necessarily better.
• We study the effects of data augmentation methods on
both teacher and student models. We do this to identify
which combination of augmentation methods give the
most benefit in terms of classification performance.
• Our study is evaluated on the human activity recognition
task and is conducted on a small-scale publicly available
dataset as well as a large-scale dataset. This ensures that
the observations are reliable irrespective of dataset size.
Fig. 1. An overview of the standard knowledge distillation framework (left) and the proposed knowledge distillation with data augmentation (right). A high-capacity
network known as the teacher is used to guide the learning of a smaller network known as the student. A set of augmentation strategies is used to train
both the teacher and student networks.
The rest of the paper is organized as follows. In Section
II, we provide a brief overview of KD techniques as well
as data augmentation strategies. In Section III, we present
which augmentation methods are used and its effects on
time-series data. In Section IV, we describe our experimental
results and analysis. In Section V, we discuss our findings and
conclusions.
II. BACKGROUND
1) Knowledge Distillation: The goal of KD is to supervise
a small student network by a large teacher network, such
that the student network achieves comparable or improved
performance over the teacher model. This idea was first explored
by Buciluǎ et al. [19], followed by several developments
such as Hinton et al. [15]. The main idea of KD is to use
soft labels, i.e., the soft output probabilities of a trained teacher
network, which contain more information than a class label alone;
this is illustrated in Fig. 1. For instance, if two classes
have high probabilities for a data sample, the sample must lie close
to the decision boundary between these two classes. Therefore,
mimicking these probabilities helps student models acquire
knowledge from teachers beyond what training with labeled data
(hard labels) alone provides.
During training, the loss function L for a student network
is defined as:
L = (1 − λ)LC + λLK
(1)
where LC is the standard cross-entropy loss, LK is the KD loss,
and λ is a hyperparameter with 0 < λ < 1.
In supervised learning, the error between the output of the
softmax layer of a student network and ground-truth label is
penalized by the cross-entropy loss:
LC = H(sof tmax(as), yg)
(2)
where H(·) denotes a cross entropy loss function, as is logits
of a student (inputs to the final softmax), and yg is a ground
truth label. In KD, instead of using peaky
probability distributions, which may produce less accurate
results, Hinton et al. [15] proposed to use probabilities with
temperature scaling: the output of the teacher network,
ft = sof tmax(at/τ ), and of the student, fs = sof tmax(as/τ ), are
softened by a hyperparameter τ , where τ > 1. The teacher and
student try to match these probabilities by a KL-divergence
loss:
LK = τ 2KL(ft, fs)
(3)
where KL(·) is the KL-divergence loss function.
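Concretely, Eqs. (1)-(3) can be sketched in plain Python. The defaults λ = 0.7 and τ = 4 below are the values later reported as best for GENEactiv (Section IV), and the logit vectors are illustrative only:

```python
import math

def softmax(logits, tau=1.0):
    # Temperature-scaled softmax over a list of logits.
    zs = [z / tau for z in logits]
    m = max(zs)                      # shift for numerical stability
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

def cross_entropy(probs, y):
    # H(softmax(a_s), y_g) for a ground-truth class index y.
    return -math.log(probs[y] + 1e-12)

def kl_divergence(p, q):
    # KL(p || q) between two discrete distributions.
    return sum(pi * (math.log(pi + 1e-12) - math.log(qi + 1e-12))
               for pi, qi in zip(p, q))

def kd_loss(a_s, a_t, y, lam=0.7, tau=4.0):
    # L = (1 - lam) * L_C + lam * L_K, per Eqs. (1)-(3).
    l_c = cross_entropy(softmax(a_s), y)          # hard-label term
    f_t = softmax(a_t, tau)                       # softened teacher output
    f_s = softmax(a_s, tau)                       # softened student output
    l_k = tau ** 2 * kl_divergence(f_t, f_s)      # distillation term
    return (1 - lam) * l_c + lam * l_k
```

In practice both terms are averaged over a mini-batch of softened teacher/student outputs; this scalar version is for exposition.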
There have been many approaches to improve the performance
of distillation. Some methods add
losses on intermediate layers of the student network to bring it
closer to the teacher [20], [21]. Averaging consecutive student
models tends to produce better-performing students [22].
Applying KD repeatedly, known as sequential knowledge
distillation, also improves its performance
[23].
Recently, learning procedures for improving the efficacy of KD
have been presented. Goldblum et al. [24] suggested an adversarially
robust distillation (ARD) loss that trains the
student network on perturbed (adversarial) data while matching the
output features of the teacher. Interestingly, ARD students even show higher
accuracy than their teachers. We adopt augmentation methods
to create data similar to the adversarial data of ARD.
ARD verifies the effect of using adversarial data for KD;
however, which data augmentations are useful for
training KD is not well explored. Unlike ARD, to determine the
role of augmentation methods in KD and which method improves
its performance, we use augmentation methods
that generate different kinds of transformed data for teachers and
students. In detail, by adopting augmentation methods, we can
generate various combinations of teachers and students
trained with the same or different augmentation methods. This
helps us understand which transformations and combinations
can improve the performance of KD. We explain the augmentation
methods for KD in Section III in detail. Additionally,
KD tends to be effective when transferring information
from an early-stopped teacher model, a training strategy
called ESKD [18]. Early-stopped teachers produce better
students than standard knowledge distillation (Full KD)
with fully-trained teachers. Cho et al. [18] demonstrated the
efficacy of ESKD on image datasets. We implement ESKD
on time-series data and investigate its efficacy when training
with data transformed by various augmentation methods. We
explain more details in Section III and discuss the efficiency
of ESKD in later sections.
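As a rough illustration (not the exact selection rule of [18]), an early-stopped teacher can be picked from checkpoints saved during teacher training. The `budget=0.75` default mirrors the three-fourths training budget mentioned later in Section IV-C; the checkpoint tuples here are hypothetical:

```python
def pick_eskd_teacher(checkpoints, total_epochs, budget=0.75):
    # checkpoints: list of (epoch, val_accuracy, state) tuples saved
    # while training the teacher.  ESKD distills from a checkpoint
    # taken early in training (here, the best-validating one within
    # the first `budget` fraction of epochs) instead of the
    # fully-trained teacher used by Full KD.
    cutoff = budget * total_epochs
    early = [c for c in checkpoints if c[0] <= cutoff]
    return max(early, key=lambda c: c[1])
```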
In general, many studies focus on the structure of networks
and on adding loss functions to the existing KD framework [25],
[26]. However, the performance of most approaches depends
on the capacity of the student model. The availability of sufficient
training data for teacher and student models can also affect
the final result. In this regard, the factors that have an effect
on the distillation process need to be systematically explored,
especially for time-series data from wearable sensors.
2) Data Augmentation: Data augmentation methods have
been used to boost the generalizability of models and avoid
over-fitting. They have been used in many applications such as
time-series forecasting [27], anomaly detection [28], classifi-
cation [8], [29], and so on. There are many data augmentation
approaches for time-series data, which can be broadly grouped
under two categories [30]. The first category consists of trans-
formations in time, frequency, and time-frequency domains
[30], [31]. The second group consists of more advanced meth-
ods like decomposition [32], model-based [33], and learning-
based methods [34], [30].
Time-domain augmentation methods are straightforward
and popular. These approaches directly manipulate the orig-
inal input time-series data. For example, the original data
is transformed directly by injecting Gaussian noise or other
perturbations such as step-like trends and spikes. Window
cropping or slicing has also been used in time-domain
transformation, which is similar to the computer-vision method of
cropping samples [35]. Other transformations include window
warping that compresses or extends a randomly chosen time
range and flipping the signal in time-domain. Additionally, one
can use blurring and perturbations in the data points, especially
for anomaly detection applications [36]. A few approaches
have focused on data augmentation in the frequency domain.
Gao et al. [36] proposed perturbations for data augmentation
in frequency domain, which improves the performance of
anomaly detection by convolutional neural networks. The
performance of classification was found to be improved by
amplitude adjusted Fourier transform and iterated amplitude
adjusted Fourier transform, which are transformation methods
in the frequency domain [37]. Time-frequency augmentation
methods have also been recently investigated. SpecAugment
is a Fourier-transform-based method that operates on the
Mel-frequency representation of speech time-series data [31]. The method was
found to improve the performance of speech recognition. In
[38], a short-time Fourier transform is used to generate a
spectrogram for classification by an LSTM neural network.
Decomposition-based, model-based, and learning-based
methods are used as advanced data augmentation methods. For
decomposition, time-series data are disintegrated to create new
data [32]. Kegel et al. [32] first decompose the time-series
into trend, seasonality, and residual components; new time-series
data are then generated with a deterministic and a stochastic
component. Bootstrapping on the decomposed residuals
to generate augmented data was found to help the
performance of a forecasting model [39]. Model-based
approaches model the dynamics, using statistical
models [33], mixture models [40], and so on. In [33], a model-based
method was used to address class imbalance in time-series
classification. Learning-based methods are implemented
with learning frameworks such as generative adversarial nets
(GAN) [34] and reinforcement learning [41]. These methods
generate augmented data by pre-trained models and aim to
create realistic synthetic data [34], [41].
Finally, augmentation methods can be combined together
and applied simultaneously to the data. Combining augmenta-
tion methods in time-domain helps to improve performance in
classification [42]. However, combining various augmentation
methods may result in a large amount of augmented data
and increased training time, and may not always improve the
performance [30].
III. STRATEGIES FOR KNOWLEDGE DISTILLATION WITH
DATA AUGMENTATION
We investigate strategies for training KD
with time-series data and identify augmentation methods for
teachers and students that provide better performance.
The strategies include two scenarios. In the first, we apply
augmentation methods only when the student model is trained
by KD, with a teacher model trained on the original
data. In the second, augmentation methods are applied not only
to students but also to teachers: an augmentation method is
used when a teacher model is trained from scratch, to be used
later as the pre-trained model for distillation, and the same or a
different augmentation method is used when the student is
trained by KD. The set of augmentation
approaches for KD is illustrated in Fig. 1 and described in
further detail later in this section. We also explore the effects
of ESKD on time-series data. ESKD uses a teacher obtained
early in the training process and generates better
students than Full KD, which uses fully-trained teachers
[18]. The strategy is motivated by the observation that a teacher's
accuracy improves initially but begins to decrease towards the end
of training, falling below the earlier
accuracy. We adopt early-stopped teachers with augmentation
methods for our experiments presented in Section IV.
Fig. 2. Illustration of different augmentation methods used in our knowledge
distillation framework. The original data is shown in blue and the correspond-
ing transformed data with data augmentation method is shown in red.
IEEE INTERNET OF THINGS JOURNAL, VOL. 0, NO. 0, JANUARY 2022
4
In order to see the effects of augmentation on distillation, we
adopt three time-domain augmentation methods: removal,
Gaussian noise injection, and shifting. These transformations
preserve the original pattern, the window length, and the
periodic points. We use time-domain transformations so that
we can more easily analyze the results from each method
and their combinations. These methods have also been widely
used for training deep learning networks [30]. We apply
combinations of the augmentation methods, one combining
removal and shifting and one combining all three, to the data
to see the relationships between the properties of the datasets
for teachers and students in KD. An example of the different
transformations used for data augmentation is shown in Fig.
2. We describe each of the transforms below:
• Removal: is used to erase the amplitude values of
sequential samples. The values of the chosen samples
to be erased are replaced by the amplitude of
the first point. For example, assume that n
samples (Xt+1, Xt+2, · · · , Xt+n) with amplitudes
(At+1, At+2, · · · , At+n) are chosen to be erased.
At+1 is the amplitude of the first sample Xt+1 and
is assigned to all of them; that is, the values
(At+1, At+2, · · · , At+n) are mapped to
(At+1, At+1, · · · , At+1). The first point and the number
of samples to be erased are chosen randomly. The result
of removal is shown in Fig. 2 with a green dashed circle.
• Noise Injection: To inject noise, we apply Gaussian noise
with mean 0 and a random standard deviation. The result
of adding noise is shown in Fig. 2 with yellow dashed
circles.
• Shifting: To shift the data while keeping characteristics
such as peak values and periodic patterns in
the signal, we adopt index shifting and rolling
to generate new patterns; a signal shifted by 100%
through this augmentation corresponds to the original one.
For example, assume the total number of samples
is 50 and 10 time-steps (20% of the total number
of samples) are chosen to be shifted. The amplitudes
of samples (X1, X2, · · · , X11, · · · , X50) are
(A1, A2, · · · , A11, · · · , A49, A50). By shifting
10 time-steps, (A41, A42, · · · , A1, · · · , A39, A40)
are newly assigned to samples
(X1, X2, · · · , X11, · · · , X49, X50). The number of
time-steps to be shifted is chosen randomly. Shifting is
shown in Fig. 2 with green dashed arrows.
• Mix1: Applies removal as well as shifting to the same
data.
• Mix2: Applies removal, Gaussian noise injection, and
shifting simultaneously to the data.
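The transforms above can be sketched in plain Python, treating a window as a list of amplitudes. The 50% caps follow the GENEactiv settings described later in Section IV; the helper names are ours:

```python
import random

def removal(x):
    # Replace a random run of samples with the amplitude of its first point.
    y = list(x)
    n = random.randint(1, len(y) // 2)        # at most 50% of the window
    t = random.randint(0, len(y) - n)
    y[t:t + n] = [y[t]] * n
    return y

def add_noise(x, max_std=0.2):
    # Inject zero-mean Gaussian noise with a random standard deviation.
    std = random.uniform(0.0, max_std)
    return [v + random.gauss(0.0, std) for v in x]

def shift(x):
    # Circularly roll the signal by a random number of time-steps,
    # preserving peak values and periodic patterns.
    k = random.randint(0, len(x) // 2)        # at most 50% of the window
    return x[-k:] + x[:-k] if k else list(x)

def mix1(x):
    return shift(removal(x))                  # removal + shifting

def mix2(x):
    return shift(add_noise(removal(x)))       # removal + noise + shifting
```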
IV. EXPERIMENTS AND ANALYSIS
In this section, we describe datasets, settings, ablations, and
results of our experiments.
A. Dataset Description
We perform experiments on two datasets: GENEActiv [43]
and PAMAP2 [44], both of which are wearable sensors based
activity datasets. We evaluate multiple teachers and students
of various capacities for KD with data augmentation methods.
1) GENEactiv: GENEactiv dataset [43] consists of 29
activities over 150 subjects. The dataset was collected with
a GENEactiv sensor which is a light-weight, waterproof, and
wrist-worn tri-axial accelerometer. The sampling frequency
of the sensors is 100Hz. In our experiments, we used 14
activities which can be categorized as daily activities such
as walking, sitting, standing, driving, and so on. Each class
has over approximately 900 data samples and the distribution
and details for activities are illustrated in Fig. 3. We split the
dataset for training and testing with no overlap in subjects.
The number of subjects for training and testing are over 130
and 43, respectively. A window size for a sliding window
is 500 time-steps or 5 seconds and the process for temporal
windows is full-non-overlapping sliding windows. The number
of windows for training is approximately 16000 and testing is
6000.
Fig. 3. Distribution of GENEactiv data across different activities. Each
sample has 500 time-steps.
2) PAMAP2: PAMAP2 dataset [44] consists of 18 physical
activities for 9 subjects. The 18 activities are categorized
as 12 daily activities and 6 optional activities. The dataset
was obtained by measurements of heart rate, temperature,
accelerometers, gyroscopes, and magnetometers. The sensors
were placed on hands, chest, and ankles of the subject. The
total number of dimensions in the time-series is 54 and
the sampling frequency is 100Hz. To compare with previous
methods, in experiments on this dataset we used
leave-one-subject-out cross-validation, holding out the ith
subject in the ith fold. The input data is in the form of time-
series from 40 channels of 4 IMUs and 12 daily activities. To
compare with previous methods, the recordings of 4 IMUs
are downsampled to 33.3Hz. The 12 action classes are: lying,
sitting, standing, walking, running, cycling, nordic walking,
ascending stairs, descending stairs, vacuum cleaning, ironing,
and rope jumping. Each class and subject are described in
Table I. There is missing data for some subjects and the
distribution of the dataset is imbalanced. The window size for
the sliding window is 100 time-steps, or 3 seconds, and the step
size for segmenting the sequences is 22 time-steps, or 660 ms,
TABLE I
DETAILS OF PAMAP2 DATASET. THE DATASET CONSISTS OF 12 ACTIVITIES RECORDED FOR 9 SUBJECTS.

Activity | Sbj.101 | Sbj.102 | Sbj.103 | Sbj.104 | Sbj.105 | Sbj.106 | Sbj.107 | Sbj.108 | Sbj.109 | Sum | Nr. of subjects
Lying | 407 | 350 | 329 | 344 | 354 | 349 | 383 | 361 | 0 | 2877 | 8
Sitting | 352 | 335 | 432 | 381 | 402 | 345 | 181 | 342 | 0 | 2770 | 8
Standing | 325 | 383 | 307 | 370 | 330 | 365 | 385 | 377 | 0 | 2842 | 8
Walking | 333 | 488 | 435 | 479 | 481 | 385 | 506 | 474 | 0 | 3481 | 8
Running | 318 | 135 | 0 | 0 | 369 | 341 | 52 | 246 | 0 | 1461 | 6
Cycling | 352 | 376 | 0 | 339 | 368 | 306 | 339 | 382 | 0 | 2462 | 7
Nordic walking | 302 | 446 | 0 | 412 | 394 | 400 | 430 | 433 | 0 | 2817 | 7
Ascending stairs | 233 | 253 | 147 | 243 | 207 | 192 | 258 | 168 | 0 | 1701 | 8
Descending stairs | 217 | 221 | 218 | 206 | 185 | 162 | 167 | 137 | 0 | 1513 | 8
Vacuum cleaning | 343 | 309 | 304 | 299 | 366 | 315 | 322 | 364 | 0 | 2622 | 8
Ironing | 353 | 866 | 420 | 374 | 496 | 568 | 442 | 496 | 0 | 3995 | 8
Rope jumping | 191 | 196 | 0 | 0 | 113 | 0 | 0 | 129 | 92 | 721 | 6
which allows semi-non-overlapping sliding windows with 78%
overlapping [44].
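The two windowing schemes can be sketched as follows; `sliding_windows` is a hypothetical helper, not code from the paper:

```python
def sliding_windows(series, window, step):
    # Segment a 1-D series into fixed-length windows with stride `step`.
    # step == window gives full non-overlapping windows (GENEactiv:
    # window=500); window=100 with step=22 reproduces the PAMAP2
    # semi-non-overlapping scheme, with (100 - 22) / 100 = 78% overlap.
    return [series[i:i + window]
            for i in range(0, len(series) - window + 1, step)]
```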
B. Analysis of Distillation
For experiments on GENEactiv, we run 200 epochs for
each model using SGD with momentum 0.9 and an initial
learning rate lr = 0.1. The lr drops by a factor of 0.5 after 10
epochs and by a factor of 0.1 every ⌊t/3⌋ epochs, where t is the
total number of epochs. For experiments on PAMAP2, we run 180
epochs for each model using SGD with momentum 0.9 and an initial
learning rate lr = 0.05. The lr drops by a factor of 0.2 after 10
epochs and by a factor of 0.1 every ⌊t/3⌋ epochs, where t is the
total number of epochs. The results are averaged over 3 runs for
both datasets. To improve performance, feature engineering [45],
[46], feature selection, and reducing confusion by
combining classes [47] could additionally be applied. However,
to focus on the effects of KD, which is based on feature
learning [46], feature-engineering/selection methods to boost
performance are not applied, and all classes as specified in
Section IV-A are used in the following experiments.
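A sketch of the GENEactiv schedule, assuming "drops by" means multiplicative scaling and that the 0.1 drops fall at every multiple of ⌊t/3⌋ epochs (the exact milestones are not fully specified in the text):

```python
def learning_rate(epoch, total_epochs=200, lr0=0.1,
                  first_drop=0.5, later_drop=0.1):
    # Initial lr, scaled by `first_drop` after 10 epochs and by
    # `later_drop` at each multiple of total_epochs // 3.
    lr = lr0
    if epoch >= 10:
        lr *= first_drop
    lr *= later_drop ** (epoch // (total_epochs // 3))
    return lr
```

The PAMAP2 variant would use `total_epochs=180`, `lr0=0.05`, and `first_drop=0.2` under the same assumptions.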
1) Training from scratch to find a Teacher: To find a teacher
for KD, we conducted experiments with training from scratch
based on two different network architectures: ResNet [1] and
WideResNet [48]. These networks have been widely used in
state-of-the-art studies of KD [16], [17], [24], [18].
We modified the structures so that the compared models have
similar numbers of trainable parameters. As described in Table II,
for training from scratch, WideResNet (WRN) tends to show
better performance than ResNet18(k), where k is the output
dimension of the first layer. The increase in accuracy with
the dimension of each block is similar to that of the basic ResNet.
2) Setting hyperparameters for KD: To set the hyperparameters
of KD, we conducted several experiments with
different temperatures τ and weights λ. We set
WRN16-3 as the teacher network [18] and WRN16-1 as the
student network; the results are shown in Fig. 4. For the temperature,
values τ ∈ {3, 4, 5} are generally used [18]. A high temperature
mitigates the peakiness of the teacher and softens the
output signal. In our experiments, however, high temperatures
did not effectively increase the accuracy. With τ = 4, the results
were better than other choices for both datasets with Full KD
Fig. 4. Effect of hyperparameters τ and λ on the performance of Full
KD and ESKD approaches. The results are reported on the GENEactiv dataset
with WRN16-3 and WRN16-1 networks as teacher and student models,
respectively.
and ESKD [18]. For λ = 0.7 and 0.99, we obtained the best
results with Full KD and ESKD for GENEactiv and PAMAP2,
respectively.
3) Analyzing Distillation with Different Sizes of Models: To
analyze distillation with different model sizes, WRN16-k
and WRN28-k were used as teacher networks of different
capacities, varying in depth and width k. WRN16-1
and WRN28-1 were used as the corresponding student networks.
As mentioned in the previous section, in general, a higher-capacity
network trained from scratch shows better accuracy
for both WRN16 and WRN28. However, as shown in Fig. 5, in
IEEE INTERNET OF THINGS JOURNAL, VOL. 0, NO. 0, JANUARY 2022
6
TABLE II
ACCURACY FOR VARIOUS MODELS TRAINED FROM SCRATCH ON GENEACTIV

Model | # Parameters | Accuracy (%)
ResNet18(8) | 62,182 | 63.75±0.42
ResNet18(16) | 244,158 | 65.84±0.69
ResNet18(24) | 545,942 | 66.47±0.21
ResNet18(32) | 967,534 | 66.33±0.12
ResNet18(48) | 2,170,142 | 68.13±0.22
ResNet18(64) | 3,851,982 | 68.17±0.21
WRN16-1 | 61,374 | 67.66±0.37
WRN16-2 | 240,318 | 67.84±0.36
WRN16-3 | 536,254 | 68.89±0.56
WRN16-4 | 949,438 | 69.00±0.22
WRN16-6 | 2,127,550 | 70.04±0.05
WRN16-8 | 3,774,654 | 69.02±0.15
WRN28-1 | 126,782 | 68.63±0.48
WRN28-2 | 500,158 | 69.15±0.24
WRN28-3 | 1,119,550 | 69.23±0.27
WRN28-4 | 1,985,214 | 69.29±0.51
WRN28-6 | 4,455,358 | 70.99±0.44
TABLE V
ACCURACY (%) FOR RELATED METHODS ON GENEACTIV DATASET WITH
7 CLASSES

Method | Window length 1000 | Window length 500
WRN16-1 | 89.29±0.32 | 86.83±0.15
WRN16-3 | 89.53±0.15 | 87.95±0.25
WRN16-8 | 89.31±0.21 | 87.29±0.17
ESKD (WRN16-3) | 89.88±0.07 (89.74) | 88.16±0.15 (88.30)
ESKD (WRN16-8) | 89.58±0.13 (89.68) | 87.47±0.11 (87.75)
Full KD (WRN16-3) | 89.84±0.21 (88.95) | 87.05±0.19 (86.02)
Full KD (WRN16-8) | 89.36±0.06 (88.74) | 86.38±0.06 (85.08)
SVM [49] | 86.29 | 85.86
Choi et al. [50] | 89.43 | 87.86
Fig. 5. Results of distillation from different teacher models of WRN16-k and
WRN28-k on GENEactiv dataset. The higher capacity of teachers does not
always increase the accuracy of students.
TABLE III
ACCURACY FOR VARIOUS MODELS ON GENEACTIV DATASET

Student | Teacher | Teacher Acc. (%) | Student Acc. (%)
WRN16-1 (ESKD) | WRN16-2 | 69.06 | 69.34±0.36
WRN16-1 (ESKD) | WRN16-3 | 69.99 | 69.49±0.22
WRN16-1 (ESKD) | WRN16-4 | 69.80 | 69.37±0.31
WRN16-1 (ESKD) | WRN16-6 | 70.24 | 67.93±0.13
WRN16-1 (ESKD) | WRN16-8 | 70.19 | 68.62±0.33
WRN16-1 (Full KD) | WRN16-3 | 69.68 | 68.62±0.22
WRN16-1 (Full KD) | WRN16-8 | 69.28 | 68.68±0.17
most cases, the results with WRN16-k teachers are better
than those with WRN28-k teachers, which are deeper. Within
WRN16, the accuracy with a WRN16-3 teacher is higher than
with teachers of larger width. Therefore, a teacher of
higher capacity is not always guaranteed to produce a student
with better accuracy.
4) Knowledge Distillation based on Fully Iterated and
Early Stopped Models: We performed additional experiments
TABLE IV
ACCURACY FOR VARIOUS MODELS ON PAMAP2 DATASET

Student | Teacher | Teacher Acc. (%) | Student Acc. (%)
WRN16-1 (ESKD) | WRN16-2 | 84.86 | 86.18±2.44
WRN16-1 (ESKD) | WRN16-3 | 85.67 | 86.38±2.25
WRN16-1 (ESKD) | WRN16-4 | 85.23 | 85.95±2.27
WRN16-1 (ESKD) | WRN16-6 | 85.51 | 86.37±2.35
WRN16-1 (ESKD) | WRN16-8 | 85.17 | 85.11±2.46
WRN16-1 (Full KD) | WRN16-3 | 81.52 | 84.31±2.24
WRN16-1 (Full KD) | WRN16-8 | 81.69 | 83.70±2.52
TABLE VI
ACCURACY FOR RELATED METHODS ON PAMAP2 DATASET

Method | Accuracy (%)
WRN16-1 | 82.81±2.51
WRN16-3 | 84.18±2.28
WRN16-8 | 83.39±2.26
ESKD (WRN16-3) | 86.38±2.25 (85.67)
ESKD (WRN16-8) | 85.11±2.46 (85.17)
Full KD (WRN16-3) | 84.31±2.24 (81.52)
Full KD (WRN16-8) | 83.70±2.52 (81.69)
Chen and Xue [51] | 83.06
Ha et al. [52] | 73.79
Ha and Choi [53] | 74.21
Kwapisz [54] | 71.27
Catal et al. [55] | 85.25
Kim et al. [56] | 81.57
with WRN16-k which gives the best results. Table III and
Table IV give detailed results for GENEactiv and PAMAP2,
respectively. Although the student capacity in KD is much
lower than in training from scratch, the accuracy is
higher. For instance, for GENEactiv with WRN16-8
trained from scratch, the accuracy is 69.02% and the
number of trainable parameters is about 3.8 million (Table II). The
number of parameters of WRN16-1 as a KD student is 61
thousand, approximately 1.6% of that. However,
the accuracy of a student with a WRN16-2 teacher under ESKD
is 69.34%, which is higher than the result of training WRN16-8
from scratch. This shows that a model can be compressed
by KD with preserved or improved accuracy. We also tested
with the 7 classes of the GENEactiv dataset used by the
123468WRN16/28-k (k: width)66.567.067.568.068.569.069.570.070.5Accuracy (%)KD with WRN16/WRN28WRN16-kWRN28-kIEEE INTERNET OF THINGS JOURNAL, VOL. 0, NO. 0, JANUARY 2022
7
method in [50]. This work used over 50 subjects for the testing
set. KD students were WRN16-1, trained with τ = 4
and λ = 0.7. As shown in Table V, where brackets denote the
structure of the teachers and their accuracy, ESKD with a WRN16-3
teacher shows the best accuracy for 7 classes, higher
than models trained from scratch, Full KD, and
previous methods [49], [50]. In most cases, students are
even better than their teacher. Across various subsets of GENEactiv
with different numbers of classes and window lengths, ESKD
shows better performance than Full KD. In Table IV, the best
accuracy on PAMAP2 is 86.38%, from ESKD with a teacher of
WRN16-3, which is higher than the results from Full KD. This
result is even better than previous methods [57], which are
listed in Table VI, where brackets denote the structure
of the teachers and their accuracy. Therefore, KD allows model
compression and improves accuracy across datasets,
ESKD tends to show better performance than Full KD,
and higher-capacity teacher models do not always
generate better-performing student models.
C. Effect of augmentation on student model training
To understand distillation effects for various teacher
capacities and augmentation methods, WRN16-1,
WRN16-3, and WRN16-8 are selected as “Small”, “Medium”,
and “Large” models, respectively. ESKD is used for this
experiment, as it tends to show better performance than
Full KD and requires only three-fourths of the total number of
training epochs [18].
To find augmentation methods that impact KD for
student training, we first trained a teacher from scratch
on the original datasets. Second, we trained students from
the pre-trained teacher with augmentation methods that have
different properties, including removal, noise injection, shifting,
Mix1, and Mix2. For the experiments on GENEactiv, for removal,
the number of samples to be removed is less than 50% of
the total number of samples. The first point and the exact
number of samples to be erased are chosen randomly. To add
noise, the value for standard deviation of Gaussian noise is
chosen uniformly at random between 0 and 0.2. For shifting,
the number of time-steps to be shifted is less than 50% of
the total number of samples. For Mix1 and Mix2, the same
parameters are applied. For experiments on PAMAP2, the
number of samples for removal is less than 10% of the total
number of samples and standard deviation of Gaussian noise
for adding noise is less than 0.1. The parameter for shifting
is less than 50% of the total number of samples. The same
parameters of each method are applied for Mix1 and Mix2.
The window length for PAMAP2 is only 100 time-steps, or
3 seconds, as the data are downsampled from 100Hz. Compared to
GENEactiv, whose window size is 500 time-steps or 5 seconds,
a small transformation can affect the result very prominently
for PAMAP2. Therefore, lower parameter values are applied to PAMAP2.
The parameters for these augmentation methods and the sensor
data for PAMAP2 to be transformed are randomly chosen.
These conditions for applying augmentation methods are used
in the following experiments as well.
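These transformations can be sketched as follows. The parameter ranges mirror the GENEactiv settings above (removed and shifted segments below 50% of the window, noise σ drawn from [0, 0.2]); details such as zeroing the removed segment rather than deleting and re-padding it are assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def removal(x, max_frac=0.5):
    """Erase (zero out) a randomly placed, randomly sized segment."""
    n = len(x)
    length = int(rng.integers(1, int(n * max_frac)))
    start = int(rng.integers(0, n - length))
    out = x.copy()
    out[start:start + length] = 0.0
    return out

def add_noise(x, max_sigma=0.2):
    """Add Gaussian noise with a uniformly drawn standard deviation."""
    sigma = rng.uniform(0.0, max_sigma)
    return x + rng.normal(0.0, sigma, size=x.shape)

def shift(x, max_frac=0.5):
    """Circularly shift the window by a random number of time-steps."""
    steps = int(rng.integers(1, int(len(x) * max_frac)))
    return np.roll(x, steps)

def mix1(x):                 # Removal + Shifting
    return shift(removal(x))

def mix2(x):                 # Removal + Noise + Shifting
    return shift(add_noise(removal(x)))
```

For PAMAP2 the same code applies with the smaller bounds stated above (e.g. `max_frac=0.1` for removal and `max_sigma=0.1` for noise).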
TABLE VII
ACCURACY (%) OF TRAINING FROM SCRATCH ON WRN16-1 WITH
DIFFERENT AUGMENTATION METHODS

Method        GENEactiv     PAMAP2
Original      68.60±0.23    82.81±2.51
Removal       69.20±0.32    83.34±2.41
Noise         67.60±0.36    82.80±2.66
Shift         68.69±0.22    83.91±2.18
Mix1(R+S)     69.31±0.96    83.59±2.37
Mix2(R+N+S)   67.89±0.11    83.64±2.76
Fig. 6. The validation accuracy for training from scratch and Full KD.
WRN16-1 is used for training from scratch. For Full KD, WRN16-3 is a
teacher network and WRN16-1 is a student network. R, N, S, M1, and M2 in
the legend are removal, adding noise, shifting, Mix1, and Mix2, respectively.
1) Analyzing augmentation methods on training from
scratch and KD: The accuracy of training from scratch with
different augmentation methods on WRN16-1 is presented in
Table VII. Most of the accuracies from augmentation methods,
except adding noise which can alter peaky points and change
gradients, are higher than the accuracy obtained by learning
with the original data. Compared to other methods, adding
noise may influence classification between similar activities
such as walking, which is included in both datasets as detailed
sub-categories.
The validation accuracy of scratch and Full KD learning
on GENEactiv dataset is presented in Fig. 6. Training from
scratch with the original data shows higher accuracy than
KD with original data in very early stages before 25 epochs.
However, KD shows better accuracy than the models trained
from scratch after 40 epochs. KD with augmentation tends to
perform better in accuracy than models trained from scratch
and KD learning with the original data alone. That is, data
augmentation can help to boost the generalization ability of
student models for KD. Mix1 shows the highest accuracy
among the results. The highest accuracies are seen in early
stages, which are less than 120 epochs for all methods, where
120 epochs is less than three-fourths of the total number of
epochs. On closer inspection, we find that the best accuracies
are actually seen in less than 20 epochs for training from
scratch and Full KD, less than 60 epochs for shifting, Mix1,
and Mix2, and less than 120 epochs for adding noise, respec-
tively. This implies that not only early stopped teachers but
IEEE INTERNET OF THINGS JOURNAL, VOL. 0, NO. 0, JANUARY 2022
TABLE VIII
ACCURACY (%) OF KD FROM VARIANTS OF TEACHER CAPACITY AND
AUGMENTATION METHODS ON GENEACTIV (λ = 0.7)

Method        Small (68.87)   Medium (69.99)   Large (70.19)
Original      69.71±0.31      69.61±0.17       68.62±0.33
Removal       69.80±0.34      70.23±0.41       70.28±0.68
Noise         69.26±0.08      69.12±0.19       69.38±0.39
Shift         70.63±0.19      70.43±0.89       70.00±0.20
Mix1(R+S)     70.56±0.57      71.35±0.20       70.22±0.10
Mix2(R+N+S)   69.27±0.31      69.51±0.28       69.62±0.21
(Teacher accuracy in parentheses.)

TABLE XI
ACCURACY (%) OF KD FROM VARIANTS OF TEACHER CAPACITY AND
AUGMENTATION METHODS ON PAMAP2 (λ = 0.99)

Method        Small (85.42)   Medium (85.67)   Large (85.17)
Original      86.37±2.35      86.38±2.25       85.11±2.46
Removal       84.66±2.67      85.70±2.40       84.81±2.52
Noise         84.77±2.65      85.21±2.41       85.05±2.40
Shift         86.08±2.42      86.65±2.13       85.53±2.28
Mix1(R+S)     84.93±2.71      85.88±2.28       84.73±2.54
Mix2(R+N+S)   82.94±2.76      83.94±2.70       83.28±2.50
(Teacher accuracy in parentheses.)
TABLE IX
ACCURACY (%) OF KD FROM VARIANTS OF TEACHER CAPACITY AND
AUGMENTATION METHODS ON GENEACTIV (λ = 0.99)

Method        Small (68.87)   Medium (69.99)   Large (70.19)
Original      69.44±0.19      67.80±0.36       68.67±0.20
Removal       69.48±0.22      69.75±0.40       70.01±0.27
Noise         69.99±0.14      70.20±0.06       70.12±0.14
Shift         70.96±0.10      70.42±0.06       70.16±0.24
Mix1(R+S)     70.40±0.27      70.07±0.38       69.36±0.16
Mix2(R+N+S)   70.56±0.23      69.88±0.16       69.71±0.30
(Teacher accuracy in parentheses.)
also early stopped students are able to perform better than fully
iterated models. In training based on KD with augmentation
methods, the accuracy rises in the early stages; however, it
suffers towards the end of training. These trends for KD are
similar to the previous ESKD study [18]. For the following
experiments, we restrict our analyses to ESKD.
2) Analyzing Augmentation Methods on Distillation: The
accuracy of each augmentation method with KD is summa-
rized in Table VIII and IX for GENEactiv and Table X and
XI for PAMAP2. The results were obtained from small-sized
students of ESKD. The gray colored cells of these tables
are the best accuracy for the augmentation method among
the different capacity teachers of KD. When a higher λ is
used, distillation from teachers is improved, and the best
results are obtained when the teacher capacity is smaller.
Also, the best performance of students, when learning with
augmentation methods and the original data, is achieved with
similar teacher capacities. For example, for GENEactiv with
λ = 0.7 the best results are generated from teachers of various
capacities, but with λ = 0.99 the best results tend to be seen
with smaller-capacity teachers. Even though the evaluation
protocol for PAMAP2 is leave-one-subject-out with
an imbalanced distribution of data, with λ = 0.7 the best
results are also obtained from larger-capacity teachers.
Furthermore, results from both datasets verify that larger and
more accurate teachers do not always result in better students.
Also, the best result from shifting is seen at the same teacher
capacity as with the original data. This might be because
shifted data includes the same time-series ‘shapes’ as the
original data. Shifting is a simple but effective method for
training with KD. For all teachers on PAMAP2 with λ = 0.99,
the accuracies from training by shifting are even higher than
those of the other augmentation methods. Compared to previous
methods [57] on PAMAP2, the result obtained by shifting
outperforms the others. Furthermore, although the student
network of KD has the same number of parameters as the
network trained from scratch (WRN16-1), its accuracy is much
higher; the results of Mix1 on GENEactiv and of shifting on
PAMAP2 with the medium teacher are approximately 2.7 and
3.8 percentage points better, respectively, than the results of
training from scratch on the original data. These accuracies are
even better than those of their teachers. This also verifies that
KD with an augmentation method that includes shifting helps
to obtain improved results.

TABLE X
ACCURACY (%) OF KD FROM VARIANTS OF TEACHER CAPACITY AND
AUGMENTATION METHODS ON PAMAP2 (λ = 0.7)

Method        Small (85.42)   Medium (85.67)   Large (85.17)
Original      84.75±2.64      84.47±2.32       84.90±2.38
Removal       85.16±2.46      85.51±2.27       85.02±2.47
Noise         84.96±2.59      85.52±2.26       84.85±2.43
Shift         85.21±2.21      85.45±2.19       85.66±2.26
Mix1(R+S)     85.54±2.51      85.60±2.19       84.71±2.53
Mix2(R+N+S)   85.17±2.39      85.27±2.33       83.76±2.77
(Teacher accuracy in parentheses.)

TABLE XII
p-VALUE AND (ACCURACY (%), STANDARD DEVIATION) FOR TRAINING
FROM SCRATCH AND KD ON GENEACTIV DATASET

Scratch: Original (68.60±0.23)

KD (Teacher: Medium)   Accuracy       p-value
Original (ESKD)        69.61±0.17     0.030
Original (Full)        68.62±0.22     0.045
Removal                70.23±0.41     0.006
Noise                  69.12±0.19     0.012
Shift                  70.43±0.89     0.025
Mix1(R+S)              71.35±0.20     0.073
Mix2(R+N+S)            69.51±0.28     0.055
To investigate the difference in performance with a model
trained from scratch and KD with augmentation methods,
statistical analysis was conducted by calculating p-value from
a t-test with a confidence level of 95%. Table XII and XIII
show averaged accuracy, standard deviation, and calculated p-
value for WRN16-1 trained from scratch with original training
set and various student models of WRN16-1 trained with KD
TABLE XIII
p-VALUE AND (ACCURACY (%), STANDARD DEVIATION) FOR TRAINING
FROM SCRATCH AND KD ON PAMAP2 DATASET

Scratch: Original (82.81±2.51)

KD (Teacher: Medium)   Accuracy       p-value
Original (ESKD)        84.47±2.32     0.0298
Original (Full)        84.31±2.24     0.0007
Removal                85.51±2.27     0.0008
Noise                  85.52±2.26     0.0002
Shift                  85.45±2.19     0.0034
Mix1(R+S)              85.60±2.19     0.0024
Mix2(R+N+S)            85.27±2.33     0.0013
and augmentation. That is, student models in KD have the
same structure of the model trained from scratch and teachers
for KD are WRN16-3 (τ = 4, λ = 0.7). For GENEactiv, in
five out of the seven cases, the calculated p-values are less
than 0.05. Thus, the results in the table show a statistically
significant difference between training from scratch and KD.
For PAMAP2, the p-values are less than 0.05 in all cases,
which likewise represents a statistically significant difference
between training from scratch and KD. Therefore, we can conclude
that KD training with augmentation methods, which shows
better results in classification accuracy, performs significantly
different from training from scratch, at a confidence level of
95%.
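The t-test above can be reproduced from summary statistics. Since the number of repeated runs per configuration is not restated here, n is an illustrative assumption in the sketch below; the two-sided p-value is computed via the regularized incomplete beta function, so no SciPy is needed:

```python
import math

def _betacf(a, b, x, max_iter=200, eps=3e-12, fpmin=1e-300):
    """Continued fraction for the incomplete beta function (Lentz's method)."""
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c, d = 1.0, 1.0 - qab * x / qap
    d = fpmin if abs(d) < fpmin else d
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        for aa in (m * (b - m) * x / ((qam + m2) * (a + m2)),
                   -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))):
            d = 1.0 + aa * d
            d = fpmin if abs(d) < fpmin else d
            c = 1.0 + aa / c
            c = fpmin if abs(c) < fpmin else c
            d = 1.0 / d
            delta = d * c
            h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def _betai(a, b, x):
    """Regularized incomplete beta function I_x(a, b)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    bt = math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                  + a * math.log(x) + b * math.log(1.0 - x))
    if x < (a + 1.0) / (a + b + 2.0):
        return bt * _betacf(a, b, x) / a
    return 1.0 - bt * _betacf(b, a, 1.0 - x) / b

def two_sided_p(t, df):
    """Two-sided p-value of a Student's t statistic."""
    return _betai(df / 2.0, 0.5, df / (df + t * t))

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and degrees of freedom from mean/std/n."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df
```

Significance at the 95% confidence level then corresponds to `two_sided_p(t, df) < 0.05`.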
TABLE XIV
ECE (%) OF TRAINING FROM SCRATCH AND KD ON GENEACTIV
DATASET

Scratch          ECE      KD (Teacher: Medium)    ECE
Original         3.22     Original (ESKD)         2.96
Removal          3.56     Removal                 2.90
Noise            3.45     Noise                   2.85
Shift            3.24     Shift                   2.78
Mix1(R+S)        3.72     Mix1(R+S)               2.79
Mix2(R+N+S)      3.67     Mix2(R+N+S)             2.86
TABLE XV
ECE (%) OF TRAINING FROM SCRATCH AND KD ON PAMAP2 DATASET

Scratch          ECE      KD (Teacher: Medium)    ECE
Original         2.28     Original (ESKD)         2.16
Removal          3.64     Removal                 3.09
Noise            5.83     Noise                   3.01
Shift            2.87     Shift                   2.22
Mix1(R+S)        4.39     Mix1(R+S)               2.96
Mix2(R+N+S)      5.55     Mix2(R+N+S)             4.17
Finally, the expected calibration error (ECE) [58] is calcu-
lated to measure the confidence of performance for models
trained from scratch and KD (τ = 4, λ = 0.7) with aug-
mentation methods. As shown in Table XIV and XV, in all
cases, ECE values for KD are lower than when models are
trained from scratch, indicating that models trained with KD
have higher reliability. Also, results of KD including shifting
are lower than results from other augmentation methods. This
additionally verifies that KD improves the performance and
shifting helps to get improved models.
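ECE [58] bins predictions by confidence and averages the gap between per-bin accuracy and per-bin mean confidence, weighted by bin size. A sketch follows (15 bins is a common choice; the binning used for the tables above is not restated here):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Bin-size-weighted average |accuracy - confidence| over confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```

Here the confidences are the maximum softmax probabilities and `correct` marks whether the argmax matches the label; for example, ten predictions at confidence 0.95 with nine correct give an ECE of 0.05.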
TABLE XVI
THE LOSS VALUE (×10^-2) FOR KD (TEACHER: MEDIUM) FROM VARIOUS
METHODS ON GENEACTIV

Method (λ=0.7)   CE Train   KD Train   KD Test
Original         3.774      0.617      1.478
Removal          3.340      0.406      1.246
Noise            11.687     1.172      1.358
Shift            2.416      0.437      1.119
Mix1(R+S)        5.475      0.475      1.108
Mix2(R+N+S)      17.420     1.337      1.338
TABLE XVII
THE LOSS VALUE (×10^-2) FOR KD (TEACHER: MEDIUM) FROM VARIOUS
METHODS ON PAMAP2 (SUBJECT 101)

Method (λ=0.7)   CE Train   KD Train   KD Test
Original         0.832      0.156      1.783
Removal          1.237      0.146      1.038
Noise            1.066      0.138      1.284
Shift            0.468      0.129      1.962
Mix1(R+S)        1.267      0.150      0.895
Mix2(R+N+S)      1.853      0.177      1.065
3) Analyzing training for KD with augmentation methods:
The loss values of each method, for the medium-sized teacher,
are shown in Table XVI and XVII. The loss values were
obtained from the final epoch while training student models
based on Full KD. As shown in these tables, for both cross
entropy and KD loss values,
training with shifting-based
data augmentation results in lower loss, compared to other
augmentation strategies and the original model. The loss value
for noise augmentation is higher than the values of shifting.
On the other hand, the training KD loss value for Mix1 is
higher than the values for removal and shifting; however, its
testing KD loss is lower than theirs. Compared to other
methods, Mix2 shows higher loss for
training, which may be because this method generates more
complicated patterns. However, the testing KD loss value of
Mix2 is lower than the value of original and adding noise.
These findings imply that the original and shifted data have
very similar patterns, and that data based on Mix1 and Mix2
are not easily trainable for distillation; however, these methods
have the effect of preventing a student from over-fitting or
from degradation in classification. The contrast of results
from GENEactiv between each method is more prominent than
the one from PAMAP2. This is due to the fact that smaller
parameters for augmentation are applied to PAMAP2. Also,
the dataset is more challenging to train on, due to imbalanced
data and different channels in sensor data.
D. Analysis of Teacher and Student Models with a Variant
Properties of Training Set
To discuss properties of training set for teacher and student
models, we use the same parameter (τ = 4, λ = 0.7) in this
experiment on both datasets. In this section, we train a
medium teacher and a small student on training sets having the
same or different properties, to take into account relationships
between teachers and students. The testing set is not transformed
or modified. The medium teacher is chosen because it
showed good performance in our prior experiments discussed
in previous sections. Further, distillation from a medium model
to a small model is a preferable approach [18]. Also, we
analyze which augmentation method is effective for achieving
higher accuracy. We use the adding-noise, shifting, and Mix1
methods, which transform data differently.
TABLE XVIII
ACCURACY (%) OF TRAINING FROM SCRATCH ON WRN16-3 WITH
DIFFERENT AUGMENTATION METHODS

Dataset              Original      Noise         Shift         Mix1(R+S)
GENEactiv            69.53±0.40    68.59±0.05    72.08±0.20    71.64±0.26
GENEactiv (Top-1)    69.99         68.68         72.48         72.17
PAMAP2               84.65±2.28    83.08±2.51    82.54±2.42    82.39±2.62
PAMAP2 (Top-1)       85.67         85.31         84.38         84.09
To obtain a medium teacher model, the model is trained
from scratch with augmentation methods. These results
are shown in Table XVIII. For GENEactiv, shifting based
data augmentation gives the best performance. However, for
PAMAP2, original data achieves the best performance. Mix1
shows slightly lower accuracy than shifting. In these experi-
ments, the student model is trained using the teacher model
that achieves best performance over several trials.
We also evaluated different combinations of data augmen-
tation strategies for teacher-student network pairs. A pair is
obtained by using one or no data augmentation strategy to
train the teacher network by training from scratch, and the
student network is trained by ESKD under different, same,
or no augmentation strategy. The results are shown in Fig. 7.
We found that KD with the same data augmentation strategy
for training teachers and students may not be the right choice
to get the best performance. When a teacher is trained by
shifting and a student is trained by Mix1, which showed
good performance as a student in the previous sections, the
results are better than other combinations for both datasets.
Also, when a student is trained by Mix1, which includes the
shifting transform, the performance is in general also good for
all teachers. This implies that the method chosen for training a
student is more important than choosing a teacher; KD with
a medium teacher trained by the original data and a student
trained with shift or Mix1 outperforms other combinations.
Using the same strategy for training data for teachers and
students does not always present the best performance. When
the training set for students is more complicated than the set
for teachers, the performance in accuracy tends to be better.
That is, applying a transformation method to students can help
to increase the accuracy. It also verifies that better teachers do
not always lead to increased accuracy of students. Even if the
accuracies from these combinations of a teacher and student
are lower than models trained from scratch by WRN16-3, the
number of parameters of the student is only about 11% of
that of WRN16-3. Therefore, the results are still good when
considering both performance and computation.

Fig. 7. The results for students trained with different combinations of training
sets for teachers and students. The teacher and student both are trained
by augmentation methods. WRN16-3 (medium) and WRN16-1 (small) are
teacher and student networks, respectively.
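The roughly 11% figure can be sanity-checked from the architecture: WRN16-k channel widths grow linearly with k, so convolutional weights grow roughly with k². A back-of-the-envelope count for a 1-D WRN16-k (kernel size 3 and three input channels are assumptions, since the exact adaptation to sensor data is not spelled out here; biases and batch-norm parameters are ignored):

```python
def wrn16_conv_params(k, in_ch=3, kernel=3):
    """Approximate convolution weight count of a 1-D WideResNet-16-k."""
    n_blocks = (16 - 4) // 6                  # two basic blocks per group
    widths = [16, 16 * k, 32 * k, 64 * k]
    total = in_ch * widths[0] * kernel        # initial convolution
    c_in = widths[0]
    for w in widths[1:]:
        for _ in range(n_blocks):
            total += c_in * w * kernel + w * w * kernel   # two 3-tap convs
            if c_in != w:
                total += c_in * w                         # 1x1 projection shortcut
            c_in = w
    return total

ratio = wrn16_conv_params(1) / wrn16_conv_params(3)       # close to 0.11
```

The ratio comes out near 1/9 because the widest (and heaviest) layers dominate the count.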
E. Analysis of Student Models with Different Data Augmen-
tation Strategies for Training and Testing Set
In this section, we study the effect of students on KD from
various augmentation methods for training and testing, while
a teacher is trained with the original dataset. We use the same
parameter (τ = 4, λ = 0.7) and ESKD for this experiment
on two datasets. A teacher is selected with a medium model
trained by the original data. We use adding noise, shifting and
Mix1 methods which transform data differently.
After training the teacher network on original data, a student
network is trained with different data augmentation strategies
and is evaluated on test data transformed with different data
augmentation strategies. The results are illustrated in Fig. 8.
For GENEactiv, most often, training student networks with
Mix1 shows better performance on different testing sets.
However, if the testing set is affected by adding noise, training
students with adding noise and Mix2 shows much better
performance than training with shifting and Mix1. From the
results on PAMAP2, in most of the cases, training students
time for testing, as shown in Table XIX. WRN16-1 as a
student trained by ESKD with Mix1 augmentation achieves
the best accuracy, 71.35%, where the model takes the least
amount of time on both GPU and CPU. The results on CPU
reiterate the reason why model compression is required for
many applications, especially on edge devices, wearables, and
mobile devices, which have limited computational and power
resources and are generally implemented in real time with
only CPU. The gap in performance would be higher if an
edge device had lower computational resources.
TABLE XIX
PROCESSING TIME OF VARIOUS MODELS FOR GENEACTIV DATASET

Model (WRN16-k)    Acc. (%)   Total GPU (sec)   Avg. GPU (ms)   Total CPU (sec)   Avg. CPU (ms)
k=1                67.66      15.226            2.6644          16.655            2.8920
k=1 (ESKD)         69.61      15.226            2.6644          16.655            2.8920
k=1 (ESKD+Mix1)    71.35      15.226            2.6644          16.655            2.8920
k=3                68.89      16.426            2.8524          21.333            3.7044
k=6                70.04      16.663            2.8934          33.409            5.8012
k=8                69.02      16.885            2.9320          46.030            7.9928
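The batch-size-1 evaluation above can be timed with a simple loop like the following; `model` is a stand-in callable (not the actual WRN), and on a GPU one would additionally synchronize the device before reading the clock:

```python
import time

def measure_latency(model, samples):
    """Run model on each sample (batch size 1); return total s and average ms."""
    start = time.perf_counter()
    for x in samples:
        model(x)
    total_s = time.perf_counter() - start
    avg_ms = 1000.0 * total_s / len(samples)
    return total_s, avg_ms
```

The per-sample averages in Table XIX are simply the totals divided by the number of test samples.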
V. CONCLUSION
In this paper, we studied many relevant aspects of knowl-
edge distillation (KD) for wearable sensor data as applied
to human activity analysis. We conducted experiments with
different sizes of teacher networks to evaluate their effect
on KD performance. We show that a high capacity teacher
network does not necessarily ensure better performance of
a student network. We further showed that
training with
augmentation methods and early stopping for KD (ESKD) is
effective when dealing with time-series data. We also establish
that the choice of augmentation strategies has more of an
impact on the student network training as opposed to the
teacher network. In most cases, KD training with the Mix1
(Removal+Shifting) data augmentation strategy for students
showed robust performance. Further, we also conclude that
a single augmentation strategy is not conclusively better all
the time. Therefore, we recommend using a combination of
augmentation methods for training KD in general. In summary,
our findings provide a comprehensive understanding of KD
and data augmentation strategies for time-series data from
wearable devices for human activity analysis. These conclusions can be
used as a general set of recommendations to establish a strong
baseline performance on new datasets and new applications.
ACKNOWLEDGMENT
This research was funded by NIH R01GM135927, as part
of the Joint DMS/NIGMS Initiative to Support Research at the
Interface of the Biological and Mathematical Sciences, and by
NSF CAREER grant 1452163.
REFERENCES
[1] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image
recognition,” in Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, 2016, pp. 770–778.
[2] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely
connected convolutional networks,” in Proceedings of the IEEE Confer-
ence on Computer Vision and Pattern Recognition, 2017, pp. 4700–4708.
Fig. 8. Effect on classification performance of student network with different
augmentation methods for training and testing sets. WRN16-3 (medium) and
WRN16-1 (small) are teacher and student networks, respectively.
with Mix1 shows better performance on many different testing
sets. However, when the testing set is augmented by adding
noise, training with the original data shows the best performance.
This is likely attributable to the window size, which is only
about a hundred samples, and to the fact that the dataset
includes information from 4 kinds of IMUs. Therefore, injecting
noise, which can alter peaky points and change gradients,
creates difficulties for classification. Also, this issue can affect
both the training and testing data. Thus, if the target data
includes noise, the training set and augmentation methods have
to be considered along with the length of the window and the
intricate signal shapes within the windows.
F. Analysis of Testing Time
Here, we compare the evaluation time for various models on
the GENEactiv dataset. We conducted the test on a desktop
with a 3.50 GHz CPU (Intel® Xeon(R) CPU E5-1650 v3),
48 GB memory, and NVIDIA TITAN Xp (3840 NVIDIA®
CUDA® cores and 12 GB memory) graphic card. We used
a batch size of 1 and approximately 6000 data samples for
testing. Four different models were trained from scratch with
WRN16-k (k=1, 3, 6, and 8). To test with ESKD and Mix1,
WRN16-3 was used as a teacher and WRN16-1 was used
for student network. As expected, larger models take more
[3] N. Dalal and B. Triggs, “Histograms of oriented gradients for human
detection,” in Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, 2005, pp. 886–893.
[25] T. Furlanello, Z. Lipton, M. Tschannen, L. Itti, and A. Anandkumar,
"Born again neural networks," in Proceedings of the International
Conference on Machine Learning, 2018, pp. 1607–1616.
[4] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,”
International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110,
2004.
[5] O. Abdel-Hamid, A.-r. Mohamed, H. Jiang, L. Deng, G. Penn,
and D. Yu, “Convolutional neural networks for speech recognition,”
IEEE/ACM Transactions on Audio, Speech, and Language Processing,
vol. 22, no. 10, pp. 1533–1545, 2014.
[6] W. Xiong, L. Wu, F. Alleva, J. Droppo, X. Huang, and A. Stolcke,
“The microsoft 2017 conversational speech recognition system,” in
Proceedings of the IEEE International Conference on Acoustics, Speech
and Signal Processing, 2018, pp. 5934–5938.
[7] S. Wan, L. Qi, X. Xu, C. Tong, and Z. Gu, “Deep learning models
for real-time human activity recognition with smartphones,” Mobile
Networks and Applications, vol. 25, no. 2, pp. 743–755, 2020.
[8] H. I. Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P.-A. Muller,
“Deep learning for time series classification: a review,” Data Mining
and Knowledge Discovery, vol. 33, no. 4, pp. 917–963, 2019.
[9] A. Khan, A. Sohail, U. Zahoora, and A. S. Qureshi, “A survey of the
recent architectures of deep convolutional neural networks,” Artificial
Intelligence Review, vol. 53, no. 8, pp. 5455–5516, 2020.
[10] M. Gil-Mart´ın, R. San-Segundo, F. Fernandez-Martinez, and J. Ferreiros-
L´opez, “Improving physical activity recognition using a new deep
learning architecture and post-processing techniques,” Engineering Ap-
plications of Artificial Intelligence, vol. 92, p. 103679, 2020.
[11] P. Molchanov, S. Tyree, T. Karras, T. Aila, and J. Kautz, “Pruning convo-
lutional neural networks for resource efficient inference,” in Proceedings
of the International Conference on Learning Representations, 2017.
[12] S. Han, H. Mao, and W. J. Dally, “Deep compression: Compressing
deep neural networks with pruning, trained quantization and huffman
coding,” in Proceedings of the International Conference on Learning
Representations, 2016.
[13] J. Wu, C. Leng, Y. Wang, Q. Hu, and J. Cheng, "Quantized convolutional
neural networks for mobile devices," in Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, 2016, pp.
4820–4828.
[14] C. Tai, T. Xiao, Y. Zhang, X. Wang et al., “Convolutional neural net-
works with low-rank regularization,” in Proceedings of the International
Conference on Learning Representations, 2016.
[15] G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural
network,” in Proceedings of the International Conference on Neural
Information Processing Systems Deep Learning and Representation
Learning Workshop, 2015.
[16] J. Yim, D. Joo, J. Bae, and J. Kim, “A gift from knowledge distillation:
Fast optimization, network minimization and transfer learning,” in
Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, 2017, pp. 4133–4141.
[17] B. Heo, M. Lee, S. Yun, and J. Y. Choi, “Knowledge distillation with
adversarial samples supporting decision boundary,” in Proceedings of
the AAAI Conference on Artificial Intelligence, vol. 33, 2019, pp. 3771–
3778.
[18] J. H. Cho and B. Hariharan, "On the efficacy of knowledge distillation,"
in Proceedings of the IEEE International Conference on Computer
Vision, 2019, pp. 4794–4802.
[19] C. Buciluǎ, R. Caruana, and A. Niculescu-Mizil, "Model compression,"
in Proceedings of the ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, 2006, pp. 535–541.
[20] S. Zagoruyko and N. Komodakis, “Paying more attention to attention:
Improving the performance of convolutional neural networks via at-
tention transfer,” in Proceedings of the International Conference on
Learning Representations, 2017.
[21] F. Tung and G. Mori, “Similarity-preserving knowledge distillation,” in
Proceedings of the IEEE International Conference on Computer Vision,
2019, pp. 1365–1374.
[22] A. Tarvainen and H. Valpola, “Mean teachers are better role mod-
els: Weight-averaged consistency targets improve semi-supervised deep
learning results,” in Proceedings of the International Conference on
Neural Information Processing Systems, 2017, pp. 1195–1204.
[23] Y. Zhang, T. Xiang, T. M. Hospedales, and H. Lu, “Deep mutual
learning,” in Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, 2018, pp. 4320–4328.
[24] M. Goldblum, L. Fowl, S. Feizi, and T. Goldstein, “Adversarially
robust distillation,” in Proceedings of the AAAI Conference on Artificial
Intelligence, vol. 34, no. 04, 2020, pp. 3996–4003.
[26] C. Yang, L. Xie, S. Qiao, and A. L. Yuille, “Training deep neural net-
works in generations: A more tolerant teacher educates better students,”
in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33,
2019, pp. 5628–5635.
[27] Z. Han, J. Zhao, H. Leung, K. F. Ma, and W. Wang, “A review of
deep learning models for time series prediction,” IEEE Sensors Journal,
vol. 21, no. 6, 2019.
[28] R. Chalapathy and S. Chawla, “Deep learning for anomaly detection: A
survey,” arXiv preprint arXiv:1901.03407, 2019.
[29] A. Le Guennec, S. Malinowski, and R. Tavenard, “Data augmentation
for time series classification using convolutional neural networks,”
in ECML/PKDD workshop on advanced analytics and learning on
temporal data, 2016.
[30] Q. Wen, L. Sun, X. Song, J. Gao, X. Wang, and H. Xu, “Time
series data augmentation for deep learning: A survey,” arXiv preprint
arXiv:2002.12478, 2020.
[31] D. S. Park, W. Chan, Y. Zhang, C.-C. Chiu, B. Zoph, E. D. Cubuk,
and Q. V. Le, “Specaugment: A simple data augmentation method for
automatic speech recognition,” in Proceedings of the Interspeech, 2019,
pp. 2613–2617.
[32] L. Kegel, M. Hahmann, and W. Lehner, “Feature-based comparison
the International
and generation of time series,” in Proceedings of
Conference on Scientific and Statistical Database Management, 2018,
pp. 1–12.
[33] H. Cao, V. Y. Tan, and J. Z. Pang, “A parsimonious mixture of gaussian
trees model for oversampling in imbalanced and multimodal time-series
classification,” IEEE Transactions on Neural Networks and Learning
Systems, vol. 25, no. 12, pp. 2226–2239, 2014.
[34] C. Esteban, S. L. Hyland, and G. R¨atsch, “Real-valued (medical)
time series generation with recurrent conditional gans,” arXiv preprint
arXiv:1706.02633, 2017.
[35] X. Cui, V. Goel, and B. Kingsbury, “Data augmentation for deep neural
network acoustic modeling,” IEEE/ACM Transactions on Audio, Speech,
and Language Processing, vol. 23, no. 9, pp. 1469–1477, 2015.
[36] J. Gao, X. Song, Q. Wen, P. Wang, L. Sun, and H. Xu, “Robusttad: Ro-
bust time series anomaly detection via decomposition and convolutional
neural networks,” arXiv preprint arXiv:2002.09545, 2020.
[37] K. T. L. Eileen, Y. Kuah, K.-H. Leo, S. Sanei, E. Chew, and L. Zhao,
“Surrogate rehabilitative time series data for image-based deep learning,”
in Proceedings of the European Signal Processing Conference, 2019, pp.
1–5.
[38] O. Steven Eyobu and D. S. Han, “Feature representation and data
augmentation for human activity classification based on wearable imu
sensor data using a deep lstm neural network,” Sensors, vol. 18, no. 9,
p. 2892, 2018.
[39] C. Bergmeir, R. J. Hyndman, and J. M. Ben´ıtez, “Bagging exponential
smoothing methods using stl decomposition and box–cox transforma-
tion,” International journal of forecasting, vol. 32, no. 2, pp. 303–312,
2016.
[40] Y. Kang, R. J. Hyndman, and F. Li, “Gratis: Generating time series with
diverse and controllable characteristics,” Statistical Analysis and Data
Mining: The ASA Data Science Journal, vol. 13, no. 4, pp. 354–376,
2020.
[41] E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le, “Autoaug-
ment: Learning augmentation strategies from data,” in Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition, 2019,
pp. 113–123.
[42] T. T. Um, F. M. Pfister, D. Pichler, S. Endo, M. Lang, S. Hirche,
U. Fietzek, and D. Kuli´c, “Data augmentation of wearable sensor data for
parkinson’s disease monitoring using convolutional neural networks,” in
Proceedings of the 19th ACM International Conference on Multimodal
Interaction, 2017, pp. 216–220.
[43] Q. Wang, S. Lohit, M. J. Toledo, M. P. Buman, and P. Turaga, "A
statistical estimation framework for energy expenditure of physical
activities from a wrist-worn accelerometer," in Proceedings of the
Annual International Conference of the IEEE Engineering in Medicine
and Biology Society, vol. 2016, 2016, pp. 2631–2635.
[44] A. Reiss and D. Stricker, “Introducing a new benchmarked dataset for
activity monitoring,” in Proceedings of the International Symposium on
Wearable Computers, 2012, pp. 108–109.
[45] A. Zheng and A. Casari, Feature engineering for machine learning:
principles and techniques for data scientists. O’Reilly Media, Inc.,
2018.
[46] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A
review and new perspectives,” IEEE transactions on pattern analysis
and machine intelligence, vol. 35, no. 8, pp. 1798–1828, 2013.
[47] A. Dutta, O. Ma, M. P. Buman, and D. W. Bliss, “Learning approach
for classification of geneactiv accelerometer data for unique activity
identification,” in 2016 IEEE 13th International Conference on Wearable
and Implantable Body Sensor Networks (BSN).
IEEE, 2016, pp. 359–
364.
[48] S. Zagoruyko and N. Komodakis, “Wide residual networks,” in Proceed-
ings of the British Machine Vision Conference, 2016.
[49] C. Cortes and V. Vapnik, “Support-vector networks,” Machine learning,
vol. 20, no. 3, pp. 273–297, 1995.
[50] H. Choi, Q. Wang, M. Toledo, P. Turaga, M. Buman, and A. Srivastava,
"Temporal alignment improves feature quality: an experiment on activity
recognition with accelerometer data," in Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition Workshops,
2018, pp. 349–357.
[51] Y. Chen and Y. Xue, “A deep learning approach to human activity
recognition based on single accelerometer,” in Proceedings of the IEEE
International Conference on Systems, Man, and Cybernetics, 2015, pp.
1488–1492.
[52] S. Ha, J.-M. Yun, and S. Choi, “Multi-modal convolutional neural net-
works for activity recognition,” in Proceedings of the IEEE International
Conference on Systems, Man, and Cybernetics, 2015, pp. 3017–3022.
[53] S. Ha and S. Choi, “Convolutional neural networks for human activity
recognition using multiple accelerometer and gyroscope sensors,” in
Proceedings of the International Joint Conference on Neural Networks,
2016, pp. 381–388.
[54] J. R. Kwapisz, G. M. Weiss, and S. A. Moore, “Activity recognition us-
ing cell phone accelerometers,” ACM SigKDD Explorations Newsletter,
vol. 12, no. 2, pp. 74–82, 2011.
[55] C. Catal, S. Tufekci, E. Pirmit, and G. Kocabag, “On the use of ensemble
of classifiers for accelerometer-based activity recognition,” Applied Soft
Computing, vol. 37, pp. 1018–1022, 2015.
[56] H.-J. Kim, M. Kim, S.-J. Lee, and Y. S. Choi, “An analysis of eating
activities for automatic food type recognition,” in Proceedings of the
Asia Pacific Signal and Information Processing Association Annual
Summit and Conference, 2012, pp. 1–5.
[57] A. Jordao, A. C. Nazare Jr, J. Sena, and W. R. Schwartz, “Human activity
recognition based on wearable sensor data: A standardization of the
state-of-the-art,” arXiv preprint arXiv:1806.05226, 2018.
[58] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger, “On calibration of
modern neural networks,” in Proceedings of the International Confer-
ence on Machine Learning, 2017, pp. 1321–1330.
Eun Som Jeon received the B.E. and M.E. degrees in Electronics and Electrical Engineering from Dongguk University, Seoul, Korea, in 2014 and 2016, respectively. She worked at Korea Telecom (Institute of Convergence Technology), Seoul, Korea. She is currently pursuing the Ph.D. degree in Computer Engineering (Electrical Engineering) with the Geometric Media Laboratory, Arizona State University, Tempe, AZ, USA. Her current research interests include time-series and image data analysis, human behavior analysis, deep learning, and artificial intelligence.
Anirudh Som is an Advanced Computer Scientist in the Center for Vision Technologies group at SRI International. He received his M.S. and Ph.D. degrees in Electrical Engineering from Arizona State University in 2016 and 2020, respectively, prior to which he received his B.Tech. degree in Electronics and Communication Engineering from GITAM University in India. His research interests are in the fields of machine learning, computer vision, human movement analysis, human behavior analysis, and dynamical system analysis.
Ankita Shukla is a postdoctoral researcher at Arizona State University. She received her Ph.D. and Master's degrees in Electronics and Communication from IIIT-Delhi, India, in 2020 and 2014, respectively. Her research interests are in the fields of machine learning, computer vision, time-series data analysis, and geometric methods.
Kristina Hasanaj is a graduate research associate at Arizona State University. She earned her B.S. in Exercise Science (Kinesiology concentration) and M.A. in Exercise Physiology from Central Michigan University. She is currently pursuing her doctoral degree through the Nursing and Healthcare Innovation Ph.D. program at Arizona State University. Her research interests focus on behaviors in the 24-hour day (sleep, sedentary behavior, physical activity) and the use of mobile health and wearable technologies in clinical and health-related settings.
Matthew P. Buman, PhD, is an associate professor in the College of Health Solutions at Arizona State University. His research interests reflect the dynamic interplay of behaviors in the 24-hour day, including sleep, sedentary behavior, and physical activity. His work focuses on the measurement of these behaviors using wearable technologies, interventions that singly or in combination target these behaviors, and the environments that impact these behaviors.
Pavan Turaga, PhD, is an associate professor in the School of Arts, Media and Engineering at Arizona State University. He received a bachelor's degree in electronics and communication engineering from the Indian Institute of Technology Guwahati, India, in 2004, and a master's and doctorate in electrical engineering from the University of Maryland, College Park, in 2007 and 2009, respectively. His research interests include computer vision and computational imaging with applications in activity analysis, dynamic scene analysis, and time-series data analysis with geometric methods.