| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1802.00554
|
Andrew Lensen
|
Andrew Lensen, Bing Xue, and Mengjie Zhang
|
Generating Redundant Features with Unsupervised Multi-Tree Genetic
Programming
|
16 pages, preprint for EuroGP '18
| null |
10.1007/978-3-319-77553-1_6
| null |
cs.NE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, feature selection has become an increasingly important area of
research due to the surge in high-dimensional datasets in all areas of modern
life. A plethora of feature selection algorithms have been proposed, but it is
difficult to truly analyse the quality of a given algorithm. Ideally, an
algorithm would be evaluated by measuring how well it removes known bad
features. Acquiring datasets with such features is inherently difficult, and so
a common technique is to add synthetic bad features to an existing dataset.
While adding noisy features is an easy task, it is very difficult to
automatically add complex, redundant features. This work proposes one of the
first approaches to generating redundant features, using a novel genetic
programming approach. Initial experiments show that our proposed method can
automatically create difficult, redundant features which have the potential to
be used for creating high-quality feature selection benchmark datasets.
|
[
{
"created": "Fri, 2 Feb 2018 04:19:04 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Mar 2018 06:35:56 GMT",
"version": "v2"
}
] |
2019-10-24
|
[
[
"Lensen",
"Andrew",
""
],
[
"Xue",
"Bing",
""
],
[
"Zhang",
"Mengjie",
""
]
] |
Recently, feature selection has become an increasingly important area of research due to the surge in high-dimensional datasets in all areas of modern life. A plethora of feature selection algorithms have been proposed, but it is difficult to truly analyse the quality of a given algorithm. Ideally, an algorithm would be evaluated by measuring how well it removes known bad features. Acquiring datasets with such features is inherently difficult, and so a common technique is to add synthetic bad features to an existing dataset. While adding noisy features is an easy task, it is very difficult to automatically add complex, redundant features. This work proposes one of the first approaches to generating redundant features, using a novel genetic programming approach. Initial experiments show that our proposed method can automatically create difficult, redundant features which have the potential to be used for creating high-quality feature selection benchmark datasets.
|
hep-th/0209192
|
Mark A. Stern
|
Mark A. Stern
|
Quantum Mechanical Mirror Symmetry, D Branes, and B Fields
|
22 pages
| null | null |
Duke-CGTP-02-08
|
hep-th
| null |
We construct quantum mechanical models which mimic many features of string
theory. We use these models to gain improved descriptions of B fields and
gerbes. We examine analogs of T duality, D branes, and mirror symmetry and
derive quantum mechanical analogs of standard phenomena, such as the
noncommutative geometry induced by a B field.
|
[
{
"created": "Mon, 23 Sep 2002 19:31:32 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Stern",
"Mark A.",
""
]
] |
We construct quantum mechanical models which mimic many features of string theory. We use these models to gain improved descriptions of B fields and gerbes. We examine analogs of T duality, D branes, and mirror symmetry and derive quantum mechanical analogs of standard phenomena, such as the noncommutative geometry induced by a B field.
|
2007.04028
|
Amartya Sanyal
|
Amartya Sanyal, Puneet K Dokania, Varun Kanade, Philip H.S. Torr
|
How benign is benign overfitting?
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate two causes for adversarial vulnerability in deep neural
networks: bad data and (poorly) trained models. When trained with SGD, deep
neural networks essentially achieve zero training error, even in the presence
of label noise, while also exhibiting good generalization on natural test data,
something referred to as benign overfitting [2, 10]. However, these models are
vulnerable to adversarial attacks. We identify label noise as one of the causes
for adversarial vulnerability, and provide theoretical and empirical evidence
in support of this. Surprisingly, we find several instances of label noise in
datasets such as MNIST and CIFAR, and that robustly trained models incur
training error on some of these, i.e. they don't fit the noise. However,
removing noisy labels alone does not suffice to achieve adversarial robustness.
Standard training procedures bias neural networks towards learning "simple"
classification boundaries, which may be less robust than more complex ones. We
observe that adversarial training does produce more complex decision
boundaries. We conjecture that in part the need for complex decision boundaries
arises from sub-optimal representation learning. By means of simple toy
examples, we show theoretically how the choice of representation can
drastically affect adversarial robustness.
|
[
{
"created": "Wed, 8 Jul 2020 11:07:10 GMT",
"version": "v1"
}
] |
2020-07-09
|
[
[
"Sanyal",
"Amartya",
""
],
[
"Dokania",
"Puneet K",
""
],
[
"Kanade",
"Varun",
""
],
[
"Torr",
"Philip H. S.",
""
]
] |
We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorly) trained models. When trained with SGD, deep neural networks essentially achieve zero training error, even in the presence of label noise, while also exhibiting good generalization on natural test data, something referred to as benign overfitting [2, 10]. However, these models are vulnerable to adversarial attacks. We identify label noise as one of the causes for adversarial vulnerability, and provide theoretical and empirical evidence in support of this. Surprisingly, we find several instances of label noise in datasets such as MNIST and CIFAR, and that robustly trained models incur training error on some of these, i.e. they don't fit the noise. However, removing noisy labels alone does not suffice to achieve adversarial robustness. Standard training procedures bias neural networks towards learning "simple" classification boundaries, which may be less robust than more complex ones. We observe that adversarial training does produce more complex decision boundaries. We conjecture that in part the need for complex decision boundaries arises from sub-optimal representation learning. By means of simple toy examples, we show theoretically how the choice of representation can drastically affect adversarial robustness.
|
hep-th/9303053
| null |
Alan R. White
|
Analytic Multi-Regge Theory and the Pomeron in QCD : II. Gauge Theory
Analysis
|
149 pages
|
Int.J.Mod.Phys.A8:4755-4896,1993
|
10.1142/S0217751X93001910
|
ANL-HEP-PR-93-16
|
hep-th hep-ph
| null |
The high-energy Regge behavior of gauge theories is studied via the formalism
of Analytic Multi-Regge Theory. Perturbative results for spontaneously-broken
theories are first organised into reggeon diagrams. Unbroken gauge theories are
studied via a reggeon diagram infra-red analysis of symmetry restoration.
Massless fermions play a crucial role and the case of QCD involves the
Super-Critical Pomeron as an essential intermediate stage. An introductory
review of the build-up of transverse momentum diagrams and reggeon diagrams
from leading log calculations in gauge theories is presented first. It is then
shown that the results closely reproduce the general structure for multi-regge
amplitudes derived in Part I of the article, allowing the construction of
general reggeon diagrams for spontaneously-broken theories. Next it is argued
that, with a transverse-momentum cut-off, unbroken gauge theories can be
reached through an infra-red limiting process which successively decouples
fundamental representation Higgs fields. The first infra-red limit studied is
the restoration of SU(2) gauge symmetry. The analysis is dominated by the
exponentiation of divergences imposed by Reggeon Unitarity and the contribution
of massless quarks ...
|
[
{
"created": "Tue, 9 Mar 1993 14:33:25 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Mar 1993 10:12:48 GMT",
"version": "v2"
}
] |
2014-11-18
|
[
[
"White",
"Alan R.",
""
]
] |
The high-energy Regge behavior of gauge theories is studied via the formalism of Analytic Multi-Regge Theory. Perturbative results for spontaneously-broken theories are first organised into reggeon diagrams. Unbroken gauge theories are studied via a reggeon diagram infra-red analysis of symmetry restoration. Massless fermions play a crucial role and the case of QCD involves the Super-Critical Pomeron as an essential intermediate stage. An introductory review of the build-up of transverse momentum diagrams and reggeon diagrams from leading log calculations in gauge theories is presented first. It is then shown that the results closely reproduce the general structure for multi-regge amplitudes derived in Part I of the article, allowing the construction of general reggeon diagrams for spontaneously-broken theories. Next it is argued that, with a transverse-momentum cut-off, unbroken gauge theories can be reached through an infra-red limiting process which successively decouples fundamental representation Higgs fields. The first infra-red limit studied is the restoration of SU(2) gauge symmetry. The analysis is dominated by the exponentiation of divergences imposed by Reggeon Unitarity and the contribution of massless quarks ...
|
hep-th/9404121
|
Miguel Navarro
|
M. Navarro, J. Guerrero and V. Aldaya
|
Optics, Mechanics and Quantization of Reparametrization Systems
|
15 pages, Latex
|
J.Math.Phys. 35 (1994) 6407-6417
|
10.1063/1.530682
| null |
hep-th
| null |
In this paper we regard the dynamics obtained from Fermat's principle as being
the classical theory of light. We (first-)quantize the action and show how
close we can get to the Maxwell theory. We show that Quantum Geometric Optics
is not a theory of fields in curved space. Considering Classical Mechanics to
be on the same footing, we show the parallelism between Quantum Mechanics and
Quantum Geometric Optics. We show that, due to the reparametrization invariance
of the classical theories, the dynamics of the quantum theories is given by a
Hamiltonian constraint. Some implications of the above analogy in the
quantization of true reparameterization invariant systems are discussed.
|
[
{
"created": "Wed, 20 Apr 1994 09:26:47 GMT",
"version": "v1"
}
] |
2009-10-28
|
[
[
"Navarro",
"M.",
""
],
[
"Guerrero",
"J.",
""
],
[
"Aldaya",
"V.",
""
]
] |
In this paper we regard the dynamics obtained from Fermat's principle as being the classical theory of light. We (first-)quantize the action and show how close we can get to the Maxwell theory. We show that Quantum Geometric Optics is not a theory of fields in curved space. Considering Classical Mechanics to be on the same footing, we show the parallelism between Quantum Mechanics and Quantum Geometric Optics. We show that, due to the reparametrization invariance of the classical theories, the dynamics of the quantum theories is given by a Hamiltonian constraint. Some implications of the above analogy in the quantization of true reparameterization invariant systems are discussed.
|
1804.05661
|
Thameur Dhieb
|
Thameur Dhieb, Sourour Njah, Houcine Boubaker, Wael Ouarda, Mounir Ben
Ayed, and Adel M. Alimi
|
An Extended Beta-Elliptic Model and Fuzzy Elementary Perceptual Codes
for Online Multilingual Writer Identification using Deep Neural Network
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ability to identify the authors of documents opens more opportunities for
using these documents for various purposes. In this paper, we present a new,
effective biometric writer identification system based on online handwriting.
The system first preprocesses and segments the online handwriting into a
sequence of Beta strokes. Then, from each stroke, it extracts a set of static
and dynamic features from a newly proposed model that we call the Extended
Beta-Elliptic model and from the Fuzzy Elementary Perceptual Codes. Next, all
segments composed of N consecutive strokes are categorized into groups and
subgroups according to their position and geometric characteristics. Finally,
a Deep Neural Network is used as the classifier.
Experimental results reveal that the proposed system achieves interesting
results as compared to those of the existing writer identification systems on
Latin and Arabic scripts.
|
[
{
"created": "Mon, 16 Apr 2018 13:27:11 GMT",
"version": "v1"
},
{
"created": "Sun, 20 May 2018 16:10:40 GMT",
"version": "v2"
},
{
"created": "Wed, 30 May 2018 19:14:28 GMT",
"version": "v3"
},
{
"created": "Sat, 10 Nov 2018 11:28:36 GMT",
"version": "v4"
}
] |
2018-11-13
|
[
[
"Dhieb",
"Thameur",
""
],
[
"Njah",
"Sourour",
""
],
[
"Boubaker",
"Houcine",
""
],
[
"Ouarda",
"Wael",
""
],
[
"Ayed",
"Mounir Ben",
""
],
[
"Alimi",
"Adel M.",
""
]
] |
The ability to identify the authors of documents opens more opportunities for using these documents for various purposes. In this paper, we present a new, effective biometric writer identification system based on online handwriting. The system first preprocesses and segments the online handwriting into a sequence of Beta strokes. Then, from each stroke, it extracts a set of static and dynamic features from a newly proposed model that we call the Extended Beta-Elliptic model and from the Fuzzy Elementary Perceptual Codes. Next, all segments composed of N consecutive strokes are categorized into groups and subgroups according to their position and geometric characteristics. Finally, a Deep Neural Network is used as the classifier. Experimental results reveal that the proposed system achieves interesting results as compared to those of the existing writer identification systems on Latin and Arabic scripts.
|
1810.01758
|
Kaveh Dehghanpour
|
Qianzhi Zhang and Kaveh Dehghanpour and Zhaoyu Wang and Qiuhua Huang
|
A Learning-based Power Management for Networked Microgrids Under
Incomplete Information
| null | null |
10.1109/TSG.2019.2933502
| null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an approximate Reinforcement Learning (RL) methodology
for bi-level power management of networked Microgrids (MG) in electric
distribution systems. In practice, the cooperative agent can have limited or no
knowledge of the MG asset behavior and detailed models behind the Point of
Common Coupling (PCC). This makes the distribution systems unobservable and
impedes conventional optimization solutions for the constrained MG power
management problem. To tackle this challenge, we have proposed a bi-level RL
framework in a price-based environment. At the higher level, a cooperative
agent performs function approximation to predict the behavior of entities under
incomplete information of MG parametric models; while at the lower level, each
MG provides power-flow-constrained optimal response to price signals. The
function approximation scheme is then used within an adaptive RL framework to
optimize the price signal as the system load and solar generation change over
time. Numerical experiments have verified that, compared to previous works in
the literature, the proposed privacy-preserving learning model has better
adaptability and enhanced computational speed.
|
[
{
"created": "Mon, 1 Oct 2018 20:38:50 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Mar 2019 16:06:44 GMT",
"version": "v2"
},
{
"created": "Fri, 14 Jun 2019 00:21:24 GMT",
"version": "v3"
}
] |
2019-08-09
|
[
[
"Zhang",
"Qianzhi",
""
],
[
"Dehghanpour",
"Kaveh",
""
],
[
"Wang",
"Zhaoyu",
""
],
[
"Huang",
"Qiuhua",
""
]
] |
This paper presents an approximate Reinforcement Learning (RL) methodology for bi-level power management of networked Microgrids (MG) in electric distribution systems. In practice, the cooperative agent can have limited or no knowledge of the MG asset behavior and detailed models behind the Point of Common Coupling (PCC). This makes the distribution systems unobservable and impedes conventional optimization solutions for the constrained MG power management problem. To tackle this challenge, we have proposed a bi-level RL framework in a price-based environment. At the higher level, a cooperative agent performs function approximation to predict the behavior of entities under incomplete information of MG parametric models; while at the lower level, each MG provides power-flow-constrained optimal response to price signals. The function approximation scheme is then used within an adaptive RL framework to optimize the price signal as the system load and solar generation change over time. Numerical experiments have verified that, compared to previous works in the literature, the proposed privacy-preserving learning model has better adaptability and enhanced computational speed.
|
0905.3602
|
Fainan Hanif
|
Muhammad Fainan Hanif and Peter J. Smith
|
Level Crossing Rates of Interference in Cognitive Radio Networks
|
submitted to the IEEE Transactions on Wireless Communications
|
IEEE Transactions on Wireless Communications, vol.9, no.4,
pp.1283-1287, 2010
|
10.1109/TWC.2010.04.090749
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The future deployment of cognitive radios depends critically on the incumbent
primary user system remaining as oblivious as possible to their presence. This
in turn relies heavily on the fluctuations of the
interfering cognitive radio signals. In this letter we compute the level
crossing rates of the cumulative interference created by the cognitive radios.
We derive analytical formulae for the level crossing rates in Rayleigh and
Rician fast fading conditions. We approximate Rayleigh and Rician level
crossing rates using fluctuation rates of gamma and scaled noncentral $\chi^2$
processes respectively. The analytical results and the approximations used in
their derivations are verified by Monte Carlo simulations and the analysis is
applied to a particular CR allocation strategy.
|
[
{
"created": "Fri, 22 May 2009 04:06:43 GMT",
"version": "v1"
}
] |
2013-01-03
|
[
[
"Hanif",
"Muhammad Fainan",
""
],
[
"Smith",
"Peter J.",
""
]
] |
The future deployment of cognitive radios depends critically on the incumbent primary user system remaining as oblivious as possible to their presence. This in turn relies heavily on the fluctuations of the interfering cognitive radio signals. In this letter we compute the level crossing rates of the cumulative interference created by the cognitive radios. We derive analytical formulae for the level crossing rates in Rayleigh and Rician fast fading conditions. We approximate Rayleigh and Rician level crossing rates using fluctuation rates of gamma and scaled noncentral $\chi^2$ processes respectively. The analytical results and the approximations used in their derivations are verified by Monte Carlo simulations and the analysis is applied to a particular CR allocation strategy.
|
1308.2277
|
Simon Childs
|
S.J. Childs
|
An Improved Temporal Formulation of Pupal Transpiration in Glossina
|
33 pages, 27 figures, 3 tables. arXiv admin note: text overlap with
arXiv:0901.2470
|
Mathematical Biosciences, 262: 214-229, 2015
| null | null |
q-bio.OT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The temporal aspect of a model of pupal dehydration is improved upon. The
observed dependence of pupal transpiration on time is attributed to an
alternation between two, essential modes, for which the deposition of a thin,
pupal skin inside the puparium and its subsequent demise are thought to be
responsible. For each mode of transpiration, the results of the Bursell (1958)
investigation into pupal dehydration are used as a rudimentary data set. These
data are generalised to all temperatures and humidities by invoking the
property of multiplicative separability. The problem, then, is that as the
temperature varies with time, so does the metabolism, and the developmental
stages to which the model data pertain must necessarily warp. The
puparial-duration formula of Phelps and Burrows (1969) and Hargrove (2004) is
exploited to facilitate a mapping between the constant-temperature time domain
of the data and that of some, more general case at hand. The resulting,
Glossina morsitans model is extrapolated to other species using their relative
surface areas, their relative protected and unprotected transpiration rates and
their different fourth instar excretions (drawing, to a lesser extent, from the
data of Buxton and Lewis, 1934). In this way the problem of pupal dehydration
is formulated as a series of integrals and the consequent survival can be
predicted. The discovery of a distinct definition for hygrophilic species,
within the formulation, prompts the investigation of the hypothetical effect of
a two-day heat wave on pupae. This leads to the conclusion that the
classification of species as hygrophilic, mesophilic and xerophilic is largely
true only in so much as their third and fourth instars are and, possibly, the
hours shortly before eclosion.
|
[
{
"created": "Sat, 10 Aug 2013 05:14:22 GMT",
"version": "v1"
},
{
"created": "Fri, 9 May 2014 19:11:14 GMT",
"version": "v2"
},
{
"created": "Tue, 19 May 2015 14:59:27 GMT",
"version": "v3"
}
] |
2015-05-20
|
[
[
"Childs",
"S. J.",
""
]
] |
The temporal aspect of a model of pupal dehydration is improved upon. The observed dependence of pupal transpiration on time is attributed to an alternation between two, essential modes, for which the deposition of a thin, pupal skin inside the puparium and its subsequent demise are thought to be responsible. For each mode of transpiration, the results of the Bursell (1958) investigation into pupal dehydration are used as a rudimentary data set. These data are generalised to all temperatures and humidities by invoking the property of multiplicative separability. The problem, then, is that as the temperature varies with time, so does the metabolism, and the developmental stages to which the model data pertain must necessarily warp. The puparial-duration formula of Phelps and Burrows (1969) and Hargrove (2004) is exploited to facilitate a mapping between the constant-temperature time domain of the data and that of some, more general case at hand. The resulting, Glossina morsitans model is extrapolated to other species using their relative surface areas, their relative protected and unprotected transpiration rates and their different fourth instar excretions (drawing, to a lesser extent, from the data of Buxton and Lewis, 1934). In this way the problem of pupal dehydration is formulated as a series of integrals and the consequent survival can be predicted. The discovery of a distinct definition for hygrophilic species, within the formulation, prompts the investigation of the hypothetical effect of a two-day heat wave on pupae. This leads to the conclusion that the classification of species as hygrophilic, mesophilic and xerophilic is largely true only in so much as their third and fourth instars are and, possibly, the hours shortly before eclosion.
|
2308.09443
|
Veronique Bruyere
|
Thomas Brihaye and V\'eronique Bruy\`ere and Gaspard Reghem
|
Quantitative Reachability Stackelberg-Pareto Synthesis is
NEXPTIME-Complete
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we deepen the study of two-player Stackelberg games played on
graphs in which Player $0$ announces a strategy and Player $1$, having several
objectives, responds rationally by following plays providing him Pareto-optimal
payoffs given the strategy of Player $0$. The Stackelberg-Pareto synthesis
problem, asking whether Player $0$ can announce a strategy which satisfies his
objective, whatever the rational response of Player $1$, has been recently
investigated for $\omega$-regular objectives. We solve this problem for
weighted graph games and quantitative reachability objectives such that Player
$0$ wants to reach his target set with a total cost less than some given upper
bound. We show that it is NEXPTIME-complete, as for Boolean reachability
objectives.
|
[
{
"created": "Fri, 18 Aug 2023 10:17:18 GMT",
"version": "v1"
}
] |
2023-08-21
|
[
[
"Brihaye",
"Thomas",
""
],
[
"Bruyère",
"Véronique",
""
],
[
"Reghem",
"Gaspard",
""
]
] |
In this paper, we deepen the study of two-player Stackelberg games played on graphs in which Player $0$ announces a strategy and Player $1$, having several objectives, responds rationally by following plays providing him Pareto-optimal payoffs given the strategy of Player $0$. The Stackelberg-Pareto synthesis problem, asking whether Player $0$ can announce a strategy which satisfies his objective, whatever the rational response of Player $1$, has been recently investigated for $\omega$-regular objectives. We solve this problem for weighted graph games and quantitative reachability objectives such that Player $0$ wants to reach his target set with a total cost less than some given upper bound. We show that it is NEXPTIME-complete, as for Boolean reachability objectives.
|
2012.02628
|
Jonathan Cohen
|
Jonathan D. Cohen
|
A Mitigation Score for COVID-19
|
15 pages, 12 figures
| null | null | null |
q-bio.OT
|
http://creativecommons.org/licenses/by/4.0/
|
This note describes a simple score to indicate the effectiveness of
mitigation against infections of COVID-19 as observed by new case counts. The
score includes normalization, making comparisons across jurisdictions possible.
The smoothing employed provides robustness in the face of reporting vagaries
while retaining salient features of evolution, enabling a clearer picture for
decision makers and the public.
|
[
{
"created": "Wed, 2 Dec 2020 21:25:50 GMT",
"version": "v1"
}
] |
2020-12-07
|
[
[
"Cohen",
"Jonathan D.",
""
]
] |
This note describes a simple score to indicate the effectiveness of mitigation against infections of COVID-19 as observed by new case counts. The score includes normalization, making comparisons across jurisdictions possible. The smoothing employed provides robustness in the face of reporting vagaries while retaining salient features of evolution, enabling a clearer picture for decision makers and the public.
|
1504.04496
|
Jose Edelstein
|
Xian O. Camanho, Jose D. Edelstein, Andres Gomberoff, J. Anibal
Sierra-Garcia
|
On AdS to dS transitions in higher-curvature gravity
|
12 pages, 3 figures; v2: comments and references added
| null | null | null |
hep-th gr-qc
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the possible existence of gravitational phase transitions from AdS
to dS geometries in the context of higher-curvature gravities. We use
Lanczos-Gauss-Bonnet (LGB) theory with a positive cosmological constant as a
toy model. This theory has two maximally symmetric vacua with positive (dS) and
negative (AdS) constant curvature. We show that a phase transition from the AdS
vacuum to a dS black hole geometry takes place when the temperature reaches a
critical value. The transition is produced by nucleation of bubbles of the new
phase that expand afterwards. We claim that this phenomenon is not particular
to the model under study, and shall also be part of generic gravitational
theories with higher-curvature terms.
|
[
{
"created": "Fri, 17 Apr 2015 12:58:57 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Jul 2015 00:04:57 GMT",
"version": "v2"
}
] |
2015-07-28
|
[
[
"Camanho",
"Xian O.",
""
],
[
"Edelstein",
"Jose D.",
""
],
[
"Gomberoff",
"Andres",
""
],
[
"Sierra-Garcia",
"J. Anibal",
""
]
] |
We study the possible existence of gravitational phase transitions from AdS to dS geometries in the context of higher-curvature gravities. We use Lanczos-Gauss-Bonnet (LGB) theory with a positive cosmological constant as a toy model. This theory has two maximally symmetric vacua with positive (dS) and negative (AdS) constant curvature. We show that a phase transition from the AdS vacuum to a dS black hole geometry takes place when the temperature reaches a critical value. The transition is produced by nucleation of bubbles of the new phase that expand afterwards. We claim that this phenomenon is not particular to the model under study, and shall also be part of generic gravitational theories with higher-curvature terms.
|
hep-th/9505126
|
Matthias Heyssler
|
Matthias Heyssler (Department of Physics, Durham) Alex C. Kalloniatis
(Max-Planck-Institut f\"ur Kernphysik,Heidelberg)
|
Constituent Quark Picture out of QCD in two dimensions - on the
Light-Cone
|
13 pages, uses elsart.sty 2 Postscript figures, uses epsf.sty
'elsart.sty' and 'elsart12.sty' are available via anonymous-ftp at
ftp://ftp.tex.ac.uk/tex-archive/macros/latex/contrib/supported/elsevier
|
Phys.Lett. B354 (1995) 453-459
|
10.1016/0370-2693(95)00654-4
|
MPIH-V25-1994
|
hep-th hep-ph
| null |
Using DLCQ as a nonperturbative method, we test Fock-space truncations in
${\rm QCD}_{1+1}$ by studying the mass spectra of hadrons in colour SU(2) and
SU(3) at finite harmonic resolution $K$. We include $q\bar q q\bar q$ states
for mesons and up to $qqq q\bar q$ states for baryons. With this truncation, we
give `predictions' for the masses of the first five states where finite $K$
effects are minimal.
|
[
{
"created": "Sat, 20 May 1995 01:54:21 GMT",
"version": "v1"
}
] |
2009-10-28
|
[
[
"Heyssler",
"Matthias",
"",
"Department of Physics, Durham"
],
[
"Kalloniatis",
"Alex C.",
"",
"Max-Planck-Institut für Kernphysik,Heidelberg"
]
] |
Using DLCQ as a nonperturbative method, we test Fock-space truncations in ${\rm QCD}_{1+1}$ by studying the mass spectra of hadrons in colour SU(2) and SU(3) at finite harmonic resolution $K$. We include $q\bar q q\bar q$ states for mesons and up to $qqq q\bar q$ states for baryons. With this truncation, we give `predictions' for the masses of the first five states where finite $K$ effects are minimal.
|
1701.02294
|
Marc P. Bellon
|
Marc P. Bellon
|
Alien Calculus and non perturbative effects in Quantum Field Theory
|
4 pages, double-column
|
Front. Phys. (2016) 11: 113201
|
10.1007/s11467-016-0580-7
| null |
hep-th hep-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In many domains of physics, methods are needed to deal with non-perturbative
aspects. I want here to argue that a good approach is to work on the Borel
transforms of the quantities of interest, the singularities of which give
non-perturbative contributions. These singularities in many cases can be
largely determined by using the alien calculus developed by Jean \'Ecalle. My
main example will be the two point function of a massless theory given as a
solution of a renormalization group equation.
|
[
{
"created": "Mon, 9 Jan 2017 18:33:58 GMT",
"version": "v1"
}
] |
2017-01-10
|
[
[
"Bellon",
"Marc P.",
""
]
] |
In many domains of physics, methods are needed to deal with non-perturbative aspects. I want here to argue that a good approach is to work on the Borel transforms of the quantities of interest, the singularities of which give non-perturbative contributions. These singularities in many cases can be largely determined by using the alien calculus developed by Jean \'Ecalle. My main example will be the two point function of a massless theory given as a solution of a renormalization group equation.
|
hep-th/9804204
|
H. W. Braden
|
H. W. Braden, V. Varela
|
The Bogomolny Equations and Solutions for Einstein-Yang-Mills-Dilaton-
$\sigma$ Models
|
24 pages LaTex, 1 Figure, revised text for publication
|
Phys.Rev.D58:124020,1998
|
10.1103/PhysRevD.58.124020
|
MS-98-006
|
hep-th gr-qc
| null |
We derive Bogomolny equations for an Einstein-Yang-Mills-dilaton-$\sigma$
model (EYMD-$\sigma$) on a static spacetime, showing that the Einstein
equations are satisfied if and only if the associated (conformally scaled)
three-metric is flat. These are precisely the static metrics for which
super-covariantly constant spinors exist. We study some general properties of
these equations and then consider the problem of obtaining axially symmetric
solutions for the gauge group SU(2).
|
[
{
"created": "Thu, 30 Apr 1998 16:27:45 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Sep 1998 14:16:21 GMT",
"version": "v2"
}
] |
2008-11-26
|
[
[
"Braden",
"H. W.",
""
],
[
"Varela",
"V.",
""
]
] |
We derive Bogomolny equations for an Einstein-Yang-Mills-dilaton-$\sigma$ model (EYMD-$\sigma$) on a static spacetime, showing that the Einstein equations are satisfied if and only if the associated (conformally scaled) three-metric is flat. These are precisely the static metrics for which super-covariantly constant spinors exist. We study some general properties of these equations and then consider the problem of obtaining axially symmetric solutions for the gauge group SU(2).
|
hep-th/0408134
|
Michael Walker
|
M.L.Walker
|
Three point SUSY Ward identities without Ghosts
|
20 pages, no figures, typos corrected
|
JHEP0412:011,2004
|
10.1088/1126-6708/2004/12/011
| null |
hep-th
| null |
We utilise a non-local gauge transform which renders the entire action of
SUSY QED invariant and respects the SUSY algebra modulo the gauge-fixing
condition, to derive two- and three-point ghost-free SUSY Ward identities in
SUSY QED. We use the cluster decomposition principle to find the Green's
function Ward identities and then take linear combinations of the latter to
derive identities for the proper functions.
|
[
{
"created": "Wed, 18 Aug 2004 12:30:50 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Sep 2004 01:29:41 GMT",
"version": "v2"
}
] |
2008-11-26
|
[
[
"Walker",
"M. L.",
""
]
] |
We utilise a non-local gauge transform which renders the entire action of SUSY QED invariant and respects the SUSY algebra modulo the gauge-fixing condition, to derive two- and three-point ghost-free SUSY Ward identities in SUSY QED. We use the cluster decomposition principle to find the Green's function Ward identities and then take linear combinations of the latter to derive identities for the proper functions.
|
1506.09215
|
Simon Lacoste-Julien
|
Jean-Baptiste Alayrac, Piotr Bojanowski, Nishant Agrawal, Josef Sivic,
Ivan Laptev, Simon Lacoste-Julien
|
Unsupervised Learning from Narrated Instruction Videos
|
Appears in: 2016 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR 2016). 21 pages
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the problem of automatically learning the main steps to complete a
certain task, such as changing a car tire, from a set of narrated instruction
videos. The contributions of this paper are three-fold. First, we develop a new
unsupervised learning approach that takes advantage of the complementary nature
of the input video and the associated narration. The method solves two
clustering problems, one in text and one in video, applied one after the other
and linked by joint constraints to obtain a single coherent sequence of steps
in both modalities. Second, we collect and annotate a new challenging dataset
of real-world instruction videos from the Internet. The dataset contains about
800,000 frames for five different tasks that include complex interactions
between people and objects, and are captured in a variety of indoor and outdoor
settings. Third, we experimentally demonstrate that the proposed method can
automatically discover, in an unsupervised manner, the main steps to achieve
the task and locate the steps in the input videos.
|
[
{
"created": "Tue, 30 Jun 2015 19:55:37 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Jul 2015 16:43:36 GMT",
"version": "v2"
},
{
"created": "Mon, 30 Nov 2015 18:10:53 GMT",
"version": "v3"
},
{
"created": "Tue, 28 Jun 2016 18:43:37 GMT",
"version": "v4"
}
] |
2016-06-29
|
[
[
"Alayrac",
"Jean-Baptiste",
""
],
[
"Bojanowski",
"Piotr",
""
],
[
"Agrawal",
"Nishant",
""
],
[
"Sivic",
"Josef",
""
],
[
"Laptev",
"Ivan",
""
],
[
"Lacoste-Julien",
"Simon",
""
]
] |
We address the problem of automatically learning the main steps to complete a certain task, such as changing a car tire, from a set of narrated instruction videos. The contributions of this paper are three-fold. First, we develop a new unsupervised learning approach that takes advantage of the complementary nature of the input video and the associated narration. The method solves two clustering problems, one in text and one in video, applied one after the other and linked by joint constraints to obtain a single coherent sequence of steps in both modalities. Second, we collect and annotate a new challenging dataset of real-world instruction videos from the Internet. The dataset contains about 800,000 frames for five different tasks that include complex interactions between people and objects, and are captured in a variety of indoor and outdoor settings. Third, we experimentally demonstrate that the proposed method can automatically discover, in an unsupervised manner, the main steps to achieve the task and locate the steps in the input videos.
|
1802.01089
|
Eduard Paul Enoiu
|
Raluca Marinescu, Predrag Filipovikj, Eduard Paul Enoiu, Jonatan
Larsson, Cristina Seceleanu
|
An Energy-aware Mutation Testing Framework for EAST-ADL Architectural
Models
|
Version submitted to the 29th Nordic Workshop on Programming Theory
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Early design artifacts of embedded systems, such as architectural models,
represent convenient abstractions for reasoning about a system's structure and
functionality. One such example is the Electronic Architecture and Software
Tools-Architecture Description Language (EAST-ADL), a domain-specific
architectural language that targets the automotive industry. EAST-ADL is used
to represent both hardware and software elements, as well as related
extra-functional information (e.g., timing properties, triggering information,
resource consumption). Testing architectural models is an important activity in
engineering large-scale industrial systems, which sparks a growing research
interest. The main contributions of this paper are: (i) an approach for
creating energy-related mutants for EAST-ADL architectural models, (ii) a
method for overcoming the equivalent mutant problem (i.e., the problem of
finding a test case which can distinguish the observable behavior of a mutant
from the original one), (iii) a test generation approach based on UPPAAL
Statistical Model Checker (SMC), and (iv) a test selection criterion based on
mutation analysis using our MATS tool.
|
[
{
"created": "Sun, 4 Feb 2018 08:21:21 GMT",
"version": "v1"
}
] |
2018-02-06
|
[
[
"Marinescu",
"Raluca",
""
],
[
"Filipovikj",
"Predrag",
""
],
[
"Enoiu",
"Eduard Paul",
""
],
[
"Larsson",
"Jonatan",
""
],
[
"Seceleanu",
"Cristina",
""
]
] |
Early design artifacts of embedded systems, such as architectural models, represent convenient abstractions for reasoning about a system's structure and functionality. One such example is the Electronic Architecture and Software Tools-Architecture Description Language (EAST-ADL), a domain-specific architectural language that targets the automotive industry. EAST-ADL is used to represent both hardware and software elements, as well as related extra-functional information (e.g., timing properties, triggering information, resource consumption). Testing architectural models is an important activity in engineering large-scale industrial systems, which sparks a growing research interest. The main contributions of this paper are: (i) an approach for creating energy-related mutants for EAST-ADL architectural models, (ii) a method for overcoming the equivalent mutant problem (i.e., the problem of finding a test case which can distinguish the observable behavior of a mutant from the original one), (iii) a test generation approach based on UPPAAL Statistical Model Checker (SMC), and (iv) a test selection criterion based on mutation analysis using our MATS tool.
|
hep-th/9208072
|
Francois Gieres
|
Francois Gieres and Stefan Theisen
|
Superconformally covariant operators and super W algebras
|
23 pages, LATEX, MPI-Ph/92-66 and KA-THEP-7/92
|
J.Math.Phys. 34 (1993) 5964-5985
|
10.1063/1.530243
| null |
hep-th
| null |
We study superdifferential operators of order $2n+1$ which are covariant with
respect to superconformal changes of coordinates on a compact super Riemann
surface. We show that all such operators arise from super M\"obius covariant
ones. A canonical matrix representation is presented and applications to
classical super W algebras are discussed.
|
[
{
"created": "Thu, 27 Aug 1992 21:49:16 GMT",
"version": "v1"
}
] |
2009-10-22
|
[
[
"Gieres",
"Francois",
""
],
[
"Theisen",
"Stefan",
""
]
] |
We study superdifferential operators of order $2n+1$ which are covariant with respect to superconformal changes of coordinates on a compact super Riemann surface. We show that all such operators arise from super M\"obius covariant ones. A canonical matrix representation is presented and applications to classical super W algebras are discussed.
|
hep-th/9508075
|
Fosco Cesar Daniel
|
D. G. Barcy, C. D. Fosco and L. E. Oxman
|
On bosonization in $3$ dimensions
|
11 pages, Latex, omitted references added, typos corrected
|
Phys.Lett. B375 (1996) 267-272
|
10.1016/0370-2693(96)00224-9
| null |
hep-th cond-mat
| null |
A recently proposed path-integral bosonization scheme for massive fermions in
$3$ dimensions is extended by keeping the full momentum-dependence of the
one-loop vacuum polarization tensor. This makes it possible to discuss both the
massive and massless fermion cases on an equal footing, and moreover the
results it yields for massless fermions are consistent with the ones of
another, seemingly different, canonical quantization approach to the problem of
bosonization for a massless fermionic field in $3$ dimensions.
|
[
{
"created": "Wed, 16 Aug 1995 14:33:30 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Aug 1995 15:56:03 GMT",
"version": "v2"
}
] |
2009-10-28
|
[
[
"Barcy",
"D. G.",
""
],
[
"Fosco",
"C. D.",
""
],
[
"Oxman",
"L. E.",
""
]
] |
A recently proposed path-integral bosonization scheme for massive fermions in $3$ dimensions is extended by keeping the full momentum-dependence of the one-loop vacuum polarization tensor. This makes it possible to discuss both the massive and massless fermion cases on an equal footing, and moreover the results it yields for massless fermions are consistent with the ones of another, seemingly different, canonical quantization approach to the problem of bosonization for a massless fermionic field in $3$ dimensions.
|
1210.2255
|
Yu-tin Huang
|
Yu-tin Huang and Henrik Johansson
|
Equivalent D=3 Supergravity Amplitudes from Double Copies of
Three-Algebra and Two-Algebra Gauge Theories
|
5 pages, published version in PRL
| null |
10.1103/PhysRevLett.110.171601
|
CERN-PH-TH/2012-254, MCTP-12-22, Saclay--IPhT--T12/076
|
hep-th
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that three-dimensional supergravity amplitudes can be obtained as
double copies of either three-algebra super-Chern-Simons matter theory or that
of two-algebra super-Yang-Mills theory, when either theory is organized to
display the color-kinematics duality. We prove that only helicity-conserving
four-dimensional gravity amplitudes have nonvanishing descendants when reduced
to three dimensions; implying the vanishing of odd-multiplicity S-matrix
elements, in agreement with Chern-Simons matter theory. We explicitly verify
the double-copy correspondence at four and six points for N=12,10,8
supergravity theories and discuss its validity for all multiplicity.
|
[
{
"created": "Mon, 8 Oct 2012 12:14:22 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Apr 2013 02:00:46 GMT",
"version": "v2"
}
] |
2013-05-01
|
[
[
"Huang",
"Yu-tin",
""
],
[
"Johansson",
"Henrik",
""
]
] |
We show that three-dimensional supergravity amplitudes can be obtained as double copies of either three-algebra super-Chern-Simons matter theory or that of two-algebra super-Yang-Mills theory, when either theory is organized to display the color-kinematics duality. We prove that only helicity-conserving four-dimensional gravity amplitudes have nonvanishing descendants when reduced to three dimensions; implying the vanishing of odd-multiplicity S-matrix elements, in agreement with Chern-Simons matter theory. We explicitly verify the double-copy correspondence at four and six points for N=12,10,8 supergravity theories and discuss its validity for all multiplicity.
|
1905.12945
|
Paolo Di Lillo
|
Paolo Di Lillo, Stefano Chiaverini, Gianluca Antonelli
|
Handling robot constraints within a Set-Based Multi-Task Priority
Inverse Kinematics Framework
| null |
Int. Conf. Rob. Aut. (2019)
|
10.1109/ICRA.2019.8793625
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Set-Based Multi-Task Priority is a recent framework to handle inverse
kinematics for redundant structures. Both equality tasks, i.e., control
objectives to be driven to a desired value, and set-based tasks, i.e., control
objectives to be satisfied with a set/range of values can be addressed in a
rigorous manner within a priority framework. In addition, optimization tasks,
driven by the gradient of a proper function, may be considered as well, usually
as lower priority tasks. In this paper the proper design of the tasks, their
priority and the use of a Set-Based Multi-Task Priority framework is proposed
in order to handle several constraints simultaneously in real-time. It is shown
that safety-related tasks such as, e.g., joint limits or kinematic singularity,
may be properly handled by considering them both at a higher priority as
set-based tasks and at a lower priority within a proper optimization functional.
Experimental results on a 7DOF Jaco$^2$
|
[
{
"created": "Thu, 30 May 2019 10:19:04 GMT",
"version": "v1"
}
] |
2020-07-08
|
[
[
"Di Lillo",
"Paolo",
""
],
[
"Chiaverini",
"Stefano",
""
],
[
"Antonelli",
"Gianluca",
""
]
] |
Set-Based Multi-Task Priority is a recent framework to handle inverse kinematics for redundant structures. Both equality tasks, i.e., control objectives to be driven to a desired value, and set-based tasks, i.e., control objectives to be satisfied with a set/range of values can be addressed in a rigorous manner within a priority framework. In addition, optimization tasks, driven by the gradient of a proper function, may be considered as well, usually as lower priority tasks. In this paper the proper design of the tasks, their priority and the use of a Set-Based Multi-Task Priority framework is proposed in order to handle several constraints simultaneously in real-time. It is shown that safety-related tasks such as, e.g., joint limits or kinematic singularity, may be properly handled by considering them both at a higher priority as set-based tasks and at a lower priority within a proper optimization functional. Experimental results on a 7DOF Jaco$^2$
|
2404.16734
|
Enguerrand Prebet
|
Enguerrand Prebet, Andr\'e Platzer
|
Uniform Substitution for Differential Refinement Logic
|
IJCAR 2024
|
Automated Reasoning, 12th International Joint Conference, IJCAR
2024
|
10.1007/978-3-031-63501-4_11
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a uniform substitution calculus for differential
refinement logic dRL. The logic dRL extends the differential dynamic logic dL
such that one can simultaneously reason about properties of and relations
between hybrid systems. Refinements are useful e.g. for simplifying proofs by
relating a concrete hybrid system to an abstract one from which the property
can be proved more easily. Uniform substitution is the key to parsimonious
prover microkernels. It enables the verbatim use of single axiom formulas
instead of axiom schemata with soundness-critical side conditions scattered
across the proof calculus. The uniform substitution rule can then be used to
instantiate all axioms soundly. Access to differential variables in dRL enables
more control over the notion of refinement, which is shown to be decidable on a
fragment of hybrid programs.
|
[
{
"created": "Thu, 25 Apr 2024 16:43:25 GMT",
"version": "v1"
},
{
"created": "Fri, 31 May 2024 09:00:20 GMT",
"version": "v2"
}
] |
2024-07-11
|
[
[
"Prebet",
"Enguerrand",
""
],
[
"Platzer",
"André",
""
]
] |
This paper introduces a uniform substitution calculus for differential refinement logic dRL. The logic dRL extends the differential dynamic logic dL such that one can simultaneously reason about properties of and relations between hybrid systems. Refinements are useful e.g. for simplifying proofs by relating a concrete hybrid system to an abstract one from which the property can be proved more easily. Uniform substitution is the key to parsimonious prover microkernels. It enables the verbatim use of single axiom formulas instead of axiom schemata with soundness-critical side conditions scattered across the proof calculus. The uniform substitution rule can then be used to instantiate all axioms soundly. Access to differential variables in dRL enables more control over the notion of refinement, which is shown to be decidable on a fragment of hybrid programs.
|
2212.09588
|
Mingzhu Cai
|
Mingzhu Cai, Siqi Bao, Xin Tian, Huang He, Fan Wang, Hua Wu
|
Query Enhanced Knowledge-Intensive Conversation via Unsupervised Joint
Modeling
|
Accepted for publication at ACL2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose an unsupervised query enhanced approach for
knowledge-intensive conversations, namely QKConv. There are three modules in
QKConv: a query generator, an off-the-shelf knowledge selector, and a response
generator. QKConv is optimized through joint training, which produces the
response by exploring multiple candidate queries and leveraging corresponding
selected knowledge. The joint training solely relies on the dialogue context
and target response, getting exempt from extra query annotations or knowledge
provenances. To evaluate the effectiveness of the proposed QKConv, we conduct
experiments on three representative knowledge-intensive conversation datasets:
conversational question-answering, task-oriented dialogue, and
knowledge-grounded conversation. Experimental results reveal that QKConv
performs better than all unsupervised methods across three datasets and
achieves competitive performance compared to supervised methods.
|
[
{
"created": "Mon, 19 Dec 2022 16:21:05 GMT",
"version": "v1"
},
{
"created": "Fri, 26 May 2023 11:02:13 GMT",
"version": "v2"
}
] |
2023-05-29
|
[
[
"Cai",
"Mingzhu",
""
],
[
"Bao",
"Siqi",
""
],
[
"Tian",
"Xin",
""
],
[
"He",
"Huang",
""
],
[
"Wang",
"Fan",
""
],
[
"Wu",
"Hua",
""
]
] |
In this paper, we propose an unsupervised query enhanced approach for knowledge-intensive conversations, namely QKConv. There are three modules in QKConv: a query generator, an off-the-shelf knowledge selector, and a response generator. QKConv is optimized through joint training, which produces the response by exploring multiple candidate queries and leveraging corresponding selected knowledge. The joint training solely relies on the dialogue context and target response, getting exempt from extra query annotations or knowledge provenances. To evaluate the effectiveness of the proposed QKConv, we conduct experiments on three representative knowledge-intensive conversation datasets: conversational question-answering, task-oriented dialogue, and knowledge-grounded conversation. Experimental results reveal that QKConv performs better than all unsupervised methods across three datasets and achieves competitive performance compared to supervised methods.
|
2311.07559
|
Conrad Zimmerman
|
Conrad Zimmerman, Jenna DiVincenzo, Jonathan Aldrich
|
Sound Gradual Verification with Symbolic Execution
|
Supplementary material; to be published by Principles of Programming
Languages 2024
| null |
10.1145/3632927
| null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Gradual verification, which supports explicitly partial specifications and
verifies them with a combination of static and dynamic checks, makes
verification more incremental and provides earlier feedback to developers.
While an abstract, weakest precondition-based approach to gradual verification
was previously proven sound, the approach did not provide sufficient guidance
for implementation and optimization of the required run-time checks. More
recently, gradual verification was implemented using symbolic execution
techniques, but the soundness of the approach (as with related static checkers
based on implicit dynamic frames) was an open question. This paper puts
practical gradual verification on a sound footing with a formalization of
symbolic execution, optimized run-time check generation, and run-time
execution. We prove our approach is sound; our proof also covers a core subset
of the Viper tool, for which we are aware of no previous soundness result. Our
formalization enabled us to find a soundness bug in an implemented gradual
verification tool and describe the fix necessary to make it sound.
|
[
{
"created": "Mon, 13 Nov 2023 18:52:27 GMT",
"version": "v1"
}
] |
2023-11-14
|
[
[
"Zimmerman",
"Conrad",
""
],
[
"DiVincenzo",
"Jenna",
""
],
[
"Aldrich",
"Jonathan",
""
]
] |
Gradual verification, which supports explicitly partial specifications and verifies them with a combination of static and dynamic checks, makes verification more incremental and provides earlier feedback to developers. While an abstract, weakest precondition-based approach to gradual verification was previously proven sound, the approach did not provide sufficient guidance for implementation and optimization of the required run-time checks. More recently, gradual verification was implemented using symbolic execution techniques, but the soundness of the approach (as with related static checkers based on implicit dynamic frames) was an open question. This paper puts practical gradual verification on a sound footing with a formalization of symbolic execution, optimized run-time check generation, and run-time execution. We prove our approach is sound; our proof also covers a core subset of the Viper tool, for which we are aware of no previous soundness result. Our formalization enabled us to find a soundness bug in an implemented gradual verification tool and describe the fix necessary to make it sound.
|
1611.03525
|
Antonia Micol Frassino
|
Antonia M. Frassino, Robert B. Mann, Fil Simovic
|
Critical points in Lovelock Black Holes
|
8 pages, 4 figures. Contribution to the Proceedings of the Second
Karl Schwarzschild Meeting, Frankfurt, 20-24 July 2015
| null | null | null |
hep-th gr-qc
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We review some of the results obtained by introducing a thermodynamic
pressure via the cosmological constant in a class of higher curvature theories
known as Lovelock gravity. In particular, we focus on a specific relation
between the higher-order Lovelock couplings that introduces a peculiar isolated
critical point for hyperbolic black holes characterized by non-standard
critical exponents.
|
[
{
"created": "Thu, 10 Nov 2016 21:47:19 GMT",
"version": "v1"
}
] |
2016-11-14
|
[
[
"Frassino",
"Antonia M.",
""
],
[
"Mann",
"Robert B.",
""
],
[
"Simovic",
"Fil",
""
]
] |
We review some of the results obtained by introducing a thermodynamic pressure via the cosmological constant in a class of higher curvature theories known as Lovelock gravity. In particular, we focus on a specific relation between the higher-order Lovelock couplings that introduces a peculiar isolated critical point for hyperbolic black holes characterized by non-standard critical exponents.
|
1501.03775
|
Cesar Ag\'on
|
Cesar A. Agon and Howard J. Schnitzer
|
Holographic Mutual Information at small separations
|
13 pages, 1 figure, some references added
| null | null |
BRX-TH-6291
|
hep-th
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The holographic mutual information for the small separation of two circles
and two strips in 2+1 dimensional space-time is considered based on the known
exact minimal surfaces spanning the boundaries on AdS4. The results suggest a
universality for the leading term in the short-distance expansion of
holographic mutual information. A conjecture for a similar result for d > 2 is
also presented, as well as comments about the analogous expansion in conformal
field theory.
|
[
{
"created": "Thu, 15 Jan 2015 18:55:42 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Feb 2015 19:48:24 GMT",
"version": "v2"
},
{
"created": "Fri, 19 Aug 2016 22:10:57 GMT",
"version": "v3"
}
] |
2016-08-23
|
[
[
"Agon",
"Cesar A.",
""
],
[
"Schnitzer",
"Howard J.",
""
]
] |
The holographic mutual information for the small separation of two circles and two strips in 2+1 dimensional space-time is considered based on the known exact minimal surfaces spanning the boundaries on AdS4. The results suggest a universality for the leading term in the short-distance expansion of holographic mutual information. A conjecture for a similar result for d > 2 is also presented, as well as comments about the analogous expansion in conformal field theory.
|
2107.05746
|
Binghui Peng
|
Thomas Chen, Xi Chen, Binghui Peng, Mihalis Yannakakis
|
Computational Hardness of the Hylland-Zeckhauser Scheme
| null | null | null | null |
cs.GT cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
We study the complexity of the classic Hylland-Zeckhauser scheme [HZ'79] for
one-sided matching markets. We show that the problem of finding an
$\epsilon$-approximate equilibrium in the HZ scheme is PPAD-hard, and this
holds even when $\epsilon$ is polynomially small and when each agent has no
more than four distinct utility values. Our hardness result, when combined with
the PPAD membership result of [VY'21], resolves the approximation complexity of
the HZ scheme. We also show that the problem of approximating the optimal
social welfare (the weight of the matching) achievable by HZ equilibria within
a certain constant factor is NP-hard.
|
[
{
"created": "Mon, 12 Jul 2021 21:31:28 GMT",
"version": "v1"
}
] |
2021-07-14
|
[
[
"Chen",
"Thomas",
""
],
[
"Chen",
"Xi",
""
],
[
"Peng",
"Binghui",
""
],
[
"Yannakakis",
"Mihalis",
""
]
] |
We study the complexity of the classic Hylland-Zeckhauser scheme [HZ'79] for one-sided matching markets. We show that the problem of finding an $\epsilon$-approximate equilibrium in the HZ scheme is PPAD-hard, and this holds even when $\epsilon$ is polynomially small and when each agent has no more than four distinct utility values. Our hardness result, when combined with the PPAD membership result of [VY'21], resolves the approximation complexity of the HZ scheme. We also show that the problem of approximating the optimal social welfare (the weight of the matching) achievable by HZ equilibria within a certain constant factor is NP-hard.
|
2107.00353
|
Jeonghyun Byun
|
Jeonghyun Byun, Dongjae Lee, Hoseong Seo, Inkyu Jang, Jeongjun Choi,
H. Jin Kim
|
Stability and Robustness Analysis of Plug-Pulling using an Aerial
Manipulator
|
to be presented in 2021 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS), Prague, Czech Republic, 2021
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, an autonomous aerial manipulation task of pulling a plug out
of an electric socket is conducted, where maintaining the stability and
robustness is challenging due to sudden disappearance of a large interaction
force. The abrupt change in the dynamical model before and after the separation
of the plug can cause destabilization or mission failure. To accomplish aerial
plug-pulling, we employ the concept of hybrid automata to divide the task into
three operative modes, i.e., wire-pulling, stabilizing, and free-flight. Also, a
strategy for trajectory generation and a design of disturbance-observer-based
controllers for each operative mode are presented. Furthermore, the theory of
hybrid automata is used to prove the stability and robustness during the mode
transition. We validate the proposed trajectory generation and control method
by an actual wire-pulling experiment with a multirotor-based aerial
manipulator.
|
[
{
"created": "Thu, 1 Jul 2021 10:36:27 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Jul 2021 03:05:29 GMT",
"version": "v2"
}
] |
2021-07-07
|
[
[
"Byun",
"Jeonghyun",
""
],
[
"Lee",
"Dongjae",
""
],
[
"Seo",
"Hoseong",
""
],
[
"Jang",
"Inkyu",
""
],
[
"Choi",
"Jeongjun",
""
],
[
"Kim",
"H. Jin",
""
]
] |
In this paper, an autonomous aerial manipulation task of pulling a plug out of an electric socket is conducted, where maintaining the stability and robustness is challenging due to sudden disappearance of a large interaction force. The abrupt change in the dynamical model before and after the separation of the plug can cause destabilization or mission failure. To accomplish aerial plug-pulling, we employ the concept of hybrid automata to divide the task into three operative modes, i.e., wire-pulling, stabilizing, and free-flight. Also, a strategy for trajectory generation and a design of disturbance-observer-based controllers for each operative mode are presented. Furthermore, the theory of hybrid automata is used to prove the stability and robustness during the mode transition. We validate the proposed trajectory generation and control method by an actual wire-pulling experiment with a multirotor-based aerial manipulator.
|
2305.06472
|
Huan Wang
|
Yan-Fu Li, Huan Wang, Muxia Sun
|
ChatGPT-Like Large-Scale Foundation Models for Prognostics and Health
Management: A Survey and Roadmaps
|
55 pages, 10 figures
| null | null | null |
cs.LG cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prognostics and health management (PHM) technology plays a critical role in
industrial production and equipment maintenance by identifying and predicting
possible equipment failures and damages, thereby allowing necessary maintenance
measures to be taken to enhance equipment service life and reliability while
reducing production costs and downtime. In recent years, PHM technology based
on artificial intelligence (AI) has made remarkable achievements in the context
of the industrial IoT and big data, and it is widely used in various
industries, such as railway, energy, and aviation, for condition monitoring,
fault prediction, and health management. The emergence of large-scale
foundation models (LSF-Models) such as ChatGPT and DALL-E marks the entry of
AI into a new era of AI-2.0 from AI-1.0, where deep models have rapidly evolved
from a research paradigm of single-modal, single-task, and limited-data to a
multi-modal, multi-task, massive data, and super-large model paradigm. ChatGPT
represents a landmark achievement in this research paradigm, offering hope for
general artificial intelligence due to its highly intelligent natural language
understanding ability. However, the PHM field lacks a consensus on how to
respond to this significant change in the AI field, and a systematic review and
roadmap is required to elucidate future development directions. To fill this
gap, this paper systematically expounds on the key components and latest
developments of LSF-Models. Then, we systematically answer how to build the
LSF-Model applicable to PHM tasks and outlined the challenges and future
development roadmaps for this research paradigm.
|
[
{
"created": "Wed, 10 May 2023 21:37:44 GMT",
"version": "v1"
},
{
"created": "Fri, 12 May 2023 10:41:35 GMT",
"version": "v2"
}
] |
2023-05-15
|
[
[
"Li",
"Yan-Fu",
""
],
[
"Wang",
"Huan",
""
],
[
"Sun",
"Muxia",
""
]
] |
Prognostics and health management (PHM) technology plays a critical role in industrial production and equipment maintenance by identifying and predicting possible equipment failures and damages, thereby allowing necessary maintenance measures to be taken to enhance equipment service life and reliability while reducing production costs and downtime. In recent years, PHM technology based on artificial intelligence (AI) has made remarkable achievements in the context of the industrial IoT and big data, and it is widely used in various industries, such as railway, energy, and aviation, for condition monitoring, fault prediction, and health management. The emergence of large-scale foundation models (LSF-Models) such as ChatGPT and DALL-E marks the entry of AI into a new era of AI-2.0 from AI-1.0, where deep models have rapidly evolved from a research paradigm of single-modal, single-task, and limited-data to a multi-modal, multi-task, massive data, and super-large model paradigm. ChatGPT represents a landmark achievement in this research paradigm, offering hope for general artificial intelligence due to its highly intelligent natural language understanding ability. However, the PHM field lacks a consensus on how to respond to this significant change in the AI field, and a systematic review and roadmap is required to elucidate future development directions. To fill this gap, this paper systematically expounds on the key components and latest developments of LSF-Models. Then, we systematically answer how to build the LSF-Model applicable to PHM tasks and outline the challenges and future development roadmaps for this research paradigm.
|
1903.04205
|
Stefanos Leonardos Mr.
|
Vitalik Buterin and Daniel Reijsbergen and Stefanos Leonardos and
Georgios Piliouras
|
Incentives in Ethereum's Hybrid Casper Protocol
|
Conference version: IEEE International Conference on Blockchain and
Cryptocurrency (2019)
|
International Journal of Network Management, Vol. 30(5),
pp.:e2098, (2020)
|
10.1002/nem.2098
| null |
cs.CR cs.DC cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an overview of hybrid Casper the Friendly Finality Gadget (FFG): a
Proof-of-Stake checkpointing protocol overlaid onto Ethereum's Proof-of-Work
blockchain. We describe its core functionalities and reward scheme, and explore
its properties. Our findings indicate that Casper's implemented incentives
mechanism ensures liveness, while providing safety guarantees that improve over
standard Proof-of-Work protocols. Based on a minimal-impact implementation of
the protocol as a smart contract on the blockchain, we discuss additional
issues related to parametrisation, funding, throughput and network overhead and
detect potential limitations.
|
[
{
"created": "Mon, 11 Mar 2019 10:33:13 GMT",
"version": "v1"
},
{
"created": "Sat, 21 Mar 2020 09:29:36 GMT",
"version": "v2"
},
{
"created": "Sun, 18 Jul 2021 13:03:19 GMT",
"version": "v3"
}
] |
2021-07-20
|
[
[
"Buterin",
"Vitalik",
""
],
[
"Reijsbergen",
"Daniel",
""
],
[
"Leonardos",
"Stefanos",
""
],
[
"Piliouras",
"Georgios",
""
]
] |
We present an overview of hybrid Casper the Friendly Finality Gadget (FFG): a Proof-of-Stake checkpointing protocol overlaid onto Ethereum's Proof-of-Work blockchain. We describe its core functionalities and reward scheme, and explore its properties. Our findings indicate that Casper's implemented incentives mechanism ensures liveness, while providing safety guarantees that improve over standard Proof-of-Work protocols. Based on a minimal-impact implementation of the protocol as a smart contract on the blockchain, we discuss additional issues related to parametrisation, funding, throughput and network overhead and detect potential limitations.
|
2311.06973
|
Behrouz Azimian
|
Behrouz Azimian, Shiva Moshtagh, Anamitra Pal, Shanshan Ma
|
Analytical Verification of Performance of Deep Neural Network Based
Time-Synchronized Distribution System State Estimation
|
8 pages, in Journal of Modern Power Systems and Clean Energy, 2023
| null |
10.35833/MPCE.2023.000432
| null |
cs.LG cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, we demonstrated success of a time-synchronized state estimator
using deep neural networks (DNNs) for real-time unobservable distribution
systems. In this letter, we provide analytical bounds on the performance of
that state estimator as a function of perturbations in the input measurements.
It has already been shown that evaluating performance based on only the test
dataset might not effectively indicate a trained DNN's ability to handle input
perturbations. As such, we analytically verify robustness and trustworthiness
of DNNs to input perturbations by treating them as mixed-integer linear
programming (MILP) problems. The ability of batch normalization in addressing
the scalability limitations of the MILP formulation is also highlighted. The
framework is validated by performing time-synchronized distribution system
state estimation for a modified IEEE 34-node system and a real-world large
distribution system, both of which are incompletely observed by micro-phasor
measurement units.
|
[
{
"created": "Sun, 12 Nov 2023 22:01:34 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Feb 2024 21:40:46 GMT",
"version": "v2"
},
{
"created": "Tue, 20 Feb 2024 23:37:08 GMT",
"version": "v3"
},
{
"created": "Thu, 22 Feb 2024 16:33:10 GMT",
"version": "v4"
}
] |
2024-02-23
|
[
[
"Azimian",
"Behrouz",
""
],
[
"Moshtagh",
"Shiva",
""
],
[
"Pal",
"Anamitra",
""
],
[
"Ma",
"Shanshan",
""
]
] |
Recently, we demonstrated success of a time-synchronized state estimator using deep neural networks (DNNs) for real-time unobservable distribution systems. In this letter, we provide analytical bounds on the performance of that state estimator as a function of perturbations in the input measurements. It has already been shown that evaluating performance based on only the test dataset might not effectively indicate a trained DNN's ability to handle input perturbations. As such, we analytically verify robustness and trustworthiness of DNNs to input perturbations by treating them as mixed-integer linear programming (MILP) problems. The ability of batch normalization in addressing the scalability limitations of the MILP formulation is also highlighted. The framework is validated by performing time-synchronized distribution system state estimation for a modified IEEE 34-node system and a real-world large distribution system, both of which are incompletely observed by micro-phasor measurement units.
|
1007.0338
|
Dirk Kreimer
|
Spencer Bloch and Dirk Kreimer
|
Feynman amplitudes and Landau singularities for 1-loop graphs
|
31p
| null | null |
IHES M/10/20
|
hep-th math-ph math.AG math.MP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We use mixed Hodge structures to investigate Feynman amplitudes as functions
of external momenta and masses.
|
[
{
"created": "Fri, 2 Jul 2010 11:12:47 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Jul 2010 18:01:48 GMT",
"version": "v2"
}
] |
2010-07-27
|
[
[
"Bloch",
"Spencer",
""
],
[
"Kreimer",
"Dirk",
""
]
] |
We use mixed Hodge structures to investigate Feynman amplitudes as functions of external momenta and masses.
|
hep-th/0109133
|
Cai Rong-gen
|
Rong-Gen Cai
|
Gauss-Bonnet Black Holes in AdS Spaces
|
Revtex, 17 pages with 9 eps figures, v2: section II removed and
references added, the version to appear in PRD
|
Phys.Rev.D65:084014,2002
|
10.1103/PhysRevD.65.084014
| null |
hep-th gr-qc
| null |
We study thermodynamic properties and phase structures of topological black
holes in Einstein theory with a Gauss-Bonnet term and a negative cosmological
constant. The event horizon of these topological black holes can be a
hypersurface with positive, zero or negative constant curvature. When the
horizon is a zero curvature hypersurface, the thermodynamic properties of black
holes are completely the same as those of black holes without the Gauss-Bonnet
term, although the two black hole solutions are quite different. When the
horizon is a negative constant curvature hypersurface, the thermodynamic
properties of the Gauss-Bonnet black holes are qualitatively similar to those
of black holes without the Gauss-Bonnet term. When the event horizon is a
hypersurface with positive constant curvature, we find that the thermodynamic
properties and phase structures of black holes drastically depend on the
spacetime dimension $d$ and the coefficient of the Gauss-Bonnet term: when
$d\ge 6$, the properties of black holes are also qualitatively similar to the
case without the Gauss-Bonnet term, but when $d=5$, a new phase of locally
stable small black hole occurs under a critical value of the Gauss-Bonnet
coefficient, and beyond the critical value, the black holes are always
thermodynamically stable. However, the locally stable small black hole is not
globally preferred, instead a thermal anti-de Sitter space is globally
preferred. We find that there is a minimal horizon radius, below which the
Hawking-Page phase transition will not occur since for these black holes the
thermal anti-de Sitter space is always globally preferred.
|
[
{
"created": "Tue, 18 Sep 2001 08:43:03 GMT",
"version": "v1"
},
{
"created": "Sat, 12 Jan 2002 03:57:23 GMT",
"version": "v2"
}
] |
2011-05-05
|
[
[
"Cai",
"Rong-Gen",
""
]
] |
We study thermodynamic properties and phase structures of topological black holes in Einstein theory with a Gauss-Bonnet term and a negative cosmological constant. The event horizon of these topological black holes can be a hypersurface with positive, zero or negative constant curvature. When the horizon is a zero curvature hypersurface, the thermodynamic properties of black holes are completely the same as those of black holes without the Gauss-Bonnet term, although the two black hole solutions are quite different. When the horizon is a negative constant curvature hypersurface, the thermodynamic properties of the Gauss-Bonnet black holes are qualitatively similar to those of black holes without the Gauss-Bonnet term. When the event horizon is a hypersurface with positive constant curvature, we find that the thermodynamic properties and phase structures of black holes drastically depend on the spacetime dimension $d$ and the coefficient of the Gauss-Bonnet term: when $d\ge 6$, the properties of black holes are also qualitatively similar to the case without the Gauss-Bonnet term, but when $d=5$, a new phase of locally stable small black hole occurs under a critical value of the Gauss-Bonnet coefficient, and beyond the critical value, the black holes are always thermodynamically stable. However, the locally stable small black hole is not globally preferred, instead a thermal anti-de Sitter space is globally preferred. We find that there is a minimal horizon radius, below which the Hawking-Page phase transition will not occur since for these black holes the thermal anti-de Sitter space is always globally preferred.
|
2310.04131
|
Ilya Lvovich Shapiro
|
Ilya L. Shapiro
|
Antisymmetric Tensor Field and Cheshire Cat Smile of the Local Conformal
Symmetry
|
In v2 discussions were extended and added new references, especially
to the previous works about conformal operators acting on antisymmetric
tensor fields. Version to be submitted for publication. 20 pages, no figures
| null | null | null |
hep-th gr-qc
|
http://creativecommons.org/licenses/by/4.0/
|
The conformal version of the antisymmetric second-order tensor field in four
spacetime dimensions does not have the gauge invariance extensively discussed
in the literature for more than half a century. Our first observation is that,
when coupled to fermions, only the conformal version provides renormalizability
of the theory at the one-loop level. General considerations are supported by
the derivation of one-loop divergences in the fermionic sector, indicating good
chances for asymptotic freedom. The arguments concerning one-loop
renormalizability remain valid in the presence of self-interactions and the
masses for both fermion and antisymmetric tensor fields. In the flat spacetime
limit, even though the conformal symmetry is gone, renormalizability is
expected to hold in all loop orders.
|
[
{
"created": "Fri, 6 Oct 2023 10:00:58 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Oct 2023 23:50:37 GMT",
"version": "v2"
}
] |
2023-10-24
|
[
[
"Shapiro",
"Ilya L.",
""
]
] |
The conformal version of the antisymmetric second-order tensor field in four spacetime dimensions does not have the gauge invariance extensively discussed in the literature for more than half a century. Our first observation is that, when coupled to fermions, only the conformal version provides renormalizability of the theory at the one-loop level. General considerations are supported by the derivation of one-loop divergences in the fermionic sector, indicating good chances for asymptotic freedom. The arguments concerning one-loop renormalizability remain valid in the presence of self-interactions and the masses for both fermion and antisymmetric tensor fields. In the flat spacetime limit, even though the conformal symmetry is gone, renormalizability is expected to hold in all loop orders.
|
2301.12623
|
Hanlin Gu
|
Hanlin Gu, Jiahuan Luo, Yan Kang, Lixin Fan and Qiang Yang
|
FedPass: Privacy-Preserving Vertical Federated Deep Learning with
Adaptive Obfuscation
|
6 figures, 9 tables
| null | null | null |
cs.DC cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vertical federated learning (VFL) allows an active party with labeled feature
to leverage auxiliary features from the passive parties to improve model
performance. Concerns about the private feature and label leakage in both the
training and inference phases of VFL have drawn wide research attention. In
this paper, we propose a general privacy-preserving vertical federated deep
learning framework called FedPass, which leverages adaptive obfuscation to
protect the feature and label simultaneously. Strong privacy-preserving
capabilities about private features and labels are theoretically proved (in
Theorems 1 and 2). Extensive experimental results with different datasets and
network architectures also justify the superiority of FedPass against existing
methods in light of its near-optimal trade-off between privacy and model
performance.
|
[
{
"created": "Mon, 30 Jan 2023 02:36:23 GMT",
"version": "v1"
},
{
"created": "Tue, 31 Jan 2023 07:40:05 GMT",
"version": "v2"
}
] |
2023-02-01
|
[
[
"Gu",
"Hanlin",
""
],
[
"Luo",
"Jiahuan",
""
],
[
"Kang",
"Yan",
""
],
[
"Fan",
"Lixin",
""
],
[
"Yang",
"Qiang",
""
]
] |
Vertical federated learning (VFL) allows an active party with labeled feature to leverage auxiliary features from the passive parties to improve model performance. Concerns about the private feature and label leakage in both the training and inference phases of VFL have drawn wide research attention. In this paper, we propose a general privacy-preserving vertical federated deep learning framework called FedPass, which leverages adaptive obfuscation to protect the feature and label simultaneously. Strong privacy-preserving capabilities about private features and labels are theoretically proved (in Theorems 1 and 2). Extensive experimental results with different datasets and network architectures also justify the superiority of FedPass against existing methods in light of its near-optimal trade-off between privacy and model performance.
|
0708.0433
|
Karol Kozlowski Kajetan
|
K. K. Kozlowski
|
On the emptiness formation probability of the open XXZ spin-$\tfrac{1}{2}$
chain
|
18 pages
|
J.Stat.Mech.0802:P02006,2008
|
10.1088/1742-5468/2008/02/P02006
| null |
hep-th
| null |
This paper is devoted to the study of the emptiness formation probability
$\tau(m)$ of the open XXZ chain. We derive a closed form for $\tau(m)$ at
$\Delta=\tfrac{1}{2}$ when the boundary field vanishes. Moreover we obtain its
leading asymptotics for an arbitrary boundary field at the free fermion point.
Finally, we compute the first term of the asymptotics of $\ln(\tau(m))$
in the whole massless regime $-1<\Delta<1$.
|
[
{
"created": "Thu, 2 Aug 2007 22:51:16 GMT",
"version": "v1"
}
] |
2008-11-26
|
[
[
"Kozlowski",
"K. K.",
""
]
] |
This paper is devoted to the study of the emptiness formation probability $\tau(m)$ of the open XXZ chain. We derive a closed form for $\tau(m)$ at $\Delta=\tfrac{1}{2}$ when the boundary field vanishes. Moreover we obtain its leading asymptotics for an arbitrary boundary field at the free fermion point. Finally, we compute the first term of the asymptotics of $\ln(\tau(m))$ in the whole massless regime $-1<\Delta<1$.
|
hep-th/0006011
|
Valentin Khoze
|
N. Michael Davies, Timothy J. Hollowood and Valentin V. Khoze
|
Monopoles, affine algebras and the gluino condensate
|
minor changes, 23 pages, no figures
|
J.Math.Phys.44:3640-3656,2003
|
10.1063/1.1586477
| null |
hep-th hep-ph
| null |
We examine the low-energy dynamics of four-dimensional supersymmetric gauge
theories and calculate the values of the gluino condensate for all simple gauge
groups. By initially compactifying the theory on a cylinder we are able to
perform calculations in a controlled weakly-coupled way for small radius. The
dominant contributions to the path integral on the cylinder arise from magnetic
monopoles which play the role of instanton constituents. We find that the
semi-classically generated superpotential of the theory is the affine Toda
potential for an associated twisted affine algebra. We determine the
supersymmetric vacua and calculate the values of the gluino condensate. The
number of supersymmetric vacua is equal to c_2, the dual Coxeter number, and in
each vacuum the monopoles carry a fraction 1/c_2 of topological charge. As the
results are independent of the radius of the circle, they are also valid in the
strong coupling regime where the theory becomes decompactified. In this way we
obtain values for the gluino condensate which for the classical gauge groups
agree with previously known ``weak coupling instanton'' expressions (but not
with the ``strong coupling instanton'' calculations). This detailed agreement
provides further evidence in favour of the recently advocated resolution of the
gluino condensate puzzle. We also make explicit predictions for the gluino
condensate for the exceptional groups.
|
[
{
"created": "Thu, 1 Jun 2000 21:48:19 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Jun 2000 16:07:31 GMT",
"version": "v2"
}
] |
2010-11-19
|
[
[
"Davies",
"N. Michael",
""
],
[
"Hollowood",
"Timothy J.",
""
],
[
"Khoze",
"Valentin V.",
""
]
] |
We examine the low-energy dynamics of four-dimensional supersymmetric gauge theories and calculate the values of the gluino condensate for all simple gauge groups. By initially compactifying the theory on a cylinder we are able to perform calculations in a controlled weakly-coupled way for small radius. The dominant contributions to the path integral on the cylinder arise from magnetic monopoles which play the role of instanton constituents. We find that the semi-classically generated superpotential of the theory is the affine Toda potential for an associated twisted affine algebra. We determine the supersymmetric vacua and calculate the values of the gluino condensate. The number of supersymmetric vacua is equal to c_2, the dual Coxeter number, and in each vacuum the monopoles carry a fraction 1/c_2 of topological charge. As the results are independent of the radius of the circle, they are also valid in the strong coupling regime where the theory becomes decompactified. In this way we obtain values for the gluino condensate which for the classical gauge groups agree with previously known ``weak coupling instanton'' expressions (but not with the ``strong coupling instanton'' calculations). This detailed agreement provides further evidence in favour of the recently advocated resolution of the gluino condensate puzzle. We also make explicit predictions for the gluino condensate for the exceptional groups.
|
hep-th/0005008
|
Hiroaki Kanno
|
Tohru Eguchi and Hiroaki Kanno
|
Five-Dimensional Gauge Theories and Local Mirror Symmetry
|
18 pages, Latex, minor changes, a version to appear in Nucl.Phys.B
|
Nucl.Phys.B586:331-345,2000
|
10.1016/S0550-3213(00)00375-8
|
UT-882
|
hep-th
| null |
We study the dynamics of 5-dimensional gauge theory on $M_4\times S^1$ by
compactifying type II/M theory on degenerate Calabi-Yau manifolds. We use the
local mirror symmetry and shall show that the prepotential of the 5-dimensional
SU(2) gauge theory without matter is given exactly by that of the type II
string theory compactified on the local ${\bf F}_2$, i.e. Hirzebruch surface
${\bf F}_2$ lying inside a non-compact Calabi-Yau manifold. It is shown that
our result reproduces the Seiberg-Witten theory at the 4-dimensional limit
$R\to 0$ ($R$ denotes the radius of $S^1$) and also the result of the
uncompactified 5-dimensional theory at $R\to \infty$. We also discuss SU(2)
gauge theory with $1\le N_f\le 4$ matter in vector representations and show
that they are described by the geometry of the local ${\bf F}_2$ blown up at
$N_f$ points.
|
[
{
"created": "Mon, 1 May 2000 07:05:19 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Jun 2000 04:05:07 GMT",
"version": "v2"
}
] |
2010-11-19
|
[
[
"Eguchi",
"Tohru",
""
],
[
"Kanno",
"Hiroaki",
""
]
] |
We study the dynamics of 5-dimensional gauge theory on $M_4\times S^1$ by compactifying type II/M theory on degenerate Calabi-Yau manifolds. We use the local mirror symmetry and shall show that the prepotential of the 5-dimensional SU(2) gauge theory without matter is given exactly by that of the type II string theory compactified on the local ${\bf F}_2$, i.e. Hirzebruch surface ${\bf F}_2$ lying inside a non-compact Calabi-Yau manifold. It is shown that our result reproduces the Seiberg-Witten theory at the 4-dimensional limit $R\to 0$ ($R$ denotes the radius of $S^1$) and also the result of the uncompactified 5-dimensional theory at $R\to \infty$. We also discuss SU(2) gauge theory with $1\le N_f\le 4$ matter in vector representations and show that they are described by the geometry of the local ${\bf F}_2$ blown up at $N_f$ points.
|
1806.02702
|
Taejin Lee
|
Taejin Lee
|
Four-Graviton Scattering and String Path Integral in the Proper-time
gauge
|
9 pages, 1 figure, new references and comments added
| null | null | null |
hep-th gr-qc
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We evaluate the four-closed-string scattering amplitude, using the Polyakov
string path integral in the proper-time gauge. By identifying the Fock space
representation of the four-closed-string-vertex, we obtain a field theoretic
expression of the closed string scattering amplitudes. In the zero-slope limit,
the four-closed-string scattering amplitude reduces to the
four-graviton-scattering amplitude of Einstein's gravity. However, at a finite
slope, the four-graviton scattering amplitude in the proper-time gauge differs
not only from that of Einstein gravity, but also significantly from the
conventional one obtained by using the vertex operator technique in string
theory. This discrepancy is mainly due to the presence of closed string tachyon
poles in the four-graviton-scattering amplitude, which are missing in previous
works. Because the tachyon poles in the scattering amplitude considerably alter
the short distance behavior of gravitational interaction, they may be important
in understanding problems associated with the perturbative theory of quantum
gravity and the dark matter within the framework of string theory.
|
[
{
"created": "Thu, 7 Jun 2018 14:27:45 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Jun 2019 11:13:54 GMT",
"version": "v2"
}
] |
2019-06-18
|
[
[
"Lee",
"Taejin",
""
]
] |
We evaluate the four-closed-string scattering amplitude, using the Polyakov string path integral in the proper-time gauge. By identifying the Fock space representation of the four-closed-string-vertex, we obtain a field theoretic expression of the closed string scattering amplitudes. In the zero-slope limit, the four-closed-string scattering amplitude reduces to the four-graviton-scattering amplitude of Einstein's gravity. However, at a finite slope, the four-graviton scattering amplitude in the proper-time gauge differs not only from that of Einstein gravity, but also significantly from the conventional one obtained by using the vertex operator technique in string theory. This discrepancy is mainly due to the presence of closed string tachyon poles in the four-graviton-scattering amplitude, which are missing in previous works. Because the tachyon poles in the scattering amplitude considerably alter the short distance behavior of gravitational interaction, they may be important in understanding problems associated with the perturbative theory of quantum gravity and the dark matter within the framework of string theory.
|
1902.06275
|
Mladen Kova\v{c}evi\'c
|
Mladen Kova\v{c}evi\'c
|
Zero-Error Capacity of Duplication Channels
|
8 pages (double-column), 4 figures. Accepted for publication in IEEE
Transactions on Communications
|
IEEE Trans. Commun., vol. 67, no. 10, pp. 6735-6742, Oct. 2019
|
10.1109/TCOMM.2019.2931342
| null |
cs.IT cs.DM math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper is concerned with the problem of error-free communication over the
i.i.d. duplication channel which acts on a transmitted sequence $ x_1 \cdots
x_n $ by inserting a random number of copies of each symbol $ x_i $ next to the
original symbol. The random variables representing the numbers of inserted
copies at each position $ i $ are independent and take values in $ \{0, 1,
\ldots, r\} $, where $ r $ is a fixed parameter. A more general model in which
blocks of $ \ell $ consecutive symbols are being duplicated, and which is
inspired by DNA-based data storage systems wherein the stored molecules are
subject to tandem-duplication mutations, is also analyzed. A construction of
optimal codes correcting all patterns of errors of this type is described, and
the zero-error capacity of the duplication channel---the largest rate at which
information can be transmitted through it in an error-free manner---is
determined for each $ \ell $ and $ r $.
|
[
{
"created": "Sun, 17 Feb 2019 15:26:07 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Feb 2019 09:01:19 GMT",
"version": "v2"
},
{
"created": "Thu, 6 Jun 2019 08:06:04 GMT",
"version": "v3"
},
{
"created": "Tue, 23 Jul 2019 07:53:24 GMT",
"version": "v4"
}
] |
2020-08-13
|
[
[
"Kovačević",
"Mladen",
""
]
] |
This paper is concerned with the problem of error-free communication over the i.i.d. duplication channel which acts on a transmitted sequence $ x_1 \cdots x_n $ by inserting a random number of copies of each symbol $ x_i $ next to the original symbol. The random variables representing the numbers of inserted copies at each position $ i $ are independent and take values in $ \{0, 1, \ldots, r\} $, where $ r $ is a fixed parameter. A more general model in which blocks of $ \ell $ consecutive symbols are being duplicated, and which is inspired by DNA-based data storage systems wherein the stored molecules are subject to tandem-duplication mutations, is also analyzed. A construction of optimal codes correcting all patterns of errors of this type is described, and the zero-error capacity of the duplication channel---the largest rate at which information can be transmitted through it in an error-free manner---is determined for each $ \ell $ and $ r $.
|
1701.00245
|
Poul Olesen
|
Poul Olesen
|
Non-Abelian bootstrap of primordial magnetism
|
6 pages. Magnetic energy formula for the flat expanding universe
added. Misprint corrected. Some clarifying remarks on energy minimization
added
| null | null | null |
hep-th astro-ph.HE hep-ph math-ph math.MP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We point out that a primordial magnetic field can be generated in the
electroweak phase transition by a non-Abelian bootstrap, where the field is
generated by currents of W's, which in turn are extracted from the vacuum by
the magnetic field. This magnetic field is produced as a vortex condensate at
the electroweak phase transition. It becomes stringy as a consequence of the
dynamical evolution due to magnetohydrodynamics.
|
[
{
"created": "Sun, 1 Jan 2017 13:53:40 GMT",
"version": "v1"
},
{
"created": "Wed, 11 Jan 2017 14:16:58 GMT",
"version": "v2"
},
{
"created": "Wed, 25 Jan 2017 14:49:06 GMT",
"version": "v3"
}
] |
2017-01-27
|
[
[
"Olesen",
"Poul",
""
]
] |
We point out that a primordial magnetic field can be generated in the electroweak phase transition by a non-Abelian bootstrap, where the field is generated by currents of W's, which in turn are extracted from the vacuum by the magnetic field. This magnetic field is produced as a vortex condensate at the electroweak phase transition. It becomes stringy as a consequence of the dynamical evolution due to magnetohydrodynamics.
|
2106.13249
|
Tan Zhi-Xuan
|
Arwa Alanqary, Gloria Z. Lin, Joie Le, Tan Zhi-Xuan, Vikash K.
Mansinghka, Joshua B. Tenenbaum
|
Modeling the Mistakes of Boundedly Rational Agents Within a Bayesian
Theory of Mind
|
Accepted to CogSci 2021. 6 pages, 5 figures. (Appendix: 1 page, 1
figure)
| null | null | null |
cs.AI q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When inferring the goals that others are trying to achieve, people
intuitively understand that others might make mistakes along the way. This is
crucial for activities such as teaching, offering assistance, and deciding
between blame or forgiveness. However, Bayesian models of theory of mind have
generally not accounted for these mistakes, instead modeling agents as mostly
optimal in achieving their goals. As a result, they are unable to explain
phenomena like locking oneself out of one's house, or losing a game of chess.
Here, we extend the Bayesian Theory of Mind framework to model boundedly
rational agents who may have mistaken goals, plans, and actions. We formalize
this by modeling agents as probabilistic programs, where goals may be confused
with semantically similar states, plans may be misguided due to
resource-bounded planning, and actions may be unintended due to execution
errors. We present experiments eliciting human goal inferences in two domains:
(i) a gridworld puzzle with gems locked behind doors, and (ii) a block-stacking
domain. Our model better explains human inferences than alternatives, while
generalizing across domains. These findings indicate the importance of modeling
others as bounded agents, in order to account for the full richness of human
intuitive psychology.
|
[
{
"created": "Thu, 24 Jun 2021 18:00:03 GMT",
"version": "v1"
}
] |
2021-06-28
|
[
[
"Alanqary",
"Arwa",
""
],
[
"Lin",
"Gloria Z.",
""
],
[
"Le",
"Joie",
""
],
[
"Zhi-Xuan",
"Tan",
""
],
[
"Mansinghka",
"Vikash K.",
""
],
[
"Tenenbaum",
"Joshua B.",
""
]
] |
When inferring the goals that others are trying to achieve, people intuitively understand that others might make mistakes along the way. This is crucial for activities such as teaching, offering assistance, and deciding between blame or forgiveness. However, Bayesian models of theory of mind have generally not accounted for these mistakes, instead modeling agents as mostly optimal in achieving their goals. As a result, they are unable to explain phenomena like locking oneself out of one's house, or losing a game of chess. Here, we extend the Bayesian Theory of Mind framework to model boundedly rational agents who may have mistaken goals, plans, and actions. We formalize this by modeling agents as probabilistic programs, where goals may be confused with semantically similar states, plans may be misguided due to resource-bounded planning, and actions may be unintended due to execution errors. We present experiments eliciting human goal inferences in two domains: (i) a gridworld puzzle with gems locked behind doors, and (ii) a block-stacking domain. Our model better explains human inferences than alternatives, while generalizing across domains. These findings indicate the importance of modeling others as bounded agents, in order to account for the full richness of human intuitive psychology.
|
2201.03363
|
Anders Sundnes L{\o}vlie
|
Anders Sundnes L{\o}vlie, Astrid Waagstein and Peter Hyldg{\aa}rd
|
The Scientific Evidence Indicator for Popular Science News
| null |
NordMedia 2019. Communication, creativity and imagination:
challenging the field. Malm\"o University, Sweden, 21st August 2019
| null | null |
cs.DL cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To what extent can news media help in providing more credible information
about science? This is the core challenge for the Science Evidence Indicator
(SEI) project, a collaboration between the Danish popular news website
videnskab.dk and the authors of this paper. Looking specifically at medical
science news, we aim to provide a transparent assessment of the scientific
sources behind a story. This entails identifying some of the criteria that
scientists use to assess research, and making it accessible and understandable
for readers. We address the following research question: How can we communicate
the quality of scientific publications in health science to a non-expert
audience? Our goal is to make the assessments understandable for the youngest
part of the website's target audience: high school students from age 16 and
upwards.
|
[
{
"created": "Mon, 10 Jan 2022 14:28:54 GMT",
"version": "v1"
}
] |
2022-01-11
|
[
[
"Løvlie",
"Anders Sundnes",
""
],
[
"Waagstein",
"Astrid",
""
],
[
"Hyldgård",
"Peter",
""
]
] |
To what extent can news media help in providing more credible information about science? This is the core challenge for the Science Evidence Indicator (SEI) project, a collaboration between the Danish popular news website videnskab.dk and the authors of this paper. Looking specifically at medical science news, we aim to provide a transparent assessment of the scientific sources behind a story. This entails identifying some of the criteria that scientists use to assess research, and making it accessible and understandable for readers. We address the following research question: How can we communicate the quality of scientific publications in health science to a non-expert audience? Our goal is to make the assessments understandable for the youngest part of the website's target audience: high school students from age 16 and upwards.
|
2109.05396
|
Vaibhav B Sinha
|
Fu Li, C. Gregory Plaxton, Vaibhav B. Sinha
|
The Obnoxious Facility Location Game with Dichotomous Preferences
|
34 pages. This is an extended version of a paper presented at the
22nd Italian Conference on Theoretical Computer Science in September 2021
|
Theoretical Computer Science 961 (2023) 113930
|
10.1016/j.tcs.2023.113930
| null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a facility location game in which $n$ agents reside at known
locations on a path, and $k$ heterogeneous facilities are to be constructed on
the path. Each agent is adversely affected by some subset of the facilities,
and is unaffected by the others. We design two classes of mechanisms for
choosing the facility locations given the reported agent preferences:
utilitarian mechanisms that strive to maximize social welfare (i.e., to be
efficient), and egalitarian mechanisms that strive to maximize the minimum
welfare. For the utilitarian objective, we present a weakly group-strategyproof
efficient mechanism for up to three facilities, we give strongly
group-strategyproof mechanisms that achieve approximation ratios of $5/3$ and
$2$ for $k=1$ and $k > 1$, respectively, and we prove that no strongly
group-strategyproof mechanism achieves an approximation ratio less than $5/3$
for the case of a single facility. For the egalitarian objective, we present a
strategyproof egalitarian mechanism for arbitrary $k$, and we prove that no
weakly group-strategyproof mechanism achieves a $o(\sqrt{n})$ approximation
ratio for two facilities. We extend our egalitarian results to the case where
the agents are located on a cycle, and we extend our first egalitarian result
to the case where the agents are located in the unit square.
|
[
{
"created": "Sun, 12 Sep 2021 01:15:14 GMT",
"version": "v1"
},
{
"created": "Sun, 21 May 2023 20:05:40 GMT",
"version": "v2"
}
] |
2023-05-23
|
[
[
"Li",
"Fu",
""
],
[
"Plaxton",
"C. Gregory",
""
],
[
"Sinha",
"Vaibhav B.",
""
]
] |
We consider a facility location game in which $n$ agents reside at known locations on a path, and $k$ heterogeneous facilities are to be constructed on the path. Each agent is adversely affected by some subset of the facilities, and is unaffected by the others. We design two classes of mechanisms for choosing the facility locations given the reported agent preferences: utilitarian mechanisms that strive to maximize social welfare (i.e., to be efficient), and egalitarian mechanisms that strive to maximize the minimum welfare. For the utilitarian objective, we present a weakly group-strategyproof efficient mechanism for up to three facilities, we give strongly group-strategyproof mechanisms that achieve approximation ratios of $5/3$ and $2$ for $k=1$ and $k > 1$, respectively, and we prove that no strongly group-strategyproof mechanism achieves an approximation ratio less than $5/3$ for the case of a single facility. For the egalitarian objective, we present a strategyproof egalitarian mechanism for arbitrary $k$, and we prove that no weakly group-strategyproof mechanism achieves a $o(\sqrt{n})$ approximation ratio for two facilities. We extend our egalitarian results to the case where the agents are located on a cycle, and we extend our first egalitarian result to the case where the agents are located in the unit square.
|
hep-th/0309118
|
Bogdan Kulik
|
Z. Guralnik and B. Kulik
|
Properties of Chiral Wilson Loops
|
15 pages, two pictures, some references added
|
JHEP 0401 (2004) 065
|
10.1088/1126-6708/2004/01/065
| null |
hep-th
| null |
We study a class of Wilson Loops in N=4, D=4 Yang-Mills theory belonging to
the chiral ring of an N=2, d=1 subalgebra. We show that the expectation value of
these loops is independent of their shape. Using properties of the chiral ring,
we also show that the expectation value is identically 1. We find the same
result for chiral loops in maximally supersymmetric Yang-Mills theory in three,
five and six dimensions. In seven dimensions, a generalized Konishi anomaly
gives an equation for chiral loops which closely resembles the loop equations
of the three dimensional Chern-Simons theory.
|
[
{
"created": "Thu, 11 Sep 2003 11:57:36 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Sep 2003 12:09:33 GMT",
"version": "v2"
}
] |
2009-11-10
|
[
[
"Guralnik",
"Z.",
""
],
[
"Kulik",
"B.",
""
]
] |
We study a class of Wilson Loops in N=4, D=4 Yang-Mills theory belonging to the chiral ring of an N=2, d=1 subalgebra. We show that the expectation value of these loops is independent of their shape. Using properties of the chiral ring, we also show that the expectation value is identically 1. We find the same result for chiral loops in maximally supersymmetric Yang-Mills theory in three, five and six dimensions. In seven dimensions, a generalized Konishi anomaly gives an equation for chiral loops which closely resembles the loop equations of the three dimensional Chern-Simons theory.
|
2402.07066
|
Guanyang Wang
|
Prathamesh Dharangutte, Jie Gao, Ruobin Gong, Guanyang Wang
|
Differentially Private Range Queries with Correlated Input Perturbation
|
26 pages, 8 figures
| null | null | null |
cs.CR cs.LG stat.ME
|
http://creativecommons.org/licenses/by/4.0/
|
This work proposes a class of locally differentially private mechanisms for
linear queries, in particular range queries, that leverages correlated input
perturbation to simultaneously achieve unbiasedness, consistency, statistical
transparency, and control over utility requirements in terms of accuracy
targets expressed either in certain query margins or as implied by the
hierarchical database structure. The proposed Cascade Sampling algorithm
instantiates the mechanism exactly and efficiently. Our bounds show that we
obtain near-optimal utility while being empirically competitive against output
perturbation methods.
|
[
{
"created": "Sat, 10 Feb 2024 23:42:05 GMT",
"version": "v1"
}
] |
2024-02-13
|
[
[
"Dharangutte",
"Prathamesh",
""
],
[
"Gao",
"Jie",
""
],
[
"Gong",
"Ruobin",
""
],
[
"Wang",
"Guanyang",
""
]
] |
This work proposes a class of locally differentially private mechanisms for linear queries, in particular range queries, that leverages correlated input perturbation to simultaneously achieve unbiasedness, consistency, statistical transparency, and control over utility requirements in terms of accuracy targets expressed either in certain query margins or as implied by the hierarchical database structure. The proposed Cascade Sampling algorithm instantiates the mechanism exactly and efficiently. Our bounds show that we obtain near-optimal utility while being empirically competitive against output perturbation methods.
|
1803.10753
|
Milena \v{C}uki\'c Dr
|
Milena B. Cukic, Mirjana M. Platisa, Aleksandar Kalauzi, Joji Oommen,
Milos R. Ljubisavljevic
|
The comparison of Higuchi fractal dimension and Sample Entropy analysis
of sEMG: effects of muscle contraction intensity and TMS
|
21 pages, 3 Figures
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The aim of the study was to examine how the complexity of surface
electromyogram (sEMG) signal, estimated by Higuchi fractal dimension (HFD) and
Sample Entropy (SampEn), changes depending on muscle contraction intensity and
external perturbation of the corticospinal activity during muscle contraction
induced by single-pulse Transcranial Magnetic Stimulation (spTMS). HFD and
SampEn were computed from the sEMG signal recorded at three different levels of
voluntary contraction before and after spTMS. After spTMS, both HFD and SampEn
decreased at medium compared to the mild contraction. SampEn increased, while
HFD did not change significantly at strong compared to medium contraction.
spTMS significantly decreased both parameters at all contraction levels. When
the same parameters were computed from the mathematically generated sine-wave
calibration curves, the results show that SampEn has better accuracy at lower
(0-40 Hz) and HFD at higher (60-120 Hz) frequencies. Changes in the sEMG
complexity associated with increased muscle contraction intensity cannot be
accurately depicted by a single complexity measure. Examination of sEMG should
entail both SampEn and HFD as they provide complementary information about
different frequency components of sEMG. Further studies are needed to explain
the implication of changes in nonlinear parameters and their relation to
underlying sEMG physiological processes.
|
[
{
"created": "Wed, 28 Mar 2018 17:44:44 GMT",
"version": "v1"
}
] |
2018-03-29
|
[
[
"Cukic",
"Milena B.",
""
],
[
"Platisa",
"Mirjana M.",
""
],
[
"Kalauzi",
"Aleksandar",
""
],
[
"Oommen",
"Joji",
""
],
[
"Ljubisavljevic",
"Milos R.",
""
]
] |
The aim of the study was to examine how the complexity of surface electromyogram (sEMG) signal, estimated by Higuchi fractal dimension (HFD) and Sample Entropy (SampEn), changes depending on muscle contraction intensity and external perturbation of the corticospinal activity during muscle contraction induced by single-pulse Transcranial Magnetic Stimulation (spTMS). HFD and SampEn were computed from the sEMG signal recorded at three different levels of voluntary contraction before and after spTMS. After spTMS, both HFD and SampEn decreased at medium compared to the mild contraction. SampEn increased, while HFD did not change significantly at strong compared to medium contraction. spTMS significantly decreased both parameters at all contraction levels. When the same parameters were computed from the mathematically generated sine-wave calibration curves, the results show that SampEn has better accuracy at lower (0-40 Hz) and HFD at higher (60-120 Hz) frequencies. Changes in the sEMG complexity associated with increased muscle contraction intensity cannot be accurately depicted by a single complexity measure. Examination of sEMG should entail both SampEn and HFD as they provide complementary information about different frequency components of sEMG. Further studies are needed to explain the implication of changes in nonlinear parameters and their relation to underlying sEMG physiological processes.
|
1806.06738
|
Antoaneta Serguieva
|
Tooba Faisal, Nicolas Courtois and Antoaneta Serguieva
|
The Evolution of Embedding Metadata in Blockchain Transactions
|
9 pages, 6 figures, 1 table, 2018 International Joint Conference on
Neural Networks
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of blockchains is growing every day, and their utility has greatly
expanded from sending and receiving crypto-coins to smart-contracts and
decentralized autonomous organizations. Modern blockchains underpin a variety
of applications: from designing a global identity to improving satellite
connectivity. In our research we look at the ability of blockchains to store
metadata in an increasing volume of transactions and with evolving focus of
utilization. We further show that basic approaches to improving blockchain
privacy also rely on embedding metadata. This paper identifies and classifies
real-life blockchain transactions embedding metadata of a number of major
protocols running essentially over the bitcoin blockchain. The empirical
analysis here presents the evolution of metadata utilization in recent
years, and the discussion suggests steps towards preventing criminal use.
Metadata are relevant to any blockchain, and our analysis considers primarily
bitcoin as a case study. The paper concludes that simultaneously with both
expanding legitimate utilization of embedded metadata and expanding blockchain
functionality, the applied research on improving anonymity and security must
also attempt to protect against blockchain abuse.
|
[
{
"created": "Mon, 18 Jun 2018 14:38:02 GMT",
"version": "v1"
}
] |
2018-06-19
|
[
[
"Faisal",
"Tooba",
""
],
[
"Courtois",
"Nicolas",
""
],
[
"Serguieva",
"Antoaneta",
""
]
] |
The use of blockchains is growing every day, and their utility has greatly expanded from sending and receiving crypto-coins to smart-contracts and decentralized autonomous organizations. Modern blockchains underpin a variety of applications: from designing a global identity to improving satellite connectivity. In our research we look at the ability of blockchains to store metadata in an increasing volume of transactions and with evolving focus of utilization. We further show that basic approaches to improving blockchain privacy also rely on embedding metadata. This paper identifies and classifies real-life blockchain transactions embedding metadata of a number of major protocols running essentially over the bitcoin blockchain. The empirical analysis here presents the evolution of metadata utilization in recent years, and the discussion suggests steps towards preventing criminal use. Metadata are relevant to any blockchain, and our analysis considers primarily bitcoin as a case study. The paper concludes that simultaneously with both expanding legitimate utilization of embedded metadata and expanding blockchain functionality, the applied research on improving anonymity and security must also attempt to protect against blockchain abuse.
|
1602.03418
|
Swami Sankaranarayanan
|
Swami Sankaranarayanan, Azadeh Alavi, Rama Chellappa
|
Triplet Similarity Embedding for Face Verification
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present an unconstrained face verification algorithm and
evaluate it on the recently released IJB-A dataset that aims to push the
boundaries of face verification methods. The proposed algorithm couples a deep
CNN-based approach with a low-dimensional discriminative embedding learnt using
triplet similarity constraints in a large margin fashion. Aside from yielding
performance improvement, this embedding provides significant advantages in
terms of memory and post-processing operations like hashing and visualization.
Experiments on the IJB-A dataset show that the proposed algorithm outperforms
state of the art methods in verification and identification metrics, while
requiring less training time.
|
[
{
"created": "Wed, 10 Feb 2016 15:48:47 GMT",
"version": "v1"
},
{
"created": "Sun, 13 Mar 2016 18:06:34 GMT",
"version": "v2"
}
] |
2016-03-15
|
[
[
"Sankaranarayanan",
"Swami",
""
],
[
"Alavi",
"Azadeh",
""
],
[
"Chellappa",
"Rama",
""
]
] |
In this work, we present an unconstrained face verification algorithm and evaluate it on the recently released IJB-A dataset that aims to push the boundaries of face verification methods. The proposed algorithm couples a deep CNN-based approach with a low-dimensional discriminative embedding learnt using triplet similarity constraints in a large margin fashion. Aside from yielding performance improvement, this embedding provides significant advantages in terms of memory and post-processing operations like hashing and visualization. Experiments on the IJB-A dataset show that the proposed algorithm outperforms state of the art methods in verification and identification metrics, while requiring less training time.
|
1810.13098
|
Chao Li
|
Chao Li, Zhun Sun, Jinshi Yu, Ming Hou and Qibin Zhao
|
Low-Rank Embedding of Kernels in Convolutional Neural Networks under
Random Shuffling
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although convolutional neural networks (CNNs) have become popular for
various image processing and computer vision tasks recently, it remains a
challenging problem to reduce the storage cost of the parameters for
resource-limited platforms. In previous studies, tensor decomposition (TD)
has achieved promising compression performance by embedding the kernel of a
convolutional layer into a low-rank subspace. However, TD is employed naively
on the kernel or its specified variants. Unlike the conventional
approaches, this paper shows that the kernel can be embedded into more general
or even random low-rank subspaces. We demonstrate this by compressing the
convolutional layers via randomly-shuffled tensor decomposition (RsTD) for a
standard classification task using CIFAR-10. In addition, we analyze how the
spatial similarity of the training data influences the low-rank structure of
the kernels. The experimental results show that the CNN can be significantly
compressed even if the kernels are randomly shuffled. Furthermore, the
RsTD-based method yields more stable classification accuracy than the
conventional TD-based methods in a large range of compression ratios.
|
[
{
"created": "Wed, 31 Oct 2018 04:05:54 GMT",
"version": "v1"
}
] |
2018-11-01
|
[
[
"Li",
"Chao",
""
],
[
"Sun",
"Zhun",
""
],
[
"Yu",
"Jinshi",
""
],
[
"Hou",
"Ming",
""
],
[
"Zhao",
"Qibin",
""
]
] |
Although convolutional neural networks (CNNs) have become popular for various image processing and computer vision tasks recently, it remains a challenging problem to reduce the storage cost of the parameters for resource-limited platforms. In previous studies, tensor decomposition (TD) has achieved promising compression performance by embedding the kernel of a convolutional layer into a low-rank subspace. However, TD is employed naively on the kernel or its specified variants. Unlike the conventional approaches, this paper shows that the kernel can be embedded into more general or even random low-rank subspaces. We demonstrate this by compressing the convolutional layers via randomly-shuffled tensor decomposition (RsTD) for a standard classification task using CIFAR-10. In addition, we analyze how the spatial similarity of the training data influences the low-rank structure of the kernels. The experimental results show that the CNN can be significantly compressed even if the kernels are randomly shuffled. Furthermore, the RsTD-based method yields more stable classification accuracy than the conventional TD-based methods in a large range of compression ratios.
|
1912.09515
|
Charles Rabideau
|
Marine De Clerck, Charles Rabideau, Niklas Tanger
|
Caustics bounding entanglement wedges
|
32 pages, 8 figures. Typos fixed and minor clarifications added
| null |
10.1007/JHEP06(2020)166
| null |
hep-th
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the caustics on the boundaries of entanglement wedges in the context
of holography in asymptotically AdS$_3$ spacetimes. These entanglement wedges
play an important role in our understanding of the emergence of bulk locality.
A procedure was proposed by Sanches and Weinberg [arXiv:1703.07780] for
identifying boundary operators which are local in the bulk, which also applies
to certain regions that lie beyond the reach of HRT surfaces by taking
advantage of the lightsheets which bound entanglement wedges. We identify the
caustics which terminate these lightsheets in conical deficit and BTZ black
hole spacetimes and find that in some examples these caustics lead to a sharp
corner in the entanglement wedge. The unexpected shape of these entanglement
wedges leads, in those cases, to a breakdown of this procedure. Many of the
properties of the rich variety of caustics possible in higher dimensions
remain to be explored, which, as this work demonstrates, could lead to more
unexpected features in the shapes of entanglement wedges.
|
[
{
"created": "Thu, 19 Dec 2019 19:42:39 GMT",
"version": "v1"
},
{
"created": "Wed, 20 May 2020 14:46:05 GMT",
"version": "v2"
}
] |
2020-07-15
|
[
[
"De Clerck",
"Marine",
""
],
[
"Rabideau",
"Charles",
""
],
[
"Tanger",
"Niklas",
""
]
] |
We study the caustics on the boundaries of entanglement wedges in the context of holography in asymptotically AdS$_3$ spacetimes. These entanglement wedges play an important role in our understanding of the emergence of bulk locality. A procedure was proposed by Sanches and Weinberg [arXiv:1703.07780] for identifying boundary operators which are local in the bulk, which also applies to certain regions that lie beyond the reach of HRT surfaces by taking advantage of the lightsheets which bound entanglement wedges. We identify the caustics which terminate these lightsheets in conical deficit and BTZ black hole spacetimes and find that in some examples these caustics lead to a sharp corner in the entanglement wedge. The unexpected shape of these entanglement wedges leads, in those cases, to a breakdown of this procedure. Many of the properties of the rich variety of caustics possible in higher dimensions remain to be explored, which, as this work demonstrates, could lead to more unexpected features in the shapes of entanglement wedges.
|
2205.05837
|
Peter Tsimiklis
|
Florian Girelli, Matteo Laudonio, Adrian Tanasa, Panagiotis Tsimiklis
|
Group field theory on 2-groups
| null | null | null | null |
hep-th
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Group field theories are quantum field theories built on groups. They can be
seen as a tool to generate topological state-sums or quantum gravity models.
For four dimensional manifolds, different arguments have pointed towards
2-groups (such as crossed modules) as the relevant symmetry structure to probe
four dimensional topological features. Here, we introduce a group field theory
built on crossed modules which generates a four dimensional topological model,
as we prove that the Feynman diagram amplitudes can be related by Pachner
moves. This model is presumably the dual version of the Yetter-Mackaay model.
|
[
{
"created": "Thu, 12 May 2022 02:09:48 GMT",
"version": "v1"
}
] |
2022-05-13
|
[
[
"Girelli",
"Florian",
""
],
[
"Laudonio",
"Matteo",
""
],
[
"Tanasa",
"Adrian",
""
],
[
"Tsimiklis",
"Panagiotis",
""
]
] |
Group field theories are quantum field theories built on groups. They can be seen as a tool to generate topological state-sums or quantum gravity models. For four dimensional manifolds, different arguments have pointed towards 2-groups (such as crossed modules) as the relevant symmetry structure to probe four dimensional topological features. Here, we introduce a group field theory built on crossed modules which generates a four dimensional topological model, as we prove that the Feynman diagram amplitudes can be related by Pachner moves. This model is presumably the dual version of the Yetter-Mackaay model.
|
2208.09237
|
Wei Zhang
|
Wei Zhang, Jason K. Wong, Xumeng Wang, Youcheng Gong, Rongchen Zhu,
Kai Liu, Zihan Yan, Siwei Tan, Huamin Qu, Siming Chen, and Wei Chen
|
CohortVA: A Visual Analytic System for Interactive Exploration of
Cohorts based on Historical Data
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In history research, cohort analysis seeks to identify social structures and
figure mobilities by studying the group-based behavior of historical figures.
Prior works mainly employ automatic data mining approaches, lacking effective
visual explanation. In this paper, we present CohortVA, an interactive visual
analytic approach that enables historians to incorporate expertise and insight
into the iterative exploration process. The kernel of CohortVA is a novel
identification model that generates candidate cohorts and constructs cohort
features by means of pre-built knowledge graphs constructed from large-scale
history databases. We propose a set of coordinated views to illustrate
identified cohorts and features coupled with historical events and figure
profiles. Two case studies and interviews with historians demonstrate that
CohortVA can greatly enhance the capabilities of cohort identifications, figure
authentications, and hypothesis generation.
|
[
{
"created": "Fri, 19 Aug 2022 09:27:42 GMT",
"version": "v1"
}
] |
2022-08-22
|
[
[
"Zhang",
"Wei",
""
],
[
"Wong",
"Jason K.",
""
],
[
"Wang",
"Xumeng",
""
],
[
"Gong",
"Youcheng",
""
],
[
"Zhu",
"Rongchen",
""
],
[
"Liu",
"Kai",
""
],
[
"Yan",
"Zihan",
""
],
[
"Tan",
"Siwei",
""
],
[
"Qu",
"Huamin",
""
],
[
"Chen",
"Siming",
""
],
[
"Chen",
"Wei",
""
]
] |
In history research, cohort analysis seeks to identify social structures and figure mobilities by studying the group-based behavior of historical figures. Prior works mainly employ automatic data mining approaches, lacking effective visual explanation. In this paper, we present CohortVA, an interactive visual analytic approach that enables historians to incorporate expertise and insight into the iterative exploration process. The kernel of CohortVA is a novel identification model that generates candidate cohorts and constructs cohort features by means of pre-built knowledge graphs constructed from large-scale history databases. We propose a set of coordinated views to illustrate identified cohorts and features coupled with historical events and figure profiles. Two case studies and interviews with historians demonstrate that CohortVA can greatly enhance the capabilities of cohort identifications, figure authentications, and hypothesis generation.
|
1704.04332
|
Dieyan Liang
|
Dieyan Liang, Hong Shen
|
Point Sweep Coverage on Path
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An important application of wireless sensor networks is the deployment of
mobile sensors to periodically monitor (cover) a set of points of interest
(PoIs). The problem of Point Sweep Coverage is to deploy fewest sensors to
periodically cover the set of PoIs. For PoIs in a Eulerian graph, this problem
is known NP-Hard even if all sensors are with uniform velocity. In this paper,
we study the problem when PoIs are on a line and prove that the decision
version of the problem is NP-Complete if the sensors have different
velocities. We first formulate the problem of Max-PoI sweep coverage on path
(MPSCP) to find the maximum number of PoIs covered by a given set of sensors,
and then show it is NP-Hard. We also extend it to the weighted case, the Max-Weight
sweep coverage on path (MWSCP) problem, to maximize the sum of the weights of PoIs
covered. For sensors with uniform velocity, we give a polynomial-time optimal
solution to MWSCP. For sensors with constant kinds of velocities, we present a
$\frac{1}{2}$-approximation algorithm. For the general case of arbitrary
velocities, we propose two algorithms. One is a
$\frac{1}{2\alpha}$-approximation algorithm family scheme, where integer
$\alpha\ge2$ is the tradeoff factor to balance the time complexity and
approximation ratio. The other is a $\frac{1}{2}(1-1/e)$-approximation
algorithm by randomized analysis.
|
[
{
"created": "Fri, 14 Apr 2017 02:24:56 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Jun 2017 04:45:46 GMT",
"version": "v2"
}
] |
2017-06-28
|
[
[
"Liang",
"Dieyan",
""
],
[
"Shen",
"Hong",
""
]
] |
An important application of wireless sensor networks is the deployment of mobile sensors to periodically monitor (cover) a set of points of interest (PoIs). The problem of Point Sweep Coverage is to deploy the fewest sensors to periodically cover the set of PoIs. For PoIs in a Eulerian graph, this problem is known to be NP-Hard even if all sensors have uniform velocity. In this paper, we study the problem when PoIs are on a line and prove that the decision version of the problem is NP-Complete if the sensors have different velocities. We first formulate the problem of Max-PoI sweep coverage on path (MPSCP) to find the maximum number of PoIs covered by a given set of sensors, and then show it is NP-Hard. We also extend it to the weighted case, the Max-Weight sweep coverage on path (MWSCP) problem, to maximize the sum of the weights of PoIs covered. For sensors with uniform velocity, we give a polynomial-time optimal solution to MWSCP. For sensors with constant kinds of velocities, we present a $\frac{1}{2}$-approximation algorithm. For the general case of arbitrary velocities, we propose two algorithms. One is a $\frac{1}{2\alpha}$-approximation algorithm family scheme, where integer $\alpha\ge2$ is the tradeoff factor to balance the time complexity and approximation ratio. The other is a $\frac{1}{2}(1-1/e)$-approximation algorithm by randomized analysis.
|
1909.04436
|
Martin Shepperd
|
Martin Shepperd, Yuchen Guo, Ning Li, Mahir Arzoky, Andrea Capiluppi,
Steve Counsell, Giuseppe Destefanis, Stephen Swift, Allan Tucker, and Leila
Yousefi
|
The Prevalence of Errors in Machine Learning Experiments
|
20th International Conference on Intelligent Data Engineering and
Automated Learning (IDEAL), 14--16 November 2019
| null | null | null |
cs.LG cs.AI stat.AP stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Context: Conducting experiments is central to machine learning research to
benchmark, evaluate and compare learning algorithms. Consequently
it is important we conduct reliable, trustworthy experiments. Objective: We
investigate the incidence of errors in a sample of machine learning experiments
in the domain of software defect prediction. Our focus is simple arithmetical
and statistical errors. Method: We analyse 49 papers describing 2456 individual
experimental results from a previously undertaken systematic review comparing
supervised and unsupervised defect prediction classifiers. We extract the
confusion matrices and test for relevant constraints, e.g., the marginal
probabilities must sum to one. We also check for multiple statistical
significance testing errors. Results: We find that a total of 22 out of 49
papers contain demonstrable errors. Of these, 7 were statistical and 16 related
to confusion matrix inconsistency (one paper contained both classes of error).
Conclusions: Whilst some errors may be of a relatively trivial nature, e.g.,
transcription errors, their presence does not engender confidence. We strongly
urge researchers to follow open science principles so errors can be more easily
detected and corrected, so that as a community we reduce this worryingly high
error rate in our computational experiments.
|
[
{
"created": "Tue, 10 Sep 2019 12:32:00 GMT",
"version": "v1"
}
] |
2019-09-11
|
[
[
"Shepperd",
"Martin",
""
],
[
"Guo",
"Yuchen",
""
],
[
"Li",
"Ning",
""
],
[
"Arzoky",
"Mahir",
""
],
[
"Capiluppi",
"Andrea",
""
],
[
"Counsell",
"Steve",
""
],
[
"Destefanis",
"Giuseppe",
""
],
[
"Swift",
"Stephen",
""
],
[
"Tucker",
"Allan",
""
],
[
"Yousefi",
"Leila",
""
]
] |
Context: Conducting experiments is central to machine learning research, to benchmark, evaluate and compare learning algorithms. Consequently, it is important that we conduct reliable, trustworthy experiments. Objective: We investigate the incidence of errors in a sample of machine learning experiments in the domain of software defect prediction. Our focus is simple arithmetical and statistical errors. Method: We analyse 49 papers describing 2456 individual experimental results from a previously undertaken systematic review comparing supervised and unsupervised defect prediction classifiers. We extract the confusion matrices and test for relevant constraints, e.g., the marginal probabilities must sum to one. We also check for multiple statistical significance testing errors. Results: We find that a total of 22 out of 49 papers contain demonstrable errors. Of these, 7 were statistical and 16 related to confusion matrix inconsistency (one paper contained both classes of error). Conclusions: Whilst some errors may be of a relatively trivial nature, e.g., transcription errors, their presence does not engender confidence. We strongly urge researchers to follow open science principles so errors can be more easily detected and corrected, and thus, as a community, reduce this worryingly high error rate in our computational experiments.
|
1711.02690
|
David Kutasov
|
Meseret Asrat, Amit Giveon, Nissan Itzhaki and David Kutasov
|
Holography Beyond AdS
|
16 pages; v2: reference updated
| null |
10.1016/j.nuclphysb.2018.05.005
| null |
hep-th
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We continue our study of string theory in a background that interpolates
between $AdS_3$ in the infrared and a linear dilaton spacetime $R^{1,1}\times
R_\phi$ in the UV. This background corresponds via holography to a $CFT_2$
deformed by a certain irrelevant operator of dimension $(2,2)$. We show that
for two point functions of local operators in the infrared CFT, conformal
perturbation theory in this irrelevant operator has a finite radius of
convergence in momentum space, and one can use it to flow up the
renormalization group. The spectral density develops an imaginary part above a
certain critical value of the spectral parameter; this appears to be related to
the non-locality of the theory. In position space, conformal perturbation
theory has a vanishing radius of convergence; the leading non-perturbative
effect is an imaginary part of the two point function.
|
[
{
"created": "Tue, 7 Nov 2017 19:14:44 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Nov 2017 21:52:04 GMT",
"version": "v2"
}
] |
2018-07-04
|
[
[
"Asrat",
"Meseret",
""
],
[
"Giveon",
"Amit",
""
],
[
"Itzhaki",
"Nissan",
""
],
[
"Kutasov",
"David",
""
]
] |
We continue our study of string theory in a background that interpolates between $AdS_3$ in the infrared and a linear dilaton spacetime $R^{1,1}\times R_\phi$ in the UV. This background corresponds via holography to a $CFT_2$ deformed by a certain irrelevant operator of dimension $(2,2)$. We show that for two point functions of local operators in the infrared CFT, conformal perturbation theory in this irrelevant operator has a finite radius of convergence in momentum space, and one can use it to flow up the renormalization group. The spectral density develops an imaginary part above a certain critical value of the spectral parameter; this appears to be related to the non-locality of the theory. In position space, conformal perturbation theory has a vanishing radius of convergence; the leading non-perturbative effect is an imaginary part of the two point function.
|
1805.11203
|
Xiang Zhang
|
Xiang Zhang, Philip A. Chou, Ming-Ting Sun, Maolong Tang, Shanshe
Wang, Siwei Ma, Wen Gao
|
Surface Light Field Compression using a Point Cloud Codec
| null | null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Light field (LF) representations aim to provide photo-realistic,
free-viewpoint viewing experiences. However, the most popular LF
representations are images from multiple views. Multi-view image-based
representations generally need to restrict the range or degrees of freedom of
the viewing experience to what can be interpolated in the image domain,
essentially because they lack explicit geometry information. We present a new
surface light field (SLF) representation based on explicit geometry, and a
method for SLF compression. First, we map the multi-view images of a scene onto
a 3D geometric point cloud. The color of each point in the point cloud is a
function of viewing direction known as a view map. We represent each view map
efficiently in a B-Spline wavelet basis. This representation is capable of
modeling diverse surface materials and complex lighting conditions in a highly
scalable and adaptive manner. The coefficients of the B-Spline wavelet
representation are then compressed spatially. To increase the spatial
correlation and thus improve compression efficiency, we introduce a smoothing
term to make the coefficients more similar across the 3D space. We compress the
coefficients spatially using existing point cloud compression (PCC) methods. On
the decoder side, the scene is rendered efficiently from any viewing direction
by reconstructing the view map at each point. In contrast to multi-view
image-based LF approaches, our method supports photo-realistic rendering of
real-world scenes from arbitrary viewpoints, i.e., with an unlimited six
degrees of freedom (6DOF). In terms of rate and distortion, experimental
results show that our method achieves superior performance with lighter decoder
complexity compared with a reference image-plus-geometry compression (IGC)
scheme, indicating its potential in practical virtual and augmented reality
applications.
|
[
{
"created": "Tue, 29 May 2018 00:08:30 GMT",
"version": "v1"
}
] |
2018-05-30
|
[
[
"Zhang",
"Xiang",
""
],
[
"Chou",
"Philip A.",
""
],
[
"Sun",
"Ming-Ting",
""
],
[
"Tang",
"Maolong",
""
],
[
"Wang",
"Shanshe",
""
],
[
"Ma",
"Siwei",
""
],
[
"Gao",
"Wen",
""
]
] |
Light field (LF) representations aim to provide photo-realistic, free-viewpoint viewing experiences. However, the most popular LF representations are images from multiple views. Multi-view image-based representations generally need to restrict the range or degrees of freedom of the viewing experience to what can be interpolated in the image domain, essentially because they lack explicit geometry information. We present a new surface light field (SLF) representation based on explicit geometry, and a method for SLF compression. First, we map the multi-view images of a scene onto a 3D geometric point cloud. The color of each point in the point cloud is a function of viewing direction known as a view map. We represent each view map efficiently in a B-Spline wavelet basis. This representation is capable of modeling diverse surface materials and complex lighting conditions in a highly scalable and adaptive manner. The coefficients of the B-Spline wavelet representation are then compressed spatially. To increase the spatial correlation and thus improve compression efficiency, we introduce a smoothing term to make the coefficients more similar across the 3D space. We compress the coefficients spatially using existing point cloud compression (PCC) methods. On the decoder side, the scene is rendered efficiently from any viewing direction by reconstructing the view map at each point. In contrast to multi-view image-based LF approaches, our method supports photo-realistic rendering of real-world scenes from arbitrary viewpoints, i.e., with an unlimited six degrees of freedom (6DOF). In terms of rate and distortion, experimental results show that our method achieves superior performance with lighter decoder complexity compared with a reference image-plus-geometry compression (IGC) scheme, indicating its potential in practical virtual and augmented reality applications.
|
1710.04134
|
Eric Sillekens
|
Eric Sillekens, Daniel Semrau, Gabriele Liga, Nikita A. Shevchenko,
Zhe Li, Alex Alvarado, Polina Bayvel, Robert. I. Killey, Domani\c{c} Lavery
|
A Simple Nonlinearity-Tailored Probabilistic Shaping Distribution for
Square QAM
| null | null |
10.1364/OFC.2018.M3C.4
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A new probabilistic shaping distribution that outperforms Maxwell-Boltzmann
is studied for the nonlinear fiber channel. Additional gains of 0.1 bit/symbol
MI or 0.2 dB SNR for both DP-256QAM and DP-1024QAM are reported after 200 km
nonlinear fiber transmission.
|
[
{
"created": "Wed, 11 Oct 2017 15:42:46 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Oct 2017 07:42:01 GMT",
"version": "v2"
}
] |
2018-11-06
|
[
[
"Sillekens",
"Eric",
""
],
[
"Semrau",
"Daniel",
""
],
[
"Liga",
"Gabriele",
""
],
[
"Shevchenko",
"Nikita A.",
""
],
[
"Li",
"Zhe",
""
],
[
"Alvarado",
"Alex",
""
],
[
"Bayvel",
"Polina",
""
],
[
"Killey",
"Robert. I.",
""
],
[
"Lavery",
"Domaniç",
""
]
] |
A new probabilistic shaping distribution that outperforms Maxwell-Boltzmann is studied for the nonlinear fiber channel. Additional gains of 0.1 bit/symbol MI or 0.2 dB SNR for both DP-256QAM and DP-1024QAM are reported after 200 km nonlinear fiber transmission.
|
2011.08967
|
Nuno Crokidakis
|
Marcelo A. Pires, Nuno Crokidakis, Silvio M. Duarte Queir\'os
|
Diffusion plays an unusual role in ecological quasi-neutral competition
in metapopulations
|
6 figures, 10 pages, to appear in Nonlinear Dynamics
|
Nonlinear Dynamics 103, 1219 (2021)
|
10.1007/s11071-020-06105-4
| null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the phenomenology emerging from a 2-species dynamics under the
scenario of a quasi-neutral competition within a metapopulation framework. We
employ stochastic and deterministic approaches, namely spatially-constrained
individual-based Monte Carlo simulations and coupled mean-field ODEs. Our
results show that the multifold interplay between competition, birth-death
dynamics and spatial constraints induces a nonmonotonic relation between the
ecological
majority-minority switching and the diffusion between patches. This means that
diffusion can set off birth-death ratios and enhance the preservation of a
species.
|
[
{
"created": "Tue, 17 Nov 2020 22:04:20 GMT",
"version": "v1"
}
] |
2021-02-05
|
[
[
"Pires",
"Marcelo A.",
""
],
[
"Crokidakis",
"Nuno",
""
],
[
"Queirós",
"Silvio M. Duarte",
""
]
] |
We investigate the phenomenology emerging from a 2-species dynamics under the scenario of a quasi-neutral competition within a metapopulation framework. We employ stochastic and deterministic approaches, namely spatially-constrained individual-based Monte Carlo simulations and coupled mean-field ODEs. Our results show that the multifold interplay between competition, birth-death dynamics and spatial constraints induces a nonmonotonic relation between the ecological majority-minority switching and the diffusion between patches. This means that diffusion can set off birth-death ratios and enhance the preservation of a species.
|
2402.11297
|
Husein Zolkepli
|
Husein Zolkepli, Aisyah Razak, Kamarul Adha, Ariff Nazhan
|
MMMModal -- Multi-Images Multi-Audio Multi-turn Multi-Modal
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our contribution introduces a groundbreaking multimodal large language model
designed to comprehend multi-images, multi-audio, and multi-images-multi-audio
within a single multiturn session. Leveraging state-of-the-art models, we
utilize the SigLIP encoder for visual inputs and the Whisper Encoder for audio
inputs. Notably, this multimodal large language model is bilingual, proficient
in understanding both English and Malay simultaneously. We proudly unveil two
versions of this model: TinyLlama with 1.1B parameters, and Mistral with 7B
parameters. With its ability to navigate diverse modalities and languages, our
model represents a significant advancement for the Malaysian context and
beyond.
All models released at
https://huggingface.co/collections/mesolitica/multimodal-malaysian-llm-65c6f893e03f78fa9e5c8859
|
[
{
"created": "Sat, 17 Feb 2024 14:37:38 GMT",
"version": "v1"
}
] |
2024-02-20
|
[
[
"Zolkepli",
"Husein",
""
],
[
"Razak",
"Aisyah",
""
],
[
"Adha",
"Kamarul",
""
],
[
"Nazhan",
"Ariff",
""
]
] |
Our contribution introduces a groundbreaking multimodal large language model designed to comprehend multi-images, multi-audio, and multi-images-multi-audio within a single multiturn session. Leveraging state-of-the-art models, we utilize the SigLIP encoder for visual inputs and the Whisper Encoder for audio inputs. Notably, this multimodal large language model is bilingual, proficient in understanding both English and Malay simultaneously. We proudly unveil two versions of this model: TinyLlama with 1.1B parameters, and Mistral with 7B parameters. With its ability to navigate diverse modalities and languages, our model represents a significant advancement for the Malaysian context and beyond. All models released at https://huggingface.co/collections/mesolitica/multimodal-malaysian-llm-65c6f893e03f78fa9e5c8859
|
0712.0350
|
Jerzy Lukierski
|
Marcin Daszkiewicz, Jerzy Lukierski and Mariusz Woronowicz
|
Quantization of kappa-deformed free fields and kappa-deformed
oscillators
|
9 pages. Talk presented at Supersymmetry and Quantum Supersymmetry
2007 (SQS'07) Conference (Dubna, 30.07-4.08.2007) and IV-th Central European
Seminar "Commutative and Noncommutative Quantum Fields" (Vienna,
30.11-2.12.2007). To be published in the proceedings of SQS'07 (2008)
| null | null | null |
hep-th
| null |
We describe the deformed E.T. quantization rules for kappa-deformed free
quantum fields, and relate these rules with the kappa-deformed algebra of field
oscillators.
|
[
{
"created": "Mon, 3 Dec 2007 17:15:55 GMT",
"version": "v1"
}
] |
2007-12-04
|
[
[
"Daszkiewicz",
"Marcin",
""
],
[
"Lukierski",
"Jerzy",
""
],
[
"Woronowicz",
"Mariusz",
""
]
] |
We describe the deformed E.T. quantization rules for kappa-deformed free quantum fields, and relate these rules with the kappa-deformed algebra of field oscillators.
|
2207.05764
|
Monica Jinwoo Kang
|
Monica Jinwoo Kang, Craig Lawrie, Ki-Hong Lee, Matteo Sacchi, and
Jaewon Song
|
Higgs, Coulomb, and Hall-Littlewood
|
49 pages + appendices + references, 20 figures, and 5 tables
|
Physical Review D 106, no.10, 106021 (2022)
|
10.1103/PhysRevD.106.106021
|
CALT-TH-2022-024; DESY-22-110
|
hep-th
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Higgs branch of 4d $\mathcal{N}=2$ SCFTs can be analyzed via the Hilbert
series of the Higgs branch or, in special cases, by computing the
Hall-Littlewood index. For any class $\mathcal{S}$ theory corresponding to a
genus-zero Riemann surface, they are conjectured to be identical. We present
several families of counterexamples. We find that for any class $\mathcal{S}$
theory with four or more $\mathbb{Z}_2$-twisted punctures, they do not match.
We construct 3d mirrors for such theories and analyze their Coulomb branch
Hilbert series to compute the Higgs branch Hilbert series of the 4d theory. We
further construct $a=c$ theories in class $\mathcal{S}$ using the twisted
punctures, and these theories, which include the $\hat{D}_4(SU(2n+1))$
theories, have a Hall-Littlewood index different from the Hilbert series of the
Higgs branch. We conjecture that this is the case for all $a=c$ theories with
non-empty Higgs branch, including $\mathcal{N}\ge 3$ SCFTs.
|
[
{
"created": "Tue, 12 Jul 2022 18:00:01 GMT",
"version": "v1"
}
] |
2023-02-09
|
[
[
"Kang",
"Monica Jinwoo",
""
],
[
"Lawrie",
"Craig",
""
],
[
"Lee",
"Ki-Hong",
""
],
[
"Sacchi",
"Matteo",
""
],
[
"Song",
"Jaewon",
""
]
] |
The Higgs branch of 4d $\mathcal{N}=2$ SCFTs can be analyzed via the Hilbert series of the Higgs branch or, in special cases, by computing the Hall-Littlewood index. For any class $\mathcal{S}$ theory corresponding to a genus-zero Riemann surface, they are conjectured to be identical. We present several families of counterexamples. We find that for any class $\mathcal{S}$ theory with four or more $\mathbb{Z}_2$-twisted punctures, they do not match. We construct 3d mirrors for such theories and analyze their Coulomb branch Hilbert series to compute the Higgs branch Hilbert series of the 4d theory. We further construct $a=c$ theories in class $\mathcal{S}$ using the twisted punctures, and these theories, which include the $\hat{D}_4(SU(2n+1))$ theories, have a Hall-Littlewood index different from the Hilbert series of the Higgs branch. We conjecture that this is the case for all $a=c$ theories with non-empty Higgs branch, including $\mathcal{N}\ge 3$ SCFTs.
|
1405.5513
|
Seyed Aidin Sajedi
|
Fahimeh Abdollahi and Seyed Aidin Sajedi
|
Correlation of multiple sclerosis (MS) incidence trends with solar and
geomagnetic indices: time to revise the method of reporting MS
epidemiological data
|
Single PDF, 8 pages, 3 figures
|
Iranian Journal of Neurology 2014; 13(2):64-69
| null | null |
q-bio.TO q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background: Recently, we introduced solar related geomagnetic disturbances
(GMD) as a potential environmental risk factor for multiple sclerosis (MS). The
aim of this study was to test probable correlation between solar activities and
GMD with long-term variations of MS incidence.
Methods: After a systematic review, we studied the association between
alterations in solar wind velocity (Vsw) and planetary A index (Ap, a GMD
index) with MS incidence in Tehran and western Greece, during the 23rd solar
cycle (1996-2008), by an ecological-correlational study.
Results: We found moderate to strong correlations among MS incidence of
Tehran with Vsw (Rs=0.665, p=0.013), with one year delay, and also with Ap
(Rs=0.864, p=0.001) with 2 year delay. There were very strong correlations
among MS incidence data of Greece with Vsw (R=0.906, p<0.001) and with Ap
(R=0.844, p=0.001), both with one year lag.
Conclusion: It is the first time that a hypothesis has introduced an
environmental factor that may describe MS incidence alterations; however, it
should be remembered that correlation does not necessarily imply the
existence of a causal relationship. The important message of these findings
for researchers is
to provide MS incidence reports with higher resolution for consecutive years,
based on the time of disease onset and relapses, not just the time of
diagnosis. Then, it would be possible to further investigate the validity of
GMD hypothesis or any other probable environmental risk factors.
Keywords: Correlation analysis, Multiple sclerosis, Incidence, Geomagnetic
disturbance, Geomagnetic activity, Solar wind velocity, Environmental risk
factor.
|
[
{
"created": "Sun, 18 May 2014 21:28:05 GMT",
"version": "v1"
}
] |
2014-05-22
|
[
[
"Abdollahi",
"Fahimeh",
""
],
[
"Sajedi",
"Seyed Aidin",
""
]
] |
Background: Recently, we introduced solar related geomagnetic disturbances (GMD) as a potential environmental risk factor for multiple sclerosis (MS). The aim of this study was to test probable correlation between solar activities and GMD with long-term variations of MS incidence. Methods: After a systematic review, we studied the association between alterations in solar wind velocity (Vsw) and planetary A index (Ap, a GMD index) with MS incidence in Tehran and western Greece, during the 23rd solar cycle (1996-2008), by an ecological-correlational study. Results: We found moderate to strong correlations among MS incidence of Tehran with Vsw (Rs=0.665, p=0.013), with one year delay, and also with Ap (Rs=0.864, p=0.001) with 2 year delay. There were very strong correlations among MS incidence data of Greece with Vsw (R=0.906, p<0.001) and with Ap (R=0.844, p=0.001), both with one year lag. Conclusion: It is the first time that a hypothesis has introduced an environmental factor that may describe MS incidence alterations; however, it should be remembered that correlation does not necessarily imply the existence of a causal relationship. The important message of these findings for researchers is to provide MS incidence reports with higher resolution for consecutive years, based on the time of disease onset and relapses, not just the time of diagnosis. Then, it would be possible to further investigate the validity of GMD hypothesis or any other probable environmental risk factors. Keywords: Correlation analysis, Multiple sclerosis, Incidence, Geomagnetic disturbance, Geomagnetic activity, Solar wind velocity, Environmental risk factor.
|
1901.01877
|
Siyao Li
|
Siyao Li, Daniela Tuninetti and Natasha Devroye
|
On the Capacity Region of the Layered Packet Erasure Broadcast Channel
with Feedback
|
6 pages, 1 figure, submitted to 2019 IEEE International Conference on
Communications (ICC)
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, the capacity region of the Layered Packet Erasure Broadcast
Channel (LPE-BC) with Channel Output Feedback (COF) available at the
transmitter is investigated. The LPE-BC is a high-SNR approximation of the
fading Gaussian BC recently proposed by Tse and Yates, who characterized the
capacity region for any number of users and any number of layers when there is
no COF. This paper derives capacity inner and outer bounds for the LPE-BC with
COF for the case of two users and any number of layers. The inner bounds
generalize past results for the two-user erasure BC, which is a special case of
the LPE-BC with COF with only one layer. The novelty lies in the use of
\emph{inter-user \& inter-layer network coding} retransmissions (for those
packets that have only been received by the unintended user), where each random
linear combination may involve packets intended for any user originally sent on
any of the layers. Analytical and numerical examples show that the proposed
outer bound is optimal for some LPE-BCs.
|
[
{
"created": "Mon, 7 Jan 2019 15:22:27 GMT",
"version": "v1"
}
] |
2019-01-08
|
[
[
"Li",
"Siyao",
""
],
[
"Tuninetti",
"Daniela",
""
],
[
"Devroye",
"Natasha",
""
]
] |
In this paper, the capacity region of the Layered Packet Erasure Broadcast Channel (LPE-BC) with Channel Output Feedback (COF) available at the transmitter is investigated. The LPE-BC is a high-SNR approximation of the fading Gaussian BC recently proposed by Tse and Yates, who characterized the capacity region for any number of users and any number of layers when there is no COF. This paper derives capacity inner and outer bounds for the LPE-BC with COF for the case of two users and any number of layers. The inner bounds generalize past results for the two-user erasure BC, which is a special case of the LPE-BC with COF with only one layer. The novelty lies in the use of \emph{inter-user \& inter-layer network coding} retransmissions (for those packets that have only been received by the unintended user), where each random linear combination may involve packets intended for any user originally sent on any of the layers. Analytical and numerical examples show that the proposed outer bound is optimal for some LPE-BCs.
|
q-bio/0509013
|
Atul Narang
|
Jason T. Noel, Brenton Cox, Atul Narang
|
Identification of the growth-limiting step in continuous cultures from
initial rates measured in response to substrate-excess conditions
|
12 pages
| null | null | null |
q-bio.MN
| null |
When steady state chemostat cultures are abruptly exposed to substrate-excess
conditions, they exhibit long lags before adjusting to the new environment. The
identity of the rate-limiting step for this slow response can be inferred from
the initial yields and specific growth rates measured by exposing steady state
cultures at various dilution rates to substrate-excess conditions. We measured
these parameters for glucose-limited cultures of E. coli ML308 growing at
various dilution rates between 0.03 and 0.6 1/hr. In all the cases, the initial
yields were 20-30% less than the steady state yields. The decline of the yield
implies that overflow metabolism is triggered in response to excess glucose. It
is therefore unlikely that the initial response of the cells is limited by
substrate uptake. The initial specific growth rates of cultures growing at low
dilution rates (D = 0.03, 0.05, 0.075, 0.1, 0.3 1/hr) were significantly higher
than the steady state specific growth rates. However, the increment in the
specific growth rate decreased with the dilution rate, and at D=0.6 1/hr, there
was no improvement in the specific growth rate. The initial specific growth
rates varied hyperbolically with the dilution, decreasing sharply at dilution
rates below 0.1 1/hr and saturating at D=0.6 1/hr. This is consistent with a
picture in which the initial response is limited by the activity of glutamate
dehydrogenase.
|
[
{
"created": "Mon, 12 Sep 2005 20:22:46 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Noel",
"Jason T.",
""
],
[
"Cox",
"Brenton",
""
],
[
"Narang",
"Atul",
""
]
] |
When steady state chemostat cultures are abruptly exposed to substrate-excess conditions, they exhibit long lags before adjusting to the new environment. The identity of the rate-limiting step for this slow response can be inferred from the initial yields and specific growth rates measured by exposing steady state cultures at various dilution rates to substrate-excess conditions. We measured these parameters for glucose-limited cultures of E. coli ML308 growing at various dilution rates between 0.03 and 0.6 1/hr. In all the cases, the initial yields were 20-30% less than the steady state yields. The decline of the yield implies that overflow metabolism is triggered in response to excess glucose. It is therefore unlikely that the initial response of the cells is limited by substrate uptake. The initial specific growth rates of cultures growing at low dilution rates (D = 0.03, 0.05, 0.075, 0.1, 0.3 1/hr) were significantly higher than the steady state specific growth rates. However, the increment in the specific growth rate decreased with the dilution rate, and at D=0.6 1/hr, there was no improvement in the specific growth rate. The initial specific growth rates varied hyperbolically with the dilution, decreasing sharply at dilution rates below 0.1 1/hr and saturating at D=0.6 1/hr. This is consistent with a picture in which the initial response is limited by the activity of glutamate dehydrogenase.
|
2002.02100
|
S.H. Shabbeer Basha
|
S.H. Shabbeer Basha, Viswanath Pulabaigari, Snehasis Mukherjee
|
An Information-rich Sampling Technique over Spatio-Temporal CNN for
Classification of Human Actions in Videos
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel scheme for human action recognition in videos, using a
3-dimensional Convolutional Neural Network (3D CNN) based classifier.
Traditionally in deep learning based human activity recognition approaches,
either a few random frames or every $k^{th}$ frame of the video is considered
for training the 3D CNN, where $k$ is a small positive integer, like 4, 5, or
6. This kind of sampling reduces the volume of the input data, which speeds-up
training of the network and also avoids over-fitting to some extent, thus
enhancing the performance of the 3D CNN model. In the proposed video sampling
technique, consecutive $k$ frames of a video are aggregated into a single frame
by computing a Gaussian-weighted summation of the $k$ frames. The resulting
frame (aggregated frame) preserves the information in a better way than the
conventional approaches and is experimentally shown to perform better. In this
paper, a 3D CNN architecture is proposed to extract the spatio-temporal
features, followed by a Long Short-Term Memory (LSTM) network to recognize
human actions.
The proposed 3D CNN architecture is capable of handling the videos where the
camera is placed at a distance from the performer. Experiments are performed
with KTH and WEIZMANN human actions datasets, whereby it is shown to produce
comparable results with the state-of-the-art techniques.
|
[
{
"created": "Thu, 6 Feb 2020 05:07:41 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Feb 2020 06:42:20 GMT",
"version": "v2"
}
] |
2020-02-10
|
[
[
"Basha",
"S. H. Shabbeer",
""
],
[
"Pulabaigari",
"Viswanath",
""
],
[
"Mukherjee",
"Snehasis",
""
]
] |
We propose a novel scheme for human action recognition in videos, using a 3-dimensional Convolutional Neural Network (3D CNN) based classifier. Traditionally in deep learning based human activity recognition approaches, either a few random frames or every $k^{th}$ frame of the video is considered for training the 3D CNN, where $k$ is a small positive integer, like 4, 5, or 6. This kind of sampling reduces the volume of the input data, which speeds-up training of the network and also avoids over-fitting to some extent, thus enhancing the performance of the 3D CNN model. In the proposed video sampling technique, consecutive $k$ frames of a video are aggregated into a single frame by computing a Gaussian-weighted summation of the $k$ frames. The resulting frame (aggregated frame) preserves the information in a better way than the conventional approaches and is experimentally shown to perform better. In this paper, a 3D CNN architecture is proposed to extract the spatio-temporal features, followed by a Long Short-Term Memory (LSTM) network to recognize human actions. The proposed 3D CNN architecture is capable of handling the videos where the camera is placed at a distance from the performer. Experiments are performed with KTH and WEIZMANN human actions datasets, whereby it is shown to produce comparable results with the state-of-the-art techniques.
|
2404.12984
|
Marek Wodzinski
|
Mateusz Daniol, Daria Hemmerling, Jakub Sikora, Pawel Jemiolo, Marek
Wodzinski, Magdalena Wojcik-Pedziwiatr
|
Eye-tracking in Mixed Reality for Diagnosis of Neurodegenerative
Diseases
| null | null | null | null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Parkinson's disease ranks as the second most prevalent neurodegenerative
disorder globally. This research aims to develop a system leveraging Mixed
Reality capabilities for tracking and assessing eye movements. In this paper,
we present a medical scenario and outline the development of an application
designed to capture eye-tracking signals through Mixed Reality technology for
the evaluation of neurodegenerative diseases. Additionally, we introduce a
pipeline for extracting clinically relevant features from eye-gaze analysis,
describing the capabilities of the proposed system from a medical perspective.
The study involved a cohort of healthy control individuals and patients
suffering from Parkinson's disease, showcasing the feasibility and potential of
the proposed technology for non-intrusive monitoring of eye movement patterns
for the diagnosis of neurodegenerative diseases.
Clinical relevance - Developing a non-invasive biomarker for Parkinson's
disease is urgently needed to accurately detect the disease's onset. This would
allow for the timely introduction of neuroprotective treatment at the earliest
stage and enable the continuous monitoring of intervention outcomes. The
ability to detect subtle changes in eye movements allows for early diagnosis,
offering a critical window for intervention before more pronounced symptoms
emerge. Eye tracking provides objective and quantifiable biomarkers, ensuring
reliable assessments of disease progression and cognitive function. The eye
gaze analysis using Mixed Reality glasses is wireless, facilitating convenient
assessments in both home and hospital settings. The approach offers the
advantage of utilizing hardware that requires no additional specialized
attachments, enabling examinations through personal eyewear.
|
[
{
"created": "Fri, 19 Apr 2024 16:34:15 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jun 2024 10:45:42 GMT",
"version": "v2"
}
] |
2024-06-04
|
[
[
"Daniol",
"Mateusz",
""
],
[
"Hemmerling",
"Daria",
""
],
[
"Sikora",
"Jakub",
""
],
[
"Jemiolo",
"Pawel",
""
],
[
"Wodzinski",
"Marek",
""
],
[
"Wojcik-Pedziwiatr",
"Magdalena",
""
]
] |
Parkinson's disease ranks as the second most prevalent neurodegenerative disorder globally. This research aims to develop a system leveraging Mixed Reality capabilities for tracking and assessing eye movements. In this paper, we present a medical scenario and outline the development of an application designed to capture eye-tracking signals through Mixed Reality technology for the evaluation of neurodegenerative diseases. Additionally, we introduce a pipeline for extracting clinically relevant features from eye-gaze analysis, describing the capabilities of the proposed system from a medical perspective. The study involved a cohort of healthy control individuals and patients suffering from Parkinson's disease, showcasing the feasibility and potential of the proposed technology for non-intrusive monitoring of eye movement patterns for the diagnosis of neurodegenerative diseases. Clinical relevance - Developing a non-invasive biomarker for Parkinson's disease is urgently needed to accurately detect the disease's onset. This would allow for the timely introduction of neuroprotective treatment at the earliest stage and enable the continuous monitoring of intervention outcomes. The ability to detect subtle changes in eye movements allows for early diagnosis, offering a critical window for intervention before more pronounced symptoms emerge. Eye tracking provides objective and quantifiable biomarkers, ensuring reliable assessments of disease progression and cognitive function. The eye gaze analysis using Mixed Reality glasses is wireless, facilitating convenient assessments in both home and hospital settings. The approach offers the advantage of utilizing hardware that requires no additional specialized attachments, enabling examinations through personal eyewear.
|
2109.06780
|
Danijar Hafner
|
Danijar Hafner
|
Benchmarking the Spectrum of Agent Capabilities
|
Published at ICLR 2022. Website: https://danijar.com/crafter
| null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Evaluating the general abilities of intelligent agents requires complex
simulation environments. Existing benchmarks typically evaluate only one narrow
task per environment, requiring researchers to perform expensive training runs
on many different environments. We introduce Crafter, an open world survival
game with visual inputs that evaluates a wide range of general abilities within
a single environment. Agents either learn from the provided reward signal or
through intrinsic objectives and are evaluated by semantically meaningful
achievements that can be unlocked during each episode, such as discovering
resources and crafting tools. Consistently unlocking all achievements requires
strong generalization, deep exploration, and long-term reasoning. We
experimentally verify that Crafter is of appropriate difficulty to drive future
research and provide baseline scores of reward agents and unsupervised agents.
Furthermore, we observe sophisticated behaviors emerging from maximizing the
reward signal, such as building tunnel systems, bridges, houses, and
plantations. We hope that Crafter will accelerate research progress by quickly
evaluating a wide spectrum of abilities.
|
[
{
"created": "Tue, 14 Sep 2021 15:49:31 GMT",
"version": "v1"
},
{
"created": "Sat, 12 Feb 2022 20:02:13 GMT",
"version": "v2"
}
] |
2022-02-15
|
[
[
"Hafner",
"Danijar",
""
]
] |
Evaluating the general abilities of intelligent agents requires complex simulation environments. Existing benchmarks typically evaluate only one narrow task per environment, requiring researchers to perform expensive training runs on many different environments. We introduce Crafter, an open world survival game with visual inputs that evaluates a wide range of general abilities within a single environment. Agents either learn from the provided reward signal or through intrinsic objectives and are evaluated by semantically meaningful achievements that can be unlocked during each episode, such as discovering resources and crafting tools. Consistently unlocking all achievements requires strong generalization, deep exploration, and long-term reasoning. We experimentally verify that Crafter is of appropriate difficulty to drive future research and provide baseline scores of reward agents and unsupervised agents. Furthermore, we observe sophisticated behaviors emerging from maximizing the reward signal, such as building tunnel systems, bridges, houses, and plantations. We hope that Crafter will accelerate research progress by quickly evaluating a wide spectrum of abilities.
|
1108.1751
|
Joel Oren
|
Siavosh Benabbas, Hyun Chul Lee, Joel Oren, Yuli Ye
|
Efficient Sum-Based Hierarchical Smoothing Under \ell_1-Norm
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a new regression problem which we call the Sum-Based
Hierarchical Smoothing problem. Given a directed acyclic graph and a
non-negative value, called target value, for each vertex in the graph, we wish
to find non-negative values for the vertices satisfying a certain constraint
while minimizing the distance between these assigned values and the target
values in the \ell_p-norm. The constraint is that the value assigned to each vertex should be
no less than the sum of the values assigned to its children. We motivate this
problem with applications in information retrieval and web mining. While our
problem can be solved in polynomial time using linear programming, given the
input size in these applications such a solution might be too slow. We mainly
study the \ell_1-norm case restricting the underlying graphs to rooted trees.
For this case we provide an efficient algorithm, running in O(n^2) time. While
the algorithm is purely combinatorial, its proof of correctness is an elegant
use of linear programming duality. We believe that our approach may be
applicable to similar problems, where comparable hierarchical constraints are
involved, e.g. considering the average of the values assigned to the children
of each vertex. While similar in flavor to other smoothing problems like
Isotonic Regression (see for example [Angelov et al. SODA'06]), our problem is
arguably richer and theoretically more challenging.
|
[
{
"created": "Mon, 8 Aug 2011 17:07:06 GMT",
"version": "v1"
}
] |
2011-08-09
|
[
[
"Benabbas",
"Siavosh",
""
],
[
"Lee",
"Hyun Chul",
""
],
[
"Oren",
"Joel",
""
],
[
"Ye",
"Yuli",
""
]
] |
We introduce a new regression problem which we call the Sum-Based Hierarchical Smoothing problem. Given a directed acyclic graph and a non-negative value, called target value, for each vertex in the graph, we wish to find non-negative values for the vertices satisfying a certain constraint while minimizing the distance between these assigned values and the target values in the \ell_p-norm. The constraint is that the value assigned to each vertex should be no less than the sum of the values assigned to its children. We motivate this problem with applications in information retrieval and web mining. While our problem can be solved in polynomial time using linear programming, given the input size in these applications such a solution might be too slow. We mainly study the \ell_1-norm case restricting the underlying graphs to rooted trees. For this case we provide an efficient algorithm, running in O(n^2) time. While the algorithm is purely combinatorial, its proof of correctness is an elegant use of linear programming duality. We believe that our approach may be applicable to similar problems, where comparable hierarchical constraints are involved, e.g. considering the average of the values assigned to the children of each vertex. While similar in flavor to other smoothing problems like Isotonic Regression (see for example [Angelov et al. SODA'06]), our problem is arguably richer and theoretically more challenging.
|
1306.4974
|
Gautam Mandal
|
Pawel Caputa, Gautam Mandal and Ritam Sinha
|
Dynamical entanglement entropy with angular momentum and U(1) charge
|
22 pages, 4 figures; (v2) many comments added for better clarity;
typos fixed; references added
| null |
10.1007/JHEP11(2013)052
|
TIFR/TH/13-16, WITS-CTP-116
|
hep-th cond-mat.stat-mech
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider time-dependent entanglement entropy (EE) for a 1+1 dimensional
CFT in the presence of angular momentum and U(1) charge. The EE saturates,
irrespective of the initial state, to the grand canonical entropy after a time
large compared with the length of the entangling interval. We reproduce the CFT
results from an AdS dual consisting of a spinning BTZ black hole and a flat
U(1) connection. The apparent discrepancy that the holographic EE does not a
priori depend on the U(1) charge while the CFT EE does, is resolved by the
charge-dependent shift between the bulk and boundary stress tensors. We show
that for small entangling intervals, the entanglement entropy obeys the first
law of thermodynamics, as conjectured recently. The saturation of the EE in the
field theory is shown to follow from a version of quantum ergodicity; the
derivation indicates that it should hold for conformal as well as massive
theories in any number of dimensions.
|
[
{
"created": "Thu, 20 Jun 2013 19:49:23 GMT",
"version": "v1"
},
{
"created": "Sun, 25 Aug 2013 12:26:54 GMT",
"version": "v2"
}
] |
2015-06-16
|
[
[
"Caputa",
"Pawel",
""
],
[
"Mandal",
"Gautam",
""
],
[
"Sinha",
"Ritam",
""
]
] |
We consider time-dependent entanglement entropy (EE) for a 1+1 dimensional CFT in the presence of angular momentum and U(1) charge. The EE saturates, irrespective of the initial state, to the grand canonical entropy after a time large compared with the length of the entangling interval. We reproduce the CFT results from an AdS dual consisting of a spinning BTZ black hole and a flat U(1) connection. The apparent discrepancy that the holographic EE does not a priori depend on the U(1) charge while the CFT EE does, is resolved by the charge-dependent shift between the bulk and boundary stress tensors. We show that for small entangling intervals, the entanglement entropy obeys the first law of thermodynamics, as conjectured recently. The saturation of the EE in the field theory is shown to follow from a version of quantum ergodicity; the derivation indicates that it should hold for conformal as well as massive theories in any number of dimensions.
|
2302.05803
|
Mohammadjavad Ghorbanalivakili
|
Jungwon Kang, Mohammadjavad Ghorbanalivakili, Gunho Sohn, David Beach,
and Veronica Marin
|
TPE-Net: Track Point Extraction and Association Network for Rail Path
Proposal Generation
|
7 pages, 6 figures, and 1 table Jungwon Kang and Mohammadjavad
Ghorbanalivakili have equal contribution
| null |
10.1109/CASE56687.2023.10260541
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One essential feature of an autonomous train is minimizing collision risks
with third-party objects. To estimate the risk, the control system must
identify topological information of all the rail routes ahead on which the
train can possibly move, especially within merging or diverging rails. This
way, the train can figure out the status of potential obstacles with respect to
its route and hence, make a timely decision. Numerous studies have successfully
extracted all rail tracks as a whole within forward-looking images without
considering element instances. Still, some image-based methods have employed
hard-coded prior knowledge of railway geometry on 3D data to associate
left-right rails and generate rail route instances. However, we propose a rail
path extraction pipeline in which left-right rail pixels of each rail route
instance are extracted and associated through a fully convolutional
encoder-decoder architecture called TPE-Net. Two different regression branches
for TPE-Net are proposed to regress the locations of center points of each rail
route, along with their corresponding left-right pixels. Extracted rail pixels
are then spatially clustered to generate topological information of all the
possible train routes (ego-paths), discarding non-ego-path ones. Experimental
results on a challenging, publicly released benchmark show true-positive-pixel
level average precision and recall of 0.9207 and 0.8721, respectively, at about
12 frames per second. Even though our evaluation results are not higher than
the SOTA, the proposed regression pipeline performs remarkably in extracting
the correspondences by looking once at the image. It generates strong rail
route hypotheses without reliance on camera parameters, 3D data, and
geometrical constraints.
|
[
{
"created": "Sat, 11 Feb 2023 22:49:06 GMT",
"version": "v1"
}
] |
2024-07-26
|
[
[
"Kang",
"Jungwon",
""
],
[
"Ghorbanalivakili",
"Mohammadjavad",
""
],
[
"Sohn",
"Gunho",
""
],
[
"Beach",
"David",
""
],
[
"Marin",
"Veronica",
""
]
] |
One essential feature of an autonomous train is minimizing collision risks with third-party objects. To estimate the risk, the control system must identify topological information of all the rail routes ahead on which the train can possibly move, especially within merging or diverging rails. This way, the train can figure out the status of potential obstacles with respect to its route and hence, make a timely decision. Numerous studies have successfully extracted all rail tracks as a whole within forward-looking images without considering element instances. Still, some image-based methods have employed hard-coded prior knowledge of railway geometry on 3D data to associate left-right rails and generate rail route instances. However, we propose a rail path extraction pipeline in which left-right rail pixels of each rail route instance are extracted and associated through a fully convolutional encoder-decoder architecture called TPE-Net. Two different regression branches for TPE-Net are proposed to regress the locations of center points of each rail route, along with their corresponding left-right pixels. Extracted rail pixels are then spatially clustered to generate topological information of all the possible train routes (ego-paths), discarding non-ego-path ones. Experimental results on a challenging, publicly released benchmark show true-positive-pixel level average precision and recall of 0.9207 and 0.8721, respectively, at about 12 frames per second. Even though our evaluation results are not higher than the SOTA, the proposed regression pipeline performs remarkably in extracting the correspondences by looking once at the image. It generates strong rail route hypotheses without reliance on camera parameters, 3D data, and geometrical constraints.
|
hep-th/0105319
|
Kluson Josef
|
J. Kluson
|
Some Remarks About Berkovits' Superstring Field Theory
|
14 pages, Introduction part and some typos corrected, ref. added
|
JHEP 0106:045,2001
|
10.1088/1126-6708/2001/06/045
| null |
hep-th
| null |
In this short note we would like to discuss general solutions of the
Berkovits superstring field theory, in particular the string field action for
fluctuation around such a solution. We will find that fluctuations obey the
same equation of motion as the original field with the new BRST operator. Then
we will argue that the superstring field theory action for fluctuation field
has the same form as the original one.
|
[
{
"created": "Thu, 31 May 2001 17:24:20 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Jun 2001 07:32:02 GMT",
"version": "v2"
}
] |
2010-02-03
|
[
[
"Kluson",
"J.",
""
]
] |
In this short note we would like to discuss general solutions of the Berkovits superstring field theory, in particular the string field action for fluctuation around such a solution. We will find that fluctuations obey the same equation of motion as the original field with the new BRST operator. Then we will argue that the superstring field theory action for fluctuation field has the same form as the original one.
|
hep-th/0607243
|
Pietro Antonio Grassi
|
P.A. Grassi and M. Marescotti
|
Flux Vacua and Supermanifolds
|
Latex, no figures, 35 pp, misprints and minor changes
|
JHEP 0701:068,2007
|
10.1088/1126-6708/2007/01/068
|
DISTA-UPO-06, DFTT-18/2006
|
hep-th
| null |
As has been recently pointed out, physically relevant models derived from string
theory require the presence of non-vanishing form fluxes besides the usual
geometrical constraints. In the case of NS-NS fluxes, the Generalized Complex
Geometry encodes this information in a beautiful geometrical structure. On
the other hand, the R-R fluxes call for supergeometry as the underlying
mathematical framework. In this context, we analyze the possibility of
constructing interesting supermanifolds recasting the geometrical data and RR
fluxes. To characterize these supermanifolds we have been guided by the fact
that topological strings on supermanifolds require the super-Ricci flatness of the
target space. This can be achieved by adding to a given bosonic manifold enough
anticommuting coordinates and new constraints on the bosonic sub-manifold. We
study these constraints at the linear and non-linear level for a pure
geometrical setting and in the presence of p-form field strengths. We find that
certain spaces admit several super-extensions and we give a parameterization in
a simple case of d bosonic coordinates and two fermionic coordinates. In
addition, we comment on the role of the RR field in the construction of the
super-metric. We give several examples based on supergroup manifolds and coset
supermanifolds.
|
[
{
"created": "Sun, 30 Jul 2006 10:26:32 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Nov 2006 11:21:35 GMT",
"version": "v2"
},
{
"created": "Fri, 11 Apr 2008 09:54:32 GMT",
"version": "v3"
}
] |
2010-10-27
|
[
[
"Grassi",
"P. A.",
""
],
[
"Marescotti",
"M.",
""
]
] |
As has been recently pointed out, physically relevant models derived from string theory require the presence of non-vanishing form fluxes besides the usual geometrical constraints. In the case of NS-NS fluxes, the Generalized Complex Geometry encodes this information in a beautiful geometrical structure. On the other hand, the R-R fluxes call for supergeometry as the underlying mathematical framework. In this context, we analyze the possibility of constructing interesting supermanifolds recasting the geometrical data and RR fluxes. To characterize these supermanifolds we have been guided by the fact that topological strings on supermanifolds require the super-Ricci flatness of the target space. This can be achieved by adding to a given bosonic manifold enough anticommuting coordinates and new constraints on the bosonic sub-manifold. We study these constraints at the linear and non-linear level for a pure geometrical setting and in the presence of p-form field strengths. We find that certain spaces admit several super-extensions and we give a parameterization in a simple case of d bosonic coordinates and two fermionic coordinates. In addition, we comment on the role of the RR field in the construction of the super-metric. We give several examples based on supergroup manifolds and coset supermanifolds.
|
1401.8173
|
Daniel Zaragoza
|
Daniel Zaragoza
|
Modeling TCP Throughput with Random Packet Drops
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The present report deals with the modeling of the long-term throughput,
a.k.a., send rate, of the Transmission Control Protocol (TCP) under the
following assumptions. (i) We consider a single 'infinite source' using a
network path from sender to receiver. (ii) Each TCP packet is randomly dropped
with probability p; independently of previous drops or any other
event/parameter. (iii) The - never changing - receiver window limits the amount
of outstanding data. (iv) The receiver acknowledges every packet. (v) The TCP
modeled here conforms to the publicly available standards (RFCs) as concerns
congestion control. We validate and determine the limits of the different
models proposed here using packet-level simulations. The contributions of the
present work are the following: (a) We determine three regimes, and their
conditions of applicability, depending on p: Linear law regime, square root law
regime, and timeout regime. (b) As concerns the relationship between the linear
and square root regimes, we give additional insights relative to previously
published work. (c) We give the exact equations governing the TCP send rate in
any regime. (d) From the exact equation and under the further condition that
the path is not saturated, we give and discuss approximations for the send rate
of the NewReno variant of TCP. A by-product of these calculations is the
distribution of the sender window, independently of any timing or saturation
consideration. (e) These approximations give results that are accurate to a few
percent when compared to simulation results. Detailed comparison and sources of
errors between theory and simulations are also discussed.
|
[
{
"created": "Fri, 31 Jan 2014 14:20:22 GMT",
"version": "v1"
}
] |
2014-02-03
|
[
[
"Zaragoza",
"Daniel",
""
]
] |
The present report deals with the modeling of the long-term throughput, a.k.a., send rate, of the Transmission Control Protocol (TCP) under the following assumptions. (i) We consider a single 'infinite source' using a network path from sender to receiver. (ii) Each TCP packet is randomly dropped with probability p; independently of previous drops or any other event/parameter. (iii) The - never changing - receiver window limits the amount of outstanding data. (iv) The receiver acknowledges every packet. (v) The TCP modeled here conforms to the publicly available standards (RFCs) as concerns congestion control. We validate and determine the limits of the different models proposed here using packet-level simulations. The contributions of the present work are the following: (a) We determine three regimes, and their conditions of applicability, depending on p: Linear law regime, square root law regime, and timeout regime. (b) As concerns the relationship between the linear and square root regimes, we give additional insights relative to previously published work. (c) We give the exact equations governing the TCP send rate in any regime. (d) From the exact equation and under the further condition that the path is not saturated, we give and discuss approximations for the send rate of the NewReno variant of TCP. A by-product of these calculations is the distribution of the sender window, independently of any timing or saturation consideration. (e) These approximations give results that are accurate to a few percent when compared to simulation results. Detailed comparison and sources of errors between theory and simulations are also discussed.
|
2401.03134
|
Paridhi Maheshwari
|
Paridhi Maheshwari, Hongyu Ren, Yanan Wang, Rok Sosic, Jure Leskovec
|
TimeGraphs: Graph-based Temporal Reasoning
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many real-world systems exhibit temporal, dynamic behaviors, which are
captured as time series of complex agent interactions. To perform temporal
reasoning, current methods primarily encode temporal dynamics through simple
sequence-based models. However, in general these models fail to efficiently
capture the full spectrum of rich dynamics in the input, since the dynamics is
not uniformly distributed. In particular, relevant information might be harder
to extract and computing power is wasted for processing all individual
timesteps, even if they contain no significant changes or no new information.
Here we propose TimeGraphs, a novel approach that characterizes dynamic
interactions as a hierarchical temporal graph, diverging from traditional
sequential representations. Our approach models the interactions using a
compact graph-based representation, enabling adaptive reasoning across diverse
time scales. Adopting a self-supervised method, TimeGraphs constructs a
multi-level event hierarchy from a temporal input, which is then used to
efficiently reason about the unevenly distributed dynamics. This construction
process is scalable and incremental to accommodate streaming data. We evaluate
TimeGraphs on multiple datasets with complex, dynamic agent interactions,
including a football simulator, the Resistance game, and the MOMA human
activity dataset. The results demonstrate both robustness and efficiency of
TimeGraphs on a range of temporal reasoning tasks. Our approach obtains
state-of-the-art performance and leads to a performance increase of up to 12.2%
on event prediction and recognition tasks over current approaches. Our
experiments further demonstrate a wide array of capabilities including
zero-shot generalization, robustness in case of data sparsity, and adaptability
to streaming data flow.
|
[
{
"created": "Sat, 6 Jan 2024 06:26:49 GMT",
"version": "v1"
}
] |
2024-01-09
|
[
[
"Maheshwari",
"Paridhi",
""
],
[
"Ren",
"Hongyu",
""
],
[
"Wang",
"Yanan",
""
],
[
"Sosic",
"Rok",
""
],
[
"Leskovec",
"Jure",
""
]
] |
Many real-world systems exhibit temporal, dynamic behaviors, which are captured as time series of complex agent interactions. To perform temporal reasoning, current methods primarily encode temporal dynamics through simple sequence-based models. However, in general these models fail to efficiently capture the full spectrum of rich dynamics in the input, since the dynamics is not uniformly distributed. In particular, relevant information might be harder to extract and computing power is wasted for processing all individual timesteps, even if they contain no significant changes or no new information. Here we propose TimeGraphs, a novel approach that characterizes dynamic interactions as a hierarchical temporal graph, diverging from traditional sequential representations. Our approach models the interactions using a compact graph-based representation, enabling adaptive reasoning across diverse time scales. Adopting a self-supervised method, TimeGraphs constructs a multi-level event hierarchy from a temporal input, which is then used to efficiently reason about the unevenly distributed dynamics. This construction process is scalable and incremental to accommodate streaming data. We evaluate TimeGraphs on multiple datasets with complex, dynamic agent interactions, including a football simulator, the Resistance game, and the MOMA human activity dataset. The results demonstrate both robustness and efficiency of TimeGraphs on a range of temporal reasoning tasks. Our approach obtains state-of-the-art performance and leads to a performance increase of up to 12.2% on event prediction and recognition tasks over current approaches. Our experiments further demonstrate a wide array of capabilities including zero-shot generalization, robustness in case of data sparsity, and adaptability to streaming data flow.
|
1711.01371
|
Runmin Cong
|
Runmin Cong, Jianjun Lei, Huazhu Fu, Weisi Lin, Qingming Huang,
Xiaochun Cao, and Chunping Hou
|
An Iterative Co-Saliency Framework for RGBD Images
|
13 pages, 13 figures, Accepted by IEEE Transactions on Cybernetics
2017. Project URL: https://rmcong.github.io/proj_RGBD_cosal_tcyb.html
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a newly emerging and significant topic in the computer vision community,
co-saliency detection aims at discovering the common salient objects in
multiple related images. The existing methods often generate the co-saliency
map through a direct forward pipeline which is based on the designed cues or
initialization, but lack the refinement-cycle scheme. Moreover, they mainly
focus on RGB images and ignore the depth information for RGBD images. In this
paper, we propose an iterative RGBD co-saliency framework, which utilizes the
existing single saliency maps as the initialization, and generates the final
RGBD cosaliency map by using a refinement-cycle model. Three schemes are
employed in the proposed RGBD co-saliency framework, which include the addition
scheme, deletion scheme, and iteration scheme. The addition scheme is used to
highlight the salient regions based on intra-image depth propagation and
saliency propagation, while the deletion scheme filters the saliency regions
and removes the non-common salient regions based on an inter-image constraint.
The iteration scheme is proposed to obtain a more homogeneous and consistent
co-saliency map. Furthermore, a novel descriptor, named depth shape prior, is
proposed in the addition scheme to introduce the depth information to enhance
identification of co-salient objects. The proposed method can effectively
exploit any existing 2D saliency model to work well in RGBD co-saliency
scenarios. The experiments on two RGBD cosaliency datasets demonstrate the
effectiveness of our proposed framework.
|
[
{
"created": "Sat, 4 Nov 2017 00:41:06 GMT",
"version": "v1"
}
] |
2017-11-07
|
[
[
"Cong",
"Runmin",
""
],
[
"Lei",
"Jianjun",
""
],
[
"Fu",
"Huazhu",
""
],
[
"Lin",
"Weisi",
""
],
[
"Huang",
"Qingming",
""
],
[
"Cao",
"Xiaochun",
""
],
[
"Hou",
"Chunping",
""
]
] |
As a newly emerging and significant topic in the computer vision community, co-saliency detection aims at discovering the common salient objects in multiple related images. The existing methods often generate the co-saliency map through a direct forward pipeline which is based on the designed cues or initialization, but lack the refinement-cycle scheme. Moreover, they mainly focus on RGB images and ignore the depth information for RGBD images. In this paper, we propose an iterative RGBD co-saliency framework, which utilizes the existing single saliency maps as the initialization, and generates the final RGBD cosaliency map by using a refinement-cycle model. Three schemes are employed in the proposed RGBD co-saliency framework, which include the addition scheme, deletion scheme, and iteration scheme. The addition scheme is used to highlight the salient regions based on intra-image depth propagation and saliency propagation, while the deletion scheme filters the saliency regions and removes the non-common salient regions based on an inter-image constraint. The iteration scheme is proposed to obtain a more homogeneous and consistent co-saliency map. Furthermore, a novel descriptor, named depth shape prior, is proposed in the addition scheme to introduce the depth information to enhance identification of co-salient objects. The proposed method can effectively exploit any existing 2D saliency model to work well in RGBD co-saliency scenarios. The experiments on two RGBD cosaliency datasets demonstrate the effectiveness of our proposed framework.
|
hep-th/0202019
|
Karasik David
|
David Karasik and Aharon Davidson
|
Brane Variation Dirac Style
|
7 pages, 1 eps figure, paper revised
|
Class.Quant.Grav. 21 (2004) 1295-1302
|
10.1088/0264-9381/21/6/001
| null |
hep-th gr-qc
| null |
Dirac's method for variations of a brane embedded in co-dimension one is
demonstrated. The variation in the location of the brane invokes a rest frame
formulation of the 'sandwiched' brane action. We first demonstrate the
necessity of this method by re-deriving Snell's law. Second, we apply the
method to a general $N$-dimensional brane embedded in co-dimension one bulk in
the presence of gravity. We re-derive the brane equations: (i) Israel junction
condition, (ii) Energy/momentum conservation on the brane, and (iii)
Geodetic-type equation for the brane.
|
[
{
"created": "Mon, 4 Feb 2002 07:26:04 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Sep 2003 15:25:18 GMT",
"version": "v2"
}
] |
2009-11-07
|
[
[
"Karasik",
"David",
""
],
[
"Davidson",
"Aharon",
""
]
] |
Dirac's method for variations of a brane embedded in co-dimension one is demonstrated. The variation in the location of the brane invokes a rest frame formulation of the 'sandwiched' brane action. We first demonstrate the necessity of this method by re-deriving Snell's law. Second, we apply the method to a general $N$-dimensional brane embedded in co-dimension one bulk in the presence of gravity. We re-derive the brane equations: (i) Israel junction condition, (ii) Energy/momentum conservation on the brane, and (iii) Geodetic-type equation for the brane.
|
2402.18797
|
Guande Wu
|
Guande Wu, Jing Qian, Sonia Castelo, Shaoyu Chen, Joao Rulff, Claudio
Silva
|
ARTiST: Automated Text Simplification for Task Guidance in Augmented
Reality
|
Conditionally accepted by CHI '24
| null |
10.1145/3613904.3642669
| null |
cs.HC cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Text presented in augmented reality provides in-situ, real-time information
for users. However, this content can be challenging to apprehend quickly when
engaging in cognitively demanding AR tasks, especially when it is presented on
a head-mounted display. We propose ARTiST, an automatic text simplification
system that uses a few-shot prompt and GPT-3 models to specifically optimize
the text length and semantic content for augmented reality. Developed out of a
formative study that included seven users and three experts, our system
combines a customized error calibration model with a few-shot prompt to
integrate the syntactic, lexical, elaborative, and content simplification
techniques, and generate simplified AR text for head-worn displays. Results
from a 16-user empirical study showed that ARTiST lightens the cognitive load
and improves performance significantly over both unmodified text and text
modified via traditional methods. Our work constitutes a step towards
automating the optimization of batch text data for readability and performance
in augmented reality.
|
[
{
"created": "Thu, 29 Feb 2024 01:58:49 GMT",
"version": "v1"
}
] |
2024-03-01
|
[
[
"Wu",
"Guande",
""
],
[
"Qian",
"Jing",
""
],
[
"Castelo",
"Sonia",
""
],
[
"Chen",
"Shaoyu",
""
],
[
"Rulff",
"Joao",
""
],
[
"Silva",
"Claudio",
""
]
] |
Text presented in augmented reality provides in-situ, real-time information for users. However, this content can be challenging to apprehend quickly when engaging in cognitively demanding AR tasks, especially when it is presented on a head-mounted display. We propose ARTiST, an automatic text simplification system that uses a few-shot prompt and GPT-3 models to specifically optimize the text length and semantic content for augmented reality. Developed out of a formative study that included seven users and three experts, our system combines a customized error calibration model with a few-shot prompt to integrate the syntactic, lexical, elaborative, and content simplification techniques, and generate simplified AR text for head-worn displays. Results from a 16-user empirical study showed that ARTiST lightens the cognitive load and improves performance significantly over both unmodified text and text modified via traditional methods. Our work constitutes a step towards automating the optimization of batch text data for readability and performance in augmented reality.
|
cs/0106059
|
Henning Christiansen
|
Henning Christiansen
|
CHR as grammar formalism. A first report
|
12 pages. Presented at ERCIM Workshop on Constraints, Prague, Czech
Republic, June 18-20, 2001
|
Proc. of ERCIM Workshop on Constraints, Prague, Czech Republic,
June 18-20, 2001
| null | null |
cs.PL cs.CL
| null |
Grammars written as Constraint Handling Rules (CHR) can be executed as
efficient and robust bottom-up parsers that provide a straightforward,
non-backtracking treatment of ambiguity. Abduction with integrity constraints
as well as other dynamic hypothesis generation techniques fit naturally into
such grammars and are exemplified for anaphora resolution, coordination and
text interpretation.
|
[
{
"created": "Fri, 29 Jun 2001 08:41:02 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Christiansen",
"Henning",
""
]
] |
Grammars written as Constraint Handling Rules (CHR) can be executed as efficient and robust bottom-up parsers that provide a straightforward, non-backtracking treatment of ambiguity. Abduction with integrity constraints as well as other dynamic hypothesis generation techniques fit naturally into such grammars and are exemplified for anaphora resolution, coordination and text interpretation.
|
2312.10457
|
Kaiyou Song
|
Kaiyou Song, Shan Zhang, Tong Wang
|
Semantic-Aware Autoregressive Image Modeling for Visual Representation
Learning
|
Accepted by AAAI2024
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The development of autoregressive modeling (AM) in computer vision lags
behind natural language processing (NLP) in self-supervised pre-training. This
is mainly caused by the challenge that images are not sequential signals and
lack a natural order when applying autoregressive modeling. In this study,
inspired by human beings' way of grasping an image, i.e., focusing on the main
object first, we present a semantic-aware autoregressive image modeling
(SemAIM) method to tackle this challenge. The key insight of SemAIM is to
autoregressively model images from the semantic patches to the less semantic
patches. To this end, we first calculate a semantic-aware permutation of
patches according to their feature similarities and then perform the
autoregression procedure based on the permutation. In addition, considering
that the raw pixels of patches are low-level signals and are not ideal
prediction targets for learning high-level semantic representation, we also
explore utilizing the patch features as the prediction targets. Extensive
experiments are conducted on a broad range of downstream tasks, including image
classification, object detection, and instance/semantic segmentation, to
evaluate the performance of SemAIM. The results demonstrate SemAIM achieves
state-of-the-art performance compared with other self-supervised methods.
Specifically, with ViT-B, SemAIM achieves 84.1% top-1 accuracy for fine-tuning
on ImageNet, 51.3% AP and 45.4% AP for object detection and instance
segmentation on COCO, which outperforms the vanilla MAE by 0.5%, 1.0%, and
0.5%, respectively.
|
[
{
"created": "Sat, 16 Dec 2023 14:03:10 GMT",
"version": "v1"
}
] |
2023-12-19
|
[
[
"Song",
"Kaiyou",
""
],
[
"Zhang",
"Shan",
""
],
[
"Wang",
"Tong",
""
]
] |
The development of autoregressive modeling (AM) in computer vision lags behind natural language processing (NLP) in self-supervised pre-training. This is mainly caused by the challenge that images are not sequential signals and lack a natural order when applying autoregressive modeling. In this study, inspired by human beings' way of grasping an image, i.e., focusing on the main object first, we present a semantic-aware autoregressive image modeling (SemAIM) method to tackle this challenge. The key insight of SemAIM is to autoregressively model images from the semantic patches to the less semantic patches. To this end, we first calculate a semantic-aware permutation of patches according to their feature similarities and then perform the autoregression procedure based on the permutation. In addition, considering that the raw pixels of patches are low-level signals and are not ideal prediction targets for learning high-level semantic representation, we also explore utilizing the patch features as the prediction targets. Extensive experiments are conducted on a broad range of downstream tasks, including image classification, object detection, and instance/semantic segmentation, to evaluate the performance of SemAIM. The results demonstrate SemAIM achieves state-of-the-art performance compared with other self-supervised methods. Specifically, with ViT-B, SemAIM achieves 84.1% top-1 accuracy for fine-tuning on ImageNet, 51.3% AP and 45.4% AP for object detection and instance segmentation on COCO, which outperforms the vanilla MAE by 0.5%, 1.0%, and 0.5%, respectively.
|
2310.15758
|
Dominic Petrak
|
Dominic Petrak, Nafise Sadat Moosavi, Ye Tian, Nikolai Rozanov, Iryna
Gurevych
|
Learning From Free-Text Human Feedback -- Collect New Datasets Or Extend
Existing Ones?
|
Accepted to be presented at EMNLP 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Learning from free-text human feedback is essential for dialog systems, but
annotated data is scarce and usually covers only a small fraction of error
types known in conversational AI. Instead of collecting and annotating new
datasets from scratch, recent advances in synthetic dialog generation could be
used to augment existing dialog datasets with the necessary annotations.
However, to assess the feasibility of such an effort, it is important to know
the types and frequency of free-text human feedback included in these datasets.
In this work, we investigate this question for a variety of commonly used
dialog datasets, including MultiWoZ, SGD, BABI, PersonaChat,
Wizards-of-Wikipedia, and the human-bot split of the Self-Feeding Chatbot.
Using our observations, we derive new taxonomies for the annotation of
free-text human feedback in dialogs and investigate the impact of including
such data in response generation for three SOTA language generation models,
including GPT-2, LLAMA, and Flan-T5. Our findings provide new insights into the
composition of the datasets examined, including error types, user response
types, and the relations between them.
|
[
{
"created": "Tue, 24 Oct 2023 12:01:11 GMT",
"version": "v1"
}
] |
2023-10-25
|
[
[
"Petrak",
"Dominic",
""
],
[
"Moosavi",
"Nafise Sadat",
""
],
[
"Tian",
"Ye",
""
],
[
"Rozanov",
"Nikolai",
""
],
[
"Gurevych",
"Iryna",
""
]
] |
Learning from free-text human feedback is essential for dialog systems, but annotated data is scarce and usually covers only a small fraction of error types known in conversational AI. Instead of collecting and annotating new datasets from scratch, recent advances in synthetic dialog generation could be used to augment existing dialog datasets with the necessary annotations. However, to assess the feasibility of such an effort, it is important to know the types and frequency of free-text human feedback included in these datasets. In this work, we investigate this question for a variety of commonly used dialog datasets, including MultiWoZ, SGD, BABI, PersonaChat, Wizards-of-Wikipedia, and the human-bot split of the Self-Feeding Chatbot. Using our observations, we derive new taxonomies for the annotation of free-text human feedback in dialogs and investigate the impact of including such data in response generation for three SOTA language generation models, including GPT-2, LLAMA, and Flan-T5. Our findings provide new insights into the composition of the datasets examined, including error types, user response types, and the relations between them.
|
2005.04987
|
Daniele Silvestro
|
Daniele Silvestro and Tobias Andermann
|
Prior choice affects ability of Bayesian neural networks to identify
unknowns
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep Bayesian neural networks (BNNs) are a powerful tool, though
computationally demanding, to perform parameter estimation while jointly
estimating uncertainty around predictions. BNNs are typically implemented using
arbitrary normal-distributed prior distributions on the model parameters. Here,
we explore the effects of different prior distributions on classification tasks
in BNNs and evaluate the evidence supporting the predictions based on posterior
probabilities approximated by Markov Chain Monte Carlo sampling and by
computing Bayes factors. We show that the choice of priors has a substantial
impact on the ability of the model to confidently assign data to the correct
class (true positive rates). Prior choice also affects significantly the
ability of a BNN to identify out-of-distribution instances as unknown (false
positive rates). When comparing our results against neural networks (NN) with
Monte Carlo dropout we found that BNNs generally outperform NNs. Finally, in
our tests we did not find a single best choice as prior distribution. Instead,
each dataset yielded the best results under a different prior, indicating that
testing alternative options can improve the performance of BNNs.
|
[
{
"created": "Mon, 11 May 2020 10:32:47 GMT",
"version": "v1"
}
] |
2020-05-12
|
[
[
"Silvestro",
"Daniele",
""
],
[
"Andermann",
"Tobias",
""
]
] |
Deep Bayesian neural networks (BNNs) are a powerful tool, though computationally demanding, to perform parameter estimation while jointly estimating uncertainty around predictions. BNNs are typically implemented using arbitrary normal-distributed prior distributions on the model parameters. Here, we explore the effects of different prior distributions on classification tasks in BNNs and evaluate the evidence supporting the predictions based on posterior probabilities approximated by Markov Chain Monte Carlo sampling and by computing Bayes factors. We show that the choice of priors has a substantial impact on the ability of the model to confidently assign data to the correct class (true positive rates). Prior choice also affects significantly the ability of a BNN to identify out-of-distribution instances as unknown (false positive rates). When comparing our results against neural networks (NN) with Monte Carlo dropout we found that BNNs generally outperform NNs. Finally, in our tests we did not find a single best choice as prior distribution. Instead, each dataset yielded the best results under a different prior, indicating that testing alternative options can improve the performance of BNNs.
|
hep-th/0305126
|
Nelson R. F. Braga
|
Henrique Boschi-Filho and Nelson R. F. Braga
|
AdS space compactification and holographic mapping in the AdS/CFT
correspondence
|
5 pages, talk presented at "Renormalization Group and Anomalies in
Gravity and Cosmology", Ouro Preto, Brazil, March 2003
|
Nucl.Phys.Proc.Suppl.127:128-132,2004
|
10.1016/S0920-5632(03)02413-7
| null |
hep-th
| null |
Physical consistency of quantum fields in anti-de Sitter space time requires
that the space must be compactified by the inclusion of a boundary where
appropriate conditions are imposed. An interpretation for the presence of this
boundary is found taking AdS as a limiting case of the space generated by a
large number of coincident branes. The compactification of AdS leads to a
discretization of the spectrum of bulk fields. As a consequence, we find a one
to one mapping between the quantum states of scalar fields in AdS bulk and
boundary. Using this mapping as an approximation for the dual relation between
string dilaton field and scalar QCD glueballs the high energy QCD scaling is
reproduced. We also use this map to estimate the ratio of scalar glueball
masses.
|
[
{
"created": "Wed, 14 May 2003 21:15:31 GMT",
"version": "v1"
}
] |
2008-11-26
|
[
[
"Boschi-Filho",
"Henrique",
""
],
[
"Braga",
"Nelson R. F.",
""
]
] |
Physical consistency of quantum fields in anti-de Sitter space time requires that the space must be compactified by the inclusion of a boundary where appropriate conditions are imposed. An interpretation for the presence of this boundary is found taking AdS as a limiting case of the space generated by a large number of coincident branes. The compactification of AdS leads to a discretization of the spectrum of bulk fields. As a consequence, we find a one to one mapping between the quantum states of scalar fields in AdS bulk and boundary. Using this mapping as an approximation for the dual relation between string dilaton field and scalar QCD glueballs the high energy QCD scaling is reproduced. We also use this map to estimate the ratio of scalar glueball masses.
|
1912.02398
|
Jie An
|
Jie An, Haoyi Xiong, Jun Huan and Jiebo Luo
|
Ultrafast Photorealistic Style Transfer via Neural Architecture Search
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The key challenge in photorealistic style transfer is that an algorithm
should faithfully transfer the style of a reference photo to a content photo
while the generated image should look like one captured by a camera. Although
several photorealistic style transfer algorithms have been proposed, they need
to rely on post- and/or pre-processing to make the generated images look
photorealistic. If we disable the additional processing, these algorithms would
fail to produce plausible photorealistic stylization in terms of detail
preservation and photorealism. In this work, we propose an effective solution
to these issues. Our method consists of a construction step (C-step) to build a
photorealistic stylization network and a pruning step (P-step) for
acceleration. In the C-step, we propose a dense auto-encoder named PhotoNet
based on a carefully designed pre-analysis. PhotoNet integrates a feature
aggregation module (BFA) and instance normalized skip links (INSL). To generate
faithful stylization, we introduce multiple style transfer modules in the
decoder and INSLs. PhotoNet significantly outperforms existing algorithms in
terms of both efficiency and effectiveness. In the P-step, we adopt a neural
architecture search method to accelerate PhotoNet. We propose an automatic
network pruning framework in the manner of teacher-student learning for
photorealistic stylization. The network architecture named PhotoNAS resulted
from the search achieves significant acceleration over PhotoNet while keeping
the stylization effects almost intact. We conduct extensive experiments on both
image and video transfer. The results show that our method can produce
favorable results while achieving 20-30 times acceleration in comparison with
the existing state-of-the-art approaches. It is worth noting that the proposed
algorithm accomplishes better performance without any pre- or post-processing.
|
[
{
"created": "Thu, 5 Dec 2019 05:51:54 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Jun 2020 12:56:01 GMT",
"version": "v2"
}
] |
2020-06-23
|
[
[
"An",
"Jie",
""
],
[
"Xiong",
"Haoyi",
""
],
[
"Huan",
"Jun",
""
],
[
"Luo",
"Jiebo",
""
]
] |
The key challenge in photorealistic style transfer is that an algorithm should faithfully transfer the style of a reference photo to a content photo while the generated image should look like one captured by a camera. Although several photorealistic style transfer algorithms have been proposed, they need to rely on post- and/or pre-processing to make the generated images look photorealistic. If we disable the additional processing, these algorithms would fail to produce plausible photorealistic stylization in terms of detail preservation and photorealism. In this work, we propose an effective solution to these issues. Our method consists of a construction step (C-step) to build a photorealistic stylization network and a pruning step (P-step) for acceleration. In the C-step, we propose a dense auto-encoder named PhotoNet based on a carefully designed pre-analysis. PhotoNet integrates a feature aggregation module (BFA) and instance normalized skip links (INSL). To generate faithful stylization, we introduce multiple style transfer modules in the decoder and INSLs. PhotoNet significantly outperforms existing algorithms in terms of both efficiency and effectiveness. In the P-step, we adopt a neural architecture search method to accelerate PhotoNet. We propose an automatic network pruning framework in the manner of teacher-student learning for photorealistic stylization. The network architecture named PhotoNAS resulted from the search achieves significant acceleration over PhotoNet while keeping the stylization effects almost intact. We conduct extensive experiments on both image and video transfer. The results show that our method can produce favorable results while achieving 20-30 times acceleration in comparison with the existing state-of-the-art approaches. It is worth noting that the proposed algorithm accomplishes better performance without any pre- or post-processing.
|
2403.10459
|
Marc Lafon
|
Marc Lafon and Alexandre Thomas
|
Understanding the Double Descent Phenomenon in Deep Learning
| null | null | null | null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Combining empirical risk minimization with capacity control is a classical
strategy in machine learning when trying to control the generalization gap and
avoid overfitting, as the model class capacity gets larger. Yet, in modern deep
learning practice, very large over-parameterized models (e.g. neural networks)
are optimized to fit perfectly the training data and still obtain great
generalization performance. Past the interpolation point, increasing model
complexity seems to actually lower the test error.
In this tutorial, we explain the concept of double descent and its
mechanisms. The first section sets the classical statistical learning framework
and introduces the double descent phenomenon. By looking at a number of
examples, section 2 introduces inductive biases that appear to have a key role
in double descent by selecting, among the multiple interpolating solutions, a
smooth empirical risk minimizer. Finally, section 3 explores the double descent
with two linear models, and gives other points of view from recent related
works.
|
[
{
"created": "Fri, 15 Mar 2024 16:51:24 GMT",
"version": "v1"
}
] |
2024-03-18
|
[
[
"Lafon",
"Marc",
""
],
[
"Thomas",
"Alexandre",
""
]
] |
Combining empirical risk minimization with capacity control is a classical strategy in machine learning when trying to control the generalization gap and avoid overfitting, as the model class capacity gets larger. Yet, in modern deep learning practice, very large over-parameterized models (e.g. neural networks) are optimized to fit perfectly the training data and still obtain great generalization performance. Past the interpolation point, increasing model complexity seems to actually lower the test error. In this tutorial, we explain the concept of double descent and its mechanisms. The first section sets the classical statistical learning framework and introduces the double descent phenomenon. By looking at a number of examples, section 2 introduces inductive biases that appear to have a key role in double descent by selecting, among the multiple interpolating solutions, a smooth empirical risk minimizer. Finally, section 3 explores the double descent with two linear models, and gives other points of view from recent related works.
|
2109.07920
|
Rodrigue Siry
|
Rodrigue Siry, Louis H\'emadou, Lo\"ic Simon, Fr\'ed\'eric Jurie
|
On the inductive biases of deep domain adaptation
|
10 pages, 8 Figures
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Domain alignment is currently the most prevalent solution to unsupervised
domain-adaptation tasks and is often presented as a minimizer of some
theoretical upper-bounds on risk in the target domain. However, further works
revealed severe inadequacies between theory and practice: we consolidate this
analysis and confirm that imposing domain invariance on features is neither
necessary nor sufficient to obtain low target risk. We instead argue that
successful deep domain adaptation relies largely on hidden inductive biases found
in the common practice, such as model pre-training or design of encoder
architecture. We perform various ablation experiments on popular benchmarks and
our own synthetic transfers to illustrate their role in prototypical
situations. To conclude our analysis, we propose to meta-learn parametric
inductive biases to solve specific transfers and show their superior
performance over handcrafted heuristics.
|
[
{
"created": "Thu, 16 Sep 2021 12:08:41 GMT",
"version": "v1"
}
] |
2021-09-17
|
[
[
"Siry",
"Rodrigue",
""
],
[
"Hémadou",
"Louis",
""
],
[
"Simon",
"Loïc",
""
],
[
"Jurie",
"Frédéric",
""
]
] |
Domain alignment is currently the most prevalent solution to unsupervised domain-adaptation tasks and is often presented as a minimizer of some theoretical upper-bounds on risk in the target domain. However, further works revealed severe inadequacies between theory and practice: we consolidate this analysis and confirm that imposing domain invariance on features is neither necessary nor sufficient to obtain low target risk. We instead argue that successful deep domain adaptation relies largely on hidden inductive biases found in the common practice, such as model pre-training or design of encoder architecture. We perform various ablation experiments on popular benchmarks and our own synthetic transfers to illustrate their role in prototypical situations. To conclude our analysis, we propose to meta-learn parametric inductive biases to solve specific transfers and show their superior performance over handcrafted heuristics.
|
1910.08121
|
Er-Cheng Tsai
|
Er-Cheng Tsai
|
Ghost Loops are Indispensable in Unitary Gauge
| null | null | null | null |
hep-th
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is conventionally taken for granted that the unitary gauge formulation of
quantum gauge field theory has the advantage of preserving unitarity because
only physical fields are involved but has the disadvantage of losing
renormalizability because of severe ultraviolet divergences due to vector meson
propagators. In this paper, we show how to handle the ultraviolet divergent
loops so that the physical amplitudes remain gauge invariant. One of the
consequences we arrive at is that ghost loops are needed to cancel the
divergences due to vector mesons and to give gauge invariant physical
amplitudes.
|
[
{
"created": "Thu, 17 Oct 2019 19:27:23 GMT",
"version": "v1"
}
] |
2019-10-21
|
[
[
"Tsai",
"Er-Cheng",
""
]
] |
It is conventionally taken for granted that the unitary gauge formulation of quantum gauge field theory has the advantage of preserving unitarity because only physical fields are involved but has the disadvantage of losing renormalizability because of severe ultraviolet divergences due to vector meson propagators. In this paper, we show how to handle the ultraviolet divergent loops so that the physical amplitudes remain gauge invariant. One of the consequences we arrive at is that ghost loops are needed to cancel the divergences due to vector mesons and to give gauge invariant physical amplitudes.
|
hep-th/9502032
| null |
M. C. Diamantini, P. Sodano, C. A. Trugenberger
|
Self Duality and Oblique Confinement in Planar Gauge Theories
|
32 pages, harvmac
|
Nucl.Phys. B448 (1995) 505-532
|
10.1016/0550-3213(95)00252-N
|
UGVA-DPT 1994/11-88 and DFUPG 98/94
|
hep-th cond-mat
| null |
We investigate the non-perturbative structure of two planar $Z_p \times Z_p$
lattice gauge models and discuss their relevance to two-dimensional condensed
matter systems and Josephson junction arrays. Both models involve two compact
U(1) gauge fields with Chern-Simons interactions, which break the symmetry down
to $Z_p \times Z_p$. By identifying the relevant topological excitations
(instantons) and their interactions we determine the phase structure of the
models. Our results match observed quantum phase transitions in Josephson
junction arrays and suggest also the possibility of {\it oblique confining
ground states} corresponding to quantum Hall regimes for either charges or
vortices.
|
[
{
"created": "Mon, 6 Feb 1995 10:06:47 GMT",
"version": "v1"
}
] |
2009-10-28
|
[
[
"Diamantini",
"M. C.",
""
],
[
"Sodano",
"P.",
""
],
[
"Trugenberger",
"C. A.",
""
]
] |
We investigate the non-perturbative structure of two planar $Z_p \times Z_p$ lattice gauge models and discuss their relevance to two-dimensional condensed matter systems and Josephson junction arrays. Both models involve two compact U(1) gauge fields with Chern-Simons interactions, which break the symmetry down to $Z_p \times Z_p$. By identifying the relevant topological excitations (instantons) and their interactions we determine the phase structure of the models. Our results match observed quantum phase transitions in Josephson junction arrays and suggest also the possibility of {\it oblique confining ground states} corresponding to quantum Hall regimes for either charges or vortices.
|
2205.09904
|
Jo Plested
|
Jo Plested, Tom Gedeon
|
Deep transfer learning for image classification: a survey
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Deep neural networks such as convolutional neural networks (CNNs) and
transformers have achieved many successes in image classification in recent
years. It has been consistently demonstrated that best practice for image
classification is when large deep models can be trained on abundant labelled
data. However there are many real world scenarios where the requirement for
large amounts of training data to get the best performance cannot be met. In
these scenarios transfer learning can help improve performance. To date there
have been no surveys that comprehensively review deep transfer learning as it
relates to image classification overall. However, several recent general
surveys of deep transfer learning and ones that relate to particular
specialised target image classification tasks have been published. We believe
it is important for the future progress in the field that all current knowledge
is collated and the overarching patterns analysed and discussed. In this survey
we formally define deep transfer learning and the problem it attempts to solve
in relation to image classification. We survey the current state of the field
and identify where recent progress has been made. We show where the gaps in
current knowledge are and make suggestions for how to progress the field to
fill in these knowledge gaps. We present a new taxonomy of the applications of
transfer learning for image classification. This taxonomy makes it easier to
see overarching patterns of where transfer learning has been effective and,
where it has failed to fulfill its potential. This also allows us to suggest
where the problems lie and how it could be used more effectively. We show that
under this new taxonomy, many of the applications where transfer learning has
been shown to be ineffective or even hinder performance are to be expected when
taking into account the source and target datasets and the techniques used.
|
[
{
"created": "Fri, 20 May 2022 00:03:39 GMT",
"version": "v1"
}
] |
2022-05-23
|
[
[
"Plested",
"Jo",
""
],
[
"Gedeon",
"Tom",
""
]
] |
Deep neural networks such as convolutional neural networks (CNNs) and transformers have achieved many successes in image classification in recent years. It has been consistently demonstrated that best practice for image classification is when large deep models can be trained on abundant labelled data. However there are many real world scenarios where the requirement for large amounts of training data to get the best performance cannot be met. In these scenarios transfer learning can help improve performance. To date there have been no surveys that comprehensively review deep transfer learning as it relates to image classification overall. However, several recent general surveys of deep transfer learning and ones that relate to particular specialised target image classification tasks have been published. We believe it is important for the future progress in the field that all current knowledge is collated and the overarching patterns analysed and discussed. In this survey we formally define deep transfer learning and the problem it attempts to solve in relation to image classification. We survey the current state of the field and identify where recent progress has been made. We show where the gaps in current knowledge are and make suggestions for how to progress the field to fill in these knowledge gaps. We present a new taxonomy of the applications of transfer learning for image classification. This taxonomy makes it easier to see overarching patterns of where transfer learning has been effective and, where it has failed to fulfill its potential. This also allows us to suggest where the problems lie and how it could be used more effectively. We show that under this new taxonomy, many of the applications where transfer learning has been shown to be ineffective or even hinder performance are to be expected when taking into account the source and target datasets and the techniques used.
|
1508.04633
|
Johannes Textor
|
Johannes Textor
|
Drawing and Analyzing Causal DAGs with DAGitty
|
15 pages, 2 figures
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
DAGitty is a software for drawing and analyzing causal diagrams, also known
as directed acyclic graphs (DAGs). Functions include identification of minimal
sufficient adjustment sets for estimating causal effects, diagnosis of
insufficient or invalid adjustment via the identification of biasing paths,
identification of instrumental variables, and derivation of testable
implications. DAGitty is provided in the hope that it is useful for researchers
and students in Epidemiology, Sociology, Psychology, and other empirical
disciplines. The software should run in any web browser that supports modern
JavaScript, HTML, and SVG. This is the user manual for DAGitty version 2.3. The
manual is updated with every release of a new stable version. DAGitty is
available at dagitty.net.
|
[
{
"created": "Wed, 19 Aug 2015 13:11:32 GMT",
"version": "v1"
}
] |
2015-08-20
|
[
[
"Textor",
"Johannes",
""
]
] |
DAGitty is a software for drawing and analyzing causal diagrams, also known as directed acyclic graphs (DAGs). Functions include identification of minimal sufficient adjustment sets for estimating causal effects, diagnosis of insufficient or invalid adjustment via the identification of biasing paths, identification of instrumental variables, and derivation of testable implications. DAGitty is provided in the hope that it is useful for researchers and students in Epidemiology, Sociology, Psychology, and other empirical disciplines. The software should run in any web browser that supports modern JavaScript, HTML, and SVG. This is the user manual for DAGitty version 2.3. The manual is updated with every release of a new stable version. DAGitty is available at dagitty.net.
|
1703.05278
|
Michel Fliess
|
C\'edric Join, Jean Bernier, St\'ephane Mottelet, Michel Fliess,
Sabrina Rechdaoui-Gu\'erin, Sam Azimi, Vincent Rocher
|
A simple and efficient feedback control strategy for wastewater
denitrification
|
IFAC 2017 World Congress, Toulouse, France
| null | null | null |
cs.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to severe mathematical modeling and calibration difficulties open-loop
feedforward control is mainly employed today for wastewater denitrification,
which is a key ecological issue. In order to improve the resulting poor
performances a new model-free control setting and its corresponding
"intelligent" controller are introduced. The pitfall of regulating two output
variables via a single input variable is overcome by introducing also an
open-loop knowledge-based control deduced from the plant behavior. Several
convincing computer simulations are presented and discussed.
|
[
{
"created": "Wed, 15 Mar 2017 17:31:01 GMT",
"version": "v1"
}
] |
2017-03-16
|
[
[
"Join",
"Cédric",
""
],
[
"Bernier",
"Jean",
""
],
[
"Mottelet",
"Stéphane",
""
],
[
"Fliess",
"Michel",
""
],
[
"Rechdaoui-Guérin",
"Sabrina",
""
],
[
"Azimi",
"Sam",
""
],
[
"Rocher",
"Vincent",
""
]
] |
Due to severe mathematical modeling and calibration difficulties open-loop feedforward control is mainly employed today for wastewater denitrification, which is a key ecological issue. In order to improve the resulting poor performances a new model-free control setting and its corresponding "intelligent" controller are introduced. The pitfall of regulating two output variables via a single input variable is overcome by introducing also an open-loop knowledge-based control deduced from the plant behavior. Several convincing computer simulations are presented and discussed.
|
1809.04793
|
Felipe Rosso
|
Felipe Rosso
|
Holography of negative energy states
|
7 pages
|
Phys. Rev. D 99, 026002 (2019)
|
10.1103/PhysRevD.99.026002
| null |
hep-th gr-qc
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantum states with negative energy densities have been long known to exist
in quantum field theories. We explore the structure of such states for
holographic theories using quantum information theory tools and show how
certain negative energy states are naturally captured by the thermodynamics of
black holes with hyperbolic horizon at zero temperature, suggesting that they
provide a dual description of those states. Our results give a satisfying field
theory understanding of the distinct thermodynamics of such black holes.
|
[
{
"created": "Thu, 13 Sep 2018 06:34:00 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Jan 2019 02:49:53 GMT",
"version": "v2"
}
] |
2019-01-09
|
[
[
"Rosso",
"Felipe",
""
]
] |
Quantum states with negative energy densities have been long known to exist in quantum field theories. We explore the structure of such states for holographic theories using quantum information theory tools and show how certain negative energy states are naturally captured by the thermodynamics of black holes with hyperbolic horizon at zero temperature, suggesting that they provide a dual description of those states. Our results give a satisfying field theory understanding of the distinct thermodynamics of such black holes.
|
1010.1502
|
Dimitrios Giataganas
|
Dimitrios Giataganas
|
Semiclassical strings in marginally deformed toric AdS/CFT
|
29 pages, 9 figures
| null |
10.1007/JHEP12(2011)051
|
WITS-CTP-057
|
hep-th
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study string solutions in the beta-deformed Sasaki-Einstein gauge/gravity
dualities. We find that the BPS point-like strings move in the submanifolds
where the two U(1) circles shrink to zero size. In the corresponding T^3
fibration description, the strings live on the edges of the polyhedron, where
the T^3 fibration degenerates to T^1. Moreover, we find that for each deformed
Sasaki-Einstein manifold the BPS string solutions exist only for particular
values of the deformation parameter. Our results imply that in the dual field
theory the corresponding BPS operators exist only for these particular values
of the deformation parameter we find. We also examine the non-BPS strings,
derive their dispersion relations and compare them with the undeformed ones.
Finally, we comment on the range of the validity of our solutions and their
dependence on the deformation parameter.
|
[
{
"created": "Thu, 7 Oct 2010 18:09:56 GMT",
"version": "v1"
}
] |
2015-05-20
|
[
[
"Giataganas",
"Dimitrios",
""
]
] |
We study string solutions in the beta-deformed Sasaki-Einstein gauge/gravity dualities. We find that the BPS point-like strings move in the submanifolds where the two U(1) circles shrink to zero size. In the corresponding T^3 fibration description, the strings live on the edges of the polyhedron, where the T^3 fibration degenerates to T^1. Moreover, we find that for each deformed Sasaki-Einstein manifold the BPS string solutions exist only for particular values of the deformation parameter. Our results imply that in the dual field theory the corresponding BPS operators exist only for these particular values of the deformation parameter we find. We also examine the non-BPS strings, derive their dispersion relations and compare them with the undeformed ones. Finally, we comment on the range of the validity of our solutions and their dependence on the deformation parameter.
|
1310.0736
|
Liane Gabora
|
Liane Gabora
|
Toward a Theory of Creative Inklings
|
arXiv admin note: substantial text overlap with arXiv:1309.7414
|
In R. Ascott (Ed.), Art, technology, consciousness (pp. 159-164).
Bristol UK: Intellect Press (2000)
| null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is perhaps not so baffling that we have the ability to develop, refine,
and manifest a creative idea, once it has been conceived. But what sort of a
system could spawn the initial seed of creativity from which an idea grows?
This paper looks at how the mind is structured in such a way that we can
experience a glimmer of insight or inkling of artistic inspiration.
|
[
{
"created": "Sat, 28 Sep 2013 03:27:25 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jul 2019 20:26:52 GMT",
"version": "v2"
}
] |
2019-07-09
|
[
[
"Gabora",
"Liane",
""
]
] |
It is perhaps not so baffling that we have the ability to develop, refine, and manifest a creative idea, once it has been conceived. But what sort of a system could spawn the initial seed of creativity from which an idea grows? This paper looks at how the mind is structured in such a way that we can experience a glimmer of insight or inkling of artistic inspiration.
|
2011.09396
|
John Rizos
|
Ignatios Antoniadis, Dimitri V. Nanopoulos, John Rizos
|
Cosmology of the string derived flipped $SU(5)$
|
29 pages, no figures, minor changes, typos corrected
|
JCAP03(2021)017
|
10.1088/1475-7516/2021/03/017
|
ACT-7-20, MI-TH-2030
|
hep-th astro-ph.CO gr-qc hep-ph
|
http://creativecommons.org/licenses/by/4.0/
|
We study the cosmology of a string derived supersymmetric flipped $SU(5)$
model in the context of free-fermionic heterotic constructions that allow full
calculability of the effective supergravity in perturbation theory around the
fermionic vacuum where all string moduli have fixed values. The model has 3
generations of chiral families and a Higgs sector leading to particle
phenomenology consistent with low energy data, that has been extensively
studied in the past. Here, we show that it can also accommodate a novel
successful cosmology, based on the no-scale effective supergravity derived from
string theory as well as an appropriate induced superpotential suppressed by
five powers of the string scale. It utilises two gauge singlet chiral
superfields present in the low energy spectrum: the inflaton $y$, identified as
the superpartner of a state mixed with R-handed neutrinos, and the goldstino
$z$ with a superpotential of the form $W_I=M_I z(y-\lambda y^2)$ (in
supergravity units) where $\lambda$ is a dimensionless ${\cal O}\left(1\right)$
parameter and $M_I$ the mass scale of inflation generated at 5th order by the
breaking of an anomalous $U(1)_A$ gauge symmetry, characteristic of heterotic
string chiral vacua. The resulting scalar potential leads to Starobinsky type
inflation. Our results can be easily generalised to a large class of models
with similar properties.
|
[
{
"created": "Wed, 18 Nov 2020 16:55:44 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Mar 2021 20:46:06 GMT",
"version": "v2"
}
] |
2021-03-12
|
[
[
"Antoniadis",
"Ignatios",
""
],
[
"Nanopoulos",
"Dimitri V.",
""
],
[
"Rizos",
"John",
""
]
] |
We study the cosmology of a string derived supersymmetric flipped $SU(5)$ model in the context of free-fermionic heterotic constructions that allow full calculability of the effective supergravity in perturbation theory around the fermionic vacuum where all string moduli have fixed values. The model has 3 generations of chiral families and a Higgs sector leading to particle phenomenology consistent with low energy data, that has been extensively studied in the past. Here, we show that it can also accommodate a novel successful cosmology, based on the no-scale effective supergravity derived from string theory as well as an appropriate induced superpotential suppressed by five powers of the string scale. It utilises two gauge singlet chiral superfields present in the low energy spectrum: the inflaton $y$, identified as the superpartner of a state mixed with R-handed neutrinos, and the goldstino $z$ with a superpotential of the form $W_I=M_I z(y-\lambda y^2)$ (in supergravity units) where $\lambda$ is a dimensionless ${\cal O}\left(1\right)$ parameter and $M_I$ the mass scale of inflation generated at 5th order by the breaking of an anomalous $U(1)_A$ gauge symmetry, characteristic of heterotic string chiral vacua. The resulting scalar potential leads to Starobinsky type inflation. Our results can be easily generalised to a large class of models with similar properties.
|
2204.10040
|
Niclas Boehmer
|
Niclas Boehmer and Klaus Heeger
|
Adapting Stable Matchings to Forced and Forbidden Pairs
| null | null | null | null |
cs.GT cs.DM cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the problem of adapting a stable matching to forced and
forbidden pairs. Specifically, given a stable matching $M_1$, a set $Q$ of
forced pairs, and a set $P$ of forbidden pairs, we want to find a stable
matching that includes all pairs from $Q$, no pair from $P$, and that is as
close as possible to $M_1$. We study this problem in four classical stable
matching settings: Stable Roommates (with Ties) and Stable Marriage (with
Ties). As our main contribution, we employ the theory of rotations for Stable
Roommates to develop a polynomial-time algorithm for adapting Stable Roommates
matchings to forced pairs. In contrast to this, we show that the same problem
for forbidden pairs is NP-hard. However, our polynomial-time algorithm for the
case of only forced pairs can be extended to a fixed-parameter tractable
algorithm with respect to the number of forbidden pairs when both forced and
forbidden pairs are present. Moreover, we also study the setting where
preferences contain ties. Here, depending on the chosen stability criterion, we
show either that our algorithmic results can be extended or that formerly
tractable problems become intractable.
|
[
{
"created": "Thu, 21 Apr 2022 11:50:12 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Nov 2022 16:04:01 GMT",
"version": "v2"
}
] |
2022-11-23
|
[
[
"Boehmer",
"Niclas",
""
],
[
"Heeger",
"Klaus",
""
]
] |
We introduce the problem of adapting a stable matching to forced and forbidden pairs. Specifically, given a stable matching $M_1$, a set $Q$ of forced pairs, and a set $P$ of forbidden pairs, we want to find a stable matching that includes all pairs from $Q$, no pair from $P$, and that is as close as possible to $M_1$. We study this problem in four classical stable matching settings: Stable Roommates (with Ties) and Stable Marriage (with Ties). As our main contribution, we employ the theory of rotations for Stable Roommates to develop a polynomial-time algorithm for adapting Stable Roommates matchings to forced pairs. In contrast to this, we show that the same problem for forbidden pairs is NP-hard. However, our polynomial-time algorithm for the case of only forced pairs can be extended to a fixed-parameter tractable algorithm with respect to the number of forbidden pairs when both forced and forbidden pairs are present. Moreover, we also study the setting where preferences contain ties. Here, depending on the chosen stability criterion, we show either that our algorithmic results can be extended or that formerly tractable problems become intractable.
|
hep-th/0412289
|
Hugo Montani
|
A. Cabrera and H. Montani
|
Hamiltonian Loop Group Actions and T-Duality for group manifolds
|
34 pages
|
J.Geom.Phys. 56 (2006) 1116-1143
|
10.1016/j.geomphys.2005.06.006
| null |
hep-th
| null |
We carry out a Hamiltonian analysis of Poisson-Lie T-duality based on the
loop geometry of the underlying phase spaces of the dual sigma and WZW models.
Duality is fully characterized by the existence of equivariant momentum maps on
the phase spaces such that the reduced phase space of the WZW model and a pure
central extension coadjoint orbit work as a bridge linking both the sigma
models. These momentum maps are associated to Hamiltonian actions of the loop
group of the Drinfeld double on both spaces and the duality transformations are
explicitly constructed in terms of these actions. Compatible dynamics arise in
a general collective form and the resulting Hamiltonian description encodes all
known aspects of this duality and its generalizations.
|
[
{
"created": "Thu, 23 Dec 2004 18:26:21 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Mar 2005 12:42:01 GMT",
"version": "v2"
}
] |
2015-06-26
|
[
[
"Cabrera",
"A.",
""
],
[
"Montani",
"H.",
""
]
] |
We carry out a Hamiltonian analysis of Poisson-Lie T-duality based on the loop geometry of the underlying phase spaces of the dual sigma and WZW models. Duality is fully characterized by the existence of equivariant momentum maps on the phase spaces such that the reduced phase space of the WZW model and a pure central extension coadjoint orbit work as a bridge linking both the sigma models. These momentum maps are associated to Hamiltonian actions of the loop group of the Drinfeld double on both spaces and the duality transformations are explicitly constructed in terms of these actions. Compatible dynamics arise in a general collective form and the resulting Hamiltonian description encodes all known aspects of this duality and its generalizations.
|
2311.13339
|
Michael Shapiro
|
Anna Laddach, Michael Shapiro
|
Non-deterministic linear thresholding systems reveal their deterministic
origins
|
4 pages
| null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Linear thresholding systems have been used as a model of neural activation
and have more recently been proposed as a model of gene activation.
Deterministic linear thresholding systems can be turned into non-deterministic
systems by the introduction of noise. Under mild conditions on the noise, we
show that the deterministic model can be deduced from the probabilities of the
non-deterministic model.
|
[
{
"created": "Wed, 22 Nov 2023 12:03:42 GMT",
"version": "v1"
}
] |
2023-11-23
|
[
[
"Laddach",
"Anna",
""
],
[
"Shapiro",
"Michael",
""
]
] |
Linear thresholding systems have been used as a model of neural activation and have more recently been proposed as a model of gene activation. Deterministic linear thresholding systems can be turned into non-deterministic systems by the introduction of noise. Under mild conditions on the noise, we show that the deterministic model can be deduced from the probabilities of the non-deterministic model.
|
1706.05826
|
Di Wang
|
Di Wang, Kimon Fountoulakis, Monika Henzinger, Michael W. Mahoney,
Satish Rao
|
Capacity Releasing Diffusion for Speed and Locality
|
Appeared in ICML 2017. Current version added reference and discussion
of work on generalized Cheeger's inequalities
| null | null | null |
cs.DS cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diffusions and related random walk procedures are of central importance in
many areas of machine learning, data analysis, and applied mathematics. Because
they spread mass agnostically at each step in an iterative manner, they can
sometimes spread mass "too aggressively," thereby failing to find the "right"
clusters. We introduce a novel Capacity Releasing Diffusion (CRD) Process,
which is both faster and stays more local than the classical spectral diffusion
process. As an application, we use our CRD Process to develop an improved local
algorithm for graph clustering. Our local graph clustering method can find
local clusters in a model of clustering where one begins the CRD Process in a
cluster whose vertices are connected better internally than externally by an
$O(\log^2 n)$ factor, where $n$ is the number of nodes in the cluster. Thus,
our CRD Process is the first local graph clustering algorithm that is not
subject to the well-known quadratic Cheeger barrier. Our result requires a
certain smoothness condition, which we expect to be an artifact of our
analysis. Our empirical evaluation demonstrates improved results, in particular
for realistic social graphs where there are moderately good---but not very
good---clusters.
|
[
{
"created": "Mon, 19 Jun 2017 08:18:04 GMT",
"version": "v1"
},
{
"created": "Sun, 10 Jun 2018 08:58:07 GMT",
"version": "v2"
}
] |
2018-06-12
|
[
[
"Wang",
"Di",
""
],
[
"Fountoulakis",
"Kimon",
""
],
[
"Henzinger",
"Monika",
""
],
[
"Mahoney",
"Michael W.",
""
],
[
"Rao",
"Satish",
""
]
] |
Diffusions and related random walk procedures are of central importance in many areas of machine learning, data analysis, and applied mathematics. Because they spread mass agnostically at each step in an iterative manner, they can sometimes spread mass "too aggressively," thereby failing to find the "right" clusters. We introduce a novel Capacity Releasing Diffusion (CRD) Process, which is both faster and stays more local than the classical spectral diffusion process. As an application, we use our CRD Process to develop an improved local algorithm for graph clustering. Our local graph clustering method can find local clusters in a model of clustering where one begins the CRD Process in a cluster whose vertices are connected better internally than externally by an $O(\log^2 n)$ factor, where $n$ is the number of nodes in the cluster. Thus, our CRD Process is the first local graph clustering algorithm that is not subject to the well-known quadratic Cheeger barrier. Our result requires a certain smoothness condition, which we expect to be an artifact of our analysis. Our empirical evaluation demonstrates improved results, in particular for realistic social graphs where there are moderately good---but not very good---clusters.
|