We revisit the foundational Moment Formula proved by Roger Lee fifteen years
ago. We show that when the underlying stock price martingale admits finite
log-moments E[|log(S)|^q] for some positive q, the arbitrage-free growth in the
left wing of the implied volatility smile is less constrained than Lee's bound.
The result is rationalised by a market trading discretely monitored variance
swaps wherein the payoff is a function of squared log-returns, and requires no
assumption that the underlying martingale admits any negative moment. In this
respect, the result can be derived from a model-independent setup. As a byproduct,
we relax the moment assumptions on the stock price to provide a new proof of
the notorious Gatheral-Fukasawa formula expressing variance swaps in terms of
the implied volatility.
|
We present a combined angle-resolved photoemission spectroscopy and
low-energy electron diffraction (LEED) study of the prominent transition metal
dichalcogenide IrTe$_2$ upon potassium (K) deposition on its surface. Pristine
IrTe$_2$ undergoes a series of charge-ordered phase transitions below room
temperature that are characterized by the formation of stripes of Ir dimers of
different periodicities. Supported by density functional theory calculations,
we first show that the K atoms dope the topmost IrTe$_2$ layer with electrons,
thereby strongly decreasing the work function and shifting only the
electronic surface states towards higher binding energy. We then follow the
evolution of its electronic structure as a function of temperature across the
charge-ordered phase transitions and observe that their critical temperatures
are unchanged for K coverages of $0.13$ and $0.21$~monolayer (ML). Using LEED,
we also confirm that the periodicity of the related stripe phases is unaffected
by the K doping. We surmise that the charge-ordered phase transitions of
IrTe$_2$ are robust against electron surface doping, because of its metallic
nature at all temperatures, and due to the importance of structural effects in
stabilizing charge order in IrTe$_2$.
|
We theoretically show that two distinctive spin textures manifest themselves
around saddle points of energy bands in a monolayer NbSe$_2$ under external
gate potentials. While the density of states diverges logarithmically at all
saddle points, those at the zone boundaries display a windmill-shaped spin
texture, whereas the others show unidirectional spin orientations. The disparate
spin-resolved states are demonstrated to contribute significantly to an
intrinsic spin Hall conductivity, while their characteristics differ from each
other. Based on a minimal but essential tight-binding approximation reproducing
first-principles computation results, we establish distinct effective Rashba
Hamiltonians for each saddle point, realizing the unique spin textures
depending on their momentum. Energetic positions of the saddle points in a
single layer NbSe$_2$ are shown to be well controlled by a gate potential so
that it could be a prototypical system to test the competition between various
collective phenomena triggered by the diverging density of states and their
spin textures in low dimensions.
|
The time of the first occurrence of a threshold crossing event in a
stochastic process, known as the first passage time, is of interest in many
areas of sciences and engineering. Conventionally, there is an implicit
assumption that the notional 'sensor' monitoring the threshold crossing event
is always active. In many realistic scenarios, the sensor monitoring the
stochastic process works intermittently. Then, the relevant quantity of
interest is the $\textit{first detection time}$, which denotes the time when
the sensor detects the threshold crossing event for the first time. In this
work, a birth-death process monitored by a random intermittent sensor is
studied, for which the first detection time distribution is obtained. In
general, it is shown that the first detection time is related to, and is
obtainable from, the first passage time distribution. Our analytical results
display an excellent agreement with simulations. Further, this framework is
demonstrated in several applications -- the SIS compartmental and logistic
models, and birth-death processes with resetting. Finally, we solve the
practically relevant problem of inferring the first passage time distribution
from the first detection time.
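The distinction between first passage and first detection can be illustrated with a short simulation. The sketch below assumes a discrete-time birth-death chain with illustrative rates and a sensor that is active at each step independently with probability `p_obs`; all parameter values are hypothetical, not taken from the paper.

```python
import random

def first_passage_and_detection(birth=0.6, death=0.4, threshold=5,
                                p_obs=0.3, seed=None, max_steps=10000):
    """Simulate a discrete-time birth-death chain started at 0.
    Returns (first_passage_time, first_detection_time): the first time the
    chain reaches `threshold`, and the first time the intermittent sensor
    (active each step with probability p_obs) samples while the chain is at
    or above `threshold`. Detection never precedes passage."""
    rng = random.Random(seed)
    x, t_passage = 0, None
    for t in range(1, max_steps + 1):
        u = rng.random()
        if u < birth:
            x += 1
        elif u < birth + death and x > 0:
            x -= 1
        if x >= threshold:
            if t_passage is None:
                t_passage = t
            if rng.random() < p_obs:  # sensor happens to be active now
                return t_passage, t
    return t_passage, None
```

Averaging the gap between the two times over many runs gives a feel for how the detection-time distribution shifts relative to the passage-time distribution as `p_obs` shrinks.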
|
Millimeter-wave (mmWave) and sub-Terahertz (THz) frequencies are expected to
play a vital role in 6G wireless systems and beyond due to the vast available
bandwidth of many tens of GHz. This paper presents an indoor 3-D spatial
statistical channel model for mmWave and sub-THz frequencies based on extensive
radio propagation measurements at 28 and 140 GHz conducted in an indoor office
environment from 2014 to 2020. Omnidirectional and directional path loss models
and channel statistics such as the number of time clusters, cluster delays, and
cluster powers were derived from over 15,000 measured power delay profiles. The
resulting channel statistics show that the number of time clusters follows a
Poisson distribution and the number of subpaths within each cluster follows a
composite exponential distribution for both LOS and NLOS environments at 28 and
140 GHz. This paper proposes a unified indoor statistical channel model for
mmWave and sub-Terahertz frequencies following the mathematical framework of
the previous outdoor NYUSIM channel models. A corresponding indoor channel
simulator is developed, which can recreate 3-D omnidirectional, directional,
and multiple input multiple output (MIMO) channels for arbitrary mmWave and
sub-THz carrier frequency up to 150 GHz, signal bandwidth, and antenna
beamwidth. The presented statistical channel model and simulator will guide
future air-interface, beamforming, and transceiver designs for 6G and beyond.
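The cluster statistics described above can be sketched as a toy sampler: cluster counts drawn from a Poisson distribution (via Knuth's method, since the standard library has no Poisson sampler) and per-cluster subpath counts from a discretized exponential. The parameter values are illustrative placeholders, not NYUSIM's fitted values.

```python
import math, random

def sample_num_clusters(lam, rng):
    """Knuth's method: number of time clusters ~ Poisson(lam), floored at 1."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return max(k, 1)
        k += 1

def sample_channel(lam_clusters=3.6, beta_subpaths=5.0, seed=0):
    """Draw one toy channel realization: a cluster count and a subpath count
    per cluster (illustrative parameters, not the paper's fits)."""
    rng = random.Random(seed)
    n_clusters = sample_num_clusters(lam_clusters, rng)
    subpaths = [1 + int(rng.expovariate(1.0 / beta_subpaths))
                for _ in range(n_clusters)]
    return n_clusters, subpaths
```

Drawing many realizations and comparing the empirical cluster-count histogram against the assumed Poisson mean is the basic consistency check such a simulator supports.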
|
We aim at measuring the influence of the nondeterministic choices of a part
of a system on its ability to satisfy a specification. For this purpose, we
apply the concept of Shapley values to verification as a means to evaluate how
important a part of a system is. The importance of a component is measured by
giving its control to an adversary, alone or along with other components, and
testing whether the system can still fulfill the specification. We study this
idea in the framework of model-checking with various classical types of
linear-time specification, and propose several ways to transpose it to
branching ones. We also provide tight complexity bounds in almost every case.
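The Shapley-value idea can be made concrete on a tiny toy system. In the sketch below, the characteristic function `v` is hypothetical: the adversary breaks the specification by controlling component 'a' alone, or 'b' and 'c' together; the Shapley value of each component then measures its importance.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal contribution to
    v over all orderings. v maps a frozenset of players to a payoff."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_orders = factorial(len(players))
    return {p: contrib / n_orders for p, contrib in phi.items()}

# Hypothetical system: the spec fails iff the adversary controls 'a',
# or 'b' and 'c' together.
def v(S):
    return 1 if 'a' in S or {'b', 'c'} <= S else 0
```

For this `v`, component 'a' receives value 2/3 while 'b' and 'c' receive 1/6 each, and the three values sum to the payoff of the grand coalition, as the Shapley axioms require.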
|
The associations between emergent physical phenomena (e.g.,
superconductivity) and orbital, charge, and spin degrees of freedom of $3d$
electrons are intriguing in transition metal compounds. Here, we successfully
manipulate the superconductivity of spinel oxide Li$_{1\pm
x}$Ti$_2$O$_{4-\delta}$ (LTO) by ionic liquid gating. A dome-shaped
superconducting phase diagram is established, in which two insulating phases
are disclosed in both the heavily electron-doped and hole-doped regions. The
superconductor-insulator transition (SIT) in the hole-doping region can be
attributed to the loss of Ti valence electrons. In the electron-doping region,
LTO exhibits an unexpected SIT instead of a metallic behavior despite an
increase in carrier density. Furthermore, a thermal hysteresis is observed in
the normal state resistance curve, suggesting a first-order phase transition.
We speculate that the SIT and the thermal hysteresis stem from the enhanced
$3d$ electron correlations and the formation of orbital ordering by comparing
the transport and structural results of LTO with the other spinel oxide
superconductor MgTi$_2$O$_4$, as well as analysing the electronic structure by
first-principles calculations. Further comprehension of the detailed interplay
between superconductivity and orbital ordering would contribute to revealing
the unconventional superconducting pairing mechanism.
|
This paper explores the options available to the anti-realist to defend a
Quinean empirical under-determination thesis using examples of dualities. I
first explicate a version of the empirical under-determination thesis that can
be brought to bear on theories of contemporary physics. Then I identify a class
of examples of dualities that lead to empirical under-determination. But I
argue that the resulting under-determination is benign, and is not a threat to
a cautious scientific realism. Thus dualities are not new ammunition for the
anti-realist. The paper also shows how the number of possible interpretative
options about dualities that have been considered in the literature can be
reduced, and suggests a general approach to scientific realism that one may
take dualities to favour.
|
Two-dimensional (2D) magnets have broad application prospects in
spintronics, but how to control them effectively with a small electric field is
still an issue. Here we propose that 2D magnets can be efficiently controlled
in a multiferroic heterostructure composed of 2D magnetic material and
perovskite oxide ferroelectric (POF) whose dielectric polarization is easily
flipped under a small electric field. We illustrate the feasibility of such a
strategy in the bilayer CrI3/BiFeO3(001) heterostructure using
first-principles calculations. In contrast to traditional POF multiferroic
heterostructures which have strong interface interactions, we find that the
interface interaction between CrI3 and BiFeO3(001) is van der Waals type.
Nevertheless, the heterostructure exhibits particularly strong magnetoelectric
coupling: the bilayer CrI3 can be efficiently switched between ferromagnetic and
antiferromagnetic orderings by the polarized states of BiFeO3(001). We also
discover the competing effect between electron doping and the additional
electric field on the interlayer exchange coupling interaction of CrI3, which
is responsible for the magnetic phase transition. Our results provide a new
avenue for the tuning of 2D magnets with a small electric field.
|
Previous studies have shown that the ground state of systems of nucleons
composed of an equal number of protons and neutrons interacting via
proton-neutron pairing forces can be described accurately by a condensate of
$\alpha$-like quartets. Here we extend these studies to the low-lying excited
states of these systems and show that these states can be accurately described
by breaking a quartet from the ground state condensate and replacing it with an
"excited" quartet. This approach, which is analogous to the one-broken-pair
approximation employed for like-particle pairing, is analysed for various
isovector and isovector-isoscalar pairing forces.
|
This paper focuses on regularisation methods using models up to the third
order to search for up to second-order critical points of a finite-sum
minimisation problem. The variant presented belongs to the framework of [3]: it
employs random models with accuracy guaranteed with a sufficiently large
prefixed probability and deterministic inexact function evaluations within a
prescribed level of accuracy. Without assuming unbiased estimators, the
expected number of iterations is $\mathcal{O}\bigl(\epsilon_1^{-2}\bigr)$ or
$\mathcal{O}\bigl(\epsilon_1^{-{3/2}}\bigr)$ when searching for a first-order
critical point using a second or third order model, respectively, and of
$\mathcal{O}\bigl(\max[\epsilon_1^{-{3/2}},\epsilon_2^{-3}]\bigr)$ when seeking
second-order critical points with a third-order model, in which
$\epsilon_j$, $j\in\{1,2\}$, is the $j$th-order tolerance. These results match
the worst-case optimal complexity for the deterministic counterpart of the
method. Preliminary numerical tests for first-order optimality in the context
of nonconvex binary classification in imaging, with and without Artificial
Neural Networks (ANNs), are presented and discussed.
|
Organic-inorganic metal halide perovskites have recently attracted increasing
attention as highly efficient light harvesting materials for photovoltaic
applications. However, the precise control of crystallization and morphology of
organometallic perovskites deposited from solution, considered crucial for
enhancing the final photovoltaic performance, remains challenging. In this
context, here, we report on growing microcrystalline deposits of CH3NH3PbI3
(MAPbI3) by one-step solution casting on cylinder-shaped quartz substrates
(rods). We show that the substrate curvature has a strong influence on the
morphology of the obtained polycrystalline deposits of MAPbI3. Although the
crystalline width and length markedly decreased for substrates with higher
curvatures, the photoluminescence (PL) spectral peak positions did not
significantly evolve for MAPbI3 deposits on substrates with different
diameters. The crystalline size reduction and denser coverage of
microcrystalline MAPbI3 deposits on cylinder-shaped substrates with higher
curvatures were attributed to two major contributions, both related to the
annealing step of the MAPbI3 deposits. In particular, the diameter-dependent
variability of the heat capacities and the substrate curvature-enhanced solvent
evaporation rate seemed to contribute the most to the crystallization process
and the resulting morphology changes of MAPbI3 deposits on cylinder-shaped
quartz substrates with various diameters. The longitudinal geometry of
cylinder-shaped substrates also provided a facile solution for checking the PL
response of the deposits of MAPbI3 exposed to the flow of various gaseous
media, such as oxygen, nitrogen and argon. Overall, the approach reported
herein inspires novel, cylinder-shaped geometries of MAPbI3 deposits, which can
find applications in low-cost photo-optical devices, including gas sensors.
|
This paper proposes a deep learning framework for classification of BBC
television programmes using audio. The audio is firstly transformed into
spectrograms, which are fed into a pre-trained convolutional neural network
(CNN), obtaining predicted probabilities of sound events occurring in the audio
recording. Statistics for the predicted probabilities and detected sound events
are then calculated to extract discriminative features representing the
television programmes. Finally, the embedded features extracted are fed into a
classifier for classifying the programmes into different genres. Our
experiments are conducted over a dataset of 6,160 programmes belonging to nine
genres labelled by the BBC. We achieve an average classification accuracy of
93.7% over 14-fold cross validation. This demonstrates the efficacy of the
proposed framework for the task of audio-based classification of television
programmes.
|
We prove that two Enriques surfaces defined over an algebraically closed
field of characteristic different from $2$ are isomorphic if their Kuznetsov
components are equivalent. This improves and completes our previous joint
result with Nuer, where the same statement is proved for generic Enriques surfaces.
|
Let $\{(A_i,B_i)\}_{i=1}^m$ be a set pair system. F\"{u}redi, Gy\'{a}rf\'{a}s
and Kir\'{a}ly called it {\em $1$-cross intersecting} if $|A_i\cap B_j|$ is $1$
when $i\neq j$ and $0$ if $i=j$. They studied such systems and their
generalizations, and in particular considered $m(a,b,1)$ -- the maximum size of
a $1$-cross intersecting set pair system in which $|A_i|\leq a$ and $|B_i|\leq
b$ for all $i$. F\"{u}redi, Gy\'{a}rf\'{a}s and Kir\'{a}ly proved that
$m(n,n,1)\geq 5^{(n-1)/2}$ and asked whether there are upper bounds on
$m(n,n,1)$ significantly better than the classical bound ${2n\choose n}$ of
Bollob\'as for cross intersecting set pair systems.
Answering one of their questions, Holzman recently proved that if $a,b\geq
2$, then $m(a,b,1)\leq \frac{29}{30}\binom{a+b}{a}$. He also conjectured that
the factor $\frac{29}{30}$ in his bound can be replaced by $\frac{5}{6}$. The
goal of this paper is to prove this conjectured bound.
|
The structure and stability of ternary systems prepared with polysorbate 60
and various combinations of cetyl (C16) and stearyl (C18) alcohols (fatty
alcohol 16g, polysorbate 4g, water 180g) were examined as they aged over 3
months at 25$^{\circ}$C. Rheological results showed that the consistency of these
systems increased initially during roughly the first week of aging, which was
succeeded by little changes in consistency (systems containing from 30% to 70%
C18, with the 50% C18 system showing the highest consistencies in viscosity and
elasticity) or significant breakdown of structure (remaining systems). The
formation and/or disintegration of all ternary systems were also detected by
microscopy and differential scanning calorimetry experiments. This study
emphasizes the fact that the structure and consistency of ternary systems are
dominantly controlled by the swelling capacity of the lamellar
$\alpha$-crystalline gel phase. When the conversion of this gel phase into
non-swollen $\beta$- or $\gamma$-crystals occurs, systems change from
semisolids to fluids. Molecular dynamics simulations were performed to provide
important details on the molecular mechanism of our ternary systems.
Computational results supported the hypothesis experimentally proposed for the
stability of the mixed system being due to an increase in the flexibility,
hence an increase in the configurational entropy of the chain tip of the
alcohol with a longer hydrocarbon chain (with the highest flexibility observed
in the 50:50 C18:C16 system). This finding is in excellent agreement with
experimental conclusions. Additionally, simulation data show that in the mixed
system, the alcohol with the shorter hydrocarbon chain becomes more rigid.
Such molecular details are not accessible in experimental measurements.
|
The slow revolution of the Earth and Moon around their barycenter does not
induce Coriolis accelerations. On the other hand, the motion of the Sun and
Earth is a rotation with Coriolis forces, which appear not to have been
calculated yet, nor have the inertial accelerations within the system of motion of all
three celestial bodies. It is the purpose of this contribution to evaluate the
related Coriolis and centrifugal terms and to compare them to the available
atmospheric standard terms. It is a main result that the revolution is of
central importance in the combined dynamics of Earth, Moon and Sun. Covariant
flow equations are well known tools for dealing with such complicated flow
settings. They are used here to quantify the effects of the Earth's revolution
around the Earth-Moon barycenter and its rotation around the Sun on the
atmospheric circulation. It is found that the motion around the Sun adds time
dependent terms to the standard Coriolis forces. The related centrifugal
accelerations are presented. A major part of these accelerations is balanced by
the gravitational attraction by Moon and Sun, but important unbalanced
contributions remain. New light on the consequences of the Earth's revolution
is shed by repeating the calculations for a rotating Earth-Moon pair. It is
found that the revolution complicates the atmospheric dynamics.
|
It is no secret amongst deep learning researchers that finding the optimal
data augmentation strategy during training can mean the difference between
state-of-the-art performance and a run-of-the-mill result. To that end, the
community has seen many efforts to automate the process of finding the perfect
augmentation procedure for any task at hand. Unfortunately, even recent
cutting-edge methods bring massive computational overhead, requiring as many as
100 full model trainings to settle on an ideal configuration. We show how to
achieve equivalent performance in just 6 trainings with Random Unidimensional
Augmentation. Source code is available at https://github.com/fastestimator/RUA
|
By processing in the frequency domain (FD), massive MIMO systems can approach
the theoretical per-user capacity using a single carrier modulation (SCM)
waveform with a cyclic prefix. Minimum mean squared error (MMSE) detection and
zero forcing (ZF) precoding have been shown to effectively cancel multi-user
interference while compensating for inter-symbol interference. In this paper,
we present a modified downlink precoding approach in the FD based on
regularized zero forcing (RZF), which reuses the matrix inverses calculated as
part of the FD MMSE uplink detection. By reusing these calculations, the
computational complexity of the RZF precoder is drastically lowered, compared
to the ZF precoder. Introduction of the regularization in RZF leads to a bias
in the detected data symbols at the user terminals. We show this bias can be
removed by incorporating a scaling factor at the receiver. Furthermore, it is
noted that user powers have to be optimized to strike a balance between noise
and interference seen at each user terminal. The resulting performance of the
RZF precoder exceeds that of the ZF precoder for low and moderate input
signal-to-noise ratio (SNR) conditions, and performance is equal for high input
SNR. These results are established and confirmed by analysis and simulation.
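The structure of the RZF precoder can be sketched for the two-user case: $W = H^H (H H^H + \alpha I)^{-1}$, with a hand-rolled 2x2 inverse. This is a minimal pure-Python illustration with a hypothetical channel matrix, not the paper's full FD-MIMO implementation; note how the regularization scales the effective channel gain below one, which is the bias the paper removes with a receiver-side scaling factor.

```python
def rzf_precoder(H, alpha):
    """Regularized zero-forcing for 2 users: W = H^H (H H^H + alpha I)^{-1}.
    H is a 2 x M list of complex rows (one row per user)."""
    M = len(H[0])
    def inner(u, v):  # <u, v> = sum_i u_i * conj(v_i)
        return sum(a * b.conjugate() for a, b in zip(u, v))
    # Gram matrix A = H H^H + alpha I  (2x2)
    A = [[inner(H[i], H[j]) + (alpha if i == j else 0) for j in range(2)]
         for i in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ainv = [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]
    # W[m][k] = sum_j conj(H[j][m]) * Ainv[j][k]  (an M x 2 matrix)
    return [[sum(H[j][m].conjugate() * Ainv[j][k] for j in range(2))
             for k in range(2)] for m in range(M)]
```

With orthogonal user channels and `alpha = 0.1`, the effective channel `H @ W` is diagonal with entries 1/1.1: interference is cancelled but the desired symbol is scaled down, illustrating the bias.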
|
We analyze the orthogonal greedy algorithm when applied to dictionaries
$\mathbb{D}$ whose convex hull has small entropy. We show that if the metric
entropy of the convex hull of $\mathbb{D}$ decays at a rate of
$O(n^{-\frac{1}{2}-\alpha})$ for $\alpha > 0$, then the orthogonal greedy
algorithm converges at the same rate on the variation space of $\mathbb{D}$.
This improves upon the well-known $O(n^{-\frac{1}{2}})$ convergence rate of the
orthogonal greedy algorithm in many cases, most notably for dictionaries
corresponding to shallow neural networks. These results hold under no
additional assumptions on the dictionary beyond the decay rate of the entropy
of its convex hull. In addition, they are robust to noise in the target
function and can be extended to convergence rates on the interpolation spaces
of the variation norm. Finally, we show that these improved rates are sharp and
prove a negative result showing that the iterates generated by the orthogonal
greedy algorithm cannot in general be bounded in the variation norm of
$\mathbb{D}$.
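A minimal pure-Python sketch of the orthogonal greedy algorithm on finite-dimensional vectors may help fix ideas (the paper works in a general Hilbert space; the dictionaries in the usage example are illustrative): at each step, select the atom most correlated with the residual, then orthogonally project the target onto the span of all selected atoms, here via incremental Gram-Schmidt.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orthogonal_greedy(f, dictionary, n_iter):
    """Orthogonal greedy algorithm: returns the residual of f after n_iter
    greedy selections from `dictionary` (a list of vectors)."""
    basis, residual = [], list(f)
    for _ in range(n_iter):
        g = max(dictionary, key=lambda d: abs(dot(residual, d)))
        # Orthogonalize the chosen atom against the current basis.
        q = list(g)
        for b in basis:
            c = dot(q, b)
            q = [qi - c * bi for qi, bi in zip(q, b)]
        norm = dot(q, q) ** 0.5
        if norm < 1e-12:  # atom already in the span; nothing new to add
            break
        q = [qi / norm for qi in q]
        basis.append(q)
        # residual is already orthogonal to earlier basis vectors, so
        # subtracting its component along q completes the projection.
        c = dot(residual, q)
        residual = [ri - c * qi for ri, qi in zip(residual, q)]
    return residual
```

The entropy-based rates in the abstract concern how fast `dot(residual, residual)` decays with `n_iter` when `f` lies in the variation space of the dictionary.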
|
Network flows are one of the most studied combinatorial optimization problems
with innumerable applications. Any flow on a directed acyclic graph (DAG) $G$
having $n$ vertices and $m$ edges can be decomposed into a set of $O(m)$ paths,
with applications from network routing to assembly of biological sequences. In
some applications, the flow decomposition corresponds to some particular data
that need to be reconstructed from the flow, which requires finding paths (or
subpaths) appearing in all possible flow decompositions, referred to as safe
paths.
Recently, Ma et al. [WABI 2020] addressed a related problem in a
probabilistic framework. Later, they gave a quadratic-time algorithm based on a
global criterion, for a generalized version (AND-Quant) of the corresponding
problem, i.e., reporting if a given flow path is safe. Our contributions are as
follows:
1- A simple characterization for the safety of a given path based on a local
criterion, which can be directly adapted to give an optimal linear time
verification algorithm.
2- A simple enumeration algorithm that reports all maximal safe paths on a
flow network in $O(mn)$ time. The algorithm reports all safe paths using a
compact representation of the solution (called ${\cal P}_c$), which is
$\Omega(mn)$ in the worst case, but merely $O(m+n)$ in the best case.
3- An improved enumeration algorithm where all safe paths ending at every
vertex are represented as funnels using $O(n^2+|{\cal P}_c|)$ space. These can
be computed and used to report all maximal safe paths, using time linear in the
total space required by funnels, with an extra logarithmic factor.
Overall we present a simple characterization for the problem leading to an
optimal verification algorithm and a simple enumeration algorithm. The
enumeration algorithm is improved using the funnel structures for safe paths,
which may be of independent interest.
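The classical decomposition into $O(m)$ weighted paths that the abstract builds on can be sketched with a standard greedy routine (this is not the paper's safe-path algorithm, and the example graph in the test is hypothetical): repeatedly trace a positive-flow path from source to sink, subtract its bottleneck value, and record it.

```python
def decompose_flow(flow, source, sink):
    """Greedily decompose a flow on a DAG into weighted source-to-sink paths.
    `flow` maps (u, v) edges to positive values; flow conservation is assumed
    at every internal vertex. Returns a list of (path, weight) pairs."""
    flow = dict(flow)  # work on a copy
    paths = []
    while True:
        # Follow any positive-flow edge out of each vertex until the sink.
        path, u = [source], source
        while u != sink:
            nxt = next((v for (a, v) in flow if a == u and flow[(a, v)] > 0),
                       None)
            if nxt is None:
                break
            path.append(nxt)
            u = nxt
        if u != sink:
            return paths
        w = min(flow[(a, b)] for a, b in zip(path, path[1:]))
        for e in zip(path, path[1:]):
            flow[e] -= w
        paths.append((path, w))
```

Safe paths are exactly the (sub)paths that survive in every such decomposition, which is why a verification criterion independent of any particular greedy choice is needed.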
|
The most advanced D-Wave Advantage quantum annealer has 5000+ qubits;
however, every qubit is connected to only a small number of neighbors. As such,
implementation of a fully-connected graph results in an order of magnitude
reduction in qubit count. To compensate for the reduced number of qubits, one
has to rely on special heuristic software such as qbsolv, the purpose of which
is to decompose a large problem into smaller pieces that fit onto a quantum
annealer. In this work, we compare the performance of two implementations of
such software: the original open-source qbsolv which is a part of the D-Wave
Ocean tools and a new Mukai QUBO solver from Quantum Computing Inc. (QCI). The
comparison is done for solving the electronic structure problem and is
implemented in a classical mode (Tabu search techniques). The Quantum Annealer
Eigensolver is used to map the electronic structure eigenvalue-eigenvector
equation to a type of problem solvable on modern quantum annealers. We find
that the Mukai QUBO solver outperforms the Ocean qbsolv for all calculations
done in the present work, both the ground and excited state calculations. This
work stimulates the development of software to assist in the utilization of
modern quantum annealers.
|
We derive the Thouless-Anderson-Palmer (TAP) equations for the Ghatak and
Sherrington model. Our derivation, based on the cavity method, holds at high
temperature and at all values of the crystal field. It confirms the prediction
of Yokota.
|
One of the most complex and devastating disaster scenarios that the
U.S.~Pacific Northwest region and the state of Oregon faces is a large
magnitude Cascadia Subduction Zone earthquake event. The region's electrical
grid lacks resilience against the destruction of a megathrust earthquake, a
powerful tsunami, hundreds of aftershocks and increased volcanic activity, all
of which are highly probable components of this hazard. This research seeks to
catalyze further understanding and improvement of resilience. By systematizing
power system related experiences of historical earthquakes, and collecting
practical and innovative ideas from other regions on how to enhance network
design, construction, and operation, important steps are being taken toward a
more resilient, earthquake-resistant grid. This paper presents relevant
findings in an effort to be an overview and a useful guideline for those who
are also working towards greater electrical grid resilience.
|
The representation of data and its relationships using networks is prevalent
in many research fields such as computational biology, medical informatics and
social networks. Recently, complex network models have been introduced to
better capture the insights of the modelled scenarios. Among others,
dual-network-based models have been introduced, which consist in mapping
information as a pair of networks containing the same nodes but different edges.
We focus on the use of a novel approach to visualise and analyse dual
networks. The method uses two algorithms for community discovery, and it is
provided as a Python-based tool with a graphical user interface. The tool is
able to load dual networks and to extract both the densest connected subgraph
as well as the common modular communities. The latter is obtained by using an
adapted implementation of the Louvain algorithm.
The proposed algorithm and graphical tool have been tested by using social,
biological, and co-authorship networks. Results demonstrate that the proposed
approach is efficient and is able to extract meaningful information from dual
networks. Finally, as a contribution, the proposed graphical user interface
can be considered a valuable addition in this context.
|
In this article, we develop an algebraic framework of axioms which abstracts
various high-level properties of multi-qudit representations of generalized
Clifford algebras. We further construct an explicit model and prove that it
satisfies these axioms. Strengths of our algebraic framework include the
minimality of its assumptions, and the readiness by which one may give an
explicit construction satisfying these assumptions. In terms of applications,
this algebraic framework provides a solid foundation which opens the way for
developing a graphical calculus for multi-qudit representations of generalized
Clifford algebras using purely algebraic methods, which is addressed in a
follow-up paper.
|
Assuming the Riemann hypothesis we establish explicit bounds for the modulus
of the log-derivative of Riemann's zeta-function in the critical strip.
|
Abstract symbolic reasoning, as required in domains such as mathematics and
logic, is a key component of human intelligence. Solvers for these domains have
important applications, especially to computer-assisted education. But learning
to solve symbolic problems is challenging for machine learning algorithms.
Existing models either learn from human solutions or use hand-engineered
features, making them expensive to apply in new domains. In this paper, we
instead consider symbolic domains as simple environments where states and
actions are given as unstructured text, and binary rewards indicate whether a
problem is solved. This flexible setup makes it easy to specify new domains,
but search and planning become challenging. We introduce four environments
inspired by the Mathematics Common Core Curriculum, and observe that existing
Reinforcement Learning baselines perform poorly. We then present a novel
learning algorithm, Contrastive Policy Learning (ConPoLe) that explicitly
optimizes the InfoNCE loss, which lower bounds the mutual information between
the current state and next states that continue on a path to the solution.
ConPoLe successfully solves all four domains. Moreover, problem representations
learned by ConPoLe enable accurate prediction of the categories of problems in
a real mathematics curriculum. Our results suggest new directions for
reinforcement learning in symbolic domains, as well as applications to
mathematics education.
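The InfoNCE objective has a compact per-example form. The sketch below is a generic numerically stable log-softmax over one positive score and several negative scores, not ConPoLe's actual scoring network: the positive is the successor state on a solution path, the negatives are other candidate states.

```python
import math

def info_nce(pos_score, neg_scores):
    """InfoNCE loss for one example: -log softmax of the positive score
    against the negatives. Minimizing it maximizes a lower bound on the
    mutual information between the current state and the next state."""
    logits = [pos_score] + list(neg_scores)
    m = max(logits)  # stabilize the log-sum-exp
    lse = m + math.log(sum(math.exp(s - m) for s in logits))
    return lse - pos_score
```

With one negative of equal score the loss is log 2, and it decreases as the positive pulls ahead of the negatives, which is the contrastive behaviour the learning algorithm exploits.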
|
With the ongoing penetration of conversational user interfaces, a better
understanding of the social and emotional characteristics inherent to dialogue is
required. Chatbots in particular face the challenge of conveying human-like
behaviour while being restricted to one channel of interaction, i.e., text. The
goal of the presented work is thus to investigate whether characteristics of
social intelligence embedded in human-chatbot interactions are perceivable by
human interlocutors and, if so, whether this influences the experienced
interaction quality. Focusing on the social intelligence dimensions
Authenticity, Clarity and Empathy, we first used a questionnaire survey
evaluating the level of perception in text utterances, and then conducted a
Wizard of Oz study to investigate the effects of these utterances in a more
interactive setting. Results show that people have great difficulties
perceiving elements of social intelligence in text. While on the one hand they
find anthropomorphic behaviour pleasant and positive for the naturalness of a
dialogue, they may also perceive it as frightening and unsuitable when
expressed by an artificial agent in the wrong way or at the wrong time.
|
Learning representations for graphs plays a critical role in a wide spectrum
of downstream applications. In this paper, we summarize the limitations of the
prior works in three respects: representation space, modeling dynamics and
modeling uncertainty. To bridge this gap, we propose to learn dynamic graph
representation in hyperbolic space, for the first time, which aims to infer
stochastic node representations. Working with hyperbolic space, we present a
novel Hyperbolic Variational Graph Neural Network, referred to as HVGNN. In
particular, to model the dynamics, we introduce a Temporal GNN (TGNN) based on
a theoretically grounded time encoding approach. To model the uncertainty, we
devise a hyperbolic graph variational autoencoder built upon the proposed TGNN
to generate stochastic node representations of hyperbolic normal distributions.
Furthermore, we introduce a reparameterisable sampling algorithm for the
hyperbolic normal distribution to enable the gradient-based learning of HVGNN.
Extensive experiments show that HVGNN outperforms state-of-the-art baselines on
real-world datasets.
|
We propose SinIR, an efficient reconstruction-based framework trained on a
single natural image for general image manipulation, including
super-resolution, editing, harmonization, paint-to-image, photo-realistic style
transfer, and artistic style transfer. We train our model on a single image
with cascaded multi-scale learning, where each network at each scale is
responsible for image reconstruction. This reconstruction objective greatly
reduces the complexity and running time of training, compared to the GAN
objective. However, the reconstruction objective also degrades the output
quality. Therefore, to solve this problem, we further utilize simple random
pixel shuffling, which also gives control over manipulation, inspired by the
Denoising Autoencoder. With quantitative evaluation, we show that SinIR has
competitive performance on various image manipulation tasks. Moreover, with a
much simpler training objective (i.e., reconstruction), SinIR is trained 33.5
times faster than SinGAN (for 500 × 500 images), which solves similar tasks. Our
code is publicly available at github.com/YooJiHyeong/SinIR.
|
This paper studies the model compression problem of vision transformers.
Benefiting from the self-attention module, transformer architectures have shown
extraordinary performance on many computer vision tasks. Although the network
performance is boosted, transformers often require more computational
resources, including memory usage and inference complexity. Compared with
the existing knowledge distillation approaches, we propose to excavate useful
information from the teacher transformer through the relationship between
images and the divided patches. We then explore an efficient fine-grained
manifold distillation approach that simultaneously calculates cross-image,
cross-patch, and randomly selected manifolds in teacher and student models.
Experimental results conducted on several benchmarks demonstrate the
superiority of the proposed algorithm for distilling portable transformer
models with higher performance. For example, our approach achieves 75.06% Top-1
accuracy on the ImageNet-1k dataset for training a DeiT-Tiny model, which
outperforms other ViT distillation methods.
|
Adversarial examples mainly exploit changes to input pixels to which humans
are not sensitive, and arise from the fact that models make decisions based
on uninterpretable features. Interestingly, cognitive science reports that
human classification decisions rely
predominantly on low spatial frequency components. In this paper, we
investigate the robustness to adversarial perturbations of models enforced
during training to leverage information corresponding to different spatial
frequency ranges. We show that it is tightly linked to the spatial frequency
characteristics of the data at stake. Indeed, depending on the data set, the
same constraint may result in very different levels of robustness (up to a 0.41
difference in adversarial accuracy). To explain this phenomenon, we conduct
several experiments to highlight influential factors such as the level of
sensitivity to high frequencies, and the transferability of adversarial
perturbations between original and low-pass filtered inputs.
|
In this paper, we propose a simple yet effective crowd counting and
localization network named SCALNet. Unlike most existing works that separate
the counting and localization tasks, we consider those tasks as a pixel-wise
dense prediction problem and integrate them into an end-to-end framework.
Specifically, for crowd counting, we adopt a counting head supervised by the
Mean Square Error (MSE) loss. For crowd localization, the key insight is to
recognize the keypoints of people, i.e., the center points of heads. We propose a
localization head to distinguish people in dense crowds, trained with two loss
functions, i.e., the Negative-Suppressed Focal (NSF) loss and the False-Positive
(FP) loss, which balance the positive/negative examples and handle false-positive
predictions. Experiments on the recent large-scale benchmark NWPU-Crowd
show that our approach outperforms state-of-the-art methods by more than 5%
and 10% in the crowd localization and counting tasks, respectively. The
code is publicly available at https://github.com/WangyiNTU/SCALNet.
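As a rough illustration of the loss design described above, the following sketch implements a focal-style loss in which negative (background) pixels are down-weighted. The exact NSF/FP formulation of SCALNet is not reproduced here, so the `suppressed_focal_loss` function, its `gamma`/`beta` values, and the toy inputs are all illustrative assumptions.

```python
import math

# Hedged sketch: a focal-style loss whose negative terms are suppressed by a
# factor beta, in the spirit of balancing positive/negative examples for
# keypoint heatmaps. Not the exact NSF loss of SCALNet; gamma and beta are
# illustrative choices.
def suppressed_focal_loss(pred, target, gamma=2.0, beta=0.25):
    """pred: flat list of per-pixel probabilities; target: {0,1} labels."""
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, 1e-7), 1 - 1e-7)  # clamp for numerical safety
        if t == 1:                       # positive pixel (head center)
            total += -((1 - p) ** gamma) * math.log(p)
        else:                            # negative pixel, down-weighted
            total += -beta * (p ** gamma) * math.log(1 - p)
    return total / len(pred)

loss = suppressed_focal_loss([0.9, 0.2, 0.1], [1, 0, 0])
print(loss)  # small positive value for a mostly-correct prediction
```

Confident predictions on positives and low scores on negatives both shrink the loss, while the `beta` factor keeps the abundant negatives from dominating.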
|
Harmonic generation in atoms and molecules has reshaped our understanding of
ultrafast phenomena beyond traditional nonlinear optics and has launched
attosecond physics. Harmonics from solids represent a new frontier, where both
majority and minority spin channels contribute to harmonics. This is true even
in a ferromagnet whose electronic states are equally available to optical
excitation. Here, we demonstrate that harmonics can be generated mostly from
a single spin channel in half-metallic chromium dioxide. An energy gap in the
minority channel greatly reduces the harmonic generation, so harmonics
predominantly emit from the majority channel, with a small contribution from
the minority channel. However, this is only possible when the incident photon
energy is well below the energy gap in the minority channel, so all the
transitions in the minority channel are virtual. The onset photon energy
is determined by the energy of the dipole-allowed transition
between the O-$2p$ and Cr-$3d$ states. Harmonics mainly from a single spin
channel can be detected, regardless of laser field strength, as long as the
photon energy is below the minority band energy gap. This prediction should be
tested experimentally.
|
Dynamic Time Warping (DTW) is widely used for temporal data processing.
However, existing methods can neither learn the discriminative prototypes of
different classes nor exploit such prototypes for further analysis. We propose
Discriminative Prototype DTW (DP-DTW), a novel method to learn class-specific
discriminative prototypes for temporal recognition tasks. DP-DTW shows superior
performance compared to conventional DTWs on time series classification
benchmarks. Combined with end-to-end deep learning, DP-DTW can handle
challenging weakly supervised action segmentation problems and achieves
state-of-the-art results on standard benchmarks. Moreover, detailed reasoning on the
input video is enabled by the learned action prototypes. Specifically, an
action-based video summarization can be obtained by aligning the input sequence
with action prototypes.
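The alignment at the core of such methods can be illustrated with the classic DTW recurrence. The sketch below computes the dynamic-programming DTW cost and classifies a sequence by its nearest prototype; the hand-picked `prototypes`, unlike DP-DTW's learned ones, are purely illustrative.

```python
# Hedged sketch: classic DTW via dynamic programming, plus nearest-prototype
# classification. DP-DTW *learns* discriminative prototypes end-to-end; the
# fixed toy prototypes here only illustrate how alignment is used.
def dtw_distance(a, b):
    """Dynamic-programming DTW cost between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of match / insertion / deletion
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(x, prototypes):
    """Assign x to the class whose prototype is closest under DTW."""
    return min(prototypes, key=lambda c: dtw_distance(x, prototypes[c]))

prototypes = {"rising": [0, 1, 2, 3], "falling": [3, 2, 1, 0]}
print(classify([0, 0, 1, 2, 2, 3], prototypes))  # -> rising
```

Because DTW warps the time axis, the stretched test sequence still aligns perfectly with the shorter "rising" prototype.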
|
Hadronic matrix elements of local four-quark operators play a central role in
non-leptonic kaon decays, while vacuum matrix elements involving the same kind
of operators appear in inclusive dispersion relations, such as those relevant
in $\tau$-decay analyses. Using an $SU(3)_L\otimes SU(3)_R$ decomposition of
the operators, we derive generic relations between these matrix elements,
extending well-known results that link observables in the two different
sectors. Two relevant phenomenological applications are presented. First, we
determine the electroweak-penguin contribution to the kaon CP-violating ratio
$\varepsilon'/\varepsilon$, using the measured hadronic spectral functions in
$\tau$ decay. Second, we fit our $SU(3)$ dynamical parameters to the most
recent lattice data on $K\to\pi\pi$ matrix elements. The comparison of this
numerical fit with results from previous analytical approaches provides an
interesting anatomy of the $\Delta I = \frac{1}{2}$ enhancement, confirming old
suggestions about its underlying dynamical origin.
|
We present a machine learning method to predict extreme hydrologic events
from spatially and temporally varying hydrological and meteorological data. We
used a timestep reduction technique to reduce the computational and memory
requirements and trained a bidirectional LSTM network to predict soil water and
stream flow from time series data observed and simulated over eighty years in
the Wabash River Watershed. We show that our simple model can be trained much
faster than complex attention networks such as GeoMAN without sacrificing
accuracy. Based on the predicted values of soil water and stream flow, we
predict the occurrence and severity of extreme hydrologic events such as
droughts. We also demonstrate that extreme events can be predicted in
geographical locations separate from locations observed during the training
process. This spatially-inductive setting enables us to predict extreme events
in other areas in the US and other parts of the world using our model trained
with the Wabash Basin data.
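A minimal sketch of one plausible timestep-reduction step: aggregating a long series into coarser windows before it is fed to a sequence model. The window-mean aggregation and window size are illustrative assumptions, not necessarily the paper's exact technique.

```python
# Hedged sketch of timestep reduction: average consecutive windows of a long
# daily series to shrink the sequence length (and thus compute/memory cost)
# before training a sequence model. Window size and mean aggregation are
# illustrative assumptions.
def reduce_timesteps(series, window):
    """Mean over non-overlapping windows; a trailing partial window is dropped."""
    return [sum(series[i:i + window]) / window
            for i in range(0, len(series) - window + 1, window)]

daily = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
print(reduce_timesteps(daily, 3))  # -> [2.0, 5.0]
```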
|
Electronic and optical properties of doped organic semiconductors are
dominated by local interactions between donor and acceptor molecules. However,
when such systems are in crystalline form, long-range order competes against
short-range couplings. In a first-principles study on three experimentally
resolved bulk structures of quaterthiophene doped by (fluorinated)
tetracyanoquinodimethane, we demonstrate the crucial role of long-range
interactions in donor/acceptor co-crystals. The band structures of the
investigated materials exhibit direct band-gaps decreasing in size with
increasing amount of F atoms in the acceptors. The valence-band maximum and
conduction-band minimum are found at the Brillouin zone boundary and the
corresponding wave-functions are segregated on donor and acceptor molecules,
respectively. With the aid of a tight-binding model, we rationalize that the
mechanisms responsible for these behaviors, which are ubiquitous in
donor/acceptor co-crystals, are driven by long-range interactions. The optical
response of the analyzed co-crystals is highly anisotropic. The absorption
onset is dominated by an intense resonance corresponding to a charge-transfer
excitation. Long-range interactions are again responsible for this behavior,
which enhances the efficiency of the co-crystals for photo-induced charge
separation and transport. In addition to this result, which has important
implications in the rational design of organic materials for opto-electronics,
our study clarifies that cluster models, accounting only for local
interactions, cannot capture the relevant impact of long-range order in
donor/acceptor co-crystals.
|
Recently, deep-learning based approaches have achieved impressive performance
for autonomous driving. However, end-to-end vision-based methods typically have
limited interpretability, making the behaviors of the deep networks difficult
to explain. Hence, their potential applications could be limited in practice.
To address this problem, we propose an interpretable end-to-end vision-based
motion planning approach for autonomous driving, referred to as IVMP. Given a
set of past surrounding-view images, our IVMP first predicts future egocentric
semantic maps in bird's-eye-view space, which are then employed to plan
trajectories for self-driving vehicles. The predicted future semantic maps not
only provide useful interpretable information, but also allow our motion
planning module to handle objects with low probability, thus improving the
safety of autonomous driving. Moreover, we also develop an optical flow
distillation paradigm, which can effectively enhance the network while still
maintaining its real-time performance. Extensive experiments on the nuScenes
dataset and closed-loop simulation show that our IVMP significantly outperforms
the state-of-the-art approaches in imitating human drivers with a much higher
success rate. Our project page is available at
https://sites.google.com/view/ivmp.
|
We develop methods for forming prediction sets in an online setting where the
data generating distribution is allowed to vary over time in an unknown
fashion. Our framework builds on ideas from conformal inference to provide a
general wrapper that can be combined with any black box method that produces
point predictions of the unseen label or estimated quantiles of its
distribution. While previous conformal inference methods rely on the assumption
that the data points are exchangeable, our adaptive approach provably achieves
the desired coverage frequency over long time intervals irrespective of the
true data generating process. We accomplish this by modelling the distribution
shift as a learning problem in a single parameter whose optimal value is
varying over time and must be continuously re-estimated. We test our method,
adaptive conformal inference, on two real world datasets and find that its
predictions are robust to visible and significant distribution shifts.
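The single-parameter update can be sketched as follows: a working miscoverage level is nudged after every step according to whether the last prediction interval covered the observation. The Gaussian data, running-mean point predictor, and interval width below are illustrative assumptions, not the method's required components.

```python
import random

# Hedged sketch in the spirit of adaptive conformal inference: the working
# miscoverage level a_t is the single continuously re-estimated parameter.
# After each observation it moves toward the target alpha when covered and
# away (widening future intervals) when missed.
def run_aci(ys, alpha=0.1, gamma=0.05, width=2.0):
    """Intervals [mu - w*(1-a_t), mu + w*(1-a_t)] around a running mean."""
    a_t, mu, covered = alpha, 0.0, 0
    for t, y in enumerate(ys, 1):
        half = width * max(1.0 - a_t, 0.0)
        err = 0 if mu - half <= y <= mu + half else 1
        covered += 1 - err
        a_t += gamma * (alpha - err)   # key one-parameter online update
        mu += (y - mu) / t             # running mean as toy point predictor
    return covered / len(ys)

random.seed(0)
data = [random.gauss(0, 1) for _ in range(2000)]
print(run_aci(data))  # empirical coverage close to 1 - alpha
```

The feedback loop is the point: whatever the data distribution, persistent over- or under-coverage shifts `a_t` until the empirical coverage settles near the target.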
|
We analyze a fully discrete finite element numerical scheme for the
Cahn-Hilliard-Stokes-Darcy system that models two-phase flows in coupled free
flow and porous media. To avoid a well-known difficulty associated with the
coupling between the Cahn-Hilliard equation and the fluid motion, we make use
of the operator-splitting in the numerical scheme, so that these two solvers
are decoupled, which in turn would greatly improve the computational
efficiency. The unique solvability and the energy stability have been proved
in~\cite{CHW2017}. In this work, we carry out a detailed convergence analysis
and error estimate for the fully discrete finite element scheme, so that the
optimal-rate convergence order is established in the energy norm, i.e., in the
$\ell^\infty (0, T; H^1) \cap \ell^2 (0, T; H^2)$ norm for the phase variables,
as well as in the $\ell^\infty (0, T; H^1) \cap \ell^2 (0, T; H^2)$ norm for
the velocity variable. Such an energy norm error estimate leads to a
cancellation of a nonlinear error term associated with the convection part,
which turns out to be a key step in carrying out the analysis. In addition, a
discrete $\ell^2 (0, T; H^3)$ bound of the numerical solution for the phase
variables plays an important role in the error estimate, which is accomplished
via a discrete version of Gagliardo-Nirenberg inequality in the finite element
setting.
|
In this paper, the Hankel transform of the generalized q-exponential
polynomial of the first form (q, r)-Whitney numbers of the second kind is
established using the method of Cigler. Consequently, the Hankel transform of
the first form (q, r)-Dowling numbers is obtained as special case.
|
In the article, we investigate the diquark-diquark-antiquark type fully-heavy
pentaquark states with the spin-parity $J^P={\frac{1}{2}}^-$ via the QCD sum
rules, and obtain the masses $M_{cccc\bar{c}}=7.93\pm 0.15\,\rm{GeV}$ and
$M_{bbbb\bar{b}}=23.91\pm0.15\,\rm{GeV}$. These fully-heavy pentaquark states
can be searched for in the $J/\psi \Omega_{ccc}$ and $\Upsilon \Omega_{bbb}$
invariant mass spectra in the future.
|
Optical phenomena associated with extremely localized fields must be
understood with consideration of nonlocal and quantum effects, which pose a
hurdle to conceptualizing the physics in terms of eigenmodes. Here we first
propose a generalized Lorentz model to describe general nonlocal media under
linear mean-field approximation and formulate source-free Maxwell's equations
as a linear eigenvalue problem to define the quasinormal modes. Then we
introduce an orthonormalization scheme for the modes and establish a canonical
quasinormal mode framework for general nonlocal media. Explicit formalisms for
metals described by quantum hydrodynamic model and polar dielectrics with
nonlocal response are exemplified. The framework enables, for the first time,
direct modal analysis of mode transition in the quantum tunneling regime and
provides physical insights beyond usual far-field spectroscopic analysis.
Applied to nonlocal polar dielectrics, the framework also unveils the important
roles of longitudinal phonon polaritons in optical response.
|
Smart contracts are distributed, self-enforcing programs executing on top of
blockchain networks. They have the potential to revolutionize many industries
such as financial institutions and supply chains. However, smart contracts are
subject to code-based vulnerabilities, which casts a shadow on their
applications. As smart contracts are unpatchable (due to the immutability of
blockchain), it is essential that smart contracts are guaranteed to be free of
vulnerabilities. Unfortunately, smart contract languages such as Solidity are
Turing-complete, which implies that verifying them statically is infeasible.
Thus, alternative approaches must be developed to provide the guarantee. In
this work, we develop an approach which automatically transforms smart
contracts so that they are provably free of 4 common kinds of vulnerabilities.
The key idea is to apply runtime verification in an efficient and provably
correct manner. Experimental results on 5000 smart contracts show that our
approach incurs minor run-time overhead in terms of time (i.e., 14.79%) and gas
(i.e., 0.79%).
|
We present a structure preserving discretization of the fundamental spacetime
geometric structures of fluid mechanics in the Lagrangian description in 2D and
3D. Based on this, multisymplectic variational integrators are developed for
barotropic and incompressible fluid models, which satisfy a discrete version of
Noether's theorem. We show how the geometric integrator can handle regular fluid
motion in vacuum with free boundaries and constraints such as the impact
against an obstacle of a fluid flowing on a surface. Our approach is applicable
to a wide range of models including the Boussinesq and shallow water models, by
appropriate choice of the Lagrangian.
|
Modularity of neural networks -- both biological and artificial -- can be
thought of either structurally or functionally, and the relationship between
these is an open question. We show that enforcing structural modularity via
sparse connectivity between two dense sub-networks which need to communicate to
solve the task leads to functional specialization of the sub-networks, but only
at extreme levels of sparsity. With even a moderate number of interconnections,
the sub-networks become functionally entangled. Defining functional
specialization is in itself a challenging problem without a universally agreed
solution. To address this, we designed three different measures of
specialization (based on weight masks, retraining and correlation) and found
them to qualitatively agree. Our results have implications in both neuroscience
and machine learning. For neuroscience, they show that we cannot conclude that
there is functional modularity simply by observing moderate levels of
structural modularity: knowing the brain's connectome is not sufficient for
understanding how it breaks down into functional modules. For machine learning,
using structure to promote functional modularity -- which may be important for
robustness and generalization -- may require extremely narrow bottlenecks
between modules.
|
Being able to spot defective parts is a critical component in large-scale
industrial manufacturing. A particular challenge that we address in this work
is the cold-start problem: fit a model using nominal (non-defective) example
images only. While handcrafted solutions per class are possible, the goal is to
build systems that work well simultaneously on many different tasks
automatically. The best-performing approaches combine embeddings from ImageNet
models with an outlier detection model. In this paper, we extend this line
of work and propose PatchCore, which uses a maximally representative memory
bank of nominal patch-features. PatchCore offers competitive inference times
while achieving state-of-the-art performance for both detection and
localization. On the standard dataset MVTec AD, PatchCore achieves an
image-level anomaly detection AUROC score of $99.1\%$, more than halving the
error compared to the next best competitor. We further report competitive
results on two additional datasets and also find competitive results in the
few-samples regime.
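The memory-bank idea can be sketched in a few lines: a test feature is scored by its distance to the nearest feature stored from nominal images. The tiny 2-D "features" below are stand-ins for deep patch embeddings, and PatchCore's coreset subsampling of the bank is omitted.

```python
import math

# Hedged sketch of nearest-neighbour anomaly scoring against a memory bank
# of nominal patch features. Real PatchCore uses ImageNet embeddings and a
# coreset-subsampled bank; the 2-D points here are illustrative stand-ins.
def nearest_distance(x, memory_bank):
    """Anomaly score: distance to the closest stored nominal feature."""
    return min(math.dist(x, m) for m in memory_bank)

memory_bank = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # features from nominal images
nominal_score = nearest_distance((0.1, 0.1), memory_bank)
anomalous_score = nearest_distance((5.0, 5.0), memory_bank)
print(nominal_score < anomalous_score)  # -> True
```

A feature resembling something seen on defect-free parts scores low; one far from every stored feature scores high, flagging a potential defect.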
|
Novel many-body and topological electronic phases can be created in
assemblies of interacting spins coupled to a superconductor, such as
one-dimensional topological superconductors with Majorana zero modes (MZMs) at
their ends. Understanding and controlling interactions between spins and the
emergent band structure of the in-gap Yu-Shiba-Rusinov (YSR) states they induce
in a superconductor are fundamental for engineering such phases. Here, by
precisely positioning magnetic adatoms with a scanning tunneling microscope
(STM), we demonstrate both the tunability of exchange interaction between spins
and precise control of the hybridization of YSR states they induce on the
surface of a bismuth (Bi) thin film that is made superconducting with the
proximity effect. In this platform, depending on the separation of spins, the
interplay between Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction, spin-orbit
coupling, and surface magnetic anisotropy stabilizes different types of spin
alignments. Using high-resolution STM spectroscopy at millikelvin temperatures,
we probe these spin alignments through monitoring the spin-induced YSR states
and their energy splitting. Such measurements also reveal a quantum phase
transition between the ground states with different electron number parity for
a pair of spins in a superconductor tuned by their separation. Experiments on
larger assemblies show that spin-spin interactions can be mediated in a
superconductor over long distances. Our results show that controlling
hybridization of the YSR states in this platform provides the possibility of
engineering the band structure of such states for creating topological phases.
|
We describe a numerical method that simulates the interaction of the helium
atom with sequences of femtosecond and attosecond light pulses. The method,
which is based on the close-coupling expansion of the electronic configuration
space in a B-spline bipolar spherical harmonic basis, can accurately reproduce
the excitation and single ionization of the atom, within the electrostatic
approximation. The time dependent Schr\"odinger equation is integrated with a
sequence of second-order split-exponential unitary propagators. The asymptotic
channel-, energy- and angularly-resolved photoelectron distributions are
computed by projecting the wavepacket at the end of the simulation on the
multichannel scattering states of the atom, which are separately computed
within the same close-coupling basis. This method is applied to simulate the
pump-probe ionization of helium in the vicinity of the $2s/2p$ excitation
threshold of the He$^+$ ion. This work confirms the qualitative conclusions of
one of our earlier publications [L. Argenti and E. Lindroth, Phys. Rev. Lett.
{\bf 105}, 053002 (2010)], in which we demonstrated the control of the $2s/2p$
ionization branching ratio. Here, we take those calculations to convergence and
show how correlation brings the periodic modulations of the branching ratios
into almost perfect phase opposition. The residual total ionization probability to the
$2s+2p$ channels is dominated by the beating between the $sp_{2,3}^+$ and the
$sp_{2,4}^+$ doubly excited states, which is consistent with the modulation of
the complementary signal in the $1s$ channel, measured in 2010 by Chang and
co-workers~[S Gilbertson~\emph{et al.}, Phys. Rev. Lett. {\bf 105}, 263003
(2010)].
|
Over the past decade, unprecedented progress in the development of neural
networks influenced dozens of different industries, including weed recognition
in the agro-industrial sector. The use of neural networks in agro-industrial
activity in the task of recognizing cultivated crops is a new direction. The
absence of any standards significantly complicates the understanding of the
real situation of neural network use in the agricultural sector. This
manuscript presents a complete analysis of research over the past 10 years
on the use of neural networks for the classification and tracking of weeds.
In particular, we analyse the results of using various neural network
algorithms for the classification and tracking tasks. As a result, we present
recommendations for the use of neural networks in the tasks of recognizing
cultivated crops and weeds. Following these recommendations can significantly
improve the quality of research on this topic and simplify the analysis and
understanding of any paper.
|
The formation of $\alpha$ particles on the nuclear surface has been a fundamental
problem since the early age of nuclear physics. It strongly affects the
$\alpha$ decay lifetime of heavy and superheavy elements, level scheme of light
nuclei, and the synthesis of the elements in stars. However, the
$\alpha$-particle formation in medium-mass nuclei has been poorly known despite
its importance. Here, based on the $^{48}{\rm Ti}(p,p\alpha)^{44}{\rm Ca}$
reaction analysis, we report that the $\alpha$-particle formation in a
medium-mass nucleus $^{48}{\rm Ti}$ is much stronger than that expected from a
mean-field approximation, and the estimated average distance between $\alpha$
particle and the residue is as large as 4.5 fm. This new result poses a
challenge of describing four-nucleon correlations with microscopic nuclear
models.
|
Giant spin-splitting was recently predicted in collinear antiferromagnetic
materials with a specific class of magnetic space group. In this work, we have
predicted a two-dimensional (2D) antiferromagnetic Weyl semimetal (WS), CrO,
with a largely spin-split band structure, spin-momentum-locked transport
properties, and a high N\'eel temperature. It has two pairs of spin-polarized Weyl points at
the Fermi level. By manipulating the position of the Weyl points with strain,
four different antiferromagnetic spintronic states can be achieved: WSs with
two spin-polarized transport channels (STCs), WSs with single STC,
semiconductors with two STCs, and semiconductors with single STC. Based on
these properties, a new avenue in spintronics with 2D collinear
antiferromagnets is proposed.
|
As Bangla is the seventh most spoken language in the world, its online use
has increased in recent times. Hence, it has become very
important to analyze Bangla text data to maintain a safe and harassment-free
online space. The data made accessible in this article has been
gathered and labeled from comments on public Facebook posts by celebrities,
government officials, and athletes. In total, 44001 comments were collected.
The dataset is compiled with the aim of developing the
ability of machines to differentiate, with the help of Natural Language
Processing, whether a comment is a bully expression or not, and if it is
inappropriate, to what extent. The comments are labeled with
different categories of harassment. Exploratory analysis from different
perspectives is also included in this paper to have a detailed overview. Due to
the scarcity of data collection of categorized Bengali language comments, this
dataset can play a significant role in research on detecting bully words,
identifying inappropriate comments, detecting different categories of Bengali
bullies, etc. The dataset is publicly available at
https://data.mendeley.com/datasets/9xjx8twk8p.
|
Let $\Omega \Subset \mathbb R^n$, $f \in C^1(\mathbb R^{N\times n})$ and
$g\in C^1(\mathbb R^N)$, where $N,n \in \mathbb N$. We study the minimisation
problem of finding $u \in W^{1,\infty}_0(\Omega;\mathbb R^N)$ that satisfies \[
\big\| f(\mathrm D u) \big\|_{L^\infty(\Omega)} \! = \inf \Big\{\big\|
f(\mathrm D v) \big\|_{L^\infty(\Omega)} \! : \ v \! \in
W^{1,\infty}_0(\Omega;\mathbb R^N), \, \| g(v) \|_{L^\infty(\Omega)}\!
=1\Big\}, \] under natural assumptions on $f,g$. This includes the
$\infty$-eigenvalue problem as a special case. Herein we prove existence of a
minimiser $u_\infty$ with extra properties, derived as the limit of minimisers
of approximating constrained $L^p$ problems as $p\to \infty$. A central
contribution and novelty of this work is that $u_\infty$ is shown to solve a
divergence PDE with measure coefficients, whose leading term is a divergence
counterpart equation of the non-divergence $\infty$-Laplacian. Our results are
new even in the scalar case of the $\infty$-eigenvalue problem.
|
This paper presents the detailed simulation of a double-pixel structure for
charged particle detection based on the 3D-trench silicon sensor developed for
the TIMESPOT project and a comparison of the simulation results with
measurements performed at the $\pi$-M1 beam at the PSI laboratory. The simulation is
based on the combined use of several software tools (TCAD, GEANT4, TCoDe and
TFBoost), which allow one to fully design and simulate the device physics
response in a very short computational time, O(1-100 s) per simulated signal,
by exploiting parallel computation on single- or multi-thread processors. This
allowed us to produce large samples of simulated signals, perform detailed studies
of the sensor characteristics and make precise comparisons with experimental
results.
|
We investigate the possibility that radio-bright active galactic nuclei (AGN)
are responsible for the TeV--PeV neutrinos detected by IceCube. We use an
unbinned maximum-likelihood-ratio method, 10 years of IceCube muon-track data,
and 3388 radio-bright AGN selected from the Radio Fundamental Catalog. None of
the AGN in the catalog have a large global significance. The two most
significant sources have global significance of $\simeq$ 1.5$\sigma$ and
0.8$\sigma$, though 4.1$\sigma$ and 3.8$\sigma$ local significance. Our
stacking analyses show no significant correlation between the whole catalog and
IceCube neutrinos. We infer from the null search that this catalog can account
for at most 30\% (95\% CL) of the diffuse astrophysical neutrino flux measured
by IceCube. Moreover, our results disagree with recent work that claimed a
4.1$\sigma$ detection of neutrinos from the sources in this catalog, and we
discuss the reasons for the difference.
|
We propose a generalization of the coherent anomaly method to extract the
critical exponents of a phase transition occurring in the steady-state of an
open quantum many-body system. The method, originally developed by Suzuki [J.
Phys. Soc. Jpn. {\bf 55}, 4205 (1986)] for equilibrium systems, is based on the
scaling properties of the singularity in the response functions determined
through cluster mean-field calculations. We apply this method to the
dissipative transverse-field Ising model and the dissipative XYZ model in two
dimensions obtaining convergent results already with small clusters.
|
As a step towards quantization of Higher Spin Gravities we construct the
presymplectic AKSZ sigma-model for $4d$ Higher Spin Gravity which is AdS/CFT
dual of Chern-Simons vector models. It is shown that the presymplectic
structure leads to the correct quantum commutator of higher spin fields and to
the correct algebra of the global higher spin symmetry currents. The
presymplectic AKSZ model is proved to be unique; it depends on two coupling
constants in accordance with the AdS/CFT duality, and it passes some simple
checks of interactions.
|
Electrification, intelligence, and networking are the most important future
development directions for automobiles. Intelligent electric vehicles have
shown great potential to improve traffic mobility and reduce emissions, especially at
unsignalized intersections. Previous research has shown that vehicle passing
order is the key factor in traffic mobility improvement. In this paper, we
propose a graph-based cooperation method to formalize the conflict-free
scheduling problem at unsignalized intersections. Firstly, conflict directed
graphs and coexisting undirected graphs are built to describe the conflict
relationship of the vehicles. Then, two graph-based methods are introduced to
determine the vehicle passing order. One method is an optimized depth-first
spanning tree method which aims to find the local optimal passing order for
each vehicle. The other method is a maximum matching algorithm that solves the
global optimal problem. The computational complexity of both methods is also
derived. Numerical simulation results demonstrate the effectiveness of the
proposed algorithms.
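The conflict-graph idea can be illustrated with a toy sketch: vehicles whose paths cross are joined in an undirected conflict graph, and mutually non-conflicting vehicles can share a passing "wave". The greedy grouping below is illustrative only (the vehicle names and conflict pairs are hypothetical), not the paper's optimized depth-first spanning tree or maximum matching methods.

```python
# Hypothetical conflict pairs; in the paper these are derived from the
# actual intersection geometry of the vehicle trajectories.
conflicts = {("A", "C"), ("B", "C"), ("B", "D")}

def in_conflict(u, v):
    return (u, v) in conflicts or (v, u) in conflicts

def greedy_waves(vehicles):
    """Greedily group mutually non-conflicting vehicles into passing waves."""
    waves = []
    for v in vehicles:  # processed in arrival order
        for wave in waves:
            if all(not in_conflict(v, u) for u in wave):
                wave.append(v)
                break
        else:
            waves.append([v])
    return waves

print(greedy_waves(["A", "B", "C", "D"]))  # [['A', 'B'], ['C', 'D']]
```

Here A and B pass together, then C and D, since neither pair conflicts internally.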
|
As an indispensable part of modern human-computer interaction systems, speech
synthesis technology helps users obtain the output of intelligent machines more
easily and intuitively, and has thus attracted increasing attention. Due to the
limitations of high complexity and low efficiency of traditional speech
synthesis technology, the current research focus is the deep learning-based
end-to-end speech synthesis technology, which has more powerful modeling
ability and a simpler pipeline. It mainly consists of three modules: text
front-end, acoustic model, and vocoder. This paper reviews the research status
of these three parts, and classifies and compares various methods according to
their emphasis. Moreover, this paper also summarizes the open-source speech
corpus of English, Chinese and other languages that can be used for speech
synthesis tasks, and introduces some commonly used subjective and objective
speech quality evaluation methods. Finally, some promising future research
directions are pointed out.
|
The internet advertising market is a multi-billion dollar industry, in which
advertisers buy thousands of ad placements every day by repeatedly
participating in auctions. In recent years, the industry has shifted to
first-price auctions as the preferred paradigm for selling advertising slots.
Another important and ubiquitous feature of these auctions is the presence of
campaign budgets, which specify the maximum amount the advertisers are willing
to pay over a specified time period. In this paper, we present a new model to
study the equilibrium bidding strategies in first-price auctions for
advertisers who satisfy budget constraints on average. Our model dispenses with
the common, yet unrealistic assumption that advertisers' values are independent
and instead assumes a contextual model in which advertisers determine their
values using a common feature vector. We show the existence of a natural
value-pacing-based Bayes-Nash equilibrium under very mild assumptions, and
study its structural properties. Furthermore, we generalize the existence
result to standard auctions and prove a revenue equivalence showing that all
standard auctions yield the same revenue even in the presence of budget
constraints.
|
We study the dynamics of a ferrofluid thin film confined in a Hele-Shaw cell,
and subjected to a tilted nonuniform magnetic field. It is shown that the
interface between the ferrofluid and an inviscid outer fluid (air) supports
traveling waves, governed by a novel modified Kuramoto--Sivashinsky-type
equation derived under the long-wave approximation. The balance between energy
production and dissipation in this long-wave equation allows for the existence
of dissipative solitons. The propagation velocity and profile shape of these
permanent traveling waves are shown to be tunable via the external magnetic field. A
multiple-scale analysis is performed to obtain the correction to the linear
prediction of the propagation velocity, and to reveal how the nonlinearity
arrests the linear instability. The traveling periodic interfacial waves
discovered are identified as fixed points in an energy phase plane. It is shown
that transitions between states (wave profiles) occur. These transitions are
explained via the spectral stability of the traveling waves. Interestingly,
multiperiodic waves, which are a non-integrable analog of the double cnoidal
wave, are also found to propagate under the model long-wave equation. These
multiperiodic solutions are investigated numerically, and they are found to be
long-lived transients, but ultimately abruptly transition to one of the stable
periodic states identified.
|
The origin of the gamma-ray emission of the blazar Mrk 421 is still a matter
of debate. We used 5.5 years of unbiased observing campaign data, obtained
using the FACT telescope and the Fermi LAT detector at TeV and GeV energies,
the longest and densest so far, together with contemporaneous multi-wavelength
observations, to characterise the variability of Mrk 421 and to constrain the
underlying physical mechanisms. We studied and correlated light curves obtained
by ten different instruments and found two significant results. The TeV and
X-ray light curves are very well correlated with a lag of <0.6 days. The GeV
and radio (15 GHz band) light curves are widely and strongly correlated.
Variations of the GeV light curve lead those in the radio. Lepto-hadronic and
purely hadronic models in the frame of shock acceleration predict proton
acceleration or cooling timescales that are ruled out by the short variability
timescales and delays observed in Mrk 421. Instead, the observations match the
predictions of leptonic models.
|
A pair-density wave state has been suggested to exist in underdoped cuprate
superconductors, with some supporting experimental evidence emerging over the
past few years from scanning tunneling spectroscopy. Several studies have also
linked the observed quantum oscillations in these systems to a reconstruction
of the Fermi surface by a pair-density wave. Here, we show, using semiclassical
analysis and numerical calculations, that a Fermi pocket created by first-order
scattering from a pair-density wave cannot induce such oscillations. In
contrast, pockets resulting from second-order scattering can cause
oscillations. We consider the effects of a finite pair-density wave correlation
length on the signal, and demonstrate that it is only weakly sensitive to
disorder in the form of $\pi$-phase slips. Finally, we discuss our results in
the context of the cuprates and show that a bidirectional pair-density wave may
produce the observed oscillation frequencies.
|
Federated Learning (FL) is emerging as a promising paradigm of
privacy-preserving machine learning, which trains an algorithm across multiple
clients without exchanging their data samples. Recent works have highlighted
several privacy and robustness weaknesses in FL and have separately addressed
these concerns using local differential privacy (LDP) and well-studied methods
from conventional ML. However, it is still not clear how LDP affects
adversarial robustness in FL. To fill this gap, this work attempts to develop a
comprehensive understanding of the effects of LDP on adversarial robustness in
FL. Clarifying the interplay is significant since this is the first step
towards a principled design of private and robust FL systems. We certify that
local differential privacy has both positive and negative effects on
adversarial robustness using theoretical analysis and empirical verification.
|
We define the flow group of any component of any stratum of rooted abelian or
quadratic differentials (those marked with a horizontal separatrix) to be the
group generated by almost-flow loops. We prove that the flow group is equal to
the fundamental group of the component. As a corollary, we show that the plus
and minus modular Rauzy--Veech groups are finite-index subgroups of their
ambient modular monodromy groups. This partially answers a question of Yoccoz.
Using this, and recent advances on algebraic hulls and Zariski closures of
monodromy groups, we prove that the Rauzy--Veech groups are Zariski dense in
their ambient symplectic groups. Density, in turn, implies the simplicity of
the plus and minus Lyapunov spectra of any component of any stratum of
quadratic differentials. Thus, we establish the Kontsevich--Zorich conjecture.
|
By using the Boole summation formula, we obtain asymptotic expansions for the
first and higher order derivatives of the alternating Hurwitz zeta function
$$\zeta_{E}(z,q)=\sum_{n=0}^\infty\frac{(-1)^{n}}{(n+q)^z}$$
with respect to its first argument
$$\zeta_{E}^{(m)}(z,q)\equiv\frac{\partial^m}{\partial z^m}\zeta_E(z,q).$$
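For reference, the series converges for $\Re z>0$ and is easy to check numerically. The sketch below (an illustration, not part of the paper) averages two consecutive partial sums, which for an alternating series drives the truncation error far below the first omitted term, and verifies the closed form $\zeta_E(2,1)=\eta(2)=\pi^2/12$.

```python
import numpy as np

def zeta_E(z, q, terms=100000):
    """Partial sum of the alternating Hurwitz zeta function (Re z > 0).

    Averaging two consecutive partial sums of an alternating series
    sharply reduces the truncation error.
    """
    n = np.arange(terms + 1)
    t = (-1.0) ** n / (n + q) ** z
    s = np.cumsum(t)
    return 0.5 * (s[-2] + s[-1])

# Check against the closed form zeta_E(2, 1) = eta(2) = pi^2 / 12
print(abs(zeta_E(2.0, 1.0) - np.pi ** 2 / 12) < 1e-9)  # True
```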
|
We show that every finite semilattice can be represented as an atomized
semilattice, an algebraic structure with additional elements (atoms) that
extend the semilattice's partial order. Each atom maps to one subdirectly
irreducible component, and the set of atoms forms a hypergraph that fully
defines the semilattice. An atomization always exists and is unique up to
"redundant atoms". Atomized semilattices are representations that can be used
as computational tools for building semilattice models from sentences, as well
as building their subalgebras and products. Atomized semilattices can be applied
to machine learning and to the study of semantic embeddings into algebras with
idempotent operators.
|
The recent emergence of machine-learning based generative models for speech
suggests a significant reduction in bit rate for speech codecs is possible.
However, the performance of generative models deteriorates significantly with
the distortions present in real-world input signals. We argue that this
deterioration is due to the sensitivity of the maximum likelihood criterion to
outliers and the ineffectiveness of modeling a sum of independent signals with
a single autoregressive model. We introduce predictive-variance regularization
to reduce the sensitivity to outliers, resulting in a significant increase in
performance. We show that noise reduction to remove unwanted signals can
significantly increase performance. We provide extensive subjective performance
evaluations that show that our system based on generative modeling provides
state-of-the-art coding performance at 3 kb/s for real-world speech signals at
reasonable computational complexity.
|
We consider the problem of estimating an object's physical properties such as
mass, friction, and elasticity directly from video sequences. Such a system
identification problem is fundamentally ill-posed due to the loss of
information during image formation. Current solutions require precise 3D labels
which are labor-intensive to gather, and infeasible to create for many systems
such as deformable solids or cloth. We present gradSim, a framework that
overcomes the dependence on 3D supervision by leveraging differentiable
multiphysics simulation and differentiable rendering to jointly model the
evolution of scene dynamics and image formation. This novel combination enables
backpropagation from pixels in a video sequence through to the underlying
physical attributes that generated them. Moreover, our unified computation
graph -- spanning from the dynamics and through the rendering process --
enables learning in challenging visuomotor control tasks, without relying on
state-based (3D) supervision, while obtaining performance competitive to or
better than techniques that rely on precise 3D labels.
|
In this paper, we study quasi post-critically finite degenerations for
rational maps. We construct limits for such degenerations as geometrically
finite rational maps on a finite tree of Riemann spheres. We prove the
boundedness for such degenerations of hyperbolic rational maps with Sierpinski
carpet Julia set and give criteria for the convergence for quasi-Blaschke
products $\mathcal{QB}_d$, making progress towards the analogues of Thurston's
compactness theorem for acylindrical $3$-manifolds and the double limit theorem
for quasi-Fuchsian groups in complex dynamics. In the appendix, we apply such
convergence results to show the existence of certain polynomial matings.
|
Optical wireless communications (OWCs) have been recognized as a candidate
enabler of next generation in-body nano-scale networks and implants. The
development of an accurate channel model capable of accommodating the
particularities of different types of tissues is expected to boost the design of
optimized communication protocols for such applications. Motivated by this,
this paper focuses on presenting a general pathloss model for in-body OWCs. In
particular, we use experimental measurements in order to extract analytical
expressions for the absorption coefficients of the five main tissue
constituents, namely oxygenated and de-oxygenated blood, water, fat, and
melanin. Building upon these expressions, we derive a general formula for the
absorption coefficient evaluation of any biological tissue. To verify the
validity of this formula, we compute the absorption coefficient of complex
tissues and compare them against respective experimental results reported by
independent research works. Interestingly, we observe that the analytical
formula has high accuracy and is capable of modeling the pathloss and,
therefore, the penetration depth in complex tissues.
|
We theoretically analyze a possibility of electromagnetic wave emission due
to electron transitions between spin subbands in a ferromagnet. Different
mechanisms of such spin-flip transitions are considered. One mechanism is the
electron transitions caused by magnetic field of the wave. Another mechanism is
due to the Rashba spin-orbit interaction. While the two mechanisms mentioned
exist in a homogeneously magnetized ferromagnet, there are two other mechanisms
that arise only in a non-collinearly magnetized medium. The first is known and
is due to the dependence of the exchange interaction constant on the
quasimomentum of conduction electrons. The second exists in any non-collinearly
magnetized medium. We study these mechanisms in a non-collinear ferromagnet with
helicoidal magnetization distribution. The probabilities of electron
transitions due to the different mechanisms are estimated for realistic
parameters, and the mechanisms are compared. We also estimate the radiation power
and threshold current in a simple model in which spin is injected into the
ferromagnet by a spin-polarized electric current through a tunnel barrier.
|
We consider the sharp interface limit for the scalar-valued and vector-valued
Allen-Cahn equation with homogeneous Neumann boundary condition in a bounded
smooth domain $\Omega$ of arbitrary dimension $N\geq 2$ in the situation when a
two-phase diffuse interface has developed and intersects the boundary
$\partial\Omega$. The limit problem is mean curvature flow with
$90${\deg}-contact angle and we show convergence in strong norms for
well-prepared initial data as long as a smooth solution to the limit problem
exists. To this end we assume that the limit problem has a smooth solution on
$[0,T]$ for some time $T>0$. Based on the latter we construct suitable
curvilinear coordinates and set up an asymptotic expansion for the
scalar-valued and the vector-valued Allen-Cahn equation. Finally, we prove a
spectral estimate for the linearized Allen-Cahn operator in both cases in order
to estimate the difference of the exact and approximate solutions with a
Gronwall-type argument.
|
We prove that the only possible flow on an Alexandroff $T_{0}$-space is the
trivial one. By way of motivation, we relate Alexandroff spaces to
topological hyperspaces.
|
Transformer models have demonstrated superior performance in natural language
processing. The dot product self-attention in Transformer allows us to model
interactions between words. However, this modeling comes with significant
computational overhead. In this work, we revisit the memory-compute trade-off
associated with Transformer, particularly multi-head attention, and show a
memory-heavy but significantly more compute-efficient alternative to
Transformer. Our proposal, denoted as PairConnect, a multilayer perceptron
(MLP), models the pairwise interaction between words by explicit pairwise word
embeddings. As a result, PairConnect replaces the self-attention dot product with a simple
embedding lookup. We show mathematically that despite being an MLP, our
compute-efficient PairConnect is strictly more expressive than Transformer. Our
experiment on language modeling tasks suggests that PairConnect could achieve
comparable results with Transformer while reducing the computational cost
associated with inference significantly.
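The core idea, replacing the attention dot product with a lookup into explicit pairwise word embeddings, can be sketched as follows. This toy version (dense pair table and mean aggregation; the real model's parameterization may differ, e.g. using hashing to keep memory tractable) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 50, 8  # toy vocabulary size and embedding dimension
pair_table = rng.standard_normal((V, V, d))  # explicit pairwise embeddings

def pairconnect_mix(tokens):
    """For each position i, aggregate the looked-up pair embeddings
    E[w_i, w_j] over all positions j -- no dot product is computed."""
    n = len(tokens)
    out = np.zeros((n, d))
    for i, wi in enumerate(tokens):
        for wj in tokens:
            out[i] += pair_table[wi, wj]
        out[i] /= n
    return out

h = pairconnect_mix([3, 17, 42, 3])
print(h.shape)  # (4, 8)
```

The memory-compute trade-off is visible here: the table is O(V^2 d), but mixing a sequence costs only lookups and additions.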
|
Multi-agent collision-free trajectory planning and control subject to
different goal requirements and system dynamics has been extensively studied,
and is gaining recent attention in the realm of machine and reinforcement
learning. However, in particular when using a large number of agents,
constructing a least-restrictive collision avoidance policy is of utmost
importance for both classical and learning-based methods. In this paper, we
propose a Least-Restrictive Collision Avoidance Module (LR-CAM) that evaluates
the safety of multi-agent systems and takes over control only when needed to
prevent collisions. The LR-CAM is a single policy that can be wrapped around
policies of all agents in a multi-agent system. It allows each agent to pursue
any objective as long as it is safe to do so. The benefit of the proposed
least-restrictive policy is to only interrupt and overrule the default
controller in case of an upcoming inevitable danger. We use a Long Short-Term
Memory (LSTM) based Variational Auto-Encoder (VAE) to enable the LR-CAM to
account for a varying number of agents in the environment. Moreover, we propose
an off-policy meta-reinforcement learning framework with a novel reward
function based on a Hamilton-Jacobi value function to train the LR-CAM. The
proposed method is fully meta-trained through a ROS-based simulation and tested
on a real multi-agent system. Our results show that LR-CAM outperforms the
classical least-restrictive baseline by 30 percent. In addition, we show that
even if a subset of agents in a multi-agent system use LR-CAM, the success rate
of all agents will increase significantly.
|
In this article we give two different proofs of the fact that there
exist relatively prime positive integers $a,b$ such that $a^2+ab+b^2=7^n$.
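The claim is easy to verify computationally for small exponents; the brute-force search below (not one of the article's proofs) finds a coprime pair for each of the first few powers of 7.

```python
from math import gcd, isqrt

def coprime_rep(n):
    """Search for coprime positive a >= b with a^2 + a*b + b^2 == 7**n."""
    target = 7 ** n
    for a in range(1, isqrt(target) + 1):
        for b in range(1, a + 1):
            v = a * a + a * b + b * b
            if v == target and gcd(a, b) == 1:
                return a, b
            if v > target:
                break
    return None

for n in range(1, 5):
    print(n, coprime_rep(n))  # e.g. n=1 -> (2, 1), since 4 + 2 + 1 = 7
```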
|
We theoretically analyze the typical learning performance of
$\ell_{1}$-regularized linear regression ($\ell_1$-LinR) for Ising model
selection using the replica method from statistical mechanics. For typical
random regular graphs in the paramagnetic phase, an accurate estimate of the
typical sample complexity of $\ell_1$-LinR is obtained. Remarkably, despite the
model misspecification, $\ell_1$-LinR is model selection consistent with the
same order of sample complexity as $\ell_{1}$-regularized logistic regression
($\ell_1$-LogR), i.e., $M=\mathcal{O}\left(\log N\right)$, where $N$ is the
number of variables of the Ising model. Moreover, we provide an efficient
method to accurately predict the non-asymptotic behavior of $\ell_1$-LinR for
moderate $M, N$, such as precision and recall. Simulations show a fairly good
agreement between theoretical predictions and experimental results, even for
graphs with many loops, which supports our findings. Although this paper mainly
focuses on $\ell_1$-LinR, our method is readily applicable for precisely
characterizing the typical learning performances of a wide class of
$\ell_{1}$-regularized $M$-estimators including $\ell_1$-LogR and interaction
screening.
|
We consider the recent surge of information on the potential benefits of
acid-suppression drugs in the context of COVID-19, with an eye on the
variability (and confusion) across the reported findings--at least as regards
the popular antacid famotidine. The inconsistencies reflect contradictory
conclusions from independent clinical-based studies that took roughly similar
approaches, in terms of experimental design (retrospective, cohort-based, etc.)
and statistical analyses (propensity-score matching and stratification, etc.).
The confusion has significant ramifications in choosing therapeutic
interventions: e.g., do potential benefits of famotidine indicate its use in a
particular COVID-19 case? Beyond this pressing therapeutic issue, conflicting
information on famotidine must be resolved before its integration in
ontological and knowledge graph-based frameworks, which in turn are useful in
drug repurposing efforts. To begin systematically structuring the rapidly
accumulating information, in the hopes of clarifying and reconciling the
discrepancies, we consider the contradictory information along three proposed
'axes': (1) a context-of-disease axis, (2) a degree-of-[therapeutic]-benefit
axis, and (3) a mechanism-of-action axis. We suspect that incongruencies in how
these axes have been (implicitly) treated in past studies have led to the
contradictory indications for famotidine and COVID-19. We also trace the
evolution of information on acid-suppression agents as regards the
transmission, severity, and mortality of COVID-19, given the many literature
reports that have accumulated. By grouping the studies conceptually and
thematically, we identify three eras in the progression of our understanding of
famotidine and COVID-19. Harmonizing these findings is a key goal for both
clinical standards-of-care (COVID and beyond) as well as ontological and
knowledge graph-based approaches.
|
We consider the asymmetric simple exclusion process (ASEP) with forward
hopping rate 1, backward hopping rate q and periodic boundary conditions. We
show that the Bethe equations of ASEP can be decoupled, at all orders in
perturbation in the variable q, by introducing a formal Laurent series mapping
the Bethe roots of the totally asymmetric case q=0 (TASEP) to the Bethe roots
of ASEP. The probability of the height for ASEP is then written as a single
contour integral on the Riemann surface on which symmetric functions of TASEP
Bethe roots live.
|
In this study, we investigated a method allowing the determination of the
femur bone surface as well as its mechanical axis from some easy-to-identify
bony landmarks. The reconstruction of the whole femur is therefore performed
from these landmarks using a Statistical Shape Model (SSM). The aim of this
research is to assess the impact of the number, the position, and the
accuracy of the landmarks for the reconstruction of the femur and the
determination of its related mechanical axis, an important clinical parameter
to consider for the lower limb analysis. Two statistical femur models were
created from our in-house dataset and a publicly available dataset. Both were
evaluated in terms of average point-to-point surface distance error and through
the mechanical axis of the femur. Furthermore, the clinical impact of using
landmarks on the skin in replacement of bony landmarks is investigated. The
proximal femurs predicted from bony landmarks were more accurate than those
predicted from on-skin landmarks, while both had less than 3.5 degrees of
mechanical axis angle deviation. The results regarding the non-invasive determination of the
mechanical axis are very encouraging and could open very interesting clinical
perspectives for the analysis of the lower limb either for orthopedics or
functional rehabilitation.
|
Several novel statistical methods have been developed to estimate large
integrated volatility matrices based on high-frequency financial data. To
investigate their asymptotic behaviors, they require a sub-Gaussian or finite
high-order moment assumption for observed log-returns, which cannot account for
the heavy tail phenomenon of stock returns. Recently, a robust estimator was
developed to handle heavy-tailed distributions with some bounded fourth-moment
assumption. However, we often observe that log-returns have heavier tails than
a finite fourth moment allows, and that the degree of heaviness of the
tails is heterogeneous across assets and time periods. In this paper, to deal
with the heterogeneous heavy-tailed distributions, we develop an adaptive
robust integrated volatility estimator that employs pre-averaging and
truncation schemes based on jump-diffusion processes. We call this an adaptive
robust pre-averaging realized volatility (ARP) estimator. We show that the ARP
estimator has a sub-Weibull tail concentration with only finite 2$\alpha$-th
moments for any $\alpha>1$. In addition, we establish matching upper and lower
bounds to show that the ARP estimation procedure is optimal. To estimate large
integrated volatility matrices using the approximate factor model, the ARP
estimator is further regularized using the principal orthogonal complement
thresholding (POET) method. The numerical study is conducted to check the
finite sample performance of the ARP estimator.
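The two schemes the estimator combines can be illustrated with a deliberately simplified sketch (illustrative only: the weights, thresholds, and bias corrections of the actual ARP estimator are omitted): pre-averaging damps microstructure noise, and truncation discards jump- or tail-contaminated blocks.

```python
import numpy as np

rng = np.random.default_rng(42)
r = 0.01 * rng.standard_normal(1000)  # diffusive log-returns
r[500] += 5.0                          # inject one large jump

def truncated_pre_averaged_rv(r, K=10, c=5.0):
    """Pre-average K consecutive returns, then truncate blocks whose
    magnitude exceeds c robust (MAD-based) standard deviations."""
    p = np.convolve(r, np.ones(K) / K, mode="valid")  # pre-averaging
    scale = np.median(np.abs(p)) / 0.6745             # robust scale
    kept = p[np.abs(p) <= c * scale]                  # truncation
    return K * np.sum(kept ** 2)                      # crude rescaling

naive = np.sum(r ** 2)                 # inflated by the jump
robust = truncated_pre_averaged_rv(r)
print(robust < naive)                  # True
```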
|
Context. Turbulent transport in stellar radiative zones is a key ingredient
of stellar evolution theory, but the anisotropy of the transport due to the
stable stratification and the rotation of these regions is poorly understood.
The assumption of shellular rotation, which is a cornerstone of the so-called
rotational mixing, relies on an efficient horizontal transport. However, this
transport is included in many stellar evolution codes through phenomenological
models that have never been tested.
Aims. We investigate the impact of horizontal shear on the anisotropy of
turbulent transport.
Methods. We used a relaxation approximation (also known as {\tau}
approximation) to describe the anisotropising effect of stratification,
rotation, and shear on a background turbulent flow by computing velocity
correlations.
Results. We obtain new theoretical scalings for velocity correlations that
include the effect of horizontal shear. These scalings show an enhancement of
turbulent motions, which would lead to a more efficient transport of chemicals
and angular momentum, in better agreement with helio- and asteroseismic
observations of rotation in the whole Hertzsprung-Russell diagram. Moreover, we
propose a new choice for the non-linear time used in the relaxation
approximation, which characterises the source of the turbulence.
Conclusions. For the first time, we describe the effect of stratification,
rotation, and vertical and horizontal shear on the anisotropy of turbulent
transport in stellar radiative zones. The new prescriptions need to be
implemented in stellar evolution calculations. To do so, it may be necessary to
implement non-diffusive transport.
|
Industrial robots can solve very complex tasks in controlled environments,
but modern applications require robots able to operate in unpredictable
surroundings as well. An increasingly popular reactive policy architecture in
robotics is Behavior Trees, but as with other architectures, programming time
still drives cost and limits flexibility. There are two main branches of
algorithms to generate policies automatically, automated planning and machine
learning, both with their own drawbacks. We propose a method for generating
Behavior Trees using a Genetic Programming algorithm and combining the two
branches by taking the result of an automated planner and inserting it into the
population. Experimental results confirm that the proposed method of combining
planning and learning performs well on a variety of robotic assembly problems
and outperforms both of the base methods used separately. We also show that
this type of high level learning of Behavior Trees can be transferred to a real
system without further training.
|
Kernel segmentation aims at partitioning a data sequence into several
non-overlapping segments that may have nonlinear and complex structures. In
general, it is formulated as a discrete optimization problem with combinatorial
constraints. A popular algorithm for optimally solving this problem is dynamic
programming (DP), which has quadratic computation and memory requirements.
Given that sequences in practice are too long, this algorithm is not a
practical approach. Although many heuristic algorithms have been proposed to
approximate the optimal segmentation, they have no guarantee on the quality of
their solutions. In this paper, we take a differentiable approach to alleviate
the aforementioned issues. First, we introduce a novel sigmoid-based
regularization to smoothly approximate the combinatorial constraints. Combining
it with the objective of balanced kernel clustering, we formulate a
differentiable model termed Kernel clustering with sigmoid-based regularization
(KCSR), where the gradient-based algorithm can be exploited to obtain the
optimal segmentation. Second, we develop a stochastic variant of the proposed
model. By using the stochastic gradient descent algorithm, which has much lower
time and space complexities, for optimization, the second model can perform
segmentation on overlong data sequences. Finally, for simultaneously segmenting
multiple data sequences, we slightly modify the sigmoid-based regularization to
further introduce an extended variant of the proposed model. Through extensive
experiments on various types of data sequences, the performances of our models are
evaluated and compared with those of the existing methods. The experimental
results validate the advantages of the proposed models. Our Matlab source code is
available on GitHub.
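The sigmoid relaxation can be illustrated as follows: real-valued change points induce soft segment memberships as differences of sigmoids, so the assignment is differentiable in the change-point locations. This is a minimal sketch of the general idea, not the full KCSR objective.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_memberships(t, change_points, tau=0.1):
    """Soft assignment of indices t to the segments delimited by the
    real-valued change points; differentiable w.r.t. change_points."""
    cps = np.sort(change_points)
    # a sigmoid "step" at each change point; temperature tau sets sharpness
    steps = [sigmoid((t - c) / tau) for c in cps]
    edges = [np.ones_like(t)] + steps + [np.zeros_like(t)]
    # membership of segment k is the difference of consecutive steps
    M = np.stack([edges[k] - edges[k + 1] for k in range(len(edges) - 1)])
    return M  # shape (K+1, len(t)); columns sum to 1

t = np.arange(20, dtype=float)
M = soft_memberships(t, np.array([6.5, 13.5]))
print(M.shape, np.allclose(M.sum(axis=0), 1.0))  # (3, 20) True
```

As tau shrinks, the memberships approach hard 0/1 segment indicators, recovering the combinatorial constraint.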
|
Phishing is the number one threat on the internet. Phishing attacks
have existed for decades, and with each passing year they become a greater problem for
internet users as attackers come up with unique and creative ideas to breach
security. In this paper, different types of phishing and anti-phishing
techniques are presented. For this purpose, the Systematic Literature
Review(SLR) approach is followed to critically define the proposed research
questions. Initially, 80 articles were extracted from different repositories.
These articles were then filtered out using Tollgate Approach to find out
different types of phishing and anti-phishing techniques. The study
found that spear phishing, email spoofing, email manipulation, and phone
phishing are the most commonly used phishing techniques. On the other hand,
according to the SLR, machine learning approaches have the highest accuracy of
preventing and detecting phishing attacks among all other anti-phishing
approaches.
|
This paper proposes a model-free nonparametric estimator of the conditional
quantile of a time series regression model where the covariate vector is
repeated many times for different values of the response. This type of data
abounds in climate studies. To tackle such problems, our proposed method
exploits the replicated nature of the data and improves on the restrictive linear
model structure of conventional quantile regression. Relevant asymptotic theory
for the nonparametric estimators of the mean and variance function of the model
is derived under a very general framework. We provide a detailed simulation
study which clearly demonstrates the gain in efficiency of the proposed method
over other benchmark models, especially when the true data generating process
entails nonlinear mean function and heteroskedastic pattern with time dependent
covariates. The predictive accuracy of the non-parametric method is remarkably
high compared to other methods when attention is on the higher quantiles of the
variable of interest. Usefulness of the proposed method is then illustrated
with two climatological applications, one with a well-known tropical cyclone
wind-speed data and the other with an air pollution data.
|
Polarimetric imaging is one of the most effective techniques for
high-contrast imaging and characterization of circumstellar environments. These
environments can be characterized through direct-imaging polarimetry at
near-infrared wavelengths. The SPHERE/IRDIS instrument installed on the Very
Large Telescope in its dual-beam polarimetric imaging (DPI) mode, offers the
capability to acquire polarimetric images at high contrast and high angular
resolution. However, dedicated image processing is needed to remove the
contamination by stellar light, instrumental polarization effects, and
the blurring by the instrumental point spread function. We aim to
reconstruct and deconvolve the near-infrared polarization signal from
circumstellar environments. We use observations of these environments obtained
with the high-contrast imaging infrared polarimeter SPHERE-IRDIS at the VLT. We
developed a new method to extract the polarimetric signal using an
inverse-problem approach that benefits from the added knowledge of the detected signal
formation process. The method includes weighted data fidelity term, smooth
penalization, and takes into account instrumental polarization. The method
enables to accurately measure the polarized intensity and angle of linear
polarization of circumstellar disks by taking into account the noise statistics
and the convolution of the observed objects by the instrumental point spread
function. It has the capability to use incomplete polarimetry cycles which
enhance the sensitivity of the observations. The method improves the overall
performances in particular for low SNR/small polarized flux compared to
standard methods.
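The weighted-fidelity-plus-smoothness idea can be sketched as regularized least-squares deconvolution solved by gradient descent. This is a simplification: the quadratic smoothness penalty, periodic boundary handling, and fixed step size are assumptions, and the actual method additionally models polarimetric cycles and instrumental polarization.

```python
import numpy as np

def deconvolve(d, psf, w, mu=0.05, lr=0.5, n_iter=200):
    """Minimize sum(w * (h*x - d)^2) + mu * ||grad x||^2 by gradient descent.
    d: observed image, psf: centered and normalized PSF, w: per-pixel weights."""
    H = np.fft.fft2(np.fft.ifftshift(psf))          # PSF in Fourier domain
    conv = lambda img, K: np.fft.ifft2(K * np.fft.fft2(img)).real
    x, obj = d.copy(), []
    for _ in range(n_iter):
        r = conv(x, H) - d                          # data residual
        gx = np.roll(x, -1, 0) - x                  # forward differences
        gy = np.roll(x, -1, 1) - x
        obj.append(np.sum(w * r ** 2) + mu * np.sum(gx ** 2 + gy ** 2))
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)
        grad = conv(w * r, np.conj(H)) - mu * lap   # (half) objective gradient
        x -= lr * grad
    return x, obj
```

The per-pixel weights `w` are what let the objective reflect the noise statistics (e.g. inverse variances) and drop missing polarimetric frames by zeroing their weights, rather than discarding an entire cycle.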
|
Traditional and deep learning-based fusion methods generate an intermediate
decision map and obtain the fused image through a series of post-processing
procedures. However, the fusion results generated by these methods tend to
lose some source image details or produce artifacts. Inspired by deep
learning-based image reconstruction techniques, we propose a multi-focus image
fusion network framework without any post-processing, solving these problems
in an end-to-end, supervised learning manner. To sufficiently train the fusion
model, we generated a large-scale multi-focus image dataset with ground-truth
fusion images. Moreover, to obtain a more informative fusion image, we design
a novel fusion strategy based on unity fusion attention, which is composed of
a channel attention module and a spatial attention module. Specifically, the
proposed fusion approach comprises three key components: feature extraction,
feature fusion, and image reconstruction. We first use seven convolutional
blocks to extract the image features from the source images. Then, the
extracted convolutional features are fused by the proposed fusion strategy in
the feature fusion layer. Finally, the fused image features are reconstructed
by four convolutional blocks. Experimental results demonstrate that the
proposed approach achieves remarkable fusion performance compared to 19
state-of-the-art fusion methods.
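A minimal numpy sketch of channel- then spatial-attention-weighted fusion of two feature maps may clarify the idea; the gating functions and the soft focus-map rule below are illustrative assumptions, not the paper's exact unity fusion attention module:

```python
import numpy as np

def channel_attention(f):
    """Weight channels by a softmax over their global average activations."""
    w = f.mean(axis=(1, 2))
    w = np.exp(w - w.max())
    w /= w.sum()
    return f * w[:, None, None]

def spatial_attention(f):
    """Gate each pixel by a sigmoid of the channel-averaged activation."""
    m = f.mean(axis=0)
    return f * (1.0 / (1.0 + np.exp(-m)))[None, :, :]

def unity_fusion(f1, f2):
    """Fuse two (C, H, W) feature maps with a soft per-pixel focus map
    derived from channel- then spatial-attended activations."""
    a1 = spatial_attention(channel_attention(f1))
    a2 = spatial_attention(channel_attention(f2))
    s1, s2 = np.abs(a1).sum(axis=0), np.abs(a2).sum(axis=0)
    mask = s1 / (s1 + s2 + 1e-8)            # weight toward source 1
    return mask[None] * f1 + (1.0 - mask[None]) * f2
```

Because the focus map is soft rather than a binarized decision map, no post-processing step is needed; wherever one source's attended activations dominate, its pixels pass through nearly unchanged.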
|
In this work, the aim is to study the spread of a contagious disease and
information on a multilayer social system. The main idea is to find a criterion
under which the adoption of the spreading information blocks or suppresses the
epidemic spread. A two-layer network is the base of the model. The first layer
describes the direct contact interactions, while the second layer is the
information propagation layer. Both layers consist of the same nodes. The
society consists of five categories of individuals: susceptible, infective,
recovered, vaccinated, and precautioned. Initially, only one infected
individual starts transmitting the infection. Direct contact interactions
spread the infection to the susceptibles. The information spreads through the
second layer. The SIR model is employed for the infection spread, while the
Bass equation models the adoption of information. The control parameters of the
competition between the spread of information and spread of disease are the
topology and the density of connectivity. The topology of the information layer
is a scale-free network with increasing density of edges. In the contact layer,
regular and scale-free networks with the same average degree per node are used
interchangeably. We observe that increasing complexity of the contact network
reduces the role of individual awareness. If the contact layer consists of
networks with limited-range connections, or its edges are sparser than those
of the information network, the spread of information plays a significant role
in controlling the epidemic.
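A toy discrete-time realization of the two-layer dynamics can make the setup concrete. The parameter values and the rule that aware susceptibles become precautioned are illustrative assumptions; the paper's exact update equations may differ.

```python
import numpy as np

def simulate(contact_adj, info_adj, beta=0.3, gamma=0.1,
             p=0.01, q=0.1, steps=200, seed=0):
    """SIR contagion on the contact layer plus Bass-style information
    adoption on the information layer; aware susceptibles become
    'precautioned' and leave the susceptible pool."""
    rng = np.random.default_rng(seed)
    n = len(contact_adj)
    state = np.zeros(n, dtype=int)        # 0=S, 1=I, 2=R, 3=precautioned
    aware = np.zeros(n, dtype=bool)
    state[rng.integers(n)] = 1            # a single initial infective
    deg = np.maximum(info_adj.sum(axis=1), 1.0)
    for _ in range(steps):
        infected = state == 1
        pressure = contact_adj @ infected.astype(float)
        new_inf = (state == 0) & (rng.random(n) < 1 - (1 - beta) ** pressure)
        # Bass adoption: innovation p plus imitation q * aware-neighbor fraction
        frac = (info_adj @ aware.astype(float)) / deg
        aware |= rng.random(n) < p + q * frac
        state[(state == 0) & aware] = 3   # awareness wins ties with infection
        state[new_inf & (state == 0)] = 1
        state[infected & (rng.random(n) < gamma)] = 2
    return state

# ring lattice with 4 neighbors per node as a minimal limited-range contact layer
n = 200
ring = np.zeros((n, n))
for d in (1, 2):
    ring += (np.eye(n, k=d) + np.eye(n, k=-d)
             + np.eye(n, k=n - d) + np.eye(n, k=-(n - d)))
final = simulate(ring, ring)
```

Swapping `ring` for a scale-free adjacency matrix in either layer, while keeping the average degree fixed, is exactly the kind of topology comparison the abstract describes.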
|
This paper first formalises a new observability concept, called weak regular
observability, that is adapted to Fast Moving Horizon Estimation where one aims
to estimate the state of a nonlinear system efficiently on rolling time windows
in the case of small initial error. Additionally, sufficient conditions for
weak regular observability are provided in a problem of Simultaneous
Localisation and Mapping (SLAM) for different measurement models. In
particular, it is shown that following circular trajectories leads to weak
regular observability in a second-order 2D SLAM problem with several possible
types of sensors.
|
The dark matter halo surface density, given by the product of the dark matter
core radius ($r_c$) and core density ($\rho_c$), has been shown to be a
constant for a wide range of isolated galaxy systems. Here, we carry out a test of this
{\em ansatz} using a sample of 17 relaxed galaxy groups observed using Chandra
and XMM-Newton, as an extension of our previous analysis with galaxy clusters.
We find that $\rho_c \propto r_c^{-1.35^{+0.16}_{-0.17}}$, with an intrinsic
scatter of about 27.3%, which is about 1.5 times larger than that seen for
galaxy clusters. Our results thereby indicate that the surface density is
discrepant with respect to scale invariance by about 2$\sigma$, and its value
is about four times greater than that for galaxies. Therefore, the elevated
values of the halo surface density for groups and clusters indicate that the
surface density cannot be a universal constant for all dark matter dominated
systems. Furthermore, we also implement a test of the radial acceleration
relation for this group sample. We find that the residual scatter in the
radial acceleration relation is about 0.32 dex, a factor of three larger than
that obtained using galaxy clusters. The acceleration scale which we obtain is
intermediate between those seen for galaxies and clusters.
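The scaling fit itself is a straight-line regression in log-log space. The sketch below uses synthetic data, with a slope and scatter chosen to mimic the quoted numbers (27% fractional scatter is roughly 0.12 dex), not the actual group measurements, and ignores measurement errors on both axes:

```python
import numpy as np

rng = np.random.default_rng(1)
true_slope = -1.35
log_rc = rng.uniform(-1.0, 1.0, 17)     # 17 groups, as in the sample
log_rho = true_slope * log_rc + 0.3 + rng.normal(0.0, 0.12, 17)

# fit log(rho_c) = slope * log(r_c) + intercept
slope, intercept = np.polyfit(log_rc, log_rho, 1)
resid = log_rho - (slope * log_rc + intercept)
scatter_dex = resid.std()
```

A full analysis would instead use a Bayesian linear regression with intrinsic scatter and per-point uncertainties, but the log-log straight line is the underlying model in both cases.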
|
The nature of unconventional superconductivity is intimately linked to the
microscopic nature of the pairing interactions. In this work, motivated by
cubic heavy fermion compounds with embedded multipolar moments, we
theoretically investigate superconducting instabilities instigated by
multipolar Kondo interactions. Employing multipolar fluctuations (mediated by
RKKY interaction) coupled to conduction electrons via two-channel Kondo and
novel multipolar Kondo interactions, we uncover a variety of superconducting
states characterized by higher-angular momentum Cooper pairs, $J=0,1,2,3$. We
demonstrate that both odd and even parity pairing functions are possible,
regardless of the total angular momentum of the Cooper pairs, which can be
traced back to the atypical nature of the multipolar Kondo interaction that
intertwines conduction electron spin and orbital degrees of freedom. We
determine that different (point-group) irrep-classified pairing functions may
coexist with each other, with some of them characterized by gapped and
point-node structures in their corresponding quasiparticle spectra. This work lays
the foundation for discovery and classification of superconducting states in
rare-earth metallic compounds with multipolar local moments.
|
Satellites are both crucial and, despite common misbelief, very fragile parts
of our civilian and military critical infrastructure. While many efforts are
focused on securing the ground and space segments, especially when national
security or large business interests are affected, the small-sat, new-space
revolution democratizes access to, and exploitation of, the near-Earth orbits.
This brings new players to the market, typically in the form of small to
medium-sized companies, offering new or more affordable services. Despite the
necessity and inevitability of this process, it also opens potential new
avenues for targeted attacks against space-related infrastructure. Since
sources of satellite ephemerides are very often centralized, they are subject
to classical man-in-the-middle attacks, which open the way for TLE spoofing
attacks that may result in unnecessary collision-avoidance maneuvers in the
best case, and orchestrated crashes in the worst case. In this work, we
propose a countermeasure to this problem in the form of a distributed
solution, with no central authority responsible for storing and disseminating
TLE information. Instead, each of the peers participating in the system has
full access to all of the records stored in the system and distributes the
data in a consensual manner, ensuring information replication at each peer
node. This way, the single-point-of-failure syndromes of classic systems,
which currently exist due to the direct ephemeris distribution mechanism, are
removed. Our proposed solution is to build data dissemination systems using
permissioned, private ledgers where peers have strong and verifiable
identities, which also allows for redundancy in SST data sourcing.
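For context, each TLE line carries a modulo-10 checksum in its final (69th) column. The checksum detects accidental corruption but offers no protection against deliberate spoofing, since an attacker can simply recompute it, which is precisely why authenticated, replicated dissemination is attractive. A small verifier of the standard checksum rule:

```python
def tle_checksum(line):
    """Modulo-10 sum over the first 68 characters of a TLE line:
    digits count as their value, '-' counts as 1, everything else as 0."""
    total = 0
    for ch in line[:68]:
        if ch.isdigit():
            total += int(ch)
        elif ch == '-':
            total += 1
    return total % 10

def verify_tle_line(line):
    """True if the line is 69 characters long and its final digit
    matches the computed checksum."""
    return (len(line) == 69 and line[68].isdigit()
            and int(line[68]) == tle_checksum(line))
```

In the proposed setting, this per-line check would complement, not replace, the ledger's peer signatures: the checksum catches transmission errors, while verifiable peer identities and consensual replication defend against tampering.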
|
Opioid Use Disorder (OUD) is a public health crisis costing the US billions
of dollars annually in healthcare, lost workplace productivity, and crime.
Analyzing longitudinal healthcare data is critical in addressing many
real-world problems in healthcare. Leveraging the real-world longitudinal
healthcare data, we propose a novel multi-stream transformer model called MUPOD
for OUD identification. MUPOD is designed to simultaneously analyze multiple
types of healthcare data streams, such as medications and diagnoses, by
attending to segments within and across these data streams. Tested on data
from 392,492 patients with long-term back pain problems, our model showed
significantly better performance than traditional models and recently
developed deep learning models.
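A bare-bones sketch of attention across two aligned streams may illustrate the "within and across" idea; the single-head, unparameterized form and the (T, d) stream shapes are simplifying assumptions, and MUPOD's actual architecture is richer:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_stream_attention(med, diag):
    """Queries come from the medication stream; keys/values come from the
    concatenation of both streams, so each time step can attend to
    segments within its own stream and across the other.
    med, diag: (T, d) arrays of per-visit embeddings."""
    kv = np.concatenate([med, diag], axis=0)        # (2T, d)
    scores = med @ kv.T / np.sqrt(med.shape[1])     # (T, 2T) scaled dot products
    att = softmax(scores, axis=-1)
    return att @ kv, att
```

The first T columns of `att` capture within-stream attention and the last T columns cross-stream attention, which is what lets co-occurring medication and diagnosis segments reinforce each other.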
|
This paper presents an analytical study of the metric properties of the
paraboloidal double projection, i.e. the central and orthogonal projections
used in the catadioptric camera system. Metric properties have not been
sufficiently studied in previous treatments of such systems. These properties
incorporate the determination of the true lengths of projected lines and of
areas bounded by projected lines. The main gain of determining the metric
elements of the paraboloidal double projection lies in distortion analysis and
camera calibration, which are considered essential tools in testing camera
accuracy. It may also serve as a significant utility in comparative analysis
between different camera projection systems.
|
Antiferromagnetic materials have the potential to offer ultrafast,
high-data-density spintronic devices. A significant challenge is the reliable
detection of the state of the antiferromagnet, which can be achieved using
exchange bias. Here we develop an atomistic spin model of the athermal
training effect, a well-known phenomenon in exchange-biased systems where the
bias is significantly reduced after the first hysteresis cycle. We find that the
setting process in granular thin films relies on the presence of interfacial
mixing between the ferromagnetic and antiferromagnetic layers. We
systematically investigate the effect of the intermixing and find that the
exchange bias, switching field and coercivity all increase with increased
intermixing. The interfacial spin state is highly frustrated leading to a
systematic decrease in interfacial ordering of the ferromagnet. This metastable
spin structure of initially irreversible spins leads to a large effective
exchange coupling and thus large increase in the switching field. After the
first hysteresis cycle these metastable spins drop into a reversible ground
state that is repeatable for all subsequent hysteresis cycles, demonstrating
that the effect is truly athermal. Our simulations provide new insights into
the role of interface mixing and the importance of metastable spin structures
in exchange-biased systems, which could help with the design and optimisation
of antiferromagnetic spintronic devices.
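The loop shift that defines exchange bias can be illustrated with a toy macrospin whose switching fields are offset by a unidirectional bias field. This is a deliberate caricature, a single Ising-like moment with fixed coercivity, not the atomistic, interfacially mixed model developed in the paper:

```python
import numpy as np

def hysteresis_loop(hc=1.0, heb=0.3, n=400):
    """Square hysteresis loop of an Ising-like macrospin: the moment flips
    when the effective field h + heb exceeds the coercive field hc against
    it, so the loop is centered at -heb rather than zero."""
    h_down = np.linspace(2.0, -2.0, n)     # field sweep down
    h_up = np.linspace(-2.0, 2.0, n)       # field sweep back up
    m, loop = 1.0, []
    for h in np.concatenate([h_down, h_up]):
        if m > 0 and h + heb < -hc:
            m = -1.0
        elif m < 0 and h + heb > hc:
            m = 1.0
        loop.append((h, m))
    return np.array(loop)

loop = hysteresis_loop()
h, m = loop[:, 0], loop[:, 1]
h_sw_down = h[:400][np.argmax(m[:400] < 0)]   # field where m flips down
h_sw_up = h[400:][np.argmax(m[400:] > 0)]     # field where m flips up
center = 0.5 * (h_sw_down + h_sw_up)          # loop center, approx -heb
```

In the atomistic picture of the paper, the first cycle effectively reduces `heb` as metastable interfacial spins drop into a reversible ground state; in this caricature that would simply mean rerunning the sweep with a smaller bias field.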
|