We formulate gradient-based Markov chain Monte Carlo (MCMC) sampling as optimization on the space of probability measures, with Kullback-Leibler (KL) divergence as the objective functional. We show that an under-damped form of the Langevin algorithm perfor ...
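The underdamped Langevin dynamics referred to above can be illustrated numerically. The following is a minimal sketch (a plain Euler-Maruyama discretization targeting a standard Gaussian; the paper's actual algorithm and analysis are not reproduced here):

```python
import numpy as np

def underdamped_langevin(grad_U, x0, v0, n_steps, h=0.01, gamma=2.0, seed=0):
    """Euler-Maruyama discretization of underdamped (kinetic) Langevin dynamics:
        dx = v dt
        dv = -gamma * v dt - grad_U(x) dt + sqrt(2 * gamma) dW
    whose x-marginal targets the density proportional to exp(-U(x))."""
    rng = np.random.default_rng(seed)
    x, v = x0, v0
    xs = np.empty(n_steps)
    for t in range(n_steps):
        v = v - h * (gamma * v + grad_U(x)) + np.sqrt(2 * gamma * h) * rng.standard_normal()
        x = x + h * v
        xs[t] = x
    return xs

# Target: standard Gaussian, U(x) = x^2 / 2, so grad_U(x) = x.
samples = underdamped_langevin(lambda x: x, x0=0.0, v0=0.0, n_steps=200_000)
burned = samples[50_000:]  # discard burn-in; mean ~ 0, variance ~ 1
```

The auxiliary velocity variable `v` is what distinguishes the underdamped form from the overdamped Langevin algorithm; the friction coefficient `gamma` and step size `h` here are arbitrary illustrative choices.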
The COVID-19 pandemic has demonstrated the importance and value of multi-period asset allocation strategies responding to rapid changes in market behavior. In this article, we formulate and solve a multi-stage stochastic optimization problem, choosing the ...
Surprise-based learning allows agents to rapidly adapt to nonstationary stochastic environments characterized by sudden changes. We show that exact Bayesian inference in a hierarchical model gives rise to a surprise-modulated trade-off between forgetting o ...
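The forgetting-versus-integrating trade-off described above can be caricatured with a surprise-modulated learning rate. This is an illustrative rule only (the functional form below is an assumption, not the paper's exact Bayesian derivation):

```python
import numpy as np

def surprise_modulated_mean(obs, sigma=1.0, base_lr=0.05, m=0.2):
    """Track the mean of a drifting Gaussian. The (unnormalized Shannon)
    surprise of each observation under the current estimate scales the
    learning rate toward 1, so surprising data are integrated quickly
    (forgetting old evidence) while unsurprising data barely move the
    estimate. Illustrative form only."""
    mu = 0.0
    estimates = []
    for y in obs:
        surprise = 0.5 * ((y - mu) / sigma) ** 2          # -log N(y; mu, sigma^2) up to constants
        gamma = (base_lr + m * surprise) / (1.0 + base_lr + m * surprise)  # in (0, 1)
        mu = (1.0 - gamma) * mu + gamma * y
        estimates.append(mu)
    return np.array(estimates)

rng = np.random.default_rng(1)
# Stationary segment, then an abrupt change of the environment at t = 200.
obs = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(5.0, 1.0, 200)])
est = surprise_modulated_mean(obs)
```

After the change point the large surprise pushes `gamma` toward 1, so the estimate jumps to the new mean within a few observations instead of averaging over the whole history.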
Path integrals play a crucial role in describing the dynamics of physical systems subject to classical or quantum noise. In fact, when correctly normalized, they express the probability of transition between two states of the system. In this work, we show ...
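For the classical-noise case, the transition-probability reading of a path integral can be written schematically (Onsager-Machlup form for overdamped Langevin dynamics, omitting normalization and Jacobian terms; this is standard background, not the paper's result):

```latex
% Overdamped Langevin dynamics: dx = f(x)\,dt + \sqrt{2D}\,dW
P(x_f, t_f \mid x_i, t_i)
  \;\propto\; \int_{x(t_i)=x_i}^{x(t_f)=x_f} \mathcal{D}[x]\;
  \exp\!\left( - \int_{t_i}^{t_f} \frac{\bigl(\dot{x} - f(x)\bigr)^2}{4D}\, dt \right)
```

Here the exponent is the Onsager-Machlup action; correct normalization of the measure $\mathcal{D}[x]$ is exactly what makes the left-hand side a genuine transition probability.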
In distributed computing, many papers try to evaluate the message complexity of a distributed system as a function of the number of nodes n. But what about the cost of building the distributed system itself? Assuming that we want to reliably connect n node ...
This paper analyzes the trajectories of stochastic gradient descent (SGD) to help understand the algorithm’s convergence properties in non-convex problems. We first show that the sequence of iterates generated by SGD remains bounded and converges with prob ...
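The kind of trajectory analysis described above can be illustrated on a toy non-convex objective. The following sketch runs SGD with Robbins-Monro step sizes on noisy gradients of $f(x) = (x^2 - 1)^2$ (illustrative setup only; the objective, noise model, and step-size schedule are assumptions, not the paper's):

```python
import numpy as np

def sgd_trajectory(grad, x0, n_steps, a=0.5, b=10.0, noise=0.5, seed=0):
    """SGD with decreasing step sizes a/(t+b) on gradients corrupted by
    additive Gaussian noise; returns the full trajectory of iterates."""
    rng = np.random.default_rng(seed)
    x = x0
    xs = [x]
    for t in range(n_steps):
        g = grad(x) + noise * rng.standard_normal()
        x = x - (a / (t + b)) * g
        xs.append(x)
    return np.array(xs)

# Non-convex objective f(x) = (x^2 - 1)^2 with minimizers at x = +/-1.
grad_f = lambda x: 4.0 * x * (x**2 - 1.0)
traj = sgd_trajectory(grad_f, x0=2.0, n_steps=20_000)
```

On this example the iterates stay bounded and settle near one of the two minimizers, matching the qualitative behavior (boundedness, convergence to critical points) that the abstract claims in general.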
Introduction
Stereotactic radiosurgery (SRS) is a valuable treatment option for persistent and/or recurrent acromegaly secondary to growth hormone (GH) secreting pituitary adenoma (PA). Here, we assess the role of biological effective dose (BED) received b ...