We present a strikingly simple proof that two rules are sufficient to automate gradient descent: 1) don’t increase the stepsize too fast and 2) don’t overstep the local curvature. No need for function values, no line search, no information about the func ...
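To make the two rules concrete, here is a minimal Python sketch of a gradient-only adaptive stepsize. The growth factor and the curvature estimate below are illustrative choices consistent with the abstract, not necessarily the paper's exact constants.

    import numpy as np

    def adaptive_gd(grad, x0, steps=1000, lam0=1e-6):
        """Gradient descent with an automated stepsize (illustrative sketch).

        Rule 1: never grow the stepsize faster than a fixed factor.
        Rule 2: keep it below a local inverse-curvature estimate,
                ||x_k - x_{k-1}|| / (2 ||grad(x_k) - grad(x_{k-1})||).
        Uses gradients only: no function values, no line search.
        """
        x_prev, g_prev = x0, grad(x0)
        lam = lam0
        x = x_prev - lam * g_prev
        for _ in range(steps):
            g = grad(x)
            diff_g = np.linalg.norm(g - g_prev)
            # Rule 2: don't overstep the local curvature.
            curv_cap = (np.linalg.norm(x - x_prev) / (2.0 * diff_g)
                        if diff_g > 0 else np.inf)
            # Rule 1: don't increase the stepsize too fast (factor is illustrative).
            lam = min(np.sqrt(2.0) * lam, curv_cap)
            x_prev, g_prev = x, g
            x = x - lam * g
        return x

    # Example: minimize the quadratic f(x) = 0.5 * x^T A x with ill-conditioned A.
    A = np.diag([1.0, 10.0, 100.0])
    x_min = adaptive_gd(lambda x: A @ x, np.ones(3))  # approaches the zero vector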
This paper introduces a new algorithm for consensus optimization in a multi-agent network, where all agents collaboratively find a minimizer of the sum of their private functions. All decentralized algorithms rely on communication between adjacent nodes. ...
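For context, a standard baseline in this setting is decentralized gradient descent, where each agent mixes its iterate with its neighbors' and then takes a local gradient step. The sketch below illustrates consensus optimization in general, not the algorithm introduced in this paper; the mixing matrix and stepsize are assumed for the example.

    import numpy as np

    def decentralized_gd(grads, W, x0, alpha=0.01, steps=500):
        """Decentralized gradient descent (DGD), a generic consensus baseline.

        grads: list of per-agent gradient oracles for the private f_i
        W: doubly stochastic mixing matrix; W[i, j] > 0 only for adjacent nodes
        x0: array of shape (n_agents, dim), one local iterate per agent
        """
        x = x0.copy()
        for _ in range(steps):
            mixed = W @ x                      # communicate with adjacent nodes
            local = np.stack([g(x[i]) for i, g in enumerate(grads)])
            x = mixed - alpha * local          # local gradient step
        return x.mean(axis=0)                  # consensus estimate

    # Example: three agents minimizing sum_i 0.5 * (x - b_i)^2 on a triangle graph.
    b = [0.0, 1.0, 2.0]
    grads = [lambda x, bi=bi: x - bi for bi in b]
    W = np.array([[0.5, 0.25, 0.25],
                  [0.25, 0.5, 0.25],
                  [0.25, 0.25, 0.5]])
    x_hat = decentralized_gd(grads, W, np.zeros((3, 1)))  # close to the mean, 1.0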
Many important problems in contemporary machine learning involve solving highly non-convex problems in sampling, optimization, or games. The absence of convexity poses significant challenges to convergence analysis of most training algorithms, and in some ...
We introduce a probabilistic modeling approach for the pedestrian speed-density relationship. It is motivated by the high scatter in real data, which precludes the use of traditional equilibrium relationships. To characterize the observed pattern, we relax the hom ...
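To illustrate the contrast drawn here, the sketch below compares a deterministic equilibrium speed-density relation (Weidmann's fundamental diagram, a standard choice) with a simple probabilistic alternative that assigns a distribution of speeds to each density. The lognormal form and all parameter values are assumptions for illustration, not the model estimated in the paper.

    import numpy as np

    def weidmann_speed(density, v_free=1.34, gamma=1.913, rho_jam=5.4):
        """Deterministic equilibrium relation (Weidmann's fundamental diagram):
        exactly one speed per density, so it cannot reproduce scattered data."""
        return v_free * (1.0 - np.exp(-gamma * (1.0 / density - 1.0 / rho_jam)))

    def sample_speed(density, rng, sigma=0.15):
        """Probabilistic relation: treat speed given density as a random variable
        (here, lognormal around the equilibrium curve), so a whole distribution
        of speeds is possible at each density level."""
        mean = weidmann_speed(density)
        return rng.lognormal(np.log(mean), sigma)

    rng = np.random.default_rng(0)
    rho = 2.0  # pedestrians per square meter
    print(weidmann_speed(rho))                          # one deterministic speed
    print([sample_speed(rho, rng) for _ in range(3)])   # scattered speeds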
Exploiting the full potential of pedestrian infrastructures is becoming vital in many environments that cannot be easily expanded to cope with the increasing demand. This is particularly true of train stations in many dense cities, where space is limited. ...
We propose a novel class of variance-reduced stochastic conditional gradient methods. By adopting the recent stochastic path-integrated differential estimator technique (SPIDER) of Fang et al. (2018) for the classical Frank-Wolfe (FW) method, we introduce ...
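As a rough sketch of the ingredients named here, the code below combines the SPIDER gradient estimator (periodic full-gradient refreshes plus minibatch gradient differences) with Frank-Wolfe updates over an l1-ball. The epoch length, batch size, and stepsize schedule are illustrative assumptions, not the authors' exact parameters.

    import numpy as np

    def lmo_l1(g, radius=1.0):
        """Linear minimization oracle over the l1-ball: argmin_{||s||_1 <= r} <g, s>."""
        s = np.zeros_like(g)
        i = np.argmax(np.abs(g))
        s[i] = -radius * np.sign(g[i])
        return s

    def spider_fw(grad_i, n, x0, epochs=20, epoch_len=50, batch=8, rng=None):
        """Sketch of a SPIDER-type Frank-Wolfe method: the estimator v tracks the
        full gradient via minibatch gradient differences, with a full-gradient
        refresh at the start of each epoch. grad_i(x, i) returns the gradient of
        the i-th component function."""
        rng = rng or np.random.default_rng(0)
        x = x0.copy()
        k = 0
        for _ in range(epochs):
            v = np.mean([grad_i(x, i) for i in range(n)], axis=0)  # full refresh
            x_prev = x.copy()
            for _ in range(epoch_len):
                gamma = 2.0 / (k + 2)                 # classical FW stepsize
                x = x + gamma * (lmo_l1(v) - x)       # Frank-Wolfe update
                S = rng.integers(0, n, size=batch)
                # SPIDER recursion: correct v by minibatch gradient differences.
                v = v + np.mean([grad_i(x, i) - grad_i(x_prev, i) for i in S], axis=0)
                x_prev = x.copy()
                k += 1
        return x

    # Example: minimize (1/n) * sum_i 0.5 * (a_i . x - b_i)^2 over the unit l1-ball.
    rng = np.random.default_rng(1)
    A_mat = rng.normal(size=(100, 10))
    x_true = np.zeros(10)
    x_true[0] = 0.5
    b_vec = A_mat @ x_true
    grad_fn = lambda x, i: (A_mat[i] @ x - b_vec[i]) * A_mat[i]
    x_hat = spider_fw(grad_fn, 100, np.zeros(10))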