<h1>Cheat Sheet: Acceleration from First Principles</h1>
<p><em>2019-06-10</em></p>
<p><em>TL;DR: Cheat Sheet for a derivation of acceleration from optimization first principles.</em>
<!--more--></p>
<p><em>Posts in this series (so far).</em></p>
<ol>
<li><a href="/blog/research/2018/12/07/cheatsheet-smooth-idealized.html">Cheat Sheet: Smooth Convex Optimization</a></li>
<li><a href="/blog/research/2018/10/05/cheatsheet-fw.html">Cheat Sheet: Frank-Wolfe and Conditional Gradients</a></li>
<li><a href="/blog/research/2018/10/19/cheatsheet-fw-lin-conv.html">Cheat Sheet: Linear convergence for Conditional Gradients</a></li>
<li><a href="/blog/research/2018/11/12/heb-conv.html">Cheat Sheet: Hölder Error Bounds (HEB) for Conditional Gradients</a></li>
<li><a href="/blog/research/2019/02/27/cheatsheet-nonsmooth.html">Cheat Sheet: Subgradient Descent, Mirror Descent, and Online Learning</a></li>
<li><a href="/blog/research/2019/06/10/cheatsheet-acceleration-first-principles.html">Cheat Sheet: Acceleration from First Principles</a></li>
</ol>
<p><em>My apologies for incomplete references—this should merely serve as an overview.</em></p>
<p>Acceleration in smooth convex optimization has been met with awe and has been the subject of extensive research over the last years. In a nutshell, acceleration provides an “unexpected” speedup in smooth convex optimization; we will be concerned with acceleration in the Nesterov sense [N1], [N2]. We consider the problem</p>
<script type="math/tex; mode=display">\tag{prob}
\min_{x \in \RR^n} f(x),</script>
<p>where $f$ is an $L$-smooth and $\mu$-strongly convex function. Then with standard arguments that we review below we can show that we need roughly $t(\varepsilon) = \Theta(\frac{\mu}{L} \log \frac{1}{
\varepsilon})$ iterations (of e.g., gradient descent) to achieve a primal gap</p>
<script type="math/tex; mode=display">f(x_{t(\varepsilon)}) - f(x^\esx) \leq \varepsilon,</script>
<p>where $x^\esx$ is the (unique) optimal solution to (prob). Accelerated methods achieve the same accuracy in $\Theta(\sqrt{\frac{L}{\mu}} \log \frac{1}{\varepsilon})$ iterations, which can be a huge improvement in running time.</p>
<p>By now we have various proofs, explanations, and analyses of the phenomenon. To name a few: for a very concise analysis of acceleration in the smooth and (non-strongly) convex case, there is a very nice proof on <a href="https://blogs.princeton.edu/imabandit/2018/11/21/a-short-proof-for-nesterovs-momentum/">Sébastien Bubeck’s blog</a>, which also provides a nice overview of and links to other methods such as Polyak’s method [P] (Sébastien also has a very nice post about <a href="https://blogs.princeton.edu/imabandit/2019/01/09/nemirovskis-acceleration/">Nemirovski’s acceleration with line search</a>), and on <a href="https://distill.pub/2017/momentum/">distill there is a very nice post</a> that explains momentum and acceleration for quadratics. There has also been recent work that understands acceleration as a linear coupling of mirror descent and gradient descent [AO], and other work explains acceleration as arising from a “better” discretization of the continuous-time dynamics (see [SBC] and follow-up work). An ellipsoid-method-like accelerated algorithm was derived in [BLS], providing some nice geometric intuition. Another very interesting perspective on acceleration, by means of polynomial approximation and Chebyshev polynomials, is given on <a href="http://blog.mrtz.org/2013/09/07/the-zen-of-gradient-descent.html">Moritz Hardt’s blog</a>; in fact I like this quite a bit as a possible explanation of the origin of acceleration. Very recently, [DO] presented a unifying framework for the analysis of first-order methods that significantly streamlines the analysis of more complex first-order methods; we will carry out the derivation below in that framework and provide a brief introduction to it further below.</p>
<p>What this post is about is not providing yet another <em>analysis of acceleration</em> and proving that a <em>given algorithm</em> indeed achieves an improved rate—there are already many excellent resources out there. Rather, what I will try to do is to provide a (relatively) natural <em>derivation of acceleration</em> (and associated algorithm) from optimization first principles, such as smoothness, (strong) convexity, first-order optimality, and Taylor expansions. In particular: no estimated point sequences, no lookahead or extrapolation, no momentum, no quadratic equation magic, no Chebyshev polynomials (although they are awesome!), and no guessing of secret constants: everything will follow (arguably) naturally although I was told that all of the aforementioned can be easily recovered.</p>
<p><em>Disclaimer: Just to be clear, fundamentally nothing new is going to happen here but rather I will provide a somewhat natural derivation of acceleration, which is sliced and diced together from [DO] and some recent work with Alejandro Carderera and Jelena Diakonikolas. Also, note that the derivation below can be significantly compressed but I opted for a more verbose exposition to emphasize that there is no hidden magic.</em></p>
<h2 id="how-we-got-here-the-basic-argument">How we got here: the basic argument</h2>
<p>We will first recall the standard proof of linear convergence of (vanilla) gradient descent for problems of the form (prob) and use this as an opportunity to introduce and recall definitions; see warmup section in <a href="/blog/research/2018/10/05/cheatsheet-fw.html">Cheat Sheet: Frank-Wolfe and Conditional Gradients</a> for an in-depth discussion of these concepts. In the following, let $x^\esx$ denote the (unique) optimal solution to (prob).</p>
<p>We will use the following (standard) definitions:</p>
<p class="mathcol"><strong>Definition (convexity).</strong> A differentiable function $f$ is said to be <em>convex</em> if for all $x,y \in \mathbb R^n$ it holds: <script type="math/tex">f(y) - f(x) \geq \langle \nabla f(x), y-x\rangle</script>.</p>
<p class="mathcol"><strong>Definition (smoothness).</strong> A convex function $f$ is said to be <em>$L$-smooth</em> if for all $x,y \in \mathbb R^n$ it holds: <script type="math/tex">f(y) - f(x) \leq \langle \nabla f(x), y-x\rangle + \frac{L}{2} \norm{x-y}^2</script>.</p>
<p class="mathcol"><strong>Definition (strong convexity).</strong> A convex function $f$ is said to be <em>$\mu$-strongly convex</em> if for all $x,y \in \mathbb R^n$ it holds: <script type="math/tex">f(y) - f(x) \geq \langle \nabla f(x), y-x\rangle + \frac{\mu}{2} \norm{x-y}^2</script>.</p>
<p>(Strong) convexity provides an underestimator of the function whereas smoothness provides an overestimator:</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/convexity.png" alt="Convexity and smoothness" /></p>
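<p>These two estimator inequalities are easy to check numerically. Below is a minimal sketch of my own (not part of the derivation): for a quadratic $f(x) = \frac{1}{2} x^\top A x$ with $A$ positive definite, $L$ and $\mu$ are the largest and smallest eigenvalues of $A$; the quadratic instance is purely an illustrative assumption.</p>

```python
import numpy as np

# Sanity check of the estimators on f(x) = 1/2 x^T A x with A positive
# definite; for this f, L = lambda_max(A) and mu = lambda_min(A).
# (The quadratic instance is an illustrative assumption, not from the post.)
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)               # symmetric positive definite
eigs = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
mu, L = eigs[0], eigs[-1]

f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

x, y = rng.standard_normal(5), rng.standard_normal(5)
bregman = f(y) - f(x) - grad(x) @ (y - x)   # f(y) - f(x) - <grad f(x), y - x>
assert bregman <= L / 2 * (x - y) @ (x - y) + 1e-9    # smoothness (overestimator)
assert bregman >= mu / 2 * (x - y) @ (x - y) - 1e-9   # strong convexity (underestimator)
```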
<p>Now suppose that we consider (vanilla) gradient descent with updates of the form</p>
<script type="math/tex; mode=display">\tag{GD}
x_{t+1} \leftarrow x_t - \frac{1}{L} \nabla f(x_t).</script>
<p>Plugging this update into the definition of smoothness, we immediately obtain:</p>
<script type="math/tex; mode=display">\tag{progress}
\underbrace{f(x_{t}) - f(x_{t+1})}_{\text{primal progress}} \geq \frac{\norm{\nabla f(x_t)}^2}{2L}.</script>
<p>Similarly, with standard arguments we obtain from strong convexity the upper bound on the primal gap:</p>
<script type="math/tex; mode=display">f(x_t) - f(x^\esx) \leq \frac{\norm{\nabla f(x_t)}^2}{2 \mu}.</script>
<p>We can now simply plug the upper bound into the progress inequality (progress) to obtain:</p>
<script type="math/tex; mode=display">f(x_{t}) - f(x_{t+1}) \geq \frac{\norm{\nabla f(x_t)}^2}{2L} \geq \frac{\mu}{L} (f(x_t) - f(x^\esx)),</script>
<p>or, via rewriting,</p>
<script type="math/tex; mode=display">f(x_{t+1}) - f(x^\esx) \leq \left(1- \frac{\mu}{L}\right) (f(x_t) - f(x^\esx)),</script>
<p>so that we obtain the coveted linear rate:</p>
<script type="math/tex; mode=display">f(x_t) - f(x^\esx) \leq \left(1- \frac{\mu}{L}\right)^t (f(x_0) - f(x^\esx)).</script>
<p>While this is great, the best-known lower bound only rules out rates faster than $\Theta(\sqrt{\frac{L}{\mu}} \log \frac{1}{\varepsilon})$, so that we are potentially quadratically slower than the best possible. Acceleration closes this gap, achieving a convergence rate of $\Theta(\sqrt{\frac{L}{\mu}} \log \frac{1}{\varepsilon})$, which is optimal.</p>
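<p>The per-iteration contraction derived above can be verified directly. The following sketch (my own; the diagonal quadratic is an illustrative assumption with $x^\esx = 0$ and $f(x^\esx) = 0$, so the primal gap is just $f(x)$) checks that every (GD) step contracts the gap by at least $1 - \frac{\mu}{L}$:</p>

```python
import numpy as np

# Sketch: run (GD) on a toy diagonal quadratic (an illustrative assumption)
# and check the per-step contraction f(x_{t+1}) <= (1 - mu/L) f(x_t);
# the optimum is x* = 0 with f(x*) = 0, so the primal gap equals f(x).
rng = np.random.default_rng(1)
n = 20
diag = np.linspace(1.0, 100.0, n)
mu, L = 1.0, 100.0
f = lambda x: 0.5 * x @ (diag * x)
grad = lambda x: diag * x

x0 = rng.standard_normal(n)
x = x0.copy()
for _ in range(200):
    gap_before = f(x)
    x = x - grad(x) / L                      # (GD) step with step size 1/L
    assert f(x) <= (1 - mu / L) * gap_before + 1e-12
```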
<h3 id="information-left-on-the-table">Information left on the table</h3>
<p>A natural question to ask now is of course why gradient descent cannot achieve the optimal rate and whether this is a problem with the algorithm or with the analysis (an important question one should ask routinely). For example, going from sublinear convergence for a smooth and (non-strongly) convex function to linear convergence for a smooth and strongly convex function <em>does not require any change in the algorithm</em>, e.g., in (vanilla) gradient descent; rather it is a <em>better analysis</em> that establishes the better rate. A close examination of the argument from above shows that in each iteration $t+1$ we basically rely on two inequalities:</p>
<p>Smoothness at $x_t$:</p>
<script type="math/tex; mode=display">f(y) - f(x_t) \leq \langle \nabla f(x_t), y-x_t\rangle + \frac{L}{2} \norm{x_t-y}^2.</script>
<p>Strong convexity at $x_t$:</p>
<script type="math/tex; mode=display">f(y) - f(x_t) \geq \langle \nabla f(x_t), y-x_t\rangle + \frac{\mu}{2} \norm{x_t-y}^2.</script>
<p>The key point is that we use <em>significantly less</em> information than we actually have available: in fact, in iteration $t+1$, we have iterates $x_0, \dots, x_t$ and for <em>each</em> of them we have these two inequalities. In particular, we have the strong convexity lower bound at each of $x_0, \dots, x_t$, potentially providing a much better lower approximation of $f$ than the bound from the last iterate $x_t$ alone. Roughly, in pictures it looks like this, where the left panel uses only last-iterate information for the lower bound and the right panel uses all previous iterates:</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/acc/MSCSM-comb.png" alt="MSC and SM inequalities" /></p>
<p>Can this additional information be used to improve convergence?</p>
<h2 id="acceleration">Acceleration</h2>
<p>We will now try to derive our accelerated method. To this end assume that we already have a <em>hypothetical</em> algorithm that has generated iterates $y_0, \dots, y_t$ by some (as of now unknown) rule.</p>
<h3 id="a-better-lower-bound">A better lower bound</h3>
<p>Let us see whether we can use the additional information from previous iterates to obtain a better lower bound or approximation for our function. Given the sequence of iterates $y_0, \dots, y_t$ we have the following family of inequalities from strong convexity:</p>
<script type="math/tex; mode=display">f(y_i) + \langle \nabla f(y_i), z - y_i \rangle + \frac{\mu}{2} \norm{y_i-z}^2 \leq f(z),</script>
<p>for $i \in \setb{0, \dots, t}$. Moreover, we can take any nonnegative combination of these inequalities with weights $a_0, \dots, a_t \geq 0$ (not all zero) and obtain:</p>
<script type="math/tex; mode=display">\sum_{i = 0}^t a_i [f(y_i) + \langle \nabla f(y_i), z - y_i \rangle + \frac{\mu}{2} \norm{y_i-z}^2] \leq \underbrace{\left(\sum_{i = 0}^t a_i\right)}_{\doteq A_t} f(z),</script>
<p>or equivalently:</p>
<script type="math/tex; mode=display">\frac{1}{A_t}\sum_{i = 0}^t a_i [f(y_i) + \langle \nabla f(y_i), z - y_i \rangle + \frac{\mu}{2} \norm{y_i-z}^2] \leq f(z).</script>
<p>This inequality holds for all $z \in \RR^n$, in particular for the optimal solution $x^\esx \in \RR^n$. We do not know $x^\esx$, however; what we do know, taking the minimum over $z \in \RR^n$ on both sides, is the bound:</p>
<script type="math/tex; mode=display">L_t \doteq \min_{z \in \RR^n} \frac{1}{A_t}\sum_{i = 0}^t a_i [f(y_i) + \langle \nabla f(y_i), z - y_i \rangle + \frac{\mu}{2} \norm{y_i-z}^2] \leq \min_{z \in \RR^n} f(z) = f(x^\esx).</script>
<p>Note that $L_t$ is a function of $a_0, \dots, a_t$ and $y_0, \dots, y_t$ and clearly the strength of the bound depends heavily on both; we will get back to this later. Next let us compute the minimizer of $L_t$. Clearly, $L_t$ is smooth and (strongly) convex in $z$ and the first-order optimality condition leads to the equation:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
0 & = \frac{1}{A_t}\sum_{i = 0}^t a_i [\nabla f(y_i) - \mu (y_i - z) ] \\
& = \mu z + \frac{1}{A_t}\sum_{i = 0}^t a_i [\nabla f(y_i) - \mu y_i ] \\
\Leftrightarrow z &= \frac{1}{A_t}\sum_{i = 0}^t a_i [y_i - \frac{1}{\mu}\nabla f(y_i)],
\end{align*} %]]></script>
<p>so that the (dual) lower bound is optimized by</p>
<script type="math/tex; mode=display">\tag{dualSeq}
w_t \leftarrow \frac{1}{A_t}\sum_{i = 0}^t a_i [y_i - \frac{1}{\mu}\nabla f(y_i)],</script>
<p>which can also be written recursively using $A_t \doteq \sum_{i = 0}^t a_i$ as:</p>
<script type="math/tex; mode=display">\tag{dualSeqRec}
w_t \leftarrow \frac{A_{t-1}}{A_t} w_{t-1} + \frac{a_t}{A_t} [y_t - \frac{1}{\mu}\nabla f(y_t)].</script>
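<p>That (dualSeqRec) reproduces the weighted average (dualSeq) is a quick computation, and also easy to confirm numerically. A small sketch of my own, where the points $y_i$, the weights $a_i$, and the quadratic supplying the gradients are all illustrative assumptions:</p>

```python
import numpy as np

# Sketch checking that the recursion (dualSeqRec) reproduces the weighted
# average (dualSeq); the y_i, the weights a_i, and the quadratic supplying
# the gradients are arbitrary illustrative assumptions.
rng = np.random.default_rng(2)
n, T = 4, 6
diag = np.linspace(0.5, 3.0, n)
mu = 0.5
grad = lambda y: diag * y

ys = [rng.standard_normal(n) for _ in range(T)]
a = rng.uniform(0.1, 1.0, size=T)            # positive weights a_0, ..., a_{T-1}

# closed form (dualSeq): w = (1/A_T) sum_i a_i [y_i - (1/mu) grad f(y_i)]
w_closed = sum(a_i * (y - grad(y) / mu) for a_i, y in zip(a, ys)) / a.sum()

# recursion (dualSeqRec)
A_t = a[0]
w = ys[0] - grad(ys[0]) / mu
for t in range(1, T):
    A_prev, A_t = A_t, A_t + a[t]
    w = (A_prev / A_t) * w + (a[t] / A_t) * (ys[t] - grad(ys[t]) / mu)

assert np.allclose(w, w_closed)
```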
<h3 id="primal-steps">Primal steps</h3>
<p>For the primal steps we do exactly the same thing as before in gradient descent; let us first see how far we can get by only adjusting the lower bound. Therefore, given the sequence $y_0, \dots, y_t$ in iteration $t$, we define the primal steps as</p>
<script type="math/tex; mode=display">\tag{primalSeq}
x_t \leftarrow y_t - \frac{1}{L} \nabla f(y_t),</script>
<p>which is the same update as in (GD), but we do not use the update to (directly) define the next iterate $y_{t+1}$; rather, let us give these points a different name until we decide what to do with them.</p>
<h3 id="interlude-approximate-dual-gap-technique-adgt">Interlude: Approximate Dual Gap Technique (ADGT)</h3>
<p>Having now a (hopefully) better lower bound we need to see what we can do with it. For this we will use the <em>Approximate Dual Gap Technique (ADGT)</em> of [DO], a conceptually simple yet powerful technique to analyze first-order methods. In a nutshell, ADGT works as follows: Our ultimate aim is to prove that for some first-order algorithm that generates iterates $x_0, \dots, x_t, \dots$ the <em>optimality gap</em> $f(x_t) - f(x^\esx) \rightarrow 0$ with a certain convergence rate. Usually, it is very hard to say something about $f(x_t) - f(x^\esx)$ directly and so typically analyses use bounds on the optimality gap.</p>
<p>ADGT makes this explicit in a first step by working with a lower bound $L_t \leq f(x^\esx)$ and an upper bound $f(x_t) \leq U_t$ in iteration $t$, and then defining a <em>gap estimate</em> in iteration $t$ as $G_t \doteq U_t - L_t$, so that $f(x_t) - f(x^\esx) \leq G_t$ in each iteration $t$. Suppose further that there exists a sequence of suitably chosen, fast-growing numbers $0 \leq A_0, \dots, A_t, \dots$, so that</p>
<script type="math/tex; mode=display">A_t G_t \leq A_{t-1} G_{t-1}.</script>
<p>Then in particular the gap estimate in iteration $t$ drops as $G_t \leq \frac{A_{t-1}}{A_t} G_{t-1}$, and after chaining these bounds together we obtain $f(x_t) - f(x^\esx) \leq \frac{A_0}{A_t} G_0$, i.e., the convergence rate is basically given by $\frac{1}{A_t}$.</p>
<p>Before going back to our attempt at acceleration, let us familiarize ourselves with ADGT by analyzing vanilla gradient descent with update (GD). To this end let us consider a <em>simplified and stronger</em> lower bound, given by</p>
<script type="math/tex; mode=display">\hat L_t \doteq \frac{1}{A_t} \sum_{i = 0}^t a_i [f(x_i) + \langle \nabla f(x_i), x^\esx - x_i \rangle + \frac{\mu}{2} \norm{x_i-x^\esx}^2] \leq f(x^\esx),</script>
<p>where we chose $z = x^\esx$ with $A_t \doteq \sum_{i = 0}^t a_i$, so that we only have to pick the $a_i$ at some point. For the upper bound we simply choose $U_t \doteq f(x_{t+1})$; mind the index shift, as it will be important.</p>
<p>In order to show that $A_t G_t \leq A_{t-1} G_{t-1}$, we will analyze the upper bound change and the lower bound change separately as our goal is to show:</p>
<script type="math/tex; mode=display">0 \geq A_t G_t - A_{t-1} G_{t-1} = A_t U_t - A_{t-1} U_{t-1} - (A_t L_t - A_{t-1} L_{t-1}).</script>
<p><em>Change in upper bound.</em> <br />
The change in the upper bound can be bounded using smoothness with basically the same argument as for the vanilla GD warmup from above:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
A_t U_t - A_{t-1} U_{t-1} & = A_t f(x_{t+1}) - A_{t-1} f(x_{t}) = A_t (f(x_{t+1}) - f(x_{t})) + a_t f(x_{t}) \\
& \leq - A_t \left(\frac{\norm{\nabla f(x_{t})}^2}{2L}\right) + a_t f(x_{t}).
\end{align*} %]]></script>
<p><em>Change in lower bound.</em> <br />
The change in the lower bound follows from evaluating $\hat L_t$, which used the strong convexity of $f$ in its definition:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
A_t \hat L_t - A_{t-1} \hat L_{t-1} & = a_t [f(x_t) + \langle \nabla f(x_t), x^\esx - x_t \rangle + \frac{\mu}{2} \norm{x_t-x^\esx}^2]
\end{align*} %]]></script>
<p><em>Change in the gap estimate.</em> <br />
With this we immediately obtain that the change in the gap estimate is given by:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
A_t G_t - A_{t-1} G_{t-1} & \leq a_t (\underbrace{f(x_{t}) - f(x_t)}_{= 0}) - A_t \left(\frac{\norm{\nabla f(x_{t})}^2}{2L}\right) - a_t [\langle \nabla f(x_t), x^\esx - x_t \rangle + \frac{\mu}{2} \norm{x_t-x^\esx}^2] \\
& = - A_t \left(\frac{\norm{\nabla f(x_{t})}^2}{2L}\right) - \frac{a_t}{2\mu} [\underbrace{2\mu \langle \nabla f(x_t), x^\esx - x_t \rangle + \mu^2 \norm{x_t-x^\esx}^2}_{\geq - \norm{ \nabla f(x_t)}^2 \text{ via } a^2 + 2ab \geq - b^2}] \\
& \leq \norm{\nabla f(x_{t})}^2 \left( - \frac{A_t}{2L} + \frac{a_t}{2\mu} \right),
\end{align*} %]]></script>
<p>using the standard trick $a^2 + 2ab \geq - b^2$ that virtually every proof utilizing strong convexity uses (see <a href="/blog/research/2018/10/05/cheatsheet-fw.html">Cheat Sheet: Frank-Wolfe and Conditional Gradients</a> for a derivation of that estimation). Thus for</p>
<script type="math/tex; mode=display">A_t G_t - A_{t-1} G_{t-1} \leq 0,</script>
<p>it suffices to choose $a_t$, so that $- \frac{A_t}{2L} + \frac{a_t}{2\mu} \leq 0$ and the choice $\frac{a_t}{A_t} \doteq \frac{\mu}{L}$ suffices, leading to a contraction with rate:</p>
<script type="math/tex; mode=display">\frac{A_{t-1}}{A_t} = 1 - \frac{a_t}{A_t} = 1 - \frac{\mu}{L},</script>
<p>which is the standard rate and matches what we derived above in the warmup. In a last step one would now relate $G_0$ to the initial gap $f(x_0) - f(x^\esx)$ to obtain a bound on the constant $A_0$. We skip this step to keep the exposition clean; it is immediate here and not crucial for our discussion.</p>
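<p>The contraction $A_t G_t \leq A_{t-1} G_{t-1}$ can also be checked numerically. Below is a sketch of my own on a toy diagonal quadratic, where $x^\esx = 0$ so that the lower bound $\hat L_t$ is computable directly; the instance and the choice $a_0 = 1$ are illustrative assumptions.</p>

```python
import numpy as np

# Sketch: numeric check that A_t G_t is non-increasing for (GD) with the
# choice a_t / A_t = mu / L. Toy diagonal quadratic with x* = 0 (an
# illustrative assumption), so \hat L_t is computable directly.
n, steps = 10, 40
diag = np.linspace(2.0, 50.0, n)
mu, L = 2.0, 50.0
f = lambda x: 0.5 * x @ (diag * x)
grad = lambda x: diag * x

xs = [np.ones(n)]
for _ in range(steps):
    xs.append(xs[-1] - grad(xs[-1]) / L)     # (GD) iterates x_0, ..., x_steps

# weights with a_t / A_t = mu / L, i.e. A_t = A_{t-1} / (1 - mu/L); a_0 = 1
a, A_prev = [1.0], 1.0
for _ in range(1, steps):
    A_t = A_prev / (1 - mu / L)
    a.append(A_t - A_prev)
    A_prev = A_t

scaled_gaps = []
for t in range(steps):
    A_t = sum(a[: t + 1])
    # \hat L_t with z = x* = 0: weighted average of
    # f(x_i) + <grad f(x_i), -x_i> + mu/2 |x_i|^2
    L_hat = sum(a[i] * (f(xs[i]) - grad(xs[i]) @ xs[i] + mu / 2 * xs[i] @ xs[i])
                for i in range(t + 1)) / A_t
    G_t = f(xs[t + 1]) - L_hat               # U_t = f(x_{t+1}); mind the index shift
    scaled_gaps.append(A_t * G_t)

assert all(scaled_gaps[t] <= scaled_gaps[t - 1] + 1e-8 for t in range(1, steps))
```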
<p>Now is a good time to pause for a second. Initially we speculated that not using all available information might be the reason for not obtaining a better rate. Yet, in this argument we <em>have used</em> more information; in fact, in iteration $t$ we have used all iterates $x_0, \dots, x_t$ (see the definition of $\hat L_t$). Maybe this is because the iterates $x_t$ are obtained without <em>any regard</em> for the lower bound: while we now use all iterates, the resulting bound may not be much stronger than the bound arising from the last iterate alone. Could we strengthen it by a better choice of the $a_i$ and $y_i$ in the general definition of $L_t$?</p>
<h3 id="adgt-on-the-hypothetical-sequence">ADGT on the hypothetical sequence</h3>
<p>While ADGT may seem like overkill for the standard linear convergence proof compared to the argument from the warmup, we will soon see that it buys us considerable extra freedom. We now apply the same analysis once more; however, we start out with our hypothetical sequence $y_0, \dots, y_t, \dots$ and see whether a way naturally arises to choose the $y_t$ not only to produce primal progress, as the rule (GD) does, but also dual progress, <em>improving</em> our lower bound estimate by providing better “attachment points” $y_0, \dots, y_t$ from which we obtain a stronger lower bound $L_t$ via strong convexity.</p>
<p>Observe that, given the sequence of iterates $y_0, \dots, y_t$, our primal iterates $x_t$ and dual iterates $w_t$ are the <em>optimal updates</em> in iteration $t$ for primal and dual progress respectively. Note, that in general we do not know whether $w_t = x_t$ and usually they are <em>not</em> equal.</p>
<p>We follow the same strategy as above with the aim of analyzing the change in the gap estimate for our partially specified algorithm:</p>
<p><em>Change in upper bound.</em> <br />
The change in the upper bound can again be bounded using smoothness, now applied to the update $x_t \leftarrow y_t - \frac{1}{L} \nabla f(y_t)$ that defines our primal sequence, i.e., $f(x_t) - f(y_t) \leq - \frac{\norm{\nabla f(y_t)}^2}{2L}$. Otherwise, except for rearranging and adding zero, the argument is the same, now with the upper bound $U_t \doteq f(x_{t})$; note the index shift from $t+1$ to $t$ in the definition of $U_t$, as we now have the intermediate point $y_t$, as will become clear:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
A_t U_t - A_{t-1} U_{t-1} & = A_t f(x_t) - A_{t-1} f(x_{t-1}) \\
& = A_t f(y_t) - A_t f(y_t) + A_t f(x_t) - A_{t-1} f(x_{t-1}) \\
& = a_t f(y_t) + A_t (f(x_t) - f(y_t)) + A_{t-1} (f(y_{t}) - f(x_{t-1})) \\
& \leq a_t f(y_t) - A_t \left(\frac{\norm{\nabla f(y_t)}^2}{2L}\right) + A_{t-1} (f(y_{t}) - f(x_{t-1})).
\end{align*} %]]></script>
<p><em>Change in lower bound.</em> <br />
The change in the lower bound is more intricate this time around, however, as we now have an optimization problem in the definition of $L_t$ that defines the dual iterates $w_t$. Recall that:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
L_t & = \min_{z \in \RR^n} \frac{1}{A_t}\sum_{i = 0}^t a_i [f(y_i) + \langle \nabla f(y_i), z - y_i \rangle + \frac{\mu}{2} \norm{y_i-z}^2] \\
& = \frac{1}{A_t}\sum_{i = 0}^t a_i f(y_i) + \frac{1}{A_t}\min_{z \in \RR^n} \underbrace{\sum_{i = 0}^t a_i [\langle \nabla f(y_i), z - y_i \rangle + \frac{\mu}{2} \norm{y_i-z}^2]}_{\doteq \gamma_t(z)}.
\end{align*} %]]></script>
<p>With this we can conveniently express the change in the lower bound as:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
A_t L_t - A_{t-1} L_{t-1} & = a_t f(y_t) + \gamma_t(w_t) - \gamma_{t-1}(w_{t-1}),
\end{align*} %]]></script>
<p>so that it suffices to bound the change $\gamma_t(w_t) - \gamma_{t-1}(w_{t-1})$. We have:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\gamma_t(w_t) - \gamma_{t-1}(w_{t-1}) & = \gamma_{t-1}(w_{t}) + a_t \langle \nabla f(y_t), w_{t} - y_t \rangle + a_t\frac{\mu}{2} \norm{y_t-w_{t}}^2 - \gamma_{t-1}(w_{t-1}) \\
& = \gamma_{t-1}(w_{t-1}) + \underbrace{\langle \nabla \gamma_{t-1}(w_{t-1}), w_{t} - w_{t-1}\rangle}_{= 0\text{, as $w_{t-1}$ is a minimizer of $\gamma_{t-1}$}} + \frac{\mu A_{t-1}}{2} \norm{w_t - w_{t-1}}^2 \\ & \qquad \qquad + a_t \langle \nabla f(y_t), w_{t} - y_t \rangle + a_t \frac{\mu}{2} \norm{y_t-w_{t}}^2 - \gamma_{t-1}(w_{t-1}) \\
& = a_t \langle \nabla f(y_t), w_{t} - y_t \rangle + \frac{A_t\mu}{2}\left(\frac{A_{t-1}}{A_t} \norm{w_t - w_{t-1}}^2 + \frac{a_t}{A_t} \norm{y_t-w_{t}}^2 \right) \\
& \geq a_t \langle \nabla f(y_t), w_{t} - y_t \rangle + \frac{A_t \mu}{2} \norm{w_t - \frac{A_{t-1}}{A_t} w_{t-1} - \frac{a_t}{A_t} y_t}^2 \\
& = a_t \langle \nabla f(y_t), w_{t} - y_t \rangle + \frac{A_t \mu}{2} \norm{\frac{a_t}{A_t} \frac{1}{\mu}\nabla f(y_t)}^2,
\end{align*} %]]></script>
<p>where the second equation is by the Taylor expansion of $\gamma_{t-1}$ around $w_{t-1}$ evaluated at $w_t$, the inequality is by Jensen’s inequality, and the fourth equation is by the recursive definition (dualSeqRec) of $w_t$. With this we obtain that the change in the lower bound can be bounded as:</p>
<script type="math/tex; mode=display">A_t L_t - A_{t-1} L_{t-1} \geq a_t f(y_t) + a_t \langle \nabla f(y_t), w_{t} - y_t \rangle + \frac{A_t \mu}{2} \norm{\frac{a_t}{A_t} \frac{1}{\mu}\nabla f(y_t)}^2.</script>
<p><em>Change in the gap estimate.</em> <br />
With the above, we can now bound the change in the gap estimate, where the inequality is by convexity, via:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
A_t G_t - A_{t-1} G_{t-1} & \leq a_t f(y_t) - A_t \left(\frac{\norm{\nabla f(y_t)}^2}{2L}\right) + A_{t-1} (f(y_{t}) - f(x_{t-1})) - a_t f(y_t) \\ & \qquad \qquad - a_t \langle \nabla f(y_t), w_{t} - y_t \rangle - \frac{A_t \mu}{2} \norm{\frac{a_t}{A_t} \frac{1}{\mu}\nabla f(y_t)}^2 \\
& = - A_t \left(\frac{\norm{\nabla f(y_t)}^2}{2L} + \frac{a_t^2}{A_t^2} \frac{\norm{\nabla f(y_t)}^2}{2\mu} \right) + A_{t-1} (f(y_{t}) - f(x_{t-1})) - a_t \langle \nabla f(y_t), w_{t} - y_t \rangle \\
& \leq - A_t \left(\frac{\norm{\nabla f(y_t)}^2}{2L} + \frac{a_t^2}{A_t^2} \frac{\norm{\nabla f(y_t)}^2}{2\mu} \right) + A_{t-1} \langle \nabla f(y_t), y_{t} - x_{t-1} \rangle - a_t \langle \nabla f(y_t), w_{t} - y_t \rangle \\
& = - A_t \left(\frac{\norm{\nabla f(y_t)}^2}{2L} + \frac{a_t^2}{A_t^2} \frac{\norm{\nabla f(y_t)}^2}{2\mu} \right) + \langle \nabla f(y_t), A_t y_{t} - A_{t-1} x_{t-1} - a_t w_t \rangle.
\end{align*} %]]></script>
<p>While maybe a little tedious, so far nothing <em>special</em> has happened. We simply computed the change in the gap estimate via the change in the upper bound and the change in the lower bound. Now we need the right-hand side to be non-positive to complete the proof and derive the rate. Our goal is to show:</p>
<script type="math/tex; mode=display">\tag{gapCondition}
- A_t \left(\frac{\norm{\nabla f(y_t)}^2}{2L} + \frac{a_t^2}{A_t^2} \frac{\norm{\nabla f(y_t)}^2}{2\mu} \right) + \langle \nabla f(y_t), A_t y_{t} - A_{t-1} x_{t-1} - a_t w_t \rangle \leq 0,</script>
<p>which we would obtain, dividing by $A_t$ and defining $\tau \doteq \frac{a_t}{A_t}$, if we can ensure:</p>
<script type="math/tex; mode=display">\tag{impY}
y_{t} - (1-\tau) x_{t-1} - \tau w_t = \nabla f(y_t) \left(\frac{1}{2L} + \tau^2 \frac{1}{2\mu} \right),</script>
<p>as then the left-hand side in (gapCondition) evaluates to $0$. Note that (impY) <em>almost</em> provides a definition of the $y_t$, which is our last missing piece, but not quite: $w_t$ itself depends on $y_t$, and we would rather have it as an explicit function of $w_{t-1}$. Luckily, the recursive definition (dualSeqRec) of the $w_t$ allows us to unroll one step:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\nabla f(y_t) \left(\frac{1}{2L} + \tau^2 \frac{1}{2\mu} \right) & = y_{t} - (1-\tau) x_{t-1} - \tau w_t \\
& = y_t - (1-\tau) x_{t-1} - \tau (1-\tau) w_{t-1} - \tau^2 (y_t - \frac{1}{\mu} \nabla f(y_t)).
\end{align*} %]]></script>
<p>After rearranging, the above becomes:</p>
<script type="math/tex; mode=display">\tag{expY}
\nabla f(y_t) \left(\frac{1}{2L} - \tau^2 \frac{1}{2\mu} \right) = (1-\tau^2) y_t - (1-\tau) x_{t-1} - \tau (1-\tau) w_{t-1},</script>
<p>and we are free to make some choices. For $\tau = \frac{a_t}{A_t} \doteq \sqrt{\frac{\mu}{L}}$, the left-hand side becomes $0$ and after dividing by $(1-\tau)$, we obtain:</p>
<script type="math/tex; mode=display">(1+\tau) y_t = x_{t-1} + \tau w_{t-1},</script>
<p>which finally provides the desired definition of the $y_t$ and, recalling that we contract at a rate of $1-\frac{a_t}{A_t}$, we achieve a contraction of the gap at a rate of $1-\frac{a_t}{A_t} = 1-\tau = 1 - \sqrt{\frac{\mu}{L}}$ as required:</p>
<script type="math/tex; mode=display">\tag{accRate}
f(x_t) - f(x^\esx) \leq \left(1 - \sqrt{\frac{\mu}{L}}\right)^t (f(x_0) - f(x^\esx)).</script>
<script type="math/tex; mode=display">\qed</script>
<p>This completes our argument. Now that we are done with this exercise it is time to pause and recap. First, let us state the full algorithm; we only output the primal sequence $x_1, \dots, x_t, \dots$ here but the other sequences are useful as well:</p>
<p class="mathcol"><strong>Algorithm.</strong> (Accelerated Gradient Descent)<br />
<em>Input:</em> $L$-smooth and $\mu$-strongly convex function $f$. Initial point $x_0$. <br />
<em>Output:</em> Sequence of iterates $x_0, \dots, x_T$ <br />
$w_0 \leftarrow x_0$ <br />
$\tau \leftarrow \sqrt{\frac{\mu}{L}}$ <br />
For $t = 1, \dots, T$ do <br />
$\qquad$ $y_t \leftarrow \frac{1}{1 + \tau} x_{t-1} + \frac{\tau}{1 + \tau} w_{t-1} \qquad \text{{update mixing sequence $y_t$}}$ <br />
$\qquad$ $w_t \leftarrow (1-\tau) w_{t-1} + \tau (y_t - \frac{1}{\mu} \nabla f(y_t)) \qquad \text{{update dual sequence $w_t$}}$ <br />
$\qquad$ $x_t \leftarrow y_t - \frac{1}{L} \nabla f(y_t)\qquad \text{{update primal sequence $x_t$}}$ <br /></p>
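<p>For concreteness, the three updates above transcribe directly into code. The following sketch (my own; the badly conditioned diagonal quadratic with $x^\esx = 0$ is an illustrative assumption) runs the accelerated method next to vanilla (GD):</p>

```python
import numpy as np

# Direct transcription of the algorithm above, compared against vanilla (GD)
# on a badly conditioned toy quadratic (an illustrative assumption);
# the optimum is x* = 0 with f(x*) = 0.
n = 50
diag = np.linspace(1.0, 1e4, n)
mu, L = 1.0, 1e4
f = lambda x: 0.5 * x @ (diag * x)
grad = lambda x: diag * x

x0 = np.ones(n)
tau = np.sqrt(mu / L)

x, w = x0.copy(), x0.copy()                  # primal and dual sequences
x_gd = x0.copy()                             # vanilla gradient descent baseline
for _ in range(5000):
    y = (x + tau * w) / (1 + tau)            # mixing sequence y_t
    w = (1 - tau) * w + tau * (y - grad(y) / mu)   # dual sequence w_t
    x = y - grad(y) / L                      # primal sequence x_t
    x_gd = x_gd - grad(x_gd) / L             # (GD) step for comparison

assert f(x) < f(x_gd)                        # acceleration closes the gap faster
```

<p>Note that the only difference from (GD) is that the gradient step is taken at the mixing point $y_t$ rather than at $x_{t-1}$, plus the bookkeeping of the dual sequence $w_t$.</p>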
<p>Next, a few remarks are in order:</p>
<p><strong>Remarks.</strong> <br /></p>
<ol>
<li>What the analysis shows is that acceleration is achieved by <em>simultaneously</em> optimizing the primal and dual, i.e., upper and lower bound on the gap. This is in contrast to vanilla gradient descent that first and foremost maximizes primal progress per iteration and <em>not</em> gap closed per iteration. The key here is the definition of the sequence $y_t$ that balances primal and dual progress and ensures optimal progress in <em>gap closed</em> per iteration. Moreover, it is important to note that this is <em>not</em> simply a better analysis of the same algorithm but rather the iterates in the algorithm do really differ from gradient descent and acceleration really materializes in faster convergence rates; see computations below.</li>
<li>The $y_t$ are chosen to be a convex combination of the primal and the dual step. This combination is formed with fixed weights that do not change across the algorithm’s progression.</li>
<li>The proof establishes that in each iteration we contract the gap by a multiplicative factor $(1-\tau)$. It is neither guaranteed that we make primal progress nor that we make dual progress in each iteration. What is guaranteed is that <em>in sum</em> we make enough progress; we will get back to this further below.</li>
<li>The primal and dual iterates in iteration $t$ are independent of each other conditioned on $y_0, \dots, y_t$; see the graphic below. This is quite helpful for modifications, as we will see in the next section.</li>
</ol>
<p class="center"><img src="http://www.pokutta.com/blog/assets/acc/inf.png" alt="Information Flow Acceleration" /></p>
<p>We will now compare the method from above to vanilla gradient descent. The instance that we consider is a quadratic with condition number $\theta = \frac{L}{\mu} = 10000$ in $\RR^n$, where $n = 100$. We run the algorithms for $2000$ iterations.</p>
<p>The first figure shows the evolution of $U_t$ and $L_t$ across iterations for the accelerated method. It can be seen that the upper bound $U_t$, which is given by the primal function value, is indeed not necessarily monotone, in contrast to, e.g., gradient descent. The plot is in log-log scale to better visualize the behavior.</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/acc/gapEvolution.png" alt="Gap evolution" /></p>
<p>The next figure compares vanilla gradient descent (GD) vs. our accelerated method (AGD) with respect to the (true) primal gap as well as the gap estimate $G_t$. It can be seen that the convergence rate of AGD is much higher than the rate of GD. Note that GD is not converging prematurely but the rate is significantly lower. While the AGD gap estimate has a much higher offset, it contracts basically at the same rate as the (true) primal gap.</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/acc/GD-AGD.png" alt="GD vs. AGD" /></p>
<p>Finally, the last figure depicts the evolution of the distance to the optimal solution as well as the norm of the gradient (both optimality measures) for GD and AGD. Note that while these measures are not monotonic for AGD, they converge much faster.</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/acc/aux.png" alt="Auxiliary Measures" /></p>
<h2 id="extensions">Extensions</h2>
<p>We will now briefly discuss extensions of the argumentation from above.</p>
<h3 id="monotonic-variant">Monotonic variant</h3>
<p>The argument presented above covers the most basic case and, as mentioned in the remarks, the primal gap is usually not monotonically decreasing. This can be an issue in some cases. In the following we discuss a modification of the above to ensure monotone primal progress, which, by strong convexity, also ensures a monotone decrease in the distance to the optimal solution. Such modifications are reasonably well-known; e.g., <a href="http://www.mathopt.org/Optima-Issues/optima88.pdf">Nesterov</a> used one to define an accelerated method that ensures monotone progress in the distance to the optimal solution. Here we will see that such modifications are easily handled in the ADGT framework.</p>
<p>In order to achieve the above we will actually prove something stronger: we show that we can mix in an auxiliary sequence of points $\tilde x_1, \dots, \tilde x_t, \dots$ and the algorithm will choose the better of the two points (in terms of function value) between the provided point and the accelerated step. It then suffices to, e.g., choose the sequence $\tilde x_1, \dots, \tilde x_t, \dots$ to be standard gradient steps $\tilde x_t \leftarrow x_{t-1} - \frac{1}{L} \nabla f(x_{t-1})$ to obtain a monotone variant.</p>
<p class="mathcol"><strong>Algorithm.</strong> (Accelerated Gradient Descent with mixed-in sequence) <br />
<em>Input:</em> $L$-smooth and $\mu$-strongly convex function $f$. Initial point $x_0$. Sequence $\tilde x_1, \dots, \tilde x_t, \dots$. <br />
<em>Output:</em> Sequence of iterates $x_0, \dots, x_T$ <br />
$w_0 \leftarrow x_0$ <br />
$\tau \leftarrow \sqrt{\frac{\mu}{L}}$ <br />
For $t = 1, \dots, T$ do <br />
$\qquad$ $y_t \leftarrow \frac{1}{1 + \tau} x_{t-1} + \frac{\tau}{1 + \tau} w_{t-1} \qquad \text{{update mixing sequence $y_t$}}$ <br />
$\qquad$ $w_t \leftarrow (1-\tau) w_{t-1} + \tau (y_t - \frac{1}{\mu} \nabla f(y_t)) \qquad \text{{update dual sequence $w_t$}}$ <br />
$\qquad$ $\bar x_t \leftarrow y_t - \frac{1}{L} \nabla f(y_t)\qquad \text{{update primal sequence $x_t$}}$ <br />
$\qquad$ $x_t \leftarrow \arg\min_{x \in \setb{\bar x_t, \tilde x_t}} f(x)\qquad \text{{take better point}}$ <br /></p>
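<p>To make the loop above concrete, here is a minimal pure-Python sketch of the monotonic variant on a small diagonal quadratic (the instance, dimensions, and iteration count are illustrative choices, not from the post), with the mixed-in sequence $\tilde x_t$ given by plain gradient steps as suggested above:</p>

```python
import math

# Diagonal quadratic f(x) = 0.5 * sum(lam[i] * x[i]^2) with minimizer x* = 0.
lam = [1.0, 10.0, 100.0]               # Hessian eigenvalues; mu = 1, L = 100
mu, L = min(lam), max(lam)

def f(x):
    return 0.5 * sum(li * v * v for li, v in zip(lam, x))

def grad(x):
    return [li * v for li, v in zip(lam, x)]

def agd_monotone(x0, steps):
    tau = math.sqrt(mu / L)
    x, w = list(x0), list(x0)          # primal iterate x_t and dual iterate w_t
    vals = [f(x)]
    for _ in range(steps):
        # y_t mixes the previous primal and dual iterates with fixed weights
        y = [(xi + tau * wi) / (1 + tau) for xi, wi in zip(x, w)]
        g = grad(y)
        # dual update with the long step size 1/mu
        w = [(1 - tau) * wi + tau * (yi - gi / mu) for wi, yi, gi in zip(w, y, g)]
        # accelerated primal candidate: short step with step size 1/L
        x_bar = [yi - gi / L for yi, gi in zip(y, g)]
        # mixed-in candidate: plain gradient step from the previous iterate
        gx = grad(x)
        x_til = [xi - gi / L for xi, gi in zip(x, gx)]
        # keep whichever point has the smaller function value
        x = x_bar if f(x_bar) <= f(x_til) else x_til
        vals.append(f(x))
    return x, vals

x_final, vals = agd_monotone([1.0, 1.0, 1.0], 300)
```

<p>By construction the sequence of function values is nonincreasing: the gradient-step fallback alone already guarantees $f(\tilde x_t) \leq f(x_{t-1})$, and taking the better of the two points can only help.</p>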
<p>At first this might seem problematic as we argued before that it is the intricate construction of the $y_t$ that simultaneously optimizes the primal upper and lower bound. In fact, we potentially sacrificed primal progress (the method is not monotone anymore after all) to close the gap faster. Now that we play around with the definition of the $x_t$ (and in turn with the $y_t$) in such a heavy-handed way we might <em>break</em> acceleration. It turns out however that the above works just fine. To see this let us redo the analysis. First of all, note that for a given $y_t$ the analysis of the lower bound improvement remains the same as there is no dependence on $x_{t-1}$ given $y_t$. So let us re-examine the upper bound:</p>
<p><em>Change in upper bound.</em> <br />
It suffices to observe that $f(x_t) \leq f(\bar x_t)$ and hence the change in the upper bound can be bounded as before using $\bar x_t \leftarrow y_t - \frac{1}{L} \nabla f(y_t)$, which implies $f(\bar x_t) - f(y_t) \leq - \frac{\norm{\nabla f(y_t)}^2}{2L}$:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
A_t U_t - A_{t-1} U_{t-1} & = A_t f(x_t) - A_{t-1} f(x_{t-1}) \leq A_t f(\bar x_t) - A_{t-1} f(x_{t-1}) \\
& = A_t f(y_t) - A_t f(y_t) + A_t f(\bar x_t) - A_{t-1} f(x_{t-1}) \\
& = a_t f(y_t) + A_t (f(\bar x_t) - f(y_t)) + A_{t-1} (f(y_{t}) - f(x_{t-1})) \\
& \leq a_t f(y_t) - A_t \left(\frac{\norm{\nabla f(y_t)}^2}{2L}\right) + A_{t-1} (f(y_{t}) - f(x_{t-1})).
\end{align*} %]]></script>
<p>Thus we obtain an identical bound for the change in the upper bound. Not really surprising as we potentially only do better due to the auxiliary sequence. Now it remains to derive the definition of the $y_t$ from $x_{t-1}$ and $w_{t-1}$.</p>
<p><em>Change in the gap estimate.</em> <br />
The first part of the estimation is a direct combination of the upper bound estimate and the lower bound estimate. Note that the definition of the $y_t$ depended on $x_{t-1}$ and $w_{t-1}$ before and we have to carefully check the impact of the changed definition of $x_{t-1}$. The manipulations combining the upper and the lower bound estimate however do not make special use of how $x_{t-1}$ is defined and we similarly end up with:</p>
<script type="math/tex; mode=display">\tag{gapCondition}
- A_t \left(\frac{\norm{\nabla f(y_t)}^2}{2L} + \frac{a_t^2}{A_t^2} \frac{\norm{\nabla f(y_t)}^2}{2\mu} \right) + \langle \nabla f(y_t), A_t y_{t} - A_{t-1} x_{t-1} - a_t w_t \rangle \leq 0,</script>
<p>as before and the goal is again to choose $y_t$ to ensure the above is satisfied. With the same computations as before we obtain with $\tau \doteq \frac{a_t}{A_t} \doteq \sqrt{\frac{\mu}{L}}$:</p>
<script type="math/tex; mode=display">\tag{expY}
\nabla f(y_t) \left(\frac{1}{2L} - \tau^2 \frac{1}{2\mu} \right) = (1-\tau^2) y_t - (1-\tau) x_{t-1} - \tau (1-\tau) w_{t-1}.</script>
<p>and plugging in the value of $\tau$ and rearranging leads to:</p>
<script type="math/tex; mode=display">(1+\tau) y_t = x_{t-1} + \tau w_{t-1},</script>
<p>and the conclusion follows as before. The key point here is that these estimates also do not rely on the specific form of $x_{t-1}$. The only thing that we really needed, and the only place where the definition of $x_{t-1}$ played a role, is that we make enough progress in terms of the upper bound estimate. Basically, we can define $y_t$ from any $x_{t-1}$ and $w_{t-1}$ as long as they satisfy the upper and lower bound estimates.</p>
<p>The following figure compares the monotonic variant of the accelerated method (AGDM) to the non-monotonic accelerated method (AGD) and vanilla gradient descent (GD) in terms of primal gap evolution. Observe that AGDM is not just monotonic but also has an (empirically) higher convergence rate. The instance is the same as above:</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/acc/monotonic.png" alt="Monotonic" /></p>
<h3 id="the-smooth-and-non-strongly-convex-case">The smooth and (non-strongly) convex case</h3>
<p>It is known that acceleration is also possible in the smooth and (non-strongly) convex case, improving the convergence rate from $O(1/t)$ (or equivalently $O(1/\varepsilon)$) to $O(1/t^2)$ (or equivalently $O(1/\sqrt{\varepsilon})$). We could establish this result with the analysis from above by adjusting the lower bound $L_t$ to rely on convexity only rather than strong convexity. However, the resulting lower bound is no longer smooth, so that the simple trick of optimizing out the bound that we used above does not work anymore. The answer to this is a more complicated (but natural) smoothing of the lower bound function; the interested reader is referred to [DO]. There is however another way that achieves essentially the same result, up to log factors, and leverages what we have proven already. We replicate the argument from [SAB] here.</p>
<p>The basic idea is that we take our smooth function $f$ and given an accuracy $\varepsilon$, we mix in a weak quadratic to make the function strongly convex and then we run the algorithm from above.</p>
<p>Let $f$ be $L$-smooth and assume that $x_0$, our initial iterate, is close enough to the optimal solution so that $D \doteq \norm{x_0 - x^\esx} \geq \norm{x_t - x^\esx}$ for all $t$; the “burn-in” until we reach such a point happens after at most a finite number of iterations, independent of $\varepsilon$. Given a target accuracy $\varepsilon > 0$, we simply define:</p>
<script type="math/tex; mode=display">f_\varepsilon(x) \doteq f(x) + \frac{\varepsilon}{2D^2} \norm{x_0 - x}^2.</script>
<p>Observe that $f_\varepsilon$ is now $(L + \frac{\varepsilon}{2D^2})$-smooth and $\frac{\varepsilon}{2D^2}$-strongly convex. Moreover, minimizing $f_\varepsilon$ is essentially the same as minimizing $f$ up to a small error:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
f(x_t) - f(x^\esx) & = f_\varepsilon(x_t) - \frac{\varepsilon}{2D^2} \norm{x_0 - x_t}^2 - f_\varepsilon(x^\esx) + \frac{\varepsilon}{2D^2} \norm{x_0 - x^\esx}^2 \\
& \leq f_\varepsilon(x_t) - f_\varepsilon(x^\esx) + \frac{\varepsilon}{2} \leq f_\varepsilon(x_t) - f_\varepsilon(x_\varepsilon^\esx) + \frac{\varepsilon}{2},
\end{align*} %]]></script>
<p>where $x_\varepsilon^\esx$ is the optimal solution to $\min_{x} f_\varepsilon(x)$. This shows finding an $\varepsilon/2$-optimal solution $x_t$ to $\min_{x} f_\varepsilon(x)$ provides an $\varepsilon$-optimal solution to $\min_x f(x)$.</p>
<p>Now we run the accelerated method from above on $f_\varepsilon$ with accuracy $\varepsilon/2$. We had an accelerated rate (accRate) of</p>
<script type="math/tex; mode=display">f(x_t) - f(x^\esx) \leq \left(1 - \sqrt{\frac{\mu}{L}}\right)^t (f(x_0) - f(x^\esx)),</script>
<p>for a generic $L$-smooth and $\mu$-strongly convex function $f$. Moreover, $f_\varepsilon(x_0) - f_\varepsilon(x_\varepsilon^\esx) \leq \frac{(L+\varepsilon)D^2}{2}$ by smoothness. We now simply plug in the parameters and obtain:</p>
<script type="math/tex; mode=display">f_\varepsilon(x_t) - f_\varepsilon(x_\varepsilon^\esx) \leq \left(1 - \sqrt{\frac{\frac{\varepsilon}{2D^2}}{L + \frac{\varepsilon}{2D^2}}}\right)^t \left(\frac{(L+\varepsilon)D^2}{2}\right),</script>
<p>so that in order to achieve $f_\varepsilon(x_t) - f_\varepsilon(x_\varepsilon^\esx) \leq \varepsilon/2$ it suffices to satisfy:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
t \log \left(1 - \sqrt{\frac{\frac{\varepsilon}{2D^2}}{L + \frac{\varepsilon}{2D^2}}}\right) & \leq - \log \frac{(L+\varepsilon)D^2}{2} + \log \frac{\varepsilon}{2} \\
\Leftrightarrow t & \geq - \frac{\log \frac{(L+\varepsilon)D^2}{\varepsilon} }{\log \left(1 - \sqrt{\frac{\frac{\varepsilon}{2D^2}}{L + \frac{\varepsilon}{2D^2}}}\right)},
\end{align*} %]]></script>
<p>and using $\log(1-r) \approx - r$ for $r$ small and $L + \frac{\varepsilon}{2D^2} \approx L$, we obtain that in order to ensure $f(x_t) - f(x^\esx) \leq \varepsilon$, we need to run the accelerated method on $f_\varepsilon$ for roughly no more than:</p>
<script type="math/tex; mode=display">t \approx \log \left(\frac{(L+\varepsilon)D^2}{\varepsilon} \right) \sqrt{\frac{2LD^2}{\varepsilon}},</script>
<p>iterations. This matches, up to a logarithmic term, the complexity that we would expect from an accelerated method in the smooth (non-strongly) convex case.</p>
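<p>This reduction is easy to test numerically. The following sketch (the toy objective, the value of $D^2$, and the iteration count are my own choices for illustration, not from the post) regularizes a degenerate quadratic, which is smooth but not strongly convex, and runs the primal-dual accelerated scheme from above on the surrogate $f_\varepsilon$:</p>

```python
import math

# Smooth but NOT strongly convex toy objective: f(x) = 0.5 * (x[0] - x[1])^2.
# Its minimum value is 0, attained on the whole diagonal {x[0] = x[1]};
# the Hessian eigenvalues are 0 and 2, so L_f = 2 and mu_f = 0.
def f(x):
    return 0.5 * (x[0] - x[1]) ** 2

def grad_f(x):
    d = x[0] - x[1]
    return [d, -d]

L_f = 2.0
x0 = [1.0, -1.0]
D2 = 8.0                    # assumed bound with ||x0 - x*||^2 <= D2
eps = 1e-3                  # target accuracy

reg = eps / (2 * D2)        # coefficient of the mixed-in quadratic
mu = reg                    # f_eps is (at least) reg-strongly convex
L = L_f + 2 * reg           # smoothness constant of f_eps

def grad_feps(x):
    # gradient of f_eps(x) = f(x) + reg * ||x0 - x||^2
    return [g + 2 * reg * (v - v0) for g, v, v0 in zip(grad_f(x), x, x0)]

# Primal-dual accelerated gradient descent on the strongly convex surrogate.
tau = math.sqrt(mu / L)
x, w = list(x0), list(x0)
for _ in range(6000):
    y = [(xi + tau * wi) / (1 + tau) for xi, wi in zip(x, w)]
    g = grad_feps(y)
    w = [(1 - tau) * wi + tau * (yi - gi / mu) for wi, yi, gi in zip(w, y, g)]
    x = [yi - gi / L for yi, gi in zip(y, g)]

# By the argument above, a sufficiently optimal point for f_eps
# is eps-optimal for the original (non-strongly convex) f.
```

<p>Note that the number of iterations grows like $\sqrt{1/\varepsilon}$ (up to the log factor), exactly as the bound above predicts.</p>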
<h3 id="acceleration-and-noise">Acceleration and noise</h3>
<p>One of the often-cited major drawbacks of accelerated methods is that they do not deal well with noisy or inexact gradients, i.e., they are not robust. To make matters worse, in [DGN] it was shown that basically any method that is faster than vanilla <em>Gradient Descent</em> necessarily accumulates errors linearly in the number of iterations. This poses significant challenges depending on the magnitude of the noise. Slightly cheating here and considering the smooth and (non-strongly) convex case (check out [DGN] and Moritz’s post on <a href="http://blog.mrtz.org/2014/08/18/robustness-versus-acceleration.html">Robustness vs. Acceleration</a> for precise definitions and some nice computations), suppose that the magnitude of the error in the gradients is $\delta$. Then vanilla <em>Gradient Descent</em> after $t$ iterations provides a solution with guarantee:</p>
<script type="math/tex; mode=display">f(x_t) - f(x^\esx) \leq O(1/t) + \delta,</script>
<p>whereas <em>Accelerated Gradient Descent</em> (the standard one, see [DGN]) provides a solution that satisfies:</p>
<script type="math/tex; mode=display">f(x_t) - f(x^\esx) \leq O(1/t^2) + t\delta,</script>
<p>so that there is a tradeoff between accuracy, iterations, and magnitude of error. A detailed analysis of the effects of noise, various restart strategies to combat noise accumulation, as well as the (substantial) differences between noise accumulation in the constrained and unconstrained setting are discussed in [CDO]; check it out for details, here is a quick teaser:</p>
<blockquote>
<p>Our results reveal an interesting discrepancy between noise tolerance in the settings of constrained and unconstrained smooth minimization. Namely, in the setting of constrained optimization, the error due to noise does not accumulate and is proportional to the diameter of the feasible region and the expected norm of the noise. In the setting of unconstrained optimization, the bound on the error incurred due to the noise accumulates, as observed empirically by (Hardt, 2014).</p>
</blockquote>
<h3 id="references">References</h3>
<p>[N1] Nesterov, Y. (1983). A method of solving a convex programming problem with convergence rate $O (1/k^ 2)$. In Sov. Math. Dokl (Vol. 27, No. 2).</p>
<p>[N2] Nesterov, Y. (2013). Introductory lectures on convex optimization: A basic course (Vol. 87). Springer Science & Business Media. <a href="https://books.google.com/books?hl=en&lr=&id=2-ElBQAAQBAJ&oi=fnd&pg=PA1&dq=Introductory+lectures+on+convex+optimization:+A+basic+course&ots=wltS9osfmv&sig=2kEC_XSXH-OZVyY1ZmK43khv3eQ#v=onepage&q=Introductory%20lectures%20on%20convex%20optimization%3A%20A%20basic%20course&f=false">google books</a></p>
<p>[P] Polyak, B. T. (1964). Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5), 1-17. <a href="https://www.sciencedirect.com/science/article/abs/pii/0041555364901375">pdf</a></p>
<p>[AO] Allen-Zhu, Z., & Orecchia, L. (2014). Linear coupling: An ultimate unification of gradient and mirror descent. arXiv preprint arXiv:1407.1537. <a href="https://arxiv.org/abs/1407.1537">pdf</a></p>
<p>[SBC] Su, W., Boyd, S., & Candes, E. (2014). A differential equation for modeling Nesterov’s accelerated gradient method: Theory and insights. In Advances in Neural Information Processing Systems (pp. 2510-2518). <a href="https://arxiv.org/abs/1503.01243">pdf</a></p>
<p>[BLS] Bubeck, S., Lee, Y. T., & Singh, M. (2015). A geometric alternative to Nesterov’s accelerated gradient descent. arXiv preprint arXiv:1506.08187. <a href="https://arxiv.org/abs/1506.08187">pdf</a></p>
<p>[DO] Diakonikolas, J., & Orecchia, L. (2019). The approximate duality gap technique: A unified theory of first-order methods. SIAM Journal on Optimization, 29(1), 660-689. <a href="https://arxiv.org/pdf/1712.02485.pdf">pdf</a></p>
<p>[SAB] Scieur, D., d’Aspremont, A., & Bach, F. (2016). Regularized nonlinear acceleration. In Advances In Neural Information Processing Systems (pp. 712-720). <a href="https://arxiv.org/abs/1606.04133">pdf</a></p>
<p>[DGN] Devolder, O., Glineur, F., & Nesterov, Y. (2014). First-order methods of smooth convex optimization with inexact oracle. Mathematical Programming, 146(1-2), 37-75. <a href="http://www.optimization-online.org/DB_FILE/2010/12/2865.pdf">pdf</a></p>
<p>[CDO] Cohen, M. B., Diakonikolas, J., & Orecchia, L. (2018). On acceleration with noise-corrupted gradients. arXiv preprint arXiv:1805.12591. <a href="https://arxiv.org/abs/1805.12591">pdf</a></p>
<p><br /></p>
<h4 id="acknowledgements-and-changelog">Acknowledgements and Changelog</h4>
<p>I would like to thank Alejandro Carderera and Cyrille Combettes for pointing out several typos in an early version of this post. Computations and plots provided by Alejandro Carderera.</p>
<p><em>Sebastian Pokutta</em></p>

<h1 id="blended-matching-pursuit"><a href="http://www.pokutta.com/blog/research/2019/05/27/bmp-abstract">Blended Matching Pursuit</a></h1>
<p><em>2019-05-27</em></p>
<p><em>TL;DR: This is an informal summary of our recent paper <a href="https://arxiv.org/abs/1904.12335">Blended Matching Pursuit</a> with <a href="https://www.linkedin.com/in/cyrille-combettes/">Cyrille W. Combettes</a>, showing that the blending approach that we used earlier for conditional gradients can be carried over also to the Matching Pursuit setting, resulting in a new and very fast algorithm for minimizing convex functions over linear spaces while maintaining sparsity close to full orthogonal projection approaches such as Orthogonal Matching Pursuit.</em>
<!--more--></p>
<h2 id="what-is-the-paper-about-and-why-you-might-care">What is the paper about and why you might care</h2>
<p>We are interested in solving the following convex optimization problem. Let $f$ be a smooth convex function with potentially additional properties and $D \subseteq \RR^n$ a finite set of vectors. We want to solve:</p>
<script type="math/tex; mode=display">\tag{opt}
\min_{x \in \operatorname{lin}D} f(x)</script>
<p>The set $D$ in the context considered here is often referred to as a <em>dictionary</em> and its elements are called <em>atoms</em>. Note that this problem also makes sense for infinite dictionaries and more general Hilbert spaces, but for the sake of exposition we confine ourselves here to the finite case; see the paper for more details.</p>
<h3 id="sparse-signal-recovery">Sparse Signal Recovery</h3>
<p>The problem (opt) with, e.g., $f(x) \doteq \norm{x - y}_2^2$ for a given vector $y \in \RR^n$ is of particular interest in <em>Signal Processing</em> in the context of <em>Sparse Signal Recovery</em>, where a signal $y \in \RR^n$ is measured that is known to be the sum of a <em>sparse</em> linear combination of elements in $D$ and a <em>noise term</em> $\epsilon$, e.g., $y = x + \epsilon$ for some $x \in \operatorname{lin} D$ and $\epsilon \sim N(0,\Sigma)$; see <a href="https://en.wikipedia.org/wiki/Matching_pursuit">Wikipedia</a> for more details. Here <em>sparsity</em> refers to $x$ being a linear combination of <em>few</em> elements from $D$ and the task is to reconstruct $x$ from $y$. If the signal’s sparsity is known ahead of time, say $m$, then the optimization problem of interest is:</p>
<script type="math/tex; mode=display">\tag{sparseRecovery}
\min_{x \in \RR^k} \setb{\norm{y - Dx}_2^2 \ \mid\ \norm{x}_0 \leq m},</script>
<p>where $|D| = k$ and $m \ll k$ typically. As the above problem is non-convex (and in fact NP-hard to solve), various relaxations have been used and a common one is to solve (opt) instead with an algorithm that promotes sparsity due to its algorithmic design. Other variants include relaxing the $\ell_0$-norm constraint via an $\ell_1$-norm constraint and then solving the arising constrained convex optimization problem over an appropriately scaled $\ell_1$-ball with an optimization methods that is relatively sparse, such as e.g., conditional gradients and related methods.</p>
<p>The following graphic is taken from <a href="https://en.wikipedia.org/wiki/Matching_pursuit">Wikipedia’s Matching Pursuit entry</a>. On the bottom the actual signal is depicted in the time domain, and on top the inner product of the wavelet atoms with the signal is shown as a heat map, where each pixel corresponds to a time-frequency wavelet atom (this would be our dictionary). In this example, we would seek a reconstruction with $3$ elements given by the centers of the ellipsoids.</p>
<p class="center"><img src="https://upload.wikimedia.org/wikipedia/commons/2/21/Matching_pursuit.png" alt="Sparse Signal" /></p>
<p>Without going into detail here, (sparseRecovery) also naturally relates to compressed sensing and our algorithm also applies to this context, as do all other algorithms that solve (sparseRecovery).</p>
<h3 id="the-general-setup">The general setup</h3>
<p>Here we actually consider the more general problem of minimizing an arbitrary smooth convex function $f$ over the linear span of the dictionary $D$ in (opt). This more general setup has many applications including the one from above. Basically, whenever we seek to project a vector into a linear space, writing it as linear combination of basis elements we are in the setup of (opt). Moreover, sparsity is often a natural requirement as it helps explainability and interpretation etc. in many cases.</p>
<h3 id="solving-the-optimization-problem">Solving the optimization problem</h3>
<p>Apart from the broad applicability, (opt) is also algorithmically interesting. It is a constrained problem as we optimize subject to $x \in \operatorname{lin} D$, yet at the same time the feasible region is unbounded. Surely one could project into the linear space etc., but this is quite costly if $D$ is large and potentially very challenging if $D$ is countably infinite; in fact, it is (opt) that solves exactly this projection problem for a <em>specific</em> vector $y$, subject to additional requirements such as, e.g., sparsity and a good <em>Normalized Mean Squared Error (NMSE)</em>. When solving (opt) we thus face some interesting challenges, such as not being able to bound the diameter of the feasible region (an often-used quantity in constrained convex minimization).</p>
<p>There are various algorithms to solve (opt) while maintaining sparsity. One such class comprises Coordinate Descent, Matching Pursuit [MZ], Orthogonal Matching Pursuit [AKGT], and similar algorithms that try to achieve sparsity by design. Another class solves a constrained version by introducing an $\ell_1$-constraint as discussed above to induce sparsity. This includes (vanilla) Gradient Descent (not really sparse), Conditional Gradient descent [CG] (aka the Frank-Wolfe algorithm [FW]) and its variants (see e.g., [LJ]), as well as specialized algorithms such as Compressive Sampling Matching Pursuit (CoSaMP) [NT] or Conditional Gradient with Enhancement and Truncation (CoGEnT) [RSW]. Also our recent Blended Conditional Gradients (BCG) algorithm [BPTW] applies to the formulation with the $\ell_1$-ball relaxation; see also the <a href="/blog/research/2019/02/18/bcg-abstract.html">summary of the paper</a> for more details.</p>
<p>For an overview of the computational as well as reconstruction advantages and disadvantages of some of those algorithms, see [AKGT].</p>
<h2 id="our-results">Our results</h2>
<p>More recently, in [LKTJ] a unifying view of Conditional Gradients and Matching Pursuit has been established. Apart from presenting new algorithms, the authors also show that basically the Frank-Wolfe algorithm corresponds to Matching Pursuit and the Fully-Corrective Frank-Wolfe algorithm corresponds to Orthogonal Matching Pursuit; shortly after in [LRKRSSJ] an accelerated variant of Matching Pursuit has been provided. The unified view of [LKTJ] motivated us to carry over the blending idea from [BPTW] to the Matching Pursuit context, as the BCG algorithm provided very good sparsity in the constraint case in our tests. Moreover, we wanted to extend the convergence analysis to not just smooth and (strongly) convex functions but more generally smooth and sharp functions, which nicely interpolates between the convex and the strongly convex regime (see <a href="/blog/research/2018/11/12/heb-conv.html">Cheat Sheet: Hölder Error Bounds (HEB) for Conditional Gradients</a> for details on sharpness); the same can be also done for Conditional Gradients (see our recent work [KDP] or the <a href="/blog/research/2019/05/02/restartfw-abstract.html">summary</a>).</p>
<p>The basic idea behind <em>blending</em> is to mix together various types of steps. Here the mixing is between Matching Pursuit style steps and low-complexity gradient steps over the currently selected atoms. The former steps make sure that we discover new dictionary elements that we need to make progress, whereas the latter usually give more per-iteration progress, are cheaper in wall-clock time, and promote sparsity. Unfortunately, as straightforward as it sounds to carry over the blending to Matching Pursuit, it is not that simple. The blending that we did before in [BPTW] heavily relied on dual gap estimates (in fact variants of the Wolfe gap) to switch between the various steps; however, these gaps are not available here due to the unboundedness of $\operatorname{lin} D$.</p>
<p>After navigating these technical challenges, what we ended up with is a <em>Blended Matching Pursuit (BMP)</em> algorithm that is basically as fast as (or faster than) standard Matching Pursuit (or its generalized variant, <em>Generalized Matching Pursuit (MP)</em>, for arbitrary smooth convex functions), while maintaining a sparsity close to that of the much slower Orthogonal Matching Pursuit (OMP); the former only performs a line search along the newly added atom, while the latter re-optimizes over the <em>full</em> set of selected elements in each iteration, hence offering much better sparsity at the price of much higher running times.</p>
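<p>To convey the flavor of blending, and only the flavor, here is a toy sketch: the switching rule below is a naive stand-in of my own, not the carefully constructed criterion from the paper (which, as discussed, cannot rely on the usual dual gaps). It alternates between cheap line-search steps over atoms selected so far and Matching Pursuit steps that add a new atom only when the remaining dictionary promises substantially more progress:</p>

```python
import math
import random

random.seed(3)

n, k, s = 8, 20, 3                     # ambient dim, dictionary size, sparsity

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def unit(v):
    nrm = math.sqrt(dot(v, v))
    return [a / nrm for a in v]

# Random unit-norm dictionary and an s-sparse target signal (toy data).
D = [unit([random.gauss(0, 1) for _ in range(n)]) for _ in range(k)]
y = [0.0] * n
for j in random.sample(range(k), s):
    for i in range(n):
        y[i] += D[j][i]

x, support = [0.0] * n, set()
r = list(y)                            # residual for f(x) = ||y - x||_2^2
for _ in range(200):
    corr = [dot(r, d) for d in D]
    best_all = max(range(k), key=lambda a: abs(corr[a]))
    best_sup = max(support, key=lambda a: abs(corr[a]), default=None)
    if best_sup is None or abs(corr[best_all]) > 2 * abs(corr[best_sup]):
        j = best_all                   # "MP step": a fresh atom is clearly better
        support.add(j)
    else:
        j = best_sup                   # cheap step over an already-selected atom
    c = corr[j]                        # exact line search along the unit atom
    if abs(c) < 1e-12:
        break
    for i in range(n):
        x[i] += c * D[j][i]
        r[i] -= c * D[j][i]
```

<p>The design intent matches the discussion above: most iterations only touch the (small) current support, and the dictionary-wide search is invoked sparingly, which is what keeps the iterates sparse and the per-iteration cost low.</p>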
<h3 id="example-computation-1">Example Computation 1:</h3>
<p>The following figure shows a sample computation for a sparse signal recovery instance from [RSW], which we scaled down by a factor of $10$. The actual signal has a sparsity of $s = 100$, we have $m = 500$ measurements, and the measurement happens in $n = 2000$-dimensional space. We choose $A\in\mathbb{R}^{m\times n}$ and $x^\esx \in \mathbb{R}^n$ with $\norm{x^\esx}_0=s$. The measurement is generated as $y=Ax^\esx + \mathcal{N}(0,\sigma^2I_m)$ with $\sigma = 0.05$.</p>
<p>We benchmarked BMP against MP and OMP (see [LKTJ] for pseudo-code). We also benchmarked against BCG (see [BPTW] for pseudo-code) and CoGEnT (see [RSW] for pseudo-code); for these algorithms we optimize subject to a scaled $\ell_1$-ball, where the radius has been empirically chosen so that the signal is contained in the ball; otherwise we could not compare primal gap progress. Note that by scaling up the $\ell_1$-ball we might produce less sparse solutions; see [LKTJ] and the contained discussion for relating conditional gradient methods to matching pursuit methods. Each algorithm is run either for $300$ secs or until there is no (substantial) primal improvement anymore, whichever comes first.</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/bmp/convergence.png" alt="Comparison BMP vs. others" /></p>
<p>In the aforementioned <em>Sparse Signal Recovery</em> problem, another way to compare the quality of the actual reconstructions is via the <em>Normalized Mean Square Error (NMSE)</em>. The next figure shows the evolution of NMSE across the optimization:</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/bmp/nmse.png" alt="NMSE small" /></p>
<p>The rebound likely happens because once the actual signal is reconstructed, overfitting of the noise term starts, deteriorating the NMSE. One could clean up the reconstruction by removing all atoms in the support with small coefficients; this is beyond our scope here, however. Here is the same NMSE plot, truncated after the first $30$ secs, for better visibility of the initial phase:</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/bmp/nmse-30s.png" alt="NMSE small" /></p>
<h3 id="example-computation-2">Example Computation 2:</h3>
<p>The setup is the same as above, however this time the actual signal has a sparsity of $s = 100$, we have $m = 1500$ measurements, and the measurement happens in $n = 6000$-dimensional space. This time we run for $1200$ secs or until no (substantial) primal progress is made. Here the advantage of BMP is very obvious.</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/bmp/convergence2.png" alt="Comparison BMP vs. others" /></p>
<p>The next figure shows the evolution of NMSE across the optimization:</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/bmp/nmse2.png" alt="NMSE small" /></p>
<p>And truncated again, here after roughly the first $300$ secs for better visibility. We can see that BMP reaches its NMSE minimum right around $100$ atoms and it is much faster than any of the other algorithms.</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/bmp/nmse2-300s.png" alt="NMSE small" /></p>
<h3 id="references">References</h3>
<p>[MZ] Mallat, S. G., & Zhang, Z. (1993). Matching pursuits with time-frequency dictionaries. IEEE Transactions on signal processing, 41(12), 3397-3415. <a href="https://pdfs.semanticscholar.org/0b6e/98a6a8cf8283fd76fe1100b23f11f4cfa711.pdf">pdf</a></p>
<p>[TG] Tropp, J. A., & Gilbert, A. C. (2007). Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on information theory, 53(12), 4655-4666. <a href="https://authors.library.caltech.edu/9490/1/TROieeetit07.pdf">pdf</a></p>
<p>[CG] Levitin, E. S., & Polyak, B. T. (1966). Constrained minimization methods. Zhurnal Vychislitel’noi Matematiki i Matematicheskoi Fiziki, 6(5), 787-823. <a href="http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=zvmmf&paperid=7415&option_lang=eng">pdf</a></p>
<p>[FW] Frank, M., & Wolfe, P. (1956). An algorithm for quadratic programming. Naval research logistics quarterly, 3(1‐2), 95-110. <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/nav.3800030109">pdf</a></p>
<p>[LJ] Lacoste-Julien, S., & Jaggi, M. (2015). On the global linear convergence of Frank-Wolfe optimization variants. In Advances in Neural Information Processing Systems (pp. 496-504). <a href="http://papers.nips.cc/paper/5925-on-the-global-linear-convergence-of-frank-wolfe-optimization-variants.pdf">pdf</a></p>
<p>[NT] Needell, D., & Tropp, J. A. (2009). CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and computational harmonic analysis, 26(3), 301-321. <a href="https://core.ac.uk/download/pdf/22761532.pdf">pdf</a></p>
<p>[RSW] Rao, N., Shah, P., & Wright, S. (2015). Forward–backward greedy algorithms for atomic norm regularization. IEEE Transactions on Signal Processing, 63(21), 5798-5811. <a href="https://arxiv.org/pdf/1404.5692.pdf">pdf</a></p>
<p>[BPTW] Braun, G., Pokutta, S., Tu, D., & Wright, S. (2018). Blended Conditional Gradients: the unconditioning of conditional gradients. arXiv preprint arXiv:1805.07311. <a href="https://arxiv.org/pdf/1805.07311.pdf">pdf</a></p>
<p>[AKGT] Arjoune, Y., Kaabouch, N., El Ghazi, H., & Tamtaoui, A. (2017, January). Compressive sensing: Performance comparison of sparse recovery algorithms. In 2017 IEEE 7th annual computing and communication workshop and conference (CCWC) (pp. 1-7). IEEE. <a href="https://arxiv.org/pdf/1801.09744.pdf">pdf</a></p>
<p>[LKTJ] Locatello, F., Khanna, R., Tschannen, M., & Jaggi, M. (2017). A unified optimization view on generalized matching pursuit and frank-wolfe. arXiv preprint arXiv:1702.06457. <a href="https://arxiv.org/pdf/1702.06457.pdf">pdf</a></p>
<p>[LRKRSSJ] Locatello, F., Raj, A., Karimireddy, S. P., Rätsch, G., Schölkopf, B., Stich, S. U., & Jaggi, M. (2018). On matching pursuit and coordinate descent. arXiv preprint arXiv:1803.09539. <a href="https://arxiv.org/pdf/1803.09539.pdf">pdf</a></p>
<p>[KDP] Kerdreux, T., d’Aspremont, A., & Pokutta, S. (2018). Restarting Frank-Wolfe. To appear in Proceedings of AISTATS. <a href="https://arxiv.org/abs/1810.02429">pdf</a></p>
<p><em>Sebastian Pokutta</em></p>

<h1 id="sharpness-and-restarting-frank-wolfe"><a href="http://www.pokutta.com/blog/research/2019/05/02/restartfw-abstract">Sharpness and Restarting Frank-Wolfe</a></h1>
<p><em>2019-05-02</em></p>
<p><em>TL;DR: This is an informal summary of our recent paper <a href="http://arxiv.org/abs/1810.02429">Restarting Frank-Wolfe</a> with <a href="https://www.di.ens.fr/~aspremon/">Alexandre D’Aspremont</a> and <a href="https://www.researchgate.net/profile/Thomas_Kerdreux">Thomas Kerdreux</a>, where we show how to achieve improved convergence rates under sharpness through restarting Frank-Wolfe algorithms.</em>
<!--more--></p>
<p>Note: This summary is shorter than usual as I wrote a whole post some time back about sharpness (aka Hölder Error Bounds) and conditional gradient methods that is strongly correlated with this paper; for the sake of non-duplication the interested reader is referred to <a href="/blog/research/2018/11/12/heb-conv.html">Cheat Sheet: Hölder Error Bounds (HEB) for Conditional Gradients</a>, which also explains the more technical aspects of our work in a significantly broader context.</p>
<h2 id="what-is-the-paper-about-and-why-you-might-care">What is the paper about and why you might care</h2>
<p>We often want to solve <em>constrained smooth convex optimization</em> problems of the form</p>
<script type="math/tex; mode=display">\min_{x \in P} f(x),</script>
<p>where $P$ is some compact convex set and $f$ is a smooth function. If the considered function $f$ is strongly convex, then we can expect a linear rate of convergence of $O(\log 1/\varepsilon)$, i.e., it takes about $k \sim \log 1/\varepsilon$ iterations until $f(x_k) - f(x^\esx) \leq \varepsilon$ by using <em>Away-Step Frank-Wolfe</em> or <em>Pairwise Conditional Gradients</em> (see <a href="/blog/research/2018/10/19/cheatsheet-fw-lin-conv.html">Cheat Sheet: Linear convergence for Conditional Gradients</a> for more). However, in the absence of strong convexity we often have to fall back to the smooth and (non-strongly) convex case with a much lower rate of $O(1/\varepsilon)$ (without acceleration). In many cases this rate is considerably worse than what is empirically observed. To remedy this by providing a more fine-grained convergence analysis, a recent paper [RA] analyzed convergence under <em>sharpness</em> (also known as the <em>Hölder Error Bound (HEB) condition</em>), which characterizes the behavior of $f$ around the optimal solutions:</p>
<p class="mathcol"><strong>Definition (Hölder Error Bound (HEB) condition).</strong> A convex function $f$ satisfies the <em>Hölder Error Bound (HEB) condition on $P$</em> with parameters $0 < c < \infty$ and $\theta \in [0,1]$ if for all $x \in P$ it holds:
\[
c (f(x) - f^\esx)^\theta \geq \min_{y \in \Omega^\esx} \norm{x-y},
\]
where $\Omega^\esx$ denotes the set of minimizers of $f$ over $P$.</p>
<p>It was shown that using the notion of sharpness, one can derive much better rates, covering the whole range between the sublinear rate of $O(1/\varepsilon)$ and the linear rate $O(\log 1/\varepsilon)$. Moreover, these rates can be realized with adaptive restart schemes, requiring no knowledge about the sharpness parameters.</p>
<p>To establish the link to strong convexity, note that strong convexity, which is a global property, implies sharpness (with appropriate parameterization), which has to hold only locally around the optimal solutions; the converse is not true. In fact, using sharpness one can show linear convergence for certain classes of functions that are not strongly convex.</p>
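<p>For intuition, here is a tiny numerical sanity check (my own illustration, not from the paper): the function $f(x) = \norm{x}_2^{1/\theta}$ with minimizer $x^\esx = 0$ satisfies the HEB condition with $c = 1$, since $(\norm{x}^{1/\theta})^\theta = \norm{x}$.</p>

```python
import numpy as np

# Sanity check (own illustration): f(x) = ||x||_2^(1/theta) with minimizer
# x* = 0 satisfies the HEB condition c (f(x) - f(x*))^theta >= ||x - x*||
# with c = 1, since (||x||^(1/theta))^theta = ||x||.
rng = np.random.default_rng(0)
violations = 0
for theta in [0.25, 0.5, 0.75, 1.0]:
    for _ in range(100):
        x = rng.normal(size=5)
        fx = np.linalg.norm(x) ** (1.0 / theta)  # f(x), with f(x*) = 0
        if fx ** theta < np.linalg.norm(x) - 1e-9:
            violations += 1
print(violations)  # 0
```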
<h2 id="our-results">Our results</h2>
<p>An open question was whether adaptive restarts can also be utilized to achieve a similar adaptive behavior for Conditional Gradient-type methods, which access the feasible region $P$ only through a linear programming oracle, and this is precisely what we study in our recent work [KDP]. There we show that one can modify the Away-Step Frank-Wolfe algorithm (and similarly Pairwise Conditional Gradients) by endowing them with <em>scheduled restarts</em> to automatically adapt to the function’s sharpness. For functions with optimal solutions contained in the strict relative interior of $P$ it even suffices to modify the (vanilla) Frank-Wolfe algorithm. By doing so we obtain, depending on the function’s sharpness parameters, convergence rates of the form $O(1/\varepsilon^p)$ or $O(\log 1/\varepsilon)$. In particular, similar to [RA], we can achieve linear convergence for functions that are sufficiently sharp but not strongly convex.</p>
<p>For illustration, the next graph shows the behavior of the Frank-Wolfe algorithm under sharpness on the probability simplex of dimension $30$ for the function $\norm{x}_2^{1/\theta}$. For $\theta = 1/2$ we observe linear convergence as expected, while for the other values of $\theta$ we observe various degrees of sublinear convergence of the form $O(1/\varepsilon^p)$ with $p \geq 1$.</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/heb-simplex-30-noLine.png" alt="HEB with approx minimizer" /></p>
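<p>The basic setup behind such experiments fits in a few lines. The following is my own toy sketch (not the paper’s code) of vanilla Frank-Wolfe with the plain agnostic step size $2/(t+2)$, so it only exhibits the $O(1/t)$ baseline rate rather than the restart schedule, for $\theta = 1/2$, i.e., $f(x) = \norm{x}_2^2$, whose minimizer over the simplex is the uniform vector.</p>

```python
import numpy as np

# Toy vanilla Frank-Wolfe sketch (own code, not the paper's): minimize
# f(x) = ||x||_2^2 (theta = 1/2) over the probability simplex of dimension 30.
# The minimizer is the uniform vector with f* = 1/n.
n = 30
f = lambda x: float(np.dot(x, x))
grad = lambda x: 2.0 * x

x = np.zeros(n)
x[0] = 1.0                        # start at a vertex of the simplex
for t in range(2000):
    v = np.zeros(n)
    v[np.argmin(grad(x))] = 1.0   # LMO over the simplex returns a vertex
    x = x + 2.0 / (t + 2) * (v - x)

gap = f(x) - 1.0 / n              # primal gap, decays like O(1/t) here
print(gap)
```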
<h3 id="references">References</h3>
<p>[RA] Roulet, V., & d’Aspremont, A. (2017). Sharpness, restart and acceleration. In Advances in Neural Information Processing Systems (pp. 1119-1129). <a href="http://papers.nips.cc/paper/6712-sharpness-restart-and-acceleration">pdf</a></p>
<p>[KDP] Kerdreux, T., d’Aspremont, A., & Pokutta, S. (2018). Restarting Frank-Wolfe. To appear in Proceedings of AISTATS. <a href="https://arxiv.org/abs/1810.02429">pdf</a></p>
<h1 id="cheatsheet-nonsmooth-post"><a href="http://www.pokutta.com/blog/research/2019/02/27/cheatsheet-nonsmooth">Cheat Sheet: Subgradient Descent, Mirror Descent, and Online Learning</a> (2019-02-27)</h1>
<p><em>TL;DR: Cheat Sheet for non-smooth convex optimization: subgradient descent, mirror descent, and online learning. Long and technical.</em>
<!--more--></p>
<p><em>Posts in this series (so far).</em></p>
<ol>
<li><a href="/blog/research/2018/12/07/cheatsheet-smooth-idealized.html">Cheat Sheet: Smooth Convex Optimization</a></li>
<li><a href="/blog/research/2018/10/05/cheatsheet-fw.html">Cheat Sheet: Frank-Wolfe and Conditional Gradients</a></li>
<li><a href="/blog/research/2018/10/19/cheatsheet-fw-lin-conv.html">Cheat Sheet: Linear convergence for Conditional Gradients</a></li>
<li><a href="/blog/research/2018/11/12/heb-conv.html">Cheat Sheet: Hölder Error Bounds (HEB) for Conditional Gradients</a></li>
<li><a href="/blog/research/2019/02/27/cheatsheet-nonsmooth.html">Cheat Sheet: Subgradient Descent, Mirror Descent, and Online Learning</a></li>
<li><a href="/blog/research/2019/06/10/cheatsheet-acceleration-first-principles.html">Cheat Sheet: Acceleration from First Principles</a></li>
</ol>
<p><em>My apologies for incomplete references—this should merely serve as an overview.</em></p>
<p>This time we will consider non-smooth convex optimization. Our starting point is a very basic argument that is used to prove convergence of <em>Subgradient Descent (SG)</em>. From there we will consider the projected variants in the constrained setting and naturally arrive at <em>Mirror Descent (MD)</em> of [NY]; we follow the proximal point of view as presented in [BT]. We will also see that online learning algorithms such as <em>Online Gradient Descent (OGD)</em> of [Z] or <em>Online Mirror Descent (OMD)</em> and the special case of the <em>Multiplicative Weights Update (MWU)</em> algorithm arise as natural consequences.</p>
<p>We consider a convex function $f: \RR^n \rightarrow \RR$ and want to solve</p>
<script type="math/tex; mode=display">\min_{x \in K} f(x),</script>
<p>where $K$ is some convex feasible region; e.g., $K = \RR^n$ is the unconstrained case. However, compared to previous posts, we will now consider the <em>non-smooth</em> case. As before we assume that we only have <em>first-order access</em> to the function, via a so-called <em>first-order oracle</em>, which in the non-smooth case returns subgradients:</p>
<p class="mathcol"><strong>First-Order oracle for $f$</strong> <br />
<em>Input:</em> $x \in \mathbb R^n$ <br />
<em>Output:</em> $\partial f(x)$ and $f(x)$</p>
<p>In the above $\partial f(x)$ denotes a subgradient of the (convex!) function $f$ at point $x$. Recall that a <em>subgradient at $x \in \operatorname{dom}(f)$</em> is any vector $\partial f(x)$ such that $f(z) \geq f(x) + \langle \partial f(x), z-x \rangle$ holds for all $z \in \operatorname{dom}(f)$. So basically the same inequality as we obtain from convexity for smooth functions, just that in the general non-smooth case there might be more than one vector satisfying this condition. In contrast, for convex and smooth (i.e., differentiable) functions there exists only one subgradient at $x$, namely the gradient, i.e., $\partial f(x) = \nabla f(x)$ in this case. In the following we will use the notation $[n] \doteq \setb{1,\dots, n}$.</p>
<h2 id="a-basic-argument">A basic argument</h2>
<p>We will first consider gradient descent-like algorithms of the form</p>
<p>\[
\tag{dirStep}
x_{t+1} \leftarrow x_t - \eta_t d_t,
\]</p>
<p>where we choose $d_t \doteq \partial f(x_t)$, and we show how to establish convergence of the above scheme to an (approximately) optimal solution of $\min_{x \in K} f(x)$ in the case $K = \RR^n$; we will choose the step length $\eta_t$ later. For completeness, the full algorithm looks like this:</p>
<p class="mathcol"><strong>Subgradient Descent Algorithm.</strong> <br />
<em>Input:</em> Convex function $f$ with first-order oracle access and some initial point $x_0 \in \RR^n$ <br />
<em>Output:</em> Sequence of points $x_0, \dots, x_T$ <br />
For $t = 0, \dots, T-1$ do: <br />
$\quad x_{t+1} \leftarrow x_t - \eta_t \partial f(x_t)$<br /></p>
<p>In this section we will assume that $\norm{\cdot}$ is the $\ell_2$-norm; note, however, that later we will allow for other norms. Let $x^\esx$ be an optimal solution to $\min_{x \in K} f(x)$ and consider the following expansion using (dirStep) with $d_t \doteq \partial f(x_t)$:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\norm{x_{t+1} - x^\esx}^2 & = \norm{x_t - x^\esx}^2 - 2 \eta_t \langle \partial f(x_t), x_t - x^\esx\rangle + \eta_t^2 \norm{\partial f(x_t)}^2.
\end{align*} %]]></script>
<p>This can be rearranged to</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\tag{basic}
2 \eta_t \langle \partial f(x_t), x_t - x^\esx\rangle & = \norm{x_t - x^\esx}^2 - \norm{x_{t+1} - x^\esx}^2 + \eta_t^2 \norm{\partial f(x_t)}^2,
\end{align*} %]]></script>
<p>as we aim to later estimate $f(x_t) - f(x^\esx) \leq \langle \partial f(x_t), x_t - x^\esx\rangle$ using that $\partial f(x_t)$ is a subgradient. However, as we set out to provide a unified perspective on various settings, including online learning, we will perform this substitution only at the very end. Adding up these equations up to iteration $T-1$ we obtain:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\sum_{t = 0}^{T-1} 2\eta_t \langle \partial f(x_t), x_t - x^\esx\rangle & = \norm{x_0 - x^\esx}^2 - \norm{x_{T} - x^\esx}^2 + \sum_{t = 0}^{T-1} \eta_t^2 \norm{\partial f(x_t)}^2 \\
& \leq \norm{x_0 - x^\esx}^2 + \sum_{t = 0}^{T-1} \eta_t^2 \norm{\partial f(x_t)}^2.
\end{align*} %]]></script>
<p>Let us further assume that $\norm{\partial f(x_t)} \leq G$ for all $t = 0, \dots, T-1$ for some $G \in \RR$, and to simplify the exposition let us choose $\eta_t \doteq \eta > 0$ for all $t$, for some $\eta$ to be chosen later. We obtain:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
2\eta \sum_{t = 0}^{T-1} \langle \partial f(x_t), x_t - x^\esx\rangle & \leq \norm{x_0 - x^\esx}^2 + \eta^2 T G^2 \\
\Leftrightarrow \sum_{t = 0}^{T-1} \langle \partial f(x_t), x_t - x^\esx\rangle & \leq \frac{\norm{x_0 - x^\esx}^2}{2\eta} + \frac{\eta}{2} T G^2,
\end{align*} %]]></script>
<p>where the right-hand side is minimized for</p>
<script type="math/tex; mode=display">\eta \doteq \frac{\norm{x_0 - x^\esx}}{G} \sqrt{\frac{1}{T}},</script>
<p>leading to</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\tag{regretBound}
\sum_{t = 0}^{T-1} \langle \partial f(x_t), x_t - x^\esx\rangle & \leq G \norm{x_0 - x^\esx} \sqrt{T}.
\end{align*} %]]></script>
<p>We will later see that (regretBound) can be used as a starting point for developing online learning algorithms; for now, however, we will derive our convergence guarantee from it. To this end we divide both sides by $T$, use convexity, and the subgradient property to conclude:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\tag{convergenceSG}
f(\bar x) - f(x^\esx) & \leq \frac{1}{T} \sum_{t = 0}^{T-1} f(x_t) - f(x^\esx) \\
& \leq \frac{1}{T} \sum_{t = 0}^{T-1} \langle \partial f(x_t), x_t - x^\esx\rangle \\
& \leq G \norm{x_0 - x^\esx} \frac{1}{\sqrt{T}},
\end{align*} %]]></script>
<p>where $\bar x \doteq \frac{1}{T} \sum_{t=0}^{T-1} x_t$ is the average of all iterates. As such we obtain an $O(1/\sqrt{T})$ convergence rate for our algorithm. It is useful to observe that what the algorithm does is minimize the average of the dual gaps $\langle \partial f(x_t), x_t - x^\esx\rangle$ at the points $x_t$, and since the average of the dual gaps upper bounds the primal gap of the average point, convergence follows.</p>
<p>This basic analysis is the standard analysis for <em>subgradient descent</em> and will serve as a starting point for what follows.</p>
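<p>The whole derivation fits in a few lines of code. Here is a minimal sketch (my own illustration) for $f(x) = \norm{x}_1$ with $x^\esx = 0$, using the constant step size $\eta = \frac{\norm{x_0 - x^\esx}}{G}\sqrt{\frac{1}{T}}$ derived above:</p>

```python
import numpy as np

# Subgradient descent sketch (own illustration): f(x) = ||x||_1 with x* = 0.
# sign(x) is a valid subgradient and its 2-norm is at most G = sqrt(n).
n, T = 5, 10_000
f = lambda x: float(np.abs(x).sum())
subgrad = lambda x: np.sign(x)

x = np.ones(n)
G, R = np.sqrt(n), np.linalg.norm(np.ones(n))  # R = ||x_0 - x*||
eta = R / (G * np.sqrt(T))                     # the optimized constant step size

iterates = []
for _ in range(T):
    iterates.append(x)
    x = x - eta * subgrad(x)

x_bar = np.mean(iterates, axis=0)              # guarantee applies to the average
print(f(x_bar))  # the analysis guarantees f(x_bar) - f* <= G * R / sqrt(T) = 0.05
```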
<p>Before we continue the following remarks are in order:</p>
<ol>
<li>An important observation is that in the argument above we never used that $x^\esx$ is an optimal solution; in fact the arguments hold for <em>any</em> point $u$, and in particular for some choices of $u$ the left-hand side of (regretBound) <em>can be negative</em> (in which case (convergenceSG) becomes vacuous). We will see the implications of this very soon in the online learning section below. Ultimately, subgradient descent (and also mirror descent, as we will see later) is a <em>dual method</em> in the sense that it directly minimizes the duality gap or, equivalently, maximizes the dual. That is where the strong guarantees with respect to <em>all points</em> $u$ come from.</li>
<li>Another important insight is that the argument above does not provide a <em>descent algorithm</em>, i.e., it is <em>not guaranteed</em> that we make progress in terms of primal function value in each iteration. What we do show is that our choice of $\eta$ ensures that the average point $\bar x$ converges to an optimal solution: we make progress on average.</li>
<li>In its current form the choice of $\eta$ requires prior knowledge of the total number of iterations $T$, and the guarantee <em>only</em> applies to the average point obtained from averaging over all $T$ iterations. However, this can be remedied in various ways. The poor man’s approach is to simply run the algorithm with a small $T$ and, whenever $T$ is reached, to double $T$ and restart the algorithm. This is usually referred to as the <em>doubling-trick</em>; it at most doubles the number of performed iterations, but we no longer need prior knowledge of $T$ and obtain guarantees at iterations of the form $t = 2^\ell$ for $\ell = 1,2, \dots$. The smarter way is to use a variable step size as we will show later. This requires, however, that $\norm{x_t - x^\esx} \leq D$ holds for all iterates for some constant $D$, which might be hard to ensure in general but can be safely assumed in the compact constrained case by choosing $D$ to be the diameter; the guarantees will depend on that parameter.</li>
</ol>
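<p>The doubling-trick from the last remark can be sketched as follows (my own illustration, assuming a valid a-priori bound $R \geq \norm{x_t - x^\esx}$ at every restart):</p>

```python
import numpy as np

# Doubling-trick sketch (own illustration): restart subgradient descent with
# horizons T = 2, 4, 8, ... so no a-priori knowledge of T is needed.
# Here f(x) = |x| in one dimension, x* = 0, G = 1; R = 5 bounds |x_t - x*|
# throughout, since each step either moves towards 0 or overshoots by at
# most the step size.
x, R, G = 5.0, 5.0, 1.0
x_bar = x
for k in range(1, 12):
    T = 2 ** k
    eta = R / (G * np.sqrt(T))     # step size tuned for this phase only
    total = 0.0
    for _ in range(T):
        total += x
        x = x - eta * np.sign(x)
    x_bar = total / T              # per phase: f(x_bar) - f* <= G * R / sqrt(T)
print(abs(x_bar))                  # last phase: <= 5 / sqrt(2048) ~ 0.11
```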
<h3 id="an-optimal-update">An “optimal” update</h3>
<p>Similar to the descent approach using smoothness, as done in several previous posts, such as e.g., <a href="/blog/research/2018/12/07/cheatsheet-smooth-idealized.html">Cheat Sheet: Smooth Convex Optimization</a>, we might try to pick $\eta_t$ in each step to maximize progress. Our starting point is</p>
<script type="math/tex; mode=display">% <![CDATA[
\tag{expand}
\begin{align*}
\norm{x_{t+1} - x^\esx}^2 & = \norm{x_t - x^\esx}^2 - 2 \eta_t \langle \partial f(x_t), x_t - x^\esx\rangle + \eta_t^2 \norm{\partial f(x_t)}^2,
\end{align*} %]]></script>
<p>from above and we want to choose $\eta_t$ to maximize progress in terms of $\norm{x_{t+1} - x^\esx}$ vs. $\norm{x_t - x^\esx}$, i.e., decrease in distance to the optimal solution. Observe that the right-hand side is convex in $\eta_t$ and optimizing over $\eta_t$ leads to</p>
<script type="math/tex; mode=display">\eta_t^\esx \doteq \frac{\langle \partial f(x_t), x_t - x^\esx\rangle}{\norm{\partial f(x_t)}^2}.</script>
<p>This choice of $\eta_t^\esx$ looks very similar to the choice that we have seen before, e.g., for gradient descent in the smooth case (see, e.g., <a href="/blog/research/2018/12/07/cheatsheet-smooth-idealized.html">Cheat Sheet: Smooth Convex Optimization</a>), with one important difference however: we cannot compute the above step length as we do not know $x^\esx$; we ignore this for now.</p>
<p>Plugging the step length back into (expand), we obtain:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\norm{x_{t+1} - x^\esx}^2 & = \norm{x_t - x^\esx}^2 - \frac{\langle \partial f(x_t), x_t - x^\esx\rangle^2}{\norm{\partial f(x_t)}^2}.
\end{align*} %]]></script>
<p>This shows that the squared distance to the optimal solution decreases by $\frac{\langle \partial f(x_t), x_t - x^\esx\rangle^2}{\norm{\partial f(x_t)}^2}$, i.e., the better the (sub)gradient is aligned with the idealized direction $x_t - x^\esx$, which points towards an optimal solution $x^\esx$, the faster the progress. In particular, if the alignment is perfect, then <em>one step</em> suffices. Note, however, that this is <em>only</em> hypothetical, as the computation of the optimal step length requires knowledge of an optimal solution; it simply demonstrates that a “non-deterministic” version would only require one step. This is in contrast to, e.g., gradient step progress in function value exploiting smoothness (see <a href="/blog/research/2018/12/07/cheatsheet-smooth-idealized.html">Cheat Sheet: Smooth Convex Optimization</a>). In that case, only using first-order information and smoothness, we <em>naturally</em> obtain, e.g., an $O(1/t)$-rate for the smooth case, even for the non-deterministic idealized algorithm where we guess the direction $x_t - x^\esx$ pointing towards the optimum. This is a subtle but important difference.</p>
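<p>A two-line numerical illustration (my own example) of this hypothetical one-step behavior: for $f(x) = \norm{x}_2$ the subgradient $x/\norm{x}$ is perfectly aligned with $x - x^\esx$ (here $x^\esx = 0$), so the step with $\eta_t^\esx$ lands on the optimum.</p>

```python
import numpy as np

# Hypothetical one-step example (own illustration): f(x) = ||x||_2, x* = 0.
# The subgradient x/||x|| is perfectly aligned with x - x*, so the
# (uncomputable in practice) optimal step reaches the optimum in one step.
x = np.array([3.0, -4.0])
x_star = np.zeros(2)
g = x / np.linalg.norm(x)                     # subgradient of ||.||_2 at x != 0
eta = np.dot(g, x - x_star) / np.dot(g, g)    # eta* requires knowing x*!
x_next = x - eta * g
print(np.linalg.norm(x_next - x_star))        # ~ 0 up to floating-point error
```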
<p>Finally, to add slightly more to the confusion (for now) compare the rearranged (expand) which captures progress <em>in the distance</em></p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\norm{x_t - x^\esx}^2 - \norm{x_{t+1} - x^\esx}^2 & = 2 \eta_t \langle \partial f(x_t), x_t - x^\esx\rangle - \eta_t^2 \norm{\partial f(x_t)}^2,
\end{align*} %]]></script>
<p>to the smoothness induced progress <em>in function value</em> (or primal gap) for the idealized $d \doteq x_t - x^\esx$ (see, e.g., <a href="/blog/research/2018/12/07/cheatsheet-smooth-idealized.html">Cheat Sheet: Smooth Convex Optimization</a>):</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
f(x_{t}) - f(x_{t+1}) & \geq \eta_t \langle\nabla f(x_t), x_t - x^\esx \rangle - \eta_t^2 \frac{L}{2}\norm{x_t - x^\esx}^2.
\end{align*} %]]></script>
<p>These two progress-inducing (in)equalities are very similar. In particular, in the smooth case, for tiny $\eta_t$, the progress is identical up to a factor of $2$ and lower-order terms; this is for a good reason, as I will discuss sometime in the future when we look at the continuous-time versions.</p>
<h3 id="online-learning">Online Learning</h3>
<p>In the following we will discuss the connection of the above to online learning. In <em>online learning</em> we typically consider the following setup; I simplified the setup slightly for exposition and the exact requirements will become clear from the actual algorithm that we will use.</p>
<p>We consider two players: the <em>adversary</em> and the <em>player</em>. We then play a game over $T$ rounds of the following form:</p>
<p class="mathcol"><strong>Game.</strong> For $t = 0, \dots, T-1$ do: <br />
(1) Player chooses an action $x_t$ <br />
(2) Adversary picks a (convex) function $f_t$, reveals $\partial f_t(x_t)$ and $f_t(x_t)$ <br />
(3) Player updates/learns via $\partial f_t(x_t)$ and incurs cost $f_t(x_t)$ <br /></p>
<p>The goal of the game is to minimize the so-called <em>regret</em>, which is defined as:</p>
<script type="math/tex; mode=display">\tag{regret}
\sum_{t = 0}^{T-1} f_t(x_t) - \min_{x} \sum_{t = 0}^{T-1} f_t(x),</script>
<p>which measures how well our <em>dynamic strategy</em> $x_0, \dots, x_{T-1}$ compares to the <em>single best decision in hindsight</em>, i.e., a <em>static strategy</em> under perfect information.</p>
<p>Although surprising at first, one can show that there exists an algorithm generating a strategy $x_0, \dots, x_{T-1}$ such that (regret) grows sublinearly, in fact typically at the order of $O(\sqrt{T})$, i.e., something of the following form holds:</p>
<script type="math/tex; mode=display">\sum_{t = 0}^{T-1} f_t(x_t) - \min_{x} \sum_{t = 0}^{T-1} f_t(x) \leq O(\sqrt{T}).</script>
<p>What does this mean? If we divide both sides by $T$, we obtain the so-called <em>average regret</em> and the bound becomes:</p>
<script type="math/tex; mode=display">\frac{1}{T} \left( \sum_{t = 0}^{T-1} f_t(x_t) - \min_{x} \sum_{t = 0}^{T-1} f_t(x) \right) \leq O\left(\frac{1}{\sqrt{T}}\right),</script>
<p>showing that the average mistake that we make per round, in the long run, tends to $0$ at a rate of $O\left(\frac{1}{\sqrt{T}}\right)$.</p>
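<p>To make this concrete, here is a small sketch (my own illustration) of a player updating via $x_{t+1} \leftarrow x_t - \eta\, \partial f_t(x_t)$, i.e., exactly the subgradient step from above, against an adversarial sequence of absolute-value losses:</p>

```python
import numpy as np

# Online subgradient descent sketch (own illustration): adversarial losses
# f_t(x) = |x - z_t| with z_t alternating between +1 and -1, so G = 1.
# The player updates x_{t+1} = x_t - eta * sign(x_t - z_t).
T = 400
z = np.array([1.0 if t % 2 == 0 else -1.0 for t in range(T)])

x = 2.0
u = 0.0                             # comparator; any point in [-1, 1] is optimal
eta = abs(x - u) / np.sqrt(T)       # learning rate tuned for ||x_0 - u|| = 2
player_loss = 0.0
for t in range(T):
    player_loss += abs(x - z[t])
    x = x - eta * np.sign(x - z[t])

best_fixed = np.abs(u - z).sum()    # best-in-hindsight loss, here 400
regret = player_loss - best_fixed
print(regret)                        # bounded by G * ||x_0 - u|| * sqrt(T) = 40
```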
<p>Now it is time to wonder what this has to do with what we have seen so far. It turns out that our basic analysis from above already provides a bound for the most basic unconstrained case for a given time horizon $T$. To this end recall the inequality (regretBound) that we established above:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\sum_{t = 0}^{T-1} \langle \partial f(x_t), x_t - x^\esx\rangle & \leq G \norm{x_0 - x^\esx} \sqrt{T}.
\end{align*} %]]></script>
<p>A careful look at the argument that we used to establish the inequality (regretBound) reveals that it does not actually depend on $f$ being the same in each iteration, and also that we can replace $x^\esx$ by any other feasible solution $u$ (as discussed before), so that we have also proved:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\sum_{t = 0}^{T-1} \langle \partial f_t(x_t), x_t - u\rangle & \leq G \norm{x_0 - u} \sqrt{T},
\end{align*} %]]></script>
<p>with $G$ now being a bound on the subgradients across the rounds, i.e., $\norm{\partial f_t(x_t)} \leq G$. Now, using the fact that $\partial f_t(x_t)$ is a subgradient, we obtain:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\sum_{t = 0}^{T-1} \left (f_t(x_t) - f_t(u) \right) \leq \sum_{t = 0}^{T-1} \langle \partial f_t(x_t), x_t - u \rangle & \leq G \norm{x_0 - u} \sqrt{T},
\end{align*} %]]></script>
<p>and in particular this holds for $u$ chosen as a minimizer of $\sum_{t = 0}^{T-1} f_t$:</p>
<script type="math/tex; mode=display">% <![CDATA[
\tag{regretSG}
\begin{align*}
\sum_{t = 0}^{T-1} f_t(x_t) - \min_x \sum_{t = 0}^{T-1} f_t(x) \leq \sum_{t = 0}^{T-1} \langle \partial f_t(x_t), x_t - u \rangle & \leq G \norm{x_0 - u} \sqrt{T},
\end{align*} %]]></script>
<p>which establishes sublinear regret for the actions played by the player according to:</p>
<p>\[
x_{t+1} \leftarrow x_t - \eta_t \partial f_t(x_t),
\]
with the step length $\eta \doteq \frac{\norm{x_0 - x^\esx}}{G} \sqrt{\frac{1}{T}}$, which in this context is also often referred to as <em>learning rate</em>. This setting requires knowledge of $T$ ahead of time. As discussed earlier this can be overcome, either with the doubling-trick or via the variable step length approach that we discuss further below; the cost in terms of regret is a $\sqrt{2}$-factor for the latter.</p>
<p>So what is our algorithm doing when deployed in an online setting? For this it is helpful to consider the update in iteration $t$ through (basic), rearranged for convenience:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\norm{x_{t+1} - u}^2 & = \norm{x_t - u}^2 + \eta_t^2 \norm{\partial f_t(x_t)}^2 - 2 \eta_t \langle \partial f_t(x_t), x_t - u\rangle,
\end{align*} %]]></script>
<p>so for $x_{t+1}$ to move closer to a given $u$, it is necessary that</p>
<script type="math/tex; mode=display">% <![CDATA[
\eta_t^2 \norm{\partial f_t(x_t)}^2 - 2 \eta_t \langle \partial f_t(x_t), x_t - u\rangle < 0, %]]></script>
<p>or equivalently, that</p>
<script type="math/tex; mode=display">% <![CDATA[
\frac{\eta_t}{2} \norm{\partial f_t(x_t)}^2 < \langle \partial f_t(x_t), x_t - u\rangle, %]]></script>
<p>i.e., the <em>potential gain</em>, measured by the dual gap $\langle \partial f_t(x_t), x_t - u\rangle$, must be larger than $\frac{\eta_t}{2} \norm{\partial f_t(x_t)}^2$; here $\norm{\partial f_t(x_t)}^2$ is the maximal possible gain (by Cauchy-Schwarz, the scalar product is maximized by $\partial f_t(x_t)$ itself among vectors of the same norm). As such we require an $\frac{\eta_t}{2}$-fraction of the total possible gain to move closer to $u$.</p>
<p>We will later see that other online learning variants naturally arise the same way by ‘short-cutting’ the convergence proof as we have done here. In particular, we will see that the famous <em>Multiplicative Weight Update</em> algorithm is basically obtained from short-cutting the Mirror Descent convergence proof for the probability simplex with the relative entropy as Bregman divergence; more on this later.</p>
<h2 id="the-constrained-setting-projected-subgradient-descent">The constrained setting: projected subgradient descent</h2>
<p>We will now move to the constrained setting where we require that the iterates $x_t$ are contained in some convex set $K$, i.e., $x_t \in K$. As before, our starting point is the <em>poor man’s identity</em> arising from expanding the norm. To this end we write:</p>
<script type="math/tex; mode=display">\norm{x_{t+1} - x^\esx}^2 = \norm{x_t - x^\esx}^2 - 2 \langle x_t - x_{t+1}, x_t - x^\esx \rangle + \norm{x_t - x_{t+1}}^2,</script>
<p>or in a more convenient form (by rearranging) as:</p>
<script type="math/tex; mode=display">\tag{normExpand}
2 \langle x_t - x_{t+1}, x_t - x^\esx \rangle = \norm{x_t - x^\esx}^2 - \norm{x_{t+1} - x^\esx}^2 + \norm{x_t - x_{t+1}}^2.</script>
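<p>This identity is pure algebra and easy to verify numerically (my own sanity check):</p>

```python
import numpy as np

# Numerical sanity check (own illustration) of (normExpand):
# 2<x_t - x_{t+1}, x_t - x*> =
#   ||x_t - x*||^2 - ||x_{t+1} - x*||^2 + ||x_t - x_{t+1}||^2
rng = np.random.default_rng(42)
x_t, x_t1, x_star = rng.normal(size=(3, 4))  # three random points in R^4
lhs = 2 * np.dot(x_t - x_t1, x_t - x_star)
rhs = (np.dot(x_t - x_star, x_t - x_star)
       - np.dot(x_t1 - x_star, x_t1 - x_star)
       + np.dot(x_t - x_t1, x_t - x_t1))
print(np.isclose(lhs, rhs))  # True
```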
<p>In the basic analysis of subgradient descent we then used the specific form of the update $x_{t+1} \leftarrow x_t - \eta_t \partial f(x_t)$ and then summed and telescoped. Now things are different: a hypothetical update $x_{t+1} \leftarrow x_t - \eta_t \partial f(x_t)$ might lead outside of $K$, i.e., $x_{t+1} \not\in K$ might happen. Observe though that (normExpand) still telescopes as before by simply adding up over the iterations; however, we have no idea how $\langle x_t - x_{t+1}, x_t - x^\esx \rangle$ relates to our function $f$ of interest, and clearly this has to depend on the actual step we take, i.e., on the properties of $x_{t+1}$. A natural, but slightly too optimistic, thing to hope for is to find a step $x_{t+1}$ such that</p>
<script type="math/tex; mode=display">\tag{optimistic} \langle \eta_t \partial f(x_{t}), x_t - x^\esx \rangle \leq \langle x_t - x_{t+1}, x_t - x^\esx \rangle,</script>
<p>holds, as this held even with equality in the unconstrained case. However, suppose we can show the following:</p>
<script type="math/tex; mode=display">\tag{lookAhead} \langle \eta_t \partial f(x_{t}), x_{t+1} - x^\esx \rangle \leq \langle x_t - x_{t+1}, x_{t+1} - x^\esx \rangle.</script>
<p>Note the subtle difference in the indices in the $x_{t+1} - x^\esx$ term. It is much easier to show (lookAhead), as we will do further below, because the point $x_{t+1}$, which we choose as a function of $\partial f(x_t)$ and $x_t$, is under our control; in comparison, $x_t$ is already fixed at time $t$. However, this is not yet good enough to telescope out the sums due to the mismatch in indices. The following observation remedies the situation by undoing the index shift and quantifying the change:</p>
<p class="mathcol"><strong>Observation.</strong> If $\langle \eta_t \partial f(x_{t}), x_{t+1} - x^\esx \rangle \leq \langle x_t - x_{t+1}, x_{t+1} - x^\esx \rangle$, then
\[
\tag{lookAheadIneq}
\begin{align}
\langle \eta_t \partial f(x_{t}), x_{t} - x^\esx \rangle & \leq \langle x_t - x_{t+1}, x_{t} - x^\esx \rangle \newline
\nonumber & - \frac{1}{2}\norm{x_t - x_{t+1}}^2 \newline
\nonumber & +\frac{1}{2}\norm{\eta_t \partial f(x_t)}^2
\end{align}
\]</p>
<p>Before proving the observation, observe that in the unconstrained case, where we choose $x_{t+1} = x_t - \eta_t \partial f(x_t)$, the inequality in the observation reduces to (optimistic), holding even with equality, and plugging this back into our poor man’s identity exactly recovers the basic argument from the beginning of the post. This is good news, as it indicates that the observation reduces to what we already know in the unconstrained case. As such we might want to think of the observation as relating the step $x_{t+1} - x_t$ that we take to $\partial f(x_t)$, assuming that we can choose $x_{t+1}$ to satisfy (lookAhead).</p>
<p><em>Proof (of observation).</em>
Our starting point is the inequality (lookAhead) whose validity we establish a little later:</p>
<script type="math/tex; mode=display">\langle \eta_t \partial f(x_{t}), x_{t+1} - x^\esx \rangle \leq \langle x_t - x_{t+1}, x_{t+1} - x^\esx \rangle.</script>
<p>We will simply brute-force rewrite the inequality into the desired form and collect the error terms in the process. The above inequality is equivalent to:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
& \langle \eta_t \partial f(x_{t}), x_{t} - x^\esx \rangle + \langle \eta_t \partial f(x_{t}), x_{t+1} - x_t \rangle \\
\leq\ & \langle x_t - x_{t+1}, x_{t} - x^\esx \rangle + \langle x_t - x_{t+1}, x_{t+1} - x_t \rangle.
\end{align*} %]]></script>
<p>Rewriting we obtain:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
& \langle \eta_t \partial f(x_{t}), x_{t} - x^\esx \rangle \\
\leq\ & \langle x_t - x_{t+1}, x_{t} - x^\esx \rangle -\norm{x_{t+1} - x_t}^2 - \langle \eta_t \partial f(x_{t}), x_{t+1} - x_t \rangle \\
=\ & \langle x_t - x_{t+1}, x_{t} - x^\esx \rangle - \frac{1}{2}\norm{x_{t+1} - x_t}^2 - \frac{1}{2}\left(\norm{x_{t+1} - x_t}^2 - 2 \langle \eta_t \partial f(x_{t}), x_{t+1} - x_t \rangle\right) \\
\leq\ & \langle x_t - x_{t+1}, x_{t} - x^\esx \rangle - \frac{1}{2}\norm{x_{t+1} - x_t}^2 + \frac{1}{2} \norm{ \eta_t \partial f(x_{t})}^2,
\end{align*} %]]></script>
<p>where the last inequality uses the binomial formula, i.e., $\norm{a-b}^2 = \norm{a}^2 - 2\langle a, b\rangle + \norm{b}^2 \geq 0$ and hence $\norm{a}^2 - 2\langle a, b\rangle \geq -\norm{b}^2$, applied with $a = x_{t+1} - x_t$ and $b = \eta_t \partial f(x_t)$.</p>
<p>$\qed$</p>
<p>With the observation we can immediately conclude our convergence proof and the argument becomes identical to the basic case from above. Recall that our starting point is (normExpand):</p>
<script type="math/tex; mode=display">2 \langle x_t - x_{t+1}, x_t - x^\esx \rangle = \norm{x_t - x^\esx}^2 - \norm{x_{t+1} - x^\esx}^2 + \norm{x_t - x_{t+1}}^2.</script>
<p>Now we can estimate the term on the left-hand side using our observation. This leads to:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
& 2 \langle \eta_t \partial f(x_{t}), x_{t} - x^\esx \rangle + \norm{x_t - x_{t+1}}^2 - \norm{\eta_t \partial f(x_t)}^2\\
& \leq 2 \langle x_t - x_{t+1}, x_t - x^\esx \rangle \\
& = \norm{x_t - x^\esx}^2 - \norm{x_{t+1} - x^\esx}^2 + \norm{x_t -x_{t+1}}^2,
\end{align*} %]]></script>
<p>and after subtracting $\norm{x_t - x_{t+1}}^2$ and adding $\norm{\eta_t \partial f(x_t)}^2$, we obtain:</p>
<script type="math/tex; mode=display">2 \langle \eta_t \partial f(x_{t}), x_{t} - x^\esx \rangle
\leq \norm{x_t - x^\esx}^2 - \norm{x_{t+1} - x^\esx}^2 + \norm{\eta_t \partial f(x_t)}^2,</script>
<p>which is exactly (basic) from above, and we can conclude the argument in the same way: summing up, telescoping, and then optimizing $\eta_t$. In particular, the convergence rate (convergenceSG) and the regret bound (regretBound) stay the same, with no deterioration due to constraints or projections:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\sum_{t = 0}^{T-1} \langle \partial f(x_t), x_t - x^\esx\rangle & \leq G \norm{x_0 - x^\esx} \sqrt{T},
\end{align*} %]]></script>
<p>and</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
f(\bar x) - f(x^\esx) & \leq G \norm{x_0 - x^\esx} \frac{1}{\sqrt{T}}
\end{align*} %]]></script>
<p>So the key is really establishing (lookAhead) as it immediately implies all we need to establish convergence in the constrained case. This is what we will do now, which will also, finally, specify our choice of $x_{t+1}$.</p>
<h3 id="using-optimization-to-prove-what-you-want">Using optimization to prove what you want</h3>
<p>Have you ever wondered why people add these weird 2-norms to their optimization problem to “regularize” the problem, i.e., they solve problems of the form $\min_{x} f(x) + \lambda \norm{x - z}^2$? Then this section might provide some insight into this. We will see that it is actually not about the “problem” that is solved, but about what an optimal solution might guarantee; bear with me.</p>
<p>So what we want to establish is inequality (lookAhead), i.e.,</p>
<script type="math/tex; mode=display">\langle \eta_t \partial f(x_{t}), x_{t+1} - x^\esx \rangle \leq \langle x_t - x_{t+1}, x_{t+1} - x^\esx \rangle,</script>
<p>or slightly more generally stated as our proof will work for <em>all</em> $u \in K$ (and in particular the choice $u = x^\esx$):</p>
<script type="math/tex; mode=display">\langle \eta_t \partial f(x_{t}), x_{t+1} - u \rangle \leq \langle x_t - x_{t+1}, x_{t+1} - u \rangle.</script>
<p>Rearranging the above we obtain:</p>
<script type="math/tex; mode=display">\tag{optCon} \langle \eta_t \partial f(x_{t}), x_{t+1} - u \rangle - \langle x_t - x_{t+1}, x_{t+1} - u \rangle \leq 0.</script>
<p>What we will do now is to interpret the above as an <em>optimality condition</em> of some smooth convex optimization problem of the form $\min_{x \in K} g(x)$, where $g(x)$ is some smooth and convex function. Recall, from the previous posts, e.g., <a href="/blog/research/2018/10/05/cheatsheet-fw.html">Cheat Sheet: Frank-Wolfe and Conditional Gradients</a>, that the first-order optimality condition states that for all $u \in K$ it holds:</p>
<script type="math/tex; mode=display">\langle \nabla g(x), x - u \rangle \leq 0,</script>
<p>provided that $x \in K$ is an optimal solution, as otherwise we would be able to make progress via, e.g., a gradient step or a Frank-Wolfe step. By simply reverse engineering (aka remembering how we differentiate), we guess</p>
<script type="math/tex; mode=display">\tag{proj} g(x) \doteq \langle \eta_t \partial f(x_{t}), x \rangle + \frac{1}{2}\norm{x-x_t}^2,</script>
<p>so that its optimality condition produces (optCon). We now simply choose</p>
<script type="math/tex; mode=display">\tag{constrainedStep}x_{t+1} \doteq \arg\min_{x \in K} \langle \eta_t \partial f(x_{t}), x \rangle + \frac{1}{2}\norm{x-x_t}^2,</script>
<p>and (just to be sure) we inspect the optimality condition that states:</p>
<script type="math/tex; mode=display">\begin{align*}
\langle \eta_t \partial f(x_{t}), x_{t+1} - u \rangle - \langle x_t - x_{t+1}, x_{t+1} - u \rangle = \langle \nabla g(x_{t+1}), x_{t+1} - u \rangle \leq 0,
\end{align*}</script>
<p>which is exactly (lookAhead). This step then ensures convergence with (maybe surprisingly) a rate identical to the unconstrained case. The resulting algorithm is often referred to as <em>projected subgradient descent</em> and the problem whose optimal solution defines $x_{t+1}$ is the projection problem. We provide the <em>projected subgradient descent</em> algorithm below:</p>
<p class="mathcol"><strong>Projected Subgradient Descent Algorithm.</strong> <br />
<em>Input:</em> Convex function $f$ with first-order oracle access and some initial point $x_0 \in K$<br />
<em>Output:</em> Sequence of points $x_0, \dots, x_T$ <br />
For $t = 0, \dots, T-1$ do: <br />
$\quad x_{t+1} \leftarrow \arg\min_{x \in K} \langle \eta_t \partial f(x_{t}), x \rangle + \frac{1}{2}\norm{x-x_t}^2$<br /></p>
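<p>To make the update concrete, here is a small numerical sketch (the objective, the feasible set, and the step lengths below are illustrative choices, not from the post). For the Euclidean unit ball the argmin in (constrainedStep) has a closed form: a gradient step followed by a projection.</p>

```python
import numpy as np

def projected_subgradient_descent(subgrad, project, x0, eta, T):
    # x_{t+1} = argmin_{x in K} <eta_t * g_t, x> + 0.5 * ||x - x_t||^2
    #         = Pi_K(x_t - eta_t * g_t): gradient step, then projection onto K
    x = np.asarray(x0, dtype=float)
    iterates = [x]
    for t in range(T):
        x = project(x - eta(t) * subgrad(x))
        iterates.append(x)
    return iterates

# Illustrative instance: minimize f(x) = ||x - c||_1 over the Euclidean unit ball.
c = np.array([2.0, -1.0])
subgrad = lambda x: np.sign(x - c)                    # a subgradient of f
project = lambda y: y / max(1.0, np.linalg.norm(y))   # projection onto the unit ball
eta = lambda t: 1.0 / np.sqrt(t + 1)                  # variable step length

xs = projected_subgradient_descent(subgrad, project, np.zeros(2), eta, 500)
x_bar = np.mean(xs, axis=0)                           # averaged iterate
```

Since every iterate stays in the ball, so does the average, and its function value quickly drops below that of the starting point.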
<h3 id="variable-step-length">Variable step length</h3>
<p>We will now briefly explain how to replace the constant step length from before that requires a priori knowledge of $T$ by a variable step length, so that the convergence guarantee holds for any iterate $x_t$. To this end let $D \geq 0$ be a constant so that $\max_{x,y \in K} \norm{x-y} \leq D$. We now choose $\eta_t \doteq \tau \sqrt{\frac{1}{t+1}}$, where we will specify the constant $\tau \geq 0$ soon.</p>
<p class="mathcol"><strong>Observation.</strong> For $\eta_t$ as above it holds:
\[\sum_{t = 0}^{T-1} \eta_t \leq \tau\left(2 \sqrt{T} - 1\right).\]</p>
<p><em>Proof.</em> There are various ways of showing the above. We follow the argument in [Z]. We have:
<script type="math/tex">% <![CDATA[
\begin{align*}
\sum_{t = 0}^{T-1} \eta_t & = \tau \sum_{t = 0}^{T-1} \frac{1}{\sqrt{t+1}} \\
& \leq \tau \left(1 + \int_{0}^{T-1}\frac{dt}{\sqrt{t+1}}\right) \\
& \leq \tau \left(1 + \left[2 \sqrt{t+1}\right]_0^{T-1} \right) = \tau (2\sqrt{T}-1) \qed
\end{align*} %]]></script></p>
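<p>As a quick numerical sanity check of the observation (the values of $T$ and $\tau$ below are arbitrary):</p>

```python
import math

# sum_{t=0}^{T-1} eta_t with eta_t = tau / sqrt(t+1), versus the claimed
# upper bound tau * (2*sqrt(T) - 1) from the observation.
def step_sum(T, tau=1.0):
    return sum(tau / math.sqrt(t + 1) for t in range(T))

checks = [(step_sum(T), 2 * math.sqrt(T) - 1) for T in (1, 10, 100, 10_000)]
```

Note that for $T = 1$ the bound is tight: both sides equal $\tau$.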
<p>Now we restart from inequality (basic) from earlier:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
2\eta_t \langle \partial f(x_t), x_t - x^\esx\rangle & \leq \norm{x_t - x^\esx}^2 - \norm{x_{t+1} - x^\esx}^2 + \eta_t^2 \norm{\partial f(x_t)}^2,
\end{align*} %]]></script>
<p>however before we sum up and telescope we first divide by $2\eta_t$, i.e.,</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\sum_{t=0}^{T-1}\langle \partial f(x_t), x_t - x^\esx\rangle & \leq \sum_{t=0}^{T-1} \left(\frac{\norm{x_t - x^\esx}^2}{2\eta_t} - \frac{\norm{x_{t+1} - x^\esx}^2}{2\eta_t} + \frac{\eta_t}{2} \norm{\partial f(x_t)}^2\right) \\
& \leq \frac{\norm{x_0 - x^\esx}^2}{2\eta_{0}} - \frac{\norm{x_{T} - x^\esx}^2}{2\eta_{T-1}} \\ & \qquad + \frac{1}{2} \sum_{t=1}^{T-1} \left(\frac{1}{\eta_{t}} - \frac{1}{\eta_{t-1}} \right) \norm{x_t - x^\esx}^2 + \sum_{t=0}^{T-1}\frac{\eta_t}{2} \norm{\partial f(x_t)}^2 \\
& \leq D^2 \left(\frac{1}{2\eta_0} + \frac{1}{2} \sum_{t=1}^{T-1} \left(\frac{1}{\eta_{t}} - \frac{1}{\eta_{t-1}} \right) \right) + \sum_{t=0}^{T-1}\frac{\eta_t}{2} G^2 \\
& \leq \frac{D^2 }{2\eta_{T-1}} + \sum_{t=0}^{T-1}\frac{\eta_t}{2} G^2 \\
& \leq \frac{1}{2}\left(\frac{D^2 }{\tau} \sqrt{T} + 2 G^2 \tau \sqrt{T} \right) = DG\sqrt{2T},
\end{align*} %]]></script>
<p>where we applied the observation from above in the last but one inequality, plugged in the definition of $\eta_t$, and used the choice $\tau \doteq \frac{D}{G\sqrt{2}}$ in the last equation, which minimizes the term in the brackets in the last inequality. In summary, we have shown:</p>
<script type="math/tex; mode=display">\tag{regretBoundAnytime}
\sum_{t=0}^{T-1}\langle \partial f(x_t), x_t - x^\esx\rangle \leq DG\sqrt{2T}.</script>
<p>From (regretBoundAnytime) we can now derive convergence rates as usual: summing up, then averaging the iterates, and using convexity.</p>
<p>It is useful to compare (regretBoundAnytime) to the case with fixed step length, which is given in (regretBound): using a variable step length costs us a factor of $\sqrt{2}$, however the above bound in (regretBoundAnytime) now holds for <em>all</em> $t$ and a priori knowledge of $T$ is not required. Such regret bounds are sometimes referred to as <em>anytime regret bounds</em>.</p>
<h3 id="online-sub-gradient-descent">Online (Sub-)Gradient Descent</h3>
<p>Starting from (regretBoundAnytime), we can also follow the same path as in the online learning section from above. This recovers the Online (Sub-)Gradient Descent algorithm of [Z]: Consider the online learning setting from before and choose</p>
<script type="math/tex; mode=display">x_{t+1} \leftarrow \arg\min_{x \in K} \langle \eta_t \partial f_t(x_{t}), x \rangle + \frac{1}{2}\norm{x-x_t}^2.</script>
<p>Then, we obtain the regret bound</p>
<script type="math/tex; mode=display">\tag{regretOGDanytime}
\sum_{t=0}^{T-1} f_t(x_t) - \min_{x \in K} \sum_{t=0}^{T-1} f_t(x) \leq
\sum_{t=0}^{T-1}\langle \partial f_t(x_t), x_t - x^\esx\rangle \leq DG\sqrt{2T},</script>
<p>in the anytime setting and</p>
<script type="math/tex; mode=display">\tag{regretOGD}
\sum_{t=0}^{T-1} f_t(x_t) - \min_{x \in K} \sum_{t=0}^{T-1} f_t(x) \leq
\sum_{t=0}^{T-1}\langle \partial f_t(x_t), x_t - x^\esx\rangle \leq DG\sqrt{T},</script>
<p>when $T$ is known ahead of time, where $D$ and $G$ are a bound on the diameter of the feasible domain and the norm of the gradients respectively as before.</p>
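<p>A minimal sketch of Online (Sub-)Gradient Descent on linear losses over a box (the loss sequence, the feasible set, and the constants are illustrative assumptions; the projection onto a box is simply coordinate-wise clipping):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 5, 2000
loss_grads = rng.uniform(-1, 1, size=(T, n))   # f_t(x) = <g_t, x>, revealed online

# K = [-1, 1]^n: D and G bound the l2-diameter and gradient norms;
# tau = D / (G * sqrt(2)) and eta_t = tau / sqrt(t+1) as in the anytime bound.
D, G = 2 * np.sqrt(n), np.sqrt(n)
tau = D / (G * np.sqrt(2))

x = np.zeros(n)
incurred, g_sum = 0.0, np.zeros(n)
for t in range(T):
    g = loss_grads[t]
    incurred += g @ x                          # pay f_t(x_t), then update
    g_sum += g
    x = np.clip(x - tau / np.sqrt(t + 1) * g, -1.0, 1.0)

best_fixed = -np.abs(g_sum).sum()              # min_{x in K} <sum_t g_t, x>
regret = incurred - best_fixed
```

The realized regret should come in well below the anytime guarantee $DG\sqrt{2T}$.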
<h2 id="mirror-descent">Mirror Descent</h2>
<p>We will now derive Nemirovski’s Mirror Descent algorithm (see e.g., [NY]), loosely following the proximal perspective outlined in [BT]. Simplifying and running the risk of attracting the wrath of the optimization titans, <em>Mirror Descent</em> arises from subgradient descent by replacing the $\ell_2$-norm with a “generalized distance” that satisfies the inequalities that we needed in the basic argument from above.</p>
<p>Why would you want to do that? Adjusting the distance function will allow us to fine-tune the iterates and the resulting dimension-dependent term for the geometry under consideration.</p>
<p>In the following, as we move away from the $\ell_2$-norm, which is self-dual, we will need the definition of the <em>dual norm</em> defined as <script type="math/tex">\norm{w}_\esx \doteq \max\setb{\langle w , x \rangle : \norm{x} = 1}</script>. Note that for the $\ell_p$-norm the $\ell_q$-norm is dual if $\frac{1}{p} + \frac{1}{q} = 1$. For the $\ell_1$-norm the dual norm is $\ell_\infty$. We will also need the <em>(generalized) Cauchy-Schwarz inequality</em> or <em>Hölder inequality</em>: $\langle y , x \rangle \leq \norm{y}_\esx \norm{x}$. A very useful consequence of this inequality is:</p>
<script type="math/tex; mode=display">\tag{genBinomial}
\norm{a}^2 - 2 \langle a , b \rangle + \norm{b}^2_\esx \geq 0,</script>
<p>which follows from</p>
<script type="math/tex; mode=display">\begin{align*}
\norm{a}^2 - 2 \langle a , b \rangle + \norm{b}^2_\esx \geq \norm{a}^2 - 2 \norm{a} \norm{b}_\esx + \norm{b}^2_\esx = (\norm{a} - \norm{b}_\esx)^2 \geq 0.
\end{align*}</script>
<h3 id="generalized-distance-aka-bregman-divergence">“Generalized Distance” aka Bregman divergence</h3>
<p>We will first introduce the generalization of norms that we will be working with. To this end, let us first collect the desired properties that we needed in the proof of the basic argument; I will already suggestively use the final notation to not create notational overload. Let our desired function be called $V_x(y)$ and let us further assume in a first step the choice $V_x(y) = \frac{1}{2} \norm{x-y}^2$; note the factor $\frac{1}{2}$ is only used to make the proofs cleaner.</p>
<p>In the very first step we used the expansion of the $\ell_2$-norm. As $x^\esx$ plays no special role, we write everything with respect to any feasible $u$:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\norm{x_{t+1} - u}^2 & = \norm{x_t - u}^2 - 2 \langle x_{t} - x_{t+1}, x_t - u\rangle + \norm{x_{t+1} - x_t}^2.
\end{align*} %]]></script>
<p>Rescaling and substituting $V_x(y) = \frac{1}{2} \norm{x-y}^2$, and observing that $\nabla V_x(y) = y - x$, we obtain:</p>
<script type="math/tex; mode=display">% <![CDATA[
\tag{req-1}
\begin{align*}
V_{x_{t+1}}(u) & = V_{x_{t}}(u) - \langle \nabla V_{x_{t+1}}(x_{t}), x_t - u\rangle + V_{x_{t+1}}(x_{t}),
\end{align*} %]]></script>
<p>where the last term $V_{x_{t+1}}(x_{t})$ reflects a somewhat arbitrary choice: by symmetry of the $\ell_2$-norm, $V_{x_{t}}(x_{t+1})$ would have been feasible as well.</p>
<p>We will need another inequality if we aim to mimic the same proof in the constrained case. Recall that we needed (lookAheadIneq)</p>
<p>\[\langle \eta_t \partial f(x_{t}), x_{t} - x^\esx \rangle \leq \langle x_t - x_{t+1}, x_{t} - x^\esx \rangle - \frac{1}{2}\norm{x_t - x_{t+1}}^2+\frac{1}{2}\norm{\eta_t \partial f(x_t)}^2,\]</p>
<p>to relate the step $x_{t+1} - x_t$ that we take with $\partial f(x_t)$, assuming that $x_{t+1}$ was chosen appropriately. In the proof the term $\frac{1}{2}\norm{x_t - x_{t+1}}^2$ simply arose from the mechanics of the (standard) scalar product, which is inherently linked to the $\ell_2$-norm. Slightly jumping ahead (the later proof will make this requirement natural), we additionally require</p>
<script type="math/tex; mode=display">\tag{req-2}
\begin{align*}
V_x(y) \geq \frac{1}{2}\norm{x-y}^2,
\end{align*}</script>
<p>Moreover, the term $\frac{1}{2}\norm{\eta_t \partial f(x_t)}^2$ in (lookAheadIneq) actually uses the <em>dual norm</em>, which we did not have to pay attention to as the $\ell_2$-norm is self-dual. We will redo the full argument in the next sections with the correct distinctions for completeness. First, however, we will complete the definition of $V_x(y)$.</p>
<p>There is a natural class of functions that satisfy (req-1) and (req-2), so called <em>Bregman divergences</em>, which are defined through <em>Distance Generating Functions (DGFs)</em>. Let us choose some norm $\norm{\cdot}$, which is not necessarily the $\ell_2$-norm.</p>
<p class="mathcol"><strong>Definition. (DGF and Bregman Divergence)</strong> Let $K \subseteq \RR^n$ be a closed convex set. Then $\phi: K \rightarrow \RR$ is called a <em>Distance Generating Function (DGF)</em> if $\phi$ is $1$-strongly convex with respect to $\norm{\cdot}$, i.e., for all $x \in K \setminus \partial K, y \in K$ we have $\phi(y) \geq \phi(x) + \langle \nabla \phi(x), y-x \rangle + \frac{1}{2}\norm{x-y}^2$. The <em>Bregman divergence (induced by $\phi$)</em> is defined as
\[V_x(y) \doteq \phi(y) - \langle \nabla \phi(x), y - x \rangle - \phi(x),\]
$x \in K \setminus \partial K, y \in K$.</p>
<p>Observe that the strong convexity requirement of the DGF is with respect to the chosen norm. This is important as it allows us to “fine-tune” our geometry. Before we establish some basic properties of Bregman divergences, here are two common examples:</p>
<p class="mathcol"><strong>Examples. (Bregman Divergences)</strong> <br />
(a) Let $\norm{x} \doteq \norm{x}_2$ be the $\ell_2$-norm and $\phi(x) \doteq \frac{1}{2} \norm{x}^2$. Clearly, $\phi(x)$ is $1$-strongly convex with respect to $\norm{\cdot}$ (for any $K$). The resulting Bregman divergence is $V_x(y) = \frac{1}{2}\norm{x-y}^2$, which is the choice used for (projected) subgradient descent above. <br />
(b) Let <script type="math/tex">\norm{x} \doteq \norm{x}_1</script> be the $\ell_1$-norm and <script type="math/tex">\phi(x) \doteq \sum_{i \in [n]} x_i \log x_i</script> be the (negative) entropy. Then $\phi(x)$ is $1$-strongly convex with respect to <script type="math/tex">\norm{\cdot}_1</script> for all <script type="math/tex">K \subseteq \Delta_n \doteq \setb{x \geq 0 \mid \sum_{i \in [n]}x_i = 1}</script>, the <em>probability simplex</em>. The resulting Bregman divergence is <script type="math/tex">V_x(y) = \sum_{i \in [n]} y_i \log \frac{y_i}{x_i} = D(y \| x)</script>, which is the <em>Kullback-Leibler divergence</em> or <em>relative entropy</em>.</p>
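<p>Example (b) can be checked directly in a few lines; the test points on the simplex below are arbitrary:</p>

```python
import numpy as np

# Bregman divergence of the negative entropy phi(x) = sum_i x_i log x_i:
# V_x(y) = phi(y) - <grad phi(x), y - x> - phi(x), with grad phi(x) = log(x) + 1.
phi = lambda x: np.sum(x * np.log(x))
grad_phi = lambda x: np.log(x) + 1.0
bregman = lambda x, y: phi(y) - grad_phi(x) @ (y - x) - phi(x)
kl = lambda y, x: np.sum(y * np.log(y / x))   # relative entropy D(y || x)

x = np.array([0.2, 0.3, 0.5])                 # interior points of the simplex
y = np.array([0.1, 0.6, 0.3])
V = bregman(x, y)
```

On the simplex the linear terms cancel (as $\sum_i (y_i - x_i) = 0$), so $V_x(y)$ coincides with $D(y \| x)$, and Pinsker's inequality confirms the compatibility $V_x(y) \geq \frac{1}{2}\norm{x-y}_1^2$.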
<p>We will now establish some basic properties for $V_x(y)$ and show that $V_x(y)$ satisfies the required properties:</p>
<p class="mathcol"><strong>Lemma. (Properties of the Bregman Divergence)</strong> Let $V_x(y)$ be a Bregman divergence defined via some DGF $\phi$. Then the following holds: <br />
(a) Point-separating: $V_x(x) = 0$ (and also $V_x(y) = 0 \Leftrightarrow x = y$ via (b))<br />
(b) Compatible with norm: $V_x(y) \geq \frac{1}{2} \norm{x-y}^2 \geq 0$<br />
(c) $\Delta$-Inequality: $\langle - \nabla V_x(y), y - u \rangle = V_x(u) - V_y(u) - V_x(y)$</p>
<p><em>Proof.</em> Property (a) follows directly from the definition and (b) follows from $\phi$ in the definition of $V_x(y)$ being $1$-strongly convex. Property (c) follows from straightforward expansion and computation:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\langle - \nabla V_x(y), y - u \rangle & = \langle\nabla\phi(x) -\nabla \phi(y) , y-u \rangle \\
& = (\phi(u) - \phi(x) - \langle \nabla \phi(x), u -x \rangle) \\
& - (\phi(u) - \phi(y) - \langle \nabla \phi(y), u - y \rangle) \\
& - (\phi(y) - \phi(x) - \langle \nabla \phi(x), y - x \rangle) \\
& = V_x(u) - V_y(u) - V_x(y).
\end{align*} %]]></script>
<script type="math/tex; mode=display">\qed</script>
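<p>For the relative entropy from the examples above, property (c) can also be verified numerically (the three points are arbitrary interior points of the simplex); here $\nabla V_x(y) = \log y - \log x$:</p>

```python
import numpy as np

kl = lambda y, x: np.sum(y * np.log(y / x))   # V_x(y) = D(y || x) on the simplex

x = np.array([0.2, 0.3, 0.5])
y = np.array([0.1, 0.6, 0.3])
u = np.array([0.4, 0.4, 0.2])

# <-grad V_x(y), y - u>, with grad_y V_x(y) = log(y) - log(x)
lhs = -(np.log(y) - np.log(x)) @ (y - u)
rhs = kl(u, x) - kl(u, y) - kl(y, x)          # V_x(u) - V_y(u) - V_x(y)
```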
<h3 id="back-to-basics">Back to basics</h3>
<p>In a first step we will redo our basic argument from the beginning of the post with a Bregman divergence instead of the expansion of the $\ell_2$-norm. To this end let $K \subseteq \RR^n$ (possibly $K = \RR^n$) be a closed convex set. We consider a generic algorithm that produces iterates $x_1, \dots, x_t, \dots$. We will define the choice of the iterates later. Our starting point is the $\Delta$-inequality of the Bregman divergence with the choices $y \leftarrow x_{t+1}$, $x \leftarrow x_t$, and $u \in K$ arbitrary:</p>
<script type="math/tex; mode=display">\tag{basicBreg}
\langle - \nabla V_{x_t}(x_{t+1}), x_{t+1} - u \rangle = V_{x_t}(u) - V_{x_{t+1}}(u) - V_{x_t}(x_{t+1}).</script>
<p>We could now try the same strategy, summing up and telescoping out:</p>
<script type="math/tex; mode=display">\sum_{t = 0}^{T-1} \langle - \nabla V_{x_t}(x_{t+1}), x_{t+1} - u \rangle = V_{x_0}(u) - V_{x_{T}}(u) - \sum_{t = 0}^{T-1} V_{x_t}(x_{t+1}).</script>
<p>But how to continue? First observe that, in contrast to the telescoping of the $\ell_2$-norm expansion, we have a <em>negative</em> term on the right-hand side (this is technical and could have been done the same way for the $\ell_2$-norm), and the left-hand side, as of now, has no relation to the function $f$; clearly, we have not even defined our step yet. So let us try the obvious first-order guess, i.e., replacing the $\ell_2$-norm with the Bregman divergence.</p>
<p>To this end, we define
<script type="math/tex">\tag{IteratesMD} x_{t+1} \doteq \arg\min_{x \in K} \langle \eta_t \partial f(x_t), x \rangle + V_{x_t}(x).</script></p>
<p>Mimicking the approach we took for projected gradient descent, let us inspect the optimality condition of the system. For all $u \in K$ it holds:</p>
<script type="math/tex; mode=display">\tag{optConBreg}
\langle \eta_t \partial f(x_t),x_{t+1} - u \rangle + \langle \nabla V_{x_t}(x_{t+1}), x_{t+1} - u \rangle \leq 0,</script>
<p>or equivalently we obtain:</p>
<script type="math/tex; mode=display">\tag{lookaheadMD}
\langle \eta_t \partial f(x_t),x_{t+1} - u \rangle \leq - \langle \nabla V_{x_t}(x_{t+1}), x_{t+1} - u \rangle.</script>
<p>As before, we now have to fix the index mismatch on the left:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\langle \eta_t \partial f(x_t),x_{t+1} - u \rangle & \leq - \langle \nabla V_{x_t}(x_{t+1}), x_{t+1} - u \rangle \\
\Leftrightarrow \langle \eta_t \partial f(x_t),x_{t} - u \rangle + \langle \eta_t \partial f(x_t),x_{t+1} - x_t \rangle & \leq - \langle \nabla V_{x_t}(x_{t+1}), x_{t+1} - u \rangle
\end{align*} %]]></script>
<p>This we can then plug back into (basicBreg) to obtain the key inequality for Mirror Descent:</p>
<script type="math/tex; mode=display">% <![CDATA[
\tag{basicMD}
\begin{align*}
\langle \eta_t \partial f(x_t),x_{t} - u \rangle & \leq - \langle \nabla V_{x_t}(x_{t+1}), x_{t+1} - u \rangle - \langle \eta_t \partial f(x_t),x_{t+1} - x_t \rangle \\
& = V_{x_t}(u) - V_{x_{t+1}}(u) - V_{x_t}(x_{t+1}) - \langle \eta_t \partial f(x_t),x_{t+1} - x_t \rangle \\
& \leq V_{x_t}(u) - V_{x_{t+1}}(u) - \frac{1}{2} \norm{x_t - x_{t+1}}^2 - \langle \eta_t \partial f(x_t),x_{t+1} - x_t \rangle \\
& \leq V_{x_t}(u) - V_{x_{t+1}}(u) + \left (\langle \eta_t \partial f(x_t),x_t - x_{t+1} \rangle- \frac{1}{2} \norm{x_t - x_{t+1}}^2 \right) \\
& \leq V_{x_t}(u) - V_{x_{t+1}}(u) + \frac{\eta_t^2}{2}\norm{\partial f(x_t)}_\esx^2,
\end{align*} %]]></script>
<p>where the last inequality follows via (genBinomial). We can now simply sum up and telescope out to obtain the generic regret bound for Mirror Descent:</p>
<script type="math/tex; mode=display">\tag{regretBoundMD}
\begin{align*}
\sum_{t=0}^{T-1} \langle \eta_t \partial f(x_t),x_{t} - u \rangle \leq V_{x_0}(u) + \sum_{t=0}^{T-1} \frac{\eta_t^2}{2} \norm{\partial f(x_t)}_\esx^2,
\end{align*}</script>
<p>and further we can again use convexity, averaging of the iterates, and picking $\eta_t = \eta \doteq \sqrt{\frac{2M}{G^2T}}$ (by optimizing out) to arrive at the convergence rate of Mirror Descent:</p>
<script type="math/tex; mode=display">% <![CDATA[
\tag{convergenceMD}
\begin{align*}
f(\bar x) - f(x^\esx) & \leq \frac{1}{T} \sum_{t=0}^{T-1} \left (f(x_t) - f(x^\esx) \right) \\
& \leq \frac{1}{T} \sum_{t=0}^{T-1} \langle \partial f(x_t),x_{t} - x^\esx \rangle \leq \frac{M}{\eta T} + \frac{\eta G^2}{2} \\
& \leq \sqrt{\frac{2M G^2}{T}},
\end{align*} %]]></script>
<p>where $\norm{\partial f(x_t)}_\esx \leq G$ and <script type="math/tex">V_{x_0}(u) \leq M</script> for all $u \in K$.</p>
<p>For completeness, the Mirror Descent algorithm is specified below:</p>
<p class="mathcol"><strong>Mirror Descent Algorithm.</strong> <br />
<em>Input:</em> Convex function $f$ with first-order oracle access and some initial point $x_0 \in K$<br />
<em>Output:</em> Sequence of points $x_0, \dots, x_T$ <br />
For $t = 0, \dots, T-1$ do: <br />
$\quad x_{t+1} \leftarrow \arg\min_{x \in K} \langle \eta_t \partial f(x_t), x \rangle + V_{x_t}(x)$</p>
<h3 id="online-mirror-descent-and-multiplicative-weights">Online Mirror Descent and Multiplicative Weights</h3>
<p>Alternatively, starting from (regretBoundMD) we can yet again observe that one could use a different function $f_t$ in each iteration $t$, which leads us to <em>Online Mirror Descent</em> as we will briefly discuss in this section. From (regretBoundMD) we have with $\eta_t = \eta$ chosen below:</p>
<script type="math/tex; mode=display">\begin{align*}
\sum_{t=0}^{T-1} \langle \eta \partial f_t(x_t),x_{t} - u \rangle \leq V_{x_0}(u) + \frac{\eta^2}{2} \sum_{t=0}^{T-1} \norm{\partial f_t(x_t)}_\esx^2.
\end{align*}</script>
<p>Rearranging with $\norm{\partial f_t(x_t)}_\esx \leq G$ and <script type="math/tex">V_{x_0}(u) \leq M</script> for all $u \in K$, gives</p>
<script type="math/tex; mode=display">\begin{align*}
\sum_{t=0}^{T-1} \langle \partial f_t(x_t),x_{t} - u \rangle \leq \frac{M}{\eta} + \frac{\eta T}{2} G^2.
\end{align*}</script>
<p>With the (optimal) choice $\eta = \sqrt{\frac{2M}{G^2T}}$ and using the subgradient property, we obtain the online learning regret bound for Mirror Descent:</p>
<script type="math/tex; mode=display">\tag{regretMD}
\begin{align*}
\sum_{t = 0}^{T-1} f_t(x_t) - \min_{x \in K} \sum_{t = 0}^{T-1} f_t(x) \leq \max_{x \in K} \sum_{t=0}^{T-1} \langle \partial f_t(x_t),x_{t} - x \rangle \leq \sqrt{2M G^2T},
\end{align*}</script>
<p>and, paying another factor $\sqrt{2}$, we can make this bound <em>anytime</em>.</p>
<p>We will now consider the important special case of $K = \Delta_n$ being the probability simplex and <script type="math/tex">V_x(y) = \sum_{i \in [n]} y_i \log \frac{y_i}{x_i} = D(y \| x)</script> being the <em>relative entropy</em>, which will lead to (an alternative proof of) the <em>Multiplicative Weight Update (MWU)</em> algorithm; this argument is folklore and has been widely known by experts, see e.g., [BT, AO]. In particular, it generalizes immediately to the matrix case in contrast to other proofs of the MWU algorithm. We refer the interested reader to [AHK] for an overview of the many applications of the MWU algorithm or equivalently Mirror Descent over the probability simplex with relative entropy as Bregman divergence.</p>
<p>Via information-theoretic inequalities or just by-hand calculations (see [BT]), it can be easily seen that <script type="math/tex">D(x\| y)</script> is $1$-strongly convex with respect to <script type="math/tex">\norm{.}_1</script>, whose dual norm is <script type="math/tex">\norm{.}_\infty</script>. Moreover, <script type="math/tex">D(x \| x_0) \leq \log n</script> for all $x \in \Delta_n$, with $x_0 = (1/n, \dots, 1/n)$ being the uniform distribution.</p>
<p>Recall that the iterates are defined via (IteratesMD), which in our case becomes</p>
<script type="math/tex; mode=display">\tag{IteratesMWU} x_{t+1} \doteq \arg\min_{x \in K} \langle \eta_t \partial f_t(x_t), x \rangle + D(x \| x_t),</script>
<p>and making this explicit amounts to updates of the form (to be read coordinate-wise):</p>
<script type="math/tex; mode=display">\tag{IteratesMWUExp}
x_{t+1} \leftarrow x_t \cdot \frac{e^{-\eta_t \partial f_t(x_t)}}{K_t},</script>
<p>where $K_t$ is chosen such that $\norm{x_{t+1}}_1 = 1$, i.e., $K_t = \norm{x_t \cdot e^{-\eta_t \partial f_t(x_t)}}_1$, which is precisely the Multiplicative Weight Update algorithm.</p>
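<p>The updates (IteratesMWUExp) can be sketched in a few lines on random linear losses; the loss sequence and constants are illustrative, with $M = \log n$ and $G = 1$ as below:</p>

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 50, 400
losses = rng.uniform(-1, 1, size=(T, n))    # f_t(x) = <l_t, x> with ||l_t||_inf <= 1

eta = np.sqrt(2 * np.log(n) / T)            # eta = sqrt(2M / (G^2 T)), M = log n, G = 1
x = np.full(n, 1.0 / n)                     # x_0: the uniform distribution
incurred, cumulative = 0.0, np.zeros(n)
for t in range(T):
    l = losses[t]
    incurred += l @ x
    cumulative += l
    w = x * np.exp(-eta * l)                # multiplicative update ...
    x = w / w.sum()                         # ... and K_t renormalizes to the simplex

regret = incurred - cumulative.min()        # comparator: best single coordinate
```

The realized regret should respect the guarantee $\sqrt{2 \log(n) G^2 T}$ from (regretMWU).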
<p>With the bounds $M = \log n$ and $\norm{\partial f_t(x_t)}_\infty \leq G$ for all $t = 0, \dots T-1$, the regret bound in this case becomes:</p>
<script type="math/tex; mode=display">\tag{regretMWU}
\begin{align*}
\sum_{t = 0}^{T-1} f_t(x_t) - \min_{x \in K} \sum_{t = 0}^{T-1} f_t(x) \leq \max_{x \in K} \sum_{t=0}^{T-1} \langle \partial f_t(x_t),x_{t} - x \rangle \leq \sqrt{2 \log(n) G^2 T},
\end{align*}</script>
<p>for the variant with known $T$, and we can pay another factor $\sqrt{2}$ to make this bound an anytime guarantee.</p>
<h3 id="mirror-descent-vs-gradient-descent">Mirror Descent vs. Gradient Descent</h3>
<p>One of the key questions is, of course, whether the improvement in convergence rate through fine-tuning against the geometry materializes in actual computations or whether it is just an improvement on paper. Following [AO], some comments are helpful: if $V_x(y) \doteq \frac{1}{2}\norm{x-y}^2$, then Mirror Descent and (Sub-)Gradient Descent produce identical iterates. If, on the other hand, we pick, e.g., $K = \Delta_n$, the probability simplex in $\RR^n$, then we can choose <script type="math/tex">V_x(y) \doteq D(y\|x)</script> to be the relative entropy, and the iterates of Gradient Descent with the $\ell_2$-norm and of Mirror Descent with relative entropy will be very different; so will be the convergence behavior. Mirror Descent provides a guarantee of</p>
<script type="math/tex; mode=display">\begin{align*}
f(\bar x) - f(x^\esx) \leq \frac{\sqrt{2 \log(n) G^2}}{\sqrt{T}},
\end{align*}</script>
<p>where $\bar x = \frac{1}{T} \sum_{t=0}^{T-1} x_t$ and $\norm{\partial f(x_t)}_\infty \leq G$ for all $t = 0, \dots, T-1$ in this case.</p>
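<p>The first of these claims, that the Euclidean Bregman divergence recovers plain (sub-)gradient steps, is easy to check: the mirror step objective is strongly convex with its minimizer exactly at the gradient step. A small sketch (points and step size are arbitrary):</p>

```python
import numpy as np

rng = np.random.default_rng(3)
x_t, g, eta = rng.normal(size=4), rng.normal(size=4), 0.3

# Mirror step objective with V_x(y) = 0.5 * ||x - y||^2 (unconstrained case)
objective = lambda x: eta * (g @ x) + 0.5 * np.sum((x - x_t) ** 2)
x_next = x_t - eta * g                       # the plain gradient step

grad_at_step = eta * g + (x_next - x_t)      # gradient of the objective at x_next
perturbed = [objective(x_next + 1e-3 * rng.normal(size=4)) for _ in range(100)]
```

The gradient of the objective vanishes at the gradient step, and every perturbation only increases the objective.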
<p>Below we compare Mirror Descent and Gradient Descent over $K = \Delta_n$ with $n = 10000$ (left) and across different values of $n$ (right) for some randomly generated functions; both plots are log-log plots. As can be seen, Mirror Descent can scale much better than Gradient Descent by choosing a Bregman divergence that is optimized for the geometry. For $K = \Delta_n$, the dependence on $n$ for Mirror Descent (with relative entropy and $\ell_1$-norm) is only logarithmic, whereas for Gradient Descent it is linear in the dimension $n$. This logarithmic dependence makes Mirror Descent well suited for large-scale applications in this case.</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/md/MD-arranged.png" alt="MD vs. GD" /></p>
<h2 id="extensions">Extensions</h2>
<p>Finally, I will talk about some natural extensions. While the full arguments are beyond the scope of this post, the interested reader might consult [Z2] for proofs. Also, there are various natural extensions in the online learning case, e.g., where we compare to slowly changing strategies; see [Z] for details.</p>
<h3 id="stochastic-versions">Stochastic versions</h3>
<p>It is relatively easy to see that the above bounds can be transferred to the stochastic setting, where we have an unbiased (sub-)gradient estimator only. We then obtain basically the same convergence rates and regret bounds <em>in expectation</em>.</p>
<p>More specifically (see [LNS] for details), for unbiased stochastic subgradients $\partial f(x,\xi)$, where $\xi$ is a random variable, with bounded second moment, i.e.,</p>
<script type="math/tex; mode=display">\mathbb E[\norm{\partial f(x,\xi)}_\esx^2] \leq M_\esx^2,</script>
<p>one can achieve (using Mirror Descent) a guarantee of:</p>
<script type="math/tex; mode=display">\mathbb E \left[f(x_T) - f(x^\esx)\right] \leq O\left(\frac{\sqrt{2} D M_\esx}{\sqrt{\alpha}\sqrt{T}}\right),</script>
<p>where $D$ is the “diameter” of the feasible region w.r.t. the distance generating function used in the Bregman divergence and $\alpha$ is its strong convexity parameter (equal to $1$ in our normalization). Note that the iterates $x_t$ are now random, depending on the realizations of $\xi$ when sampling (sub-)gradients. Moreover, via Markov’s inequality one can prove probabilistic statements of the form:</p>
<script type="math/tex; mode=display">\mathbb P\left[f(x_T) - f(x^\esx) > \varepsilon \right]\leq O(1) \frac{\sqrt{2} D M_\esx}{\varepsilon \sqrt{\alpha}\sqrt{T}}.</script>
<h3 id="smooth-case">Smooth case</h3>
<p>When $f$ is smooth we can modify (basicMD) to obtain the improved $O(1/t)$ rate. Recall that $f$ is $L$-smooth with respect to $\norm{\cdot}$ if:</p>
<script type="math/tex; mode=display">\tag{smooth}
f(y) - f(x) \leq \langle \nabla f(x), y-x \rangle + \frac{L}{2} \norm{x-y}^2,</script>
<p>for all $x,y \in \mathbb R^n$. Choosing $x \leftarrow x_t$ and $y \leftarrow x_{t+1}$ we obtain:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\langle \nabla f(x_t), x_{t+1} - x^\esx \rangle & = \langle \nabla f(x_t), x_{t} - x^\esx \rangle + \langle \nabla f(x_t), x_{t+1} - x_t \rangle \\
& \geq f(x_t) - f(x^\esx) + f(x_{t+1}) - f(x_t) - \frac{L}{2} \norm{x_{t+1}-x_t}^2 \\
& = f(x_{t+1}) - f(x^\esx) - \frac{L}{2} \norm{x_{t+1}-x_t}^2
\end{align*} %]]></script>
<p>We now modify (basicMD) with $u = x^\esx$ as follows:</p>
<script type="math/tex; mode=display">% <![CDATA[
\tag{basicMDSmooth}
\begin{align*}
\langle \eta_t \nabla f(x_t),x_{t+1} - x^\esx \rangle & \leq - \langle \nabla V_{x_t}(x_{t+1}), x_{t+1} - x^\esx \rangle \\
& = V_{x_t}(x^\esx) - V_{x_{t+1}}(x^\esx) - V_{x_t}(x_{t+1}).
\end{align*} %]]></script>
<p>Chaining in the inequality we obtained from smoothness:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\eta_t (f(x_{t+1}) - f(x^\esx)) & \leq \langle \eta_t \nabla f(x_t),x_{t+1} - x^\esx \rangle + \eta_t \frac{L}{2} \norm{x_{t+1}-x_t}^2 \\
& \leq - \langle \nabla V_{x_t}(x_{t+1}), x_{t+1} - x^\esx \rangle + \eta_t \frac{L}{2} \norm{x_{t+1}-x_t}^2 \\
& = V_{x_t}(x^\esx) - V_{x_{t+1}}(x^\esx) - V_{x_t}(x_{t+1}) + \eta_t \frac{L}{2} \norm{x_{t+1}-x_t}^2 \\
& \leq V_{x_t}(x^\esx) - V_{x_{t+1}}(x^\esx) - \frac{1}{2} \norm{x_{t+1}-x_t}^2 + \eta_t \frac{L}{2} \norm{x_{t+1}-x_t}^2,
\end{align*} %]]></script>
<p>where the last inequality used the compatibility of the Bregman divergence with the norm. Picking $\eta_t = \frac{1}{L}$ results in</p>
<script type="math/tex; mode=display">\frac{1}{L} (f(x_{t+1}) - f(x^\esx)) \leq V_{x_t}(x^\esx) - V_{x_{t+1}}(x^\esx),</script>
<p>which we telescope out to</p>
<script type="math/tex; mode=display">\sum_{t = 0}^{T-1} (f(x_{t+1}) - f(x^\esx)) \leq L V_{x_0}(x^\esx),</script>
<p>and by convexity the average $\bar x = \frac{1}{T} \sum_{t = 1}^{T} x_t$ satisfies:</p>
<script type="math/tex; mode=display">f(\bar x) - f(x^\esx) \leq \frac{L V_{x_0}(x^\esx)}{T},</script>
<p>which is the expected rate for the smooth case. Note that this improvement does not translate to the online case.</p>
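<p>The $O(1/T)$ rate can be observed numerically. Here is a sketch with the Euclidean divergence $V_x(y) = \frac{1}{2}\norm{x-y}^2$ (so the mirror step is a plain gradient step) on a random smooth quadratic; the instance is an illustrative assumption, not from the post:</p>

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.normal(size=(6, 6))
A = B.T @ B + np.eye(6)                      # f(x) = 0.5 * x'Ax, minimum 0 at x* = 0
L = float(np.linalg.eigvalsh(A).max())       # smoothness constant of f

f = lambda x: 0.5 * x @ A @ x
x0 = rng.normal(size=6)
T = 200

x, iterates = x0, []
for t in range(T):
    x = x - (1.0 / L) * (A @ x)              # eta_t = 1/L
    iterates.append(x)
x_bar = np.mean(iterates, axis=0)            # average of x_1, ..., x_T

bound = L * 0.5 * np.sum(x0 ** 2) / T        # L * V_{x0}(x*) / T with x* = 0
```

The averaged iterate satisfies the guarantee $f(\bar x) - f(x^\esx) \leq L V_{x_0}(x^\esx)/T$, typically with a lot of room to spare.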
<h3 id="strongly-convex-case">Strongly convex case</h3>
<p>Finally we will show that if $f$ is $\mu$-strongly convex <em>with respect to $V_x(y)$</em> (not necessarily smooth though), then we can also obtain improved rates. This improvement translates also to the online learning case, i.e., we get the corresponding improvement in regret. Recall that a function is $\mu$-strongly convex with respect to $V_x(y)$ if:</p>
<script type="math/tex; mode=display">f(y) - f(x) \geq \langle \nabla f(x),y-x \rangle + \mu V_x(y),</script>
<p>holds for all $x,y \in \mathbb R^n$. Choosing $x \leftarrow x_t$ and $y \leftarrow x^\esx$, we obtain:</p>
<script type="math/tex; mode=display">\langle \nabla f(x_t), x_t - x^\esx \rangle \geq f(x_t) - f(x^\esx) + \mu V_{x_t}(x^\esx).</script>
<p>Again we start with (basicMD), which we will modify:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\langle \eta_t \nabla f(x_t),x_{t} - x^\esx \rangle & \leq V_{x_t}(x^\esx) - V_{x_{t+1}}(x^\esx) + \frac{\eta_t^2}{2}\norm{\partial f(x_t)}_\esx^2.
\end{align*} %]]></script>
<p>We now plug in the bound from strong convexity to obtain:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\eta_t (f(x_t) - f(x^\esx) + \mu V_{x_t}(x^\esx)) & \leq
\langle \eta_t \nabla f(x_t),x_{t} - x^\esx \rangle \\
& \leq V_{x_t}(x^\esx) - V_{x_{t+1}}(x^\esx) + \frac{\eta_t^2}{2}\norm{\partial f(x_t)}_\esx^2,
\end{align*} %]]></script>
<p>which can be simplified to</p>
<script type="math/tex; mode=display">% <![CDATA[
\tag{basicMDSC}
\begin{align*}
\eta_t (f(x_t) - f(x^\esx))
& \leq V_{x_t}(x^\esx) - V_{x_{t+1}}(x^\esx) + \frac{\eta_t^2}{2}\norm{\partial f(x_t)}_\esx^2 - \eta_t \mu V_{x_t}(x^\esx) \\
& = \left(1- \eta_t \mu\right) V_{x_t}(x^\esx) - V_{x_{t+1}}(x^\esx) + \frac{\eta_t^2}{2}\norm{\partial f(x_t)}_\esx^2.
\end{align*} %]]></script>
<p>Choosing $\eta_t = \frac{1}{\mu t}$, we now obtain:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\frac{1}{\mu t} (f(x_t) - f(x^\esx))
& \leq \left(1- \frac{1}{t}\right) V_{x_t}(x^\esx) - V_{x_{t+1}}(x^\esx) + \frac{1}{2\mu^2t^2}\norm{\partial f(x_t)}_\esx^2 \\
\Leftrightarrow \frac{1}{\mu} (f(x_t) - f(x^\esx))
& \leq \left(t- 1\right) V_{x_t}(x^\esx) - t V_{x_{t+1}}(x^\esx) + \frac{1}{2\mu^2t}\norm{\partial f(x_t)}_\esx^2,
\end{align*} %]]></script>
<p>which we can finally sum up (starting at $t=1$), multiply by $\mu$, and telescope out to arrive at:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\sum_{t = 1}^{T} (f(x_t) - f(x^\esx))
& \leq - T \mu V_{x_{T+1}}(x^\esx) + \frac{G^2}{2\mu} \sum_{t = 1}^{T} \frac{1}{t} \leq \frac{G^2 (1 + \log T)}{2\mu},
\end{align*} %]]></script>
<p>using $\sum_{t = 1}^{T} \frac{1}{t} \leq 1 + \log T$ and $V_{x_{T+1}}(x^\esx) \geq 0$. With the usual averaging $\bar x = \frac{1}{T} \sum_{t = 1}^{T} x_t$ and using convexity we obtain:</p>
<script type="math/tex; mode=display">f(\bar x) - f(x^\esx) \leq \frac{G^2 (1 + \log T)}{2\mu T}</script>
<p>for the convergence rate and</p>
<script type="math/tex; mode=display">\sum_{t = 1}^{T} f_t(x_t) - \min_x \sum_{t = 1}^{T} f_t(x) \leq \frac{G^2}{2\mu} (1 + \log T),</script>
<p>for the regret; note that this bound is already anytime. In order to obtain the regret bound, simply replace $x^\esx$ by an arbitrary $u$ and $f(x_t)$ by $f_t(x_t)$. Note, however, that this time the argument is directly on the primal difference $f_t(x_t) - f_t(u)$, rather than on the dual gaps, i.e., after plugging in the strong convexity inequality we start from:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\eta_t (f_t(x_t) - f_t(u) + \mu V_{x_t}(u)) & \leq
\langle \eta_t \nabla f_t(x_t), x_{t} - u \rangle \\
& \leq V_{x_t}(u) - V_{x_{t+1}}(u) + \frac{\eta_t^2}{2}\norm{\partial f_t(x_t)}_\esx^2,
\end{align*} %]]></script>
<p>and continue the same way.</p>
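As a minimal sketch of the $\eta_t = \frac{1}{\mu t}$ schedule just derived, here is the Euclidean special case ($V_x(y) = \frac{1}{2}\norm{x-y}^2$, i.e., plain subgradient descent) with averaging; the test objective is an illustrative choice:

```python
import numpy as np

def sgd_strongly_convex(subgrad, x0, mu, T):
    """Subgradient descent with the step size eta_t = 1/(mu*t) from the
    derivation above (Euclidean mirror map); returns the averaged iterate."""
    x = np.array(x0, dtype=float)
    xs = []
    for t in range(1, T + 1):
        xs.append(x.copy())
        x = x - subgrad(x) / (mu * t)  # eta_t = 1/(mu*t)
    return np.mean(xs, axis=0)         # average of x_1, ..., x_T
```

For a $\mu$-strongly convex objective such as $f(x) = \frac{1}{2}\norm{x-c}^2$ (with $\mu = 1$), the averaged iterate converges at the $O(\log T / T)$ rate shown above.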
<h3 id="references">References</h3>
<p>[NY] Nemirovsky, A. S., & Yudin, D. B. (1983). Problem complexity and method efficiency in optimization.</p>
<p>[BT] Beck, A., & Teboulle, M. (2003). Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3), 167-175. <a href="https://web.iem.technion.ac.il/images/user-files/becka/papers/3.pdf">pdf</a></p>
<p>[Z] Zinkevich, M. (2003). Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning (ICML-03) (pp. 928-936). <a href="http://www.aaai.org/Papers/ICML/2003/ICML03-120.pdf">pdf</a></p>
<p>[AO] Allen-Zhu, Z., & Orecchia, L. (2014). Linear coupling: An ultimate unification of gradient and mirror descent. arXiv preprint arXiv:1407.1537. <a href="https://arxiv.org/abs/1407.1537">pdf</a></p>
<p>[AHK] Arora, S., Hazan, E., & Kale, S. (2012). The multiplicative weights update method: a meta-algorithm and applications. Theory of Computing, 8(1), 121-164. <a href="http://www.theoryofcomputing.org/articles/v008a006/v008a006.pdf">pdf</a></p>
<p>[Z2] Zhang, X. Bregman Divergence and Mirror Descent. <a href="http://users.cecs.anu.edu.au/~xzhang/teaching/bregman.pdf">pdf</a></p>
<p>[LNS] Lan, G., Nemirovski, A., & Shapiro, A. (2012). Validation analysis of mirror descent stochastic approximation method. Mathematical programming, 134(2), 425-458. <a href="https://www2.isye.gatech.edu/~nemirovs/MP_Valid_2011.pdf">pdf</a></p>
<p><br /></p>
<h4 id="changelog">Changelog</h4>
<p>03/02/2019: Fixed several typos and added clarifications as pointed out by Matthieu Bloch.</p>
<p>03/04/2019: Fixed several typos and a norm/divergence mismatch in the strongly convex case as pointed out by Cyrille Combettes.</p>
<p>05/15/2019: Added pointer to stochastic case and summary of the results in this case.</p>
<p><em>Sebastian Pokutta</em></p>
<h1 id="mixing-frank-wolfe-and-gradient-descent">Mixing Frank-Wolfe and Gradient Descent</h1>
<p><em>2019-02-18 · <a href="http://www.pokutta.com/blog/research/2019/02/18/bcg-abstract">http://www.pokutta.com/blog/research/2019/02/18/bcg-abstract</a></em></p>
<p><em>TL;DR: This is an informal summary of our recent paper <a href="https://arxiv.org/abs/1805.07311">Blended Conditional Gradients</a> with <a href="https://users.renyi.hu/~braung/">Gábor Braun</a>, <a href="https://www.linkedin.com/in/dan-tu/">Dan Tu</a>, and <a href="http://pages.cs.wisc.edu/~swright/">Stephen Wright</a>, showing how mixing Frank-Wolfe and Gradient Descent gives a new, very fast, projection-free algorithm for constrained smooth convex minimization.</em>
<!--more--></p>
<h2 id="what-is-the-paper-about-and-why-you-might-care">What is the paper about and why you might care</h2>
<p>Frank-Wolfe methods [FW] (also called conditional gradient methods [CG]) have been very successful in solving <em>constrained smooth convex minimization</em> problems of the form:</p>
<script type="math/tex; mode=display">\min_{x \in P} f(x),</script>
<p>where $P$ is some compact and convex feasible region; you might want to think of, e.g., $P$ being a polytope, which is one of the most common cases. We assume so-called <em>first-order access</em> to the objective function $f$, i.e., we have an oracle that returns function evaluation $f(x)$ and gradient information $\nabla f(x)$ for a provided point $x \in P$. Moreover, we assume that we have access to the feasible region $P$ by means of a so-called <em>linear optimization oracle</em>, which upon being presented with a linear objective $c \in \RR^n$ returns $\arg\min_{x \in P} \langle c, x \rangle$. The basic Frank-Wolfe algorithm looks like this:</p>
<p class="mathcol"><strong>Frank-Wolfe Algorithm [FW]</strong> <br />
<em>Input:</em> Smooth convex function $f$ with first-order oracle access, feasible region $P$ with linear optimization oracle access, initial point (usually a vertex) $x_0 \in P$. <br />
<em>Output:</em> Sequence of points $x_0, \dots, x_T$ <br />
For $t = 0, \dots, T-1$ do: <br />
$\quad v_t \leftarrow \arg\min_{x \in P} \langle \nabla f(x_t), x \rangle$ <br />
$\quad x_{t+1} \leftarrow (1-\gamma_t) x_t + \gamma_t v_t$</p>
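As a sketch, the algorithm above instantiated over the probability simplex, where the linear optimization oracle just returns the best vertex; the objective and the standard $\gamma_t = \frac{2}{t+2}$ step size rule are illustrative choices:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, T):
    """Vanilla Frank-Wolfe over the probability simplex; here the linear
    optimization oracle simply returns the best vertex (a unit vector)."""
    x = np.array(x0, dtype=float)
    for t in range(T):
        g = grad(x)
        v = np.zeros_like(x)
        v[int(np.argmin(g))] = 1.0  # LP oracle: argmin over the vertices
        gamma = 2.0 / (t + 2.0)     # standard agnostic step size rule
        x = (1.0 - gamma) * x + gamma * v
    return x
```

Note that the iterate is a convex combination of at most $T+1$ vertices, which is exactly the sparsity property mentioned below.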
<p>The Frank-Wolfe algorithm has a couple of important advantages:</p>
<ol>
<li>It is very easy to implement</li>
<li>It does not require projections (as projected gradient descent does)</li>
<li>It maintains iterates as reasonably sparse convex combination of vertices.</li>
</ol>
<p>Generally, one can expect an $O(1/t)$ convergence for the general convex smooth case and linear convergence for strongly convex functions with appropriate modifications of the Frank-Wolfe algorithm. The interested reader might check out <a href="/blog/research/2018/10/05/cheatsheet-fw.html">Cheat Sheet: Frank-Wolfe and Conditional Gradients</a> and <a href="/blog/research/2018/10/19/cheatsheet-fw-lin-conv.html">Cheat Sheet: Linear convergence for Conditional Gradients</a> for an extensive overview.</p>
<p>In the context of Frank-Wolfe methods a key assumption is that linear optimization is <em>cheap</em>. Compared to the projections one would have to perform, say, for projected gradient descent, this is almost always true (except for very simple feasible regions where projection is trivial). As such, traditionally one accounts for the linear optimization oracle call with an $O(1)$ cost and disregards it in the analysis. However, if the feasible region is complex (e.g., arising from an integer program or just being a really large linear program), this assumption is not warranted anymore and one might ask a few natural questions:</p>
<ol>
<li>Do we really have to call the (expensive) linear programming oracle in each iteration?</li>
<li>Do we really need to compute (approximately) optimal solutions to the LP or does something completely different suffice?</li>
<li>More generally, can we reuse information?</li>
</ol>
<p>It turns out that one can replace the linear programming oracle by what we call a <em>weak separation oracle</em>; see [BPZ] for more details. Without going into full detail (I will have a dedicated post about <em>lazification</em>, as we dubbed this technique), the oracle basically wraps around the actual linear programming oracle: before calling the linear programming oracle, the weak separation oracle can answer by checking previous answers to oracle calls (caching). Moreover, one does not have to solve the LPs to (approximate) optimality; rather, it suffices to check for a certain minimal improvement, which is compatible with the to-be-achieved convergence rate. In particular, we do not need any optimality proofs. One can then show that one <em>maintains</em> the same convergence rates using the weak separation oracle as for the respective Frank-Wolfe variant utilizing the linear programming oracle, while drastically reducing the number of LP oracle calls.</p>
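The caching idea can be sketched as follows; this is an illustrative simplification, not the exact weak separation oracle of [BPZ], and the class name and interface are hypothetical:

```python
import numpy as np

class LazyLPOracle:
    """Illustrative caching wrapper around an exact LP oracle: before calling
    the expensive oracle, check whether a previously returned vertex already
    gives enough improvement (a sketch of the weak separation idea)."""

    def __init__(self, lp_oracle):
        self.lp_oracle = lp_oracle  # the expensive, exact LP oracle
        self.cache = []             # vertices returned so far
        self.lp_calls = 0

    def __call__(self, c, x, threshold):
        # accept any cached vertex v whose improvement <c, x - v> suffices
        for v in self.cache:
            if np.dot(c, x - v) >= threshold:
                return v
        self.lp_calls += 1
        v = self.lp_oracle(c)       # fall back to the true oracle
        self.cache.append(v)
        return v
```

Only when no cached vertex meets the improvement threshold does the true LP oracle get called, which is what cuts down the number of expensive calls.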
<h2 id="our-results">Our results</h2>
<p>In practice, while lazification can provide huge speedups for Frank-Wolfe type methods when the LPs are hard to solve, this technique loses its advantage when the LPs are simple. The reason for this is that at the end of the day, there is a trade-off between the quality of the computed directions in terms of providing progress vs. how hard they are to compute: the weak-separation oracle computes potentially worse approximations but does so very fast.</p>
<p>However, what we show in our <em>Blended Conditional Gradients</em> paper is that one can:</p>
<ol>
<li>Cut out a <em>huge fraction of LP oracle calls</em> (sometimes less than 1% of the iterations require an actual LP oracle call),</li>
<li>While working with <em>actual gradients</em> as descent directions, which provide much better progress than traditional Frank-Wolfe directions, and</li>
<li>While staying fully <em>projection-free</em>.</li>
</ol>
<p>This is achieved by <em>blending together</em> conditional gradient steps and gradient steps in a special way. The resulting algorithm has a per-iteration cost that is very comparable to gradient descent in most of the steps, and when the LP oracle is called, the per-iteration cost is comparable to the standard Frank-Wolfe algorithm. In terms of progress per iteration, though, our algorithm, which we call <em>Blended Conditional Gradients (BCG)</em> (see [BPTW] for details), typically outperforms Away-Step and Pairwise Conditional Gradients (the current state-of-the-art methods). We are often even faster in wall-clock performance as we eschew most LP oracle calls. Naturally, we maintain worst-case convergence rates that match those of Away-Step Frank-Wolfe and Pairwise Conditional Gradients; the known lower bounds only assume first-order oracle access and LP oracle access and are unconditional.</p>
<p>Rather than stating the algorithm’s (worst-case) convergence rates for the various cases, which are identical to the ones for Away-Step Frank-Wolfe and Pairwise Conditional Gradients, achieving $O(1/\varepsilon)$-convergence for general smooth and convex functions and $O(\log 1/\varepsilon)$-convergence for smooth and strongly convex functions (see [LJ] and [BPTW] for details), I will rather present some computational results, as they highlight the typical behavior. The following graphics provide a pretty representative overview of the computational performance. Everything is in log-scale and we ran each algorithm with a fixed time limit; we refer the reader to [BPTW] for more details.</p>
<p>The first example is a benchmark of BCG vs. Away-Step Frank-Wolfe (AFW), Pairwise Conditional Gradients (PCG), and Vanilla Frank-Wolfe (FW) on a LASSO instance. BCG significantly outperforms the other variants and in fact the empirical convergence rate of BCG is much higher than the rates of the other algorithms; recall we are in log-scale and we expect linear convergence for all variants due to the characteristics of the instance (optimal solution in strict interior).</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/bcg/bcg4.png" alt="BCG vs. normal" /></p>
<p>One might say that the above is not completely unexpected, in particular because BCG also uses the lazification technique from our previous work in [BPZ]. So let us see how we compare to lazified variants of Frank-Wolfe. In the next graph we compare BCG vs. LPCG (lazified PCG) vs. PCG. The problem we solve here is a structured regression problem over a spanning tree polytope. Clearly, while LPCG is faster than PCG, BCG is significantly faster than either of those, both in iterations and wall-clock time.</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/bcg/bcg3.png" alt="BCG vs. lazy" /></p>
<p>To better understand what is going on, let us see how often the LP oracle is actually called throughout the iterations. In the next graph we plot iterations vs. the cumulative number of calls to the (true) LP oracle. Here we also added Fully-Corrective Frank-Wolfe (FCFW) variants that fully optimize over the active set and hence should have the lowest number of required LP calls; we implemented two variants: one that optimizes over the active set for a fixed number of iterations (the faster one, in grey) and one that optimizes to a specific accuracy (the slower one, in orange). The next plot shows two instances: LASSO (left) and structured regression over a <em>netgen</em> instance (right); for the former, lazification is not helpful as the LP oracle is too simple, while for the latter it is. As expected, for the non-lazy variants such as FW, AFW, and PCG we have a straight line, as we perform one LP call per iteration. For LPCG, BCG, and the two FCFW variants we obtain a significant reduction in actual calls to the LP oracle, with BCG sitting right between the (non-)lazy variants and the FCFW variants. BCG attains a large fraction of the reduction in calls of the much slower FCFW variants while being extremely fast compared to FCFW and all other variants.</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/bcg/bcg1.png" alt="Cache rates" /></p>
<p>On a fundamental level one might argue that what it really comes down to is how well the algorithm uses the information obtained from an LP call. Clearly, there is a trade-off: on the one hand, better utilization of that information becomes increasingly expensive, making the algorithm slower, so that it might be advantageous to rather do another LP call; on the other hand, calling the LP oracle too often results in suboptimal use of each LP call’s information, and these calls can be expensive. Managing this trade-off is critical to achieve high performance, and BCG uses a convergence criterion to maintain a very favorable balance. To get an idea of how well the various algorithms are using the LP call information, consider the next graphic, where we run various algorithms on a LASSO instance. As can be seen in primal and dual progress, BCG is using LP information in a much more aggressive way while maintaining a very high speed; the two FCFW variants, which would use the information from the LP calls even more aggressively, only performed a handful of iterations as they are extremely slow (see the grey and orange lines right at the beginning of the red line). As a measure we depict primal and dual progress vs. (true) LP oracle calls.</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/bcg/bcg2.png" alt="Progress per LP call" /></p>
<h3 id="bcg-code">BCG Code</h3>
<p>If you are interested in using BCG, we made a preliminary version of our code available on <a href="https://github.com/pokutta/bcg">github</a>; a significant update with more options and additional algorithms is coming soon.</p>
<h3 id="references">References</h3>
<p>[FW] Frank, M., & Wolfe, P. (1956). An algorithm for quadratic programming. Naval research logistics quarterly, 3(1‐2), 95-110. <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/nav.3800030109">pdf</a></p>
<p>[CG] Levitin, E. S., & Polyak, B. T. (1966). Constrained minimization methods. Zhurnal Vychislitel’noi Matematiki i Matematicheskoi Fiziki, 6(5), 787-823. <a href="http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=zvmmf&paperid=7415&option_lang=eng">pdf</a></p>
<p>[BPZ] Braun, G., Pokutta, S., & Zink, D. (2017, August). Lazifying conditional gradient algorithms. In Proceedings of the 34th International Conference on Machine Learning-Volume 70 (pp. 566-575). JMLR. org. <a href="https://arxiv.org/abs/1610.05120">pdf</a></p>
<p>[BPTW] Braun, G., Pokutta, S., Tu, D., & Wright, S. (2018). Blended Conditional Gradients: the unconditioning of conditional gradients. arXiv preprint arXiv:1805.07311. <a href="https://arxiv.org/abs/1805.07311">pdf</a></p>
<p>[LJ] Lacoste-Julien, S., & Jaggi, M. (2015). On the global linear convergence of Frank-Wolfe optimization variants. In Advances in Neural Information Processing Systems (pp. 496-504). <a href="http://papers.nips.cc/paper/5925-on-the-global-linear-convergence-of-frank-wolfe-optimization-variants.pdf">pdf</a></p>
<p><em>Sebastian Pokutta</em></p>
<h1 id="the-zeroth-world">The Zeroth World</h1>
<p><em>2019-02-06 · <a href="http://www.pokutta.com/blog/random/2019/02/06/zeroth-world">http://www.pokutta.com/blog/random/2019/02/06/zeroth-world</a></em></p>
<p><em>TL;DR: On the impact of AI on society and economy and its potential to enable a zeroth world with unprecedented economic output.</em>
<!--more--></p>
<p>In this post I want to talk about the impact that artificial intelligence might have on society and economy; not because of the “terminator scenario” but because of what it already can achieve <em>right now</em>. Over the last few months I have had many such discussions within industry, academia, and government and this is a summary of what I think; as always <em>biased and incomplete</em>.</p>
<p>Before delving into the actual discussion, I would like to clarify what I consider artificial intelligence (AI), as this is a very elusive term that has been overloaded several times to suit various narratives. When I talk about <em>artificial intelligence (AI)</em>, what I am talking about is <em>any technology, technology complex, or system</em> that:</p>
<ol>
<li>(Sensing) gathers information through direct input, sensors, etc.</li>
<li>(Learning) processes information with the explicit or implicit aim of forming an evaluation of its environment.</li>
<li>(Deciding) decides on a course of action.</li>
<li>(Acting) informs or implements that course of action.</li>
</ol>
<p>For those familiar, this is quite similar to the <a href="https://en.wikipedia.org/wiki/OODA_loop">OODA loop</a>, an abstraction that captures dynamic decision-making with feedback. The (minor) difference here is that we (a) consider broader systems and (b) do not necessarily require feedback. In terms of (Acting) we also assume some form of autonomy; however, the action might be either only suggested by the system or directly executed. The purpose of this “definition” is not to add yet another definition to the mix but to make precise, <em>for the purpose of this post</em>, what we will be talking about. For simplicity, from now on we will refer to such systems as AI or AI systems. We will also refer to larger systems as AI systems if they contain such technology at their core.</p>
<p>Examples of where such AI systems are used or appear are:</p>
<ul>
<li>Credit ratings</li>
<li>Amazon’s “people also bought”</li>
<li>Autonomous vehicles</li>
<li>Medical decision-support systems</li>
<li>Facial recognition</li>
<li>…</li>
</ul>
<p>Also, note that I chose the term “AI systems” vs many other equally fitting terms as it seems to be more “accessible” than some of the more technical ones, such as <em>Machine Learning</em> or <em>Decision-Support Systems</em>. Otherwise this choice is really arbitrary; let’s not make it about the choice of words.</p>
<h2 id="impact-through-hybridization">Impact through Hybridization</h2>
<p>A lot of the current discussion has been centered around the direct substitution of technology, workers, etc. by AI systems, as in <em>robot-in-human-out</em>. I believe, however, that this is not the likely scenario in the short to mid term, as it would require a very high maturity level of current AI and machine learning technology that seems far away. Those wary of AI would argue that the <em>singularity</em>, where basically AI systems improve themselves, will drive maturity exponentially fast. Whether this is likely to happen I do not know, as predictions of such type are tough. Most of those voices wary of AI seem to argue from a utilitarian perspective à la Bernoulli and rather want to err on the safe side; from a risk management perspective not necessarily a bad approach. Most of those unconcerned argue that we have not figured out some very basic challenges and as such there is no real risk.</p>
<p class="center"><img src="https://imgs.xkcd.com/comics/skynet.png" alt="XKCD: Skynet" />
<a href="https://xkcd.com/1046/">[Source: XKCD]</a></p>
<p>While this discourse might be important in its own right, I want to focus more on the <em>(relatively) immediate, short-term</em> impact: timelines of the order of 10 - 20 years, which is really short compared to the speed with which societies and economic systems adapt.</p>
<h3 id="scaling-and-enabling-through-ai">Scaling and Enabling through AI</h3>
<p>In order for AI to have a disruptive impact on society full maturity is not required; neither is <em>explainability</em> although this might be desirable. The reason for this is that we can simply “pair up a human with an AI”, which I refer to as <em>Hybridization</em>, forming a symbiotic system in a more Xenoblade-esque fashion. The basic principle is that 90% of the basics can be performed efficiently and faster by an AI and for the remaining 10% we have human override. This will (1) enable an individual to perform tasks that were out of reach at unprecedented speed and (2) allows an individual to aggressively scale up her/his operations by operating on a higher level, letting the AI take care of the basics.</p>
<p>While this sounds Sci-Fi at first, a closer look reveals that we have been operating like this for many decades: we build tools to automate basic tasks (where “basic” is relative to the current level). This leads to an <em>automate-and-elevate</em> paradigm or cycle: automate the basics (e.g., via a machine or computer) and then go to the next level. A couple of examples:</p>
<ul>
<li>Driver + Google Maps</li>
<li>Engineer + finite elements software</li>
<li>Vlogger + Camera + Final Cut Pro</li>
<li>MD + X-Ray</li>
</ul>
<p>I am sure you can come up with hundreds of other examples. What all these examples have in common is (1) an enabling factor and (2) a scale-up factor. Take the “Engineer + finite elements software” example: the engineer can suddenly compute and test designs that were impossible to verify by hand beforehand and that required a larger number of other people to be involved. With this tool, the number of involved people can be significantly reduced (the individual’s productivity skyrockets) and completely new, previously unthinkable things can suddenly be done.</p>
<p>What AI systems bring to the mix is that they suddenly allow us to (at least partially) tool and automate tasks that were out of reach so far because of “messy inputs”, i.e., these AI systems allow us to redefine what we consider “basic”.</p>
<h3 id="an-example">An example</h3>
<p>Let us consider the example of autonomous driving. Not because I like it particularly but because most of us have a pretty good idea about driving. Also today’s cars already have very basic automation, such as “cruise control” and “lane assist” systems, so that the idea is not that foreign. Traditionally, a car has one driver. While AI for autonomous driving seems far from being completely there yet, we <em>do not need this</em> to achieve disruptive improvements. Here are two use cases:</p>
<p>Use case 1: Let the AI take care of the basic driving tasks. Whenever a situation is unclear the controls are transferred to a centralized control center, where professional drivers take over for the duration of the “complex task” and then the controls are passed back to the car. This might allow a single driver, together with AI subsystems to operate 4-10 cars at a time; the range is arbitrary but seems reasonable: not correcting for correlation and tail risks, a 4x factor would require the AI to tackle 75% of the driven miles autonomously and a factor of 10x would require 90% of the driven miles being handled autonomously. Current disengagement rates of Waymo seem to be far better than that.</p>
<p>Use case 2: Long-haul trucking. Highway autonomy is much easier than intracity operations. Have truck drivers drive the truck to a “handover point” on a highway. Truck driver gets off the truck, the truck drives autonomously via the highway network to the handover point close to its destination. Human truck driver “picks up” the truck for last-mile intracity driving. If you now consider the ratio between the intracity portions and the highway portions of the trip, the number of required drivers can be reduced significantly; a 10x factor seems conservative. Moreover, rest times etc can be cut out as well.</p>
<p>Clearly, we can also combine use case 1 and 2 for extra safety with minimal extra cost. What we see however from this basic example is that AI systems can scale-up what a single human can do by significant multiples. Also in the long-haul example from above, the quality of life of the drivers goes up, e.g., less time spent away from family (that is for those that keep their job). However, the <em>very important</em> flip-side of this hybridization is that it threatens to displace a huge fraction of jobs: at a scaling of 10x about 90% of the jobs might be at risk; this is of course a naive estimate.</p>
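The back-of-the-envelope arithmetic behind the 4x and 10x figures above is simply that an operator supervising $s$ vehicles can attend to only a $1/s$ fraction of the driven miles, so the AI must handle the rest; a trivial sketch (the function name is just illustrative):

```python
def autonomy_fraction(scaleup):
    """Fraction of driven miles the AI must handle autonomously so that one
    operator can supervise `scaleup` vehicles (ignoring correlation and
    tail risks, as in the text)."""
    return 1.0 - 1.0 / scaleup

# autonomy_fraction(4)  -> 0.75  (75% of miles autonomous)
# autonomy_fraction(10) -> 0.9   (90% of miles autonomous)
```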
<p>Other tasks, which might become “basic” are:</p>
<ul>
<li><em>Call center operations:</em> We already have call systems handling large portions of the call until being passed to an operator. AI-based systems bring this to another level. Think: <a href="https://www.theverge.com/2018/12/5/18123785/google-duplex-how-to-use-reservations">Google Duplex</a></li>
<li><em>Checking NDAs and contracts:</em> Time consuming and not value add. There are several systems (have not verified their accuracy) that offer automatic review, e.g., <a href="https://www.ndalynn.com/">NDALynn</a>, <a href="https://www.lawgeex.com/">LawGeex</a> (see also <a href="https://www.techspot.com/news/77189-machine-learning-algorithm-beats-20-lawyers-nda-legal.html">TechSpot</a>).</li>
<li><em>Managing investment portfolios:</em> Robo-Advisors in the retail space deliver similar or better performance than traditional and costly (and often subpar) investment advisors; after all, the hot shots are mostly working for funds or UHNWIs. (see <a href="http://money.com/money/5330932/best-robo-advisors-beginner-advanced-2018/">here</a> and <a href="https://www.barrons.com/articles/the-top-robo-advisors-an-exclusive-ranking-1532740937">here</a>)</li>
<li><em>Design of (simple) machine learning solutions:</em> Google’s <a href="https://cloud.google.com/automl/">AutoML</a> automates the creation of high-performance machine learning models. Upload your data and get a deployment ready model with REST API etc. No data scientist required.</li>
<li>I know of other large companies using AI systems to automate the RFP process by sifting through thousands of pages of specifications to determine a product offering.</li>
</ul>
<p>Of course, just to be clear, all of the above come also with certain usage risks if not used properly or without the necessary expertise.</p>
<h2 id="the-bigger-picture-learning-rate-and-discovery-rate">The bigger picture: learning rate and discovery rate</h2>
<p>What this all might lead to is a <em>Zeroth World</em> whose advantage (broadly speaking in terms of development: economic, educational, societal, etc) over the First World might be as large as the advantage of the First World over the Third World.</p>
<h3 id="gdp-per-employed-person">GDP per employed person</h3>
<p>A very skewed but still informative metric is GDP per person employed. It gives generally a good idea of the productivity levels achieved <em>on average</em>. There are a couple of special cases, for example China with an extremely high variance. Nonetheless, in the graphics below generated from <a href="https://www.google.com/publicdata/explore?ds=d5bncppjof8f9_&ctype=l&met_y=sl_gdp_pcap_em_kd#!ctype=l&strail=false&bcs=d&nselm=h&met_y=sl_gdp_pcap_em_kd&scale_y=log&ind_y=false&rdim=region&idim=region:NAC&idim=country:SGP:JPN:DEU:CHN:FRA:CMR:COG:ETH:GHA:KEN:NGA:SDN:ECU&ifdim=region&tdim=true&hl=en_US&dl=en_US&ind=false">Google’s dataset</a> you can see a strict separation between (some) First World countries and (some) Third World countries; note that the scale is logarithmic:</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/gdp-employed-person-comp.png" alt="GDP per employed person" />
<a href="https://www.google.com/publicdata/explore?ds=d5bncppjof8f9_&ctype=l&met_y=sl_gdp_pcap_em_kd#!ctype=l&strail=false&bcs=d&nselm=h&met_y=sl_gdp_pcap_em_kd&scale_y=log&ind_y=false&rdim=region&idim=region:NAC&idim=country:SGP:JPN:DEU:CHN:FRA:CMR:COG:ETH:GHA:KEN:NGA:SDN:ECU&ifdim=region&tdim=true&hl=en_US&dl=en_US&ind=false">[Source: Google’s dataset]</a></p>
<p>Now imagine that some countries, upon leveraging AI systems achieve a 10x gain in output per employed person. That will be the <em>Zeroth World</em>: people operating at 10x of their First World productivity levels. Hard to imagine, but that is roughly the separation between the US and Ghana for example.</p>
<p>The graph above is very compatible with well-known trends, e.g., <a href="https://www.reuters.com/article/us-singapore-semiconductors-analysis/singapores-automation-incentives-draw-tech-firms-boost-economy-idUSKBN17T3DX">Singapore strongly investing in automation</a> or China being the country with the <a href="https://www.dbs.com/aics/templatedata/article/generic/data/en/GR/042018/180409_insights_understanding_china_automation_drive_is_essential_and_welcome.xml">largest number of industrial robots going online</a>. JP Morgan estimates that automation could add <a href="https://www.businessinsider.com/automation-one-trillion-dollars-global-economy-jpmam-report-2017-11">up to $1.1 trillion</a> to the global economy over the next 10-15 years. While this is only a 1-1.5% boost in overall global GDP, in actuality the effect might be much more pronounced, as it will be concentrated in a few countries, leading to a much stronger separation; still, even if the whole boost were attributed to the US, it would still be just about 5%. But AI systems go beyond mere manufacturing automation and it is hard to estimate the cumulative effect. To put things into context, in manufacturing an extreme shift happened around the 2000s when the first wave of strong automation kicked in. Over the last 30 or so years we roughly doubled manufacturing output and close to halved the number of people; see the graphics from <a href="https://www.businessinsider.com/manufacturing-output-versus-employment-chart-2016-12">Business Insider</a>:</p>
<p class="center"><img src="https://amp.businessinsider.com/images/584b0056ca7f0c5c008b4a92-960-720.png" alt="Manufacturing output vs. automation" />
<a href="https://www.businessinsider.com/manufacturing-output-versus-employment-chart-2016-12">[Source: Business Insider]</a></p>
<p>That is 4x in about 30 years in a physical space, with large, tangible assets and more generally with lots of overall inertia in the system. It is quite likely that AI systems will have an even more pronounced effect because they are more widely deployable, so that the 10x scenario is <em>not that</em> ambitious.</p>
<h3 id="learning-rate-vs-discovery-rate">Learning rate vs discovery rate</h3>
<p>To better understand what AI systems reasonably can and cannot do, without making strong predictions about the future, we need to differentiate between the <em>learning rate</em> and the <em>discovery rate</em> of a technology. In a nutshell, the learning rate captures how fast prices, required resources, etc. fell over time for an <em>existing</em> solution or product, e.g., by how much flying got cheaper over time. It captures the various improvements made over time in deploying a given technology, but it makes no statement about new discoveries or overcoming fundamental roadblocks; that is exactly what the <em>discovery rate</em> captures. While the learning rate tends to be quite observable and often follows a relatively stable trend over time, the discovery rate is, by its nature, much more unpredictable, and that is where speculation about the future and its various scenarios often comes into play. I will not go there: the learning rate alone can provide us with some insights. Note that we refer to these two as “rates” because it is very insightful to consider the world on a logarithmic scale, e.g., measuring the time it takes to double or halve. Let us consider the example of <a href="https://aiimpacts.org/wikipedia-history-of-gflops-costs/">historical prices for GFlops</a>:</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/History-of-GFLOPS-prices.png" alt="Learning rate GFlops" /></p>
<p class="center"><a href="https://aiimpacts.org/wikipedia-history-of-gflops-costs/">[Source: AIImpacts.org]</a></p>
<p>We can find a very similar trend in <a href="https://jcmit.net/memoryprice.htm">historical prices for storage</a>:</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/MemoryDiskPriceGraph-2018Dec.jpg" alt="Learning rate storage" />
<a href="https://jcmit.net/memoryprice.htm">[Source: jcmit.net]</a></p>
<p>These two are probably pretty much expected as they roughly follow <a href="https://en.wikipedia.org/wiki/Moore%27s_law">Moore’s law</a>; however, there are many similar examples in other industries with different rates, e.g., the <a href="https://www.vox.com/2016/8/24/12620920/us-solar-power-costs-falling">historical prices for solar panels</a> or <a href="https://www.theatlantic.com/business/archive/2013/02/how-airline-ticket-prices-fell-50-in-30-years-and-why-nobody-noticed/273506/">flights</a>. Now let us compare this to the recent increase in <a href="https://blog.openai.com/ai-and-compute/">compute deployed for training AI systems</a>:</p>
<p class="center"><img src="https://openai.com/content/images/2018/05/compute_diagram-log@2x-3.png" alt="Compute used for training AI systems" />
<a href="https://blog.openai.com/ai-and-compute/">[Source: OpenAI Blog]</a></p>
<p>While Moore’s law has had a doubling rate of roughly every 18 months (so far), the doubling rate for the compute deployed here is much higher, at only about 3.5 months (so far). Clearly, neither can continue forever at such aggressive rates; however, this example points at two things: (a) we are moving <em>much faster</em> than anything we have seen so far, and (b) the deployment of more compute usually comes with a roughly similar increase in required data (the reason being that training algorithms, usually based on variants of stochastic gradient descent, can only make so many passes over the data before overfitting). Notably, the applications in the graph with the highest compute do not rely on labeled data (except for maybe Neural Machine Translation to some extent; not sure) but are reinforcement learning systems, where training data is generated through simulation and (self-)play. For more details see the <a href="https://blog.openai.com/ai-and-compute/">AI and Compute</a> post on OpenAI’s blog. The graph above is not exactly the learning rate, as it lacks the relation to, e.g., price, but it clearly shows how fast we are progressing. It is not hard to imagine that with new hardware architectures, in a not too distant future that type of power will be available on your cell phone. So even <em>without new discoveries</em>, just following the natural learning rate of the industry and making the current state of the art cheaper will have a profound impact. For example, just a few days ago Google’s DeepMind <a href="https://blog.usejournal.com/an-analysis-on-how-deepminds-starcraft-2-ai-s-superhuman-speed-could-be-a-band-aid-fix-for-the-1702fb8344d6">(not completely uncontroversially)</a> won against pro players at <a href="https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/">StarCraft 2</a> (see also <a href="https://www.theverge.com/2019/1/24/18196135/google-deepmind-ai-starcraft-2-victory">here</a>). 
The training of this system required an enormous amount of computational resources. Even in light of the controversy, this is still an important achievement in terms of scaling technology, large-scale training with multiple agents, demonstrating that well-designed reinforcement learning systems <em>can</em> learn very complex tasks, and, more generally, “making it work”; whether reinforcement learning in general is the right approach to such problems is left for another discussion. In a few years we will teach building such integrated large-scale systems end-to-end at universities as a senior-design type of project, and a few years after that you will be able to download such a bot from the <em>App Store</em>. Crazy? Think of <em>neural style transfer</em> a few years back. You can now get <a href="https://prisma-ai.com/">Prisma</a> on your cell phone. Sure, it might offload the computation to the cloud—at least previous versions did so—but that is not the point. The point is that complex AI system designs at the cutting edge are made available to the broader public only <em>a few years</em> after their inception. <a href="https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html">Google Duplex</a>, which makes restaurant and similar reservations for you, is another such example. To be clear, I am also very well aware of the limitations, but at the same time I fail to see a <em>fundamental roadblock</em>, and existing limitations might be removed quickly with good engineering and research.</p>
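<p>The compounding implied by those two doubling rates (roughly 18 months for Moore’s law vs. about 3.5 months for AI training compute, as quoted above) can be made concrete with a small, purely illustrative sketch:</p>

```python
def growth_factor(months_elapsed, doubling_time_months):
    """Growth factor implied by a constant doubling time."""
    return 2.0 ** (months_elapsed / doubling_time_months)

horizon = 60  # five years, in months
moore = growth_factor(horizon, 18.0)      # roughly 10x
ai_compute = growth_factor(horizon, 3.5)  # roughly 145,000x
print(f"Moore's law: {moore:.0f}x, AI training compute: {ai_compute:.0f}x")
```

<p>Over five years, a 3.5-month doubling time yields a factor about four orders of magnitude larger than what an 18-month doubling time delivers, which is the point of the comparison above.</p>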
<h3 id="impact-on-society-and-economy">Impact on society and economy</h3>
<p>In a nutshell: we are moving very fast. In fact, so fast that the consequences are unclear. Forget about the “terminator scenario” as a threat to society. Not because it might or might not happen, but rather because the <em>current technology</em> alone, just following its natural learning rate cycle, poses a much more immediate challenge with the potential to lead to <em>huge</em> disruptions, both positive and negative.</p>
<p>One very critical impact to think about is the workforce. If AI enables people to be more productive, then either economic output increases or the number of people required to achieve a given output level decreases; these are two sides of the same coin. The reality is that while there will (likely) be significant improvements in economic output, there is only so much increase the “world” can absorb in a short period of time: at a world economic growth rate of about 2-3% per year, the time it takes to 10x the output is roughly 80-100 years; even with significantly improved efficiency due to AI systems you can only push the output so far. What this means is that we might be facing a transitory period in which efficiency improvements drastically impact employment levels, and it will take considerable time for the workforce to adjust to these changes. In light of this, one might actually contemplate whether populations in several developed countries are shrinking in early anticipation of the times ahead.</p>
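<p>The time-to-10x figure can be checked with simple compounding arithmetic: at a constant annual growth rate g, multiplying output by a factor k takes log(k)/log(1+g) years, which for growth rates in the 2-3% range lands in the ballpark quoted above. A quick sketch:</p>

```python
from math import log

def years_to_multiply(factor, annual_growth):
    """Years needed to multiply output by `factor` at a constant annual growth rate."""
    return log(factor) / log(1.0 + annual_growth)

for g in (0.02, 0.025, 0.03):
    print(f"at {g:.1%} annual growth, 10x output takes {years_to_multiply(10, g):.0f} years")
```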
<p>The other critical thing to think about is the concentration of power and wealth that might accompany these shifts. Already today we see tech companies accumulating wealth and capital at unprecedented rates, leveraging the network effects of the internet. Yet, being still somewhat tied to the physical world, e.g., through their users, there is still <em>some limit</em> to their growth. It is easily imaginable, however, that the next “category of scale” will be defined by AI companies, with a concentration of resources, wealth, and power that makes current concentration levels in the Valley pale in comparison.</p>
<p>We will likely also see the empowering of individuals beyond what we could imagine just a few years back, (a) by multiplying the sheer output of an individual due to scaling, but also (b) by enabling the individual to do new things leveraging AI support systems. Then the “best” will dominate, and technology will enable that individual to act globally, removing the last of the geographic entry barriers. As a simple example, take the recent “vlog” phenomenon, where one-person video productions achieve a level of professionalism that rivals that of large-scale productions, executed from any place in the world and distributed worldwide through YouTube. Moreover, the individual can directly “sell” to her/his target audience, cutting out the middleman. This might provide greater diversity and also a democratization of such disciplines, but at the same time it might remove a useful filter in some cases.</p>
<p>These shifts, brought about by AI systems and the resulting technology, come with a lot of potential positives and negatives, and the promise of AI systems is great. Being high on the possibilities of this new paradigm, it is easy to forget that there might be severe unintended consequences with potentially critical impact on our societies and economies. In order to enable sustainable progress we need to not just be aware but prepare and actively shape the use of these new technologies.</p>
<p><em>Sebastian Pokutta. TL;DR: On the impact of AI on society and economy and its potential to enable a zeroth world with unprecedented economic output.</em></p>
<p><strong>Toolchain Tuesday No. 5</strong> (2018-12-23, <a href="http://www.pokutta.com/blog/random/2018/12/23/toolchain-5">permalink</a>)</p>
<p><em>TL;DR: Part of a series of posts about tools, services, and packages that I use in day-to-day operations to boost efficiency and free up time for the things that really matter. Use at your own risk - happy to answer questions. For the full, continuously expanding list so far see <a href="/blog/pages/toolchain.html">here</a>.</em>
<!--more--></p>
<p>This is the fifth installment of a series of posts; the <a href="/blog/pages/toolchain.html">full list</a> is expanding over time. This time around it is about modeling languages for optimization problems.</p>
<h2 id="software">Software:</h2>
<h3 id="cvxopt">CVXOPT</h3>
<p>Low-level <code class="highlighter-rouge">Python</code> interface for convex optimization.</p>
<p><em>Learning curve: ⭐️⭐️⭐️⭐️</em>
<em>Usefulness: ⭐️⭐️⭐️</em> <br />
<em>Site: <a href="https://cvxopt.org">https://cvxopt.org</a></em></p>
<p><code class="highlighter-rouge">CVXOPT</code> is basically a <code class="highlighter-rouge">Python</code> interface to various optimization solvers, providing an intermediate, relatively low-level, matrix-based interface. This is in contrast to some of the modeling languages below, which provide a higher level of abstraction and generate the matrix structure by transcribing the statements of the modeling language. Nonetheless, <code class="highlighter-rouge">CVXOPT</code> is a great tool to solve optimization problems in <code class="highlighter-rouge">Python</code>.</p>
<p>From <a href="https://cvxopt.org">https://cvxopt.org</a>:</p>
<blockquote>
<p>CVXOPT is a free software package for convex optimization based on the Python programming language. It can be used with the interactive Python interpreter, on the command line by executing Python scripts, or integrated in other software via Python extension modules. Its main purpose is to make the development of software for convex optimization applications straightforward by building on Python’s extensive standard library and on the strengths of Python as a high-level programming language.</p>
</blockquote>
<p>Here is sample code from <a href="https://cvxopt.org">https://cvxopt.org</a> to give you an idea:</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Risk-return trade-off.</span>
<span class="kn">from</span> <span class="nn">math</span> <span class="kn">import</span> <span class="n">sqrt</span>
<span class="kn">from</span> <span class="nn">cvxopt</span> <span class="kn">import</span> <span class="n">matrix</span>
<span class="kn">from</span> <span class="nn">cvxopt.blas</span> <span class="kn">import</span> <span class="n">dot</span>
<span class="kn">from</span> <span class="nn">cvxopt.solvers</span> <span class="kn">import</span> <span class="n">qp</span><span class="p">,</span> <span class="n">options</span>
<span class="n">n</span> <span class="o">=</span> <span class="mi">4</span>
<span class="n">S</span> <span class="o">=</span> <span class="n">matrix</span><span class="p">(</span> <span class="p">[[</span> <span class="mf">4e-2</span><span class="p">,</span> <span class="mf">6e-3</span><span class="p">,</span> <span class="o">-</span><span class="mf">4e-3</span><span class="p">,</span> <span class="mf">0.0</span> <span class="p">],</span>
<span class="p">[</span> <span class="mf">6e-3</span><span class="p">,</span> <span class="mf">1e-2</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mf">0.0</span> <span class="p">],</span>
<span class="p">[</span><span class="o">-</span><span class="mf">4e-3</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mf">2.5e-3</span><span class="p">,</span> <span class="mf">0.0</span> <span class="p">],</span>
<span class="p">[</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mf">0.0</span> <span class="p">]]</span> <span class="p">)</span>
<span class="n">pbar</span> <span class="o">=</span> <span class="n">matrix</span><span class="p">([</span><span class="o">.</span><span class="mi">12</span><span class="p">,</span> <span class="o">.</span><span class="mi">10</span><span class="p">,</span> <span class="o">.</span><span class="mo">07</span><span class="p">,</span> <span class="o">.</span><span class="mo">03</span><span class="p">])</span>
<span class="n">G</span> <span class="o">=</span> <span class="n">matrix</span><span class="p">(</span><span class="mf">0.0</span><span class="p">,</span> <span class="p">(</span><span class="n">n</span><span class="p">,</span><span class="n">n</span><span class="p">))</span>
<span class="n">G</span><span class="p">[::</span><span class="n">n</span><span class="o">+</span><span class="mi">1</span><span class="p">]</span> <span class="o">=</span> <span class="o">-</span><span class="mf">1.0</span>
<span class="n">h</span> <span class="o">=</span> <span class="n">matrix</span><span class="p">(</span><span class="mf">0.0</span><span class="p">,</span> <span class="p">(</span><span class="n">n</span><span class="p">,</span><span class="mi">1</span><span class="p">))</span>
<span class="n">A</span> <span class="o">=</span> <span class="n">matrix</span><span class="p">(</span><span class="mf">1.0</span><span class="p">,</span> <span class="p">(</span><span class="mi">1</span><span class="p">,</span><span class="n">n</span><span class="p">))</span>
<span class="n">b</span> <span class="o">=</span> <span class="n">matrix</span><span class="p">(</span><span class="mf">1.0</span><span class="p">)</span>
<span class="n">N</span> <span class="o">=</span> <span class="mi">100</span>
<span class="n">mus</span> <span class="o">=</span> <span class="p">[</span> <span class="mi">10</span><span class="o">**</span><span class="p">(</span><span class="mf">5.0</span><span class="o">*</span><span class="n">t</span><span class="o">/</span><span class="n">N</span><span class="o">-</span><span class="mf">1.0</span><span class="p">)</span> <span class="k">for</span> <span class="n">t</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">N</span><span class="p">)</span> <span class="p">]</span>
<span class="n">options</span><span class="p">[</span><span class="s">'show_progress'</span><span class="p">]</span> <span class="o">=</span> <span class="bp">False</span>
<span class="n">xs</span> <span class="o">=</span> <span class="p">[</span> <span class="n">qp</span><span class="p">(</span><span class="n">mu</span><span class="o">*</span><span class="n">S</span><span class="p">,</span> <span class="o">-</span><span class="n">pbar</span><span class="p">,</span> <span class="n">G</span><span class="p">,</span> <span class="n">h</span><span class="p">,</span> <span class="n">A</span><span class="p">,</span> <span class="n">b</span><span class="p">)[</span><span class="s">'x'</span><span class="p">]</span> <span class="k">for</span> <span class="n">mu</span> <span class="ow">in</span> <span class="n">mus</span> <span class="p">]</span>
<span class="n">returns</span> <span class="o">=</span> <span class="p">[</span> <span class="n">dot</span><span class="p">(</span><span class="n">pbar</span><span class="p">,</span><span class="n">x</span><span class="p">)</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">xs</span> <span class="p">]</span>
<span class="n">risks</span> <span class="o">=</span> <span class="p">[</span> <span class="n">sqrt</span><span class="p">(</span><span class="n">dot</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">S</span><span class="o">*</span><span class="n">x</span><span class="p">))</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">xs</span> <span class="p">]</span>
</code></pre></div></div>
<h3 id="pyomo">Pyomo</h3>
<p><code class="highlighter-rouge">Pyomo</code> is a Python-based open-source optimization modeling language supporting a wide range of optimization paradigms and solvers.</p>
<p><em>Learning curve: ⭐️⭐️⭐️</em>
<em>Usefulness: ⭐️⭐️⭐️⭐️</em> <br />
<em>Site: <a href="http://www.pyomo.org/">http://www.pyomo.org/</a></em></p>
<p><code class="highlighter-rouge">Pyomo</code> is a Python-based open-source optimization modeling language. It supports a variety of different optimization paradigms and integrates with a wide range of solvers including <code class="highlighter-rouge">BARON</code>, <code class="highlighter-rouge">CBC</code>, <code class="highlighter-rouge">CPLEX</code>, <code class="highlighter-rouge">Gurobi</code>, and <code class="highlighter-rouge">glpsol</code>; check the <a href="https://pyomo.readthedocs.io/en/latest/index.html">Pyomo Manual</a>. Another great resource with examples is the <a href="https://github.com/jckantor/ND-Pyomo-Cookbook">Pyomo Cookbook</a>.</p>
<p>What sets <code class="highlighter-rouge">Pyomo</code> apart from <code class="highlighter-rouge">MathProg</code> and <code class="highlighter-rouge">CVXOPT</code> is that it is a relatively high-level modeling language (compared to <code class="highlighter-rouge">CVXOPT</code>) while being written in <code class="highlighter-rouge">Python</code> (compared to <code class="highlighter-rouge">MathProg</code>) allowing for easy integration with a plethora of other packages.</p>
<p>From <a href="http://www.pyomo.org/">http://www.pyomo.org/</a>:</p>
<blockquote>
<p>A core capability of Pyomo is modeling structured optimization applications. Pyomo can be used to define general symbolic problems, create specific problem instances, and solve these instances using commercial and open-source solvers. Pyomo’s modeling objects are embedded within a full-featured high-level programming language providing a rich set of supporting libraries, which distinguishes Pyomo from other algebraic modeling languages like AMPL, AIMMS and GAMS.</p>
</blockquote>
<p>Supported problem types include:</p>
<ul>
<li>Linear programming</li>
<li>Quadratic programming</li>
<li>Nonlinear programming</li>
<li>Mixed-integer linear programming</li>
<li>Mixed-integer quadratic programming</li>
<li>Mixed-integer nonlinear programming</li>
<li>Stochastic programming</li>
<li>Generalized disjunctive programming</li>
<li>Differential algebraic equations</li>
<li>Bilevel programming</li>
<li>Mathematical programs with equilibrium constraints</li>
</ul>
<p>Here is an example of a model:</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">from</span> <span class="nn">pyomo.environ</span> <span class="kn">import</span> <span class="o">*</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">ConcreteModel</span><span class="p">()</span>
<span class="c"># declare decision variables</span>
<span class="n">model</span><span class="o">.</span><span class="n">y</span> <span class="o">=</span> <span class="n">Var</span><span class="p">(</span><span class="n">domain</span><span class="o">=</span><span class="n">NonNegativeReals</span><span class="p">)</span>
<span class="c"># declare objective</span>
<span class="n">model</span><span class="o">.</span><span class="n">profit</span> <span class="o">=</span> <span class="n">Objective</span><span class="p">(</span>
<span class="n">expr</span> <span class="o">=</span> <span class="mi">30</span><span class="o">*</span><span class="n">model</span><span class="o">.</span><span class="n">y</span><span class="p">,</span>
<span class="n">sense</span> <span class="o">=</span> <span class="n">maximize</span><span class="p">)</span>
<span class="c"># declare constraints</span>
<span class="n">model</span><span class="o">.</span><span class="n">laborA</span> <span class="o">=</span> <span class="n">Constraint</span><span class="p">(</span><span class="n">expr</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">y</span> <span class="o"><=</span> <span class="mi">80</span><span class="p">)</span>
<span class="n">model</span><span class="o">.</span><span class="n">laborB</span> <span class="o">=</span> <span class="n">Constraint</span><span class="p">(</span><span class="n">expr</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">y</span> <span class="o"><=</span> <span class="mi">100</span><span class="p">)</span>
<span class="c"># solve</span>
<span class="n">SolverFactory</span><span class="p">(</span><span class="s">'glpk'</span><span class="p">)</span><span class="o">.</span><span class="n">solve</span><span class="p">(</span><span class="n">model</span><span class="p">)</span><span class="o">.</span><span class="n">write</span><span class="p">()</span>
</code></pre></div></div>
<h3 id="mathprog">MathProg</h3>
<p><code class="highlighter-rouge">MathProg</code> (aka <code class="highlighter-rouge">GMPL</code>) is a modeling language for Mixed-Integer Linear Programs.</p>
<p><em>Learning curve: ⭐️⭐️</em>
<em>Usefulness: ⭐️⭐️⭐️</em> <br />
<em>Site: <a href="https://www.gnu.org/software/glpk/">https://www.gnu.org/software/glpk/</a></em></p>
<p><code class="highlighter-rouge">MathProg</code>, also known as <code class="highlighter-rouge">GMPL</code>, is a modeling language for Mixed-Integer Linear Programs (MILPs). It is included with <code class="highlighter-rouge">glpk</code>, the <em>GNU Linear Programming Kit</em>; it supports reading from and writing to data sources (databases via <code class="highlighter-rouge">ODBC</code> or <code class="highlighter-rouge">JDBC</code>) and, apart from <code class="highlighter-rouge">glpsol</code>, which is <code class="highlighter-rouge">glpk</code>’s own MILP solver, it supports various other solvers, such as <code class="highlighter-rouge">CPLEX</code>, <code class="highlighter-rouge">Gurobi</code>, or <code class="highlighter-rouge">SCIP</code>, through LP files. From the <a href="https://en.wikibooks.org/wiki/GLPK/GMPL_(MathProg)">GLPK wikibook</a>, which is also a great resource for <code class="highlighter-rouge">GMPL</code>:</p>
<blockquote>
<p>GNU MathProg is a high-level language for creating mathematical programming models. MathProg is specific to GLPK, but resembles a subset of AMPL. MathProg can also be referred to as GMPL (GNU Mathematical Programming Language), the two terms being interchangeable.</p>
</blockquote>
<p><code class="highlighter-rouge">MathProg</code> is in particular great for fast prototyping. Unfortunately, it does not directly support other IP solvers through its built-in interface but requires going through LP files. As a consequence, the relatively powerful output post-processing of <code class="highlighter-rouge">MathProg</code> cannot be used in that case, and <code class="highlighter-rouge">glpk</code>’s own solver <code class="highlighter-rouge">glpsol</code>, which integrates with <code class="highlighter-rouge">MathProg</code> natively, can only handle small to midsize problems. This limits the use case (therefore the ⭐️⭐️⭐️-rating on usefulness). Probably the natural course of things is to ‘graduate’ to <code class="highlighter-rouge">Pyomo</code> over time. Despite all of this, the actual <code class="highlighter-rouge">MathProg</code> language is very useful. I have used it <em>very often</em> for the actual modeling task, to separate (supporting) code from the model. I then generate an LP file from that model and its data sources, which I then solve with, e.g., <code class="highlighter-rouge">Gurobi</code>. Finally, I parse the output back in, mostly using <code class="highlighter-rouge">Python</code>; there are <code class="highlighter-rouge">Python</code> packages for <code class="highlighter-rouge">glpsol</code> that handle this parsing for you if you do not want to implement it yourself (it is relatively easy though). <code class="highlighter-rouge">Pyomo</code>, for example, also goes through the LP-file route to interface with <code class="highlighter-rouge">glpsol</code>.</p>
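<p>The final parsing step in that workflow is easy enough to implement by hand. Here is a minimal sketch for the kind of <code class="highlighter-rouge">.sol</code> file Gurobi’s command-line tool writes (comment lines start with <code class="highlighter-rouge">#</code>, followed by one <code class="highlighter-rouge">name value</code> pair per line; the sample values below are made up):</p>

```python
def parse_sol_file(text):
    """Parse a Gurobi-style .sol file into a {variable name: value} dict."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments such as '# Objective value = ...'
        name, value = line.rsplit(None, 1)
        values[name] = float(value)
    return values

sample = """\
# Objective value = 153.675
x[Seattle,New-York] 300
x[Seattle,Chicago] 50
"""
print(parse_sol_file(sample))
```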
<p>See <a href="https://en.wikibooks.org/wiki/GLPK/Obtaining_GLPK">here</a> on how to obtain <code class="highlighter-rouge">glpk</code>. To give you an idea of the syntax, check out this example that also includes some solution post-processing (similar to what e.g., <code class="highlighter-rouge">OPL</code> can do):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># A TRANSPORTATION PROBLEM
#
# This problem finds a least cost shipping schedule that meets
# requirements at markets and supplies at factories.
#
# References:
# Dantzig G B, "Linear Programming and Extensions."
# Princeton University Press, Princeton, New Jersey, 1963,
# Chapter 3-3.
set I;
/* canning plants */
set J;
/* markets */
param a{i in I};
/* capacity of plant i in cases */
param b{j in J};
/* demand at market j in cases */
param d{i in I, j in J};
/* distance in thousands of miles */
param f;
/* freight in dollars per case per thousand miles */
param c{i in I, j in J} := f * d[i,j] / 1000;
/* transport cost in thousands of dollars per case */
var x{i in I, j in J} >= 0;
/* shipment quantities in cases */
minimize cost: sum{i in I, j in J} c[i,j] * x[i,j];
/* total transportation costs in thousands of dollars */
s.t. supply{i in I}: sum{j in J} x[i,j] <= a[i];
/* observe supply limit at plant i */
s.t. demand{j in J}: sum{i in I} x[i,j] >= b[j];
/* satisfy demand at market j */
solve;
# Report / Result Section (Optional)
printf '#################################\n';
printf 'Transportation Problem / LP Model Result\n';
printf '\n';
printf 'Minimum Cost = %.2f\n', cost;
printf '\n';
printf '\n';
printf 'Variables (i.e. shipment quantities in cases ) \n';
printf 'Shipment quantities in cases\n';
printf 'Canning Plants Markets Solution (Cases) \n';
printf{i in I, j in J}:'%14s %10s %11s\n',i,j, x[i,j];
printf '\n';
printf 'Constraints\n';
printf '\n';
printf 'Observe supply limit at plant i\n';
printf 'Canning Plants Solution Sign Required\n';
for {i in I} {
printf '%14s %10.2f <= %.3f\n', i, sum {j in J} x[i,j], a[i];
}
printf '\n';
printf 'Satisfy demand at market j\n';
printf 'Market Solution Sign Required\n';
for {j in J} {
printf '%5s %10.2f >= %.3f\n', j, sum {i in I} x[i,j], b[j];
}
data;
set I := Seattle San-Diego;
set J := New-York Chicago Topeka;
param a := Seattle 350
San-Diego 600;
param b := New-York 325
Chicago 300
Topeka 275;
param d : New-York Chicago Topeka :=
Seattle 2.5 1.7 1.8
San-Diego 2.5 1.8 1.4 ;
param f := 90;
end;
</code></pre></div></div>
<p><strong>Cheat Sheet: Smooth Convex Optimization</strong> (2018-12-07, <a href="http://www.pokutta.com/blog/research/2018/12/07/cheatsheet-smooth-idealized">permalink</a>)</p>
<p><em>TL;DR: Cheat Sheet for smooth convex optimization and analysis via an idealized gradient descent algorithm. While technically a continuation of the Frank-Wolfe series, this should have been the very first post and this post will become the Tour d’Horizon for this series. Long and technical.</em>
<!--more--></p>
<p><em>Posts in this series (so far).</em></p>
<ol>
<li><a href="/blog/research/2018/12/07/cheatsheet-smooth-idealized.html">Cheat Sheet: Smooth Convex Optimization</a></li>
<li><a href="/blog/research/2018/10/05/cheatsheet-fw.html">Cheat Sheet: Frank-Wolfe and Conditional Gradients</a></li>
<li><a href="/blog/research/2018/10/19/cheatsheet-fw-lin-conv.html">Cheat Sheet: Linear convergence for Conditional Gradients</a></li>
<li><a href="/blog/research/2018/11/12/heb-conv.html">Cheat Sheet: Hölder Error Bounds (HEB) for Conditional Gradients</a></li>
<li><a href="/blog/research/2019/02/27/cheatsheet-nonsmooth.html">Cheat Sheet: Subgradient Descent, Mirror Descent, and Online Learning</a></li>
<li><a href="/blog/research/2019/06/10/cheatsheet-acceleration-first-principles.html">Cheat Sheet: Acceleration from First Principles</a></li>
</ol>
<p><em>My apologies for incomplete references—this should merely serve as an overview.</em></p>
<p>In this fourth installment of the series on Conditional Gradients, which actually should have been the very first post, I will talk about an idealized gradient descent algorithm for smooth convex optimization, which allows us to obtain convergence rates and from which we can instantiate several known algorithms, including gradient descent and Frank-Wolfe variants. This post will become a Tour d’Horizon of the various results from this series. To be clear, the focus is on <em>projection-free</em> methods in the <em>constrained</em> case; however, I will also discuss other approaches to complement the exposition.</p>
<p>While I will use notation that is compatible with previous posts, in particular the <a href="/blog/research/2018/10/05/cheatsheet-fw.html">first post</a>, I will make this post as self-contained as possible with few forward references, so that this will become “Post Zero”. As before I will use Frank-Wolfe [FW] and Conditional Gradients [CG] interchangeably.</p>
<p>Our setup will be as follows. We will consider a convex function $f: \RR^n \rightarrow \RR$ and we want to solve</p>
<script type="math/tex; mode=display">\min_{x \in K} f(x),</script>
<p>where $K$ is some feasible region, e.g., $K = \RR^n$ is the unconstrained case. We will in particular consider smooth functions as detailed further below and we assume that we only have <em>first-order access</em> to the function, via a so-called <em>first-order oracle</em>:</p>
<p class="mathcol"><strong>First-Order oracle for $f$</strong> <br />
<em>Input:</em> $x \in \mathbb R^n$ <br />
<em>Output:</em> $\nabla f(x)$ and $f(x)$</p>
<p>For now we disregard how we can access the feasible region $K$ as there are various access models and we will specify the model based on the algorithmic class that we target later. For the sake of simplicity, we will be using the $\ell_2$-norm but the arguments can be easily extended to other norms, e.g., replacing Cauchy–Schwarz inequalities by Hölder inequalities and using dual norms.</p>
<h2 id="an-idealized-gradient-descent-algorithm">An idealized gradient descent algorithm</h2>
<p>In a first step we will devise an idealized gradient descent algorithm, for which we will then derive convergence guarantees under different assumptions on the function $f$ under consideration. We will then show how known guarantees can be easily obtained from this idealized gradient descent algorithm.</p>
<p>Let $f: \RR^n \rightarrow \RR$ be a convex function and $K$ be some feasible region. We are interested in studying ‘gradient descent-like’ algorithms. To this end let $x_t \in K$ be some point and we consider updates of the form</p>
<p>\[
\tag{dirStep}
x_{t+1} \leftarrow x_t - \eta_t d_t,
\]</p>
<p>for some direction $d_t \in \RR^n$ and $\eta_t \in \RR$ for each $t$. For example, we would obtain standard gradient descent by choosing $d_t \doteq \nabla f(x_t)$ and $\eta_t = \frac{1}{L}$, where $L$ is the smoothness constant of $f$, i.e., the Lipschitz constant of $\nabla f$.</p>
<h3 id="measures-of-progress">Measures of progress</h3>
<p>We will consider two important measures that drive the overall convergence rate. The first is a <em>measure of progress</em>, which in our context will be provided by the smoothness of the function. This will be the only measure of progress that we will consider, but there are many others for different setups. Note that the arguments here using smoothness do not rely on the convexity of the function; something to remember for later.</p>
<p>Let us recall the definition of smoothness:</p>
<p class="mathcol"><strong>Definition (smoothness).</strong> A convex function $f$ is said to be <em>$L$-smooth</em> if for all $x,y \in \mathbb R^n$ it holds:
\[
f(y) - f(x) \leq \langle \nabla f(x), y-x \rangle + \frac{L}{2} \norm{x-y}^2.
\]</p>
<p>There are two things to remember about smoothness:</p>
<ol>
<li>If $x$ is an optimal solution to (the unconstrained) $f$, then $\nabla f(x) = 0$, so that smoothness provides an <em>upper bound</em> on the distance to optimality: $f(x) - f(x^\esx) \leq \frac{L}{2} \norm{x-x^\esx}^2$.</li>
<li>More generally it provides an upper bound for the change of the function by means of a quadratic.</li>
</ol>
<p>The <em>most important thing</em> however is that <em>smoothness induces progress</em> in schemes such as (dirStep). For this let us consider the smoothness inequality at two iterates $x_t$ and $x_{t+1}$ in the scheme from above. Plugging in the definition of (dirStep)
we obtain</p>
<script type="math/tex; mode=display">\underbrace{f(x_{t}) - f(x_{t+1})}_{\text{primal progress}} \geq \eta \langle\nabla f(x_t),d\rangle - \eta^2 \frac{L}{2} \|d\|^2</script>
<p>Note that the function on the right is concave in $\eta$ and so we can maximize the right-hand side to obtain a lower bound on the progress. Taking the derivative on the right-hand side and asserting criticality we obtain:</p>
<script type="math/tex; mode=display">\langle\nabla f(x_t),d\rangle - \eta L \norm{d}^2 = 0,</script>
<p>which leads to the optimal choice $\eta^\esx \doteq \frac{\langle\nabla f(x_t),d\rangle}{L \norm{d}^2}$. This induces a progress lower bound of:</p>
<p class="mathcol"><strong>Progress induced by smoothness (for $d$).</strong>
\[
\begin{equation}
\tag{Progress from $d$}
\underbrace{f(x_{t}) - f(x_{t+1})}_{\text{primal progress}} \geq \frac{\langle\nabla f(x_t),d\rangle^2}{2L \norm{d}^2}.
\end{equation}
\]</p>
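<p>As a quick numerical sanity check (my own sketch, not part of the derivation), we can verify this progress bound on a random quadratic, whose smoothness constant is the largest eigenvalue of its Hessian:</p>

```python
import numpy as np

# Sanity check of the progress bound induced by smoothness on a quadratic
# f(x) = 0.5 * x^T A x, which is L-smooth with L = lambda_max(A).
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)                # positive definite Hessian
L = float(np.linalg.eigvalsh(A).max())

def f(x):
    return 0.5 * x @ A @ x

def grad(x):
    return A @ x

x = rng.standard_normal(5)
d = rng.standard_normal(5)             # an arbitrary direction
g = grad(x)
if g @ d < 0:                          # flip d so it is a descent direction
    d = -d
eta = (g @ d) / (L * (d @ d))          # optimal step length from smoothness
progress = f(x) - f(x - eta * d)       # realized primal progress of the step
bound = (g @ d) ** 2 / (2 * L * (d @ d))
```

<p>Here <code>progress</code> should always dominate <code>bound</code>, matching (Progress from $d$).</p>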
<p>We will now formulate our <em>idealized gradient descent</em> by using the <em>(normalized) idealized direction</em> $d \doteq \frac{x_t - x^\esx}{\norm{ x_t - x^\esx }}$, where we basically make steps in the direction of the optimal solution $x^\esx$; note that in general there might be multiple optimal solutions, in which case we fix one arbitrarily.</p>
<p class="mathcol"><strong>Idealized Gradient Descent (IGD)</strong> <br />
<em>Input:</em> Smooth convex function $f$ with first-order oracle access and smoothness parameter $L$. <br />
<em>Output:</em> Sequence of points $x_0, \dots, x_T$ <br />
For $t = 0, \dots, T-1$ do: <br />
$\quad x_{t+1} \leftarrow x_t - \eta_t \frac{x_t - x^\esx}{\norm{ x_t - x^\esx }}$ with $\eta_t = \frac{\langle\nabla f(x_t),\frac{x_t - x^\esx}{\norm{ x_t - x^\esx }}\rangle}{L}$</p>
<p>It is important to note that in reality we <em>do not</em> have access to this idealized direction. Moreover, if we had access, we could perform line search along this direction to get the optimal solution $x^\esx$ in a <em>single</em> step. However, what we assume here is that the <em>algorithm does not know that this is an optimal direction</em>; hence, only having first-order access, the smoothness condition, and assuming that we do not do line search etc., the best the algorithm can do is use the optimal step length from smoothness, which is exactly how we choose $\eta_t$. Also note that we could have defined $d$ as the unnormalized idealized direction $x_t - x^\esx$; however, the normalization simplifies the exposition.</p>
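<p>For intuition, here is a minimal sketch of IGD on a strongly convex quadratic, where, purely for illustration, we hand the algorithm the optimum $x^\esx$ that it would not have in reality; the problem instance is made up:</p>

```python
import numpy as np

# Idealized Gradient Descent (IGD): steps along the normalized direction
# (x_t - x*) / ||x_t - x*|| with the optimal step length from smoothness.
# The optimum x_star is handed to the algorithm for illustration only.
A = np.diag([1.0, 4.0, 10.0])        # Hessian; L = 10, mu = 1
b = np.array([1.0, -2.0, 3.0])
L = 10.0
x_star = np.linalg.solve(A, b)       # unconstrained optimum of f

def f(x):
    return 0.5 * x @ A @ x - b @ x

x = np.zeros(3)
gaps = [f(x) - f(x_star)]            # primal gaps h_t
for _ in range(100):
    d = x - x_star
    nd = np.linalg.norm(d)
    if nd < 1e-14:
        break
    d = d / nd                       # normalized idealized direction
    eta = ((A @ x - b) @ d) / L      # step length from smoothness
    x = x - eta * d
    gaps.append(f(x) - f(x_star))
```

<p>The recorded primal gaps decrease monotonically, in line with the progress guarantee above.</p>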
<p>Let us now briefly establish the progress guarantees for IGD. For the sake of brevity let $h_t \doteq h(x_t) \doteq f(x_t) - f(x^\esx)$ denote the <em>primal gap (at $x_t$)</em>. Plugging in the parameters into the progress inequality, we obtain</p>
<p class="mathcol"><strong>Progress guarantee for IGD.</strong>
\[
\begin{equation}
\tag{IGD Progress}
\underbrace{f(x_{t}) - f(x_{t+1})}_{\text{primal progress}} = h_{t} - h_{t+1} \geq \frac{\langle \nabla f(x_t),x_t - x^\esx \rangle^2}{2L \norm{x_t - x^\esx}^2}.
\end{equation}
\]</p>
<h3 id="measures-of-optimality">Measures of optimality</h3>
<p>We will now introduce <em>measures of optimality</em> that together with (IGD Progress) induce convergence rates for IGD. These rates are <em>idealized rates</em> as they depend on the idealized direction; nonetheless, we will see in the following section that actual rates for known algorithms follow almost immediately. We will start with some basic measures first; I might expand this list over time if I come across other measures that can be explained relatively easily.</p>
<p>In order to establish (idealized) convergence rates, we have to relate $\langle \nabla f(x_t),x_t - x^\esx \rangle$ with $f(x_t) - f(x^\esx)$. There are many different such relations that we refer to as <em>measures of optimality</em>, as they effectively provide a guarantee on the primal gap $h_t$ via dual information as will become clear soon.</p>
<p>To put things into perspective, smoothness provides a <em>quadratic</em> upper bound on $f(x)$, while convexity provides a <em>linear</em> lower bound on $f(x)$ and strong convexity provides a <em>quadratic</em> lower bound on $f(x)$. The HEB condition (to be introduced later), which will be one of the considered measures of optimality, basically interpolates between linear and quadratic lower bounds by capturing how sharply the function curves around the optimal solution(s). The following graphic shows the relation between convexity, strong convexity, and smoothness on the left and functions with different $\theta$-values in the HEB condition (as explained further below) on the right.</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/heb-conv-over.png" alt="Convexity HEB overview" /></p>
<h4 id="convexity">Convexity</h4>
<p>Our first measure of optimality is <em>convexity</em>.</p>
<p class="mathcol"><strong>Definition (convexity).</strong> A differentiable function $f$ is said to be <em>convex</em> if for all $x,y \in \mathbb R^n$ it holds:
\[f(y) - f(x) \geq \langle \nabla f(x), y-x \rangle.\]</p>
<p>From this we can derive a very basic guarantee on the primal gap $h_t$, by choosing $y \leftarrow x^\esx$ and $x \leftarrow x_t$ and we obtain:</p>
<p class="mathcol"><strong>Primal Bound (convexity).</strong> At an iterate $x_t$ convexity induces a primal bound of the form:
\[
\tag{PB-C}
f(x_t) - f(x^\esx) \leq \langle \nabla f(x_t),x_t - x^\esx \rangle.
\]</p>
<p>Combining (PB-C) with (IGD-Progress) we obtain:</p>
<script type="math/tex; mode=display">h_{t} - h_{t+1} \geq \frac{\langle \nabla f(x_t),x_t - x^\esx \rangle^2}{2L \norm{x_t - x^\esx}^2} \geq \frac{h_t^2}{2L \norm{x_t - x^\esx}^2} \geq \frac{h_t^2}{2L \norm{x_0 - x^\esx}^2},</script>
<p>where the last inequality is not immediate but also not hard to show; it relies on the fact that along the idealized direction the iterates do not move away from $x^\esx$, so that $\norm{x_t - x^\esx} \leq \norm{x_0 - x^\esx}$. Rearranging things we obtain:</p>
<p class="mathcol"><strong>IGD contraction (convexity).</strong> Assuming convexity the primal gap $h_t$ contracts as:
\[
\tag{Rec-C}
h_{t+1} \leq h_t \left(1 - \frac{h_t}{2L \norm{x_0 - x^\esx}^2}\right),
\]
which leads to a convergence rate after solving the recurrence of
\[
\tag{Rate-C}
h_T \leq \frac{2L \norm{x_0 - x^\esx}^2}{T+4}.
\]</p>
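<p>One can check numerically that (Rec-C) indeed solves to the $O(1/T)$ bound in (Rate-C); the sketch below iterates the recurrence starting from $h_0 = \frac{L}{2}\norm{x_0 - x^\esx}^2$ (the worst case allowed by smoothness), with an arbitrary value for $2L\norm{x_0 - x^\esx}^2$:</p>

```python
# Iterate the primal-gap recurrence h_{t+1} = h_t * (1 - h_t / C) with
# C = 2 L ||x_0 - x*||^2 and compare against the closed-form rate C / (T + 4).
C = 8.0          # C = 2 L ||x_0 - x*||^2, arbitrary positive value
h = C / 4        # smoothness guarantees h_0 <= (L/2) ||x_0 - x*||^2 = C / 4
hs = [h]
for _ in range(1000):
    h = h * (1 - h / C)
    hs.append(h)
# every hs[T] should satisfy hs[T] <= C / (T + 4)
```

<p>The induction behind this is the standard one: if $h_t \leq C/(t+4)$ then $h_{t+1} \leq C\frac{t+3}{(t+4)^2} \leq \frac{C}{t+5}$.</p>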
<h4 id="strong-convexity">Strong Convexity</h4>
<p>Our second measure of optimality is <em>strong convexity</em>.</p>
<p class="mathcol"><strong>Definition (strong convexity).</strong> A convex function $f$ is said to be <em>$\mu$-strongly convex</em> if for all $x,y \in \mathbb R^n$ it holds:
\[
f(y) - f(x) \geq \langle \nabla f(x),y-x \rangle + \frac{\mu}{2} \norm{x-y}^2.
\]</p>
<p>The strong convexity inequality is basically the reverse inequality of smoothness and we can use an argument similar to the one we used for the progress bound. For this we choose $x \leftarrow x_t$ and $y \leftarrow x_t - \eta e_t$ with $e_t \doteq x_t - x^\esx = d_t \norm{x_t-x^\esx}$ being the unnormalized idealized direction to obtain:</p>
<script type="math/tex; mode=display">f(x_t - \eta e_t) - f(x_t) \geq - \eta \langle\nabla f(x_t),e_t\rangle + \eta^2\frac{\mu}{2} \| e_t \|^2.</script>
<p>Now we minimize the right-hand side over $\eta$ and obtain that the minimum is achieved for the choice $\eta^\esx \doteq \frac{\langle\nabla f(x_t), e_t\rangle}{\mu \norm{e_t}^2}$; this is basically the same form as the $\eta^*$ from above. Plugging this back in, we obtain</p>
<script type="math/tex; mode=display">f(x_t) - f(x_t - \eta e_t) \leq \frac{\langle\nabla f(x_t),e_t\rangle^2}{2 \mu \norm{e_t}^2},</script>
<p>and as the right-hand side is now independent of $\eta$, we can in particular choose $\eta = 1$ and obtain:</p>
<p class="mathcol"><strong>Primal Bound (strong convexity).</strong> At an iterate $x_t$ strong convexity induces a primal bound of the form:
\[
\tag{PB-SC}
f(x_t) - f(x^\esx) \leq \frac{\langle \nabla f(x_t),x_t - x^\esx \rangle^2}{2\mu \norm{x_t - x^\esx}^2}.
\]</p>
<p>Combining (PB-SC) with (IGD-Progress) we obtain:</p>
<script type="math/tex; mode=display">h_{t} - h_{t+1} \geq \frac{\langle \nabla f(x_t),x_t - x^\esx \rangle^2}{2L \norm{x_t - x^\esx}^2} \geq \frac{\mu}{L} h_t.</script>
<p class="mathcol"><strong>IGD contraction (strong convexity).</strong> Assuming strong convexity the primal gap $h_t$ contracts as:
\[
\tag{Rec-SC}
h_{t+1} \leq h_t \left(1 - \frac{\mu}{L}\right),
\]
which leads to a convergence rate after solving the recurrence of
\[
\tag{Rate-SC}
h_T \leq \left(1 - \frac{\mu}{L}\right)^T h_0 \leq e^{-\frac{\mu}{L}T}h_0.
\]
or equivalently, $h_T \leq \varepsilon$ for
\[
T \geq \frac{L}{\mu} \log \frac{h_0}{\varepsilon}.
\]</p>
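<p>A small numerical check of (Rate-SC): iterating the contraction for $T = \lceil \frac{L}{\mu}\log\frac{h_0}{\varepsilon}\rceil$ steps should bring the gap below $\varepsilon$; the parameter values below are arbitrary:</p>

```python
import math

# Check that T = ceil((L / mu) * log(h0 / eps)) iterations of the contraction
# h_{t+1} = h_t * (1 - mu / L) suffice to reach h_T <= eps.
L, mu = 10.0, 1.0
h0, eps = 5.0, 1e-6
T = math.ceil((L / mu) * math.log(h0 / eps))
h = h0
for _ in range(T):
    h *= 1 - mu / L     # (Rec-SC)
```
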
<h4 id="hölder-error-bound-heb-condition">Hölder Error Bound (HEB) Condition</h4>
<p>One might wonder whether there are rates between those induced by convexity and those induced by strong convexity. This brings us to the Hölder Error Bound (HEB) condition that interpolates smoothly between the two regimes. Here we will confine the discussion to the basics that induce the bounds that we need; for an in-depth discussion and relation to e.g., the dominated gradient property, see the <a href="/blog/research/2018/11/12/heb-conv.html">HEB post</a> in this series. Let $K^\esx$ denote the set of optimal solutions to $\min_{x \in K} f(x)$ and let $f^\esx \doteq f(x)$ for some $x \in K^\esx$.</p>
<p class="mathcol"><strong>Definition (Hölder Error Bound (HEB) condition).</strong> A convex function $f$ satisfies the <em>Hölder Error Bound (HEB) condition on $K$</em> with parameters $0 < c < \infty$ and $\theta \in [0,1]$ if for all $x \in K$ it holds:
\[
c (f(x) - f^\esx)^\theta \geq \min_{y \in K^\esx} \norm{x-y}.
\]</p>
<p>Note that in contrast to convexity and strong convexity the HEB condition is a <em>local</em> condition as can be seen from its definition. As we assume that our functions are smooth it follows $\theta \leq 1/2$ (see <a href="/blog/research/2018/11/12/heb-conv.html">HEB post</a> for details). We can now combine (HEB) for any $x^\esx \in K^\esx$ with convexity to obtain:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
f(x) - f^\esx & = f(x) - f(x^\esx) \leq \langle \nabla f(x), x - x^\esx \rangle \\
& = \frac{\langle \nabla f(x), x - x^\esx \rangle}{\norm{x - x^\esx}} \norm{x - x^\esx} \\
& \leq \frac{\langle \nabla f(x), x - x^\esx \rangle}{\norm{x - x^\esx}} c (f(x) - f^\esx)^\theta.
\end{align*} %]]></script>
<p>Via rearranging we derive:
\[
\frac{1}{c}(f(x) - f^\esx)^{1-\theta} \leq \frac{\langle \nabla f(x), x - x^\esx \rangle}{\norm{x - x^\esx}}.
\]</p>
<p class="mathcol"><strong>Primal Bound (HEB).</strong> At an iterate $x_t$ HEB induces a primal bound of the form:
\[
\tag{PB-HEB}
\frac{1}{c}(f(x_t) - f^\esx)^{1-\theta} \leq \frac{\langle \nabla f(x_t), x_t - x^\esx \rangle}{\norm{x_t - x^\esx}}
\]
for any $x^\esx \in K^\esx$.</p>
<p>Combining (PB-HEB) with (IGD-Progress) we obtain:</p>
<script type="math/tex; mode=display">h_t - h_{t+1} \geq \frac{\langle \nabla f(x_t), x_t - x^\esx\rangle^2}{2L \norm{x_t - x^\esx}^2}
\geq \frac{\left(\frac{1}{c}h_t^{1-\theta} \right)^2}{2L},</script>
<p>which can be rearranged to:</p>
<script type="math/tex; mode=display">h_{t+1} \leq h_t - \frac{\frac{1}{c^2}h_t^{2-2\theta}}{2L}
\leq h_t \left(1 - \frac{1}{2Lc^2} h_t^{1-2\theta}\right).</script>
<p class="mathcol"><strong>IGD contraction (HEB).</strong> Assuming HEB the primal gap $h_t$ contracts as:
\[
\tag{Rec-HEB}
h_{t+1} \leq h_t \left(1 - \frac{1}{2Lc^2} h_t^{1-2\theta}\right),
\]
which leads to a convergence rate after solving the recurrence of
\[
\tag{Rate-HEB}
h_T \leq
\begin{cases}
\left(1 - \frac{1}{2Lc^2}\right)^T h_0 & \theta = 1/2 \newline
O(1) \left(\frac{1}{T} \right)^\frac{1}{1-2\theta} & \text{if } \theta < 1/2
\end{cases}
\]
or equivalently for the latter case, to ensure $h_T \leq \varepsilon$ it suffices to choose $T \geq \Omega\left(\frac{1}{\varepsilon^{1 - 2\theta}}\right)$. Note that the $O(1)$ term hides the dependence on $h_0$ for simplicity of exposition.</p>
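<p>To get a feel for the sublinear regime, one can simulate (Rec-HEB) for, say, $\theta = 1/4$ and check that the gap decays like $T^{-1/(1-2\theta)} = T^{-2}$; the constants below are arbitrary:</p>

```python
# Simulate the HEB contraction h_{t+1} = h_t * (1 - h_t^(1 - 2*theta) / K)
# with K playing the role of 2 L c^2, for theta = 1/4; the predicted rate is
# h_T = O(T^(-1/(1 - 2*theta))) = O(T^-2).
theta = 0.25
K = 2.0
h = 1.0
hs = [h]
for _ in range(2000):
    h = h * (1 - h ** (1 - 2 * theta) / K)
    hs.append(h)
# the tail should decay roughly like (2K / t)^2, i.e., h_t * t^2 stays bounded
```
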
<h2 id="obtaining-known-algorithms">Obtaining known algorithms</h2>
<p>We will now derive several known algorithms and results using IGD from above. The basic task that we have to accomplish is always the same. We show that the direction $d_t$ that our algorithm under consideration takes in iteration $t$ satisfies:</p>
<p>\[
\tag{Scaling}
\frac{\langle \nabla f(x_t),d_t \rangle}{\norm{d_t}} \geq \alpha_t \frac{\langle \nabla f(x_t), x_t - x^\esx \rangle}{\norm{x_t - x^\esx}},
\]</p>
<p>for some $\alpha_t \geq 0$. The reason why we want to show (Scaling) is that, assuming that we use the optimal step length $\eta_t^\esx = \frac{\langle\nabla f(x_t),d_t\rangle}{L \norm{d_t}^2}$ from the smoothness equation, this ensures that for the progress from our step it holds:</p>
<p>\[
\tag{ProgressApprox}
h_t - h_{t+1} \geq \frac{\langle \nabla f(x_t),d_t \rangle^2}{2L\norm{d_t}^2} \geq \alpha_t^2 \frac{\langle \nabla f(x_t),x_t - x^\esx \rangle^2}{2L\norm{x_t - x^\esx}^2},
\]</p>
<p>so that we lose the approximation factor $\alpha_t^2$ in the primal progress inequality. Usually, we will see that we can compute a constant $\alpha_t = \alpha > 0$ for all $t$. This allows us to immediately apply all previous convergence bounds derived for IGD, corrected by the approximation factor $\alpha^2$ that we (might) lose now in each iteration.</p>
<p>Note that for several of the algorithms presented below accelerated variants can be obtained, so that the presented rates are not optimal; I will address this and talk about acceleration in a future post. In general, the route via IGD does not necessarily provide the sharpest constants but rather favors simplicity of exposition.</p>
<h3 id="gradient-descent">Gradient Descent</h3>
<p>We will start with the (vanilla) <em>Gradient Descent (GD)</em> algorithm in the unconstrained setting, i.e., $K = \RR^n$.</p>
<p class="mathcol"><strong>(Vanilla) Gradient Descent (GD)</strong> <br />
<em>Input:</em> Smooth convex function $f$ with first-order oracle access, initial point $x_0 \in \RR^n$. <br />
<em>Output:</em> Sequence of points $x_0, \dots, x_T$ <br />
For $t = 0, \dots, T-1$ do: <br />
$\quad x_{t+1} \leftarrow x_t - \gamma_t \nabla f(x_t)$</p>
<p>In order to show (Scaling) for $d_t \doteq \nabla f(x_t)$ consider:
\[
\tag{ScalingGD}
\frac{\langle \nabla f(x_t),\nabla f(x_t) \rangle}{\norm{\nabla f(x_t)}} = \norm{\nabla f(x_t)} \geq \frac{\langle \nabla f(x_t), x_t - x^\esx \rangle}{\norm{x_t - x^\esx}},
\]
by Cauchy-Schwarz, so that we can choose $\alpha_t = 1$ for all $t \in [T]$. In order to obtain (ProgressApprox) we pick the optimal step length $\gamma_t^\esx = \frac{\langle\nabla f(x_t),d_t\rangle}{L \norm{d_t}^2} = \frac{1}{L}$.</p>
<p>We now obtain the convergence rate by simply combining the approximation from above with the IGD convergence rates. These bounds readily follow from plugging-in and we only copy-and-paste them here for completeness.</p>
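<p>A compact sketch of vanilla GD with the fixed step length $\gamma_t = 1/L$ on a strongly convex quadratic (my own toy instance), checking the gap against the $(1-\mu/L)^T$ decay:</p>

```python
import numpy as np

# Vanilla gradient descent with the fixed step length 1/L on the quadratic
# f(x) = 0.5 x^T A x - b^T x, where L = lambda_max(A) and mu = lambda_min(A).
A = np.diag([1.0, 3.0, 8.0])
b = np.array([1.0, 1.0, 1.0])
L, mu = 8.0, 1.0
x_star = np.linalg.solve(A, b)        # unconstrained optimum

def f(x):
    return 0.5 * x @ A @ x - b @ x

x = np.zeros(3)
h0 = f(x) - f(x_star)                 # initial primal gap
T = 200
for _ in range(T):
    x = x - (1.0 / L) * (A @ x - b)   # gamma_t = 1/L
hT = f(x) - f(x_star)
```

<p>The final gap <code>hT</code> is dominated by the (GD-Rate-SC) bound $(1-\mu/L)^T h_0$.</p>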
<h4 id="general-convergence-for-smooth-functions">General convergence for smooth functions</h4>
<p>For the (general) smooth case we obtain:</p>
<p class="mathcol"><strong>GD contraction (convexity).</strong> Assuming convexity the primal gap $h_t$ contracts as:
\[
\tag{GD-Rec-C}
h_{t+1} \leq h_t \left(1 - \frac{h_t}{2L \norm{x_0 - x^\esx}^2}\right),
\]
which leads to a convergence rate after solving the recurrence of
\[
\tag{GD-Rate-C}
h_T \leq \frac{2L \norm{x_0 - x^\esx}^2}{T+4}.
\]</p>
<h4 id="linear-convergence-for-strongly-convex-functions">Linear convergence for strongly convex functions</h4>
<p>For smooth and strongly convex functions we obtain:</p>
<p class="mathcol"><strong>GD contraction (strong convexity).</strong> Assuming strong convexity the primal gap $h_t$ contracts as:
\[
\tag{GD-Rec-SC}
h_{t+1} \leq h_t \left(1 - \frac{\mu}{L}\right),
\]
which leads to a convergence rate after solving the recurrence of
\[
\tag{GD-Rate-SC}
h_T \leq \left(1 - \frac{\mu}{L}\right)^T h_0 \leq e^{-\frac{\mu}{L}T}h_0.
\]
or equivalently, $h_T \leq \varepsilon$ for
\[
T \geq \frac{L}{\mu} \log \frac{h_0}{\varepsilon}.
\]</p>
<h4 id="heb-rates">HEB rates</h4>
<p>And for smooth functions satisfying the HEB condition we obtain:</p>
<p class="mathcol"><strong>GD contraction (HEB).</strong> Assuming HEB the primal gap $h_t$ contracts as:
\[
\tag{GD-Rec-HEB}
h_{t+1} \leq h_t \left(1 - \frac{1}{2Lc^2} h_t^{1-2\theta}\right),
\]
which leads to a convergence rate after solving the recurrence of
\[
\tag{GD-Rate-HEB}
h_T \leq
\begin{cases}
\left(1 - \frac{1}{2Lc^2}\right)^T h_0 & \theta = 1/2 \newline
O(1) \left(\frac{1}{T} \right)^\frac{1}{1-2\theta} & \text{if } \theta < 1/2
\end{cases}
\]
or equivalently for the latter case, to ensure $h_T \leq \varepsilon$ it suffices to choose $T \geq \Omega\left(\frac{1}{\varepsilon^{1 - 2\theta}}\right)$. Note that the $O(1)$ term hides the dependence on $h_0$ for the simplicity of exposition.</p>
<h4 id="projected-gradient-descent">Projected Gradient Descent</h4>
<p>The route through IGD is flexible enough to also accommodate the constrained case. Now we have to project back into the feasible region $K$, and <em>Projected Gradient Descent (PGD)</em> employs a projection $\Pi_K$ that maps a point $x \in \RR^n$ back into the feasible region $K$ (note that $\Pi_K$ has to satisfy certain properties to be admissible):</p>
<p class="mathcol"><strong>Projected Gradient Descent (PGD)</strong> <br />
<em>Input:</em> Smooth convex function $f$ with first-order oracle access, initial point $x_0 \in K$. <br />
<em>Output:</em> Sequence of points $x_0, \dots, x_T$ <br />
For $t = 0, \dots, T-1$ do: <br />
$\quad x_{t+1} \leftarrow \Pi_K(x_t - \gamma_t \nabla f(x_t))$</p>
<p>Without going into details we obtain (Scaling) here in a similar way due to the properties of the projection; I might explicitly consider projection-based methods in a later post.</p>
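<p>As a concrete illustration (my own toy example, not from the references), here is PGD over the Euclidean unit ball, where the projection is simply a rescaling:</p>

```python
import numpy as np

# Projected gradient descent over the Euclidean unit ball K = {x : ||x|| <= 1}.
# The Euclidean projection onto the ball just rescales points lying outside.
def project_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

A = np.diag([2.0, 5.0])     # Hessian of f(x) = 0.5 x^T A x - b^T x
b = np.array([4.0, 0.0])    # unconstrained optimum (2, 0) lies outside K
L = 5.0                     # smoothness constant = lambda_max(A)

x = np.zeros(2)
for _ in range(500):
    x = project_ball(x - (1.0 / L) * (A @ x - b))   # step with 1/L, then project
# the constrained optimum x* = (1, 0) sits on the boundary of the ball
```
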
<h3 id="frank-wolfe-variants">Frank-Wolfe Variants</h3>
<p>We will now discuss how Frank-Wolfe Variants fit into the IGD framework laid out above. For this, in addition to the first-order access to the function $f$ we now need to specify access to the feasible region $K$, which will be through a <em>linear programming oracle</em>:</p>
<p class="mathcol"><strong>Linear Programming oracle</strong> <br />
<em>Input:</em> $c \in \mathbb R^n$ <br />
<em>Output:</em> $\arg\min_{x \in K} \langle c, x \rangle$</p>
<p>With this we can formulate the (vanilla) Frank-Wolfe algorithm:</p>
<p class="mathcol"><strong>Frank-Wolfe Algorithm [FW]</strong> <br />
<em>Input:</em> Smooth convex function $f$ with first-order oracle access, feasible region $K$ with linear optimization oracle access, initial point (usually a vertex) $x_0 \in K$. <br />
<em>Output:</em> Sequence of points $x_0, \dots, x_T$ <br />
For $t = 0, \dots, T-1$ do: <br />
$\quad v_t \leftarrow \arg\min_{x \in K} \langle \nabla f(x_{t}), x \rangle$ <br />
$\quad x_{t+1} \leftarrow (1-\eta_t) x_t + \eta_t v_t$</p>
<p>The Frank-Wolfe algorithm [FW] (also known as Conditional Gradients [CG]) has many advantages with its projection-freeness being one of the most important; see <a href="/blog/research/2018/10/05/cheatsheet-fw.html">Cheat Sheet: Frank-Wolfe and Conditional Gradients</a> for an in-depth discussion.</p>
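<p>A minimal sketch of the vanilla Frank-Wolfe algorithm over the probability simplex, where the linear programming oracle just picks the coordinate with the smallest gradient entry; note that I use the common agnostic step size $\eta_t = 2/(t+2)$ here rather than the smoothness step length from the analysis:</p>

```python
import numpy as np

# Vanilla Frank-Wolfe over the probability simplex. The LP oracle over the
# simplex returns the vertex e_i with i = argmin_i of the gradient.
A = np.diag([1.0, 2.0, 4.0])             # L = 4
x_target = np.array([0.5, 0.3, 0.2])     # optimum; lies inside the simplex

def f(x):
    return 0.5 * (x - x_target) @ A @ (x - x_target)

def grad(x):
    return A @ (x - x_target)

x = np.array([1.0, 0.0, 0.0])            # start at a vertex
for t in range(5000):
    v = np.zeros(3)
    v[np.argmin(grad(x))] = 1.0          # linear programming oracle
    eta = 2.0 / (t + 2)                  # common agnostic step size
    x = (1 - eta) * x + eta * v          # convex combination keeps feasibility
```

<p>Since $f^\esx = 0$, the value $f(x)$ after $T$ iterations is itself the primal gap and obeys the $O(1/T)$ rate.</p>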
<p>Before we continue we need to address a small technicality: in the argumentation so far we did not have any restriction on the step length $\eta$. However, in the case of Frank-Wolfe, as we are forming convex combinations, we need $0\leq \eta \leq 1$ to ensure feasibility. Formally, we would have to distinguish two cases, namely $\eta^\esx = \frac{\langle \nabla f(x_t), x_t - v_t\rangle}{L \norm{x_t - v_t}^2} \geq 1$ and $\eta^\esx < 1$; note that we always have nonnegativity as $\langle \nabla f(x_t), x_t - v_t\rangle \geq 0$. We will purposefully disregard the former case, because in this regime we have linear convergence (the best we can hope for) anyway, and as such it is really the iterations with $\eta^\esx < 1$ that determine the convergence rate. We briefly provide a proof of linear convergence when $\eta^\esx \geq 1$, in which case we simply choose $\eta \doteq 1$ so that $x_{t+1} = v_t$; moreover, we will also establish that this case typically only happens once. By smoothness and using that in this case $\langle \nabla f(x_t), x_t - v_t\rangle \geq L \norm{x_t - v_t}^2$ holds, we have:</p>
<script type="math/tex; mode=display">% <![CDATA[
\tag{LongStep}
\begin{align*}
\underbrace{f(x_{t}) - f(x_{t+1})}_{\text{primal progress}} & \geq \langle\nabla f(x_t),x_t - v_t\rangle - \frac{L}{2} \norm{x_t - v_t}^2 \newline
& \geq \frac{1}{2} \langle\nabla f(x_t),x_t - v_t\rangle \newline
& \geq \frac{1}{2} h_t,
\end{align*} %]]></script>
<p>so that in this regime we contract as</p>
<script type="math/tex; mode=display">h_{t+1} \leq \frac{1}{2} h_t</script>
<p>This can happen only for a logarithmic number of steps until $\eta^\esx < 1$ holds. Moreover, after <em>one step</em> it is guaranteed that $h_1 \leq \frac{LD^2}{2}$: We start from (LongStep), however we estimate differently; in the following and further below let $D$ denote the diameter of $K$ with respect to $\norm{\cdot}$:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\underbrace{f(x_{0}) - f(x_{1})}_{\text{primal progress}} & \geq \langle\nabla f(x_0),x_0 - v_0\rangle - \frac{L}{2} \norm{x_0 - v_0}^2 \newline
& \geq h_0 - \frac{LD^2}{2}.
\end{align*} %]]></script>
<p>Therefore we have for $h_1$:</p>
<script type="math/tex; mode=display">h_1 \leq h_0 - \left(h_0 - \frac{LD^2}{2}\right) \leq \frac{LD^2}{2}.</script>
<h4 id="convergence-for-smooth-convex-functions">Convergence for Smooth Convex Functions</h4>
<p>We will now first establish the convergence rate in the (general) smooth case. For this it suffices to observe that:</p>
<script type="math/tex; mode=display">\langle\nabla f(x_t),x_t - v_t\rangle \geq \langle\nabla f(x_t),x_t - x^\esx\rangle \geq 0,</script>
<p>as $v_t = \arg\min_{x \in K} \langle \nabla f(x_{t}), x \rangle$ and we can rearrange this to:</p>
<script type="math/tex; mode=display">\tag{ScalingFW}
\frac{\langle\nabla f(x_t),x_t - v_t\rangle}{\norm{x_t - v_t}} \geq \frac{\norm{x_t - x^\esx}}{D} \cdot \frac{\langle\nabla f(x_t),x_t - x^\esx\rangle}{\norm{x_t - x^\esx}} \geq 0,</script>
<p>so that the progress per iteration, with $\alpha_t = \frac{\norm{x_t - x^\esx}}{D}$, can be lower bounded by:</p>
<script type="math/tex; mode=display">% <![CDATA[
\tag{ProgressApproxFW}
\begin{align*}
h_t - h_{t+1} & \geq \frac{\langle \nabla f(x_t),x_t - v_t \rangle^2}{2L\norm{x_t - v_t}^2} \\
& \geq \alpha_t^2 \frac{\langle \nabla f(x_t),x_t - x^\esx \rangle^2}{2L\norm{x_t - x^\esx}^2} \\
& \geq \frac{\langle \nabla f(x_t),x_t - x^\esx \rangle^2}{2LD^2}.
\end{align*} %]]></script>
<p>We obtain for the (general) smooth case:</p>
<p class="mathcol"><strong>FW contraction (convexity).</strong> Assuming convexity the primal gap $h_t$ contracts as:
\[
\tag{FW-Rec-C}
h_{t+1} \leq h_t \left(1 - \frac{h_t}{2L D^2}\right),
\]
which leads to a convergence rate after solving the recurrence of
\[
\tag{FW-Rate-C}
h_T \leq \frac{2L D^2}{T+4}.
\]</p>
<h4 id="linear-convergence-for-xesx-in-relative-interior">Linear convergence for $x^\esx$ in relative interior</h4>
<p>Next, we will demonstrate that if $x^\esx$ lies in the relative interior of $K$, then already the vanilla Frank-Wolfe algorithm achieves linear convergence when $f$ is strongly convex. For this we use the following lemma proven in [GM]:</p>
<p class="mathcol"><strong>Lemma [GM].</strong> If $x^\esx$ is contained $2r$-deep in the relative interior of $K$, i.e., $B(x^\esx,2r) \cap \operatorname{aff}(K) \subseteq K$ for some $r > 0$, then there exists some $t’$ so that for all $t\geq t’$ it holds
\[
\frac{\langle \nabla f(x_t),x_t - v_t\rangle}{\norm{x_t - v_t}} \geq \frac{r}{D} \norm{\nabla f(x_t)},
\]
where $v_t = \arg\min_{x \in K} \langle \nabla f(x_{t}), x \rangle$ as above.</p>
<p>In the above $t’$ is the iteration from where onwards it holds $\norm{x_t - x^\esx} \leq r$ for all $t \geq t’$. The lemma establishes (Scaling) with $\alpha_t \doteq \frac{r}{D}$:</p>
<script type="math/tex; mode=display">\tag{ScalingFWint}
\frac{\langle \nabla f(x_t),x_t - v\rangle}{\norm{x_t - v}} \geq \frac{r}{D} \norm{\nabla f(x_t)} \geq \frac{r}{D} \frac{\langle\nabla f(x_t),x_t - x^\esx\rangle}{\norm{x_t - x^\esx}}.</script>
<p>Plugging this into the formula for strongly convex functions and ignoring the initial burn-in phase until we reach $t’$ we obtain:</p>
<p class="mathcol"><strong>FW contraction (strong convexity and $x^\esx$ in rel.int).</strong> Assuming strong convexity of $f$ and $x^\esx$ being in the relative interior of $K$ with depth $2r$, the primal gap $h_t$ contracts as:
\[
\tag{Rec-SC-Int}
h_{t+1} \leq h_t \left(1 - \frac{r^2}{D^2} \frac{\mu}{L}\right),
\]
which leads to a convergence rate after solving the recurrence of
\[
\tag{Rate-SC-Int}
h_T \leq \left(1 - \frac{r^2}{D^2} \frac{\mu}{L}\right)^T h_0 \leq e^{-\frac{r^2}{D^2} \frac{\mu}{L}T}h_0.
\]
or equivalently, $h_T \leq \varepsilon$ for
\[
T \geq \frac{D^2}{r^2} \frac{L}{\mu} \log \frac{h_0}{\varepsilon}.
\]</p>
<p>Note that it is fine to ignore the burn-in phase before we reach $t’$: consider a family of functions with optima $x^\esx$ being $2r$-deep in the relative interior of $K$, smoothness parameter $L$, and strong convexity parameter $\mu$. Using $\nabla f(x^\esx) = 0$ and strong convexity, the condition $\norm{x_t - x^\esx} \leq r$ is implied by $\norm{x_t-x^\esx}^2 \leq \frac{2}{\mu} h_t \leq r^2$, i.e., it suffices to ensure $h_t \leq \frac{\mu}{2} r^2$, which is satisfied after at most $O(\frac{4 LD^2}{\mu r^2})$ iterations. This is a constant for any family satisfying those parameters, so that the asymptotic rate from above is achieved.</p>
<p>The result from above is the best we can hope for using the vanilla Frank-Wolfe algorithm. In particular, if $x^\esx$ is on the boundary linear convergence for strongly convex functions cannot be achieved in general with the vanilla Frank-Wolfe algorithm. Rather it requires a modification of the Frank-Wolfe algorithm that we will discuss further below. For more details, and in particular the lower bound for the case with $x^\esx$ being on the boundary, see <a href="/blog/research/2018/10/19/cheatsheet-fw-lin-conv.html">Cheat Sheet: Linear convergence for Conditional Gradients</a>.</p>
<h4 id="improved-convergence-for-strongly-convex-feasible-regions">Improved convergence for strongly convex feasible regions</h4>
<p>We will now show that if the feasible region $K$ is strongly convex and the function $f$ is strongly convex, then we can also improve over the standard $O(1/t)$ convergence rate of conditional gradients; however, it is not known (to the best of my knowledge) whether we can achieve linear convergence in that case. Note that we make no assumption here about the location of $x^\esx$. The original result is due to [GH]; however, the exposition here differs to fit into our IGD framework.</p>
<p>Before we continue, we need to briefly recall <em>strong convexity of a set</em>:</p>
<p class="mathcol"><strong>Definition (Strongly convex set).</strong> A convex set $K$ is <em>$\alpha$-strongly convex</em> with respect to $\norm{\cdot}$ if for any $x,y \in K$, $\gamma \in [0,1]$, and $z \in \RR^n$ with $\norm{z} = 1$ it holds:
\[
\gamma x + (1-\gamma) y + \gamma(1-\gamma)\frac{\alpha}{2}\norm{x-y}^2z \in K.
\]</p>
<p>So what this really means is that if you take the line segment between two points, then around any point on that segment you can squeeze a ball into $K$, where the radius depends on where you are on the segment. We will apply the above definition to the midpoint of $x$ and $y$, so that the definition ensures that for any $x,y \in K$</p>
<script type="math/tex; mode=display">\tag{SCmidpoint}
\frac{1}{2} (x + y) + \frac{\alpha}{8}\norm{x-y}^2z \in K,</script>
<p>where $z$ is a norm-$1$ direction, as shown in the following graphic:</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/scbody.png" alt="Strongly Convex body" /></p>
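<p>For concreteness, the Euclidean unit ball is $1$-strongly convex with respect to the definition above (a standard fact), and the midpoint condition (SCmidpoint) can be checked numerically:</p>

```python
import numpy as np

# Check the midpoint condition (SCmidpoint) numerically for the Euclidean
# unit ball, which is 1-strongly convex: for any x, y in the ball and any
# unit-norm z, the point (x + y)/2 + (1/8) * ||x - y||^2 * z stays in the ball.
rng = np.random.default_rng(1)
max_norm = 0.0
for _ in range(1000):
    x = rng.standard_normal(3)
    x /= max(1.0, np.linalg.norm(x))   # pull the sample into the unit ball
    y = rng.standard_normal(3)
    y /= max(1.0, np.linalg.norm(y))
    z = rng.standard_normal(3)
    z /= np.linalg.norm(z)             # unit direction
    m = 0.5 * (x + y) + 0.125 * np.linalg.norm(x - y) ** 2 * z
    max_norm = max(max_norm, float(np.linalg.norm(m)))
# max_norm should never exceed 1 (up to floating-point error)
```
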
<p>With this we can easily establish the following variant of (Scaling):</p>
<p class="mathcol"><strong>Lemma (Scaling for Strongly Convex Body (SCB)).</strong> Let $K$ be a strongly convex set with parameter $\alpha$. Then it holds:
\[
\tag{ScalingSCB}
\frac{\langle \nabla f(x_t), x_t - v_t \rangle}{\norm{x_t - v_t}^2} \geq \frac{\alpha}{4} \norm{\nabla f(x_t)},
\]
where $v_t$ is the Frank-Wolfe point from the algorithm.</p>
<p><em>Proof.</em>
Let $m \doteq \frac{1}{2} (x_t + v_t) + \frac{\alpha}{8}\norm{x_t-v_t}^2w$, where $w = \arg\min_{w \in \RR^n, \norm{w} = 1} \langle \nabla f(x_t), w \rangle$, so that $m \in K$ by strong convexity of $K$. Note that $\langle \nabla f(x_t), w \rangle = - \norm{\nabla f(x_t)}$. Now we have:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
\langle \nabla f(x_t), x_t - v_t \rangle & \geq \langle \nabla f(x_t), x_t - m \rangle \\
& = \frac{1}{2} \langle \nabla f(x_t), x_t - v_t \rangle - \frac{\alpha}{8} \norm{x_t - v_t}^2 \langle \nabla f(x_t), w \rangle \\
& = \frac{1}{2} \langle \nabla f(x_t), x_t - v_t \rangle + \frac{\alpha}{8} \norm{x_t - v_t}^2 \norm{\nabla f(x_t)},
\end{align*} %]]></script>
<p>where the first inequality follows from the optimality of the Frank-Wolfe point. From this the statement follows by simply rearranging.
$\qed$</p>
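The lemma can be sanity-checked numerically on the unit Euclidean ball (which is $1$-strongly convex), where the Frank-Wolfe point has the closed form $v = -\nabla f(x)/\norm{\nabla f(x)}$. The quadratic objective below is an arbitrary illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 5, 1.0            # the unit Euclidean ball is 1-strongly convex

b = rng.normal(size=n) * 2.0
grad = lambda x: x - b       # gradient of f(x) = 1/2 ||x - b||^2

for _ in range(10_000):
    x = rng.normal(size=n)
    x *= rng.random() / np.linalg.norm(x)     # random point in the unit ball
    g = grad(x)
    v = -g / np.linalg.norm(g)                # FW vertex: argmin_{||v|| <= 1} <g, v>
    if np.linalg.norm(x - v) < 1e-9:
        continue                              # x already is the FW point
    lhs = g @ (x - v) / np.linalg.norm(x - v) ** 2
    rhs = (alpha / 4) * np.linalg.norm(g)
    assert lhs >= rhs - 1e-9, "(ScalingSCB) violated"
print("(ScalingSCB) verified on random samples")
```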
<p>This lemma is very much in the spirit of the proof of [GM] for $x^\esx$ in the relative interior of $K$. However, the bound of [GM] is stronger: (ScalingSCB) is not exactly what we need, as we are missing a square around the scalar product in the numerator. This seems subtle, but it is actually the reason why we do not obtain linear convergence by straightforward plugging-in. In fact, we have to conclude the convergence rate in this case slightly differently, by “mixing” the bound from (standard) convexity and (ScalingSCB). Observe that we have <em>not</em> used strong convexity of $f$ yet. Our starting point is the progress inequality from smoothness for the Frank-Wolfe direction $d = x_t - v_t$, and we continue as follows:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{align*}
f(x_t) - f(x_{t+1}) & \geq \frac{\langle \nabla f(x_t), x_t - v_t \rangle^2}{2L \norm{x_t - v_t}^2} \\
& = \langle \nabla f(x_t), x_t - v_t \rangle \cdot \frac{\langle \nabla f(x_t), x_t - v_t \rangle}{2L \norm{x_t - v_t}^2} \\
& \geq h_t \cdot \frac{\alpha}{8L} \norm{\nabla f(x_t)}.
\end{align*} %]]></script>
<p>This leads to a contraction of the form:</p>
<script type="math/tex; mode=display">\tag{Rec-SCB-C}
h_{t+1} \leq h_t (1- \frac{\alpha}{8L}\norm{\nabla f(x_t)}),</script>
<p>and together with strong convexity that ensures</p>
<script type="math/tex; mode=display">h_t \leq \frac{\norm{\nabla f(x_t)}^2}{2\mu}</script>
<p>we get:</p>
<p class="mathcol"><strong>FW contraction (strong convexity and strongly convex body).</strong> Assuming strong convexity of $f$ and $K$ is a strongly convex set with parameter $\alpha$, the primal gap $h_t$ contracts as:
\[
\tag{Rec-SC-SCB}
h_{t+1} \leq h_t \left(1 - \frac{\alpha}{8L}\sqrt{2\mu h_t}\right),
\]
which leads to a convergence rate after solving the recurrence of
\[
\tag{Rate-SC-SCB}
h_T \leq O\left(1/T^2\right),
\]
where the $O(.)$ term hides the dependency on the parameters $L$, $\mu$, and $\alpha$.</p>
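To see where the $O(1/T^2)$ rate comes from, one can iterate (Rec-SC-SCB) numerically and compare with the bound $h_t \leq (h_0^{-1/2} + \kappa t/2)^{-2}$ obtained from the associated ODE $h' = -\kappa h^{3/2}$, where $\kappa = \alpha\sqrt{2\mu}/(8L)$; the constants below are arbitrary illustration values.

```python
import math

L_, mu, alpha, h0, T = 1.0, 1.0, 0.8, 1.0, 10_000
kappa = alpha * math.sqrt(2 * mu) / (8 * L_)   # contraction constant in (Rec-SC-SCB)

h = h0
for t in range(1, T + 1):
    h = h * (1 - kappa * math.sqrt(h))         # h_{t+1} = h_t (1 - (alpha/(8L)) sqrt(2 mu h_t))
    ode_bound = (h0 ** -0.5 + kappa * t / 2) ** -2
    assert h <= ode_bound + 1e-15              # discrete recurrence stays below the ODE solution

print(f"h_T = {h:.3e}, O(1/T^2) envelope (2/(kappa T))^2 = {(2 / (kappa * T)) ** 2:.3e}")
```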
<h4 id="linear-convergence-for-normnabla-fx--c">Linear convergence for $\norm{\nabla f(x)} > c$</h4>
<p>As mentioned above, (Rec-SCB-C) does not make any assumptions regarding strong convexity of the function, and in fact we can use this contraction to obtain linear convergence over strongly convex bodies whenever the <em>lower-bounded gradient assumption</em> holds, i.e., for all $x \in K$ we require $\norm{\nabla f(x)} \geq c > 0$. With this, (Rec-SCB-C) immediately implies:</p>
<p class="mathcol"><strong>FW contraction (strongly convex body and lower-bounded gradient).</strong> Assuming strong convexity of $K$ and $\norm{\nabla f(x)} \geq c > 0$ for all $x \in K$:
\[
\tag{Rec-SCB-LBG}
h_{t+1} \leq h_t (1- \frac{\alpha c}{8L}),
\]
which leads to a convergence rate after solving the recurrence of
\[
\tag{Rate-SCB-LBG}
h_T \leq \left(1 - \frac{\alpha c}{8L}\right)^T h_0 \leq e^{-\frac{\alpha c}{8L}T}h_0,
\]
or equivalently, $h_T \leq \varepsilon$ for
\[
T \geq \frac{8L}{\alpha c}\log \frac{h_0}{\varepsilon}.
\]</p>
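The iteration count in (Rate-SCB-LBG) can be checked mechanically; the parameter values below are arbitrary.

```python
import math

L_, alpha, c, h0, eps = 2.0, 0.5, 0.1, 1.0, 1e-6
rho = alpha * c / (8 * L_)                   # per-step contraction factor
T = math.ceil((1 / rho) * math.log(h0 / eps))

h = h0 * (1 - rho) ** T                      # exact geometric decay after T steps
assert h <= h0 * math.exp(-rho * T) <= eps   # (1 - rho)^T <= e^{-rho T} <= eps/h0
print(f"rho = {rho:.5f}, required T = {T}, h_T <= {h:.2e}")
```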
<h4 id="linear-convergence-over-polytopes">Linear convergence over polytopes</h4>
<p>Next up is linear convergence of Frank-Wolfe over polytopes for strongly convex functions. First of all, it is important to note that the vanilla Frank-Wolfe algorithm <em>cannot</em> achieve linear convergence in general in this case; see <a href="/blog/research/2018/10/19/cheatsheet-fw-lin-conv.html">Cheat Sheet: Linear convergence for Conditional Gradients</a> for details. Rather, we need to consider a modification of the Frank-Wolfe algorithm that introduces so-called <em>away steps</em>, which basically add additional feasible directions to the Frank-Wolfe algorithm. Here we will only provide a very compressed discussion and we refer the interested reader to <a href="/blog/research/2018/10/19/cheatsheet-fw-lin-conv.html">Cheat Sheet: Linear convergence for Conditional Gradients</a> for more details. Let us first recall the <em>Away-step Frank-Wolfe Algorithm</em>:</p>
<p class="mathcol"><strong>Away-step Frank-Wolfe (AFW) Algorithm [W]</strong> <br />
<em>Input:</em> Smooth convex function $f$ with first-order oracle access, feasible region $K$ with linear optimization oracle access, initial vertex $x_0 \in K$ and initial active set $S_0 = \setb{x_0}$. <br />
<em>Output:</em> Sequence of points $x_0, \dots, x_T$ <br />
For $t = 0, \dots, T-1$ do: <br />
$\quad v_t \leftarrow \arg\min_{x \in K} \langle \nabla f(x_{t}), x \rangle \quad \setb{\text{FW direction}}$ <br />
$\quad a_t \leftarrow \arg\max_{x \in S_t} \langle \nabla f(x_{t}), x \rangle \quad \setb{\text{Away direction}}$ <br />
$\quad$ If $\langle \nabla f(x_{t}), x_t - v_t \rangle > \langle \nabla f(x_{t}), a_t - x_t \rangle: \quad \setb{\text{FW vs. Away}}$<br />
$\quad \quad x_{t+1} \leftarrow (1-\gamma_t) x_t + \gamma_t v_t$ with $\gamma_t \in [0,1]$ $\quad \setb{\text{Perform FW step}}$ <br />
$\quad$ Else: <br />
$\quad \quad x_{t+1} \leftarrow (1+\gamma_t) x_t - \gamma_t a_t$ with $\gamma_t \in [0,\frac{\lambda_{a_t}}{1-\lambda_{a_t}}]$ $\quad \setb{\text{Perform Away step}}$ <br />
$\quad S_{t+1} \leftarrow \operatorname{ActiveSet}(x_{t+1})$</p>
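For illustration, here is a compact Python sketch of AFW for the special case where $K$ is the probability simplex: the vertices are the unit vectors $e_i$, so the iterate doubles as its own active-set weight vector and the active set reduces to reading off the support. The exact line search is hard-coded for a quadratic with identity Hessian; this is a toy instantiation, not the general algorithm.

```python
import numpy as np

def afw_simplex(grad, x0, iters=500, tol=1e-12):
    """Away-step Frank-Wolfe over the probability simplex.

    Vertices are the unit vectors e_i, so the iterate x doubles as its own
    active-set weight vector.  The exact line search below assumes a quadratic
    objective with identity Hessian, f(x) = 1/2 ||x - b||^2; for other
    objectives substitute an appropriate step-size rule.
    """
    x = x0.astype(float).copy()
    for _ in range(iters):
        g = grad(x)
        i_fw = int(np.argmin(g))                 # FW vertex e_{i_fw}
        supp = np.where(x > tol)[0]              # active set S_t
        i_aw = int(supp[np.argmax(g[supp])])     # away vertex e_{i_aw}
        fw_gap = g @ x - g[i_fw]                 # <grad, x_t - v_t>
        aw_gap = g[i_aw] - g @ x                 # <grad, a_t - x_t>
        if fw_gap <= tol and aw_gap <= tol:
            break
        if fw_gap >= aw_gap:                     # FW step
            d = -x.copy(); d[i_fw] += 1.0        # d = v_t - x_t
            gamma_max = 1.0
        else:                                    # away step
            d = x.copy(); d[i_aw] -= 1.0         # d = x_t - a_t
            gamma_max = x[i_aw] / (1.0 - x[i_aw])
        # exact line search for f(x) = 1/2 ||x - b||^2, clipped to [0, gamma_max]
        gamma = min(max(-(g @ d) / (d @ d), 0.0), gamma_max)
        x = x + gamma * d
    return x

# Demo: minimizing 1/2 ||x - b||^2 over the simplex projects b onto the simplex.
b = np.array([0.5, 0.4, 0.3, -0.2])
x = afw_simplex(lambda x: x - b, np.array([1.0, 0.0, 0.0, 0.0]))
print(x)   # close to the Euclidean projection of b onto the simplex
```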
<p>The important term here is $\langle \nabla f(x_{t}), a_t - v_t \rangle$, which we refer to as the <em>strong Wolfe gap</em>; the name will become apparent in a few minutes. First, however, observe that whichever step is taken, at least one of the two directions has to recover $1/2$ of $\langle \nabla f(x_{t}), a_t - v_t \rangle$, i.e., either</p>
<script type="math/tex; mode=display">\langle \nabla f(x_{t}), x_t - v_t \rangle \geq 1/2 \ \langle \nabla f(x_{t}), a_t - v_t \rangle</script>
<p>or</p>
<script type="math/tex; mode=display">\langle \nabla f(x_{t}), a_t - x_t \rangle \geq 1/2 \ \langle \nabla f(x_{t}), a_t - v_t \rangle.</script>
<p>Why? If not, simply add up both inequalities and you end up with a contradiction. It is easy to see that $\langle \nabla f(x_{t}), x_t - v_t \rangle \leq \langle \nabla f(x_{t}), a_t - v_t \rangle$, so at first one may think of the strong Wolfe gap as being <em>weaker</em> than the Wolfe gap. However, what Lacoste-Julien and Jaggi showed in [LJ] is that, <em>in the case of $K$ being a polytope</em>, there exists the magic scalar $\alpha_t$ that we have been using before for (Scaling) relative to the strong Wolfe gap $\langle \nabla f(x_{t}), a_t - v_t \rangle$. More precisely, they showed the existence of a geometric constant $w(K)$, the so-called <em>pyramidal width</em>, that <em>only</em> depends on the polytope $K$, so that</p>
<script type="math/tex; mode=display">\tag{ScalingAFW}
\langle \nabla f(x_{t}), a_t - v_t \rangle \geq w(K) \frac{\langle \nabla f(x_t), x_t - x^\esx \rangle}{\norm{x_t - x^\esx}}.</script>
<p>Note that the missing normalization term $\norm{a_t - v_t}$ can be absorbed in various ways if the feasible region is bounded; e.g., we can simply replace it by the diameter and absorb it into $w(K)$, or use the affine-invariant definition of curvature. Now it also becomes clear why the name <em>strong Wolfe gap</em> makes sense for $\langle \nabla f(x_{t}), a_t - v_t \rangle$: we can combine (ScalingAFW) with the strong convexity of $f$ and obtain:</p>
<script type="math/tex; mode=display">h_t \leq \frac{\langle \nabla f(x_{t}), a_t - v_t \rangle^2}{2 \mu w(K)^2},</script>
<p>i.e., we obtain a strong upper bound on the primal gap $h_t$, similar in spirit to the bound induced by strong convexity. Similarly, combining (ScalingAFW) with our IGD arguments, we immediately obtain:</p>
<p class="mathcol"><strong>AFW contraction (strong convexity and $K$ polytope).</strong> Assuming strong convexity of $f$ and $K$ being a polytope, the primal gap $h_t$ contracts as:
\[
\tag{Rec-AFW-SC}
h_{t+1} \leq h_t \left(1 - \frac{\mu}{L} \frac{w(K)^2}{D^2} \right),
\]
where $D$ is the diameter of $K$ (arising from bounding $\norm{a_t - v_t}$), which leads to a convergence rate after solving the recurrence of
\[
\tag{Rate-AFW-SC}
h_T \leq \left(1 - \frac{\mu}{L} \frac{w(K)^2}{D^2}\right)^T h_0 \leq e^{-\frac{\mu}{L} \frac{w(K)^2}{D^2}T}h_0,
\]
or equivalently, $h_T \leq \varepsilon$ for
\[
T \geq \frac{D^2 L}{w(K)^2\mu} \log \frac{h_0}{\varepsilon}.
\]</p>
<p>On a final note for this section, the reason why we need to assume that $K$ is a polytope is that $w(K)$ can tend to zero for general convex bodies, so that no reasonable bound can be obtained; in fact, $w(K)$ is a minimum over certain subsets of vertices, and this list is only finite in the polyhedral case.</p>
<h4 id="heb-rates-1">HEB rates</h4>
<p>We can also further combine (ScalingAFW) with the HEB condition to obtain HEB rates for a variant of AFW that employs restarts. This follows exactly the template from the section before, relying on (ScalingAFW); we thus skip it here and refer the interested reader to <a href="/blog/research/2018/11/12/heb-conv.html">Cheat Sheet: Hölder Error Bounds (HEB) for Conditional Gradients</a>, where we provide a full derivation including the restart variant of AFW.</p>
<h4 id="a-note-on-affine-invariant-constants">A note on affine-invariant constants</h4>
<p>Note that the Frank-Wolfe algorithm and its variants can be formulated as affine-invariant algorithms, while I purposefully opted for an affine-variant exposition. While the affine-invariant versions are certainly nicer from a theoretical perspective (basically $LD^2$ is replaced by a much sharper quantity $C$), from a practical perspective, when we actually have to choose step lengths, the affine-variant versions often perform much better. To see this, let us compare the <em>affine-invariant progress bound</em></p>
<script type="math/tex; mode=display">\tag{ProgressAI}
f(x_t) - f(x_{t+1}) \geq \frac{\langle\nabla f(x_t),d\rangle^2}{2C},</script>
<p>with optimal choice $\eta^\esx_{AI} \doteq \frac{\langle\nabla f(x_t),d\rangle}{C}$, versus the <em>affine-variant progress bound</em></p>
<script type="math/tex; mode=display">\tag{ProgressAV}
f(x_t) - f(x_{t+1}) \geq \frac{\langle\nabla f(x_t),d\rangle^2}{2L \norm{d}^2},</script>
<p>with optimal choice $\eta_{AV}^\esx \doteq \frac{\langle\nabla f(x_t),d\rangle}{L \norm{d}^2}$.</p>
<p>Combining the two, we have</p>
<script type="math/tex; mode=display">\frac{\eta_{AV}^\esx}{\eta_{AI}^\esx} = \frac{C}{L \norm{d}^2},</script>
<p>and in particular, when $\norm{d}^2$ is small, $\eta_{AV}^\esx$ gets larger and we make longer steps. While this is not important for the theoretical analysis, it does make a difference in actual implementations, as has been observed before, e.g., by [PANJ]:</p>
<blockquote>
<p>We also note that this algorithm is not affine invariant, i.e., the iterates are not invariant by affine transformations of the variable, as is the case for some FW variants [J]. It is possible to derive a similar affine invariant algorithm by replacing $L_td_t^2$ by $C_t$ in Line 6 and (1), and estimate $C_t$ instead of $L_t$. However, we have found that this variant performs empirically worse than AdaFW and did not consider it further.</p>
</blockquote>
<h3 id="references">References</h3>
<p>[CG] Levitin, E. S., & Polyak, B. T. (1966). Constrained minimization methods. Zhurnal Vychislitel’noi Matematiki i Matematicheskoi Fiziki, 6(5), 787-823. <a href="http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=zvmmf&paperid=7415&option_lang=eng">pdf</a></p>
<p>[FW] Frank, M., & Wolfe, P. (1956). An algorithm for quadratic programming. Naval research logistics quarterly, 3(1‐2), 95-110. <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/nav.3800030109">pdf</a></p>
<p>[GM] Guélat, J., & Marcotte, P. (1986). Some comments on Wolfe’s ‘away step’. Mathematical Programming, 35(1), 110-119. <a href="https://link.springer.com/content/pdf/10.1007/BF01589445.pdf">pdf</a></p>
<p>[GH] Garber, D., & Hazan, E. (2014). Faster rates for the frank-wolfe method over strongly-convex sets. arXiv preprint arXiv:1406.1305. <a href="http://proceedings.mlr.press/v37/garbera15-supp.pdf">pdf</a></p>
<p>[W] Wolfe, P. (1970). Convergence theory in nonlinear programming. Integer and nonlinear programming, 1-36.</p>
<p>[LJ] Lacoste-Julien, S., & Jaggi, M. (2015). On the global linear convergence of Frank-Wolfe optimization variants. In Advances in Neural Information Processing Systems (pp. 496-504). <a href="http://papers.nips.cc/paper/5925-on-the-global-linear-convergence-of-frank-wolfe-optimization-variants.pdf">pdf</a></p>
<p>[PANJ] Pedregosa, F., Askari, A., Negiar, G., & Jaggi, M. (2018). Step-Size Adaptivity in Projection-Free Optimization. arXiv preprint arXiv:1806.05123. <a href="https://arxiv.org/abs/1806.05123">pdf</a></p>
<p><br /></p>
<h4 id="changelog">Changelog</h4>
<p>05/15/2019: Fixed several typos and added clarifications as pointed out by Matthieu Bloch.</p>
<p><em>Sebastian Pokutta</em></p>
<p><strong>Toolchain Tuesday No. 4</strong> (2018-12-04): <a href="http://www.pokutta.com/blog/random/2018/12/04/toolchain-4">http://www.pokutta.com/blog/random/2018/12/04/toolchain-4</a></p>
<p><em>TL;DR: Part of a series of posts about tools, services, and packages that I use in day-to-day operations to boost efficiency and free up time for the things that really matter. Use at your own risk - happy to answer questions. For the full, continuously expanding list so far see <a href="/blog/pages/toolchain.html">here</a>.</em>
<!--more--></p>
<p>This is the fourth installment of a series of posts; the <a href="/blog/pages/toolchain.html">full list</a> is expanding over time. This time around it will be about <code class="highlighter-rouge">git</code>, which enables version control and distributed, asynchronous collaboration. <code class="highlighter-rouge">Git</code> is probably the single most useful tool in my workflow.</p>
<h2 id="software">Software:</h2>
<h3 id="git">Git</h3>
<p>Decentralized version control for coding, latex documents, and much more.</p>
<p><em>Learning curve: ⭐️⭐️⭐️⭐️</em>
<em>Usefulness: ⭐️⭐️⭐️⭐️⭐️</em> <br />
<em>Site: <a href="https://github.com/git/git">https://github.com/git/git</a></em></p>
<p><code class="highlighter-rouge">Git</code> is the single most useful tool in my whole workflow. Think of it as the operating system that underlies almost everything. Basically everything from writing papers, coding, all my markdown documents, and even my <code class="highlighter-rouge">Jekyll</code>-driven sites are managed in a <code class="highlighter-rouge">git</code> repository. So what is <code class="highlighter-rouge">git</code>? From <a href="https://en.wikipedia.org/wiki/Git">[wikipedia]</a>:</p>
<blockquote>
<p>Git (/ɡɪt/) is a version-control system for tracking changes in computer files and coordinating work on those files among multiple people. It is primarily used for source-code management in software development, but it can be used to keep track of changes in any set of files. As a distributed revision-control system, it is aimed at speed, data integrity, and support for distributed, non-linear workflows. […] As with most other distributed version-control systems, and unlike most client–server systems, every Git directory on every computer is a full-fledged repository with complete history and full version-tracking abilities, independent of network access or a central server.</p>
</blockquote>
<p>So what do these features come down to in the hard reality of day-to-day life?</p>
<ol>
<li>
<p><em>Collaboration.</em> Working with others <em>without</em> having to worry about ‘tokens’ and other concepts solely created to implement file locks through human behavior. <code class="highlighter-rouge">Git</code> provides capabilities for <em>distributed</em> and <em>asynchronous</em> collaboration. In terms of how awesome <code class="highlighter-rouge">git</code> really is, let the numbers speak: Microsoft just <a href="https://news.microsoft.com/2018/06/04/microsoft-to-acquire-github-for-7-5-billion/">paid</a> $7.5 billion for <code class="highlighter-rouge">github</code>, one of the main <code class="highlighter-rouge">git</code> repository platforms, for a reason… With <code class="highlighter-rouge">git</code> any number of people can work on the same files, code, projects, etc., and <code class="highlighter-rouge">git</code> will automatically merge changes provided they do not overlap; if they do overlap, they can be merged relatively easily by hand with the help of <code class="highlighter-rouge">git</code>. Also, nothing is ever lost! Remember when you shared files on Dropbox and someone overwrote the file you had painstakingly edited just to fix a comma? With <code class="highlighter-rouge">git</code> this cannot happen.</p>
</li>
<li>
<p><em>Backup and full history.</em> Every copy of the repository on any machine contains the <em>full</em> version history. This provides incredible redundancy <em>and</em> if you <code class="highlighter-rouge">push</code> into a remote repository then you have a remote backup that you can <code class="highlighter-rouge">pull</code> from basically any location with an internet connection. For repository space check out, e.g., <a href="https://bitbucket.org/">bitbucket.org</a> and <a href="https://github.com">github.com</a>.</p>
</li>
<li>
<p><em>Different version branches.</em> Another powerful feature of <code class="highlighter-rouge">git</code> is to maintain and synchronize different versions of a product through <code class="highlighter-rouge">branches</code>. A common use case for me is, for example, when we have an arxiv version and a conference version of a paper that need to be kept synchronized. With <code class="highlighter-rouge">git</code> you can easily track changes between these versions and <code class="highlighter-rouge">cherry pick</code> those that you want to synchronize.</p>
</li>
</ol>
<p>Unfortunately, the learning curve of <code class="highlighter-rouge">git</code> is quite steep, in particular if you want to do something slightly more advanced. For most users I highly recommend a <code class="highlighter-rouge">git</code> gui, as it makes merging etc. much easier; I will mention two choices below. There are tons of <code class="highlighter-rouge">git</code> tutorials online and good starting points are <a href="https://try.github.io/">[here]</a> and <a href="https://git-scm.com/docs/gittutorial">[here]</a>. (Ping me for your favorite one; happy to add links.)</p>
<h3 id="sourcetree">Sourcetree</h3>
<p>Great and free <code class="highlighter-rouge">git</code> gui for mac os x and windows.</p>
<p><em>Learning curve: ⭐️⭐️</em>
<em>Usefulness: ⭐️⭐️⭐️⭐️</em> <br />
<em>Site: <a href="https://www.sourcetreeapp.com/">https://www.sourcetreeapp.com/</a></em></p>
<p><code class="highlighter-rouge">Sourcetree</code> is a great graphical <code class="highlighter-rouge">git</code> client. It has full support for <code class="highlighter-rouge">git</code>, comes with many useful features, and is free. Not much to say otherwise: the power of <code class="highlighter-rouge">git</code> accessible through a great user interface.</p>
<h3 id="smartgit">SmartGit</h3>
<p>Great <code class="highlighter-rouge">git</code> gui for mac os x and windows.</p>
<p><em>Learning curve: ⭐️⭐️</em>
<em>Usefulness: ⭐️⭐️⭐️⭐️</em> <br />
<em>Site: <a href="https://www.syntevo.com/smartgit/">https://www.syntevo.com/smartgit/</a></em></p>
<p><code class="highlighter-rouge">SmartGit</code> is another great graphical <code class="highlighter-rouge">git</code> client and it is free for non-commercial use. Otherwise the same as for <code class="highlighter-rouge">Sourcetree</code> applies here; both <code class="highlighter-rouge">Sourcetree</code> and <code class="highlighter-rouge">SmartGit</code> are great and it comes down to personal preference.</p>
<p><strong>Emulating the Expert</strong> (2018-11-26): <a href="http://www.pokutta.com/blog/research/2018/11/26/expertLearning-abstract">http://www.pokutta.com/blog/research/2018/11/26/expertLearning-abstract</a></p>
<p><em>TL;DR: This is an informal summary of our recent paper <a href="https://arxiv.org/abs/1810.12997">An Online-Learning Approach to Inverse Optimization</a> with <a href="http://www.am.uni-erlangen.de/index.php?id=229">Andreas Bärmann</a>, <a href="https://www.am.uni-erlangen.de/?id=199">Alexander Martin</a>, and <a href="https://www.mso.math.fau.de/edom/team/schneider-oskar/oskar-schneider/">Oskar Schneider</a>, where we show how methods from online learning can be used to learn a hidden objective of a decision-maker in the context of Mixed-Integer Programs and more general (not necessarily convex) optimization problems.</em>
<!--more--></p>
<h2 id="what-is-the-paper-about-and-why-you-might-care">What is the paper about and why you might care</h2>
<p>We often face the situation in which we observe a decision-maker—let’s call her Alice—who is making “reasonably optimal” decisions with respect to some private objective function, and another party—let’s call him Bob—who would like to make decisions that emulate Alice’s decisions in terms of quality with respect to <em>Alice’s private objective function</em>. Classical applications where this naturally occurs are learning customer preferences from observed behavior in order to recommend new products that match the customer’s preferences, or dynamic routing, where we observe routing decisions of individual participants but cannot directly observe, e.g., travel times. The formal name for the problem that we consider is <em>inverse optimization</em>; informally speaking, we simply want to <em>emulate the expert</em>. For completeness, in reinforcement learning what we want to achieve would be referred to as <em>inverse reinforcement learning</em>.</p>
<p>The following graph lays out the basic setup that we consider:</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/setup-inverse-opt.png" alt="Setup Emulating Expert" /></p>
<p>In summary, Alice is solving</p>
<script type="math/tex; mode=display">x_t \doteq \arg \max_{x \in P_t} c_{true}^\intercal x,</script>
<p>and Bob can solve</p>
<script type="math/tex; mode=display">\bar x_t \doteq \arg \max_{x \in P_t} c_t^\intercal x,</script>
<p>for some guessed objective $c_t$, and after Bob has played his decision $\bar x_t$, he observes Alice’s decision $x_t$ taken with respect to her <em>private</em> objective $c_{true}$. For each time step $t \in [T]$, $P_t$ is some feasible set of decisions over which Alice and Bob optimize their respective (linear) objective functions; the interesting case is where $P_t$ varies over time, so that Alice’s decision $x_t$ is round-dependent. Note that we can accommodate basically arbitrary (potentially non-linear) function families as long as we have a reasonable “basis” for the family; the interested reader might check the paper for details.</p>
<p>Learning to emulate Alice’s decisions $x_t$ seems to be almost impossible to accomplish at first:</p>
<ol>
<li>We obtain potentially very little information only from Alice’s decision $x_t$.</li>
<li>The objective that explains Alice’s decisions might not be unique.</li>
</ol>
<p>However, it turns out that under reasonable assumptions, namely that Alice’s decisions are reasonably close to the optimal ones with respect to $c_{true}$ and that we observe a number of examples that are “diverse enough” as necessitated by the specifics of the instance, we in fact <em>can</em> learn an <em>equivalent</em> objective that renders Alice’s solutions basically optimal w.r.t. this learned proxy objective. In fact, one well-known way to solve an offline variant of this problem and obtain such a proxy objective is via dualization or KKT system approaches. For example, in the case of <em>linear programs</em> this can be done as follows:</p>
<p class="mathcol"><strong>Remark (LP case).</strong> Suppose that $P_t \doteq \setb{x \in \RR^n \mid A_t x \leq b_t}$ for $ t \in [T]$ and assume that we have a polyhedral feasible region $F = \setb{c \in \RR^n \mid Bc \leq d}$ for the candidate objectives. Then the linear program
\[
\min \sum_{t = 1}^T (b_t^\intercal y_t - c^\intercal x_t) \qquad
\]
\[
A_t^\intercal y_t = c \qquad \forall t \in [T]
\]
\[
y_t \geq 0 \qquad \forall t \in [T]
\]
\[
Bc \leq d,
\]
where $c$ and the $y_t$ are variables and the rest is input data, computes a linear objective $c$, if feasible and bounded etc, so that for all $t \in [T]$ it holds
\[
c^\intercal x_t = \max_{x \in P_t} c^\intercal x.
\]</p>
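The remark can be instantiated numerically. The sketch below builds the LP above for tiny two-dimensional instances, taking $F$ to be the probability simplex to rule out the trivial solution $c = 0$; it assumes SciPy's <code>linprog</code> is available, and all instance data is made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Three decision environments P_t = {x >= 0, a_t . x <= 1} (triangles in R^2)
# and Alice's hidden objective; all values are illustrative.
a_ts = [np.array([1.0, 2.0]), np.array([3.0, 1.0]), np.array([1.0, 1.0])]
c_true = np.array([0.7, 0.3])

def vertices(a):
    return [np.array([1 / a[0], 0.0]), np.array([0.0, 1 / a[1]]), np.zeros(2)]

# Alice's observed decisions: x_t = argmax_{x in P_t} c_true . x
xs = [max(vertices(a), key=lambda v: c_true @ v) for a in a_ts]

T, n, m = len(a_ts), 2, 3                     # m = rows of each A_t = [a_t; -I]
A_ts = [np.vstack([a, -np.eye(n)]) for a in a_ts]
b_ts = [np.array([1.0, 0.0, 0.0]) for _ in a_ts]

# Variables z = (c, y_1, ..., y_T); minimize sum_t b_t . y_t - (sum_t x_t) . c
obj = np.concatenate([-sum(xs)] + b_ts)
A_eq = np.zeros((n * T + 1, n + m * T))
b_eq = np.zeros(n * T + 1)
for t in range(T):
    A_eq[n * t:n * t + n, :n] = -np.eye(n)               # ... - c = 0
    A_eq[n * t:n * t + n, n + m * t:n + m * (t + 1)] = A_ts[t].T   # A_t^T y_t
A_eq[-1, :n] = 1.0                                       # normalization c_1 + c_2 = 1
b_eq[-1] = 1.0

# default bounds z >= 0 give y_t >= 0 and c in the simplex (with the row above)
res = linprog(obj, A_eq=A_eq, b_eq=b_eq, method="highs")
c_learned = res.x[:n]
print("optimal value:", res.fun, " learned c:", c_learned)
```

By weak LP duality each term $b_t^\intercal y_t - c^\intercal x_t$ is nonnegative, so an optimal value of $0$ certifies that the learned $c$ makes every observed $x_t$ optimal in its environment.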
<p>While the above can also be reasonably extended to convex programs via solving the KKT system instead, it has two disadvantages:</p>
<ol>
<li>It is an <em>offline</em> approach: first collect data and <em>then</em> regress out a proxy objective, i.e., <em>first-learn-then-optimize</em>, which might be problematic in many applications.</li>
<li>Additionally, and no less severe, this <em>only</em> works for linear programs (convex programs) and not for Mixed-Integer Programs or more general optimization problems, as, due to non-convexity, the KKT system or the dual program is not defined/available in this case.</li>
</ol>
<h2 id="our-results">Our results</h2>
<p>Our method alleviates both of the above shortcomings by providing an <em>online learning algorithm</em>: we learn a proxy objective equivalent to Alice’s objective <em>while</em> participating in the decision-making process. Moreover, our approach is general enough to apply to a wide variety of optimization problems (including MIPs etc.), as it only relies on standard regret guarantees and (approximate) optimality of Alice’s decisions. More precisely, we provide an online learning algorithm—using either Multiplicative Weights Updates (MWU) or Online Gradient Descent (OGD) as a black box—that ensures the following guarantee.</p>
<p class="mathcol"><strong>Theorem [BMPS, BPS].</strong> With the notation from above the online learning algorithm ensures
\[
0 \leq \frac{1}{T} \sum_{t = 1}^T (c_t - c_{true})^\intercal (\bar x_t - x_t)
\leq O\left(\sqrt{\frac{1}{T}}\right),
\]
where the constant hidden in the $O$-notation depends on the used algorithm (either MWU or OGD) and the (maximum) diameter of the feasible regions $P_t$.</p>
<p>In particular, note that in the above</p>
<script type="math/tex; mode=display">(c_t - c_{true})^\intercal (\bar x_t - x_t) = \underbrace{c_t^\intercal (\bar x_t - x_t)}_{\geq 0} + \underbrace{c_{true}^\intercal (x_t - \bar x_t)}_{\geq 0},</script>
<p>where the nonnegativity arises from the optimality of $x_t$ w.r.t. $c_{true}$ and the optimality of $\bar x_t$ w.r.t. $c_t$. We therefore obtain in particular that</p>
<p>\[
0 \leq \frac{1}{T} \sum_{t = 1}^T c_t^\intercal (\bar x_t - x_t)
\leq O\left(\sqrt{\frac{1}{T}}\right),
\]</p>
<p>and</p>
<p>\[
0 \leq \frac{1}{T} \sum_{t = 1}^T c_{true}^\intercal (x_t - \bar x_t)
\leq O\left(\sqrt{\frac{1}{T}}\right),
\]</p>
<p>hold, which tend to $0$ on the right-hand side for $T \rightarrow \infty$. Thus Bob’s decisions $\bar x_t$ converge to decisions that are close in cost, on average, to Alice’s decisions $x_t$ not only w.r.t. $c_t$ but <em>also</em> w.r.t. $c_{true}$, although we might never actually observe $c_{true}$. In the paper we also consider special cases under which we can ensure recovery of $c_{true}$ itself and not just an equivalent function. One way of thinking about our online learning algorithm is that it provides an approximate solution to the (inaccessible) KKT system that we would like to solve. In fact, in the case of, e.g., LPs, it can be shown that our algorithm solves a dual program similar to the one from above by means of gradient descent (or mirror descent).</p>
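The guarantee above can be illustrated with a minimal simulation of the online protocol, using projected online gradient descent as the black-box algorithm on the linear losses $c \mapsto c^\intercal (\bar x_t - x_t)$. The decision environments below are random finite vertex sets and all constants are made up; this is a sketch of the idea, not the implementation from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 10, 2000
c_true = rng.random(n)
c_true /= np.linalg.norm(c_true)         # Alice's hidden objective, in the unit ball

def best(c, V):
    """Linear maximization over a finite vertex set V (rows are vertices)."""
    return V[np.argmax(V @ c)]

c_t = np.full(n, 1 / np.sqrt(n))         # Bob's initial guess
eta = 1.0 / np.sqrt(T)                   # OGD step size
total = 0.0
for t in range(T):
    V = rng.random((20, n))              # decision environment P_t
    x_bar = best(c_t, V)                 # Bob's decision w.r.t. c_t
    x = best(c_true, V)                  # Alice's decision w.r.t. c_true
    total += (c_t - c_true) @ (x_bar - x)   # the quantity from the theorem
    # OGD on the linear loss c -> c . (x_bar - x), projected onto the unit ball
    c_t = c_t - eta * (x_bar - x)
    nrm = np.linalg.norm(c_t)
    if nrm > 1:
        c_t /= nrm
avg = total / T
print(f"average error after T={T} rounds: {avg:.4f}")
```

Each round's term is nonnegative by the optimality of $x_t$ and $\bar x_t$, and the average shrinks at the $O(1/\sqrt{T})$ rate promised by the regret bound.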
<p>The key question, of course, is whether our algorithm actually works in practice. And the answer is <em>yes</em>. The left plot shows the convergence of the total error $(c_t - c_{true})^\intercal (\bar x_t - x_t)$ over $t \in [T]$ in each round (red dots) as well as the cumulative average error up to that point (blue line) for an integer knapsack problem with $n = 1000$ items over $T = 1000$
rounds, using MWU as the black-box algorithm. The proposed algorithm is also rather stable and consistent across instances in terms of convergence, as can be seen in the right plot, where we consider the statistics of the total error over $500$ runs for a linear knapsack problem with
$n = 50$ items over $T = 500$ rounds. Here we depict the mean total error averaged up to time $\ell$, i.e., $\frac{1}{\ell} \sum_{t = 1}^{\ell} (c_t - c_{true})^\intercal (\bar x_t - x_t)$, and associated error bands.</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/online-learning-comp.png" alt="Convergence of Total Error and Statistics" /></p>
<h3 id="a-note-on-generalization">A note on generalization</h3>
<p>If the varying decision environments $P_t$ are drawn i.i.d. from some distribution $\mathcal D$, then a reasonable form of generalization to unseen realizations of the decision environment $P_t$ drawn from $\mathcal D$ can also be shown, provided that we have seen enough examples during the learning process. For this, one can show that after a sufficient number of samples $T$ it holds that</p>
<p>\[
\frac{1}{T} \sum_{t = 1}^T c_{true}^\intercal x_t
\approx \mathbb E_{\mathcal D} [c_{true}^\intercal \tilde x],
\]</p>
<p>where $\tilde x = \arg \max_{x \in P} c_{true}^\intercal x$ for $P \sim \mathcal D$ and one then applies the regret bound, which provides</p>
<p>\[
\frac{1}{T} \sum_{t = 1}^T c_{true}^\intercal x_t \approx \frac{1}{T} \sum_{t = 1}^T c_{true}^\intercal \bar x_t,
\]</p>
<p>so that roughly</p>
<p>\[
\frac{1}{T} \sum_{t = 1}^T c_{true}^\intercal \bar x_t
\approx \mathbb E_{\mathcal D} [c_{true}^\intercal \tilde x] ,
\]</p>
<p>follows. This can be made precise by working out the number of samples, so that the approximation errors above are of the order of a given $\varepsilon > 0$.</p>
<h3 id="references">References</h3>
<p>[BMPS] Bärmann, A., Martin, A., Pokutta, S., & Schneider, O. (2018). An Online-Learning Approach to Inverse Optimization. arXiv preprint arXiv:1810.12997. <a href="https://arxiv.org/abs/1810.12997">arxiv</a></p>
<p>[BPS] Bärmann, A., Pokutta, S., & Schneider, O. (2017, July). Emulating the Expert: Inverse Optimization through Online Learning. In International Conference on Machine Learning (pp. 400-410). <a href="http://proceedings.mlr.press/v70/barmann17a.html">pdf</a></p>
<p><em>Sebastian Pokutta</em></p>