How to maximize a scalar product using Lagrangian methods
Let $f(x_1,\ldots,x_n)=u_1x_1+\ldots+u_nx_n$ and $g(x_1,\ldots,x_n)=p_1x_1+\ldots+p_nx_n$, where $\textbf{u}\ge 0$ and $\textbf{p}> 0$ (coordinatewise).
Solve
\begin{align} \qquad\qquad\qquad\qquad \max\,\,\, &f(x_1,\ldots,x_n) \qquad\qquad\text{subject to} \\ &g(x_1,\ldots,x_n) = a \\ &x_1 \ge 0 \\ &\;\;\vdots\\ &x_n \ge 0 \end{align} This is a constrained optimization problem with inequality constraints, which is solved by the Kuhn-Tucker method (wiki and original paper). We use $\lambda$ as the multiplier on the equality constraint and $\mu_i$ as the multipliers on the nonnegativity constraints. The Kuhn-Tucker conditions are:
\begin{align} u_i &= \lambda p_i + \mu_i, \text{ for } i=1,\ldots,n \\ a &= p_1x_1+\ldots+p_nx_n \\ \mu_i &\le 0, \text{ for } i=1,\ldots,n \\ \mu_i x_i &= 0, \text{ for } i=1,\ldots,n \end{align} Clearly, not all $x_i$ can equal $0$ (since $a>0$). Let $i^*$ denote an index for which $x_{i^*}>0$. Then $\mu_{i^*}=0$ and $\lambda = \frac{u_{i^*}}{p_{i^*}}$. It then follows that for indices $i\ne i^*$ we must have $\mu_i = u_i - \frac{u_{i^*}}{p_{i^*}}p_i$, and so $\frac{\mu_i}{p_i} = \frac{u_i}{p_i} - \frac{u_{i^*}}{p_{i^*}}$, which must be nonpositive ($p_i>0$ by assumption).
Thus, $i^*$ must be an index for which $\frac{u_i}{p_i}$ is largest. For every other index (assuming the maximizing ratio is unique) we have $\mu_i < 0$, so complementary slackness forces $x_i=0$, and therefore $x_{i^*} = \frac{a}{p_{i^*}}$.
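The closed-form solution above can be cross-checked numerically against a generic LP solver. A minimal sketch, assuming `numpy` and `scipy` are available; the vectors $u$, $p$ and the budget $a$ below are arbitrary illustrative values:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: u >= 0, p > 0 coordinatewise, a > 0.
u = np.array([3.0, 5.0, 2.0])   # objective coefficients
p = np.array([2.0, 4.0, 1.0])   # prices
a = 10.0                        # budget

# Closed-form solution from the Kuhn-Tucker analysis:
# spend the entire budget on the index with the largest ratio u_i / p_i.
i_star = int(np.argmax(u / p))  # ratios are 1.5, 1.25, 2.0, so i_star = 2
x_closed = np.zeros_like(u)
x_closed[i_star] = a / p[i_star]

# Generic LP check: linprog minimizes, so negate u.
res = linprog(-u, A_eq=[p], b_eq=[a], bounds=[(0, None)] * len(u))

print(x_closed, u @ x_closed)   # optimal bundle and its value
print(res.x, -res.fun)          # should agree with the closed form
```

Both approaches should report the same optimal value $\frac{u_{i^*}}{p_{i^*}}\,a$.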
Peteris
Updated on June 16, 2020

Comments

Peteris, over 3 years ago:
maximize $U(x) = u \cdot x$ with respect to $p \cdot x = w$
given that $u, p, x \in \mathbb{R}^L_+$, $w \in \mathbb{R}_+$.
I can solve it classically:
\begin{align} u \cdot x &= \sum_i u_i x_i \\ &= \sum_i \frac{u_i}{p_i} p_i x_i\\ &\le \left(\max_i \frac{u_i}{p_i}\right) \sum_j p_j x_j\\ &= \left(\max_i \frac{u_i}{p_i}\right) w \end{align}
with equality when $x = \frac{w}{p_i} e_i$ where $i = \text{argmax}_j \frac{u_j}{p_j}$ and $e_i$ is the $i$th basis vector.
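The inequality chain and the equality case can be checked numerically. A quick sketch with randomly generated data (illustrative only; `numpy` assumed available):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(5)          # u in R^5 with nonnegative entries
p = rng.random(5) + 0.1    # strictly positive prices
w = 1.0                    # wealth

bound = np.max(u / p) * w  # upper bound from the inequality chain

# Random feasible bundles: x >= 0 rescaled so that p . x = w exactly.
for _ in range(1000):
    x = rng.random(5)
    x *= w / (p @ x)
    assert u @ x <= bound + 1e-12   # the bound holds for every bundle

# Equality at x = (w / p_i) e_i where i = argmax_j u_j / p_j.
i = int(np.argmax(u / p))
x_star = np.zeros(5)
x_star[i] = w / p[i]
assert abs(u @ x_star - bound) < 1e-9
```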
But I would like to see a proof using Lagrange multipliers.

Peteris, over 10 years ago: You're quite right; I want to add $x_i \ge 0$ as a feasibility constraint, otherwise there is an arbitrage. But all vectors in the problem are in $\mathbb{R}_+$.

mStudent, over 10 years ago: Peteris, this changes the nature of the problem. It's constrained optimization with inequality constraints. I added another solution and some references.