
Journal of University of Chinese Academy of Sciences ›› 2025, Vol. 42 ›› Issue (1): 26-42. DOI: 10.7523/j.ucas.2023.055

• Research Articles •

Stochastic augmented Lagrangian method for stochastic nonconvex nonsmooth programs with many convex constraints

ZHAO Wenshen, HAN Congying, JIN Lingzi   

  1. School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
  • Received: 2023-01-17  Revised: 2023-05-15

Abstract: Stochastic gradient methods are widely used in machine learning, but most existing work targets unconstrained or simply constrained problems. In this paper, we consider nonconvex stochastic programs with many functional convex constraints. The deterministic augmented Lagrangian method is a classic algorithm for such problems, but its requirement for exact gradient information makes it impractical when the number of constraints is large. To solve such problems, we propose a novel stochastic augmented Lagrangian method, called CSALM (composite stochastic augmented Lagrangian method). CSALM approximates the exact gradient with a stochastic gradient, sampling only stochastic objective gradients and a batch of constraint gradients per iteration. We establish the convergence theory of CSALM and show that it finds an $\epsilon$-KKT point within $\mathcal{O}\left(\epsilon^{-8}\right)$ iterations. Numerical experiments on the multi-class Neyman-Pearson classification problem (mNPC) demonstrate the efficiency of CSALM.
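The abstract gives only the high-level mechanics, so the following Python sketch illustrates a generic single-loop stochastic augmented Lagrangian iteration of the kind described: one stochastic gradient of the objective plus a random batch of constraint gradients per step. The toy quadratic problem, the step sizes, the batch rule, and the per-batch multiplier update are illustrative assumptions; this is not the authors' CSALM implementation.

    # A minimal sketch of a stochastic augmented Lagrangian iteration for
    #   min_x f(x)  s.t.  g_i(x) <= 0,  i = 1, ..., m,
    # using one stochastic gradient of f and a random batch of constraints per
    # iteration, so the per-iteration cost does not depend on m.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 10, 1000                      # variable dimension, number of constraints
    c = rng.normal(size=n)               # toy objective: f(x) = 0.5 * ||x - c||^2
    A = rng.normal(size=(m, n))          # toy constraints: g_i(x) = a_i^T x - b_i
    b = A @ c - 0.1                      # all constraints slightly violated at x = c

    x = np.zeros(n)
    lam = np.zeros(m)                    # Lagrange multipliers, one per constraint
    rho, eta, batch = 1.0, 1e-4, 64      # penalty, primal step size, batch size (assumed)

    for t in range(50000):
        # Stochastic gradient of f (exact gradient plus simulated sampling noise).
        grad_f = (x - c) + 0.1 * rng.normal(size=n)
        # Random batch of constraints: values g_i(x) and gradients a_i.
        S = rng.choice(m, size=batch, replace=False)
        g_S = A[S] @ x - b[S]
        # Gradient of the augmented Lagrangian for inequality constraints,
        #   L_rho(x, lam) = f(x)
        #     + (1/(2*rho)) * sum_i ( max(0, lam_i + rho*g_i(x))^2 - lam_i^2 ),
        # with the constraint sum estimated from the batch (rescaled by m/batch).
        shifted = np.maximum(0.0, lam[S] + rho * g_S)
        grad_x = grad_f + (m / batch) * (A[S].T @ shifted)
        x -= eta * grad_x                # primal stochastic gradient step
        lam[S] = shifted                 # multiplier update on the sampled rows only

    print("max violation:", np.max(A @ x - b))   # near zero at an (eps-)KKT point
    print("objective    :", 0.5 * np.linalg.norm(x - c) ** 2)

In the classic deterministic augmented Lagrangian method, the multipliers are updated only after an inner minimization over x; the single-loop, batched update sketched above is one way to keep the per-iteration sampling cost independent of the total number of constraints, which is the regime the paper addresses.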

Key words: stochastic gradient, augmented Lagrangian method, nonlinear optimization, constrained optimization
