
Journal of University of Chinese Academy of Sciences


A stochastic augmented Lagrangian method for stochastic nonconvex nonsmooth programs with many convex constraints

ZHAO Wenshen, HAN Congying, JIN Lingzi   

  1. School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
  • Received: 2023-01-17; Revised: 2023-05-15

Abstract: Stochastic gradient methods have been widely used in machine learning, but most existing works target unconstrained problems or problems with simple constraints. In this paper, we consider nonconvex stochastic programs with many functional convex constraints. The deterministic augmented Lagrangian method is a classical algorithm for such problems, but its requirement of exact gradient information makes it impractical when the number of constraints is large. To solve such problems, we propose a novel stochastic augmented Lagrangian method, called CSALM (composite stochastic augmented Lagrangian method). CSALM approximates the exact gradient by stochastic gradients, sampling only stochastic gradients and a batch of constraint gradients per iteration. We establish the convergence theory of CSALM and show that it finds an ϵ-KKT point after O(ϵ^{-8}) iterations. Numerical experiments on the multi-class Neyman-Pearson classification problem (mNPC) demonstrate the efficiency of CSALM.
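For orientation, the augmented Lagrangian referred to above can be written in a standard form for inequality-constrained programs. The sketch below assumes the program min_x f(x) subject to g_i(x) ≤ 0, i = 1,…,m, with penalty parameter ρ and multipliers λ; this is an illustrative textbook form, not necessarily the exact CSALM formulation used in the paper.

  % Standard augmented Lagrangian for min_x f(x) s.t. g_i(x) <= 0, i = 1,...,m
  % (illustrative assumption; the paper defines its own composite formulation)
  \mathcal{L}_{\rho}(x,\lambda) = f(x)
      + \frac{1}{2\rho} \sum_{i=1}^{m}
        \Big( \max\{0,\ \lambda_i + \rho\, g_i(x)\}^{2} - \lambda_i^{2} \Big)

A stochastic method in this spirit would replace the exact gradient \nabla_x \mathcal{L}_{\rho}(x,\lambda) with an estimate built from a sampled gradient of f and the gradients of a sampled batch of constraints g_i, followed by a multiplier step such as \lambda_i \leftarrow \max\{0,\ \lambda_i + \rho\, g_i(x)\} on that batch.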

Key words: stochastic gradient, augmented Lagrangian, nonlinear optimization, constrained optimization
