
Journal of University of Chinese Academy of Sciences ›› 2024, Vol. 41 ›› Issue (2): 275-284. DOI: 10.7523/j.ucas.2022.031

• Electronic Information and Computer Science •

Solving quadratic assignment problem based on actor-critic framework

LI Xueyuan, HAN Congying

  1. School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
  • Received: 2022-01-30  Revised: 2022-04-01  Published: 2022-04-07
  • Corresponding author: HAN Congying, E-mail: hancy@ucas.ac.cn
  • Funding: Supported by the National Key R&D Program of China (2021YFA1000403), the National Natural Science Foundation of China (11991022), and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA27000000)

Abstract: The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide range of real-world applications. Relatively mature heuristic algorithms are usually customized for a specific problem and therefore lack the ability to transfer and generalize. To provide a unified strategy for solving the QAP, this paper abstracts the flow matrix and the distance matrix of a QAP instance into two undirected complete graphs and constructs the corresponding association graph, thereby transforming the assignment of facilities to locations into a node-selection task on the association graph. Based on the actor-critic framework, a new algorithm, ACQAP (actor-critic for QAP), is proposed. First, a multi-head attention mechanism is used to build a policy network that processes the node representation vectors produced by a graph convolutional network. Then, the actor-critic algorithm predicts, for each node, the probability of being output as the optimal node. Finally, according to these probabilities, the model outputs, within feasible time, an action decision sequence that satisfies the objective reward function. The algorithm requires no hand-crafted design and accepts inputs of different sizes, which makes it more flexible and reliable. Experimental results on QAPLIB instances show that the algorithm achieves accuracy comparable to traditional heuristic algorithms while exhibiting stronger transfer and generalization ability; moreover, compared with learning-based algorithms such as NGM, it attains the smallest deviation between the obtained assignment cost and the optimal solution, and this deviation is below 20% in most instances.
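For context, the QAP summarized in the abstract seeks a permutation π of n locations minimizing Σ_{i,j} F[i][j]·D[π(i)][π(j)], where F is the flow matrix between facilities and D the distance matrix between locations. The toy sketch below is illustrative only (it is not the paper's ACQAP implementation, and the instance data and function names are made up): it evaluates this objective and brute-forces a tiny instance, which is feasible only at toy sizes since the search space has n! assignments.

```python
import itertools

def qap_cost(F, D, perm):
    """Cost of assigning facility i to location perm[i]:
    sum over all i, j of F[i][j] * D[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(F[i][j] * D[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def brute_force_qap(F, D):
    """Exhaustive search over all n! assignments (toy sizes only)."""
    n = len(F)
    return min(itertools.permutations(range(n)),
               key=lambda p: qap_cost(F, D, p))

# Toy 3x3 instance: flow between facilities, distance between locations.
F = [[0, 5, 2], [5, 0, 3], [2, 3, 0]]
D = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]
best = brute_force_qap(F, D)
```

Heuristics and learning-based solvers such as ACQAP aim to approach this optimum without enumerating the factorial search space.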

Key words: quadratic assignment problem, graph convolutional neural network, deep reinforcement learning, multi-head attention mechanism, actor-critic algorithm
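The sequential node-selection scheme described in the abstract (policy scores → probabilities → action decision sequence) can be caricatured in a minimal, framework-free sketch. The scoring function here is a hypothetical stand-in for the paper's GCN-plus-attention policy network, and the weighted sampling stands in for the actor; none of these names come from the paper.

```python
import math
import random

def softmax(scores):
    """Convert raw policy scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def select_sequence(n, score_fn, rng):
    """Construct an action sequence step by step: score the remaining
    candidate nodes with the (hypothetical) policy, turn the scores into
    probabilities, and sample the next node until all are chosen."""
    remaining = list(range(n))
    sequence = []
    while remaining:
        probs = softmax([score_fn(c, sequence) for c in remaining])
        choice = rng.choices(remaining, weights=probs, k=1)[0]
        remaining.remove(choice)
        sequence.append(choice)
    return sequence
```

In actor-critic training, the sampled sequence's reward (here, the negative QAP cost) would be compared against a critic's value estimate to update both networks; the sketch above only shows the construction loop itself.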
