4.7 Article

Optimizing Wireless Systems Using Unsupervised and Reinforced-Unsupervised Deep Learning

Journal

IEEE NETWORK
Volume 34, Issue 4, Pages 270-277

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/MNET.001.1900517

Keywords

Optimization; Unsupervised learning; Mathematical model; Computational modeling; Wireless networks; Resource management

Funding

  1. National Natural Science Foundation of China (NSFC) [61731002, 61671036]
  2. Engineering and Physical Sciences Research Council [EP/N004558/1, EP/PO34284/1]
  3. Royal Society's Global Challenges Research Fund Grant
  4. European Research Council's Advanced Fellow Grant QuantCom

Abstract

Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems subject to specific constraints, which can be formulated as variable or functional optimization. If the objective and constraint functions of a variable optimization problem can be derived, standard numerical algorithms can be applied to find the optimal solution, but they incur high computational cost when the dimension of the variables is high. To reduce the online computational complexity, learning the optimal solution as a function of the environment's status with deep neural networks (DNNs) is an effective approach. DNNs can be trained under the supervision of optimal solutions; this, however, is not applicable to scenarios without models or to functional optimization, where the optimal solutions are hard to obtain. If the objective and constraint functions are unavailable, reinforcement learning can be applied to solve a functional optimization problem, but it is not tailored to optimization problems in wireless networks. In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems without the supervision of optimal solutions. When the mathematical model of the environment is completely known and the distribution of the environment's status is either known or unknown, we can invoke an unsupervised learning algorithm. When the mathematical model of the environment is incomplete, we introduce reinforced-unsupervised learning algorithms that learn the model by interacting with the environment. Our simulation results confirm the applicability of these learning frameworks by taking a user association problem as an example.
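To make the unsupervised framework in the abstract concrete, the following is a minimal, hypothetical sketch (not the article's user-association example): a DNN maps the environment's status (here, channel gains) to a resource-allocation decision and is trained by directly optimizing the objective, with no labeled optimal solutions. The toy interference-free power-allocation problem, the `PowerNet` model, and all parameter values are illustrative assumptions, not the authors' formulation.

```python
# Hypothetical sketch of unsupervised "learning to optimize" with PyTorch:
# a DNN maps channel gains to transmit powers and is trained by minimizing
# the negative sum rate directly -- no labeled optimal solutions are used.
import torch
import torch.nn as nn

K = 4        # number of links (assumed toy setting)
P_MAX = 1.0  # total power budget (assumed)
NOISE = 0.1  # receiver noise power (assumed)

class PowerNet(nn.Module):
    def __init__(self, k):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(k, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, k), nn.Softmax(dim=-1),  # fractions that sum to 1
        )

    def forward(self, h):
        # Scale the softmax output so the total-power constraint holds by construction.
        return P_MAX * self.net(h)

def neg_sum_rate(h, p):
    # Toy interference-free objective: sum_k log2(1 + h_k * p_k / noise).
    rates = torch.log2(1.0 + h * p / NOISE)
    return -rates.sum(dim=-1).mean()

model = PowerNet(K)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    h = torch.rand(128, K)     # sample channel states (environment's status)
    p = model(h)               # DNN outputs a power allocation
    loss = neg_sum_rate(h, p)  # unsupervised loss: the objective itself
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch the constraint is satisfied by construction through the softmax output layer; for general constraints that cannot be embedded in the architecture, an unsupervised formulation would instead add penalty or Lagrangian terms to the loss, which is closer in spirit to the constrained problems discussed in the article.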

