Multidisciplinary Optimization in Decentralized Reinforcement Learning

Date
2017-12
Language
English
Found At
IEEE
Abstract

Multidisciplinary Optimization (MDO) is one of the most popular techniques in aerospace engineering, where systems are complex and draw on knowledge from multiple fields. However, to the best of our knowledge, MDO has not been widely applied in decentralized reinforcement learning (RL) due to the unknown nature of RL problems. In this work, we apply MDO to decentralized RL. In our MDO design, each learning agent uses system identification to closely approximate the environment and thereby tackle the unknown nature of RL. The agents then apply MDO principles to compute the control solution using Monte Carlo and Markov Decision Process techniques. We examine two MDO design options suitable for multi-agent learning: the multidisciplinary feasible option and the individual discipline feasible option. Our results show that the individual discipline feasible option successfully learns to control the system, and that the MDO approach outperforms both fully decentralized and fully centralized approaches.
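The paper itself is behind the publisher's link, but the two building blocks the abstract names for each agent — Monte Carlo system identification of the unknown environment, followed by solving the identified Markov Decision Process — can be illustrated with a minimal single-agent sketch. Everything below (the toy chain environment, state/action sizes, and function names) is a hypothetical stand-in, not the authors' implementation:

```python
import random
from collections import defaultdict

# Hypothetical chain environment: states 0..4, actions {-1, +1},
# reward 1.0 for landing on the goal state 4. The true dynamics
# (80% chance the intended move succeeds) are unknown to the agent.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)

def step(s, a, rng):
    move = a if rng.random() < 0.8 else -a
    s2 = min(max(s + move, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

def identify_system(n_samples=2000, seed=0):
    # System identification via Monte Carlo sampling:
    # estimate P(s' | s, a) from observed transitions.
    rng = random.Random(seed)
    counts = defaultdict(lambda: defaultdict(int))
    for _ in range(n_samples):
        s = rng.randrange(N_STATES)
        a = rng.choice(ACTIONS)
        s2, _ = step(s, a, rng)
        counts[(s, a)][s2] += 1
    return {sa: {s2: n / sum(c.values()) for s2, n in c.items()}
            for sa, c in counts.items()}

def value_iteration(model, gamma=0.9, iters=200):
    # Solve the identified MDP for values and a greedy control policy.
    def q(s, a, V):
        return sum(p * ((1.0 if s2 == GOAL else 0.0) + gamma * V[s2])
                   for s2, p in model[(s, a)].items())
    V = [0.0] * N_STATES
    for _ in range(iters):
        V = [max(q(s, a, V) for a in ACTIONS) for s in range(N_STATES)]
    policy = [max(ACTIONS, key=lambda a: q(s, a, V)) for s in range(N_STATES)]
    return V, policy

model = identify_system()
V, policy = value_iteration(model)
print(policy)  # the greedy policy moves toward the goal state
```

In the decentralized setting the paper studies, each agent would run such an identification-and-planning loop over its own discipline, with the MDO coordination scheme (multidisciplinary feasible or individual discipline feasible) reconciling the coupled variables between agents.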

Cite As
Nguyen, T., & Mukhopadhyay, S. (2017). Multidisciplinary Optimization in Decentralized Reinforcement Learning. In 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA) (pp. 779–784). https://doi.org/10.1109/ICMLA.2017.00-63
Journal
2017 16th IEEE International Conference on Machine Learning and Applications
Type
Conference proceedings
Version
Author's manuscript