
Deep Multi-Objective Reinforcement Learning for Utility-Based Infrastructural Maintenance Optimization

Authors: Jesse van Remmerden, Maurice Kenter, Diederik M. Roijers, Charalampos Andriotis, Yingqian Zhang, Zaharah Bukhsh
TLDR:
The paper presents MO-DCMAC, a novel multi-objective reinforcement learning approach designed to optimize infrastructure maintenance strategies by balancing multiple objectives, such as cost and probability of collapse, through a utility-based framework. The approach combines ideas from the Deep Centralized Multi-Agent Actor-Critic (DCMAC) and Multi-Objective Categorical Actor-Critic (MOCAC) methods to handle multi-objective optimization with non-linear utility functions. The paper demonstrates the effectiveness of MO-DCMAC through experiments in several maintenance environments, including a case study on the historical quay walls of Amsterdam, showing that it outperforms traditional heuristic-based policies. However, the method currently struggles to scale to a larger number of objectives, because the critic's output grows exponentially with the number of objectives. The paper concludes that MO-DCMAC has potential for real-world applications in infrastructure maintenance, provided the scalability issue is addressed.

The document presents MO-DCMAC, a novel multi-objective reinforcement learning approach that integrates DCMAC and MOCAC to optimize infrastructure maintenance strategies by balancing multiple objectives, such as cost and collapse probability, using a utility-based framework under the expected scalarised returns (ESR) criterion, and demonstrates its effectiveness through experiments in various maintenance environments.


Abstract

In this paper, we introduce Multi-Objective Deep Centralized Multi-Agent Actor-Critic (MO-DCMAC), a multi-objective reinforcement learning (MORL) method for infrastructural maintenance optimization, an area traditionally dominated by single-objective reinforcement learning (RL) approaches. Previous single-objective RL methods combine multiple objectives, such as probability of collapse and cost, into a singular reward signal through reward-shaping. In contrast, MO-DCMAC can optimize a policy for multiple objectives directly, even when the utility function is non-linear. We evaluated MO-DCMAC using two utility functions, which use probability of collapse and cost as input. The first utility function is the Threshold utility, in which MO-DCMAC should minimize cost so that the probability of collapse is never above the threshold. The second is based on the Failure Mode, Effects, and Criticality Analysis (FMECA) methodology used by asset managers to assess maintenance plans. We evaluated MO-DCMAC, with both utility functions, in multiple maintenance environments, including ones based on a case study of the historical quay walls of Amsterdam. The performance of MO-DCMAC was compared against multiple rule-based policies based on heuristics currently used for constructing maintenance plans. Our results demonstrate that MO-DCMAC outperforms traditional rule-based policies across various environments and utility functions.
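The Threshold utility described above can be sketched in a few lines: minimize cost, but any plan whose probability of collapse exceeds the threshold is heavily penalized. This is an illustrative sketch only; the threshold value, penalty constant, and exact functional form below are hypothetical, not the paper's actual parameters.

```python
def threshold_utility(total_cost: float, p_collapse: float,
                      threshold: float = 0.05,
                      penalty: float = 1e6) -> float:
    """Utility of a maintenance plan under a Threshold-style criterion.

    The threshold and penalty values here are illustrative, not taken
    from the paper.
    """
    if p_collapse > threshold:
        return -penalty      # unacceptable plan: collapse risk too high
    return -total_cost       # acceptable plan: utility is negative cost
```

Under such a utility, a cheap plan that violates the risk threshold scores worse than a costlier plan that respects it, which is exactly the trade-off a scalarized reward struggles to express.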

Method

The authors used the Multi-Objective Deep Centralized Multi-Agent Actor-Critic (MO-DCMAC) methodology, which is a multi-objective reinforcement learning (MORL) approach for optimizing infrastructural maintenance strategies. This methodology is capable of directly optimizing multiple objectives, such as minimizing cost and probability of collapse, even when the utility function is non-linear. They evaluated MO-DCMAC using two utility functions: the Threshold utility and one based on the Failure Mode, Effects, and Criticality Analysis (FMECA) methodology commonly used by asset managers. The research was conducted in multiple maintenance environments, including a case study on the historical quay walls of Amsterdam, and the results were compared against traditional rule-based policies.
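Under the expected scalarised returns (ESR) criterion mentioned in the summary, the utility function is applied to each episode's accumulated vector return before taking the expectation, rather than to per-objective expected values. A minimal Monte Carlo sketch of this evaluation, with a hypothetical non-linear utility and made-up rollout returns:

```python
from typing import Callable, Sequence, Tuple

def esr_estimate(episode_returns: Sequence[Tuple[float, float]],
                 utility: Callable[[float, float], float]) -> float:
    """Estimate E[u(R)] under ESR: apply the utility to each episode's
    accumulated (cost, probability-of-collapse) return, then average."""
    values = [utility(cost, p) for cost, p in episode_returns]
    return sum(values) / len(values)

# Hypothetical non-linear utility: negative cost plus a quadratic risk penalty.
u = lambda cost, p: -cost - 1000.0 * p ** 2

# Made-up accumulated returns from three rollouts of a policy.
rollouts = [(120.0, 0.02), (90.0, 0.05), (150.0, 0.01)]
print(esr_estimate(rollouts, u))
```

Because the utility is non-linear, u(E[R]) and E[u(R)] generally differ, which is why ESR methods like MO-DCMAC cannot simply optimize each objective's expectation separately.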

Main Finding

The authors discovered that the MO-DCMAC (Multi-Objective Deep Centralized Multi-Agent Actor-Critic) approach is effective in optimizing infrastructure maintenance strategies by balancing multiple objectives, such as cost and probability of collapse, under a utility-based framework with a known non-linear utility function. They found that MO-DCMAC outperforms traditional heuristic-based policies in various experimental environments, including a case study on the historical quay walls of Amsterdam. However, they also identified a major limitation of MO-DCMAC: its inability to scale to a larger number of objectives, because the critic's output scales exponentially with the number of objectives. This limitation restricted their experiments to only two objectives, whereas real-world maintenance planning often considers significantly more objectives. The authors suggest that future research should explore other approaches to accommodate a greater number of objectives, to increase the likelihood of methods like MO-DCMAC being used in real-world applications.
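The scaling limitation can be made concrete: a categorical critic in the style of MOCAC discretizes returns into a fixed number of atoms per objective and models the joint distribution, so the output size grows as atoms raised to the number of objectives. A small back-of-the-envelope illustration (the atom count of 51, a common choice in categorical RL, is hypothetical here):

```python
def critic_output_size(n_atoms: int, n_objectives: int) -> int:
    """Number of outputs for a categorical critic modelling the joint
    return distribution with n_atoms bins per objective."""
    return n_atoms ** n_objectives

# Two objectives are manageable, but the output count explodes quickly
# as objectives are added (51 atoms per objective is an assumed value).
for d in (2, 3, 5):
    print(d, critic_output_size(51, d))
```

With 51 atoms, two objectives already need 2,601 outputs and five would need over 345 million, which is why the paper's experiments stop at two objectives.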

Conclusion

MO-DCMAC optimizes infrastructure maintenance strategies under a utility-based framework with a known non-linear utility function, and it outperformed the heuristic-based rule-based policies across all tested environments and utility functions, including the Amsterdam quay-wall case study. Its main limitation is scalability: because the critic's output grows exponentially with the number of objectives, the experiments were restricted to two objectives, whereas real-world maintenance planning often involves many more. Future work should therefore explore approaches that accommodate a greater number of objectives, which would make methods like MO-DCMAC more viable for real-world deployment.

Keywords

Deep Multi-Objective Reinforcement Learning, Utility-Based Infrastructural Maintenance Optimization, MO-DCMAC, MOPOMDP, FMECA, ESR Criterion, Kalman Filter Models, Graph Convolutional Networks, Markov Chain Monte Carlo, Deep Reinforcement Learning, Transportation Infrastructure, Safety-Critical Systems, Condition-Based Maintenance, Grouping of Maintenance Actions, Predictive Maintenance, Industry 4.0, Wind Turbines, Operations & Maintenance Optimization, Multi-Objective Optimization, Sustainable Road Network Maintenance, Hierarchical Reinforcement Learning, Partially Observable MDPs, Reliability Engineering, System Safety, Production Scheduling, Pavement Maintenance, Bridge Networks, Safe Reinforcement Learning, Genetic Algorithms, Multi-Attribute Utility Theory, Fuzzy Logic, System Failure Mode, Effects and Criticality Analysis, Maintenance Planning, Degradation Matrix, Cost of Actions, Probability of Collapse, Asset Management, Amsterdam Quay Walls, Multi-Year Maintenance Planning Framework, Multi-Objective Partially Observable Markov Decision Process, Multi-Objective Categorical Actor-Critic, Expected Scalarised Returns, Generative Flow Models.
