Duration: 2 hours
Start: September 03, 15:00
Location: Aula A
Abstract:
Often, we face problems in which we must optimize multiple conflicting objectives. Moreover, these problems may involve a sequence of decisions rather than a single one. Such problems can be modelled as multi-objective Markov decision processes (MOMDPs), and one approach to solving them is multi-objective reinforcement learning (MORL).
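To make the model concrete, here is a minimal sketch (in Python, not taken from the tutorial materials) of an MOMDP as an MDP whose reward is a vector with one component per objective; the two-state, two-action dynamics below are an invented toy example:

import numpy as np

# A minimal sketch of an MOMDP: an MDP whose reward is a vector,
# one component per objective. The dynamics below are a made-up toy
# example, not from the tutorial.
N_STATES, N_ACTIONS, N_OBJECTIVES = 2, 2, 2

# P[s, a] -> next-state probabilities; R[s, a] -> vector reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.5, 0.5], [0.2, 0.8]]])

def step(state, action, rng=np.random.default_rng()):
    """Sample a transition: returns (next_state, vector_reward)."""
    next_state = rng.choice(N_STATES, p=P[state, action])
    return next_state, R[state, action]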
As one would expect, the solution to such a problem is not a single policy but a set of policies (mappings from states to probability distributions over actions). Interest in solving MOMDPs has grown in recent years, given their relevance to real-world problems. From the multi-objective optimization perspective, MOMDPs pose interesting challenges: they are typically high-dimensional, dynamic, and subject to several kinds of uncertainty.
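As an illustration of what such a solution set looks like, the following sketch filters a set of candidate policies, each summarized by its expected-return vector, down to the Pareto-optimal ones; the candidate value vectors are illustrative placeholders (maximization assumed):

import numpy as np

def dominates(u, v):
    """True if value vector u Pareto-dominates v (maximization)."""
    return np.all(u >= v) and np.any(u > v)

def pareto_front(values):
    """Keep only the non-dominated value vectors."""
    return [u for u in values
            if not any(dominates(v, u) for v in values if v is not u)]

# Illustrative per-policy expected returns over two objectives.
candidates = [np.array(v) for v in [(3.0, 1.0), (2.0, 2.0), (1.0, 1.5), (3.0, 0.5)]]
print(pareto_front(candidates))  # (1.0, 1.5) and (3.0, 0.5) are dominated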
In this tutorial, we will first introduce MOMDPs together with some of their properties and challenges. Next, we will relate the problem to more familiar problems from the multi-objective optimization literature. Then, we will present methods from both MORL and evolutionary algorithms, along with code snippets (in the spirit of the sketches shown here), to address MOMDPs. Finally, we will outline possible research directions for applying NEO methods to MOMDPs.
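As a taste of such a snippet, here is one classic single-policy MORL baseline: linearly scalarize the vector reward with a weight vector w and run tabular Q-learning on the resulting scalar signal. This sketch reuses the toy MOMDP defined above; the hyperparameters are illustrative, not values from the tutorial:

import numpy as np

def scalarized_q_learning(w, episodes=500, horizon=50,
                          alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Tabular Q-learning on the linearly scalarized reward w . r."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        s = rng.integers(N_STATES)
        for _ in range(horizon):
            # Epsilon-greedy action selection.
            a = rng.integers(N_ACTIONS) if rng.random() < epsilon else int(np.argmax(Q[s]))
            s_next, r_vec = step(s, a, rng)
            r = float(w @ r_vec)  # scalarize the vector reward
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q

# Sweeping the weight vector yields a set of policies, one per trade-off.
for w in [np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0, 1.0])]:
    Q = scalarized_q_learning(w)
    print(w, Q.argmax(axis=1))  # greedy action per state

Note that sweeping w only recovers policies on the convex part of the Pareto front, a known limitation of linear scalarization and one motivation for the evolutionary approaches the tutorial covers.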
Contact:
Dr. Carlos Hernández
https://cihdezc.github.io/
Carlos Hernández received his Ph.D. from CINVESTAV-IPN in 2017. In 2018, he held a postdoctoral fellowship at the University of Oxford in the United Kingdom, focusing on driving strategies for autonomous vehicles. He is currently an associate researcher in the