Partially Observable MDPs, Monte Carlo Methods, and Sustainable Fisheries

This Earmarked Scholarship project is aligned with a recently awarded Category 1 research grant. It offers you the opportunity to work with leading researchers and contribute to large projects of national significance.

Supervisor – Professor Dirk

Reinforcement learning is a learning paradigm that is widely believed to play an important role in enabling machines to behave intelligently. While significant advances have been made in recent years, most work focuses on fully observable environments, and many challenges remain in dealing with partially observable environments. In particular, large and complex environments often demand a lot of computation and a large number of samples. This project aims to develop computationally and sample-efficient reinforcement learning algorithms under Partially Observable Markov Decision Processes (POMDPs), a general framework for decision making under uncertainty (including partial observability). Several ideas will be explored, implemented and tested - these include integrating planning and reinforcement learning, and leveraging recent advances in sample-efficient reinforcement learning for Markov Decision Processes.
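To give a flavour of the framework, the central object in a POMDP is the belief state - a probability distribution over hidden states that is updated by Bayes' rule after each action and observation. The following is a minimal sketch of that update on a hypothetical two-state toy problem; all model names and numbers are illustrative assumptions, not part of the project itself.

```python
import numpy as np

# Hypothetical toy POMDP: 2 hidden states, 2 actions, 2 observations.
# T[a][s, s'] : probability of moving from state s to s' under action a
T = np.array([
    [[0.9, 0.1],
     [0.2, 0.8]],   # action 0
    [[0.5, 0.5],
     [0.5, 0.5]],   # action 1
])
# O[a][s', o] : probability of observing o after landing in s' under action a
O = np.array([
    [[0.8, 0.2],
     [0.3, 0.7]],   # action 0
    [[0.6, 0.4],
     [0.4, 0.6]],   # action 1
])

def belief_update(b, a, o):
    """Bayes-filter update: posterior belief after action a and observation o."""
    # Predict: push the current belief through the transition model.
    predicted = b @ T[a]                  # shape (n_states,)
    # Correct: weight by the observation likelihood, then normalise.
    unnormalised = predicted * O[a][:, o]
    return unnormalised / unnormalised.sum()

b0 = np.array([0.5, 0.5])   # uniform prior over the hidden states
b1 = belief_update(b0, a=0, o=0)
print(b1)                   # posterior belief; components sum to 1
```

A POMDP planner chooses actions as a function of this belief rather than of the (unobserved) state, which is what makes exact planning expensive and motivates the sample-efficient approximations the project targets.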

Preferred educational background

Applications will be judged on a competitive basis taking into account the applicant's previous academic record, publication record, honours and awards, and employment history.

The applicant should demonstrate academic achievement in the field(s) of mathematics, statistics, and programming, and the potential for scholastic success.

A background in, or knowledge of, machine learning / deep learning is highly desirable.

*The successful candidate must commence by Research Quarter 1, 2021. You should apply at least 3 months prior to the research quarter commencement date. International applicants may need to apply much earlier for visa reasons.

Apply now