Hossein Nejatbakhsh Esfahani
PhD Candidate, Department of Engineering Cybernetics, Faculty of Information Technology and Electrical Engineering
Background and activities
I hold a Master's degree in Mechatronics from the University of Tabriz, Iran (2012). My thesis, Robust Trajectory Tracking Control for Underwater Vehicle-Manipulator Systems, addressed the control of a highly complex nonlinear platform with uncertain dynamics, in the presence of external time-varying disturbances induced by winds, waves, and ocean currents.
Since 2012 I have worked on control-theory problems in both theoretical and practical settings. I gained five years (2013-2018) of hands-on experience in industrial automation and its applications to turbo-generator plants, oil refineries, gas turbines, and casting machines in steel plants. I am interested in controlling complex dynamics using both classical control methods (chiefly MPC and SMC) and machine-learning-based algorithms such as reinforcement learning. Over the past years, I have conducted research in the nonlinear control area, including robust and adaptive control algorithms. As test-bed infrastructure, I have chiefly adopted marine autonomous platforms and flying robots such as unmanned aerial vehicles (UAVs) as case studies to evaluate the performance of the developed control algorithms. In such cases, one can consider model uncertainties, coupling effects, and complex disturbances such as winds, waves, and ocean currents to analyze the robustness of the proposed controllers.

As upcoming work, I am exploring a control approach for partially observable dynamics, in which the plant/process is treated as a Partially Observable Markov Decision Process (POMDP), rather than an MDP, in the context of reinforcement learning. The proposed strategy combines a controller (Model Predictive Control, MPC) and an estimator (Moving Horizon Estimation, MHE), both of which are adjusted by reinforcement learning. The contribution is therefore an (MPC+MHE)-based reinforcement learning algorithm that will hopefully handle a POMDP plant safely. I have found that integrating nonlinear control concepts with soft computing and learning algorithms can dramatically improve their performance.
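The core idea of adjusting an MPC scheme with reinforcement learning can be illustrated with a minimal toy sketch. Everything here is an illustrative assumption, not the actual research code: a hypothetical scalar linear plant, a one-step MPC whose terminal weight `theta` is the learnable parameter, and a plain TD(0)-style update that tunes `theta` from observed stage costs. The MPC objective itself doubles as the parameterized Q-function approximator.

```python
import numpy as np

# Hypothetical scalar plant x_next = a*x + b*u + w (illustrative values only)
a, b, r, gamma = 0.9, 0.5, 0.1, 0.95
rng = np.random.default_rng(0)

def mpc_policy(x, theta):
    # One-step MPC: argmin_u  x^2 + r*u^2 + theta*(a*x + b*u)^2  (closed form)
    return -theta * a * b * x / (r + theta * b**2)

def q_value(x, u, theta):
    # The MPC objective, reused as a parameterized Q-function
    return x**2 + r * u**2 + theta * (a * x + b * u)**2

theta, alpha = 1.0, 1e-3   # learnable terminal weight and step size
x = 1.0
for _ in range(5000):
    u = mpc_policy(x, theta)
    cost = x**2 + r * u**2                        # observed stage cost
    x_next = a * x + b * u + 0.01 * rng.standard_normal()
    v_next = q_value(x_next, mpc_policy(x_next, theta), theta)
    td_error = cost + gamma * v_next - q_value(x, u, theta)
    theta += alpha * td_error * (a * x + b * u)**2  # dQ/dtheta
    theta = max(theta, 0.0)                       # keep the terminal weight valid
    x = x_next
```

The actual research targets the far harder partially observable case, where an MHE scheme (similarly parameterized and similarly updated) would supply the state estimate that this sketch simply assumes is measured.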
In January 2020, I started as a PhD fellow under the supervision of Professor Sebastien Gros at the Department of Engineering Cybernetics, NTNU.
My working title is Safe Reinforcement Learning based on Model Predictive Control and Moving Horizon Estimation.
My research interests include the following:
- Robust/optimal control systems
- Approximate dynamic programming
- Marine robotics (autonomous ships and underwater vehicles)
- Instrumentation and industrial automation