Related references
Note: Only a selection of the references is listed.

Robust control under worst-case uncertainty for unknown nonlinear systems using modified reinforcement learning
Adolfo Perrusquia et al.
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL (2020)
Robot Position/Force Control in Unknown Environment Using Hybrid Reinforcement Learning
Adolfo Perrusquia et al.
CYBERNETICS AND SYSTEMS (2020)
Online Reinforcement Learning Control for the Personalization of a Robotic Knee Prosthesis
Yue Wen et al.
IEEE TRANSACTIONS ON CYBERNETICS (2020)
Adaptive Critic Designs for Event-Triggered Robust Control of Nonlinear Systems With Unknown Dynamics
Xiong Yang et al.
IEEE TRANSACTIONS ON CYBERNETICS (2019)
Position/force control of robot manipulators using reinforcement learning
Adolfo Perrusquia et al.
INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION (2019)
Data-Driven Model-Free Tracking Reinforcement Learning Control with VRFT-based Adaptive Actor-Critic
Mircea-Bogdan Radac et al.
APPLIED SCIENCES-BASEL (2019)
Optimal and Autonomous Control Using Reinforcement Learning: A Survey
Bahare Kiumarsi et al.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2018)
Adaptive Event-Triggered Control Based on Heuristic Dynamic Programming for Nonlinear Discrete-Time Systems
Lu Dong et al.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2017)
Model-based reinforcement learning for approximate optimal regulation
Rushikesh Kamalapurkar et al.
AUTOMATICA (2016)
Near Optimal Event-Triggered Control of Nonlinear Discrete-Time Systems Using Neurodynamic Programming
Avimanyu Sahoo et al.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2016)
Optimal Critic Learning for Robot Control in Time-Varying Environments
Chen Wang et al.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2015)
Actor-Critic-Based Optimal Tracking for Partially Unknown Nonlinear Discrete-Time Systems
Bahare Kiumarsi et al.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2015)
H∞ Tracking Control of Completely Unknown Continuous-Time Systems via Off-Policy Reinforcement Learning
Hamidreza Modares et al.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2015)
Continuous-Time Q-Learning for Infinite-Horizon Discounted Cost Linear Quadratic Regulator Problems
Muthukumar Palanisamy et al.
IEEE TRANSACTIONS ON CYBERNETICS (2015)
Reinforcement-Learning-Based Robust Controller Design for Continuous-Time Uncertain Nonlinear Systems Subject to Input Constraints
Derong Liu et al.
IEEE TRANSACTIONS ON CYBERNETICS (2015)
Optimal tracking control of nonlinear partially-unknown constrained-input systems using integral reinforcement learning
Hamidreza Modares et al.
AUTOMATICA (2014)
Linear Quadratic Tracking Control of Partially-Unknown Continuous-Time Systems Using Reinforcement Learning
Hamidreza Modares et al.
IEEE TRANSACTIONS ON AUTOMATIC CONTROL (2014)
A supervised Actor-Critic approach for adaptive cruise control
Dongbin Zhao et al.
SOFT COMPUTING (2013)
Reinforcement Learning and Feedback Control: Using Natural Decision Methods to Design Optimal Adaptive Controllers
Frank L. Lewis et al.
IEEE CONTROL SYSTEMS MAGAZINE (2012)
Efficient Model Learning Methods for Actor-Critic Control
Ivo Grondman et al.
IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS (2012)
Online actor-critic algorithm to solve the continuous-time infinite horizon optimal control problem
Kyriakos G. Vamvoudakis et al.
AUTOMATICA (2010)
Neural network approach to continuous-time direct adaptive optimal control for partially unknown nonlinear systems
Draguna Vrabie et al.
NEURAL NETWORKS (2009)
Discrete-time nonlinear HJB solution using approximate dynamic programming: Convergence proof
Asma Al-Tamimi et al.
IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS (2008)