ORCID Profile
0000-0002-6664-2335
Current Organisation
Deakin University
Publisher: IEEE
Date: 10-2016
Publisher: IEEE
Date: 09-2018
Publisher: IEEE
Date: 12-2018
Publisher: MDPI AG
Date: 29-09-2021
DOI: 10.3390/AI2040029
Abstract: Continuous action spaces impose a serious challenge for reinforcement learning agents. While several off-policy reinforcement learning algorithms provide a universal solution to continuous control problems, the real challenge lies in the fact that different actuators feature different response functions due to wear and tear (in mechanical systems) and fatigue (in biomechanical systems). In this paper, we propose enhancing actor-critic reinforcement learning agents by parameterising the final layer in the actor network. This layer produces the actions to accommodate the behaviour discrepancy of different actuators under different load conditions during interaction with the environment. To achieve this, the actor is trained to learn the tuning parameter controlling the activation layer (e.g., Tanh and Sigmoid). The learned parameters are then used to create tailored activation functions for each actuator. We ran experiments on three OpenAI Gym environments, i.e., Pendulum-v0, LunarLanderContinuous-v2, and BipedalWalker-v2. Results showed an average increase in total episode reward of 23.15% and 33.80% for the LunarLanderContinuous-v2 and BipedalWalker-v2 environments, respectively. There was no apparent improvement in the Pendulum-v0 environment, but the proposed method produced a more stable actuation signal compared to the state-of-the-art method. The proposed method allows the reinforcement learning actor to produce more robust actions that accommodate the discrepancy in the actuators' response functions. This is particularly useful for real-life scenarios where actuators exhibit different response functions depending on the load and the interaction with the environment. It also simplifies the transfer learning problem by fine-tuning the parameterised activation layers instead of retraining the entire policy every time an actuator is replaced. Finally, the proposed method would allow better accommodation of biological actuators (e.g., muscles) in biomechanical systems.
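The core idea of the abstract above — a per-actuator tuning parameter that reshapes the output activation — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `parameterised_tanh` and the scaling parameter `beta` are assumptions, and the paper's exact parameterisation of the activation layer may differ.

```python
import numpy as np

def parameterised_tanh(x, beta):
    """Tanh activation with a learnable tuning parameter per actuator.

    beta scales the slope of the response curve, so each output unit
    can adapt to the actuator it drives. In training, beta would be
    learned alongside the actor network's weights.
    """
    return np.tanh(beta * x)

# Three actuators receiving the same pre-activation, but with different
# learned tuning parameters, produce different action magnitudes.
pre_activations = np.array([0.5, 0.5, 0.5])
betas = np.array([0.5, 1.0, 2.0])
actions = parameterised_tanh(pre_activations, betas)
```

With identical inputs, the actuator with the larger `beta` saturates faster, mimicking a stiffer response function; fine-tuning only `beta` after an actuator swap is the transfer-learning shortcut the abstract describes.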
Publisher: IEEE
Date: 11-2018
Publisher: IEEE
Date: 10-2016
Publisher: IEEE
Date: 10-2018
Publisher: Elsevier BV
Date: 04-2019
Publisher: Elsevier BV
Date: 10-2019
DOI: 10.1016/J.APERGO.2019.05.004
Abstract: Ensuring a healthier working environment is of utmost importance for companies and global health organizations. In manufacturing plants, the ergonomic assessment of adopted working postures is indispensable to avoid risk factors of work-related musculoskeletal disorders. This process receives high research interest and requires extracting plausible postural information as a preliminary step. This paper presents a semi-automated end-to-end ergonomic assessment system of adopted working postures. The proposed system analyzes the human posture holistically, does not rely on any attached markers, uses low cost depth technologies and leverages the state-of-the-art deep learning techniques. In particular, we train a deep convolutional neural network to analyze the articulated posture and predict body joint angles from a single depth image. The proposed method relies on learning from synthetic training images to allow simulating several physical tasks, different body shapes and rendering parameters and obtaining a highly generalizable model. The corresponding ground truth joint angles have been generated using a novel inverse kinematics modeling stage. We validated the proposed system in real environments and achieved a joint angle mean absolute error (MAE) of 3.19±1.57
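The validation metric quoted above (a joint angle MAE reported as mean ± standard deviation) can be computed as in the sketch below. The function name `joint_angle_mae` and the array layout are assumptions for illustration, not the paper's code.

```python
import numpy as np

def joint_angle_mae(predicted, ground_truth):
    """Joint angle mean absolute error, reported as mean ± std.

    predicted, ground_truth: arrays of shape (n_samples, n_joints),
    angles in degrees. Returns (mean, std) of the per-joint MAE,
    matching the "MAE of m ± s" reporting style in the abstract.
    """
    errors = np.abs(predicted - ground_truth)   # (n_samples, n_joints)
    per_joint_mae = errors.mean(axis=0)         # MAE for each joint
    return per_joint_mae.mean(), per_joint_mae.std()

# Toy example: one sample, two joints, errors of 2° and 1°.
mean_err, std_err = joint_angle_mae(
    np.array([[3.0, 5.0]]),
    np.array([[1.0, 4.0]]),
)
```

Averaging per joint first, then summarising across joints, is one plausible reading of the "mean ± std" figure; averaging over all sample-joint pairs at once would be the other.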
Publisher: Elsevier BV
Date: 06-2015
No related grants have been discovered for Ahmed Abobakr.