Neuromorphic computing mimics computational principles of the brain in silico and motivates research into event-based vision and spiking neural networks (SNNs). Event cameras (ECs) exclusively capture local intensity changes and offer superior power consumption, response latencies, and dynamic ranges. SNNs replicate biological neuronal dynamics and have demonstrated potential as alternatives to conventional artificial neural networks (ANNs), such as in reducing energy expenditure and inference time in visual classification. Nevertheless, these novel paradigms remain scarcely explored outside the domain of aerial robots. To investigate the utility of brain-inspired sensing and data processing, we developed a neuromorphic approach to obstacle avoidance on a camera-equipped manipulator. Our approach adapts high-level trajectory plans with reactive maneuvers by processing emulated event data in a convolutional SNN, decoding neural activations into avoidance motions, and adjusting plans using a dynamic motion primitive. We conducted experiments with a Kinova Gen3 arm performing simple reaching tasks that involve obstacles, across a set of distinct task scenarios and in comparison to a non-adaptive baseline. Our neuromorphic approach facilitated reliable avoidance of imminent collisions in simulated and real-world experiments, where the baseline consistently failed. Trajectory adaptations had little impact on safety and predictability criteria. Among the notable SNN properties were the correlation of computations with the magnitude of perceived motions and robustness to different event emulation methods. Tests with a DAVIS346 EC showed similar performance, validating our experimental event emulation. Our results motivate incorporating SNN learning, utilizing neuromorphic processors, and further exploring the potential of neuromorphic methods.
@article{abdelrahman2025neuromorphic,
  title       = {A Neuromorphic Approach to Obstacle Avoidance in Robot Manipulation},
  author      = {Abdelrahman, Ahmed and Valdenegro-Toro, Matias and Bennewitz, Maren and Pl{\"o}ger, Paul G},
  journal     = {The International Journal of Robotics Research},
  volume      = {44},
  number      = {5},
  pages       = {768--804},
  year        = {2025},
  publisher   = {SAGE Publications Sage UK: London, England},
  doi         = {10.1177/02783649241284058},
  bibtex_show = true,
}
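The frame-based event emulation mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the function name and contrast threshold are illustrative assumptions. The common idea: pixels whose log-intensity change between consecutive frames exceeds a contrast threshold emit ON/OFF events, mimicking an event camera's asynchronous output.

```python
import numpy as np

def emulate_events(prev_frame, curr_frame, threshold=0.2):
    """Emulate event-camera output from two grayscale frames.

    Returns (x, y, polarity) tuples for pixels whose log-intensity
    change exceeds the contrast threshold -- a common way to emulate
    events from conventional frames (illustrative, not the paper's code).
    """
    eps = 1e-6  # avoid log(0) on dark pixels
    diff = np.log(curr_frame + eps) - np.log(prev_frame + eps)
    on = np.argwhere(diff > threshold)    # brightness increases -> ON
    off = np.argwhere(diff < -threshold)  # brightness decreases -> OFF
    events = [(int(x), int(y), +1) for y, x in on]
    events += [(int(x), int(y), -1) for y, x in off]
    return events

# A single brightening pixel produces one ON event.
prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[1, 2] = 1.0
print(emulate_events(prev, curr))  # [(2, 1, 1)]
```

Swapping the two frames reverses the sign of the log-intensity change, so the same pixel produces an OFF event instead.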
ERF 2025
Task-Oriented Visual Object Pose Estimation for Robot Manipulation: A Modular Approach
Ahmed Abdelrahman, Peter So, Hoan Quang Le, and 2 more authors
This paper presents a general method for object pose estimation from RGB-D camera data for robot manipulation tasks. We fine-tune off-the-shelf image detection models to recognize certain objects in color images, then combine the results with point cloud information to estimate 3D object positions in a task-agnostic approach. By utilizing prior information about our manipulation task, we further estimate object orientations using additional heuristics. We demonstrate our approach and evaluate its performance on an electronic task board, and release our adaptable and easy-to-integrate implementation as a reusable software module at https://github.com/eurobin-wp1/tum-tb-perception.
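The core step of combining a 2D detection with depth data to obtain a 3D position can be sketched as below. This is a hedged illustration assuming a pinhole camera model and the median depth inside the detected box; the function name and parameters are not the released module's actual API.

```python
import numpy as np

def bbox_to_3d_position(depth, bbox, fx, fy, cx, cy):
    """Back-project the centre of a 2D detection into a 3D position
    in the camera frame (illustrative sketch, not the released API).

    depth : HxW depth image in metres (0 = invalid measurement)
    bbox  : (x_min, y_min, x_max, y_max) in pixels
    fx, fy, cx, cy : pinhole camera intrinsics
    """
    x_min, y_min, x_max, y_max = bbox
    patch = depth[y_min:y_max, x_min:x_max]
    z = float(np.median(patch[patch > 0]))  # median is robust to outliers
    u = (x_min + x_max) / 2.0               # box centre in pixel coords
    v = (y_min + y_max) / 2.0
    # Standard pinhole back-projection of (u, v, z)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```

For example, a box centred on the principal point of a camera viewing a flat surface 2 m away back-projects to (0, 0, 2) in the camera frame.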
2023
MSc Thesis
A Neuromorphic Approach to Obstacle Avoidance in Robot Manipulation
ICRA 2020
Context-aware Task Execution Using Apprenticeship Learning
An essential measure of autonomy in assistive service robots is adaptivity to the various contexts of human-oriented tasks, which are subject to subtle variations in task parameters that determine optimal behaviour. In this work, we propose an apprenticeship learning approach to achieving context-aware action generalization on the task of robot-to-human object handover. The procedure combines learning from demonstration and reinforcement learning: a robot first imitates a demonstrator’s execution of the task and then learns contextualized variants of the demonstrated action through experience. We use dynamic movement primitives as compact motion representations, and a model-based C-REPS algorithm for learning policies that can specify handover positions conditioned on context variables. Policies are learned using simulated task executions, before transferring them to the robot and evaluating emergent behaviours. We additionally conduct a user study in which participants assume different postures and receive an object from a robot, which executes handovers by either imitating a demonstrated motion or adapting its motion to handover positions suggested by the learned policy. The results confirm the hypothesized improvements in the robot’s perceived behaviour when it is context-aware and adaptive, and provide useful insights that can inform future developments.
@inproceedings{abdelrahman2020context,
  title        = {Context-aware Task Execution Using Apprenticeship Learning},
  author       = {Abdelrahman, Ahmed and Mitrevski, Alex and Pl{\"o}ger, Paul G},
  booktitle    = {2020 IEEE International Conference on Robotics and Automation (ICRA)},
  pages        = {1329--1335},
  year         = {2020},
  organization = {IEEE},
  doi          = {10.1109/ICRA40945.2020.9197476},
  bibtex_show  = true,
}
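The role of dynamic movement primitives as compact, goal-adaptable motion representations can be sketched in one dimension. All gains, the basis-function layout, and the rollout routine below are illustrative assumptions, not the paper's code: a critically damped spring-damper pulls the state toward a goal while a phase-weighted forcing term shapes the path, so changing the goal generalizes a demonstrated motion to new targets.

```python
import numpy as np

def dmp_rollout(y0, goal, weights, tau=1.0, alpha=25.0, n_steps=200):
    """Roll out a 1-D discrete dynamic movement primitive
    (illustrative sketch with commonly used gains).

    weights : Gaussian-basis weights of the learned forcing term
              (all zeros -> a plain point-to-point motion to `goal`)
    """
    beta = alpha / 4.0   # critically damped spring-damper
    alpha_x = 4.0        # canonical-system decay rate
    dt = tau / n_steps
    centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, len(weights)))
    width = len(weights) ** 1.5 / centers  # narrower basis early in the phase
    y, dy, x = y0, 0.0, 1.0                # position, velocity, phase
    path = [y]
    for _ in range(n_steps):
        psi = np.exp(-width * (x - centers) ** 2)
        # Forcing term: basis-weighted, gated by phase and scaled by amplitude
        f = (psi @ weights) / (psi.sum() + 1e-10) * x * (goal - y0)
        ddy = alpha * (beta * (goal - y) - dy) + f
        dy += ddy * dt / tau
        y += dy * dt / tau
        x += -alpha_x * x * dt / tau       # phase decays from 1 toward 0
        path.append(y)
    return np.array(path)
```

With zero weights the rollout converges smoothly from the start position to any chosen goal, which is the property context-conditioned policies can exploit by selecting the goal (e.g. a handover position) per context.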
Tech. Report
Incorporating Contextual Knowledge Into Human-Robot Collaborative Task Execution