Article

Visuo-Tactile Feedback-Based Robot Manipulation for Object Packing

Journal

IEEE Robotics and Automation Letters
Volume 8, Issue 2, Pages 1151-1158

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/LRA.2023.3236884

Keywords

Affordances; Robot sensing systems; Robots; Planning; Grasping; Task analysis; Visualization; Manipulation planning; Force and tactile sensing

Abstract

A new visuo-tactile feedback-based manipulation planning framework is proposed in this work, which combines multisensory feedback and an attention-guided deep affordance model as perceptual states with a deep reinforcement learning pipeline. Multiple sensory modalities, including vision and touch, are employed to predict and indicate manipulable regions for objects with similar appearances but different intrinsic properties. The proposed method achieves better accuracy and higher efficiency in the object-packing task.

Robots are increasingly expected to manipulate objects whose properties carry high perceptual uncertainty under any single sensory modality, which directly affects manipulation success. Object packing is one such challenging manipulation task. In this work, a new visuo-tactile feedback-based manipulation planning framework for object packing is proposed, which makes use of on-the-fly multisensory feedback and an attention-guided deep affordance model as perceptual states, together with a deep reinforcement learning (DRL) pipeline. Notably, multiple sensory modalities, vision and touch [tactile and force/torque (F/T)], are employed to predict and indicate the manipulable regions of multiple affordances (i.e., graspability and pushability) for objects with similar appearances but different intrinsic properties (e.g., mass distribution). To improve manipulation efficiency, the DRL algorithm is trained to select the optimal actions for successful object manipulation. The proposed method is evaluated on both an open dataset and our collected dataset, and is demonstrated in the use case of the object packing task. The results show that the proposed method outperforms existing methods, achieving better accuracy with much higher efficiency.
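
For readers who want a concrete picture of the pipeline the abstract outlines, the Python/PyTorch sketch below illustrates the two core ideas: attention-guided fusion of visual and touch features into graspability/pushability affordance maps, and action selection over those maps. This is a minimal sketch under stated assumptions, not the authors' implementation; all class names, network dimensions, and the greedy selection rule (standing in for the learned DRL policy) are hypothetical.

    # Minimal sketch (not the paper's implementation): attention-guided
    # visuo-tactile affordance prediction plus a simple action-selection
    # step in the spirit of the DRL pipeline described in the abstract.
    # All module names, dimensions, and the fusion scheme are assumptions.

    import torch
    import torch.nn as nn

    class VisuoTactileAffordanceNet(nn.Module):
        """Fuses RGB and force/torque features with channel attention and
        predicts per-pixel graspability and pushability maps."""

        def __init__(self, ft_dim: int = 6):
            super().__init__()
            # Visual encoder: RGB image -> spatial feature map.
            self.visual = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Touch encoder: a 6-D force/torque reading -> feature vector.
            self.touch = nn.Sequential(
                nn.Linear(ft_dim, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
            )
            # Attention: touch features gate visual features channel-wise,
            # emphasizing regions consistent with sensed intrinsic
            # properties such as mass distribution.
            self.gate = nn.Linear(64, 64)
            # Decoder: fused features -> 2 affordance maps (grasp, push).
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),
            )

        def forward(self, rgb, ft):
            v = self.visual(rgb)                     # (B, 64, H/4, W/4)
            t = self.touch(ft)                       # (B, 64)
            attn = torch.sigmoid(self.gate(t))       # (B, 64) channel weights
            fused = v * attn[:, :, None, None]       # attention-guided fusion
            return torch.sigmoid(self.decoder(fused))  # (B, 2, H, W) in [0, 1]

    def select_action(affordances: torch.Tensor):
        """Greedy selection over affordance maps, a stand-in for the learned
        DRL policy: pick the action type (grasp or push) and the pixel
        location with the highest predicted affordance value."""
        idx = int(torch.argmax(affordances.flatten()))
        c, hw = divmod(idx, affordances.shape[1] * affordances.shape[2])
        y, x = divmod(hw, affordances.shape[2])
        return ("grasp" if c == 0 else "push"), (y, x)

    if __name__ == "__main__":
        net = VisuoTactileAffordanceNet()
        rgb = torch.rand(1, 3, 64, 64)   # placeholder camera frame
        ft = torch.rand(1, 6)            # placeholder force/torque reading
        maps = net(rgb, ft)[0]           # (2, 64, 64) affordance maps
        print(select_action(maps))       # e.g. ('push', (17, 42))

The channel-wise gating shown here is one simple way touch can modulate visual features; the paper's attention mechanism and its DRL training procedure for action selection are more involved than this greedy rule.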
