Project
| # | Title | Team Members | TA | Documents | Sponsor |
|---|---|---|---|---|---|
| 42 | Human-Robot Interaction for Object Grasping with Mixed Reality and Robotic Arms | Jiayu Zhou, Jingxing Hu, Yuchen Yang, Ziming Yan | Gaoang Wang | design_document1.pdf, final_paper1.pdf, final_paper2.pdf, final_paper3.pdf, final_paper4.pdf, proposal3.pdf | |
Human-Robot Interaction for Object Grasping with Mixed Reality and Robotic Arms

#Team Members:
Student 1 jiayu9
Student 2 zimingy3
Student 3 yucheny8
Student 4 hu80

#Problem
Current robotic systems lack intuitive, seamless human-robot interaction for object manipulation. Traditional teleoperation methods often require complex controllers, making it difficult for users to interact naturally. With advances in Mixed Reality (MR) and robotic systems, it is now possible to develop an intuitive interface where users manipulate objects in a virtual space and a robotic arm replicates those actions in real time. This project aims to bridge the gap between human intention and robotic execution by integrating MR with robotic grasping, enabling precise and efficient remote object manipulation.

#Solution
Our solution is a Mixed Reality-based control system using Microsoft HoloLens that allows users to interact with virtual objects via hand gestures. These interactions are then translated into real-world grasping motions performed by a robotic arm. The system consists of three key subsystems: (1) Digital Twin Creation, (2) MR-based Interaction & Control, and (3) Robotic Arm Execution. This approach keeps virtual and real-world interactions synchronized, improving accessibility and usability for robotic object manipulation.

#Solution Components

Subsystem 1: Digital Twin Creation
This subsystem focuses on generating accurate 3D models of real-world objects for use in Mixed Reality (see the mesh-preparation sketch after this section).
Components:
RealityCapture software – for photogrammetry-based 3D model generation.
Gaussian Splatting – for efficient, high-fidelity neural rendering of objects.
Camera (e.g., a DSLR or high-resolution smartphone) – to capture ~100 images per object.
Blender/MeshLab – for 3D model optimization and format conversion.
Unity with MRTK (Mixed Reality Toolkit) – to integrate the digital twins into MR.

Subsystem 2: Mixed Reality Interaction & Control
This subsystem enables users to interact with the digital twins via Microsoft HoloLens (see the gesture-detection sketch after this section).
Components:
Microsoft HoloLens 2 – to provide an immersive MR experience.
MRTK (Mixed Reality Toolkit) in Unity – for hand tracking and object interaction.
Azure Kinect (optional) – for improved depth sensing and object recognition.
Custom hand-gesture recognition algorithm – to detect and map user actions to grasping commands.

Subsystem 3: Robotic Arm Execution
This subsystem translates user interactions into real-world robotic grasping (see the bridge and motion-planning sketches after this section).
Components:
Robotic arm (e.g., UR5, Kinova Gen3, or equivalent) – for object grasping.
ROS (Robot Operating System) with MoveIt! – for motion planning and control.
Unity-to-ROS bridge (WebSocket or ROSBridge) – for communication between the HoloLens and ROS.
Custom grasping algorithm – to ensure stable and efficient object manipulation.
External camera for robot-arm reference – to assist with object localization and depth perception, improving grasping accuracy.
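To make the Subsystem 1 cleanup step concrete, here is a minimal Python sketch of the kind of mesh preparation Blender/MeshLab would perform before Unity import. The open-source trimesh library, the file names, and the 20k-face budget are our assumptions for illustration, not part of the toolchain above.

```python
# Mesh-preparation sketch for Subsystem 1 (file names are hypothetical).
# Assumes RealityCapture has already exported a dense photogrammetry mesh;
# trimesh (pip install trimesh) decimates it and re-exports a Unity-friendly OBJ.
import trimesh

# Load the raw photogrammetry mesh, forcing a single mesh even for multi-part OBJs.
mesh = trimesh.load("mug_raw.obj", force="mesh")

# Drop disconnected debris: keep only the largest connected component.
parts = mesh.split(only_watertight=False)
mesh = max(parts, key=lambda m: m.area)

# Decimate to a face budget the HoloLens can render comfortably (assumed ~20k).
mesh = mesh.simplify_quadric_decimation(face_count=20_000)

# Center the model at the origin so the digital twin anchors cleanly in MR.
mesh.apply_translation(-mesh.centroid)

# Export in a format Unity imports directly.
mesh.export("mug_twin.obj")
print(f"exported {len(mesh.faces)} faces")
```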
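In the actual system the hand-gesture mapping would run in Unity/MRTK (C#) on the HoloLens; the Python sketch below only illustrates the core logic. The joint inputs and the 2 cm pinch threshold are assumed tuning values, not measured parameters of this project.

```python
# Pinch-gesture detection sketch for Subsystem 2 (logic only; on-device this
# would consume MRTK hand-joint data in Unity). Positions are 3-D points in
# meters; the 2 cm thumb-index threshold is an assumption.
import math

PINCH_THRESHOLD_M = 0.02  # thumb-index distance that counts as a grasp

def distance(a, b):
    """Euclidean distance between two (x, y, z) points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def detect_grasp(thumb_tip, index_tip, palm):
    """Map a tracked hand pose to a grasp command.

    Returns a dict the robot side can consume: whether the gripper should
    close, and where (the palm position serves as the target grasp point).
    """
    pinching = distance(thumb_tip, index_tip) < PINCH_THRESHOLD_M
    return {"close_gripper": pinching, "target": palm}

# Example frame: thumb and index nearly touching -> grasp command issued.
cmd = detect_grasp((0.10, 0.02, 0.30), (0.11, 0.02, 0.30), (0.09, 0.00, 0.28))
print(cmd)  # {'close_gripper': True, 'target': (0.09, 0.0, 0.28)}
```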
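A minimal sketch of the ROSBridge leg of Subsystem 3, using the roslibpy client to talk to a rosbridge_server websocket. The host address, the topic name /mr_grasp/target, and the choice of geometry_msgs/Point are placeholders chosen for illustration; the HoloLens side would use an equivalent C# client.

```python
# Unity-to-ROS bridge sketch for Subsystem 3, using roslibpy
# (pip install roslibpy) against a rosbridge_server websocket.
import time
import roslibpy

# Connect to rosbridge (9090 is its default port; host is a placeholder).
client = roslibpy.Ros(host="192.168.1.50", port=9090)
client.run()

# Topic the robot-side node listens on for grasp targets (name assumed).
grasp_topic = roslibpy.Topic(client, "/mr_grasp/target", "geometry_msgs/Point")

# Publish the palm position produced by the gesture recognizer.
grasp_topic.publish(roslibpy.Message({"x": 0.09, "y": 0.0, "z": 0.28}))

time.sleep(1)  # give the websocket a moment to flush before closing
client.terminate()
```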
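And a sketch of the execution side with ROS 1's moveit_commander. The planning-group name "manipulator" matches common UR5 MoveIt configs but is an assumption here; a real grasp would additionally command the gripper and a task-appropriate end-effector orientation.

```python
# MoveIt execution sketch for Subsystem 3 (ROS 1 / moveit_commander).
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("mr_grasp_executor")

# Planning group name is an assumption (common default for UR5 configs).
arm = moveit_commander.MoveGroupCommander("manipulator")

def grasp_at(x, y, z):
    """Plan and execute a reach to the commanded grasp point."""
    target = Pose()
    target.position.x, target.position.y, target.position.z = x, y, z
    target.orientation.w = 1.0  # identity orientation; real grasps need more
    arm.set_pose_target(target)
    success = arm.go(wait=True)  # plan and execute in one call
    arm.stop()                   # guard against residual motion
    arm.clear_pose_targets()
    return success

# Execute the target produced by the gesture recognizer.
grasp_at(0.09, 0.0, 0.28)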
#Criterion for Success
--Successfully generate and import at least 10 digital twin objects into Mixed Reality.
--Users should be able to interact with objects using hand gestures tracked by the HoloLens.
--The system should accurately map hand gestures to robotic arm movements in real time.
--The robotic arm should replicate the grasping motion within 2 minutes of the user interaction.
--Ensure seamless integration between MR and robotic control, with minimal latency.
--Conduct a successful live demonstration showing MR-based grasping and real-world execution.
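One way to verify the latency criterion is to time the round trip from publishing a gesture command to receiving the arm's acknowledgement. The sketch below reuses the bridge setup above; the acknowledgement topic /mr_grasp/ack is hypothetical and would need a matching publisher on the robot side.

```python
# Round-trip latency check for the success criteria (topic names assumed).
import time
import roslibpy

client = roslibpy.Ros(host="192.168.1.50", port=9090)
client.run()

cmd = roslibpy.Topic(client, "/mr_grasp/target", "geometry_msgs/Point")
ack = roslibpy.Topic(client, "/mr_grasp/ack", "std_msgs/Empty")

start = time.monotonic()

def on_ack(_msg):
    # Robot-side node publishes here once the grasp command is accepted.
    print("round-trip latency: {:.3f} s".format(time.monotonic() - start))
    client.terminate()

ack.subscribe(on_ack)
cmd.publish(roslibpy.Message({"x": 0.09, "y": 0.0, "z": 0.28}))

# Keep the main thread alive until the acknowledgement arrives.
while client.is_connected:
    time.sleep(0.1)
```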