Project
| # | Title | Team Members | TA | Documents | Sponsor |
|---|---|---|---|---|---|
| 19 | Vision-Guided Sorting and Pickup Cleaning Robot | Dailin Wu, Jinyang Chen, Tinghao Pan, Zihan Zhou | Meng Zhang | design_document1.pdf | |
# Vision-Guided Sorting and Pickup Cleaning Robot

## 1. Problem Definition and Motivation

Public environments such as campuses, parks, and sidewalks often accumulate scattered trash that requires frequent manual cleaning. Traditional cleaning methods rely heavily on human labor, which can be inefficient and costly for large or continuously used spaces. In addition, many existing robotic cleaning systems focus mainly on navigation and simple sweeping and lack the ability to intelligently identify and sort waste.

To address this limitation, this project aims to develop a vision-guided autonomous cleaning robot capable of detecting, classifying, and collecting trash objects. By combining computer vision with robotic manipulation, the robot can not only identify waste items but also physically remove them from the environment. This integrated perception-to-action pipeline allows the system to perform both cleaning and basic waste sorting automatically.

The success of this project will be evaluated against the following criteria:

- The robot can reliably detect and identify trash objects in its field of view.
- The system can classify waste into predefined categories.
- The robotic arm can successfully pick up and relocate detected waste items.
- The system can operate autonomously with minimal human intervention.

---

## 2. Solution Overview

The proposed solution integrates vision-based perception and robotic manipulation into a unified workflow. An onboard camera captures images of the surrounding environment, and a computer vision model analyzes these images to locate potential objects and determine whether they should be treated as waste. Once an object is identified as garbage, the system assigns it to a waste category. This classification is not only used for sorting but also helps guide the manipulation strategy.
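As a minimal sketch of what the perception output feeding this pipeline could look like, the snippet below models one detection as a small record and applies a confidence gate before acting. The field names, category set, and the 0.6 threshold are illustrative assumptions, not values taken from the design document:

```python
from dataclasses import dataclass

# Hypothetical category set; the real one would match the robot's sorting bins.
CATEGORIES = ("recyclable", "organic", "other")

@dataclass
class Detection:
    """One detected object from the vision module (illustrative fields)."""
    x: float           # bounding-box center in image frame, pixels
    y: float
    width: float       # bounding-box size, pixels
    height: float
    category: str      # one of CATEGORIES
    confidence: float  # classifier score in [0, 1]

def is_actionable(det: Detection, min_confidence: float = 0.6) -> bool:
    """Act only on detections with a known category and sufficient confidence."""
    return det.category in CATEGORIES and det.confidence >= min_confidence

# Example: a confidently detected bottle is acted on; a marginal blob is skipped.
bottle = Detection(x=320, y=240, width=60, height=140,
                   category="recyclable", confidence=0.91)
blob = Detection(x=100, y=400, width=20, height=20,
                 category="other", confidence=0.35)
print(is_actionable(bottle), is_actionable(blob))  # True False
```

Gating on confidence keeps the arm from chasing spurious detections; the threshold would be tuned empirically against the detector's precision on real scenes.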
Different waste categories may correspond to different object shapes, sizes, or surface properties, which influence how the robotic arm approaches and grasps the item.

Compared with conventional cleaning robots that only sweep debris or rely on predefined object shapes, the proposed system introduces visual intelligence and adaptive grasping, enabling the robot to handle a wider variety of waste items. The feasibility of the system is supported by the availability of common hardware components such as cameras, embedded processors, and robotic arms, as well as existing computer vision models that can be adapted for object detection and classification.

---

## 3. System Architecture and Components

### Vision Module

The vision module captures images using an onboard camera and processes them through a trained vision model to detect objects and classify potential waste. The output of this module includes the object's location, category, and estimated properties.

### Decision and Planning Module

Based on the detection results, this module determines whether the object should be collected and calculates the appropriate grasping strategy. It generates the required motion commands for the robotic arm.

### Manipulation Module

The robotic arm performs the physical pick-and-place action. The arm adjusts its approach direction, grasp point, and gripping force to accommodate different types of waste objects.

### Sorting and Storage Module

After an object is successfully grasped, it is placed into the corresponding container or storage area according to its waste category.
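The category-dependent behavior spanning the planning, manipulation, and sorting modules above can be sketched as a lookup from class label to grasp parameters and a destination bin. The categories, approach directions, force values, and bin indices below are placeholders for illustration, not tuned values from the design:

```python
from dataclasses import dataclass

@dataclass
class GraspPlan:
    """Illustrative per-category arm strategy and storage destination."""
    approach: str      # approach direction for the arm
    grip_force: float  # normalized gripper force in [0, 1]
    bin_id: int        # index of the storage container

# Assumed mapping; real parameters would be tuned on the physical arm.
STRATEGIES = {
    "recyclable": GraspPlan(approach="top-down", grip_force=0.4, bin_id=0),
    "organic":    GraspPlan(approach="top-down", grip_force=0.3, bin_id=1),
    "other":      GraspPlan(approach="side",     grip_force=0.6, bin_id=2),
}

def plan_grasp(category: str) -> GraspPlan:
    """Map a classified waste category to a grasp plan and destination bin."""
    try:
        return STRATEGIES[category]
    except KeyError:
        raise ValueError(f"no grasp strategy for category {category!r}")

print(plan_grasp("organic").bin_id)  # 1
```

Keeping the mapping in one table makes it easy to add a new waste category later: one new entry defines both how the arm handles the item and where it is deposited.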