Projects
# | Title | Team Members | TA | Professor | Documents | Sponsor |
---|---|---|---|---|---|---|
1 | A Compact Material Modulus Measurement Instrument |
Kongning Lai Tianyu Fu Yunzhi Lu Ziyi Lin |
proposal1.pdf |
Huan Hu | ||
# Problem Soft materials, including hydrogels and polymers, are widely used in fields such as biomedicine, protective coatings, and electronics. Their mechanical modulus is a key parameter for precise design. However, challenges arise in the measurement process. The system will use a resistance bridge as its sensor, which has an inherent zero shift. As the measurement proceeds, this shift accumulates due to temperature and other factors, and the accumulated shift may exceed the tolerance of the experiment, leading to inaccuracies. Furthermore, no software is available for accurately performing baseline correction, contact point estimation, and contact model selection on force-displacement curves obtained from new instruments, further complicating precise measurements. # Solution Overview This project aims to develop a macro-scale instrument replicating the functionality of an atomic force microscope (AFM) for measuring the mechanical modulus of soft materials. To mitigate the zero shift, the project will develop a feedback system that controls the shift and keeps the bridge resistances balanced. The approach begins by implementing AFM data processing functions in Python, providing a foundation for accurate analysis. Building upon this, machine learning techniques will then be integrated to enhance both the speed and accuracy of the measurements, ultimately improving the reliability of soft material characterization. # Solution Components ## Subsystem 1 (Hardware) ### Part A - Multiple positioning stages: Provide precise control over specimen alignment and movement. - Metal cantilever with adhesive-attached balls: Adhesively attached balls ensure uniform force distribution. - Strain gauge for measuring both strain and force: Ensures accurate data collection for analyzing material properties. ### Part B - Wheatstone bridge used to measure resistance variations. 
- Differential amplifier with a reference voltage source: amplifies the signal and stabilizes the system. - Voltage-Controlled Resistor (VCR): adjusts the bridge balance dynamically to compensate for zero shift. ## Subsystem 2 (Software) ### Part A - Python code for data preprocessing. The process begins with raw FZ curve input, followed by baseline correction, contact point estimation, and feature extraction. - A Linear Discriminant Analysis (LDA) classifier will be used to determine the contact model based on features extracted from force-displacement (FZ) curves. ### Part B - PC-based AFM control software with a graphical interface, using an STM32 development board as the lower controller and Qt as the upper controller. - The software will generate control signals for the motor and receive strain gauge signals from the system. - It will calculate displacement from motor speed and force from strain gauge readings. The force-displacement curve will be displayed in real time and saved locally, providing input for the LDA classifier. # Criteria of Success The mechanical system must reliably generate precise and controllable force curves while the metal cantilever with glued balls maintains a consistent contact geometry, ensuring that variations in the force-indentation response are due solely to the material properties. Additionally, active compensation with a voltage-controlled resistor must be implemented to eliminate zero-shift error, maintaining measurement accuracy. Furthermore, preliminary data processing on collected force-displacement curves must be conducted accurately and robustly to ensure reliable analysis and interpretation. |
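The preprocessing pipeline described above (baseline correction, then contact point estimation on the FZ curve) can be sketched in Python. This is a minimal illustration rather than the project's actual code; the linear-baseline model and the noise-threshold contact criterion are our assumptions:

```python
import numpy as np

def correct_baseline(z, f, baseline_fraction=0.3):
    """Fit a line to the assumed non-contact region (the first
    `baseline_fraction` of the approach curve) and subtract it,
    removing drift such as the bridge's accumulated zero shift."""
    n = max(2, int(len(z) * baseline_fraction))
    slope, intercept = np.polyfit(z[:n], f[:n], 1)
    return f - (slope * z + intercept)

def estimate_contact_point(z, f, noise_sigmas=5.0, baseline_fraction=0.3):
    """Return the first z where the corrected force rises above the
    baseline noise floor by `noise_sigmas` standard deviations."""
    n = max(2, int(len(z) * baseline_fraction))
    threshold = noise_sigmas * np.std(f[:n])
    return z[np.argmax(f > threshold)]
```

Features extracted downstream of this step (indentation depth, fitted exponents, and so on) would then feed the LDA contact-model classifier.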
||||||
2 | Terrain-adaptive bipedal service robot |
Binhao Wang Gaokai Zhang Yuan Zhou Zihao Ye |
proposal1.pdf |
Liangjing Yang | ||
# Description Considering the high complexity and weight of existing legged robots, it is difficult to put them into practical applications at scale. Therefore, our project will develop a 6-DOF biped robot using a low-cost, torque-controlled actuator module with brushless motors. Built with 3D-printed and off-the-shelf parts, this lightweight, replicable robot leverages reinforcement learning for precise terrain adaptability, making it ideal for applications like transport and filming. # Deliverables - In this project, we aim to complete 3D printing of parts and procurement of motors and other necessary components, followed by full assembly of the biped robot. - We will also finish the implementation and integration of the control and communication circuits, with successful firmware deployment. - In addition, we will complete the training of a neural network for walking control in a simulation environment to achieve stable and adaptable movement patterns. - Finally, we will deploy the trained model through a computer-based communication interface, enabling real-time control and adaptability across various terrains. |
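Reinforcement-learning walking controllers like the one planned here are typically trained against a shaped reward. A minimal sketch of such a reward term (the weights and term choices below are illustrative assumptions, not the project's actual design):

```python
def walking_reward(forward_vel, torques, roll, pitch,
                   w_vel=1.0, w_torque=1e-3, w_orient=0.5):
    """Shaped locomotion reward: encourage forward progress, penalize
    actuation effort (sum of squared joint torques) and body tilt
    (squared roll/pitch), so the policy learns stable, efficient gaits."""
    effort = sum(t * t for t in torques)
    tilt = roll * roll + pitch * pitch
    return w_vel * forward_vel - w_torque * effort - w_orient * tilt
```

In a simulator, this scalar would be returned each control step; the weights trade off speed against stability and energy use, and are tuned empirically per terrain.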
||||||
3 | DESIGN AND CONTROL OF A FETCHING QUADRUPED |
Jitao Li Teng Hou Wenkang Li Yikai Cao |
other1.docx proposal1.pdf proposal2.pdf |
Hua Chen | ||
There are various commercially available robotic dog platforms, yet none of them demonstrates a "fetching" skill. One reason is the lack of integration of a manipulator with the dog. To be compatible with the robot dog, the robot arm needs to be lightweight, accurate, and robust. The integrated system will be able to perform simple tasks such as fetching, with the help of visual feedback. Such a manipulator requires a new design, good coordination of its components, and a dedicated controller. | ||||||
4 | Automatic Page-Turning Photocopier |
Shuchang Dong Xuan Zhu Yingying Gao Yiying Lyu |
other1.pdf proposal1.pdf |
Liangjing Yang | ||
# Problem Current photocopying machines require manual page-turning, which is inconvenient and inefficient. Additionally, these machines are limited in their applicability, as they are primarily designed for bound documents such as books. This limitation restricts their use in scenarios where unbound or irregularly shaped documents need to be copied, such as in printing shops or educational institutions. As a result, there is a need for a more versatile and efficient solution to streamline the document copying process in these environments. # Solution Overview Our solution for reducing human labor when using a photocopier is to create a robotic arm. The system has multiple linkages to support movement, a camera to capture images of each page, a screen that allows operators to monitor progress and perform necessary operations, and adaptive lighting to accommodate varying book bindings and paper conditions. Additionally, the photocopier can be controlled by an automated computer program, enabling seamless page-turning during photo or scanning processes, thereby enhancing efficiency and minimizing manual effort. # Components ## Page-Turning Subsystem - A robotic arm or mechanism is required to automatically turn pages. This mechanism will be supported by a robust frame that ensures stability and precision. - A base frame that is easy to recognize is required to support the materials. ## Photocopying Subsystem - Equipped with a high-resolution camera, this system will capture clear and detailed images of each page. - The LED lighting will be optimized based on the characteristics of the paper to prevent reflections and ensure complete content capture. ## Input/Output Interface - An intuitive user interface will allow users to input commands such as page settings. - A display screen is needed for easy interaction and status updates. # Criteria of Success - Turns one page at a time. - Completes a page-turning action within 6 seconds. 
- Produces clear scanned images with no significant shadows or reflections. - Automatically stops when all pages have been turned or when reaching a preset page number. |
||||||
5 | Four-axis vacuum stage for advanced nano-manufacturing |
Songyuan Lyu Xingjian Kang Yanghonghui Chen Yanjie Li |
proposal1.pdf |
Oleksiy Penkov | ||
Four-axis vacuum stage for advanced nano-manufacturing Request for Approval # Team Members - Songyuan Lyu [slv4] - Xingjian Kang [xk9] - Yanjie Li [yanjiel2] - Yanghonghui Chen [yc47] # Problem In recent years, nanocoating technology has developed rapidly. The advancements in this area include better surface performance, prolonged material lifespan, and higher corrosion resistance. For example, the market for ceramic-based nanocoating is expected to grow at a CAGR of 7.6% from 2022 to 2032 and finally reach a value of US$21.82 billion; it currently stands at US$10.49 billion. However, most PVD coating machines can only handle 2D or 2.5D components and cannot coat complicated surfaces, which makes it hard to meet the coating requirements of industries such as aerospace for complex components. Coating complicated, irregular objects creates obstacles to producing high-quality coating films. For instance, to coat an artificial tooth, a company must deposit films multiple times and manually change the tooth's posture to achieve comprehensive coverage. This not only reduces the uniformity of the coating and shortens its lifetime, but also increases the coating time. # Solution Overview We decided to build a 4-DOF robotic arm (four-axis vacuum stage) that allows components to move freely within three-dimensional space and enables 3D magnetron sputtering. Traditional nanocoating technology applies the magnetron sputtering PVD method to form thin films; dealing with irregularly shaped specimens often leads to non-uniform coating membranes. The aim is to design and construct a robotic manipulator to address these issues. This device will control the specimen's posture, enhancing the uniformity and mechanical properties of the coating films. 
# Solution Components ## Mechanical System - Aluminum industrial profile - Aluminium castings - Belt ## Control System - STM32F407 MCU - ZDT Emm42_V5.0 stepper motor controller ## Actuator - Stepper motors for motion (42*48, 0.6 Nm; 28*30, 0.07 Nm) - Reduction gears for enlarging the torque (1:10; 1:50) ## Interface - 7-inch TFT touch screen, serial TTL protocol - USB port for computer access ## Communication Protocol - RS485 between the MCU and the stepper controllers ## Integration - Design a PCB that integrates the power supply, microcontroller, stepper motor controllers, and wiring on one single board # Criteria for Success - The actuators must move correctly in response to the controller's commands, reaching the right angle and position without losing steps. - The system must work correctly inside the magnetron sputtering coating machine, unaffected by the strong magnetism and high-heat vacuum environment. - The coating layer must achieve a uniformity error of less than 20%. - The system must work for objects with different shapes and sizes. |
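The relationship between a commanded stage angle and the microsteps the controller must issue through the listed 1:10 and 1:50 reduction gears can be sketched as below. The 1.8°/full-step motor and 16x microstepping values are common defaults and are assumptions here, not figures from the proposal:

```python
def steps_for_angle(angle_deg, gear_ratio=10,
                    full_step_deg=1.8, microsteps=16):
    """Microsteps the stepper controller must issue so that the
    output shaft (after the reduction gear) rotates `angle_deg`.
    The gear multiplies motor rotation, so it also multiplies
    angular resolution and torque at the stage."""
    motor_angle = angle_deg * gear_ratio
    return round(motor_angle / full_step_deg * microsteps)
```

For example, a 90-degree stage move through the 1:10 gear needs 900 degrees of motor rotation; the same formula with `gear_ratio=50` shows why the high-ratio axis gets much finer positioning resolution.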
||||||
6 | Smart medicine box |
Ruolin Zhao Wentao Ke Yutong Lou Zhiyi Mou |
proposal1.pdf proposal2.pdf |
Wee-Liat Ong | ||
#PROBLEM: The global aging population is growing rapidly, especially among elderly individuals living alone. Most older adults have at least one chronic illness that requires regular medication. Managing these medications has become a major concern in healthcare. However, due to declining memory function and complex medication regimens, many elderly patients often miss doses or take incorrect medications, leading to serious health risks. Similar issues are also observed among younger individuals who struggle with medication adherence. Currently, many smart medicine boxes on the market have several shortcomings: their operation is often complex (making them difficult for seniors to use), they may lack safety mechanisms, and they offer limited remote monitoring. These limitations prevent family members or caregivers from accurately tracking a patient’s medication adherence. #SOLUTION OVERVIEW: We propose a smart medicine box that prioritizes usability, safety, and remote monitoring. This device will help ensure patients take their medications correctly and on time by combining intelligent features with a user-friendly design. The system integrates a multimodal AI model capable of intelligent prescription recognition. It can read and interpret prescriptions—identifying medication names, dosages, and verifying the prescription for accuracy. Once the required medications are loaded, the system will automatically segment pills from blister packs and store them in dedicated compartments. Medications are then dispensed in precise doses according to the prescribed schedule. A rotating dispensing mechanism releases the correct pill count at each dose time. If a patient misses a dose or if the medication supply is running low, the system triggers a voice reminder and sends an alert notification to caregivers. This ensures that missing doses are noticed promptly, and that refills or assistance can be provided in a timely manner. 
#SOLUTION COMPONENTS: Intelligent Recognition and Analysis System: Reads and recognizes prescription information, storing the medication schedule and dosage details for each drug. Medication Segmentation and Storage System: An automated cutting mechanism segments blister-packed medications and categorizes them into storage compartments. Built-in sensors monitor the stock levels of each medication. Medication Dispensing System: A rotating dispenser mechanism releases one pill at a time (or the required number of pills) for each scheduled dose, ensuring the correct dosage is dispensed at the right time. Safety and Monitoring System: If the patient does not take the dispensed medication on time, a voice reminder is activated to prompt them. If the medication supply is low, the device issues a voice alert and also sends a notification to designated caregivers or family members for remote monitoring. #CRITERION FOR SUCCESS: Accurate Information Handling: Ability to read, recognize, and record medication and prescription information with a high degree of accuracy. Reliable Segmentation and Storage: Capability to automatically segment and store different types of medications (including blister-packed pills) without mixing or errors. Timed Dispensing and Reminders: Timely voice reminders and automatic dispensing of the correct dosage at designated times, consistently adhering to the schedule. Alert Mechanisms: Effective alert systems for low medication supply or missed doses, including audible alerts and remote notifications to caregivers. |
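The timed-dispensing and missed-dose alert logic described above can be sketched as a small scheduling check. The data model and medication names are hypothetical, and the 30-minute grace period is an assumed, configurable value:

```python
from datetime import datetime, timedelta

def doses_due(schedule, now, taken, grace=timedelta(minutes=30)):
    """Split scheduled doses into (due, missed).
    `schedule` is a list of (medication, dose_time, pill_count);
    `taken` is a set of (medication, dose_time) already dispensed
    and confirmed. A dose whose grace period has expired is
    'missed' and should trigger the caregiver notification."""
    due, missed = [], []
    for med, dose_time, count in schedule:
        if (med, dose_time) in taken or dose_time > now:
            continue
        (missed if now - dose_time > grace else due).append((med, count))
    return due, missed
```

The dispenser loop would call this each minute: `due` doses drive the rotating dispenser and the voice reminder, while `missed` doses drive the remote alert.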
||||||
7 | Tennis Ball Pick-up Machine |
Haopeng Jiang Shurui Liu Yilin Xue |
proposal1.pdf |
Gaoang Wang | ||
# Tennis Ball Pick-up Machine # Members Shurui Liu (shuruil2) Yilin Xue (yilinx3) Haopeng Jiang (haopeng4) Hengjia Yu (hengjia2) # Problem In tennis training or competition, players and coaches often face a tedious and time-consuming task - picking up scattered tennis balls on the court. The traditional manual ball-picking method consumes physical energy and takes up valuable training time, especially in high-intensity training or large-scale competitions. The existing solutions either rely on manpower or lack intelligence and automation, which cannot meet the needs of modern tennis training. To address this issue, we plan to develop an automatic pickup tennis cart based on image recognition technology. This small cart can accurately identify tennis balls scattered on the court, plan the optimal path, and quickly collect tennis balls through an efficient picking mechanism, thereby significantly reducing labor costs and improving training efficiency. Our innovation lies in combining image recognition technology with automated path planning to design an intelligent cart that can independently complete tennis-picking tasks. Our design provides a convenient user experience for players and coaches and promotes the development of tennis in a more efficient and intelligent direction. Through this project, we hope to alleviate the physical burden on players and coaches, allowing them to focus more on technical improvement and tactical exercises while providing an innovative solution for the intelligent development of tennis. # Solution Overview Our solution aims to design a simple and efficient automatic pickup tennis cart that can identify tennis balls scattered on the court, plan the optimal path, and pick them up through mechanical devices into the collection container on the cart. The cart is equipped with a movable picking mechanism that can navigate autonomously while achieving precise positioning and operation through cameras and visual detection algorithms. 
The team consists of two members majoring in Electrical Engineering (EE), one in Computer Engineering (CE), and one in Mechanical Engineering (ME). The CE member is responsible for developing the computer-vision image recognition algorithms that recognize and localize tennis balls; the EE members focus on designing the electronic control systems and communication modules to ensure precise navigation and stable operation of the vehicle; the ME member is responsible for the mechanical structure design and power system optimization of the cart, ensuring the efficiency and durability of the picking mechanism. This plan fully utilizes the team's professional background and existing resources to ensure that the project is implemented within technical feasibility and cost control, providing a simple and practical solution for tennis training. # Solution Components ## Visual System The visual system is responsible for real-time detection and localization of tennis balls scattered on the court. Based on computer vision technology, it captures images of the court through cameras and uses image processing algorithms to identify the positions of the balls. The algorithm will prioritize the tennis ball closest to the cart, calculate its relative direction, and provide data support for path planning. In addition, the visual system has a dynamic tracking function that continuously updates ball position information while the cart moves, ensuring the efficiency and accuracy of the picking process. ## Control System The control system is responsible for coordinating the operation of the visual system and the mechanical structure. It receives the ball position information provided by the visual system and plans the optimal path, controlling the direction and speed of the cart's movement. 
The control system also integrates boundary detection and obstacle avoidance functions, which monitor the surrounding environment in real-time through sensors to avoid collisions between the car and the boundary of the field or other obstacles. ## Mechanical structure The mechanical structure is the physical execution part of the car, mainly including the following key components: Electric collection drum: installed at the front of the car and driven by an electric motor. When the car passes by the tennis ball, the drum will roll up the ball and send it into the collection frame, achieving efficient picking. Lifting system: To increase the capacity of the collection box, the lifting system can automatically adjust the height according to the number of tennis balls in the collection box, ensuring that more tennis balls can be accommodated. Mecanum wheels and control: The car uses Mecanum wheels as the mobile chassis, which can achieve omnidirectional movement, including forward and backward, left and right, and rotational motion. The control system precisely adjusts the speed and direction of each wheel, allowing the car to move flexibly in the narrow space of the court and quickly adjust its orientation to align with the target tennis ball. # Criteria of Success The success criteria of this project are mainly reflected in the following aspects: Function implementation: The car can accurately identify tennis balls scattered on the court and plan the optimal path for picking them up, ensuring an efficient and error-free picking process. Performance indicators: The recognition accuracy of the visual system should reach over 90%, the control system should achieve real-time path planning and obstacle avoidance, and the success rate of mechanical structure picking should exceed 95%. User experience: The car is easy to operate, and users only need to start the device to complete the ball-picking task without complex settings or intervention. 
Reliability: The car can adapt to changes in lighting, ensuring long-term reliability. Scalability: The design has a certain degree of modularity and scalability, facilitating future functional upgrades or adapting to the needs of similar scenarios. By meeting the above standards, our automatic pickup tennis cart will provide an efficient, intelligent, and practical solution for tennis training, significantly improving training efficiency and user experience. |
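The "prioritize the nearest ball, compute its relative direction" step described in the visual and control systems can be sketched as a small geometry helper. Coordinates are assumed to come from the vision system in a court-fixed frame; this is an illustration, not the project's implementation:

```python
import math

def next_target(robot_xy, heading_deg, balls):
    """Pick the nearest detected ball and return (distance, relative
    bearing in degrees) for the Mecanum-wheel controller to steer to.
    `balls` is a list of (x, y) positions in the same frame as
    `robot_xy`; `heading_deg` is the cart's current heading."""
    rx, ry = robot_xy
    bx, by = min(balls, key=lambda b: math.hypot(b[0] - rx, b[1] - ry))
    dist = math.hypot(bx - rx, by - ry)
    bearing = math.degrees(math.atan2(by - ry, bx - rx)) - heading_deg
    return dist, (bearing + 180.0) % 360.0 - 180.0  # normalize to [-180, 180)
```

Because Mecanum wheels allow omnidirectional motion, the controller can translate toward the bearing directly rather than rotating first; the same helper is re-run each frame as the dynamic tracking function updates ball positions.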
||||||
8 | Dodgeball bot |
Chengyuan Fang Haoxiang Tian Yujie Pan Yuxuan Xia |
proposal1.pdf |
Timothy Lee | ||
# Request for Approval: Senior Project – "Dodgeball Bots" **Team members:** Haoxiang Tian [ht13] Yujie Pan [yujiep2] Chengyuan Fang [cfang14] Yuxuan Xia [yuxuanx9] --- ## 1. Problem Dodgeball is a high-energy sport that demands agility, strategy, and precision. However, traditional gameplay is limited by: - Human physicality and inconsistent skill levels - Safety concerns when targeting opponents at varying distances - Lack of adaptive training scenarios with consistent accuracy - Existing automated systems' inability to combine real-time tracking, variable launch power, and rotational targeting in fixed-body designs This project addresses the challenge of creating a **safe, adaptive, and highly accurate dodgeball launching robot**. --- ## 2. Solution Overview A fixed-body dodgeball bot with: - **Human-tracking sensors** (computer vision + depth sensor) - **Adjustable-power launching mechanism** (rubber wheels/pneumatic pistons) - **Precision rotation system** (motorized turret with feedback control) Key capabilities: - Real-time torso detection and tracking - Dynamic target locking - Controlled velocity propulsion (10–20 m range) --- ## 3. Components ### Aim - **Human Tracking**: Live camera identifies humans and predicts movement. Depth sensors calculate distance (10–20 m). - **Target Lock**: Software dynamically adjusts aim as targets move. ### Rotate - **Turret Mechanism**: Stepper motor/high-torque servo rotates launcher (0–120° in <2 seconds) with a stationary base. - **Feedback Control**: Encoders ensure angular precision (±5° error tolerance). ### Power - **Launch Mechanism**: Rubber wheels/pneumatic pistons propel balls. Wheel speed/pressure calibrated for 10–20 m range. - **Adjustable Speed**: Motor driver/PID controller modulates power based on distance. ### Control - **Central Controller**: Manages start/stop functions and system coordination. --- ## 4. 
Criteria of Success The ideal dodgeball bot will achieve: - **Accuracy**: 100% torso hit rate on stationary human-sized targets at 10–20m - **Speed**: Adjustable launch velocity of 60–80 km/h - **Responsiveness**: - 120° rotation within 1 second - Target tracking/relocking in <0.5 seconds - **Safety**: Compliance with injury-prevention force limits - **Durability**: - 100+ consecutive launches without failure - Total lifecycle exceeding 1000 launches |
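A drag-free projectile model gives a first estimate of the launch speed needed for the 10–20 m range requirement; this is a sketch for initial wheel-speed calibration, since the real system must also account for drag, spin, and launch height:

```python
import math

def launch_speed(range_m, angle_deg=45.0, g=9.81):
    """Ideal launch speed (m/s) to land a ball `range_m` away when
    fired at `angle_deg`, assuming launch and landing heights are
    equal and air resistance is neglected: R = v^2 sin(2*theta) / g."""
    return math.sqrt(g * range_m / math.sin(2.0 * math.radians(angle_deg)))
```

At 45° this gives roughly 14 m/s (about 50 km/h) for 20 m; reaching 20 m with the flatter trajectories implied by the 60–80 km/h spec requires a smaller launch angle, e.g. roughly 24 m/s (about 86 km/h) at 10°.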
||||||
9 | H.E.R.O. - HAZARDOUS ENVIRONMENT REMOTE OPERATOR |
Jun Liang Qihan Shan Sizhao Ma Xihe Shao |
proposal1.pdf proposal2.pdf |
|||
# H.E.R.O. - HAZARDOUS ENVIRONMENT REMOTE OPERATOR ## PROBLEM Human workers in hazardous environments, such as handling toxic materials, high-voltage equipment, or explosives, face significant risks to their safety. Current solutions, such as glove-based sensor systems or pre-programmed robots, lack flexibility, comfort, and adaptability. These methods often require physical wearables and high costs, restricting their practical application in dangerous scenarios. ## SOLUTION A vision-based robotic hand system that mimics human gestures in real time using non-invasive camera tracking and 3D-printed components. Key innovations include: - Camera-driven gesture recognition, eliminating wearable sensors. - Real-time mimicry with closed-loop feedback for accuracy. - A low-cost 3D-printed design, making robotic manipulation accessible for hazardous environments. ## SOLUTION COMPONENTS ### SUBSYSTEM 1: VISION & GESTURE RECOGNITION - **Camera module (webcam):** Captures hand movements. - **Computer vision algorithms (OpenCV or MediaPipe):** Detect and track hand landmarks. - **Communication API:** Translates gestures into spatial coordinates that the control system can use for robotic replication. ### SUBSYSTEM 2: MOTION CONTROL & ACTUATION - **Inverse kinematics:** Maps hand coordinates to servo angles. - **Microcontroller:** Computes the control signals and drives the servo motors. - **3D-printed robotic hand:** Replicates gestures with five degrees of freedom. - **ARX robotic arm:** Provides a movable platform for the hand. ### SUBSYSTEM 3: SAFETY & RELIABILITY - **Fail-safe mechanisms (e.g., emergency stop, error thresholds):** Prevent unintended motions. - **Closed-loop feedback:** Ensures real-time corrections and stability. - **Modular design:** Allows quick repairs in hazardous conditions. ## CRITERION FOR SUCCESS - **Real-time responsiveness:** Low-latency, real-time mimicry. - **Accuracy:** >90% (over 90% of poses can be correctly performed). - **Cost:** Total system cost under $200 (vs. 
commercial robotic hands costing $1,000+). |
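One simple way to turn MediaPipe's 21-point hand landmarks (wrist = 0, index MCP = 5, index fingertip = 8 in that model) into a per-finger servo command is a curl ratio: the fingertip pulls in toward the wrist as the finger bends. The ratio bounds below are rough assumptions that would be tuned against the real 3D-printed hand:

```python
import math

def finger_curl_to_servo(landmarks, tip, mcp, wrist,
                         open_deg=0.0, closed_deg=90.0):
    """Map one finger's curl to a servo angle. `landmarks` maps a
    MediaPipe landmark index to an (x, y) point. The tip-to-wrist
    distance is normalized by the knuckle(MCP)-to-wrist distance so
    the result is invariant to hand size and camera distance."""
    def dist(a, b):
        return math.dist(landmarks[a], landmarks[b])
    # Ratio is roughly 1.8 when extended, 0.8 when fully curled
    # (assumed bounds, calibrate per user).
    ratio = dist(tip, wrist) / dist(mcp, wrist)
    curl = min(1.0, max(0.0, (1.8 - ratio) / 1.0))
    return open_deg + curl * (closed_deg - open_deg)
```

The microcontroller would receive five such angles per frame over the communication API and drive the servos, with the closed-loop feedback correcting any mismatch.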
||||||
10 | Grasping Any Object with Robotic Arms with Language Instructions |
Junsheng Huang Junzhou Fang Zixin Zhu Zixuan Zhang |
proposal1.pdf |
|||
We are tasked with using robotic arms to grasp objects based on human instructions. The input should consist of RGB images captured by the machine, enabling the robotic arms to identify and grasp specific objects as directed by humans. Instructions should be provided as natural voice commands or text inputs, allowing for a wide vocabulary of objects. This means the machine must understand the semantic meaning of a broad range of common objects, not limited to a few specific ones. | ||||||
11 | The Smart Fitness Coach |
Lishan Shi Yuxuan Lin |
presentation1.pptx proposal1.pdf |
Bruce Xinbo Yu | ||
# PROBLEM With the national fitness campaign, people's fitness habits are being established, and the demand for personalized fitness plans is increasing. Many individuals now exercise at home due to its convenience. However, unfamiliarity with professional knowledge can lead to injuries. This is the issue the Smart Fitness Coach aims to solve. Our goal is to provide an accessible and customized fitness experience, enhancing workout safety and efficiency. # SOLUTION OVERVIEW Our app, The Smart Fitness Coach, helps home exercisers by offering a system that recognizes exercise forms, provides real-time feedback on posture, and suggests improvements for better safety and effectiveness based on the captured movements. # SOLUTION COMPONENTS ## FRONTEND (MOBILE APPLICATION) User Interface: A user-friendly interface that provides visual feedback on exercise form, indicating correct and incorrect posture. Real-Time Feedback: Visual cues and alerts to guide users, such as highlighting misaligned body parts and offering feedback on exercise correctness. Exercise Suggestions: Personalized workout recommendations based on user performance. ## BACKEND (PROCESSING UNIT) Data Processing: Real-time processing of the video feed, ensuring minimal delay in feedback to the mobile app during workouts. Pose Estimation: Techniques like OpenPose or MediaPipe are used to estimate the user's body pose and evaluate alignment during exercises. Action Recognition: Machine learning models, for example YOLO, identify exercises such as squats or push-ups by analyzing movement patterns. ## CLOUD DATABASE (OPTIONAL) Stores user profiles, exercise logs, and performance metrics. The cloud also hosts models, enabling periodic updates to improve accuracy. # CRITERION OF SUCCESS The project will be successful if the system can accurately identify user movements, provide reliable feedback, and suggest improvements to reduce injuries and enhance fitness. 
User satisfaction and fitness improvement are key indicators of success. |
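Form checks like "deep enough squat" reduce to joint angles between pose-estimation landmarks (e.g. hip-knee-ankle from MediaPipe Pose or OpenPose). A minimal sketch; the 100° depth threshold is an assumed, tunable value, not a figure from the proposal:

```python
import math

def joint_angle(a, b, c):
    """Angle ABC in degrees at vertex b, from three (x, y) landmark
    points, e.g. hip-knee-ankle for a squat depth check."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def squat_depth_ok(hip, knee, ankle, max_knee_deg=100.0):
    """Flag a squat rep as deep enough when the knee angle closes
    below the threshold (a straight leg is ~180 degrees)."""
    return joint_angle(hip, knee, ankle) <= max_knee_deg
```

The real-time feedback layer would run such checks per frame on the estimated landmarks and highlight the offending joint when a check fails.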
||||||
12 | ROBOTIC ARM INTEGRATED INTO WHEELCHAIR WITH MR INTERFACE |
Xingru Lu Yilin Wang Yinuo Yang Yunyi Lin |
proposal1.pdf |
Liangjing Yang | ||
# TEAM MEMBERS Yunyi Lin, yunyil3 Yinuo Yang, yinuoy4 Xingru Lu, xingrul2 Yilin Wang, yilin14 # PROBLEM Wheelchair users often face significant challenges when interacting with objects beyond their immediate reach, particularly behind them. Without external assistance, tasks such as pressing buttons or navigating through environments with complicated surroundings can become difficult. These difficulties are compounded when operating independently, highlighting the need for supplementary support to simplify routine activities. Additionally, wheelchair users may struggle with limited situational awareness, as their field of view is primarily forward-facing. As a result, there is a pressing need for innovative solutions that enhance both accessibility and autonomy, enabling wheelchair users to interact more conveniently with their surroundings. # SOLUTION OVERVIEW Our solution integrates a rear-facing camera that streams real-time visuals to a Mixed Reality (MR) interface, allowing wheelchair users to gain visual awareness of their surroundings, including blind spots behind them. Additionally, a robotic arm mounted at the back of the wheelchair can be controlled through MR, enabling users to perform assistive actions such as pressing buttons and interacting with objects beyond their physical reach. This system enhances both situational awareness and independent mobility, providing a more intuitive and convenient way for users to navigate and interact with their environment. # SOLUTION COMPONENTS ## OPEN MANIPULATOR-P ROBOTIC ARM The Open Manipulator-P robotic arm will be responsible for helping users with disabilities extend their reachable area and assisting them with tasks in their blind spots, such as pressing elevator buttons behind them. ## APPLE VISION PRO Apple Vision Pro will be responsible for detecting the user's hand movements and giving feedback to the user. It provides a camera matrix consisting of eight depth cameras and RGB cameras. 
These cameras will be helpful in spatial computing and object detection. ## MIXED REALITY INTERFACE The mixed reality interface will provide a live view from behind the wheelchair, allowing people with disabilities to see into their blind spots. The interface will also offer feedback, such as draggable buttons, when the user tries to control the robotic arm. This feedback will enhance the interaction between the user and the robotic arm. # CRITERIA FOR SUCCESS - Precision: The robotic arm should reliably press buttons with a diameter of at least 35 mm, a common size for elevator buttons. The force applied must be sufficient to activate buttons without excessive pressure that could cause damage or failure. - Clear Vision Pro View: Users should be able to see both the front and rear environments through Vision Pro, while also adjusting the robotic arm's perspective to gain a broader field of view. - Safety and Stability: The system must ensure that wheelchair stability is not compromised during operation. Movements of the robotic arm should not cause the wheelchair to become unbalanced. - Low Latency: The system should ensure smooth and intuitive control. The latency should be low enough that it does not disrupt normal usage or cause noticeable delays in operation. |
||||||
13 | Intelligent Home Security System |
Danyu Sun Ge Shao Jiaxuan Zhang Yuxin Xie |
proposal1.pdf |
Bruce Xinbo Yu | ||
# People -Danyu Sun (danyu3) -Ge Shao (geshao2) -Jiaxuan Zhang (jz122) -Yuxin Xie (yuxinx7) # Problem Our goal is to monitor the activities of the elderly in real time and recognize when a fall accident occurs. Currently, many elderly people find it difficult to be noticed and get help in time after a fall, leading to serious consequences. Through intelligent recognition and tracking, we can send out an alert immediately when a fall occurs to ensure the safety of the elderly. # Solution Overview Our solution is to design an intelligent home security system that can monitor the area around the home. First, we will build a trackable mobile robot carrying cameras and sensors; the collected data will be transmitted to a central control unit. Second, we will design an algorithm to process the data and detect fall accidents. Once abnormal behavior is detected, the system will immediately trigger an alarm. # Solution Components ## Human Tracking Robot Subsystem - Camera used to collect video data - Wheels that help the robot move around the area - Infrared sensor that detects human movement for the tracking function and prevents the robot from hitting walls - Joint that connects the base to the camera and allows the camera to rotate ## Central Control Unit - Human detection system to verify whether someone is present - Human action detection system that can detect a fall ## Alert Subsystem - Speaker that can play alert sounds - LED light that can give out a warning light # Criterion for Success - The robot can track moving people, avoid hitting walls, and rotate its camera. - The system must confirm human presence before performing any action recognition. - The accuracy of human fall detection is more than 80%. - If a fall is detected, the LED light will flash and the speaker will play alert sounds. 
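As a concrete illustration, the fall-detection rule in the central control unit could be sketched as follows. This is a minimal heuristic, assuming the human detection system already yields a bounding box per frame; the function names and thresholds here are hypothetical, not the project's actual design.

```python
# Hypothetical sketch of a bounding-box fall heuristic: a standing person's
# box is taller than wide; after a fall the aspect ratio inverts. A fall is
# flagged only when the inverted ratio persists, to avoid false alarms.

def is_fall(box_history, ratio_threshold=1.2, min_frames=5):
    """Flag a fall when the width/height ratio of the tracked person's
    bounding box stays above ratio_threshold for the last min_frames frames.

    box_history: list of (x, y, w, h) tuples, most recent last.
    """
    if len(box_history) < min_frames:
        return False
    recent = box_history[-min_frames:]
    return all(w / h > ratio_threshold for (_, _, w, h) in recent)

# Example: a person tracked upright, then lying on the floor.
upright = [(10, 5, 40, 120)] * 5            # width < height: standing
fallen = upright + [(10, 80, 130, 45)] * 5  # width > height: fallen
print(is_fall(upright))  # False
print(is_fall(fallen))   # True
```

Requiring several consecutive frames above the threshold is one simple way to approach the stated >80% accuracy target, since a single noisy detection cannot trigger the alarm on its own.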
# Distribution of Work -Danyu Sun: Human detection system used to check whether people are present -Ge Shao: Human action detection system that can recognize falling behavior -Jiaxuan Zhang: A light and sound alert system triggered when a fall is detected -Yuxin Xie: A human tracking robot system that can track moving people and avoid hitting walls |
||||||
14 | SoftBot: Jellyfish-Inspired Bionic Soft Robot with Visual Perception |
Junwei Zhang Shuran Yan Wangjie Xu Yinliang Gan |
other1.pdf proposal1.pdf |
Shi Ye | ||
This project aims to develop a soft robotic actuator system inspired by the efficiency and structure of jellyfish. The new actuator combines dielectric elastomers and dielectric liquids to create a robust, efficient, and adaptable actuator capable of mimicking muscle-like movements. By overcoming the limitations of existing actuators such as soft pneumatic actuators (SPAs), dielectric elastomer actuators (DEAs), and HASEL actuators, this design will provide enhanced force feedback while maintaining stability, flexibility, and portability. The system will integrate visual perception through a neural network-based vision module, and reinforcement learning or PID control for optimized movement and trajectory tracking. | ||||||
15 | Carbon Emission Tracking System |
Kaibiao Yan Sicheng Lu Zhengqi Wang Zhiliang He |
proposal1.pdf |
Ruisheng Diao | ||
Development of an interactive sand table model that visualizes carbon emission trends at the Haining International Campus, ZJU, integrates renewable energy sources (wind turbines and solar panels), and provides data analysis and prediction capabilities. | ||||||
16 | Smart Assistive Walking Stick for the Visually Impaired |
Haoyang Zhou Sanhe Fu Yihan Huang Yucheng Zhang |
design_document1.pdf proposal1.pdf |
Yushi Cheng | ||
# Problem More than 250 million people worldwide suffer from varying degrees of visual impairment, which has a profound impact on their physical health, mental well-being, and overall quality of life. Individuals with impaired vision face three key challenges when navigating their surroundings: obstacle avoidance, indoor path planning, and key object localization. At road intersections in particular, conditions are complicated, and blind pedestrians cannot directly identify traffic lights and traffic signals. In China, most intersections have no voice prompt system designed for the blind, and fast-moving vehicles make these sections very dangerous for blind pedestrians. The most commonly used assistive tool for visually impaired individuals is the white cane, which provides users with tactile feedback. However, the standard white cane has a limited detection range, only sensing obstacles within its physical length, and cannot identify distant or elevated obstacles, which are very common at intersections. Moreover, the white cane provides only basic physical feedback and lacks the capability to convey detailed environmental information, such as road intersections and navigation directions. As a result, in unfamiliar or complex environments, relying solely on a white cane makes precise navigation difficult, forcing users to depend on external assistance or their memory of previously traveled routes. # Solution overview Our solution to this problem is the development of an intelligent smart cane. The smart cane can improve walking speed and safety both outdoors and indoors, and we have designed it with a focus on blind people crossing intersections. This advanced cane is equipped with sensors to measure the distance to obstacles, a GPS system for precise outdoor positioning, and computer vision technology to capture detailed environmental information, such as traffic signs and other critical landmarks. 
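As one illustration of how the computer vision output could drive the voice feedback, the sketch below maps hypothetical detector results to spoken prompts. The labels, thresholds, and prompt texts are assumptions for illustration, not the project's actual classes.

```python
# Hypothetical decision step between the vision module and voice feedback:
# (label, confidence) detections from a detector such as YOLO are filtered
# by confidence and turned into spoken prompts, highest confidence first.

PROMPTS = {
    "red_light": "Red light. Please wait.",
    "green_light": "Green light. You may cross.",
    "vehicle": "Caution: vehicle nearby.",
    "crosswalk": "Crosswalk ahead.",
}

def detections_to_prompts(detections, min_confidence=0.6):
    """Map (label, confidence) pairs to voice prompts for the earphones."""
    kept = [d for d in detections if d[1] >= min_confidence and d[0] in PROMPTS]
    kept.sort(key=lambda d: d[1], reverse=True)
    return [PROMPTS[label] for label, _ in kept]

# Example frame: the detector sees a red light, a vehicle, and a tree.
frame_detections = [("red_light", 0.92), ("vehicle", 0.71), ("tree", 0.88)]
print(detections_to_prompts(frame_detections))
# ['Red light. Please wait.', 'Caution: vehicle nearby.']
```

Ordering prompts by confidence means the most safety-critical, clearly detected signal is announced first; detections outside the prompt table (such as the tree) are simply ignored.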
Additionally, the smart cane features motor-controlled omnidirectional wheels for directional guidance and provides real-time voice feedback to assist users in navigating their surroundings with greater ease, speed, and confidence. Outdoors, GPS assistance not only helps users walk through unfamiliar environments but also makes them more confident in familiar ones. The system is also valuable when navigating indoor environments, where obstacles are usually numerous and unpredictable. When the user passes through an intersection, GPS triggers an alert and the camera captures information about the surrounding environment, such as traffic lights and their duration, traffic signs, and whether there are vehicles nearby. This information is identified by computer vision algorithms and then relayed to the user by voice. The strong detection ability provided by the laser sensor of our smart cane helps avoid collisions with obstacles, especially with people and objects moving at high speed. # Solution components ## Main Control Module -A Raspberry Pi serves as the core processor, responsible for receiving and processing data from the various modules and controlling other components to provide the corresponding feedback. It will process information such as the distance measured by the laser sensor, location information from GPS, and environment signals from the vision system, to determine the direction to go and give feedback to the user by sending signals to the motor and earphones. -Computer vision algorithms, e.g., YOLO, utilize the images captured by the camera to recognize signals in the environment and thus gain information for better guidance. ## Data Collection Module -Laser sensor is used to measure the distance between the user and obstacles. -Inertial measurement unit provides orientation estimates. 
-GPS is used for precise outdoor positioning. -Camera captures environmental images. ## Feedback Module -Motor is used for direction control and guidance. -Earphone voice prompts provide environmental information to the user. # Criterion For Success Our design incorporates advanced sensor-based distance detection to prevent collisions, camera-enabled object recognition for identifying road signs and other key elements in the environment, and GPS-oriented navigation to ensure accurate positioning. Compared to traditional white canes, its success can be measured by several key improvements, including faster walking speeds in indoor settings, outdoor settings, and intersections; more precise and efficient navigation; more accurate road information; and an overall enhancement in user independence and mobility. |
||||||
17 | A remote environment recording system with online access portals |
Dingyuan Dai Xincheng Wu Yizhou Chen |
proposal1.pdf |
Shurun Tan | ||
We are deploying multiple sensors in natural environments to monitor environmental variable dynamics. These include cameras and soil moisture and temperature sensors. We want to develop an integrated system that automatically records and visualizes the sensor data through the internet. | ||||||
18 | Cheat for lottery wheel based on servo motor control |
Kaixin Zhang Zhangyang He |
proposal1.docx |
Lin Qiu | ||
**TITLE OF PROJECT** Cheat for lottery wheel based on servo motor control **TEAM MEMBERS** Zhangyang He (zhe27) Kaixin Zhang (kaixin3) Bowen Shi (bowens10) Yilin Liu (yilinl10) **PROBLEM** A classic lottery wheel is a purely random mechanical device, making the stopping position impossible to control. This absence of control makes outcomes unpredictable, which is a problem in testing, promotional events, or research experiments that require a predetermined result. Current approaches to manipulation tend to be unreliable, inefficient, or easily discovered. There is a need for basic yet accurate control with minimal visible intervention in the motion of the wheel. A deterministic lottery is essential for repeatable tests, fair promotions, and reliability in specialized cases. This project is necessary because it offers an effective, hidden, high-precision method combining servo motor control and wireless activation. **SOLUTION OVERVIEW** This project presents a lottery wheel controlled by a DC servo motor and a wireless switch, allowing natural wheel rotation while maintaining exact stop control. The operator simply sends a signal to the system when needed. A DC servo motor will be used to drive the lottery wheel, as the motor can control speed, acceleration, and stopping position quite accurately. The motion dynamics will be regulated by a motor control algorithm that decelerates the wheel smoothly and naturally, avoiding abrupt or unnatural stopping behavior. Real-time system programming responding to external stimuli will be used to obtain reliable, repeatable results. Sensors will be utilized to engage the stopping mechanism. The wireless switch will enable remote activation with low latency, so the system can respond immediately while remaining invisible. Seamless wireless control ensures the system runs without easily visible signals, leaving onlookers unaware that the process is not random. 
For added reliability, the system will have a high-quality signal processor and real-time feedback control to ensure that the wheel stops at the correct position with ±1-degree accuracy. The motor control unit will also ensure efficient power use and steady operation to allow sustained use without performance loss. This solution represents a highly effective, reliable, and stealthy approach to controlling lottery outcomes across a wide range of use cases by combining advanced motor control methods, real-time wireless activation, and modular system integration. **SOLUTION COMPONENTS** - **DC Servo Motor**: This motor serves as the driving force behind the lottery wheel's movement. It offers high precision in controlling speed, acceleration, and stopping positions, which is crucial for achieving the ±1-degree accuracy required. The motor's torque characteristics allow for smooth, gradual acceleration and deceleration, which ensures that the wheel's stopping behavior remains natural and undetectable. - **Motor Control Unit (MCU)**: The MCU is responsible for executing the motor control algorithm, which governs the speed and position of the lottery wheel. The MCU ensures the smooth operation of the motor, facilitating the precise control needed to stop the wheel at a predefined position. The unit also manages the power supply to the motor, ensuring optimal efficiency for long-term use. - **Wireless Switch**: The wireless switch is used to activate the system remotely. It features low-latency operation, engaging the stopping mechanism in less than 0.1 seconds. This switch is discreet, ensuring that the activation is imperceptible to onlookers. The wireless communication is encrypted and employs a secure protocol to prevent detection by external observers. - **Signal Processor & Sensors**: A high-quality signal processor is integrated to refine and process the commands sent to the motor and wireless switch. 
It helps filter out noise and ensures the system responds instantaneously to inputs, maintaining the desired precision in stopping the wheel. The sensors are strategically placed on the system to detect the wheel's position and monitor its movement. When the system is close to the desired stopping point, the sensors send feedback to the motor control unit, enabling it to decelerate the wheel smoothly and precisely to halt at the targeted position. - **Real-Time Feedback System**: This system continuously monitors the motor's performance, adjusting parameters such as speed and acceleration in real time. It ensures that the wheel stops precisely at the intended location by providing feedback to the motor control unit, which fine-tunes the motor's behavior as needed. The feedback system also ensures the stability of the system under varying operational conditions. **CRITERION FOR SUCCESS** - **Accuracy of Execution**: The system should be capable of precisely halting the lottery wheel at a preset location with a tolerance of ±1 degree. The stopping method should be uniform and reproducible across multiple iterations of the experiment. In real-life games, the motion must appear natural and seamless: the servo motor control must produce smooth acceleration, deceleration, and stopping behavior without any unnatural or sudden movements. - **Instant & Discreet Activation**: The wireless switch needs to engage the stopping mechanism in less than 0.1 seconds so that it responds in real time while remaining unnoticed by observers. - **System Sustainability & Reliability**: The system should operate reliably under varying operating conditions without wide deviations in performance. It also needs to run on low energy and remain stable over long periods of use. 
- **Undetectable Manipulation**: An intervention in the system should give no visible, audible, or mechanical signals that it is being controlled from outside. To give the appearance of randomness, the stopping process must seem entirely natural. |
||||||
19 | Autonomous Vehicle with Sign Recognition and Obstacle Clearing through Wi-Fi |
Pai Zhang Pengyu Zhu Rui Zhang Wendi Wang |
proposal1.pdf |
|||
# Team Members Zhang Pai (paiz4) Zhang Rui (rui14) Wang Wendi (wendiw2) Zhu Pengyu (pengyuz4) # Problem In recent years, autonomous vehicles have become popular in industrial research and daily life, with advanced functions like reading signal lights and giving way to pedestrians. However, these advanced autonomous vehicles are seldom introduced to campus as shuttle buses or trash carts due to high costs. It would bring much convenience to students' campus life if low-cost, efficient autonomous vehicles helped students commute and provided safety guarantees through obstacle detection and speed control based on reading speed limit signs. We aim to develop the core of such low-cost autonomous vehicles to potentially take on the responsibilities of commuting and obstacle clearing, using wireless transmission over campus Wi-Fi for efficiency. # Solution Overview We aim to develop an autonomous vehicle supporting obstacle clearing and speed adjustment. Obstacle clearing requires a camera to detect in-the-way objects and a robot arm to collect those objects. Speed adjustment also calls for a camera capturing speed limit signs, crosswalks, pedestrians, etc. Object detection and image analysis are supported by computer vision techniques and algorithms like YOLO, R-CNN, and Swin Transformer. There are two potential places to perform such object detection: on a cloud server (with images uploaded via Wi-Fi) or on the vehicle’s local computing device. We also target setting up a “memory” for the autonomous vehicle, enabling it to “memorize” and reuse some speed limit signs and object images when Wi-Fi signals are unavailable. Further investigation includes enhancing the quality of object detection and image analysis via data augmentation to prepare such autonomous vehicles for a more complex working environment. 
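The cloud/local fallback with a sign "memory" described above could be sketched roughly as follows. All class and function names are illustrative assumptions, and the actual detectors are stubbed out.

```python
# Sketch of the offloading decision: analysis goes to the cloud server when
# Wi-Fi is up; otherwise the vehicle falls back to its on-board "memory" of
# previously analyzed signs, then to the cruder local analyzer.

class SignAnalyzer:
    def __init__(self):
        self.memory = {}  # sign key -> previously analyzed speed limit

    def analyze(self, image_key, wifi_up, cloud_analyze, local_analyze):
        """Return a speed limit, preferring cloud analysis and caching results."""
        if wifi_up:
            limit = cloud_analyze(image_key)   # e.g. YOLO on the GPU server
            self.memory[image_key] = limit     # remember for offline reuse
            return limit
        if image_key in self.memory:           # Wi-Fi down: reuse memory
            return self.memory[image_key]
        return local_analyze(image_key)        # last resort: Raspberry Pi

# Example with stubbed-out analyzers:
analyzer = SignAnalyzer()
cloud = lambda key: 30   # cloud server's answer, km/h
local = lambda key: 25   # cruder on-board estimate
print(analyzer.analyze("sign_A", True, cloud, local))   # 30, cached
print(analyzer.analyze("sign_A", False, cloud, local))  # 30, from memory
print(analyzer.analyze("sign_B", False, cloud, local))  # 25, local fallback
```

This matches the optional success criterion: when no Wi-Fi is detected, the vehicle still produces a usable speed limit from its memory or local computation.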
# Solution Components ## Image Capture and Simple Analysis Subsystem A Raspberry Pi, a low-cost single-board computer, supports image capturing and simple image analysis when Wi-Fi signals are not available. ## Autonomous Vehicle Subsystem An autonomous vehicle subsystem with Arduino control, which can communicate with the remote cloud server for more complex and powerful image analysis through Wi-Fi transmission. ## Thorough Image Analysis Subsystem A remote cloud server with GPUs supporting powerful image analysis via common computer vision algorithms like YOLO, R-CNN, and Swin Transformer. ## Output Subsystem A robot arm controlled by Arduino performs object clearing and collecting, and the vehicle adjusts its own speed in response to the analysis of speed limit signs. # Criterion for Success The Raspberry Pi successfully captures images of obstacles and speed limit signs. The control system (mainly Arduino) receives images captured by the Raspberry Pi and sends them to the remote cloud server in time through Wi-Fi. The remote cloud server successfully detects an obstacle and analyzes the correct speed limit, sending the results of image analysis back to the autonomous vehicle through Wi-Fi. The robot arm successfully pulls up obstacles and collects them in a certain area of the vehicle. The vehicle successfully lowers its speed after receiving information from the remote cloud server. (Optional) If no Wi-Fi signal is detected, the vehicle can use its local computing device to analyze the speed limit signs and control its functioning. # Distribution of Work Zhang Pai: Data processing and repetitive work. Zhang Rui: Documentation and reporting, hardware test. Wang Wendi: Hardware selection and setup, software algorithm design. Zhu Pengyu: Software test, robot arm test. |
||||||
20 | A device for evaluation of frictional properties of surfaces |
Haoran Cheng Tiancheng Shi Yang Li Zhongsheng Guan |
proposal1.docx |
Olesksiy Penkov | ||
# Members Tiancheng Shi (tshi13) Yang Li (yangl24) Zhongsheng Guan (zg22) Haoran Cheng (haoranc7) # Problem Extensive research efforts have been devoted to understanding the complex mechanisms of friction and wear to minimize their effects in sliding systems. Improvements in the instruments used to characterize friction and wear phenomena are required to enhance the effectiveness of the research method. In this project, our goal is to design and build an experimental platform that evaluates the frictional behavior of surfaces. # Solution Overview Our solution is to build a modular friction test platform capable of accurately measuring the friction, wear rate, and contact behavior between surfaces of different materials. It will include high-precision sensors, a controllable condition system, and a data acquisition system, allowing researchers to simulate a variety of conditions (temperature, load, velocity). # Solution Components ## Mechanical Body Systems The platform will use a modular design, which allows easier maintenance and transportation. Its body can be fixed on other platforms to reduce errors from vibration or movement. ## High Precision Sensors The platform will contain four main sensor types: - Force sensors for constantly detecting the friction force. - Displacement sensors for monitoring wear and deformation on the platform. - Temperature sensors for detecting the temperature. - Velocity sensors for monitoring the velocity of the loads. 
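For illustration, the force-sensor readings reduce to a friction coefficient as mu = F_friction / F_normal. Below is a minimal sketch with averaging to suppress sensor noise and a relative-error check; the function names, sample values, and 5% tolerance are assumptions for illustration.

```python
# Hypothetical data-acquisition step: average repeated friction-force samples
# and divide by the applied normal force to obtain the friction coefficient,
# then check the result against a reference with a relative-error tolerance.

def friction_coefficient(friction_forces, normal_force):
    """Mean kinetic friction coefficient from repeated force samples (N)."""
    if normal_force <= 0:
        raise ValueError("normal force must be positive")
    return sum(friction_forces) / len(friction_forces) / normal_force

def within_tolerance(measured, reference, tolerance=0.05):
    """Check a measurement against a reference with < 5% relative error."""
    return abs(measured - reference) / reference < tolerance

samples = [4.1, 3.9, 4.0, 4.2, 3.8]  # friction force readings, N
mu = friction_coefficient(samples, normal_force=10.0)
print(round(mu, 3))                   # 0.4
print(within_tolerance(mu, 0.41))     # True: within 5% of the reference
```

Averaging many samples per sliding pass is a simple way to keep sensor noise from dominating the relative-error budget.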
## Controllable Condition System This system will control the conditions of the platform, and it consists of several parts: - High-precision motor drive system for controlled sliding speeds - Normal force application unit for applying a controlled load in the normal direction - Temperature control module for adjusting the temperature # Criteria for Success We will measure our success by examining the friction properties of multiple sample objects on the platform and checking that every component works well. If the results of our examination are close to the true properties of the samples (error < 5%), we will consider it a success. |
||||||
21 | Continuous Vehicles Capture |
Binyang Shen Jiawei Zhang Yining Guo Zijin Li |
proposal1.pdf |
|||
# Problem With the increasing size of cities and growing traffic flow, traditional traffic monitoring means (e.g., manual observation, fixed detectors) often struggle to balance real-time performance and accuracy. Traffic departments urgently need a portable monitoring system that can obtain real-time road traffic density, vehicle counts, and congestion conditions to assist traffic management and optimize traffic flow. # Solution Overview The aim of this project is to build a Raspberry Pi based traffic monitoring system that utilizes cameras for uninterrupted video capture and image processing algorithms to identify and count vehicles in real time. Meanwhile, the system can analyze traffic density and congestion level in real time to provide data support for traffic flow optimization. Ultimately, we will create a visual dashboard that enables relevant departments or personnel to view road conditions and receive congestion alerts at any time, improving the efficiency and accuracy of traffic management. # Solution Components ## Outdoor Enclosures and Camera Mounting Systems Design and build a protective outdoor housing to ensure that the camera can consistently and steadily capture road conditions in all weather. Consider waterproofing, dustproofing, and temperature control to ensure the reliability of the system in outdoor environments. ## Image Processing and Vehicle Recognition Module Image processing algorithms deployed on the Raspberry Pi accurately identify and count vehicles in a live video stream. Vehicle types and traffic density are analyzed using deep learning or traditional computer vision techniques. ## Real-Time Visualization Dashboard Aggregate information such as the number of identified vehicles and traffic flow density through the data interface. Design a user-friendly graphical interface to present key indicators as charts and real-time curves, and configure congestion alert push notifications. 
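One common counting approach the vehicle recognition module could use is a virtual counting line: a vehicle is counted the first time its tracked centroid crosses the line. The sketch below assumes a detector/tracker already yields per-frame centroids; the coordinates and line position are illustrative.

```python
# Hypothetical counting step: given centroid tracks from a vehicle detector,
# count each track once when its centroid moves from above the virtual
# counting line to below it (i.e., the vehicle drives past the line).

COUNT_LINE_Y = 200  # pixel row of the virtual counting line (assumed)

def count_crossings(tracks):
    """Count tracks whose centroid crosses the line going downward.

    tracks: dict of track_id -> list of (x, y) centroids over time.
    """
    count = 0
    for positions in tracks.values():
        for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
            if y0 < COUNT_LINE_Y <= y1:  # crossed the line this frame
                count += 1
                break                    # count each vehicle only once
    return count

tracks = {
    1: [(100, 150), (102, 190), (105, 230)],  # crosses the line
    2: [(300, 50), (300, 90)],                # still above the line
}
print(count_crossings(tracks))  # 1
```

Counting per track rather than per frame avoids double counting a slow vehicle that straddles the line for several frames, which matters for the ≥ 90% accuracy target.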
# Criteria for Success Accuracy assessment: Vehicle detection and counting should be correct to a usable level (≥ 90%). Real-time requirements: The system is capable of near real-time video analysis on the Raspberry Pi platform and rapid visualization of the data. Robustness testing: Multiple tests under different weather, light, and traffic flow conditions to ensure reliable operation of the outdoor enclosure and the overall system. Congestion early warning: Evaluate the system's ability to identify and warn of congestion status, and verify its effectiveness in assisting traffic management decisions. By gradually achieving the above goals, we will provide an efficient, low-cost, low-power, and stable real-time traffic monitoring solution for traffic management departments, contributing to the reduction of urban congestion and the improvement of travel efficiency. |
||||||
22 | Customizable Automatic Pottery Wheel Throwing Machine |
Minhao Shi Mofei Li Shihan Lin Zixu Zhu |
proposal1.pdf |
Wee-Liat Ong | ||
## Team Members - Mofei Li (mofeili2) - Zixu Zhu (zixuzhu2) - Shihan Lin (shihan3) - Minhao Shi (minhaos3) --- ## Problem With the growing demand for customization and personalization, traditional manual operations in pottery production can no longer meet the needs for high efficiency and precision. We hereby propose an idea to automate the wheel throwing step to mold clay into different shapes efficiently, i.e., a customizable automatic pottery wheel throwing machine. --- ## Solution Overview To implement our machine, we will use a pottery wheel rotating at high speed and design a mechanical structure to mold the pottery from bottom to top. The structure needs at least 3 degrees of freedom: two for locating the point on the working line, viz., the radial line at the working height, and one for controlling the wall thickness. A viable way is to use lead screws to convert motor rotation into horizontal and vertical motion; however, for the sake of cost saving and flexibility, we choose the form of a 3R manipulator and duplicate the terminal arm to shape the inner and outer walls. Excluding the automatic wheel, we need to handle 4 motors in total. We will implement a program to convert the digital pottery model into a series of target radii at different heights, and further into the configuration of the 4 motors over a complete work cycle. --- ## Components ### Hardware: A device mainly divided into two parts, a pottery wheel and a robotic arm, which is a terminal-duplicated variant of the typical 3R manipulator. There should be 4 motors and a series of sensors for closed-loop control. The pottery wheel and the manipulator should be integrated and coordinately controlled by the computer. Ideally, there will be a physical prototype capable of shaping the inner and outer walls of the pottery with adjustable precision. 
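The model-to-targets conversion mentioned above could be sketched as sampling a radius profile r(z) at discrete working heights; the function and parameter names below are assumptions for illustration, not the project's actual program.

```python
# Hypothetical first stage of the conversion pipeline: a digital pottery
# model, given as a profile function r(z), is sampled into the series of
# (height, target radius) pairs that later becomes the 4-motor configuration.

def profile_to_targets(profile, height, layer_height):
    """Sample a radius profile into (z, target_radius) pairs, bottom to top.

    profile: callable mapping height z (mm) -> outer wall radius (mm).
    """
    n_layers = int(height / layer_height)
    return [(i * layer_height, profile(i * layer_height))
            for i in range(n_layers + 1)]

# Example: a vase that widens linearly from 40 mm to 60 mm over 100 mm.
vase = lambda z: 40 + z / 5
targets = profile_to_targets(vase, height=100, layer_height=25)
print(targets)  # [(0, 40.0), (25, 45.0), (50, 50.0), (75, 55.0), (100, 60.0)]
```

Each (z, radius) pair would then be handed to the inverse-kinematics step that computes the 3R-manipulator joint angles placing the duplicated terminal arms on the inner and outer walls at that height.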
### Software: A user interface allowing users to modify/customize the shape of the pottery by adjusting various parameters. The prototype software will include a UI that enables users to directly control the parameters of the manipulator, allowing for minor adjustments to the pottery shape based on a default template. If time permits, more advanced software will be developed, allowing users to directly input the desired shape through a graphical interface to generate the clay body. --- ## Criterion for Success - The machine should be functional, i.e., it manages to mold the pottery into different shapes under the control of the software. - The machine should be stable, i.e., it withstands the torque from the high-speed rotating wheel. - The machine should be robust, i.e., it conducts closed-loop control successfully without accumulation of errors. - The software should be functional, i.e., it manages to map the input shape into sequential motion of the robotic arm. - The software should be robust, i.e., it rejects invalid input that could damage the machine. - The software should be user-friendly, i.e., the input that describes the shape should be human-readable. |
||||||
23 | Intelligent shared item cabinet |
Nihaoxuan Ruan Xiaotong Cui Yanxin Lu Yihong Yang |
proposal1.pdf |
Piao Chen | ||
Intelligent shared item cabinet # TEAM MEMBERS: - Nihaoxuan Ruan (ruan14) - Yihong Yang (yihongy3) - Xiaotong Cui (xcui15) - Yanxin Lu (yanxinl4) # PROBLEM In modern campus environments, students frequently need to borrow small tools (e.g., screwdrivers, wrenches) outside standard front-desk hours, leading to significant inconvenience and wasted time. We propose an intelligent shared item cabinet to address this gap by offering a self-service, 24/7 borrowing and returning system. This solution not only reduces administrative burdens but also innovates on existing resource management by integrating campus card authentication with automated item recognition (via sensors or scanning). Our key success criteria include reliable item detection, user-friendly operation, and seamless integration with existing campus infrastructure, ultimately creating a secure and efficient platform that enhances the overall student experience. # SOLUTION OVERVIEW Our intelligent shared item cabinet addresses the critical gap in 24/7 access to shared tools on campuses by integrating campus card authentication, automated item recognition, and a user-friendly interface, all optimized specifically for the university setting. Unlike manual check-out systems and general-purpose lockers, our system offers scalability, real-time tracking, and seamless integration with campus infrastructure. An ECE-centric design is critical to connect hardware reliability (sensor design, cabinet durability) with software intelligence (real-time database synchronization, fault-tolerant operation), providing both pedagogical effectiveness and sustainability. Our approach differs by offering a proof-of-concept model based on investigating campus card APIs and a modular design that emphasizes security, accessibility, and scalability. 
The team comprises mechanical, electrical, and computer engineers who will produce an inexpensive prototype using sensors, open-source frameworks, and an incremental approach within a single semester, thus relieving administrative burdens and enhancing student autonomy through technology. # SOLUTION COMPONENTS - Campus Card Authentication Module: Handles user identification and access control. - Automated Item Recognition System: Uses sensors or scanning to detect and manage items. - User Interface: Provides a simple and intuitive interface for students to interact with the cabinet. - Backend Management System: Tracks item availability, user history, and system status. # CRITERION FOR SUCCESS - Item detection accuracy: Sensors/scanning devices must reliably detect item borrowing/returning status without errors. - System uptime stability: Full-year operational reliability, supporting 24/7 service availability. - Intuitive operation: Users can complete borrowing/returning processes within 60 seconds without external guidance. - Campus card compatibility: Integration with existing campus card authentication systems and real-time synchronization with campus databases (e.g., student credentials, permissions). |
||||||
24 | A Remote Microwave Environmental Monitoring System: Automation and Power Management |
Boyao Wang Haoran Jin Jiaheng Wen Qiushi Liu |
proposal1.pdf |
Shurun Tan | ||
FEATURED PROJECT # A Remote Microwave Environmental Monitoring System: Automation and Power Management ## Group Members - Boyao Wang (boyaow2) - Jiaheng Wen (jwen14) - Qiushi Liu (qiushi3) - Haoran Jin (haoranj4) ## Problem Monitoring environmental microwaves is important for detecting electromagnetic interference, assessing potential health impacts, and understanding background radiation levels in urban and natural environments, and has thus attracted considerable attention from both academia and industry. Conventional methods of gathering accurate and sufficient data involve manually deploying monitors at suitable locations, which takes considerable time, effort, and resources. Therefore, an automatic and efficient remote microwave environmental monitoring system is needed to reduce the overall cost. ## Solution Overview In this project, we aim to develop an automatic and efficient remote microwave environmental monitoring system with microwave equipment (e.g., vector network analyzers) deployed in natural environments and connected to the internet. Our goal is to create an intelligent automated pipeline that optimizes the monitoring process. This system will automatically transition between active monitoring and low-power states, significantly reducing overall power consumption while ensuring comprehensive and timely environmental microwave data collection. ## Solution Components To be specific, our solution includes the following components: ### Automated Monitoring System We will implement features including smart wake-up mechanisms based on environmental triggers, energy-efficient standby modes, and an automated rotating platform for 360-degree multi-angle data collection. ### Power Module Management System The power module will efficiently supply power to the VNA, pan-tilt mechanism, lifting vehicle, and cameras. ## Criteria of Success - The system must be able to collect environmental microwave data. 
- The monitoring system can be turned on, turned off, or switched to an energy-efficient standby mode based on environmental triggers. - The monitor can rotate automatically through 360 degrees for multi-angle data collection. - The power module can supply power to the VNA, pan-tilt mechanism, lifting vehicle, and cameras while consuming less energy than the default power module. |
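The active/standby/off transitions described above can be sketched as a small state machine. This is a minimal illustration only: the trigger thresholds and the exact triggers (signal strength, battery level) are assumptions, not values from the proposal.

```python
from enum import Enum

class Mode(Enum):
    OFF = "off"
    STANDBY = "standby"
    ACTIVE = "active"

# Hypothetical trigger thresholds; the real values would come from the
# deployed sensors and the project's power budget.
SIGNAL_THRESHOLD_DBM = -60.0
LOW_BATTERY_FRACTION = 0.15

def next_mode(mode: Mode, signal_dbm: float, battery_fraction: float) -> Mode:
    """Decide the next power state from environmental triggers.

    - Shut down when the battery is critically low.
    - Wake to ACTIVE when a strong microwave signal is detected.
    - Otherwise fall back to energy-efficient STANDBY.
    """
    if battery_fraction < LOW_BATTERY_FRACTION:
        return Mode.OFF
    if signal_dbm >= SIGNAL_THRESHOLD_DBM:
        return Mode.ACTIVE
    return Mode.STANDBY
```

In a real deployment the same decision function would run on the monitoring controller once per polling interval.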
||||||
25 | Long-horizon Task Completion with Robotic Arms by Human Instructions |
Bingjun Guo Qi Long Qingran Wu Yuxi Chen |
proposal1.pdf |
Gaoang Wang | ||
# Problem The use of robotic arms for long-horizon tasks such as assembling, cooking, and packing, which involve multi-step operations, is growing. However, the interdependencies among subtasks, shifting environmental conditions, and the need for constant feedback integration make it extremely difficult to execute such tasks reliably. Current approaches frequently struggle with skill chaining, task decomposition, and preserving robustness during execution. A comprehensive framework that integrates perception and planning is therefore required for robotic arms to manipulate objects autonomously based on real-time feedback and complete long-horizon tasks. # Solution Overview Our solution for enabling a robot to carry out a series of tasks is to combine perception, planning, and acting intelligence as a whole. The robotic arm is our primary entity. First, for perception, a camera (e.g., an RGB camera) mounted on top captures the scene, and computer vision recognizes the objects in it. Then, for planning, the robot analyzes the captured images together with the user's instructions and plans the task. Finally, for acting, the plan guides the robotic arm's movement. While acting, the sensors, including the camera on the robotic arm, provide continuous feedback that revises the arm's actions in a control loop. The whole process loops over these three steps until the series of tasks is completed. # Solution Components ## Output Subsystem - A robotic arm (UR3) - A specially designed grasper - Exclusive tools for the long-horizon task set (e.g. a screwdriver for assembly tasks) ## Feedback Subsystem - Visual sensors (e.g. an RGB or RGB-D camera, depending on availability) - Tactile sensors - Corresponding circuits that preprocess the perceptual signals ## Planning Subsystem - A language model to extract semantic information from instructions - A vision model that preprocesses input images - An agent model that processes inputs, plans movements, and carries out tasks according to feedback # Criteria of Success - Overall: The robotic arm can successfully complete a certain set of long-horizon tasks (TBD according to feasibility) based on human instructions in a zero-shot manner. - Perception: The system can accurately recognize objects in the scene. - Planning: The system can generate reasonable multi-step operations. - Acting: The robotic arm can follow the generated plan and adjust its movement based on real-time feedback to improve accuracy and robustness. - Safety: The robotic arm can avoid collisions. |
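The perception-planning-acting loop described in the solution overview can be sketched as follows. The `perceive`, `plan`, and `execute_step` callables are placeholders for the camera pipeline, the language/vision/agent models, and the arm controller; they are assumptions for illustration, not the team's actual interfaces.

```python
# Minimal sketch of the three-step loop: perceive -> plan -> act,
# repeated until the plan is empty (task complete) or safety stops it.

def run_task(instruction: str, perceive, plan, execute_step, max_iters: int = 100):
    """Loop over perception, planning, and acting until the task is done."""
    for _ in range(max_iters):
        scene = perceive()                 # e.g., RGB(-D) image + detections
        steps = plan(instruction, scene)   # multi-step plan from the agent model
        if not steps:
            return True                    # nothing left to do: task complete
        feedback = execute_step(steps[0])  # act, collecting sensor feedback
        if feedback.get("collision"):
            return False                   # safety stop
    return False
```

Re-planning from fresh perception on every iteration is what lets the arm adapt to shifting environmental conditions mid-task.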
||||||
26 | Analog Computer ODE Solver |
Dianxing Tang Haige Liu Shilong Shen Zixuan Qu |
proposal1.pdf |
Said Mikki | ||
# ANALOG COMPUTER ODE SOLVER ## Team Members: - Shilong Shen (shilong7) - Dianxing Tang (dt12) - Haige Liu (haigel2) - Zixuan Qu (zixuanq3) # Problem Analog computers are widely used for many mathematical calculations, such as solving ordinary differential equations (ODEs). ODEs have a wide range of applications in many fields, such as classical mechanics, electromagnetism, and engineering control. In AI algorithms especially, such as neural networks and deep learning, ODEs are used to adjust parameters during iterative training. Traditional software methods for solving ODEs often suffer from slow response and insufficient efficiency. In addition, some types of ODEs have no general analytical solution and therefore cannot be computed directly with traditional computational software. In some cases, users prefer a calculator that outputs the solution directly as an electrical signal. Therefore, we need to design an **analog computer** using specialized hardware circuits to solve ODEs and other kinds of mathematical problems directly, which can efficiently obtain the numerical solutions of different types of ODEs and output them directly, aiming to improve computation speed, reduce latency, and meet the need for real-time processing. # Solution To design an analog computer for solving mathematical problems like ODEs, we plan to use operational amplifiers to build integrators, adders, and multipliers that form a feedback loop. We will convert the ODE into specific electrical signals, such as voltages, feed these signals into the solving circuit, and then convert the output back to obtain the solution of the ODE, which will be shown directly to the user on an oscilloscope. From there we build a useful mathematical problem solver: the user selects the type of ODE and enters the coefficients, and the calculator produces a numerical solution from the electrical signals and lets the user visualize a plot of the solution. 
The main design process involves first building and simulating the analog circuit in MATLAB to verify its functionality. After successful simulation, we will implement the hardware circuit on a printed circuit board (PCB) and conduct further testing to ensure its performance. # Solution Components ## Subsystem I: Simulation Test MATLAB Simulation Circuit: Build and simulate the analog circuit to verify the design. ## Subsystem II: Main PCB Hardware Circuit - Basic operational units: Build adders, multipliers, and integrators with electrical components such as op-amps, resistors, and capacitors - PCB hardware circuit: Construct the closed-loop circuit with the basic operational units on a printed circuit board (PCB) to solve ODEs through multiple iterations. - ODE input recognition system: Identifies the type of ODE (e.g., order, whether it's homogeneous, etc.) and directs the signal to the corresponding circuit. ## Subsystem III: ODE Signal Conversion System - Arduino Main Control Board: Handles the input and output signals of the hardware circuit - Arduino Programming: Converts the ODE into electrical signals (e.g., voltages) to feed into the hardware circuit and processes the output signal to extract the solution of the ODE. # Criterion For Success - The MATLAB simulation circuit should solve 1st- to 3rd-order linear and nonlinear ODEs accurately, as demonstrated in testing. - The basic operational units on the PCB must function correctly, performing their intended tasks without errors. - The analog computer should solve ODEs with accuracy and timing similar to the MATLAB simulation. - The Arduino must accurately convert the hardware's input and output signals without error. ## Reference - https://i4cy.com/analog_computing/ - https://www.mathworks.com/matlabcentral/fileexchange/56756-solution-of-differential-equation-using-analog-computer - https://saching007.github.io/pubs/dpac.pdf - https://www.geeksforgeeks.org/what-is-an-analog-computer/ |
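The integrator-feedback idea above can be checked numerically before any circuit is built: for y'' + a·y' + b·y = 0, an analog computer chains two integrators whose outputs feed back through a scaled adder. A minimal digital analogue of that loop (step size and coefficients are illustrative assumptions, not the team's design values):

```python
def solve_second_order(a: float, b: float, y0: float, dy0: float,
                       dt: float = 1e-4, t_end: float = 1.0):
    """Mimic the analog integrator chain for y'' + a*y' + b*y = 0.

    The 'adder' stage forms y'' from the fed-back signals, and each
    update plays the role of one op-amp integrator.
    """
    y, dy = y0, dy0
    t = 0.0
    while t < t_end:
        ddy = -a * dy - b * y   # adder/inverter stage
        dy += ddy * dt          # first integrator
        y += dy * dt            # second integrator
        t += dt
    return y

# Example: y'' + y = 0 with y(0) = 1, y'(0) = 0 has solution y(t) = cos(t),
# so solve_second_order(0.0, 1.0, 1.0, 0.0) should be close to cos(1).
```

The same structure, with the voltage on each integrator's capacitor standing in for `y` and `dy`, is what the MATLAB simulation and the PCB will realize in hardware.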
||||||
27 | A DodgeBot System |
Chiming Ni Feiyang Wu Kai Wang Nichen Tian |
proposal1.pdf |
Timothy Lee | ||
A DodgeBot System Team Members: Kai Wang (kaiwang6) Nichen Tian (nichent2) Feiyang Wu (fw14) Chiming Ni (chiming2) Yiyang Bao (yiyangb2) Problem Dodgeball is a sport in which players throw balls at each other and a player who is hit loses, which means the game cannot be played alone. However, athletes may sometimes want to train to enhance their skills without companions. Solution Overview Our solution is to create a DodgeBot that acts as another player. The DodgeBot has a shooter that shoots a dodgeball toward the human player, who is captured by the robot's camera and detected and tracked by on-board artificial intelligence. When a ball is thrown toward the robot, the movement of the incoming ball is also captured by the camera, and the dodging system takes control of the robot's movement to avoid the collision. Solution Components Dodge Ball Shooting System 3-DoF Gimbal Design Pitch Axis: DJI GM6020 motor + linkage mechanism (30° range). Yaw Axis: Unitree servo motor (360° continuous rotation). Launch Mechanism: High-speed pneumatic cylinder + pusher plate. Actuation: Compressed air (0.5 MPa) or friction wheel drive. Adjustable launch angle to achieve 2 m/s initial velocity and 1.2 m launch height. Human Pose Estimation, Tracking and Dodging System Jetson Nano (or another edge computing platform) A camera to detect people A deep neural network trained to estimate human pose and detect incoming balls A tracking system that outputs motor joint angles to the shooting system A decision system that decides the direction of movement to avoid ball collisions Criterion For Success The dodgeball shooting system must shoot with approximately 2 m/s initial velocity and be able to hit a person. The human pose estimation and tracking system should be able to estimate human pose with a camera and track the person's movement with at least 60% accuracy. The system should also output the correct angles for the motors to execute. 
The dodge system should be able to move the hit detector with 2 degrees of freedom to avoid hits from a non-proficient human player. |
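The dodging decision can be sketched with a constant-velocity ball model: from two camera observations, predict where the ball will cross the robot's plane and move the opposite way. The 2D coordinates, the 0.5 m safety radius, and the constant-velocity assumption are all simplifications for illustration, not the team's actual tracking model.

```python
def dodge_direction(p1, p2, dt, robot_x=0.0):
    """Predict where an incoming ball crosses the robot's plane (y = 0)
    from two camera observations p1, p2 = (x, y) taken dt seconds apart,
    assuming constant velocity, and return which way to dodge.

    Returns "left", "right", or "hold" (ball receding or will miss).
    """
    (x1, y1), (x2, y2) = p1, p2
    vx, vy = (x2 - x1) / dt, (y2 - y1) / dt
    if vy >= 0:
        return "hold"                     # ball is not approaching
    t_cross = -y2 / vy                    # time until it reaches y = 0
    x_cross = x2 + vx * t_cross           # lateral crossing point
    if abs(x_cross - robot_x) > 0.5:      # assumed 0.5 m safety radius
        return "hold"
    return "left" if x_cross > robot_x else "right"
```

The real decision system would feed the chosen direction to the chassis motors and re-run the prediction on every new camera frame.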
||||||
28 | A Bio-inspired AI-based Underwater Locating System |
Haoyu Huang Jiawei Wang Xinchen Yin Zaihe Zhang |
proposal1.pdf |
Huan Hu | ||
Team members: Haoyu Huang (haoyuh3) Jiawei Wang (jiaweiw6) Xinchen Yin (xyin16) Zaihe Zhang (zaihez2) Ziye Deng (ziyed2) # Problem Localization of underwater objects has long been an important research problem in underwater development. Researchers have found that the lateral line organs of fish offer a promising approach to near-field target awareness. The problem is how to develop a bionic device that mimics the ability of the fish lateral line organ to receive underwater vibration signals and analyze the position of a target object. # Solution overview Our solution is to build an AI-based underwater locating system. The system will be built on an experimental platform: an aquarium containing silicone oil. An oscillator driven by a sinusoidal signal will be placed at different locations in the aquarium to simulate real vibrations. The system consists of a pressure sensor array that detects pressure differences in the liquid to capture the oscillator's vibration. The sensor data will be collected by a computer, and a neural network will be trained to predict the location of the oscillator. The sensor array is an effective tool for locating objects underwater. # Component ## Power system -A 100 × 50 × 50 cm fish tank filled with silicone oil to half its height to simulate an underwater environment. \ -A vibrator to imitate the movement of an underwater entity. \ -A filament moving device to adjust the vibrator's position. \ -A signal generator and power amplifier to drive the vibrator. ## Data collection -A sensor array placed in the middle of the fish tank to gather environmental data. \ -A 3D printing device and material to produce a casing for the sensors. \ -A data acquisition card to capture and transmit sensor data. \ -Data acquisition software to visualize and process the collected data. 
# Criteria of success ## System accuracy In a simulated underwater environment, the device should accurately predict the position of the oscillator within a predefined error range (e.g., ±5 cm). ## Data collection efficiency The sensor array should reliably capture pressure differences in a silicone oil environment, and the data acquisition system should acquire and transmit data without significant loss or delay. ## System responsiveness The system should detect and predict the position of the oscillator within a reasonable time (e.g., less than 1 second) after the oscillator begins to vibrate. ## Reliability The device should maintain stable performance under different conditions, such as varying oscillator positions and vibration frequencies. |
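Before training a neural network, it helps to confirm that the array readings carry location information at all. The toy sketch below assumes a 1/r pressure-amplitude falloff and a hypothetical three-sensor layout, then recovers an oscillator position from simulated amplitudes by grid search; the real system would replace this physics-based search with the trained network operating on measured data.

```python
import math

SENSORS = [(-0.2, 0.0), (0.0, 0.0), (0.2, 0.0)]  # assumed array layout (m)

def amplitude(src, sensor, strength=1.0):
    """Toy model: pressure amplitude falls off as 1/r from the oscillator."""
    r = math.dist(src, sensor)
    return strength / max(r, 1e-6)

def locate(readings, grid_step=0.01):
    """Grid-search the source position that best explains the readings."""
    best, best_err = None, float("inf")
    x = -0.5
    while x <= 0.5:
        y = 0.05
        while y <= 0.5:
            err = sum((amplitude((x, y), s) - a) ** 2
                      for s, a in zip(SENSORS, readings))
            if err < best_err:
                best, best_err = (x, y), err
            y += grid_step
        x += grid_step
    return best

# Simulated check: generate readings for a known source and re-locate it.
true_src = (0.1, 0.2)
readings = [amplitude(true_src, s) for s in SENSORS]
est = locate(readings)
```

With noiseless synthetic readings the estimate lands on the true source to within the grid resolution, which matches the ±5 cm accuracy target above.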
||||||
29 | 24V Smart Battery Charging System with Health Management |
Hongda Wu Yanbo Chen Yiwei Zhao Zhibo Zhang |
proposal1.pdf |
Lin Qiu | ||
**Team Members:** Yanbo Chen (yanboc2) Hongda Wu (hongdaw2) Yiwei Zhao (yiwei8) Zhibo Zhang (zhiboz3) **Problem Overview:** Smarter charging systems have become necessary because 24V battery charging systems need greater efficiency and reliability to serve applications including electric vehicles, renewable energy storage, and industrial equipment. Existing charging systems suffer from inadequate efficiency, poor health management, and safety shortcomings. Optimizing the charging strategy requires regular health assessments to extend battery performance and life. Key problems to address: • The charging process is inefficient and charging is slow. • The system lacks built-in monitoring tools to detect battery health status. • The battery system faces multiple failure risks due to environmental exposure and component breakdowns. • The system does not effectively adapt its operation to different battery types and operating conditions. **Solution Overview:** The 24V Smart Battery Charging System combines efficient charging with health management capabilities to deliver effective, reliable battery charging. The system supports two-way operation: it both charges the battery and enables energy transfer back to the grid or to other user applications. The solution is built on four principal elements: 1. An efficient DC charging system that uses advanced power electronic components and control algorithms to achieve peak charging efficiency while reducing charging time. 2. Built-in battery health monitoring that tracks essential parameters including voltage, temperature, and charge cycles to maintain peak battery performance over time. 3. 
Sophisticated automated mechanisms for identifying and recovering from power variations, temperature changes, and other environmental issues to reduce damage. 4. A modular design that lets users upgrade components and adopt new battery technologies as they become available. **Solution Components:** 1. Power Electronics: Efficient DC-DC converters for fast charging. Built-in voltage and current regulation maintains optimal charging operation. The stage can operate as both a charging and a discharging device. 2. Health Management System: Continuous battery monitoring (voltage, temperature, and charge/discharge cycles). An intelligent algorithm establishes an adaptive charging method by analyzing a battery health index. Protection mechanisms that monitor overcharge, undercharge, and temperature conditions extend battery lifespan. 3. Safety Features: Overload protection guards the battery against destructive current and voltage levels. A fault detection system rapidly identifies potential problems. Strong environmental resistance allows operation in different environments (including varying temperature and humidity). 4. User Interface & Control: A user-friendly interface for system status monitoring and troubleshooting, with smart communication for remote monitoring and control. 5. Modular Design: Modular expansion capability accommodates future technological advances and rising capacity requirements. 
**Criterion for Success:** The project will be considered successful based on these measures: Efficiency: The system achieves charging efficiency above 90%, reducing energy waste during operation. Battery Health Management: The system provides real-time battery health measurements and autonomous charge-cycle management, increasing battery life by at least 30% over standard systems. Fault Tolerance: The system performs fast fault detection and automatic recovery, preventing both battery degradation and system-wide failure. Safety: The system includes protection against overload, short circuits, overvoltage, undervoltage, and high and low temperature conditions. Expandability: The system offers modular features that allow integration with various battery types, both present and emerging. Environmental Adaptability: The system performs reliably under different environmental factors (including temperature changes and humidity levels). |
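The adaptive charging logic described in the health management system can be sketched as one decision per control-loop tick of a CC-CV charger. All thresholds below are illustrative assumptions for a nominal 24 V pack (e.g., a LiFePO4 chemistry), not the team's actual design values.

```python
# Sketch of adaptive CC-CV charging with health-based current derating.
# Threshold values are hypothetical, for illustration only.

PACK_FULL_VOLTAGE = 29.2   # assumed CV setpoint for a 24 V LiFePO4 pack
TAPER_CURRENT = 0.5        # assumed end-of-charge current (A)
MAX_TEMP_C = 45.0          # assumed thermal cutoff

def charge_step(voltage: float, current: float, temp_c: float,
                health_index: float) -> str:
    """Return the charger action for one control-loop tick.

    A degraded health_index (0..1) derates the charge current to
    extend battery life, as the health management system describes.
    """
    if temp_c > MAX_TEMP_C:
        return "pause"                    # thermal protection
    if voltage < PACK_FULL_VOLTAGE:
        return "cc_derated" if health_index < 0.8 else "cc_full"
    if current > TAPER_CURRENT:
        return "cv_hold"                  # constant-voltage taper
    return "done"
```

Running this decision on every tick gives the constant-current phase, the constant-voltage taper, and the protective cutoffs listed in the safety criteria.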
||||||
30 | Design and Build a Spherical Bionic Tensegrity Robot |
Ruiqi Dai Yaoqi Shen Yuan Fang Yuhao Xu |
proposal1.docx |
|||
Team Members: -Yuan Fang (yuanf4) -Yaoqi Shen (yaoqis2) -Ruiqi Dai (ruiqid3) -Yuhao Xu (yuhaoxu3) Problem Rigid robots are not well suited to close contact with people or fragile objects because of their hard components, while conventional spherical robots are driven mostly by electric motors, which are often large and cannot adapt to complex environments. Solution To solve the problem, we plan to build the robot from a spherical tensegrity structure consisting of bars and strings. The strings bear only tension and the bars bear only compression. This structure is similar to the musculoskeletal system of animals. The robot will be controlled by the PCB circuit. The wireless communication system will connect the interactive interface with the PCB circuit. Additionally, a battery is fixed on the rods to power the circuit. Subsystem 1 Wireless communication will use the WiFi module integrated in the ESP32, enabling communication between devices on the same WiFi network. The circuit board acts as the server, and different response functions are designed for different client requests to control the board's outputs. Subsystem 2 The batteries power the PCB circuit and will be fixed on the rods, which may introduce a balance-weight issue. To balance the weight, we will try different types of batteries that vary in weight and size, and we will design the battery mounting locations to maintain balance. Subsystem 3 Mechanical ball structure optimization. The original tensegrity structure is realized with 6 rigid rods and 24 elastic ropes made of liquid crystal elastomers (LCEs). 
Our design goal is now to replace three of the rigid rods with integrated battery-and-PCB packages. After this change, the material substitution will alter the structure's stiffness and other static characteristics, and it will also change the robot's mode of movement. Subsystem 4 PCB circuit control design. The goal of the PCB circuit is to heat the elastomers, dynamically activating thermal pattern changes in different elastomers according to the needs of the planned path and movement. We therefore first need to design a circuit schematic that controls multiple elastomers from a single circuit board, verify it through experiments, and test the requirements for replacing a rigid rod once the board is integrated with the battery. The board will interact through the communication module. Criterion for success A stable LCE tensegrity structure with PCB and battery integrated. Untethered electronic control of robot rolling in different directions. The robot can move on the experimental surface while remaining stable and continuing to move without manual adjustment. |
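Controlling multiple elastomers from one board, as Subsystem 4 describes, amounts to sequencing heater channels so that the LCEs contract in the order needed to roll the ball. A minimal sketch of that sequencing logic follows; the channel numbering, pair grouping, and direction-to-sequence mapping are hypothetical placeholders, since the actual mapping depends on the final rod layout.

```python
# Sketch of the heater-channel sequencing the PCB could perform to roll
# the tensegrity ball. Channel groups and their ordering are assumptions.

ROLL_SEQUENCES = {
    "forward":  [(0, 1), (2, 3), (4, 5)],   # hypothetical LCE channel pairs
    "backward": [(4, 5), (2, 3), (0, 1)],
}

def heater_schedule(direction: str, steps: int):
    """Return the ordered list of channel pairs to heat for `steps` ticks,
    cycling through the chosen direction's activation sequence."""
    seq = ROLL_SEQUENCES[direction]
    return [seq[i % len(seq)] for i in range(steps)]
```

On the real board, each tick of the schedule would switch the corresponding heater outputs, with the tick length tuned to the LCE heating and relaxation times.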
||||||
31 | Drone Power System Design and Build |
Bingye He Yuyang Tian Zhuoyang Lai Zikang Liu |
proposal3.pdf |
Jiahuan Cui | ||
# Team members Zhuoyang Lai (zlai7) Yuyang Tian (yuyangt3) Zikang Liu (zikangl2) Bingye He (bingyeh2) # Problem overview The primary work involves designing the electric motor, rotor, structure, and circuitry of the electric power system. This system is intended to power a large drone capable of generating 5 kg of thrust at ground level. Achieving this goal requires a holistic approach where each component is optimized for high performance, efficiency, and lightweight construction. The motor must deliver sufficient power while maintaining efficiency and durability, while the rotor design needs to provide optimal aerodynamic performance for both takeoff and sustained flight. Meanwhile, the structural elements must be robust yet lightweight to support the overall system without compromising the drone's agility or payload capacity. The circuit design is equally critical, as it must manage high current loads, ensure stable power delivery, and integrate advanced control systems to handle the dynamic demands of flight. Together, these integrated systems must work seamlessly to meet the demanding operational requirements of a large drone, ensuring safe, reliable, and efficient performance from takeoff to flight. # Solution overview This project for drone applications encompasses four key subsystems. Subsystem 1 designs a high-torque-density BLDC motor by integrating electromagnetic, thermal, and mechanical principles, using motor design equations and MotorCAD for design and simulation, then manufacturing components like the stator, rotor, and housing, and validating performance through tests. Subsystem 2, based on the open-source VESC project, creates a custom motor control PCB with a high-performance microcontroller, gate driver, and current sensing circuitry, incorporating thermal management and communication interfaces, and programming it with modified firmware. 
Subsystem 3 manufactures the drone's rotor and support parts via the hot-press process, designing the mold and heater, and carefully layering and processing carbon fiber. Subsystem 4 designs the blades using carbon fiber composite for a high strength-to-weight ratio and aerodynamic performance, with a specific shape and number, and creates an aluminum alloy hub with a vibration-damping mechanism for a secure connection to the motor shaft. # Solution Components ## Subsystem 1 For the motor design, the aim is to design a high-torque-density BLDC motor optimized for drone applications. It combines electromagnetic, thermal, and mechanical design principles to deliver peak performance within compact dimensions. First, we use motor design equations (e.g., the torque equation) to determine core dimensions, number of turns, and magnet specifications. Then we use MotorCAD for iterative electrical and thermal simulations to optimize power flow, losses, and heat distribution. Based on the simulation, we design and manufacture all motor components. For the electromagnetic core, an 18-slot Si-steel stator is paired with 22-pole N52 NdFeB magnets and AWG 23 copper windings for high torque (2.5 N·m) and efficiency (80%). The diameter of the core is within 80 mm and the length within 40 mm. For the wiring, we use 3 phases, 8 turns, and 3 strands to reduce resistance and loss, minimize counter-electromotive force, and improve thermal conductivity. An aluminum finned housing provides heat dissipation, keeping peak temperatures below 80°C. A 6205 angular contact bearing assembly and a rigid polycarbonate enclosure provide durability and vibration resistance. Finally, we validate performance after assembly through bench tests (torque/speed measurements) and thermal imaging. ## Subsystem 2 Based on the open-source VESC project, we will design and fabricate a custom PCB for motor control tailored specifically for drone applications. 
The board will feature a high-performance microcontroller (STM32F4 series) for real-time calculations and control algorithms, a DRV8301 gate driver for efficient MOSFET switching, and current sensing circuitry for precise current monitoring. The PCB design will incorporate thermal management considerations, robust power filtering, and communication interfaces for telemetry and control. After manufacturing, the board will be programmed with modified VESC firmware optimized for aviation applications, allowing compatibility with the existing VESC Tool software for parameter tuning and monitoring. ## Subsystem 3 For the structure and rotor manufacture, we choose the hot-press process to make the rotor and support parts of the drone. We design the rotor mold and the heater for the hot-press carbon fiber process. To manufacture the rotor, we place the pre-impregnated carbon fiber layup into the mold layer by layer, with the fiber pattern oriented vertically to increase structural strength. The mold is then heated at a temperature of about 160°C for about 2-3 hours. Afterwards, the mold is gradually cooled to minimize thermal stresses, resulting in a robust, high-performance component suitable for demanding applications such as drone frames and rotor blades. ## Subsystem 4 For the blades, we use a carbon fiber composite material with a thickness-to-chord ratio of 10-12% to ensure a high strength-to-weight ratio and good aerodynamic performance. The blade shape is designed with a swept-back leading edge and a tapered trailing edge to reduce aerodynamic drag and improve the lift-to-drag ratio. The number of blades is set to 4, with each blade having a length of 120 mm and a root chord of 30 mm, tapering to 15 mm at the tip. The hub is made of aluminum alloy 6061-T6, machined for a precise fit with the motor shaft. It is designed with a central bore for shaft insertion and four evenly spaced arms to attach the blades. 
The hub also includes a vibration-damping mechanism, such as rubber inserts, to reduce the transmission of vibrations from the rotor to the drone frame. # Criteria for Success - Stability: The entire drone's structure shows no obvious damage or deformation, and the overall structure remains stable with the rotor. - Performance: Delivering ≥ 5 kg of pulling force at 3000 RPM with 80% efficiency. - Thermal Reliability and Durability: Maintaining motor temperature < 80°C under 100% load for 10 minutes and matching simulated thermal behavior in bench tests. Surviving 5 hours of cyclic testing (vibration/voltage spikes) without mechanical failure. - Manufacturability: Cost-effective assembly process (e.g., modular design for easy part replacement). Compliance with drone weight limits (e.g., total system weight < 2 kg). - Precision: The VESC-based motor control PCB must provide precise control of the motor, handle the required current load without thermal issues, and successfully interface with the VESC Tool software for configuration and monitoring. - Compatibility: The rotor should be directly compatible with the selected drone motor in terms of shaft diameter, mounting interface, and rotational speed range. It should also integrate seamlessly with the drone's power system and control electronics, without causing any interference or instability in the flight control system. Additionally, the total weight of the rotor system should be less than 0.5 kg, and the diameter should not exceed 300 mm. |
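As a quick sanity check on the motor targets above, the ideal BLDC torque constant follows from the velocity constant: Kt ≈ 60 / (2π·Kv) in N·m/A when Kv is given in RPM/V. A small sketch (the Kv and current values below are illustrative, not the team's design numbers):

```python
import math

def torque_constant(kv_rpm_per_volt: float) -> float:
    """Kt in N·m/A from the motor's Kv in RPM/V (ideal BLDC relation)."""
    return 60.0 / (2.0 * math.pi * kv_rpm_per_volt)

def torque(kv_rpm_per_volt: float, current_a: float) -> float:
    """Ideal electromagnetic torque at a given current."""
    return torque_constant(kv_rpm_per_volt) * current_a

# Example: a hypothetical 100 RPM/V motor at 26 A gives roughly the
# 2.5 N·m torque target mentioned for the electromagnetic core.
```

MotorCAD's detailed electromagnetic model replaces this ideal relation during design, but the back-of-envelope check helps bound the current the VESC board must handle.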
||||||
32 | Sensing your heartbeat (and others) |
Qiyang Wu Xin Chen Xuanqi Wang Yukai Han |
proposal1.pdf |
Howard Yang | ||
# Problem Traditional human activity monitoring systems often rely on cameras, wearable sensors, or specialized hardware, which can be intrusive, expensive, or inconvenient. However, WiFi signals, which are already ubiquitous in indoor environments, can be repurposed for non-contact human sensing. The challenge lies in accurately extracting and interpreting fine-grained Channel State Information (CSI) to detect subtle human activities, such as breathing, gestures, and potentially even heartbeats, while mitigating environmental interference. Solution Overview Our solution for utilizing WiFi as a radar is to leverage Channel State Information (CSI) to sense human activities. We achieve this by extracting fine-grained CSI signals from WiFi devices and applying signal processing techniques to interpret movement patterns. The system consists of a WiFi transmitter and receiver that continuously capture CSI variations caused by human motion. Advanced algorithms then distinguish different actions, such as heartbeats and body gestures, by analyzing phase shifts and amplitude changes in the wireless signals. This approach enables non-contact human activity sensing, making it suitable for applications in health monitoring and human-computer interaction. Solution Components Subsystem 1: WiFi Signal Transmission System The WiFi signal transmission system consists of Intel AX200 or AX210 network cards and external antennas to ensure stable and high-quality signal transmission. These components work together to provide a robust wireless communication setup necessary for collecting Channel State Information (CSI). Subsystem 2: CSI Extraction Tool/Software The CSI signal processing system extracts WiFi CSI data using Ubuntu 22.04 LTS and PicoScenes software, which enables real-time signal analysis for detecting fine-grained variations in the wireless channel. 
Subsystem 3: Human Action Recognition System The human action recognition system leverages CSI data to detect human movements by analyzing signal variations. Using MATLAB, Python, and specialized CSI analysis toolboxes, it processes amplitude and phase changes to detect different human activities accurately. Criterion for Success Accurate Respiration Detection: The system must reliably detect human breathing patterns using CSI data by analyzing amplitude and phase variations in WiFi signals. Robust Interference Mitigation: The system should effectively filter out environmental noise and external disturbances, such as movement from non-human objects or signal fluctuations caused by multipath effects. Detection of Heartbeat and Other Physiological Signals (if possible): The system should capture and differentiate finer physiological signals, such as heartbeats, using advanced signal processing techniques. Distribution of Works Xin Chen [ECE] – Signal Processing Develops signal processing algorithms to analyze CSI data, extracting key features such as amplitude and phase variations for human activity recognition. Implements filtering and denoising techniques to improve signal quality and enhance detection accuracy. Works closely with system integration to ensure seamless data flow and efficient processing of CSI signals. Qiyang Wu [EE] – System Integration and Data Transmission Manages real-time data transmission between WiFi hardware and processing units, ensuring minimal latency and packet loss. Develops communication protocols to synchronize CSI data collection with processing algorithms. Optimizes data handling and storage to support continuous CSI analysis and facilitate system scalability. Xuanqi Wang [EE] – Hardware Setup and Optimization Configures WiFi devices, antennas, and receivers to ensure stable and high-quality CSI signal collection. Optimizes antenna placement to maximize sensitivity to movement and reduce interference. 
Works on power management and circuit adjustments to ensure system reliability and efficiency in different environments. Yukai Han [ME] – Mechanical Design Designs mounting structures and enclosures to securely position WiFi devices for optimal signal reception. Ensures stability and repeatability of the setup to maintain consistency in experiments. Assists in planning and executing test scenarios, considering environmental factors that may impact CSI signal variations. |
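The respiration-detection criterion above can be illustrated with a toy estimator: breathing shows up as a slow periodic modulation of CSI amplitude, so counting upward zero crossings of the mean-removed amplitude stream gives a rough rate. This is a deliberate simplification; the real pipeline would band-pass filter around typical breathing frequencies (roughly 0.1-0.5 Hz) before estimating, and the synthetic signal below is an assumption, not recorded CSI.

```python
import math

def breathing_rate_bpm(amplitudes, fs_hz):
    """Estimate breathing rate from a CSI amplitude stream by counting
    upward zero crossings of the mean-removed signal."""
    mean = sum(amplitudes) / len(amplitudes)
    x = [a - mean for a in amplitudes]
    crossings = sum(1 for i in range(1, len(x)) if x[i - 1] < 0 <= x[i])
    duration_min = len(amplitudes) / fs_hz / 60.0
    return crossings / duration_min

# Synthetic check: a 0.25 Hz (~15 breaths/min) modulation sampled at 20 Hz
# for 60 s should yield an estimate close to 15 bpm.
fs = 20.0
sig = [1.0 + 0.1 * math.sin(2 * math.pi * 0.25 * n / fs) for n in range(1200)]
rate = breathing_rate_bpm(sig, fs)
```

On real CSI the same idea is applied per subcarrier after denoising, and estimates are fused across subcarriers for robustness against multipath fluctuations.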
||||||
33 | A 2D Model of Optical Satellite Communication System |
Jun Zheng Xuanyi Jin Yuxuan Li Zachary Zhao |
proposal1.pdf |
Pavel Loskot | ||
A 2D Model of Optical Satellite Communication System ##TEAM MEMBERS## Yuxuan Li (yuxuan43), Zhijun Zhao (zhijunz3), Xuanyi Jin (xuanyij2), Jun Zheng (junz6) ##PROBLEM## With the rapid development of aerospace and communication technologies, our demand for more multifunctional, stable, and advanced satellite communication products is increasing. Low Earth Orbit (LEO) satellites have gained widespread popularity due to their advantages, such as low latency and low deployment cost. They have shown promising potential in climate and geographical studies. With the advent of Starlink, LEO satellites have made their way into everyday households, delivering internet access to even the most remote areas. However, LEO satellites face a set of challenges. Since each satellite covers a smaller area and moves much faster than the Earth's rotation, a large constellation is required to ensure global coverage. Therefore, we believe that studying how these satellites communicate with ground stations is essential. We aim to understand the dynamics of these interactions and optimize communication efficiency. ##SOLUTION OVERVIEW## We plan to design a 2D model that simulates the movement of satellites along an orbit around the Earth. The basis of this model is two disks and a few laser transmitters and receivers. An inner disk represents the Earth, and an outer disk represents the satellites’ orbit. Both disks will rotate, representing the rotation of the Earth and the satellites’ motion along their orbit around the Earth. Laser transmitters on the inner disk and receivers on the outer disk represent satellites that receive optical signals. The receivers (satellites) can only receive the optical signal within a small scattering angle of the laser source. 
The rotation of the disks should be driven by motors under them, and there should be storage components for the satellite that preserve the signal for the following decoding process. The signal to be transferred should be converted to binary code: 1 if the laser is emitted at a specific frequency and 0 if it is not. To test the performance of this system in different situations, we need to develop a graphical user interface that can control the rotation speed of each disk and the frequency of the transmitting laser, while demonstrating the impact on the efficiency of signal transmission by showing graphs. ##SOLUTION COMPONENTS## Physical Simulation System: Orbits Simulation System: Two concentric disks, one representing Earth’s ground station, the other representing an orbiting LEO satellite. Motors with adjustable rotational speeds to mimic orbital characteristics. Signal Transmission Subsystem: A low-power laser will be mounted on the Earth disk as the signal transmitter. The corresponding receiver(s) will be attached to the satellite disk as the signal receiver(s). Software Control and Monitoring System: A software application that can manage disk speeds and the laser signal, capture real-time data, and simulate the relative motion and signal transmission of ground-satellite communication. ##CRITERIA FOR SUCCESS## The 2D optical satellite communication model must remain stable under various conditions. Users should be able to adjust the satellite speed during the simulation through a graphical user interface (GUI). The satellite receiver should be able to detect the signal with sufficient strength. The system should decode the received signal into a readable message to demonstrate that once a signal is received, it can be used for practical purposes. The model should evaluate and display the efficiency of optical signal transmission. |
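The link geometry described above — a receiver that only sees the laser while both disks rotate and their angular separation stays inside a small scattering angle — can be sketched as a quick simulation. The angular speeds and the 5° half-angle below are illustrative assumptions, not the project's chosen parameters.

```python
import numpy as np

def link_available(t, omega_earth, omega_sat, half_angle):
    """True when the receiver sits inside the laser's scattering cone.

    Both disks spin at constant angular speed (rad/s); the link is up
    when the wrapped angular separation is within `half_angle` (rad).
    """
    diff = (omega_sat - omega_earth) * t
    diff = (diff + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    return np.abs(diff) <= half_angle

# Inner (Earth) disk at 0.1 rad/s, outer (orbit) disk at 0.5 rad/s,
# laser scattering half-angle of 5 degrees -- all assumed values.
t = np.linspace(0, 60, 6001)
up = link_available(t, 0.1, 0.5, np.deg2rad(5.0))
duty = up.mean()
print(f"link duty cycle over 60 s: {duty:.1%}")
```

Sweeping the disk speeds in this model gives the kind of transmission-efficiency curves the proposed GUI is meant to display.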
||||||
34 | A smart glove for HCI |
Hongwei Dong Jinhao Zhang Shanbin Sun Zhan Shi |
proposal2.pdf |
Pavel Loskot | ||
# TEAM MEMBERS Hongwei Dong (hd2), Shanbin Sun (shanbin3), Jinhao Zhang (jinhaoz2), Zhan Shi (zhans6) # PROBLEM & SOLUTION OVERVIEW In today's society, people are increasingly interacting with smart devices such as laptops and smartphones. This trend underscores the need for innovative methods to improve the efficiency of interaction with these devices. Among the emerging solutions, smart gloves hold great promise as a means to address this need. The smart glove is able to collect the positional information of the user's fingers. It then processes the information to recognize the user's gestures and maps the recognized gestures to predefined shortcuts, thereby facilitating efficient interaction between the user and the computer. # PROJECT TITLE A smart glove for HCI # SOLUTION COMPONENTS ## Subsystem1: IMU based gesture sensing system - MPU6050, a six DOF IMU, is placed on each fingertip to collect the position and angle information. - I2C bus to communicate with the ESP32. ## Subsystem2: gesture recognition system - Raw data pre-processing to obtain high accuracy gesture - Pre-trained gesture recognition model using ESP-DL inference library - User-defined gesture shortcut map to support any type of input ## Subsystem 3: Communication System - Serialization and deserialization on both the device and host side to package the information to be transmitted in binary/JSON format. - CP2102/CH340 USB module to support USB serial communication such as UART when the glove is charging or high-bandwidth transmission is required - Bluetooth module to support Bluetooth (LE optional) communication for gesture and command transmission ## Subsystem 4: Power Management System - High energy density lithium polymer battery (e.g. 1000-3000mAh) to power the IMUs, esp32, and peripheral components, ensuring a long wireless user experience. - Step-up or step-down voltage regulator circuitry to meet the stable power requirements of each subsystem. 
# Criterion For Success - The MPU6050 IMU system should reliably collect raw gesture data, with stable I2C data transfer between the MPU6050 and the ESP32 without significant latency or data loss. - The deep learning gesture recognition model used on the ESP32 should be able to map gestures to appropriate keystrokes, enabling mouse operations and various custom functions. - The communication system should ensure seamless data transfer between the computer and mobile devices without significant latency. - The battery management system should ensure that all components receive the correct voltage, with the charge management module and power monitoring functions operating correctly. # DISTRIBUTION OF WORK - Jinhao Zhang [EE]: Responsible for the design and implementation of the power subsystem and other circuit systems, including the design, testing, and optimization of the battery charging module and power monitoring module. - Hongwei Dong [ECE]: Responsible for the raw gesture data pre-processing program, training and deployment of the gesture recognition model, and serialization/deserialization of data structures for device and host. - Shanbin Sun [ECE]: Responsible for collecting the gesture data used for training the model and for I2C bus development. Develops the user interface to define the gesture shortcut map. Develops the IMU and device drivers. - Zhan Shi [EE]: Responsible for USB and Bluetooth communication between host and device. Develops the I2C bus for IMU-to-ESP32 communication. PCB design and verification. Unit testing of each subsystem. |
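The binary serialization/deserialization the communication subsystem describes can be sketched with Python's `struct` module. The packet layout here (one byte of finger ID followed by six little-endian floats for accelerometer and gyroscope axes) is an assumption for illustration, not the glove's actual wire format.

```python
import struct

# Hypothetical packet layout: finger id (uint8), then accel (x, y, z)
# and gyro (x, y, z) as little-endian float32 -- an assumed format,
# not the project's defined protocol.
PACKET_FMT = "<B6f"

def pack_sample(finger_id, accel, gyro):
    """Serialize one IMU sample into a fixed-size binary packet."""
    return struct.pack(PACKET_FMT, finger_id, *accel, *gyro)

def unpack_sample(packet):
    """Deserialize a packet back into (finger_id, accel, gyro)."""
    vals = struct.unpack(PACKET_FMT, packet)
    return vals[0], vals[1:4], vals[4:7]

pkt = pack_sample(2, (0.0, 0.0, 9.81), (0.1, -0.2, 0.0))
fid, accel, gyro = unpack_sample(pkt)
print(len(pkt), fid, accel)
```

A fixed binary layout like this keeps each sample to 25 bytes, which matters over Bluetooth LE; the proposal's JSON option trades that compactness for readability.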
||||||
35 | Handwriting Robot With User-Customized Font Style |
Mingchen Sun Xuancheng Liu Zhixiang Liang Zifan Ying |
proposal1.pdf |
Gaoang Wang | ||
# Handwriting Robot With User-Customized Font Style ## Team Members - Zifan Ying (zifany4) - Zhixiang Liang (zliang18) - Xuancheng Liu (xl124) - Mingchen Sun (msun52) ## Problem Handwriting remains a personal and unique form of expression, yet current digital and automated writing solutions lack the ability to accurately replicate individual handwriting styles. Existing methods either rely on digital fonts that imitate handwriting or require complex, manual customizations. This project addresses the need for an automated system that can learn and reproduce a person's unique handwriting style with high fidelity. By integrating machine learning-based handwriting analysis with robotic writing mechanisms, this system enhances document personalization, enabling applications in personalized correspondence, secure document signing, and artistic reproduction. ## Proposed Solution The proposed solution is a handwriting replication system that learns a user's unique writing style and reproduces it on new documents. The system analyzes sample handwriting using computer vision and machine learning, then employs a robotic mechanism to accurately recreate the learned style onto paper with a pen. The result is realistic, customizable handwritten output that closely mimics the original writing style. ## Solution Components ### Character Learning & Generation - Utilizes computer vision and machine learning to extract and analyze the user’s handwriting style from provided sample documents. - Generates new text in the learned style based on any user-provided input. - Can run on a standard computer or an embedded microcontroller (MCU) for flexibility. ### Writing Mechanism - A two-axis robotic system that writes text with precise, controlled pen movements. - Optional third-axis control to adjust pen pressure or stroke width for enhanced authenticity of writing. - Integrated paper-feeding mechanism to ensure smooth, continuous document production. 
## Criteria of Success - The system successfully learns and replicates handwriting styles from the user’s sample documents. - The robotic writing mechanism accurately reproduces the generated text in a natural, human-like manner. - The final output closely matches the user’s real handwriting in style, spacing, and (if applicable) pen pressure. |
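The two-axis writing mechanism with an optional pen-lift third axis maps naturally onto plotter-style motion commands. As a sketch under assumptions — the robot's actual command protocol is not specified in the proposal, so G-code-like strings and the 2 mm lift height are illustrative — generated strokes could be converted like this:

```python
def strokes_to_commands(strokes, lift_height=2.0):
    """Turn a list of strokes (each a list of (x, y) points in mm)
    into simple G-code-like commands for a two-axis pen mechanism.
    The Z moves stand in for the optional third axis (pen lift)."""
    cmds = []
    for stroke in strokes:
        x0, y0 = stroke[0]
        cmds.append(f"G0 Z{lift_height:.1f}")        # lift pen
        cmds.append(f"G0 X{x0:.2f} Y{y0:.2f}")       # travel to stroke start
        cmds.append("G0 Z0.0")                       # pen down
        for x, y in stroke[1:]:
            cmds.append(f"G1 X{x:.2f} Y{y:.2f}")     # draw segment
    cmds.append(f"G0 Z{lift_height:.1f}")            # lift at end
    return cmds

# Two strokes forming a rough "T": crossbar, then vertical line.
cmds = strokes_to_commands([[(0, 10), (10, 10)], [(5, 10), (5, 0)]])
print("\n".join(cmds))
```

The learned handwriting model would emit the stroke point lists; modulating pen pressure for stroke width would replace the binary Z moves with graded depths.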
||||||
36 | Design, build and control of a jumping robot |
Hanjun Luo Siying Yu Xinyi Yang Xuecheng Liu |
proposal1.pdf |
Hua Chen | ||
## MEMBERS - Xinyi Yang [xinyiy19] - Xuecheng Liu [xl125] - Hanjun Luo [hanjunl2] - Siying Yu [siyingy3] ## Problem Jumping robots have the potential to navigate challenging terrains, access confined spaces, and operate in environments where traditional wheeled or legged robots struggle. However, achieving controlled, efficient, and multi-level jumping remains a significant challenge due to the need for precise energy storage and release mechanisms, dynamic stability, and adaptive landing strategies. ## Solution Overview To address the challenges of controlled and efficient jumping, we propose a bio-inspired jumping robot that mimics the flea’s powerful jumping mechanism. Our robot uses a spring-based energy storage system to build up and release energy efficiently, generating powerful jumps. A motor-driven control system adjusts the force applied to change jump height. The lightweight structure mimics a flea’s legs to improve force transfer while keeping the robot compact. By combining these elements, our design makes jumping more controlled and adaptable. ## Solution Components Energy Storage & Release Module: Our design adopts a spring-based energy storage system, mimicking the flea’s resilin pads to maximize energy density. A motor-driven mechanism gradually stretches the spring, storing potential energy, which is rapidly released through a triggering system to generate explosive jumping force. Actuation & Height Control Module: The jumping process is controlled by a motor-actuated system that regulates energy input and adapts to different jumping heights. By integrating a control system, the robot can adjust force application and optimize energy utilization for multi-level jumps. Structural Design: Inspired by the flea’s exoskeletal structure, our robot employs a lightweight yet high-strength frame to optimize force transmission. 
The robotic legs mimic biological multi-joint configurations, allowing efficient energy redirection and reducing stress concentration. ## Criteria of Success Multi-Level Jumping: The robot must successfully perform three distinct jump heights, achieving at least two successful attempts for each height. Instant Actuation: The robot must initiate a jump within a short response time after receiving the command. Durability: The robot should withstand multiple jumps without failure or significant performance degradation. ## Distribution of Work Xinyi Yang: Designs and optimizes the robot’s structure and leg mechanism for efficient force transmission. Siying Yu: Develops an embedded control system for the motor and circuit system to control energy storage and release. Hanjun Luo: Implements control algorithms and integrates all components and tests jumping performance for reliability. Xuecheng Liu: Implements the control algorithm and integrates all components and tests jumping performance for reliability. |
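The spring-based storage and multi-level height control above follow directly from the stored elastic energy, E = ½kx². A back-of-envelope sketch, with an assumed 800 N/m spring, a 100 g robot, and a 70% transmission efficiency (all illustrative numbers, not the project's design values):

```python
def jump_height(k, compression, mass, efficiency=0.7, g=9.81):
    """Estimate jump height from stored spring energy.

    E = 0.5 * k * x^2 is released into kinetic energy; `efficiency`
    is an assumed loss factor for the leg linkage, then h = E_eff / (m * g).
    """
    energy = 0.5 * k * compression ** 2
    return efficiency * energy / (mass * g)

# Illustrative assumed numbers: 800 N/m spring, 100 g robot.
for x in (0.03, 0.04, 0.05):
    h = jump_height(k=800.0, compression=x, mass=0.1)
    print(f"compression {x * 100:.0f} cm -> height {h:.2f} m")
```

Because height scales with the square of the spring deflection, the motor-controlled stretch distance gives the actuation module a natural knob for the three required jump levels.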
||||||
37 | Rudimentary Spherical Motor System for All-Terrain Vehicles |
Ibrahim Tayyab Zhaoyu Kang |
proposal1.pdf |
Lin Qiu | ||
Problem Traditional motors used in all-terrain vehicles (ATVs) are often bulky, inefficient, and not well-suited for rough terrain applications. There is a need for a compact, energy-efficient propulsion system that enhances mobility without increasing vehicle weight. Solution Overview The project aims to develop a prototype spherical motor system using an electromagnetic field-controlled spherical rotor. The system will integrate: - Electromagnetic Field Generation to control rotor movement. - PCB-Based Controller & Microcontroller to regulate power and ensure efficient operation. - Sensing and Feedback System (Hall sensors, optical encoders, and IMU) to provide real-time tracking of rotor position, speed, and stability. - Power Subsystem with regulated DC power for motor components and safety mechanisms. - Testing and Evaluation System to validate performance metrics such as speed, power efficiency, and stability. Criterion for Success 1. Performance Metrics: The motor must achieve a minimum rotational speed of 500 RPM and function reliably on inclined surfaces up to 15 degrees. 2. Compact & Lightweight Design: The total system weight must be under 5 kg to ensure easy integration into ATVs. 3. Energy Efficiency: The system should operate within a 200W power limit and electromagnets must function with ±5% precision in field strength, switching within 5ms latency. 4. Safety & Reliability: Implement fail-safes, proper insulation, and cooling mechanisms to prevent electrical hazards and overheating. |
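The 500 RPM target and the 5 ms switching-latency spec can be sanity-checked against each other: the electromagnets must switch faster than the interval between commutation events. A small sketch, assuming a hypothetical 8 switch events per revolution (the actual pole count is not given in the proposal):

```python
def commutation_interval_ms(rpm, events_per_rev):
    """Time between successive electromagnet switch events, in ms."""
    rev_per_s = rpm / 60.0
    return 1000.0 / (rev_per_s * events_per_rev)

# At the 500 RPM target with an assumed 8 switch events per revolution,
# check that the spec's 5 ms switching latency still fits the budget.
interval = commutation_interval_ms(500, 8)
print(f"{interval:.1f} ms between switch events")
assert interval > 5.0  # the 5 ms latency from the spec must fit
```

Under these assumptions there is a 3x margin at 500 RPM; a denser pole arrangement or a higher speed target would tighten that budget proportionally.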
||||||
38 | AIRLOOM TYPE VERTICAL AXIS WIND TURBINE |
Chengsheng Jiang Jiayao Lin Jiayi Guo Yiyang Zhou |
other1.pdf |
Jiahuan Cui | ||
Our design aims to enhance the efficiency of a vertical-axis wind turbine (VAWT) by changing the motion track of the blades to a square loop and designing a higher-efficiency energy conversion circuit. We also use 3D-printed components to decrease the construction cost. Team Members: Chengsheng Jiang (jiang98) Yiyang Zhou (yiyang27) Jiayi Guo (jiayig8) Jiayao Lin (jiayaol3) |
||||||
39 | High-efficiency resonant tank design and output voltage control for a wireless power transfer system |
Hongye Dong Liheng Jing Yuhang Wang Yuyang Wei |
proposal1.docx |
Chushan Li | ||
A wireless charging system for electric vehicles Spring 2025 Yuhang Wang Yuyang Wei Hongye Dong Liheng Jing #Problem In light of national carbon peaking and carbon neutrality goals, along with the rapid development of artificial intelligence, smart electric vehicles are a backbone force in reducing carbon emissions and mitigating air pollution, showcasing their potential to become a vital component of the future vehicle industry. However, one of the largest obstacles to the promotion of electric vehicles (EVs) is the capacity of batteries. A practical way around this problem is to build a ubiquitous charging system that is robust and efficient. As a result, companies are competing intensely to achieve better engineering designs while attracting consumers. #Solution Overview Our solution is to design and develop an efficient, autonomous wireless vehicle charging system that utilizes advanced resonance tank control and output voltage regulation. The focus is on optimizing wireless power transfer through precise coil positioning and adjusting the control of the resonance tank to maintain stable power output. Additionally, the project seeks to create a control system that enables a vehicle to navigate autonomously to a charging station, aligning itself correctly for wireless charging without human assistance. The proposal also aims to develop a complete, integrated prototype consisting of the vehicle, PCB, charging base, and control software to demonstrate autonomous routing and charging in a real-world environment. #Solution Components ##Vehicle movement control algorithm -The movement is fully controllable as demanded by the program. -The algorithm enables the vehicle to scan across accessible areas and calculate a route around walls for the vehicle to follow. -As the routing program runs, the car automatically finds its way to the station. 
##A wireless charging system between the vehicle and the charging station -Two PCB boards are applied to form a CLLC charging system. The values of the capacitors and inductors are adjusted to comply with charging needs. -A control signal is designed to drive the wireless charging system. ##A physical vehicle -The vehicle is designed to carry the microprocessors and the wireless charging receiver. -The vehicle is physically optimized to ensure better efficiency of the wireless charging. -The vehicle is capable of smoothing bumps on the road. #Criteria of Success An autonomous car with energy storage and fast speed. The wireless charging power Pin ≥ 20 W. The car can automatically align to the charging coils. The car can detect the location of the wireless charging station. Obstacles can be avoided during driving. |
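Tuning the CLLC tank's capacitor and inductor values amounts to placing the resonant frequency, f = 1/(2π√(LC)), at the intended switching frequency. A minimal sketch with illustrative component values (the 24 µH / 100 nF pair is an assumption, not the project's design):

```python
import math

def resonant_frequency(L, C):
    """Series resonant frequency of an L-C branch:
    f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative assumed CLLC branch values: 24 uH resonant inductor
# with a 100 nF resonant capacitor.
f_r = resonant_frequency(24e-6, 100e-9)
print(f"resonant frequency: {f_r / 1000:.1f} kHz")
```

Operating the inverter near this frequency is what lets a CLLC stage achieve soft switching; the output-voltage control loop then adjusts the switching frequency or phase around it to regulate power.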
||||||
40 | Automated guided vehicle for cargo delivery in factories |
Qiqian Fu Xuhong He Yuyi Ao Zhengjie Wang |
proposal1.pdf |
Jiahuan Cui | ||
# Members - Zhengjie Wang[zw65] - Xuhong He[xuhongh2] - Yuyi Ao[yuyiao2] - Qiqian Fu[qiqianf2] # Problem Cargo delivery has long been a problem in large factories due to the low efficiency, restricted working periods, and high safety risks caused by manual labor. Human-operated vehicles or manual transport methods often result in delays, errors, and inconsistent performance, especially in complex factory surroundings. Workers’ hours are limited, so 24/7 operation cannot be maintained. Additionally, heavy machinery, narrow transport channels, and dynamic obstacles increase the risk of accidents and injuries. To solve these problems, factory owners and researchers want to design an automated guided vehicle that performs better than manual labor. # Solution Overview An automated guided vehicle (AGV) needs to be designed and assembled. The vehicle needs to deliver cargo within a large factory. The vehicle needs to be equipped with a control/navigation and obstacle avoidance system. This system ensures that the vehicle can move to the ordered destination by itself safely. Moreover, the vehicle needs to lift goods of at least 10 kilograms. This AGV frees workers from moving goods from point to point; the only job they need to do is place the goods in the correct position when the AGV reaches its destination. # Solution Components ## Automated motion control mechanisms - Mechanical systems controlling motions that include lifting, turning, and acceleration. - Signal transfer system converting analog signals to digital ones. ## Path planning and efficient communication - Path planning that enables the vehicle to navigate efficiently in the factory - Efficient communication protocol for the vehicle to receive instructions from the central control system ## Obstacle detection and stopping - Lidar-based detection system for object recognition and stopping. - PointPillars network for 3D point cloud processing to detect obstacles. 
## Map Reconstruction and Localization - Using SLAM algorithms to reconstruct the scene. - Localization of the vehicle along the given route. ## Criterion for Success - Basic motions of lifting, turning, and acceleration can be accomplished by the mechanical system - The vehicle follows the designated path and is able to shift the cargo to the correct shelf - The vehicle can successfully stop when an obstacle is detected within the defined range. - The scene in the factory can be correctly reconstructed and the vehicle can be localized. |
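The "stop when an obstacle is detected within the defined range" criterion reduces to a simple check over the lidar scan. A sketch under assumptions — the 0.5 m stop range and 30° forward sector are illustrative values, and a real system would take the scan from the lidar driver rather than a list:

```python
import math

def should_stop(scan, stop_range=0.5, sector_deg=30.0):
    """Return True if any lidar return inside the forward sector
    is closer than `stop_range` (metres).

    `scan` is a list of (angle_rad, distance_m) pairs,
    with angle 0 pointing dead ahead.
    """
    half = math.radians(sector_deg) / 2.0
    return any(d < stop_range for a, d in scan if abs(a) <= half)

# A clear path, then the same scan with a box appearing 0.4 m ahead.
clear = [(math.radians(a), 2.0) for a in range(-90, 91, 5)]
blocked = clear + [(0.0, 0.4)]
print(should_stop(clear), should_stop(blocked))
```

In the full design this threshold check would run as a fast safety layer beneath the PointPillars detector, which is slower but can classify what was detected.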
||||||
41 | Dodgeball Bots |
Qingyan Li |
proposal2.pdf proposal3.pdf |
Timothy Lee | ||
# Dodgeball Bots # Members - Loigen Sodian [sodian.21] - Isaac Koo Hern En [ikoo2] - Jaden Peterson Wen [peterson.21] - Qingyan Li [qingyan5] - Putu Evaita Jnani [putu.20] # Problem Typically, practicing dodgeball requires a second party who acts as the dodgeball thrower. Unfortunately, the thrower may have imprecise aim or be entirely unavailable. Hence the need for a robot that fully replaces the function of the human thrower. A dodgeball robot that tracks humans and fires balls to hit them is often quite complex to build and involves multidisciplinary knowledge (electrical, computing, and mechanical work). Dodgeball bots may also pose safety risks to humans if the ball is misfired or the force feedback applied is too large for the apparatus to handle. Sophisticated decision making, object detection and recognition algorithms, and physical modelling need to be studied in detail to make sure that the robot is safe and smart. # Solution Overview Here we present a Dodgeball robot, a combination of dodgeball and robot. The robot’s primary function is to fire small projectiles at specific targets (e.g., based on color or specific symbols) through the use of computer vision aided by YOLO v8 machine learning. The machine is able to move independently on its own, gun reload can be manually overridden through an IR remote control, and all of the necessary components (including machine vision devices) will be mounted in the device. The machine will be powered by a 20V/5A powerbank, meaning that no external connection is necessary. # Components ## Firing - Homemade “tennis” ball launcher for firing mechanisms. Two motors aligned horizontally rotate in opposing directions to launch the ball (tennis-sized ball). The system is an open-loop system. The motors are of high RPM and reasonable torque, hence the ball will be launched quickly, and its trajectory will be a straight line. This reduces the need to determine depth/distance. 
- Vertical motor for vertical movement of the gun (shaft and the two rotating motors mentioned previously). - Reload motors control the flow of the ball from the magazine to the chamber, allowing only one ball at a time. IR remote may be used to disable the reload motors (through the Arduino) to prevent it from continuously firing at targets. - When a target is about to be shot, the Arduino control system will activate an LED and sleep for 2 seconds, after which it will start firing. This 2 second window should be enough for the other party to get ready. ## Moving - Two-axis control for gun elevation and rotation. Elevation motors elevate the gun, while the rotation motors rotate the whole turret structure, which houses the whole firing system. - Arduino (with IR sensor) to activate or deactivate reload mechanism to prevent erroneous operation. - From the AI classification, the Moving system moves the turret and the gun elevation to center the target on the camera (which is located on the gun barrel). ## Vision - Thermal camera to capture the surroundings and feed it to the AI for classification. We classify targets if their heat signature deviates significantly from neighboring pixels. - Jetson Nano for AI computations. - Python to create and run AI model (YOLO v8). ## Body - Chassis structure to hold all the components together. - Armor to shield devices from environment, with holes made to ensure air-flow. - Cooling fan inside for electronic devices. - Power bank for power supply of the electronics. # Criteria of Success ## Stable operation of the design - The robot must function autonomously without malfunction throughout its operation. - It should reliably track human movement and execute precise ball-firing actions without errors or unexpected shutdowns. - The system should incorporate error handling mechanisms to prevent escalating the problem to a potential injury device. 
- An example would be to immediately stop firing if a software problem is detected or when there are no bullets left in the magazine, to prevent motor strain. - If the motors, processor, or firing mechanism exceed certain safe operating temperatures, the system should pause operation and cool down before continuing. - A built-in physical emergency stop button for immediate shutdown if unexpected behavior occurs. ## Positioning accuracy of the gun - The robot must achieve ≥80% firing accuracy when targeting moving humans within a 5-meter range under standard (flat terrain, good visibility) conditions. ## High targeting accuracy - The system should use reliable sensors and a well-trained detection model to enhance accuracy and prevent false positives. Stationary targets should be detected with an accuracy of ≥80%, moving targets ≥70%. ## Cost efficiency of the entire project - The entire project, including hardware, software, and assembly, must be completed within a budget of 1000 RMB. - The design should be efficient in power consumption (since it is battery-powered) and material usage, e.g., by reusing components from previous projects. |
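The Moving subsystem's job — steering the turret so the detected target sits at the centre of the barrel camera — can be sketched as a proportional controller on the pixel error. The gain, deadband, and 640x480 frame size are assumed values for illustration, not the project's tuning:

```python
def centering_command(bbox_center, frame_size, kp=0.002, deadband_px=10):
    """Map the target's pixel offset from the frame centre to pan/tilt
    rate commands (rad/s) with a simple proportional law.

    A deadband stops the turret from hunting around small errors.
    """
    cx, cy = bbox_center
    w, h = frame_size
    ex, ey = cx - w / 2.0, cy - h / 2.0
    pan = -kp * ex if abs(ex) > deadband_px else 0.0
    tilt = -kp * ey if abs(ey) > deadband_px else 0.0
    return pan, tilt

# YOLO reports a target left of centre in a 640x480 frame.
pan, tilt = centering_command((200, 240), (640, 480))
print(pan, tilt)
```

Because the launcher fires along a straight line from the camera axis, centring the target in the frame is what makes depth estimation unnecessary, as the firing section notes.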
||||||
42 | Human-Robot Interaction for Object Grasping with Mixed Reality and Robotic Arms |
Jiayu Zhou Jingxing Hu Yuchen Yang Ziming Yan |
proposal3.pdf |
Gaoang Wang | ||
Human-Robot Interaction for Object Grasping with Mixed Reality and Robotic Arms #Team Members: Student 1 jiayu9 Student 2 zimingy3 Student 3 yucheny8 Student 4 hu80 #Problem Current robotic systems lack intuitive and seamless human-robot interaction for object manipulation. Traditional teleoperation methods often require complex controllers, making it difficult for users to interact naturally. With advancements in Mixed Reality (MR) and robotic systems, it is possible to develop an intuitive interface where users can manipulate objects in a virtual space, and a robotic arm replicates these actions in real-time. This project aims to bridge the gap between human intention and robotic execution by integrating MR with robotic grasping, enabling precise and efficient remote object manipulation. #Solution Our solution involves creating a Mixed Reality-based control system using Microsoft HoloLens, allowing users to interact with virtual objects via hand gestures. These interactions are then translated into real-world robotic grasping motions using a robotic arm. The system consists of three key subsystems: (1) Digital Twin Creation, (2) MR-based Interaction & Control, and (3) Robotic Arm Execution. This approach ensures seamless synchronization between virtual and real-world interactions, improving accessibility and usability for robotic object manipulation. #Solution Components Subsystem 1: Digital Twin Creation This subsystem focuses on generating accurate 3D models of real-world objects for use in Mixed Reality. Components: RealityCapture Software – for photogrammetry-based 3D model generation. Gaussian Splatting – for efficient and high-fidelity neural rendering of objects. Camera (e.g., DSLR or Smartphone with high resolution) – to capture ~100 images per object. Blender/Meshlab – for 3D model optimization and format conversion. Unity with MRTK (Mixed Reality Toolkit) – to integrate digital twins into MR. 
Subsystem 2: Mixed Reality Interaction & Control This subsystem enables users to interact with digital twins via Microsoft HoloLens. Components: Microsoft HoloLens 2 – to provide an immersive MR experience. MRTK (Mixed Reality Toolkit) in Unity – for hand tracking and object interaction. Azure Kinect (optional) – for improved depth sensing and object recognition. Custom Hand Gesture Recognition Algorithm – to detect and map user actions to grasping commands. Subsystem 3: Robotic Arm Execution This subsystem translates user interactions into real-world robotic grasping. Components: Robotic Arm (e.g., UR5, Kinova Gen3, or equivalent) – for object grasping. ROS (Robot Operating System) with MoveIt! – for motion planning and control. Unity-to-ROS Bridge (WebSocket or ROSBridge) – for communication between HoloLens and ROS. Custom Grasping Algorithm – to ensure stable and efficient object manipulation. External Camera for Robot Arm Reference – to assist with object localization and depth perception, improving grasping accuracy. #Criterion for Success --Successfully generate and import at least 10 digital twin objects into Mixed Reality. --Users should be able to interact with objects using hand gestures tracked by HoloLens. --The system should accurately map hand gestures to robotic arm movements in real-time. --The robotic arm should replicate the grasping motion within 2 minutes of user interaction. --Ensure seamless integration between MR and robotic control, with minimal latency. --Conduct a successful live demonstration showing MR-based grasping and real-world execution. |
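The Unity-to-ROS bridge exchanges JSON messages in the rosbridge protocol style; a grasp target from HoloLens could be sent as a `publish` operation carrying a `geometry_msgs/Pose`. This sketch only builds the message; the `/grasp_target` topic name and pose values are hypothetical, and a real client would send the string over a WebSocket to the rosbridge server:

```python
import json

def make_pose_message(topic, position, orientation):
    """Build a rosbridge-style 'publish' message carrying a
    geometry_msgs/Pose for the grasp target."""
    x, y, z = position
    qx, qy, qz, qw = orientation
    return json.dumps({
        "op": "publish",
        "topic": topic,
        "msg": {
            "position": {"x": x, "y": y, "z": z},
            "orientation": {"x": qx, "y": qy, "z": qz, "w": qw},
        },
    })

# Grasp pose 0.4 m ahead of the arm base, identity orientation.
msg = make_pose_message("/grasp_target", (0.4, 0.0, 0.2), (0, 0, 0, 1))
print(msg)
```

On the ROS side, a subscriber on the same topic would hand this pose to MoveIt! for motion planning, which is the round trip the latency criterion measures.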
||||||
43 | Autonomous Transport Car |
Xubin Shen Yiqi Tao |
proposal1.pdf |
Chushan Li | ||
Project: Autonomous Transport Car Team Members: Ma Jingyuan (674072315) Xubin Shen (677258677) Tao Yiqi (670182981) Zhang Haotian (676598571) Problem Overview: In recent years, the demand for autonomous goods transport systems has been growing, and people are seeking ways to improve efficiency in logistics. Traditional retrieval methods are generally manual. Workers need to deliver packages by hand, which is exhausting and time-consuming, with low reliability. Existing solutions often depend heavily on human operations, lacking automation in identifying, selecting, and transporting items. This limits the further development of the logistics industry. Besides, there are also big challenges in training transport devices to find and follow a proper path accurately. Additionally, a convenient way for users to issue instructions and receive feedback is also necessary. For these problems, we propose an autonomous transport car that can grab goods and deliver them, with intelligent object recognition based on color and an accurate navigation system. This project aims to design an autonomous system for searching, grabbing, and transporting designated items, improving efficiency and reducing dependence on human labor. Solution Overview: The Autonomous Transport Car project aims to develop an intelligent vehicle capable of autonomously searching for and transporting specified goods. This solution integrates advanced technologies such as autonomous driving, motor systems, mechanical manipulation, and computer vision to achieve efficient and reliable operation. Autonomous Driving System: The design includes an autonomous driving system that navigates through the environment using sensors and algorithms. It follows a preset ground trajectory, including obstacle detection and avoidance features, to move to the designated platform. 
Motor Power System: The vehicle is equipped with an efficient and stable motor power system, utilizing power electronic components and control algorithms. Gripping Structure: The mechanical structure is designed and assembled to pick up goods from shelves. It can adjust the gripping force according to the size and shape of the items, and this structure is controlled by motors and actuators. Camera Recognition: The vehicle is equipped with a camera recognition system that can identify the types and colors of goods on the shelves, locate and select the specified items. Solution Components: 1. Autonomous Driving System - Follows preset trajectories via IR sensors and PID control. - Avoids obstacles using ultrasonic sensors and reroutes dynamically. 2. Camera Recognition System - Raspberry Pi Camera Module V2 with OpenCV for color-based item detection. 3. Gripping Mechanism - 2-DOF servo-driven gripper with pressure feedback, tailored for lightweight boxes. 4. Communication & Control - Arduino Mega handles motor control and sensor data. - HC-05 Bluetooth module enables app-based commands (e.g., “return to base”). 5. User Interface - Mobile app with minimalistic UI for issuing commands and receiving status updates. Project Goals Successful outcomes will include: 1. Functional Hardware Prototype with Technical Specifications - A 4-wheel modular chassis using 12V geared DC motors, controlled by an L298N motor driver. - Computational Units: Raspberry Pi 4B (4GB RAM) for computer vision (OpenCV-based color detection) and high-level navigation logic. Arduino Mega for low-level motor control, sensor interfacing (e.g., ultrasonic, IR), and gripper actuation. - Sensors: 3x TCRT5000 IR sensors for line following. 2x HC-SR04 ultrasonic sensors for obstacle detection. FlexiForce pressure sensors on the gripper for force feedback. - Actuators: 2-DOF servo-based gripper (SG90 servos) optimized for lightweight (≤500g), box-shaped items. 
- Power: Dual 7.4V LiPo batteries (separate power supply for motors and logic units). - Communication: HC-05 Bluetooth module for app integration. 2. Reliable Software Implementation - Autonomous Navigation: PID-controlled line tracking using IR sensors. Obstacle avoidance via ultrasonic sensors with dynamic path recalculation. - Object Recognition: Color-based identification (targeting specific HSV ranges) using Raspberry Pi Camera Module V2. Localization within a 1m x 1m shelf area. - App Integration: Basic command interface (e.g., “retrieve red item”) with real-time status feedback via Bluetooth/Wi-Fi. 3. Scope and Success Metrics - Functional Limitations: Gripping mechanism designed for standardized, rigid items (no fragile/irregular shapes). Navigation restricted to flat indoor environments with clear line markings. - Demonstrated Outcomes: End-to-end operation in a 5m x 5m test area: Identify target item → Plan path → Grasp → Transport → Deliver. ≥85% success rate across 20 trials. Expectations for Team Members Attend all meetings prepared (e.g., review agendas, complete assigned tasks). Communicate progress, blockers, or delays proactively (no “radio silence”). Respect deadlines; renegotiate timelines early if conflicts arise. Provide constructive feedback during reviews and respond openly to critiques. Document work thoroughly for seamless handovers. Escalate risks (e.g., technical hurdles, miscommunication) immediately. |
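The PID-controlled line tracking with three TCRT5000 IR sensors can be sketched as follows: the sensor triple is collapsed into a signed line-position error, and a PID step turns that error into a steering correction. The gains and 20 ms loop period are assumed values for illustration, not the project's tuning:

```python
def line_error(ir):
    """Weighted position of the line under three IR sensors
    (left, centre, right); 1 means the line is detected there."""
    weights = (-1.0, 0.0, 1.0)
    active = [w for w, hit in zip(weights, ir) if hit]
    return sum(active) / len(active) if active else 0.0

class PID:
    """Textbook discrete PID with a fixed time step."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev = 0.0

    def step(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.02)  # assumed gains, 50 Hz loop
# Line drifting to the right: only the right sensor fires.
steer = pid.step(line_error((0, 0, 1)))
print(f"steering correction: {steer:.3f}")
```

On the actual vehicle this loop would run on the Arduino Mega, with the steering output mapped to a differential speed command for the L298N-driven wheel motors.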