Projects
# | Title | Team Members | TA | Documents | Sponsor | |
---|---|---|---|---|---|---|
1 | 3D Scanner |
Chenchen Yu Jiayi Luo Peiyuan Liu Yifei Song |
Xinyi Xu | proposal1.pdf |
Pavel Loskot | |
# Team Members
Yifei Song (yifeis7), Peiyuan Liu (peiyuan6), Jiayi Luo (jiayi13), Chenchen Yu (cy32)
# 3D Scanner
# Problem
Our goal is to design an algorithm that generates a 3D model from multiple 2D photos of an object taken at different positions with a mobile phone. At the same time, we will design a mechanical rotating device that allows the mobile phone to rotate 360 degrees around the object and move up and down on a bracket.
# Solution Overview
Our solution for reconstructing the 3D topology of an object is to build a mechanical rotating device and develop an image processing algorithm. The mechanical rotating device contains a reliable holder that can steadily hold a phone of regular size, and an electric motor fixed at the center of the system that rotates the holder 360 degrees at a constant angular velocity.
# Solution Components
## Image processing algorithm
- The algorithm should perform feature detection, which is essential for image processing: it must accurately identify and extract relevant features of the object from multiple 2D images, including edges, corners, and key points (a sketch of this step is given after this entry).
- The algorithm should minimize memory use and energy consumption, because mobile phones have limited memory and battery capacity.
## Mechanical rotating system
- A phone holder that can adjust its size and orientation to hold a phone steadily.
- A holder base with wheels that allows the holder to move smoothly on a surface.
- An electric motor that rotates the holder at a constant angular velocity.
- A central platform on which the object is placed.
- A remote-control device to control the position of the central platform. Different types of motors can be used for the up-and-down motion, such as stepper motors, servo motors, DC motors, and AC motors.
# Criterion for Success
- Accuracy: The app should produce a 3D model that is as faithful to the real object as possible, with minimal distortion, errors, or noise.
- Speed: The app should capture and process the 3D data quickly, without requiring excessive time or processing power from the user's device.
- Output quality: The app should produce high-quality 3D models that can be easily exported and used in other software applications or workflows.
- Compatibility: Any regular phone can be placed and fixed on the phone holder at a set angle without coming loose.
- Flexibility: The holder with a phone must rotate 360 degrees smoothly at a constant angular velocity, without violent trembling.
# Distribution of Work
- Yifei Song: Design a mobile app and deploy a modeling algorithm to it that enables image acquisition and 3D model output on mobile devices.
- Peiyuan Liu: Design an algorithm for reconstructing 3D models from multi-view 2D images.
- Jiayi Luo: Design the remote-control device and use electric motors to control the central platform of the mechanical rotating system.
- Chenchen Yu: Design the mechanical part; build, test, and improve the mechanical rotating system to make sure the whole device works together. |
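The feature-detection step above can be prototyped on a desktop before being ported to the phone. Below is a minimal sketch, assuming OpenCV is installed and using two placeholder image file names, that detects ORB keypoints in two neighboring views and matches them; such correspondences are what a structure-from-motion pipeline would later triangulate into 3D points.

```python
# Sketch of the feature-detection/matching step (assumes OpenCV; file names are placeholders).
import cv2

img1 = cv2.imread("view_000.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_010.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)          # ORB: fast corner/edge features suited to mobile use
kp1, des1 = orb.detectAndCompute(img1, None)  # keypoints + binary descriptors, view 1
kp2, des2 = orb.detectAndCompute(img2, None)  # keypoints + binary descriptors, view 2

# Brute-force Hamming matching with cross-check keeps only mutually best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(kp1)} / {len(kp2)} keypoints, {len(matches)} matches")
# The matched pixel pairs would feed an essential-matrix / triangulation step
# (e.g. cv2.findEssentialMat + cv2.recoverPose) to build the 3D point cloud.
```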
||||||
2 | A Desktop-Size Environment-Controlled Greenhouse for Multi-Variable Optimization of Crop Growth |
Haoyu Qiu Taoran Li Ze Yang Zhimin Wang |
Qi Wang | design_document1.pdf proposal2.pdf |
Wee-Liat Ong | |
TEAM MEMBERS: Zhimin Wang (zhiminw2@illinois.edu 3180110982), Ze Yang (zeyang2@illinois.edu 3180111602), Taoran Li (taoranl2@illinois.edu 3180110750), Haoyu Qiu (haoyuq2@illinois.edu 3190110672) A DESKTOP-SIZE ENVIRONMENT-CONTROLLED GREENHOUSE FOR MULTI-VARIABLE OPTIMIZATION OF CROP GROWTH PROBLEM: Greenhouse production plays a significant role in modern agriculture, especially in densely populated areas such as eastern China. The large-scale and medium-scale greenhouses are a productive system that allows us to respond to the growing global demand for fresh and healthy crops throughout the year, which is widely applied in agricultural production. Traditionally, small-scale greenhouses are usually used in agricultural experiments. Researchers cultivate their plants in a modular environment-controlled greenhouse, to gather data on the state of crop growth in a highly specified and optimized environment. However, in most cases, traditional greenhouses are not intended for ordinary consumers. Several obstacles remain to be solved for a customer greenhouse product: 1. Too large size, excessive energy consumption, not appliable for household use. 2. It is very inconvenient to install and carry away, making it unsuitable for customers to use. 3. The greenhouse environment is not easily controlled because its climate parameters are interrelated. 4. There is no full-featured app to adapt to product use. SOLUTION OVERVIEW: To solve the problems mentioned above, we plan to design a desktop-size environment-controlled greenhouse that can be used for ordinary customers. To reduce its size and energy consumption, only the necessary components would be kept in the product. The product is a cube space with an environment-controlling system. All the control functions will be implemented through the app on the mobile phone. The whole product's size is strictly controlled to be desktop-level. The energy consumption should be limited to about the same as general household appliances. SOLUTION COMPONENTS: 1. Main planting cube. The model should be able to hold a fully functional environment-controlling system. 2. The environment-controlling system includes: adjustable LEDs, a temperature controlling system, water waste collection & disposal, a filter for the input gas, a fan for outputting the fresh air, and a camera to monitor the plants. 3. Environment detectors: a. Temperature. b. Illumination detector. c. Air quality. 4. A mobile phone app that can receive the data and adjust the settings. CRITERION FOR SUCCESS: 1. Desktop-level appliance with appropriate size & energy consumption. 2. A main controlling system based on STM32. 3. Fully functional environmental parameters detection. 4. App to control the function of the product. DIVISIONS OF LABOR AND RESPONSIBILITIES: All members would contribute to the design and process of the project. Taoran Li will be responsible for the model design including CAD modeling. He will cooperate with Haoyu Qiu, who is responsible for the main control system design. They will mainly be responsible for the hardware part. Zhimin Wang and Ze Yang will mainly participate in the app development. Zhimin Wang will be mainly responsible for the API and interface between hardware and software. Ze Yang will be responsible for the software design. Everyone should be responsible for their parts of the written work. Finally, testing would be held by all of us together. |
||||||
3 | High Noon Sheriff Robot |
Shuting Shao Yilue Pan Youcheng Zhang Yuan Xu |
Yutao Zhuang | design_document1.pdf proposal1.pdf |
Timothy Lee | |
# MEMBERS:
- Yuan Xu [yuanxu4]
- Shuting Shao [shao27]
- Youcheng Zhang [yz64]
- Yilue Pan [Yilvep2]
# TITLE: HIGH NOON SHERIFF ROBOT
# PROBLEM:
With the increasing number of armed attacks and shooting incidents, security upgrades for public places need to be put on the agenda. We cannot expect police and security guards to do all the work, since a human might miss a small threatening action in a crowd of hundreds of people and cannot respond quickly enough; a second of hesitation might cost an innocent life. Our team aims to change this situation, since nothing matters more than saving lives, those of the victims as well as of the gunmen. We found inspiration in old Western movies: when two cowboys face off in a high-noon duel, the sheriff draws his revolver faster than the other and tries to warn him before it is too late. We want to develop a robot that can detect potential threats and draw its weapon first, in order to warn the criminal to abandon the crime, or use non-lethal ammunition to take him down if he continues to draw his gun.
# SOLUTION OVERVIEW:
In order to achieve effective protection in a legal way, we have developed the idea of a security robot. The robot can quickly detect dangerous people and fire a gun loaded with non-lethal ammunition to stop dangerous events. The robot should satisfy the following behavioral logic:
- When the dangerous person is acting normally and there is no indication of impending danger, the robot should remain in standby mode with its arm away from the gun.
- When the dangerous person is in a position ready to draw his gun, or shows other indications of dangerous behavior, the robot should also be in a drawn position with its arm already clutching the gun.
- When the dangerous person touches his gun, the robot should immediately draw its gun, cock the hammer, and finish aiming and firing to control the dangerous person.
This type of robot needs three subsystems: a detection system, an electrical control system, and a mechanical system.
# SOLUTION COMPONENTS:
## [SUBSYSTEM #1: DETECTION SUBSYSTEM]
This subsystem consists of a camera and a PC. We are going to use YOLOv5 to detect objects and determine the positions of the person and the gun, DeepSORT to track the target so that the camera follows the opponent, and SlowFast to classify the opponent's behavior. (A minimal detection sketch is given after this entry.)
## [SUBSYSTEM #2: ELECTRICAL CONTROL SYSTEM]
This subsystem consists of an STM32, two high-speed motors, two gimbal motors, one motor for the revolver action, and a position sensor. The STM32 serves as the controller for the motors. The high-speed motors move the mechanical gripper to grab the revolver and pull it out as fast as possible, using the position sensor as an end stop instead of PID control. The gimbal motors provide yaw and pitch motion for the revolver to control its accuracy, so they need encoders for angle feedback.
## [SUBSYSTEM #3: MECHANICAL SYSTEM]
This subsystem consists of a three-degree-of-freedom robot arm and a clamping mechanism fixed to the end of the arm. The clamping mechanism grips the gun, cocks the hammer, and pulls the trigger. The arm lifts and aims the gun.
# CRITERION FOR SUCCESS
- Move fast. The robot must draw its gun and aim faster than the opponent.
- Warning first. If the opponent's hand moves close to the gun on his waist, the robot should draw its gun and aim at the opponent without firing. If the opponent gives up drawing the gun and surrenders, the robot should put its gun back in place. Otherwise, the robot will shoot at the opponent.
- Accurate shooting. Even if the opponent moves, the robot must accurately shoot the opponent's torso.
# DISTRIBUTION OF WORK
- EE Student Shuting Shao: Responsible for object detection and object tracking.
- EE Student Yuan Xu: Responsible for behavior detection and video processing.
- EE Student Youcheng Zhang: Responsible for the electrical control system.
- ME Student Yilue Pan: Responsible for the mechanical system. |
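As a rough illustration of Subsystem #1, the sketch below loads YOLOv5 through torch.hub and prints detections for a single frame. The custom weight file `sheriff.pt` (assumed to be trained on person and gun classes) and the frame file name are placeholders; the stock COCO model has no gun class, so custom training would be required.

```python
# Minimal person/gun detection sketch (assumes the ultralytics/yolov5 repo and custom
# weights "sheriff.pt" trained on person + gun classes; both are assumptions here).
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="sheriff.pt")
model.conf = 0.5                      # confidence threshold

results = model("frame.jpg")          # one camera frame (placeholder file name)
for *xyxy, conf, cls in results.xyxy[0].tolist():
    label = model.names[int(cls)]
    x1, y1, x2, y2 = xyxy
    print(f"{label}: conf={conf:.2f}, box=({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f})")
# The per-frame boxes would be handed to DeepSORT for tracking and, together with
# hand/gun proximity, to the behavior-classification stage that arms the gimbal.
```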
||||||
4 | Electromagnetic Launch System with Switchblade Drone |
Ruike Yan Shuyang Qian Xinyu Xia Zheng Fang |
Adeel Ahmed | design_document1.pdf proposal1.pdf |
Jiahuan Cui | |
# TEAM MEMBERS: Shuyang Qian (sq8), Zheng Fang (zhengf4), Xinyu Xia (xinyux4), Ruike Yan (ruikey2)
# TITLE OF THE PROJECT: Electromagnetic Launch System with Switchblade Drone
# PROBLEM:
The switchblade UAVs in use today tend to use pneumatics for power, which limits their launch speed, cost, and portability. Electromagnetic technology can improve the design. This project aims to develop an electromagnetic launch system that can launch a switchblade drone reliably.
# SOLUTION OVERVIEW:
The project involves the development of an electromagnetic launch system and a switchblade drone. The launch system is designed to propel a fixed-wing drone to a relatively high speed using electromagnetic forces. The drone is equipped with a folding-wing mechanism that allows it to be housed within the launching track during launch and then deployed for flight after exiting the launch system. The main steps of the project are:
- Design and construction of the launch system
- Development of the foldable wing mechanism
- Integration of subsystems
- Testing and validation
Overall, the project's success will depend on the effective implementation of these solutions, which will require careful planning, design, and testing to achieve a functioning electromagnetic launch tube with a switchblade drone.
# SOLUTION COMPONENTS:
The solution will consist of the following components:
- Electromagnetic launch system: multiple sets of acceleration coils, a base to hold the coils, a base with a guide slot for the horizontal movement of the ejection ram, and a launch cart to hold the drone.
- Switchblade drone: the main body of the drone, a pair of foldable wings, a folding device powered by a torsion spring, and an attachment device connecting the drone to the ejection ram.
- Electrical control system: mainly controls the charging and discharging of the coils; the main components are Hall-effect sensors, N-channel power MOSFETs, MOSFET heatsinks, high-speed power MOSFET drivers, resistors, and a momentary switch.
# CRITERION OF SUCCESS:
The success of the project will be determined by the following criteria:
- Portability: whether the system is small and portable enough to be carried in a suitcase or other box.
- Speed of the launched plane: the speed must be high enough that the plane can travel a sufficient distance and realize additional functions.
- Safety: the system should not endanger the operator or other people around it; potential dangers include mechanical scratches and electrical leakage.
- Stability: the success rate of launching the plane, and the route of the plane after each launch, should be consistent.
# DISTRIBUTION OF WORK:
- Shuyang Qian (ME): Responsible for designing and constructing the mechanical part of the electromagnetic launch system, including the guide rails, fixing parts, and installation of the coils.
- Zheng Fang (ECE): Responsible for designing and soldering the circuit that controls the charging and discharging of the coils.
- Xinyu Xia (ME): Responsible for designing and constructing the switchblade drone, which can be accelerated by the electromagnetic launch system and whose foldable wings must deploy reliably.
- Ruike Yan (EE): Responsible for designing the control system of the switchblade drone, which lets the drone continue to fly after leaving the electromagnetic launch system. |
||||||
5 | VTOL Drone with Only Two Propellers |
Jinke Li Qianli Zhao Tianqi Yu Yanzhao Gong |
Muhammad Malik | design_document1.pdf proposal1.pdf |
Jiahuan Cui | |
# **TEAM MEMBERS:**
- Yu Tianqi (tianqiy3)
- Li Jinke (jinkeli2)
- Gong Yanzhao (yanzhao8)
- Zhao Qianli (qianliz2)
# **TITLE: VTOL DRONE WITH ONLY TWO PROPELLERS**
# **PROBLEM:**
Nowadays, drones, as an important carrier of new technology and advanced productivity, have become a vital part of the development of new forms of aviation. They are used in many areas, such as military, civilian, and commercial applications. Traditional drones such as helicopters have shortcomings in flight speed, while fixed-wing aircraft require a runway for takeoff and landing. Vertical takeoff and landing (VTOL) aircraft have the accessibility and flexibility of helicopters to take off and land in small spaces, so they can fly to destinations that are not easily reached by traditional aircraft, such as remote areas or areas with poor infrastructure. The VTOL design also allows faster deployment and response times, which is especially important in emergency situations where every second counts. Additionally, the simpler construction of this drone not only reduces overall cost but also requires less energy, allowing longer flight times. Overall, VTOL aircraft offer a level of flexibility and efficiency that traditional aircraft cannot match, making them a valuable tool in a variety of industries, including transportation, military, and emergency services.
# **SOLUTION OVERVIEW:**
We plan to design a small VTOL UAV with a wingspan of about one meter that achieves both vertical takeoff and landing and horizontal flight like a fixed-wing aircraft, by means of a horizontal tail and rotatable propellers located at the ends of the main wings. These two flight modes and the transition between them require very precise perception and adjustment of the aircraft's attitude. To do this, we need a high-frequency control board and gyroscopic sensors to receive and process the aircraft attitude information and make feedback adjustments. This places high demands on the control section, and also on the mechanical side, which must ensure structural rigidity and reduce unpredictable jitter in the wings and other components, thereby reducing additional attitude adjustments. We also need to give careful thought to the design of the rotatable propeller section: it is important to reduce the inertia of the rotating parts while keeping the structure simple and reliable. For our aircraft, the arrangement of internal electronics and storage space has a large impact on the center of gravity. While designing an aircraft structure with sufficient strength, we also consider the placement of each electronic component, heat dissipation, sufficient storage space, a degree of water resistance, ease of maintenance, etc. We believe that with the cooperation of team members from different disciplines, each of us can take responsibility for a sub-project while taking full account of the other sub-projects to complete the overall design.
# **SOLUTION COMPONENTS:**
**VTOL Control Subsystem:** Unlike the traditional runway takeoff, vertical takeoff and landing frees our drone from dependence on a runway. This subsystem uses the GY-521 breakout board of the MPU6050 6-degree-of-freedom IMU, which gives adequate measurement precision to stabilize the drone. We use a Teensy 4.0 as our microcontroller, programmed through the Arduino toolchain (Teensyduino). After we assemble the hardware, we will write the control code in Arduino/C++ and upload it to the Teensy 4.0 board using the Arduino IDE. The drone will use the rotating lift propellers to achieve vertical takeoff and landing, relying on the torque output of the motors adjusted according to the feedback information from the IMU.
**Power Subsystem:** The power system provides sufficient power for takeoff and the subsequent flight of the drone. It mainly includes two motors, two electronic speed controllers, two propellers, and batteries. In our VTOL drone we plan to use SunnySky V2216 KV800 brushless motors, each of which can provide a maximum thrust of about 1360 g. Based on the working current, we chose 30 A electronic speed controllers and 7.4 V batteries.
**Mechanical Subsystem:** This system is the main structure of the drone, housing the other subsystems. It is also a vital part, providing lift when the drone is in level flight. It consists of the wings, fuselage, and tail. We plan to use lightweight PLA to 3D print the wings and other small parts, and to laser-cut glass-fiber plate for the fuselage. Carbon-fiber rods are used in the wings to support the 3D-printed parts.
**Center-of-Gravity Adjustment Subsystem:** This subsystem consists of a gyroscope and the Teensy 4.0 board, which detect the position of the drone's center of gravity in real time and transmit the information to the board. The board calculates and transmits the proper angles to the servos so that the drone can fly smoothly.
**Feedback Control Subsystem:** This subsystem ensures that the drone maintains a stable flight path and does not deviate from its target orientation. The system compares the current and target orientations and adjusts each propeller's angle accordingly to reduce the error. A PID controller determines the necessary adjustments, which are then sent to the propellers via servo motors to adjust the blade angles. This process repeats continually while the drone is flown. (A minimal PID sketch is given after this entry.)
**Flight Mode Adjustment Subsystem:** This subsystem contains two servos, the Teensy 4.0 board, the drone remote control, and the receiver. When the UAV receives a signal to switch from vertical flight mode to horizontal flight mode, it turns the angles of the servos so that a horizontal force is generated to move the UAV in the horizontal direction.
# **CRITERION FOR SUCCESS:**
- Flight performance: The drone should be able to take off and land vertically, as well as hover and maneuver smoothly in the air. It should also have sufficient range and flight time to perform its intended function.
- Payload capacity: The drone should be able to carry the required payload, such as a camera, sensors, or a delivery package, while maintaining stability and flight performance.
- Safety: The drone should be designed with safety in mind, including proper wiring, motor placement, and redundancy to prevent crashes or malfunctions.
- Reliability: The drone should be built with high-quality components and tested thoroughly to ensure that it operates reliably and consistently over time.
- Cost-effectiveness: The drone should be designed and built in a cost-effective manner, using affordable components and minimizing unnecessary features or complexity.
# **DISTRIBUTION OF WORK**
## ME STUDENT Yanzhao Gong:
- Print and assemble the mechanical parts of the drone.
- Participate in the design of the rotating mechanism of the two propellers and the follow-up improvements.
## EE STUDENT Qianli Zhao:
- Adjust and control the propeller angles when the drone transitions from vertical takeoff to horizontal flight.
- Use the gyroscope to detect and adjust the center of gravity of the drone in time.
## ECE STUDENT Li Jinke:
- Participate in the electrical design of the drone; complete the soldering, assembly, and debugging of the electronic control hardware.
- Implement and debug the control algorithm code for vertical takeoff and landing.
## ME STUDENT Tianqi Yu:
- Design the fuselage structure, using glass-fiber plate, carbon-fiber rods, and PLA 3D printing to achieve a lightweight, high-strength fuselage.
- Participate in the design of the rotating mechanism of the two propellers at the ends of the wings. |
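To make the Feedback Control Subsystem concrete, here is a minimal single-axis PID sketch in Python; the gains, loop rate, and output clamp are illustrative assumptions, and the real firmware would be Arduino/C++ on the Teensy 4.0 as stated above.

```python
# Minimal sketch of the roll-axis PID loop described above (gains and the 250 Hz loop
# rate are illustrative assumptions, not tuned values).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

roll_pid = PID(kp=1.2, ki=0.05, kd=0.08, dt=1 / 250)   # one IMU sample every 4 ms

def control_step(target_roll_deg, imu_roll_deg):
    correction = roll_pid.update(target_roll_deg, imu_roll_deg)
    # The correction would be mapped to differential motor torque (hover mode)
    # or to tilt-servo deflection (forward-flight mode).
    return max(-30.0, min(30.0, correction))            # clamp to a safe actuator range
```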
||||||
6 | Robotic T-Shirt Launcher Mark II |
Hao Ding Moyang Guo Yixiang Guo Ziyu Xiao |
Qi Wang | other4.pdf proposal1.pdf |
Timothy Lee | |
ROBOTIC T-SHIRT LAUNCHER MARK II TEAM MEMBERS Guo yixiang (yg16), Guo moyang (moyangg2), Xiao ziyu (ziyux2), Ding hao (haod3) PROBLEM Our team has identified a problem with the launcher project that was completed last year. In particular, the previous design only included a single-shot launcher that required manual reloading and could only adjust the angle and direction automatically. SOLUTION OVERVIEW To address this issue, our team has proposed an improved design that will improve upon the limitations of the previous model. The Robotic T-shirt Launcher Mark II will be a fully automated system capable of launching multiple T-shirts by itself, without manual reloading. Our proposed design will also include more advanced features, such as the ability to adjust the trajectory of the launch. In addition, we will build it into a wearable device that could be carried on our shoulders. SOLUTION COMPONENTS The automatic launcher is comprised of several components that work together to provide a powerful and reliable weapon system. These components include: Power Components: The power components of the system consist of an air pump, an air cylinder, a quick exhaust valve, and connecting elements. These components are responsible for providing the necessary power and pressure to the system to shoot out the bullet. Function Components: The functional components of the system include the barrel, the two-axis gimbal (which is wearable), and the automatic loading system. The barrel provides the means for firing projectiles, while the gimbal allows for precise targeting and tracking of moving targets. Control System: The control system is responsible for managing the various components of the system, including the electromagnetic valves that control the airflow, the actuator controllers for the loading mechanism, and the gimbal controller for targeting. Human-Machine Interface (Advanced Requirement): For advanced users, the system could include a human-machine interface with features such as automatic firing, angle adjustment, and target recognition lock-on, allowing the user to engage targets effectively. CRITERIA FOR SUCCESS: Functionality: The launcher should be able to launch T-shirts accurately and consistently at a controlled angle and velocity. The system should be able to handle multiple T-shirts without the need for manual reloading, and the entire launch process and angle control should be initiated and controlled by a single button. Airtight and Adequate Air Pressure: The launcher's air channel should have high airtightness and be able to generate sufficient air pressure to launch T-shirts effectively. The air pressure should be able to be adjusted and controlled to suit different launch scenarios. Automation: The loading system should be fully automated, with T-shirts being automatically loaded into the air chamber without the need for manual intervention. The loading mechanism should be designed to be reliable and efficient, and the electrical control system should be able to manage the entire process automatically. Safety and Cost-effectiveness: The launcher should be designed with safety in mind. Safety mechanisms, such as emergency stop buttons, should be included to prevent accidents or injuries. The design and construction of the launcher should be cost-effective, and any additional features should be carefully considered. Also, it is necessary to implement additional components to measure some critical values such as gas tightness in order to prevent gas leaks. |
||||||
7 | Fixed wing drone with auto-navigation |
Yihui Li Zhanhao He Zhibo Teng Ziyang An |
Yiqun Niu | design_document1.pdf proposal1.pdf |
Jiahuan Cui | |
# Fixed wing drone with auto-navigation ## Group Members **Zhibo Teng** NetID: zhibot2 **Yihui Li** NetID: yihuil2 **Ziyang An** NetID: ziyanga2 **Zhanhao He** NetID: zhanhao5 ## Problem Traditional methods of data collection, such as using manned aircraft or ground surveys, can be time-consuming, expensive, and limited in their ability to access certain areas. The multi-rotor airfoil UAV being used now has slow flight speed and short single distance, which is not suitable for some long-distance operations. Moreover, it needs manual control, so it has low convenience. Fixed wing drones with auto-navigation can overcome these limitations by providing a cost-effective and flexible solution for aerial data collection. The motivation behind our design is to provide a reliable and efficient way to collect high-quality data from the air, which can improve decision-making processes for a variety of industries. The drone can fly pre-determined flight paths, making it easier to cover large areas and collect consistent data. The auto-navigation capabilities can also improve the accuracy of the data collected, reducing the need for manual intervention and minimizing the risk of errors. ## Solution Overview Our design is a fixed wing drone with auto-navigation capabilities that is optimized for aerial data collection. The drone is equipped with a range of sensors and cameras, as well as software that allows it to fly pre-determined flight paths and collect data in a consistent and accurate manner. Our design solves the problem of inefficient and costly aerial data collection by providing a cost-effective and flexible solution that can cover large areas quickly and accurately. The auto-navigation capabilities of the drone enable it to fly pre-determined flight paths, which allows for consistent and repeatable data collection. This reduces the need for manual intervention, which can improve the accuracy of the data and minimize the risk of errors. Additionally, the drone’s compact size and ability to access difficult-to-reach areas can make it an ideal solution for industries that require detailed aerial data collection. 
## Solution Components ### Subsystem #1: Aircraft Structure and Design * Design the overall structure of the plane, including the wings, fuselage, and tail section * Use 3D modeling software to create a digital model of the plane * Choose materials for construction based on their weight, durability, and strength * Create a physical model of the plane using 3D printing or laser cutting ### Subsystem #2: Flight Control System * Implement a flight control system that can be operated both manually and automatically * For manual control, design a control panel that includes a joystick and other necessary controls * For automatic control, integrate a flight controller module that can be programmed with waypoints and flight parameters * Choose appropriate sensors for detecting altitude, speed, and orientation of the plane * Implement algorithms for stabilizing the plane during flight and adjusting control surfaces for directional control ### Subsystem #3: Power and Propulsion * Choose a suitable motor and propeller to provide the necessary thrust for the plane * Design and integrate a battery system that can power the motor and control systems for a sufficient amount of time * Implement a power management system that can monitor the battery voltage and ensure safe operation of the plane ### Subsystem #4: Communication and Telemetry * Implement a wireless communication system for transmitting telemetry data and controlling the plane remotely * Choose a suitable communication protocol such as Wi-Fi or Bluetooth * Develop a user interface for displaying telemetry data and controlling the plane from a mobile device or computer ## Criterion for Success 1. Design and complete the UAV model including wings, fuselage, and tail section 2. The UAV can fly normally in the air and realize the control of the UAV, including manual and automatic control 3. To realize the data monitoring of UAV in flight, including location, speed and altitude ## Distribution of Work **Zhibo Teng:** Aircraft Structure and Design **Yihui Li:** Aircraft Structure and Design **Ziyang An:** Flight Control System Power and Propulsion **Zhanhao He:** Flight Control System Communication and Telemetry |
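As a sketch of the automatic-control path in Subsystem #2, the snippet below computes the bearing from the current GPS position to the next waypoint and the signed heading error the autopilot would try to drive to zero; the waypoint coordinates are placeholders, and a production flight controller module would normally provide this logic internally.

```python
# Waypoint-following sketch (waypoints and current position are placeholder values).
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the current position to the waypoint, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def heading_error(current_heading, target_bearing):
    """Signed error in [-180, 180] degrees that the roll/yaw controller tries to null."""
    return (target_bearing - current_heading + 180.0) % 360.0 - 180.0

waypoints = [(30.30, 120.08), (30.31, 120.09)]          # placeholder lat/lon pairs
err = heading_error(90.0, bearing_deg(30.295, 120.075, *waypoints[0]))
print(f"steer by {err:+.1f} deg toward waypoint 1")
```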
||||||
8 | Clickers for ZJUI Undergraduate |
Bowen Li Mu Xie Qishen Zhou Yue Qiu |
Qi Wang | design_document1.pdf proposal2.pdf |
Timothy Lee | |
# TEAM MEMBERS Bowen Li (bowenli5) Qishen Zhou (qishenz2) Yue Qiu (yueq4) Mu Xie (muxie2) # PROBLEM I-clicker is a useful teaching assistant tool used in undergraduate school to satisfy the requirement of course digitization and efficiency. Nowadays, most of the i-clickers used on campus have the following problems: inconsistency, high response delay, poor signal, manual matching. We are committed to making an i-clicker for our ZJUI Campus, which is economical, using 2.4G Wi-Fi signal connection, and on the computer to achieve matching. At the same time, it has to deal with the drawbacks as mentioned above. # SOLUTION OVERVIEW Compared with wired machines and mobile phone software, wireless i-clickers have the following advantages: they are easy to carry, they can accurately match and identify user tags, they are difficult to cheat and would not distract students. A wireless voting system consists of a wireless i-clicker, a wireless receiver on the administrator side, and a corresponding software program. In order to solve the problem of signal reception which is common in schools, we decided to use 2.4GHz Wi-Fi signal for data transmission. In addition, different from other wireless voting devices that carry out identity confirmation and bind identity information on the hardware side, we decided to make an identity binding system on the software side, and at the same time return it in the hardware unit for customer confirmation. # SOLUTION COMPONENTS A mature i-clicker should have a hardware part and a software part. The hardware part needs economical and effective hardware logic design. These include the storage and transportation of user key signals through a single chip computer program, a simple LCD1602 display to provide immediate feedback, a 2.4GHz Wi-Fi transmit-receive device for many-to-one wireless signal transmission, and a beautiful shell design. While the software component includes the conversion of hardware signals to software signals, a mature voting system, authentication of device owners, and signal return to hardware systems. ## SCM HARDWARE LOGIC SYSTEM: Use SCM to compile the LCD module, return user input value. STC89C52RC can easily do this. Pass data to the NRF wireless transmission module. ## WIRELESS 2.4G SIGNAL TRANSMISSION SYSTEM: A wireless signal detector should be a many-to-one signal transmission system. Bluetooth is one-to-one and Radio frequency is expensive. So, Wi-Fi signal transmission is the best choice. Each detector should load a transmitter and a receiver to transmit data to the administrator and get the data transmitted by the software. ## HARDWARE-TO-SOFTWARE SIGNAL TRANSFER SYSTEM: A Hard-to-Soft system is necessary in any similar design. We should write a driver to process data. ## SOFTWARE DATA PROCESSING SYSTEM: Software ought to process the data signal accurately and generate feedback to each i-clicker. Specifically, a software is needed in our design. The administrator can get user data and display it visually through statistical charts. This system should also have the function to associate user information to their answer. This is designed to score. A return signal should also be designed here. Users can receive feedback on their detector screen. ## USER IDENTIFICATION SYSTEM ON SOFTWARE: Give an internal ID number to each i-clicker. Bind identity information (such as NetID, Student number) to i-clicker internal ID number on the software. Users can get their binding information on their screen by pushing a specific button. 
This data will be reset when a new packet is returned by the administrator. ## 3D PRINT SHELL: A beautiful shell that fits the hardware system is needed. The shell should not be too large and the buttons must fit into the hardware. # CRITERION FOR SUCCESS Stability: Signal should be received easily. Signal loss inside a room shouldn’t occur, especially when there is a gap of two chairs. Affordability: I-clickers should have a low cost. This facilitates mass production and popularization on campus. Efficiency: The process from keystroke to signal collection and transmission shouldn’t have a high delay. Beauty: Shell design should be accepted widely and be accessible to 3D printing. Feedback: Users should get the feedback from the administrator easily. This is useful in arousing study enthusiasm of students. Concurrency: The system should handle signals from a great deal of students in a short period correctly. # DISTRIBUTION OF WORK Qishen Zhou: Software data processing system and user information identification system. Bowen Li: Hardware-to-software data transfer system and SCM hardware logic system. Yue Qiu: Wireless signal transmission system and processing the data returned from the administrator. Mu Xie: 3D print shell design and physical setup for the hardware part. |
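A minimal sketch of the administrator-side data processing described above is shown below; the "ID,answer" packet format, UDP transport, and port number are illustrative assumptions rather than the team's actual protocol.

```python
# Minimal vote-collector sketch for the administrator PC (packet format and port are assumptions).
import socket
from collections import Counter

HOST, PORT = "0.0.0.0", 5005
votes = {}                      # clicker internal ID -> latest answer (re-votes overwrite)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((HOST, PORT))
print(f"listening on {HOST}:{PORT}")

while True:
    data, addr = sock.recvfrom(64)                 # e.g. b"C017,B"
    clicker_id, _, answer = data.decode().partition(",")
    votes[clicker_id] = answer.strip().upper()
    tally = Counter(votes.values())                # live histogram for the instructor's chart
    sock.sendto(b"ACK:" + votes[clicker_id].encode(), addr)  # feedback shown on the clicker's LCD
    print(dict(tally))
```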
||||||
9 | Robot Vacuum |
Kailong Jin Long Chang Tianyu Zhang Zheyi Hang |
Tielong Cai | design_document2.pdf proposal1.pdf |
Meng Zhang | |
**Team Members** Tianyu Zhang (tianyu7), Long Chang (longc2), Zheyi Hang (zheyih2), Kailong Jin (kailong3)
**Project Title** Robot Vacuum
**Problem Description**
As technology evolves, robot vacuums are gradually evolving from having only a single sweeping function to having a certain level of intelligence, including laser navigation and home map building. In daily use, however, robot vacuums still have problems: they fall easily at stair edges and cannot completely cover every space. Many large companies are working hard to develop new robot vacuums, which are expected to greatly reduce the work people need to do personally, freeing people's hands and meeting people's expectations of the value of a "robot".
**Solution Overview**
The idea is to solve four problems with existing robot vacuums. First, automatically steer the robot at the edge of stairs by adding a mechanical structure. Second, improve the suspension structure of the robot vacuum to give it a better ability to pass over low obstacles. Third, design a linkage system with the elevator so that the robot vacuum can perform multi-floor sweeping operations. In addition, we will optimize the 3D vision of today's robot vacuums and optimize the pathfinding algorithm. This will allow the robot to become powerful enough to really free people's hands.
**Solution Components**
*Anti-fall steering subsystem*
- It allows the robot to automatically turn when it approaches the edge of stairs to avoid falling; this function is completely mechanical and does not require software.
- The robot has a four-wheel structure and is driven by the rear wheels. The front wheels are cone-shaped.
- An extra steering wheel is installed on the chassis, with a rough rubber surface to provide sufficient friction. The direction of the steering wheel is perpendicular to the forward direction, and it is linked to the rear wheels, which provide power. The steering wheel sits slightly higher than the four wheels and does not contact the ground during normal progress.
- As the robot approaches the edge of the stairs, the conical front wheels are the first to leave the platform, causing the chassis to lower. The steering wheel then contacts the ground and turns the robot quickly to avoid falling.
*Low obstacles passing subsystem*
- The system allows the robot to pass low thresholds or obstacles to avoid getting stuck during the cleaning process.
- This function requires the coordination of an infrared sensor, a servo, and the mechanical structure.
- We will redesign the structure connecting the wheels to the main body of the robot. The connector will be a folding telescopic structure that can be driven by the servo to temporarily raise the main body of the robot.
- Infrared sensors will be used to detect the height of obstacles in front of the robot to determine whether to turn or pass over them.
*Elevator Interaction Subsystem*
- Signal sender and receiver to interact with the elevator.
- State machine inside the robot to control the robot's behavior.
- Simple elevator (for demo only) with a signal sender and receiver to interact with the robot.
*Effective Path Finding Subsystem*
- Laser sensor to capture and store the 3D/2D surrounding information.
- Path decision algorithm on a programmable chip, based on the archived 3D/2D surrounding information, that can make wise decisions about low obstacles (a minimal grid-search sketch is given after this entry).
**Criterion for Success**
- When approaching the edge of stairs, the robot automatically turns to avoid falling.
- The robot can pass 1-2 cm high thresholds or obstacles smoothly without getting stuck.
- When finishing the cleaning work on one floor, the robot can call the elevator to send itself to the next floor to continue its cleaning work.
- The robot adopts an algorithm developed by us to find the most effective route, taking low obstacles into account.
**Intra-group Division of Labor**
- Long is in charge of the implementation of the anti-fall steering subsystem and the mechanical structure of the elevator.
- Zheyi is in charge of the implementation of the low obstacles passing subsystem and the overall mechanical structure of our robot.
- Kailong is in charge of the implementation of the Elevator Interaction Subsystem.
- Tianyu is in charge of the implementation of the Effective Path Finding Subsystem and the overall software component of our robot. |
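One way to realize the "wise decisions on low obstacles" idea in the Effective Path Finding Subsystem is to plan over the stored 2D map with A*, charging extra cost for crossing a low threshold. The sketch below uses a small hand-written grid and illustrative step costs; it is not the team's actual algorithm.

```python
# A* over a 2D occupancy grid where low obstacles are passable but penalized (costs are illustrative).
import heapq

FREE, LOW_OBSTACLE, WALL = 0, 1, 2
STEP_COST = {FREE: 1.0, LOW_OBSTACLE: 3.0}      # climbing a threshold is allowed but discouraged

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = [(0.0, start)]
    came_from, g = {start: None}, {start: 0.0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:                          # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) or grid[nxt[0]][nxt[1]] == WALL:
                continue
            cost = g[cur] + STEP_COST[grid[nxt[0]][nxt[1]]]
            if cost < g.get(nxt, float("inf")):
                g[nxt], came_from[nxt] = cost, cur
                heuristic = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])  # Manhattan, admissible
                heapq.heappush(frontier, (cost + heuristic, nxt))
    return None

grid = [[0, 0, 2], [1, 0, 2], [0, 0, 0]]
print(astar(grid, (0, 0), (2, 2)))
```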
||||||
10 | Digital Controlled LED Rotating Display System |
Chentai Yuan Guanshujie Fu Keyi Shen Yichi Jin |
Adeel Ahmed | design_document1.pdf proposal1.pdf |
Chushan Li | |
# TEAM MEMBERS
Chentai Yuan (chentai2), Guanshujie Fu (gf9), Keyi Shen (keyis2), Yichi Jin (yichij2)
# TITLE OF THE PROJECT
Digital Controlled LED Rotating Display System
# PROBLEM
Thanks to the persistence-of-vision phenomenon, we can display images and strings with a rotating LED array. Many devices based on this idea have been developed, but some common issues remain. First, the images or strings to be displayed are pre-defined and cannot be changed in real time. Second, the wired connection between some components may limit the rotation and harm the quality of the display. Economical wireless communication technologies and new ways to connect components can be applied to achieve a better display and real-time image updates.
# SOLUTION OVERVIEW
We aim to develop a digitally controlled LED rotating display system. A servo motor drives a stick carrying one column of LEDs in circular rotation. The connections between the LEDs, control circuit, motor, and other components should be simple but firm enough to support a good display and high-speed rotation. In addition, another part handles users' input and communicates with the display part via Bluetooth to update images in a real-time, wireless way. (A timing sketch for the display is given after this entry.)
# SOLUTION COMPONENTS
## Subsystem 1: Display Subsystem
- LED array that can display specific patterns.
- Controller and other components that can switch the LEDs at the right times to form the intended patterns.
## Subsystem 2: Drive Subsystem
- Servo motor that drives the LED array in circular rotation.
- Controller that communicates with the motor to achieve precise rotation and position control.
- An outer shell with mechanisms to fix the motor and LED array.
## Subsystem 3: Logic and Interface Subsystem
- Input peripherals, such as a keyboard, to receive users' input.
- An FPGA board for the high-level logic that handles input, produces output, and communicates with the other subsystems.
- A wireless communication protocol such as Bluetooth for communication.
- VGA display hardware offering a graphical user interface.
# CRITERION OF SUCCESS
- Users can successfully recognize the real-time patterns being displayed.
- The system achieves precise rotation and position control of the motor.
- The motor can drive the LED array and any necessary components to rotate stably and safely.
- The LED array is under real-time control and responds rapidly.
- The communication between components has low latency and enough bandwidth.
# DISTRIBUTION OF WORK
- Chentai Yuan (ME): Mechanisms and servo motor control.
- Guanshujie Fu (CompE): Logic and interface design; keyboard and VGA display implementation.
- Keyi Shen (EE): Wireless communication and servo motor control.
- Yichi Jin (EE): Circuit design; keyboard and VGA display implementation. |
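A quick way to see the timing the Display Subsystem must meet is to compute how long each LED column is visible per revolution. The rotation rate, column count, and LED count below are illustrative assumptions, not measured values.

```python
# Persistence-of-vision timing sketch (all numbers are assumed for illustration).
ROTATION_HZ = 20            # stick revolutions per second
COLUMNS = 120               # angular slots per revolution, i.e. image width
LEDS = 32                   # LEDs along the stick, i.e. image height

column_period_us = 1e6 / (ROTATION_HZ * COLUMNS)
print(f"each column is shown for {column_period_us:.0f} us "
      f"({ROTATION_HZ * COLUMNS:.0f} column updates per second)")

def column_index(angle_deg):
    """Map the motor's reported shaft angle to the image column to display."""
    return int(angle_deg % 360.0 / 360.0 * COLUMNS)

frame = [[0] * LEDS for _ in range(COLUMNS)]     # one bit per LED, refreshed over Bluetooth
col = column_index(123.4)
# frame[col] would be written to the LED drivers here (e.g. via SPI shift registers)
```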
||||||
11 | Miniaturized Langmuir-Blodgett Trough |
Xiran Zhang Zhanlun Ye Zhanyu Shen Zhehao Qi |
Muhammad Malik | Kemal Celebi | ||
TEAM MEMBERS: Zhanyu Shen (zhanyus2@illinois.edu 3190110849), Zhanlun Ye (zhanlun2@illinois.edu 3190110850), Zhehao Qi (zhehaoq2@illinois.edu 3190110358), Xiran Zhang (xiranz2@illinois.edu 3190110102)
Title: Miniaturized Langmuir-Blodgett Trough
PROBLEM: The usual way to produce a nano film is the classic Langmuir-Blodgett method, in which two barriers control the film area, surface tension, and the particles' density and orientation. However, the nano film is very small, which makes it hard to observe and analyze the state of the film. Normally a lifter is used to obtain a sample, which is then examined under a microscope to assess its quality. The problem with this typical method is that it is inefficient and makes the system hard to control.
SOLUTION OVERVIEW: The proposed project aims to build a miniaturized Langmuir-Blodgett trough that is small enough to place under a microscope. This way the microscope can observe the fluidic interface and provide information about real-time changes at the interface. A particular application to observe is the interfacial assembly of nanoparticles, induced by barrier movements, changes in the nanomaterial amount, and variation of the fluid composition and condition.
SOLUTION COMPONENTS:
Image analysis: The microscope observes the fluidic interface and produces an image. Our code needs to analyze the image to judge whether the particles form a good film, for example by checking whether there are large gaps between the particles or whether the particles share the same orientation. After the image analysis is completed, we convert the results into corresponding signals to control the movement of the barriers, such as vibrating the barriers or moving them closer together. (A minimal image-analysis sketch is given after this entry.)
Mechanism design: Use PTFE to build a trough that can hold the water and the test material. Use Delrin to build two barriers, which control the density, thickness, and orientation of the test material. Use two stepper motors to control the barriers and the injection speed. The whole system is about 13 cm x 22 cm x 3 cm to fit in the space between the microscope's platform and objectives.
CRITERION FOR SUCCESS: Ideally, we will build a miniaturized Langmuir-Blodgett trough that is small enough to place under a microscope. Through the microscope we can observe the fluidic interface and gather information about it. The purpose of our project is to build an interface of nanoparticles and manipulate the barrier movement to make real-time changes, accommodating the amount of nanomaterial and varying the fluid composition and condition. The adjustment will be carried out by image analysis and a real-time feedback system.
DELIVERABLE: Hardware: a miniaturized LB trough. Software: image processing software, plus the interface between hardware and software.
DISTRIBUTION OF WORK:
Zhanyu Shen & Zhanlun Ye: Take responsibility for the mechanical part, including the design of the PTFE trough, the barrier-moving system, the syringe pump system, and the lifter system. We also need to assemble those parts firmly within the limited space underneath the microscope. After the design work, we need to help adjust the motors and optimize the whole system.
Xiran Zhang & Zhehao Qi: Build a real-time feedback system whose output transfers information to the electrical control system. It analyzes images from the microscope and decides whether the interface is qualified or not. Also prepare the electrical control system, which respectively controls the barriers, pump, and lifter. |
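A minimal sketch of the image-analysis step might look like the following, assuming OpenCV, a placeholder microscope image, and an illustrative coverage threshold; it estimates how much of the field of view the particles cover and suggests whether the barriers should keep compressing.

```python
# Coverage-based film check (thresholds and the "particles appear bright" assumption are illustrative).
import cv2

TARGET_COVERAGE = 0.90       # assumed "good film" criterion

img = cv2.imread("interface.png", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(img, (5, 5), 0)
# Otsu's threshold separates particle-covered pixels from open water.
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

coverage = (mask > 0).mean()            # fraction covered by particles (invert if particles are dark)
if coverage < TARGET_COVERAGE:
    command = "advance barriers"        # compress the film further via the stepper motors
else:
    command = "hold barriers"           # film dense enough; stop compressing
print(f"coverage = {coverage:.2%} -> {command}")
```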
||||||
12 | Laser System to Shoot Down Mosquitos |
Fan Yang Ruochen Wu Yuxin Qu Zhongqi Wu |
Xinyi Xu | design_document1.pdf proposal1.pdf |
Timothy Lee | |
# Laser System to Shoot Down Mosquitos
## Team Member
- Ruochen Wu (rw12)
- Yuxin Qu (yuxin19)
- Zhongqi Wu (zhongqi19)
- Fan Yang (fy10)
## Problem
Around the world, thousands of people suffer disease and death caused by mosquito bites. Therefore, an effective method of protection against mosquitoes is necessary. Tracking a mosquito and using a laser to kill it may be a feasible solution.
## Solution Overview
First, the laser gun attached to the camera emits a low-power laser to indicate the drop point. To locate the mosquito, we run YOLOv5s on our computing platform for real-time detection. We move the camera to reduce the distance between the drop point and the mosquito until they coincide, and the laser gun then emits a high-power laser to destroy the mosquito. (A sketch of this aiming loop is given after this entry.) There are several challenges in the implementation. The first is the computing platform: an embedded development board may be incapable of running YOLOv5, so we are considering boards with NPU or CUDA support. Cloud computing is another option, but it may have high latency and low stability. Besides, a mosquito is very tiny and may occupy only a few pixels in the frame; if necessary, we may use radar to help with detection. Also, laser safety is a major concern. We plan to run a safety check before emitting the laser and to find a power level that is harmless to humans.
## Solution Components:
#### 1. Positioning system:
- High-resolution camera
- Low-power laser for aiming
- Software employing YOLOv5
- Computing platform (cloud server or embedded development board)
#### 2. Attacking system:
- Driver control module
- Rotation motors
- High-power laser for shooting
## Criterion for Success
1. Be able to detect a mosquito in the scene from the camera and locate its position.
2. The lasing device can target and shoot the mosquito.
3. The laser does not harm people.
## Distribution of Work
Ruochen and Zhongqi are responsible for training YOLOv5 on mosquito datasets. Although mosquitoes are very small and the processing speed is limited by the device, they are both ECE students with plenty of experience in deep learning. Fan, who is majoring in ECE, will handle the deployment of YOLOv5 on the embedded device; it takes time to get familiar with the board's environment and make full use of the computing resources. Yuxin, an EE student familiar with control systems, will take charge of the driver control module, laser, and motors. Although we all lack mechanical experience, the mechanical work accounts for only a small portion of the project. |
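A sketch of the aiming loop that closes the gap between the low-power laser dot and the detected mosquito is shown below; the camera resolution, field of view, and proportional gain are illustrative assumptions.

```python
# Convert the pixel offset between the detected mosquito and the laser dot into
# pan/tilt corrections for the rotation motors (all constants are assumptions).
FRAME_W, FRAME_H = 1920, 1080
HFOV_DEG, VFOV_DEG = 60.0, 34.0        # assumed camera field of view
GAIN = 0.6                             # proportional gain < 1 to avoid overshoot

def aim_correction(mosquito_xy, laser_xy):
    """Return (pan, tilt) angle increments in degrees for the two motors."""
    dx = mosquito_xy[0] - laser_xy[0]
    dy = mosquito_xy[1] - laser_xy[1]
    pan = GAIN * dx / FRAME_W * HFOV_DEG
    tilt = GAIN * dy / FRAME_H * VFOV_DEG
    return pan, tilt

def on_target(mosquito_xy, laser_xy, tol_px=4):
    """Fire the high-power laser only after the dot and the detection coincide."""
    return (abs(mosquito_xy[0] - laser_xy[0]) <= tol_px
            and abs(mosquito_xy[1] - laser_xy[1]) <= tol_px)

pan, tilt = aim_correction((1012, 498), (960, 540))
print(f"pan {pan:+.2f} deg, tilt {tilt:+.2f} deg")
```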
||||||
13 | Smartphone-based Fluorescence Microscope |
Feifan Xie Juncheng Zhou Shu Xian Lai Wentao Ni |
Xinyi Xu | design_document1.pdf proposal1.pdf |
Wee-Liat Ong | |
# PROBLEM Microscope applications are primarily helpful in health screening and microorganisms sampling and post processing. However, most conventional microscopes are expensive (expensive lenses and imaging equipment), relatively immobile (mostly as benchtop microscopes), and require professional knowledge to operate and post process the acquired image. These restrictions pose as barriers to rural communities whom can benefit immensely from mobile microscopy technologies (clinical setup in rural areas, water sampling etc.). # SOLUTION OVERVIEW We propose a smartphone-based compound lens microscope (an external device) that is attachable to a smartphone and makes use of today’s smartphone robust camera technology. More specifically, the type of microscope we have in mind is a fluorescence microscope. We expect users to be able to attach the device to individual smartphones and capture magnified image (and store those images in their phones) up to an applicable magnification. Our current plan is to focus on sampling water to detect sewage microorganisms such as total coliforms and E. Coli bacteria for clean, potable water. The reason we chose a compound lens microscope is because double lens setup gives us a higher magnification and resolution compared to simple lens. Another benefit is the flexibility the compound lens design is capable to give which facilitates internal lens modification. The reason we decided on fluorescence microscope is because high powered LEDs are easily accessible with a relatively low cost, and still able to provide us a wide range of detectable target (bacterium, intrinsically fluorescence particles, white blood cells etc.). # DESIGN COMPONENTS - COMPOUND LENS SETUP We plan to build a compound microscope using two convex lens – one as the objective lens, and another as the eyepiece. We plan to get lens with short focal lengths (5-10mm) for our objective lens to maximize our magnification at a least possible expense of tube length increment. For the eyepiece lens, a numerically larger focal length(10-20mm) is preferred for its high efficiency on image magnification to tube-length ratio. To figure out the best lens combination and calibration, optical testing is a must-take stage. - MICROSCOPE ATTACHMENT TO SMARTPHONE Depending on the design of our microscope tube/casing, we will also need to design a device that connects our microscope to our smartphone. If our microscope takes the form of a tube, then the attachment will look more like a pair of clamps to our smartphone. If our microscope takes the form of a case, we can either directly place our phone on the casing, or also use clamps. - LED LIGHT SOURCE We will need high power, UVC LEDs (wavelength between 200-600nm depending on targets and versatility) for our sample illumination. We will need to setup a mini circuit within our microscope to mount our power supply, a microcontroller, a mechanical switch (to turn on/off the LEDs) and the LEDs as our light source. The power supply can either be dry cells, or we could use our phone as a power source. - MICROSCOPE TUBE/CASING We will need a casing that connects and covers our microscope system. We plan to draw out our design using CAD software and use a 3D printer to print the prototype out. This casing must include the design for sample holder (some space to insert thin plastic film) and the design to mount our PCB for LED illumination. Depending on how well our optical system turn out, we might also need to add some refinements to our microscope (e.g. 
emission filters, bigger lens etc.). A modifiable lens magnification system is considered for a more flexible use to satisfy practical requirements. # CRITERIA OF SUCCESS - MAGNIFICATION AND RESOLUTION We set our microscope to have at least a minimum of 20X magnification and resolution that is comparable to a benchtop microscope for our target applications. - COST AND PORTABILITY The design of our microscope has to be reasonably lightweight and portable because mobility is one of the most important factors of our design. The microscope should be easily attached/detached for a decent range of smartphones. We also aim to minimize the cost required for our microscope so that it is accessible to a wider group of people. - VERSATILITY/PRACTICALITY OF SAMPLED TARGETS Our microscope must aim to examine target samples that have reasonable potential for clinical usage or health applications rather than applications that are too specific. If possible, we also aim to design a microscope that can accommodate a wider range of fluorophores. - SIMPLICITY AND SAFETY OF USER EXPERIENCE AND DESIGN Because our microscope involves an external device with potentially risky sub-components (UVC LED light), the design must have careful considerations of safety (e.g. no leaking of the UV rays). The process of preparing the sample and acquiring the magnified image should also be relatively straightforward that any adult can easily learn how to operate the system. # DISTRIBUTION OF WORK Overall, the main challenge of our project revolves around setting up a working microscope model from scratch and refining it. And none of us are experts in this (not even class knowledge), therefore the early stages of design will require high level of collaboration rather than divide-and-conquer style of work. Job splitting is more likely to take form as we get into the middle stages of our project. ME FEIFAN, XIE Involves in the CAD design and practical manufacture of the microscope casing. He is responsible for compound lens system design and theoretical analysis, also lens test and calibration. ME WENTAO, NI Involves in the CAD design and practical manufacture of the microscope casing. He is also responsible for the illumination matter excitation test. EE JUNCHENG, ZHOU Responsible in post-processing and determining our image quality through data collection and comparing to standard lab microscopes. COMPE SHU XIAN, LAI Responsible for the acquiring and designing of the PCB components needed to setup LED illumination. He is also involved in compound lens system design and theoretical analysis. |
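As a quick feasibility check on the lens choices above, the snippet below estimates the total magnification of the compound design using the standard approximation M = (tube length / f_objective) x (250 mm / f_eyepiece); the 160 mm tube length and the specific focal lengths picked from the stated ranges are assumptions.

```python
# Rough compound-microscope magnification estimate (tube length and lens choices are assumed).
f_obj_mm = 8.0        # objective focal length, from the 5-10 mm range above
f_eye_mm = 15.0       # eyepiece focal length, from the 10-20 mm range above
tube_mm = 160.0       # assumed mechanical tube length
near_point_mm = 250.0 # standard reference viewing distance

m_objective = tube_mm / f_obj_mm
m_eyepiece = near_point_mm / f_eye_mm
total = m_objective * m_eyepiece
print(f"objective {m_objective:.0f}X x eyepiece {m_eyepiece:.1f}X = {total:.0f}X total")
# ~20X x ~16.7X = ~330X, comfortably above the 20X minimum success criterion.
```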
||||||
14 | Tea Blend Distributor |
Anyu Ying Ruiqi Ye Zhenzuo Si Zhiyuan Wang |
Xinyi Xu | design_document2.pdf proposal1.pdf |
Timothy Lee | |
# TEAM MEMBERS:
- Zhenzuo Si (zsi2)
- Ruiqi Ye (ruiqiye3)
- Zhiyuan Wang (zw39)
- Anyu Ying (anyuy2)
# PROBLEM
Tea is a popular beverage, but it cannot be obtained as easily as coffee because no machine on the market makes drinking tea as convenient as a coffee machine makes drinking coffee. Additionally, people's requirements for the type and strength of tea are just as complex as those for coffee. We want to design a device that allows users to input the type of tea they want to drink and their taste preferences, and then receive a cup of tea that meets their requirements.
# SOLUTION OVERVIEW
This machine has a total of five systems: an interactive subsystem that receives user input, a control subsystem that controls all other subsystems, a solid storage subsystem for storing tea leaves, a tea brewing subsystem that adds an appropriate amount of water at the right temperature, and a flavor subsystem for adding additional ingredients such as milk and sugar.
# SOLUTION COMPONENTS
## INTERACTIVE SUBSYSTEM
The interactive subsystem includes a series of digital displays and buttons for users to adjust parameters related to taste, such as tea strength, temperature, and the concentration of additional ingredients. It also delivers this data to the control subsystem.
## CONTROL SUBSYSTEM
The control subsystem transmits signals to the other subsystems and controls the amount of tea leaves and additional ingredients used, the temperature and amount of water, and the overall brewing time.
## TEA BREWING SUBSYSTEM
The tea brewing subsystem includes a mixing tank that holds the added tea leaves, water, and additional ingredients, and can dispense the brewed tea together with the tea leaves at the set time.
## FLAVOR SUBSYSTEM
The flavor subsystem includes tanks for storing syrup and milk, as well as pipelines and valves for adding a predetermined amount of syrup and milk based on instructions from the control subsystem.
# CRITERION FOR SUCCESS
After users set their taste preferences on the front-end interface, they can wait a certain amount of time and then enjoy a cup of tea that meets their preferences. After each tea-making process, the machine's interior is relatively clean, with no residual tea leaves that could affect the taste or food safety.
# DISTRIBUTION OF WORK
Zhiyuan Wang is responsible for designing the mechanical structure, including the outer shell, storage compartment, and liquid pipelines. Anyu Ying is responsible for designing and soldering the circuit board. Zhenzuo Si and Ruiqi Ye are responsible for developing and debugging the control and interaction systems. |
||||||
15 | Augmenting AR/VR with Smell |
Baoyi He Kaiyuan Tan Xiao Wang Yingying Liu |
Qi Wang | design_document2.pdf proposal1.pdf |
Rakesh Kumar | |
# TEAM MEMBERS - **Kaiyuan Tan** (kt19) - **Baoyi He** (baoyihe2) - **Xiao Wang** (xiaow4) - **Yingying Liu** (yl73) # TITLE OF THE PROJECT Augmenting AR/VR with Smell # PROBLEM Augmented Reality (AR) and Virtual Reality (VR) technologies are rapidly growing and becoming more prevalent in our daily lives. However, these technologies have not yet fully addressed the sense of smell, which is a critical aspect of human experience. The absence of scent in AR/VR experiences limits the immersive potential of these technologies, preventing users from experiencing a full sensory experience. # SOLUTION OVERVIEW The solution is to augment AR/VR experiences with smell, enabling users to experience a full sensory experience. This will be achieved by incorporating hardware and software components that can simulate various scents in real-time, in response to events in the AR/VR environment. The solution will consist of a scent-emitting device and software that can track and simulate scents based on the user's location and orientation in the AR/VR environment. # SOLUTION COMPONENTS The solution will consist of the following components: - **Scent-emitting device**: This device will be designed to emit various scents in real-time. It will be portable and lightweight, making it easy for users to carry around during AR/VR experiences. - **Scent simulation software**: This software will be designed to track the user's location and orientation in the AR/VR environment and simulate scents accordingly. The software will use various algorithms to determine the intensity and duration of scent emissions. - **AR/VR hardware**: The solution will require AR/VR hardware to create the immersive environment. This hardware will include AR/VR headsets, controllers, and other peripherals necessary to interact with the AR/VR environment. # CRITERION OF SUCCESS The success of the project will be determined by the following criteria: - **Immersive Experience**: The solution must provide an immersive AR/VR experience that incorporates smell as a key sensory input. - **User Acceptance**: The solution must be accepted by users, who should be able to appreciate and enjoy the experience. - **Technical Feasibility**: The solution must be technically feasible and reliable, with a low latency and high accuracy in scent simulation. - **Scalability**: The solution should be scalable and adaptable to different AR/VR environments and hardware configurations. - **Safety**: The solution must be safe for users and the environment, with proper ventilation and control mechanisms to prevent any harm or discomfort caused by excessive or inappropriate scent emissions. # DISTRIBUTION OF WORK - Model various scenerios based on AR/VR hardware. *(Tan)* - Design algorithms which output the intensity and duration of scents based on the constructed scenerios. *(He & Liu)* - Merge the scene with scents smoothly. *(He & Wang & Liu)* - Design a protable scent-emitting device. *(Wang)* - Test using real scents, invite people to experience and adjust based on feedback. *(All)* |
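A minimal sketch of one possible intensity model for the scent simulation software, assuming an inverse-square falloff scaled by how directly the user faces a virtual scent source (this model and all names are assumptions, not the team's algorithm):

```python
# Assumed model: scent intensity from the distance and facing direction between
# the user and a virtual scent source in the AR/VR scene.
import math

def scent_intensity(user_pos, user_forward, source_pos, base_strength=1.0):
    """Inverse-square falloff scaled by how directly the user faces the source."""
    dx, dy, dz = (source_pos[i] - user_pos[i] for i in range(3))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz) or 1e-6
    # Cosine of the angle between the user's facing direction and the direction to the source.
    fwd_norm = math.sqrt(sum(c * c for c in user_forward)) or 1e-6
    facing = max(0.0, (dx * user_forward[0] + dy * user_forward[1] + dz * user_forward[2]) / (dist * fwd_norm))
    intensity = base_strength * facing / (dist * dist)
    return min(intensity, 1.0)   # clamp to the emitter's duty-cycle range [0, 1]

if __name__ == "__main__":
    # User at the origin facing +x, virtual coffee cup 2 m ahead.
    print(scent_intensity((0, 0, 0), (1, 0, 0), (2, 0, 0)))
```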
||||||
16 | Thermo-Camera Based Energy Consumption Monitoring System |
Boyan Li Lingjie Zhang Yutao Zhu Zheyang Jia |
Adeel Ahmed | design_document1.pdf design_document2.pdf design_document3.pdf proposal2.pdf proposal1.pdf |
Pavel Loskot | |
# TEAM MEMBERS: Yutao Zhu (yutaoz2@illinois.edu 3190110413), Zheyang Jia (zheyang5@illinois.edu 3190110096), Boyan Li (boyanl3@illinois.edu 3190110007), Lingjie Zhang (lingjie3@illinois.edu 3190110913) # THERMO-CAMERA BASED ENERGY CONSUMPTION MONITORING SYSTEM # PROBLEM: In the field of chip and circuit research, power consumption is an important indicator. Thermal imaging is a method to analyze power consumption. For example, thermal analysis can assist designers in determining the electrical performance and reliability of components on a PCB and help determine whether the components or the PCB will fail or burn out due to overheating. A circuit board contains many components. We want to model how power consumption relates to temperature. At present, because each circuit element has different thermal properties, current thermal imaging equipment is not necessarily flexible or accurate enough for analyzing circuit power consumption. Our goal is to design a convenient, dedicated, and accurate thermal imager to assist in the research of chips and circuits. # SOLUTION OVERVIEW: To solve the problems mentioned above, we plan to design a thermo-camera and corresponding software to analyze the temperature distribution over a circuit board such as the motherboard of a computer. The product is a cuboid frame with a thermo-camera, a controller, and an image recognition system. The camera's field of view can cover a small PCB or part of a computer motherboard. Depending on which circuit components we want to analyze, the camera can move to the corresponding location. Knowing the temperatures over the board, we will estimate how much energy is consumed at different parts of the board. # SOLUTION COMPONENTS: A thermo-camera that sends images to a computer in real-time. A bracket capable of three-dimensional movement for placing the thermo-camera. Image processing software to inform physics-based models of energy consumption in electrical circuits. A control system for the mobile camera, which is very useful for adjusting its position and zooming to obtain correct real-time images. The interface between the camera hardware and the image processing software. # CRITERION FOR SUCCESS: ## DELIVERABLE: Hardware: A thermo-camera on a bracket with a control system. Software: Image processing software. The interface for hardware and software. ## FUNCTIONALITY: The thermal camera can adjust its position and zoom to obtain the correct image and send it to the computer in real-time. Users can use the image processing software to analyze the energy consumption in the circuit. The interface between software and hardware should be stable and reliable. # DISTRIBUTION OF WORK: ECE Boyan Li: Develop and implement thermal image segmentation to extract images of electronic components. Obtain the temperature and energy consumption distribution on the circuit board through the real-time image. EE Yutao Zhu, Zheyang Jia: Design and implement the interface between hardware and software so that the camera and the computer can successfully transmit real-time images. They will also work on the image processing software together with the ECE student. ME Lingjie Zhang: Make a bracket for placing the camera, which can enable the camera to complete 3-dimensional movement (like the probe of a 3D printer) to capture the power consumption of the circuit board. |
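A minimal sketch of the temperature-to-power step, assuming a simple steady-state thermal-resistance model (P ≈ ΔT / R_th) and a crude threshold segmentation; the thermal resistance value is a placeholder that would have to be calibrated per component:

```python
# Assumed physics sketch: segment hot regions of a thermal frame with a threshold and
# estimate dissipated power from the temperature rise, P ≈ dT / R_th.
import numpy as np

def estimate_component_power(frame_c, ambient_c=25.0, hot_margin_c=5.0, r_th_c_per_w=20.0):
    """frame_c: 2-D array of temperatures in deg C from the thermo-camera."""
    hot = frame_c > (ambient_c + hot_margin_c)       # crude segmentation of heated areas
    if not hot.any():
        return 0.0, hot
    delta_t = frame_c[hot].mean() - ambient_c        # average temperature rise of the region
    power_w = delta_t / r_th_c_per_w                 # steady-state estimate
    return power_w, hot

if __name__ == "__main__":
    fake_frame = np.full((120, 160), 26.0)
    fake_frame[40:60, 70:90] = 55.0                  # a hot chip in the middle
    power, mask = estimate_component_power(fake_frame)
    print(f"hot pixels: {mask.sum()}, estimated power ~ {power:.2f} W")
```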
||||||
17 | Remote Driving System |
Bo Pang Jiahao Wei Kangyu Zhu |
Yi Wang | design_document1.pdf proposal1.pdf |
||
#### TEAM MEMBERS Jiahao Wei (jiahaow4) Bo Pang (bopang5) Kangyu Zhu (Kangyuz2) ## REMOTE DRIVING SYSTEM #### PROBLEM: In daily life, people might not be able to drive due to factors like fatigue and alcohol. In this case, a remote chauffeur can act as the driver to keep driving safe and reduce the incidence of traffic accidents. Remote chauffeuring can also improve the convenience of driving. In the case of urban traffic congestion and parking difficulties, remote chauffeurs allow drivers to park their vehicles in parking lots away from the city center and then have the vehicles delivered to their destination via remote control. #### SOLUTION OVERVIEW: The remote driving system is designed to provide real-time feedback of the car's external environment and internal movement information to the remote chauffeurs. Through the use of advanced technologies, the remote chauffeurs can remotely operate the car's movement using various devices. This system is capable of monitoring the car's speed, distance from obstacles, and battery life, and transmitting this information to the remote chauffeurs in a clear and easy-to-understand format. #### SOLUTION COMPONENTS: ##### Modules on the TurtleBot3: - The mechanical control system: to achieve the basic motion functions of the TurtleBot3 car. - The distance sensing system used for monitoring the surrounding environment: using LiDAR to detect the distance from the car to its surroundings in different directions. - The system used for monitoring the vehicle's status: monitors the car's battery level, speed, etc. in real time and uploads the data to the PC server. ##### Server Modules: - The transmission system used to remotely control the car: implemented using the Arduino IDE. - The system used to build an AR-based information interaction system: implemented using Unity. - The system used to output specific car motion commands: implemented using ROS to control the car. ##### HRI modules: - The gesture recognition system used to recognize gestures given by people and feed them back to the central PC server. - The device used for interaction between the car and people: transmitting real-time surrounding information of the car to the HoloLens 2 glasses in video form. #### CRITERION FOR SUCCESS: - Functionality: The remote driving system needs to be able to facilitate interaction between the user and the vehicle, enabling the user to remotely control the vehicle's steering, acceleration, and deceleration functions. - User experience: The user can obtain real-time information about the surrounding environment while driving the vehicle through the glasses, and control the vehicle's movement through gestures. - Environmental parameter detection: The vehicle can obtain distance information about the environment and its own real-time information. - Durability and stability: The server needs to maintain a stable connection between the vehicle and the user. #### DISTRIBUTION OF WORK: - ECE STUDENT PANG BO: Implementing the ROS interaction with the PC, using the ROS platform to control the car's speed and direction. - ECE STUDENT WEI JIAHAO: Building the car, implementing environmental monitoring and video transmission, ensuring stable transmission of environmental information to the user. Implementing speed measurement, obstacle distance detection, and battery level monitoring for the car. - EE STUDENT ZHU KANGYU: Designing the AR interaction, issuing AR information prompts when the car is speeding or approaching obstacles. 
Implementing hand gesture recognition for interaction between the HoloLens 2 and the PC. |
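A minimal sketch of the command path, assuming a hypothetical mapping from a steering-wheel gesture angle and a voice speed level to a TurtleBot3-style velocity command, with a LiDAR-based safety stop; the limits and field names are illustrative, not the team's protocol:

```python
# Hypothetical mapping: gesture angle + voice speed level -> velocity command,
# with an obstacle-distance override before the command is sent to the robot.
def velocity_command(gesture_deg, speed_level, min_obstacle_m,
                     max_linear=0.22, max_angular=1.5, stop_dist_m=0.3):
    """gesture_deg in [-90, 90]; speed_level in {0, 1, 2, 3} from voice commands."""
    linear = max_linear * (speed_level / 3.0)
    angular = max_angular * (-gesture_deg / 90.0)   # turn right for a positive wheel angle
    if min_obstacle_m < stop_dist_m:                # emergency stop near obstacles
        linear = 0.0
    return {"linear_x": round(linear, 3), "angular_z": round(angular, 3)}

if __name__ == "__main__":
    print(velocity_command(gesture_deg=30, speed_level=2, min_obstacle_m=1.2))
    print(velocity_command(gesture_deg=0, speed_level=3, min_obstacle_m=0.2))  # blocked
```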
||||||
18 | A Transformer |
Haobo Li Jingcheng Liu Shiqi Yu Tinghua Chen |
Xiaoyue Li | design_document1.pdf proposal1.pdf |
Rakesh Kumar | |
**TEAM MEMBERS:** - Jingcheng Liu [jl138] - Haobo Li [haoboli2] - Tinghua Chen [tinghua3] - Shiqi Yu [shiqiy2] **TITLE OF THE PROJECT:** A Transformer **PROBLEM:** In some cases or scenarios, humans cannot reach a location or area, so we need adaptive robots to reach that area for us. The type of robot that we will introduce and build is Modular Self-Reconfigurable Robotics (MSRR). MSRR can be used in many scenarios, like space exploration, disaster response, undersea inspection, education, entertainment, and art. Because MSRR can reconfigure its shape and modules, it can be used for space exploration missions, where the robots can reconfigure themselves to adapt to different tasks and environments, and they can also repair themselves and replace damaged modules. In disaster scenarios, MSRR can adapt to changing environments or narrow and complex landforms and help with search and rescue missions. In undersea scenarios, MSRR can also help with inspection or with building piers and tunnels. Other applications of MSRR include education, entertainment, and art. Because MSRR can be assembled and reassembled to create different configurations, it can be programmed to create interactive artworks and installations. **SOLUTION OVERVIEW:** We are aiming to build a modular block system with self-reconfigurable features. Our solution will include lighter, easier-to-use devices, fluent transformation, and an easy-to-operate interface. It’s an innovation in the field of MSRR, especially in education, entertainment, and art. More concretely, we will use electromagnets to control the mechanism of the block robots. Different block robots are controlled by a central host computer through wireless signals. The MCU in each block robot receives signals from the wireless module and controls the circuit to apply positive or negative current to the electromagnets, controlling the rotation or suspension of the block entity. When the modular block robots come into application, we can also install different modules on different blocks, but that is further study and exploration which is not included in this project. **SOLUTION COMPONENTS:** - Wireless control module: This module will be designed to transmit command signals from the host computer to the block robots. After the signal is received at the remote side, the MCU in the block robot will process this signal and convert it to control signals on its ports. - Electrical control circuit: Electrical circuits will get an input signal from an MCU port, then use it to control the state or polarity of the 6 electromagnets on the block surfaces or the 8 electromagnets on the corners, which are rated at 5 V and 3 kg of force. - Mechanical entity: A 3D-printed cube and 12 metal sticks on the edges. The metal sticks on the edges serve as hinges to attach different cubes during the rotation. **CRITERIA FOR SUCCESS:** - The wireless control module can send and receive signals, transmitting data and commands from the host computer to the remote robotic side. - Wireless signals can be decoded in the MCU and converted to control signals in the circuits. - The electrical control circuits can apply the voltage we want to the electromagnets. - Block entities can rotate smoothly around the joints of the block robots by angles of 90 and 180 degrees. - The block entities are firm, and the electromagnets, metal sticks, and circuits are fixed in the block. - Commands are useful and efficient, and the dynamic process is fluent and steady. - The interaction interfaces are simple and aesthetic, easy to understand and control. 
**DISTRIBUTION OF WORK:** - Jingcheng Liu - Electrical Engineering: Wireless and MCU control - Haobo Li - Electrical Engineering: Mechanical entity and installation - Tinghua Chen - Computer Engineering: MCU and control circuits - Shiqi Yu - Computer Engineering: Commands and codes, interaction interfaces. |
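A minimal sketch of what the MCU-side decoding might look like, assuming a hypothetical 2-byte command format (one enable bit and one polarity bit per face electromagnet); the actual wireless protocol and pin mapping are not specified in the proposal:

```python
# Hypothetical command format: bit i of enable_byte turns face electromagnet i on;
# bit i of polarity_byte selects its polarity (N/S) when enabled.
FACES = ["+x", "-x", "+y", "-y", "+z", "-z"]

def decode_command(enable_byte: int, polarity_byte: int) -> dict:
    """Decode a received command into per-face electromagnet states."""
    states = {}
    for i, face in enumerate(FACES):
        if (enable_byte >> i) & 1:
            states[face] = "N" if (polarity_byte >> i) & 1 else "S"
        else:
            states[face] = "off"
    return states

if __name__ == "__main__":
    # Enable +x and +y; drive +x as north and +y as south, e.g. to start a 90-degree hinge rotation.
    print(decode_command(0b000101, 0b000001))
```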
||||||
19 | An immersive human-driven robot detecting foreign matter in tubes |
Pengzhao Liu Shixin Chen Tianle Weng Ziyuan Lin |
Yutao Zhuang | design_document1.pdf proposal2.pdf |
Liangjing Yang | |
# TEAM MEMBERS: Name Netid Chen Shixin shixinc2 Lin Ziyuan ziyuanl3 Liu Pengzhao pl17 Weng Tianle tianlew3 # Title: An immersive human-driven robot detecting foreign matter in tubes. # Problem: With the development of technology in the 21st century, systems like rockets, chemical transportation systems, and underground systems are increasingly likely to involve small spaces unreachable by humans, for example, thin tubes. Sometimes, there could be foreign matter inside these tubes, and we need to figure out where it is and even remove it. Such small spaces are hard for humans to reach and observe. Current solutions include a self-controlled robot or a robot controlled through a remote handset. However, as the environment inside tubes could be very complex, these solutions could be either impossible or not flexible enough. # Solution Overview: We will design a human-driven robot operated in an immersive context. We will use a self-designed electric car as a model. The driver changes the speed through voice and changes the direction by manipulating the position of their hands as if there were a real steering wheel. The position of the car will be recorded and displayed on a screen in front of the driver, or on the driver's glasses, even though the actual car may be far away from the user. In this way, the driver can immersively drive the car and make precise and subtle operations when the “road” condition is very complex. The robot detects the foreign matter, treated as a recognition or segmentation problem, and sends back information such as the position of the foreign matter. Then humans can take corresponding actions. # Solution Components Subsystem #1 A human hand position recognition system. The input is a picture of the driver's hand positions captured by the camera. After data processing, the output is the angle (from -90 to 90 degrees) by which the driver wants to turn the wheel. This signal will be sent through wireless communication to the electronic component which controls the direction of the wheels. We will need a processor (computer GPU) to run the machine learning model for the angle regression problem. We will also need a camera and a Bluetooth sender to communicate between the car and the computer. Subsystem #2 An audio detection module. The input is the driver’s voice; the output is the speed of the car. Subsystem #3 The robot body, which performs the main work of detecting. A car and an electronic device (like an Arduino) that can control the angle of the wheels and other operations. A Bluetooth receiver that receives the signal from the main computer. Speed-changing hardware (a voltage-changing circuit) on the car. Subsystem #4 Object recognition/segmentation system. This system aims to recognize and find the foreign object inside the tube. We can either design the neural network on the FPGA board or process the image sent back to the computer. # Criterion for Success: (1) Successfully calculate the angle of direction change. (2) Successfully respond to the voice commands. (3) The electrical angle signal can be translated into the car wheels' steering angle. (4) The car can change speed with different voice commands. (5) The car can detect the object and notify the computer. (6) Additional functions of the car may be added, such as sweeping out foreign matter. # Distribution of Work: Chen Shixin and Lin Ziyuan: All machine learning algorithms and implementation (audio, picture), processing data, transmitting signals between the car and the computer. Complex! Even though we are ECE students. 
Since we need to solve both regression and classification problems in the vision and audio contexts. We also need to understand and manage the wireless communication of signals. Liu Pengzhao and Weng Tianle: Design and implementation of the entire car, and the circuit to control the movement of the car. Arduino programming. Camera-car system design. Additional functions on the car. Complex! Since we are students in ME, we lack knowledge of circuit design and Arduino programming. We need to coordinate the input digital signal and the car motion. We also need to make the camera-car system stable. We also need to learn about sensors. |
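A minimal sketch of a purely geometric baseline for the steering-angle output, assuming the camera pipeline already gives one point per hand; this could serve as a sanity check for the learned regression model rather than a replacement for it:

```python
# Geometric baseline (assumption): estimate the "steering wheel" angle from the line
# joining the left and right hands in image coordinates.
import math

def steering_angle_deg(left_hand_xy, right_hand_xy):
    """Returns the wheel angle in [-90, 90]; 0 means hands level, positive means turning right."""
    dx = right_hand_xy[0] - left_hand_xy[0]
    dy = right_hand_xy[1] - left_hand_xy[1]    # image y grows downward
    angle = math.degrees(math.atan2(dy, dx))
    return max(-90.0, min(90.0, angle))

if __name__ == "__main__":
    print(steering_angle_deg((100, 200), (300, 200)))   # level hands -> 0 degrees
    print(steering_angle_deg((100, 150), (300, 250)))   # right hand lower -> turn right
```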
||||||
20 | A mm-Wave Breath Monitoring System for Smart Vehicle Applications |
Bowen Song He Chen Kangning Li Keyu Lu |
Xuyang Bai | design_document1.pdf proposal1.pdf |
Shurun Tan | |
# TEAM MEMBERS: Kangning Li (kl32@illinois.edu 3190110100), He Chen (hechen4@illinois.edu 3190110853), Bowen Song (bowen15@illinois.edu 3190110710), Keyu Lu (keyulu2@illinois.edu 3190110390). # A MMWAVE BREATH MONITORING SYSTEM FOR SMART VEHICLE APPLICATIONS # PROBLEM: With the development of the intelligent automobile industry, radar technology has been applied to automobiles. Common radar applications include optical radar, laser radar, and millimeter wave radar. At present, radar technology outside the vehicle is highly developed, such as using laser radar to measure distance. But we're focusing more on radar applications inside the car. Nowadays, many traffic accidents are caused by fatigued driving. How to detect drivers' breathing state quickly and accurately has become a hot topic. At the same time, children left in cars are also a problem that urgently needs to be solved. Therefore, we hope to rely on radar technology to realize breath detection of drivers and children in the car. # SOLUTION OVERVIEW: The method we are going to apply is using a millimeter wave sensor to detect the situation inside the car. By processing the data from the radar, we want to achieve breath detection. We choose a 60 GHz millimeter wave sensor because it is harmless to humans and is allowed for use in China. For signal processing, we can use an artificial intelligence or statistical approach. This is partly dependent on how much data we can collect. We plan to finish radar signal processing and self-detection technology in complex and diverse environments. Detection of children of different ages, people under different shielding materials, and people in different postures are our future goals. # SOLUTION COMPONENTS: TI 60 GHz mmWave radar development board: IWR6843ISK-ODS. Hardware link and data collection. A sensor to work on millimeter wave radar range detection and micro-Doppler detection technology. An algorithm to do radar signal processing and self-detection technology in complex and diverse environments. An interface to connect the computer software and the radar sensor. An AI algorithm or statistical method used to tune the software and work on the data processing. # CRITERION FOR SUCCESS: We expect to produce a vehicle-mounted mmWave radar that will have the following properties: Reliability: It can work well in a variety of environments, including children of different ages, people under different shielding materials, people in different postures, environments with more than one person, people walking, people exercising, etc. Security: It won’t cause any kind of damage to people under any circumstances. Easy to use: The mmWave radar system should produce clear information which is easy for the user to get and understand. Accuracy: The mmWave radar system should produce results with high accuracy, avoiding incorrect results caused by various environmental distractions. Efficiency: The speed at which our system produces the information should be fast, which means it should sense the environment and produce timely feedback efficiently. # DISTRIBUTION OF WORK: EE Kangning Li, Keyu Lu, He Chen: Use the radar sensor to obtain the data in the lab. Develop and implement the periodic linearly-increasing frequency chirps (known as Frequency-Modulated Continuous Wave (FMCW)). Design the lab steps and organize the structure of the lab. Control the lab environment to meet the standards. 
ECE Bowen Song: Use the signal data to do the signal processing and improve the detection precision. Implement and test the processing system for different targets. |
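A minimal sketch of the breathing-rate estimation step, assuming the radar front end already provides a chest-displacement (phase) time series; the frame rate and breathing band below are placeholder values:

```python
# Assumed input: estimate breathing rate by locating the strongest spectral peak of the
# radar's chest-displacement signal within a plausible breathing band.
import numpy as np

def breathing_rate_bpm(displacement, fs_hz, band=(0.1, 0.5)):
    """displacement: 1-D array sampled at fs_hz; band: breathing frequency range in Hz."""
    x = displacement - np.mean(displacement)           # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs_hz)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peak_freq = freqs[in_band][np.argmax(spectrum[in_band])]
    return 60.0 * peak_freq                            # convert Hz to breaths per minute

if __name__ == "__main__":
    fs = 20.0                                          # 20 chirp frames per second (assumed)
    t = np.arange(0, 60, 1 / fs)
    chest = 0.5 * np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.randn(t.size)
    print(f"estimated rate ~ {breathing_rate_bpm(chest, fs):.1f} breaths/min")  # ~15
```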
||||||
21 | Remote Robot Car Control System with RGBD Camera for 3D Reconstruction |
Han Yang Hao Chen Junyan Li Yuhao Ge |
Yiqun Niu | design_document1.pdf proposal1.pdf |
Pavel Loskot | |
## Team Members - [Yuhao Ge], [yuhaoge2], - [Hao Chen], [haoc8], - [Junyan Li], [junyanl3], - [Han Yang], [hany6]. ## Project Title Remote Robot Car Control System with RGBD Camera for 3D Reconstruction ## Problem We aim to build a user-friendly control system for assisting users to remotely control a robot car equipped with an RGBD camera in complex indoor environments. The car should be able to build the environment based on the point cloud scanned by the camera, and the remote computer will reconstruct the point cloud to gain the map of the environment. ## Solution Overview Our solution consists of a Robot Car Subsystem, Camera Subsystem, Remote Control Subsystem, and Human-Robot Interaction Interface. The Robot Car Subsystem includes a robot car and a rotating base for the RGBD camera. The Camera Subsystem captures RGBD images of the surrounding environment and performs real-time 3D reconstruction. The Remote Control Subsystem allows users to control the robot car remotely via a joystick. The Human-Robot Interaction Interface provides a third-person perspective view of the reconstructed environment and allows users to interact with the robot car in real-time. ## Solution Components - Robot Car Subsystem: Includes a robot car and a rotating base for the RGBD camera. - Camera Subsystem: Captures RGBD images of the surrounding environment and performs real-time 3D reconstruction using image signal processing software. - Remote Control Subsystem: Allows users to control the robot car remotely via a joystick. - Human-Robot Interaction Interface: Provides a third-person perspective view of the reconstructed environment and allows users to interact with the robot car in real-time. ## Criterion for Success - The remote robot car control system can navigate and avoid obstacles in complex indoor environments. - The Camera Subsystem can perform real-time 3D reconstruction with high accuracy and reliability. - The Remote Control Subsystem provides a smooth and responsive control experience for the user. - The Human-Robot Interaction Interface provides an intuitive and user-friendly way for users to interact with the robot car and view the reconstructed environment. ## Distribution of Work - Han Yang (EE): Camera Subsystem design and implementation - Hao Chen (ECE): Remote Control Subsystem design and implementation - Junyan Li (ECE): Human-Robot Interaction Interface design and implementation - Yuhao GE (ECE): Robot Car Subsystem design and implementation ## Justification of Complexity We believe that our team has the necessary skills and knowledge to handle the mechanical and electrical complexity of our project. Specifically, Han Yang has experience in image signal processing and Hao Chen has experience in remote control systems. Junyan Li has experience in human-robot interaction design, and Yuhao Ge has experience in robotics and mechanical design. Additionally, we plan to use readily available off-the-shelf components and design our system in a modular and scalable way to minimize the complexity and facilitate the development process. |
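A minimal sketch of the basic step behind the Camera Subsystem's 3D reconstruction: standard pinhole back-projection of one depth frame into a point cloud. The intrinsics below are made-up values, not the actual camera calibration:

```python
# Standard pinhole back-projection (made-up intrinsics): convert one RGBD depth frame
# into a point cloud, the building block of the real-time reconstruction.
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """depth_m: HxW array of depths in meters; returns an Nx3 array of (x, y, z) points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx                      # back-project pixel coordinates
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                  # drop pixels with no depth reading

if __name__ == "__main__":
    fake_depth = np.full((480, 640), 1.5)      # a flat wall 1.5 m away
    cloud = depth_to_points(fake_depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(cloud.shape)                         # (307200, 3)
```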
||||||
22 | V2V Based Network Cooperative Control System |
Jiazhen Xu Xinwen Zhu Yuxuan Jiang Zihao Li |
Xuyang Bai | design_document1.pdf proposal1.pdf |
Simon Hu | |
## Team Members - Xinwen Zhu: xinwenz3 - Jiazhen Xu: jiazhen6 - Yuxuan Jiang: yuxuanj9 - Zihao Li: zihaoli5 ## Problem Nowadays, autonomous vehicles are being applied in more and more scenarios. The current systems rely on the vehicles themselves and a cloud server. While using the computational power of the cloud server, the vehicles retain their own ability to make quick decisions through sensors in emergencies. However, such vehicle-cloud systems have certain limitations. Recently, some scholars have become interested in V2V (vehicle to vehicle) communication. Compared to the vehicle-server solution, V2V technology - allows for real-time communication between vehicles, allowing for faster decision-making and response times. - allows for more localized processing of data, reducing the amount of data that needs to be transmitted to the cloud. - is more secure than sending all car data to the cloud. - is more reliable than relying on the cloud for all processing. Cloud-based systems are prone to downtime and network failures, which can lead to a loss of service. - is more scalable than relying on cloud-based processing. Classical vehicle-server systems naturally have shortcomings when making urgent mixed-traffic decisions, because the communication time and server processing time might be too long and the information of a single vehicle is limited. ## Solution Overview In this senior design project, we want to solve this problem by designing a novel system consisting of vehicle sensing, vehicle-server communication, and vehicle-vehicle communication. ## Solution Components ### **Subsystem #1** The communication system (5G) between vehicles and a powerful server. Since equipping each vehicle with a strong processor is expensive and too energy-consuming, uploading the data collected by the vehicle to the server and sending the commands back will be more efficient. V2V communication is not a replacement for V2S (vehicle to server), but an improvement. V2S is still important in our system. ### **Subsystem #2** The control system on the vehicle that can run on a simple on-board chip. This on-vehicle system mainly controls the movement of the vehicle and decodes the commands from the server. The vehicle also needs to check its battery level and be able to return to a charging site when it detects a low battery level. ### **Subsystem #3** Simple AI logic for the vehicle to drive by itself when it’s disconnected from the ITS server. The vehicle should be equipped with a GPS chip and at least find a way back to the nearest charging site by itself. ## Criterion for Success - An efficient communication protocol between vehicles. - Getting the relative position of another vehicle and using the information to avoid obstacles. - Designing efficient routing and obstacle-avoidance algorithms for the on-vehicle chipset (in case the cloud server is down). ## Distribution of Work - Xinwen Zhu: Design the mechanism of the vehicle, and install the required sensors on the vehicle. Mainly responsible for writing the report. - Yuxuan Jiang: Design or refine a V2V communication protocol, which should let vehicles communicate with low latency and high privacy. - Zihao Li: Design an AI routing algorithm to enable automatic driving of the vehicles given the information from V2V and V2S communication and their own sensors. The vehicle should make rational decisions on avoiding obstacles and coordinating with other cars to alleviate traffic congestion. 
- Jiazhen Xu: Implement an embedded Real-Time Operating System with the following functionalities: 1) Synchronize information from the sensors, other vehicles, and the server. 2) Send information to other vehicles and the server at set intervals. 3) Run a routing algorithm to navigate the vehicle. 4) Control the mechanisms of the vehicle to veer, accelerate, and so forth. |
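A minimal sketch of what a compact V2V state message could look like, assuming a hypothetical fixed-size packet layout (the real protocol, fields, and rates are design decisions the team still has to make):

```python
# Hypothetical packet layout for periodic V2V state broadcasts:
# vehicle_id (uint16), timestamp (float64), x, y (m), speed (m/s), heading (rad).
import struct
import time

FMT = "<Hdffff"   # little-endian, no padding: 26 bytes per message

def pack_state(vehicle_id, x, y, speed, heading):
    return struct.pack(FMT, vehicle_id, time.time(), x, y, speed, heading)

def unpack_state(payload):
    vid, ts, x, y, speed, heading = struct.unpack(FMT, payload)
    return {"id": vid, "age_s": time.time() - ts, "x": x, "y": y,
            "speed": speed, "heading": heading}

if __name__ == "__main__":
    msg = pack_state(7, x=12.3, y=-4.5, speed=1.2, heading=0.8)
    print(len(msg), "bytes")          # small enough to broadcast every 100 ms
    print(unpack_state(msg))
```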
||||||
23 | FPGA-based object tracking, obstacle avoidance, and voice-activated trolley |
Haomin Wang Jiarun Hu Yang Zhou Yihang He |
Tielong Cai | design_document2.pdf design_document3.pdf proposal1.pdf |
Said Mikki | |
# Members: - Yang Zhou [yangz15] - Haomin Wang [haominw3] - Yihang He [yihangh2] - Jiarun Hu [jiarunh2] # Problem: Nowadays, the development of electric vehicles has become a trend. At the same time, more and more new energy vehicle startups like to equip their cars with intelligent systems. However, existing SoCs are often based on non-real-time operating systems, while in-vehicle systems need to meet strict real-time and safety requirements. Common systems based on CPU + GPU tend to have high energy consumption, which has a negative impact on the endurance of the vehicle. Therefore, designing a system with low energy consumption and high real-time performance is necessary. # Solution Overview In order to achieve low energy consumption and high real-time performance, our solution is to design an FPGA-based system to control our trolley, which combines four subsystems. The first subsystem processes real-time data from the other subsystems to control the trolley. The second subsystem is designed to detect the target object and send a tracking signal to the movement control subsystem. The third subsystem detects obstacles in the path of the trolley and sends an avoidance signal to the first one. The last subsystem recognizes natural language instructions from the operator and sends the corresponding signal to the movement control subsystem. By taking these four aspects into account, we will create our object tracking, obstacle avoidance, and voice-activated trolley. # Solution components: 1. **Trolley movement control subsystem:** The movement control subsystem will process real-time data from the other subsystems and produce the signal to control the movement of the trolley. Control signals will be passed through the FPGA port to the PCB, which is connected to the electric motors. The PCB can generate current to control the speed of the electric motors depending on the control signal so that our trolley can move as designed. 2. **Object tracking subsystem:** The object tracking subsystem will use a camera to capture the image in front of the trolley. The FPGA will receive the image, process it to identify the location of the color block, and generate suitable control signals based on that location so that the trolley can move toward the color block. 3. **Obstacle avoidance subsystem:** We will use ultrasonic sensors to detect obstacles in the path of the trolley. The FPGA will be used to process the signals from the sensors and control the movement of the trolley. The microcontroller should be programmed with algorithms for obstacle detection and avoidance. 4. **Voice-activated subsystem:** Our design target is that the trolley can recognize specific natural language instructions and act accordingly. Thus, we will design a voice-activated system and combine it with the control system of the trolley. In order to reduce latency as well as achieve high recognition accuracy, we will build a CNN on the FPGA instead of an LSTM or DSP procedure for this task. This voice-activated system will send the corresponding signal to the control part. # CRITERION FOR SUCCESS: 1. The trolley should be able to move at a reasonable speed so that it can avoid obstacles and respond to voice commands in a timely manner. The movement control subsystem will also be able to process conflicting instructions and produce the correct signal to control the movement. The subsystem needs to be secure and reliable. 2. 
The trolley should be able to use a camera to detect a color block and move toward the color block. This can be measured by testing if the trolley can follow the movement of the color block closely. 3. The trolley should be able to detect obstacles accurately and reliably using its sensors and cameras. This can be measured by testing the trolley's ability to detect and avoid obstacles of different sizes and shapes. 4. The trolley should be able to recognize and respond to specific voice commands accurately and reliably. This can be measured by testing the trolley's ability to understand a range of voice commands and respond accordingly. # DISTRIBUTION OF WORK: ## Yang Zhou, Electrical Engineering: Design and implement the trolley movement subsystem. Implement and test the way the control subsystem interacts with the other subsystems. ## Haomin Wang, Computer Engineering: Design and implement the object tracking subsystem. Test the trolley's ability to detect and follow the color block. ## Yihang He, Computer Engineering: Design and implement the obstacle avoidance subsystem. Test the trolley's ability to detect and avoid obstacles of different sizes and shapes. ## Jiarun Hu, Electrical Engineering: Design and implement the voice-activated subsystem. Test the trolley's ability to recognize natural language instructions and control the movement of the trolley. |
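A minimal software reference model for the object tracking logic (not FPGA code): threshold a red color block, find its centroid column, and decide a turn direction, mirroring the decision the FPGA pipeline would make in hardware. The thresholds are placeholder values:

```python
# Software reference model (assumption): red-block centroid tracking and turn decision.
import numpy as np

def track_color_block(frame_rgb, r_min=150, gb_max=90):
    """frame_rgb: HxWx3 uint8 array. Returns 'left', 'right', 'forward', or 'search'."""
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    mask = (r > r_min) & (g < gb_max) & (b < gb_max)      # crude red threshold
    if mask.sum() < 50:                                   # too few pixels: block not visible
        return "search"
    cx = np.nonzero(mask)[1].mean()                       # mean column index of red pixels
    center = frame_rgb.shape[1] / 2
    if cx < center - 20:
        return "left"
    if cx > center + 20:
        return "right"
    return "forward"

if __name__ == "__main__":
    frame = np.zeros((120, 160, 3), dtype=np.uint8)
    frame[50:70, 120:140, 0] = 200                        # red block on the right side
    print(track_color_block(frame))                       # -> "right"
```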
||||||
24 | An Autonomous Pool Cleaner |
Hanwei Yu Jiayu Zhang Tianle Li Wenbo Ye |
Tielong Cai | design_document1.pdf proposal1.pdf |
Rakesh Kumar | |
# TEAM MEMBERS - Jiayu Zhang (jiayu7) - Hanwei Yu (hanweiy3) - Wenbo Ye (wenboye2) - Tianle Li (Tianlel2) # Problem Pools need to be cleaned regularly. The traditional manual cleaning method is time-consuming and labor-intensive. Therefore, pool owners need a more efficient measure to keep the pool clean with minimal intervention. # Solution Overview Our solution is to create an autonomous pool cleaner. The cleaner is a waterproof machine which contains a sensor that helps it detect obstacles and avoid collisions, wheels and tracks to help it move around, brushes and filters that collect debris and other particles from the pool water, batteries that provide power, and a remote control system that allows the machine to be started and stopped from the ground. # Solution Components ## Propulsion Subsystem - Wheels and motors that enable the robot to move. ## Body Subsystem - The shell of the cleaner and waterproof elements to protect the inside circuits and chips. ## Information Collection Subsystem - Use ultrasonic underwater sensors to enable the robot to walk underwater along walls, steer, etc. The robot moves around the pool’s edge starting from a certain position. Use this to create the pool's contour information (length L and width W). - Use an analog-to-digital converter to input the pool’s information to the processor. ## Route Design Subsystem - Inner helical trajectory. The spiral trajectory can help to reduce the overall cleaning time required, which can help to improve the robot's performance and reduce energy consumption. - A Teraboard/STM32/Arduino will be used as the processor of the control system. It acquires data from the sensors, plans a route, and then controls the propulsion subsystem. Which one to use depends on what is available in the laboratory. ## Communication Subsystem - A floating infrared receiver that connects to the cleaner under the water. - An infrared emitter that is used to control the cleaner. ## Cleaning Subsystem - Brushes that scrub the surfaces of the pool - A filter that collects debris and other particles and filters the water ## Power Subsystem - Rechargeable battery ## Big Object Collection Subsystem - Collect swimmers' lost items, such as swimming goggles, in the process of cleaning up. - A motor and an idler wheel with a brush that rolls up to transfer big objects. - Batteries that provide power to the motor. - A baffle board and a box to gather and collect big objects. - A water pump that is used to suck in big objects. # Criterion for Success - The cleaner should be waterproof. - The cleaner needs to cover every part of the pool in a relatively short time. - The cleaner should be able to suck in debris and other particles. - The cleaner can be started and stopped by remote control. - The cleaner can be successfully charged. # Distribution of Work - Hanwei Yu: Responsible for the overall modeling of the cleaner, turning the mechanical design into a physical robot, and designing proper waterproofing measures. - Tianle Li: Use ultrasonic underwater sensors to enable the robot to walk underwater along walls, steer, etc. The robot moves around the pool’s edge starting from a certain position. Use this to create the pool's contour information (L, W). After establishing the contour of the pool, obtain the robot's position in the pool using an optical encoder. - Wenbo Ye: Code implementation of efficient path planning, so that the route can adapt to different real-world environments. Control the propulsion subsystem for proper movement and turning. 
Use the control subsystem to guide and correct the course of travel based on signals from the sensors. - Jiayu Zhang: Use an MPU6050 to get the velocity and the angle of the cleaner, build a closed-loop control system to control the speed and spinning angle, and implement remote control of the cleaner. |
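A minimal sketch of one way the route design subsystem could generate a coverage path, assuming a rectangular pool of length L and width W from the contour step and approximating the inner spiral as concentric rectangular loops spaced by the brush width (all values are placeholders):

```python
# Assumed rectangular pool: generate inward rectangular loops (an approximation of the
# inner spiral trajectory) whose spacing matches the brush width.
def spiral_waypoints(length_m, width_m, brush_m=0.3):
    x0, y0, x1, y1 = 0.0, 0.0, length_m, width_m
    points = []
    while x1 - x0 > brush_m and y1 - y0 > brush_m:
        points += [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]   # one loop of the path
        x0 += brush_m; y0 += brush_m; x1 -= brush_m; y1 -= brush_m
    return points

if __name__ == "__main__":
    wps = spiral_waypoints(4.0, 2.0)
    print(len(wps), "waypoints; first loop:", wps[:4])
```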
||||||
25 | Semantic Communications for Unmanned Aerial Vehicles |
Chang Su Chenhao Li Tianze Du YU Liu |
Xiaoyue Li | design_document1.pdf proposal1.pdf |
Meng Zhang | |
# TEAM MEMBERS: 1. Yu LIU (yul9), 2. Chenhao LI (cl89), 3. Chang SU (changsu4), 4. Tianze DU (tianzed2) # TITLE OF THE PROJECT: Semantic Communications for Unmanned Aerial Vehicles # Problem & Motivation: Most existing techniques in semantic communications heavily rely on the direct transmission of each entire image between transmitters and receivers, whose performance is bottlenecked by the transmission procedure rather than the algorithms of semantic understanding. Therefore, we aim to develop a technique for unmanned aerial vehicles (UAVs) which can understand image samples, extract specific semantics, and communicate their symbolic representations to a target receiver (e.g., another UAV or a smart device). We anticipate that this technique can be much speedier than the direct transmission of each entire image. # Solution Overview: Our design at a high level: extract the semantic features of images taken by the camera on the UAV, and encode these features into bits for transmission. The bits are transmitted through a physical channel. The receivers’ decoders understand and infer the messages. # Solution Components: - Subsystem 1: Mutual Communication System (MCS) between UAVs and smart devices. The UAV needs to transmit semantic information to receivers. It needs a Channel Encoder, a Physical Channel, and a Channel Decoder. - Subsystem 2: Lighting Semantic Extraction System (LSES) for semantic information extraction on the UAV. The system needs to understand image information, for example the number of people and their locations in images, or extract other useful information. - Subsystem 3: UAV mechanical, balance, and dynamic System (UAVS). We need to modify a “stupid” UAV and make it successfully carry a camera and a microcomputer (e.g., a smartphone), move around, and take image samples. # Criterion for Success - Basic Requirements 1. Develop a UAV drone able to carry a camera and a microcomputer (e.g., a smartphone), move around, and take image samples. 2. The UAV understands its images, especially the number of people and their locations in images. 3. The UAV transmits semantic information to receivers. - Additional Features 1. Successful performance improvements in terms of transmission speed and other important metrics. 2. The UAV may extract other useful information (semantics) and its relative locations. # Distribution of Work 1. Yu Liu: In charge of the whole project. Assist with the Lighting Semantic Extraction System (LSES) and the Mutual Communication System (MCS). 2. Chang Su: In charge of the Lighting Semantic Extraction System (LSES). Assist with the UAV mechanical, balance, and dynamic System (UAVS). 3. Chenhao Li: In charge of the Mutual Communication System (MCS). Assist with the Lighting Semantic Extraction System (LSES). 4. Tianze Du: In charge of the UAV mechanical, balance, and dynamic System (UAVS). Assist with the Mutual Communication System (MCS). |
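A minimal sketch that illustrates the core idea of transmitting semantics instead of pixels, assuming a hypothetical message schema (person count plus normalized positions) and comparing its size to a raw frame; the schema and field names are not the team's actual encoding:

```python
# Hypothetical semantic message: encode only the extracted semantics (number of people
# and their image locations) instead of the full image, and compare payload sizes.
import json

def encode_semantics(detections, frame_w, frame_h):
    """detections: list of (x, y) person centers in pixels; returns a compact byte payload."""
    msg = {
        "people": len(detections),
        # Normalize positions so the receiver does not need to know the camera resolution.
        "positions": [[round(x / frame_w, 3), round(y / frame_h, 3)] for x, y in detections],
    }
    return json.dumps(msg, separators=(",", ":")).encode("utf-8")

if __name__ == "__main__":
    payload = encode_semantics([(320, 410), (980, 400), (1500, 390)], 1920, 1080)
    raw_image_bytes = 1920 * 1080 * 3              # uncompressed RGB frame, for comparison
    print(payload.decode(), f"-> {len(payload)} bytes vs {raw_image_bytes} bytes raw")
```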
||||||
26 | ML-based Weather Forecast on Raspberry Pi |
Chenzhi Yuan Xuanyu Chen Zhenting Qi Zheyu Fu |
Yi Wang | design_document1.pdf proposal1.pdf |
Cristoforo Dimartino | |
# Team Members Zheyu Fu (zheyufu2@illinois.edu 3190110355) Xuanyu Chen (xuanyuc2@illinois.edu 3190112156) Chenzhi Yuan (chenzhi2@illinois.edu 3190110852) Zhenting Qi (qi11@illinois.edu 3190112155) # Problem Weather forecasting is crucial in our daily lives. It allows us to make proper plans and get prepared for extreme conditions in advance. However, meteorologists always get it wrong half of the time and still keep their job :) To overcome the limitations of traditional weather forecasting, machine learning models have become increasingly important in weather forecasting. Building our own weather forecast ML system is a perfect opportunity for us to analyze vast amounts of local data and generate more accurate and timely weather predictions on the go in our surrounding areas. # Solution Overview A weather forecast system can be created by using a few different hardware components and software tools. Our solution mainly consists of two parts. For weather measurement and data collection, temperature, humidity, and barometric pressure sensors are considered the main components. A machine learning-based algorithm is to be applied for data analysis and weather predictions. # Solution Components ## Hardware Subsystem Due to the complexity of weather conditions, our system incorporates the following weather indicators and their corresponding collectors: - a barometric pressure sensor, a temperature sensor, and a humidity sensor - a digital thermal probe for heat distribution - an anemometer for wind speed, a wind vane for wind direction, and a rain gauge for precipitation The aforementioned equipment would be integrated into a single device, and weatherproof enclosures are needed to protect it. Plus, a Raspberry Pi, either with built-in wireless connectivity or a WiFi dongle, is required for conducting computations. ## Software Subsystem A practically usable weather forecast system is supposed to make reliable predictions for real-world multi-variable weather conditions. We apply Machine Learning techniques to achieve such generalization to unseen data. To this end, a high-quality dataset for training and evaluating the Machine Learning model is required, and a specially designed Machine Learning model would be developed on such a dataset. Once a well-trained system is obtained, we deploy the model on portable devices with easy-to-use APIs. # Criterion for Success 1. The weather measurement prototype with sensors should be able to accurately collect the temperature, humidity, barometric pressure, etc. 2. A machine learning algorithm should be successfully trained to make predictions on the weather conditions: rainy, sunny, thunderstorm, etc. 3. Our system can forecast the weather in Haining in real time and/or provide longer-period forecasts. 4. The forecasted weather information could be displayed elegantly through a UI. A display screen would be a baseline, and a phone application would be extra credit if time permitted. 5. Extra: Make our own weather dataset for Haining. If it is good, make it open-source. # Work Distribution **EE Student Zheyu Fu**: - Design the sensor module circuit - Develop the visualization interface **ECE Students Xuanyu Chen & Zhenting Qi**: - Weather data collection and analysis - Build and test the Machine Learning model on the Raspberry Pi **ME Student Chenzhi Yuan**: - Physical structure hardware design - Proper distribution of the sensors to collect accurate data on temperature, humidity, barometric pressure, etc. |
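A minimal sketch of the software subsystem's shape, assuming synthetic training data and hypothetical class labels; a small classifier like this would run comfortably on a Raspberry Pi, though the team's actual model and dataset remain to be designed:

```python
# Sketch with synthetic data: train a small classifier on (temperature, humidity, pressure)
# readings and predict a weather label; labels and distributions here are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Fake training set: [temperature C, relative humidity %, pressure hPa] -> weather label.
X = np.vstack([
    rng.normal([28, 40, 1018], [3, 8, 3], size=(100, 3)),   # "sunny"-like conditions
    rng.normal([18, 85, 1005], [3, 8, 3], size=(100, 3)),   # "rainy"-like conditions
])
y = np.array(["sunny"] * 100 + ["rainy"] * 100)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def forecast(temp_c, humidity_pct, pressure_hpa):
    return model.predict([[temp_c, humidity_pct, pressure_hpa]])[0]

if __name__ == "__main__":
    print(forecast(17.5, 90, 1003))   # expected: "rainy"
```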
||||||
27 | An Intelligent Assistant Using Sign Language |
Haina Lou Howie Liu Qianzhong Chen Yike Zhou |
Xiaoyue Li | |||
# TEAM MEMBERS Qianzhong Chen (qc19) Hanwen Liu (hanwenl4) Haina Lou (hainal2) Yike Zhou (yikez3) # TITLE OF THE PROJECT An Intelligent Assistant Using Sign Language # PROBLEM & SOLUTION OVERVIEW Recently, smart home accessories have become more and more common in people's homes. A hub, which is usually a speaker with a voice user interface, is needed to control private smart home accessories. But an interactive speaker may not be ideal for people who have difficulty speaking or hearing. Therefore, we aim to develop an intelligent assistant using sign language, which can understand sign language, interact with people, and act as a real assistant. # SOLUTION COMPONENTS ## Subsystem 1: 12-Degree-of-Freedom Bionic Hand System - Two movable joints per finger, driven by 5-V servo motors - The main parts of the hand manufactured with 3D printing - The bionic hand is fixed on a 2-DOF electrical platform - All of the servo motors controlled by PWM signals transmitted by an STM32 microcontroller ## Subsystem 2: The Control System - The control system consists of embedded system modules including the microcontroller, a high-performance edge computing platform which will be used to run the dynamic gesture recognition model, and more than 20 motors which control the delicate movement of our bionic hand. It also requires a high-precision camera to capture the hand gestures of users. ## Subsystem 3: Dynamic Gesture Recognition System - An external camera capturing the shape, appearance, and motion of the target hands - A pre-trained model to help the other subsystems figure out the meaning behind the sign language. To be more specific, at the object detection step, we intend to adopt the YOLO algorithm as well as MediaPipe, a machine learning framework developed by Google, to recognize different signs efficiently. Considering the characteristics of dynamic gestures, we also hope to adopt 3D-CNN and RNN models to better fit the spatio-temporal features. # CRITERION OF SUCCESS - The bionic hand can move freely and fluently as designed, with all of the 12 DOFs fulfilled. The movement of a single finger joint does not interrupt, and is not interrupted by, other movements. The durability and reliability of the bionic hand are achieved. - The control system needs to be reliable and output stable PWM signals to the motors. The edge computing platform we choose should have high performance when running the dynamic gesture recognition model. - Our machine can recognize different signs immediately and react with corresponding gestures without obvious delay. # DISTRIBUTION OF WORK - Qianzhong Chen (ME): Mechanically design and manufacture the bionic hand; tune the linkage between motors and mechanical parts; work with Haina to program the STM32 to generate PWM signals and drive the motors. - Hanwen Liu (CompE): Record gesture clips to collect enough data; test camera modules; draft reports; make schedules. - Haina Lou (EE): Implement the embedded control system; program the microcontroller and the embedded AI edge computing module, and implement serial communication. - Yike Zhou (EE): Build the object detection subsystem; build and train the machine learning models. |
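A minimal sketch of the joint-angle-to-PWM computation the STM32 would perform for each servo, assuming typical hobby-servo timing (0.5-2.5 ms pulse in a 20 ms period); the exact pulse range depends on the servos actually chosen:

```python
# Assumed hobby-servo timing: convert a finger-joint angle into the PWM pulse width
# and duty cycle that the microcontroller timer would output at 50 Hz.
def servo_pwm(angle_deg, min_us=500, max_us=2500, period_us=20000):
    """Map 0-180 degrees linearly onto a 0.5-2.5 ms pulse in a 20 ms (50 Hz) period."""
    angle = max(0.0, min(180.0, angle_deg))
    pulse_us = min_us + (max_us - min_us) * angle / 180.0
    duty = pulse_us / period_us
    return pulse_us, duty

if __name__ == "__main__":
    for a in (0, 90, 180):
        pulse, duty = servo_pwm(a)
        print(f"{a:3d} deg -> {pulse:6.1f} us pulse, duty {duty:.3f}")
```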
||||||
28 | Electric Load Forecasting (ELF) System |
Ao Zhao Liyang Qian Yihong Jin Ziwen Wang |
Xiaoyue Li | design_document1.pdf proposal2.pdf |
Ruisheng Diao | |
# Electric Load Forecasting (ELF) System # Team members: Ao Zhao, aozhao2 Ziwen Wang, ziwenw5 Liyang Qian, liyangq2 Yihong Jin, yihongj3 # Problem Electric load forecasting (ELF) is a method that takes into account unstable factors, such as weather conditions and electricity prices, to predict the demand for electricity. Many utility companies rely on manual forecasting techniques based on specific datasets, but these methods may lack accuracy when fine-grained, high-time-resolution forecasting is required. To accurately predict expenses on electricity and construct reliable infrastructures that can withstand a certain electrical load, utility companies need more advanced and reliable forecasting methods. # Solution Overview The electric load forecasting system is a powerful tool for predicting future electric load usage based on dedicated hardware and AWS services. By combining the data collection subsystem, data storage subsystem, prediction subsystem, query API subsystem, and web page subsystem, customers can easily retrieve and visualize the predicted electric load usage and use it for planning and optimization purposes. The system is designed to be accurate, effective, reliable, and easy to use, providing customers with a complete solution for electric load forecasting. # Solution Components [Data Collection Subsystem] - This subsystem is responsible for collecting real-time data on electric load usage. The data collection hardware is designed to be reliable, scalable, and capable of handling large volumes of data. The collected data is then sent to the data storage subsystem for further processing through AWS IoT Core. Hardware I. Smart meters to collect voltage, current, power, and other data, which further improve the ability to collect information. Hardware II. A transmission/communication device that connects a smart meter to the software or a concentrator. Hardware III. Sensors that collect relevant external factor data (e.g., a temperature sensor). [Data Storage Subsystem] - This subsystem is responsible for storing the collected data in a secure, scalable, and durable storage system. The data is stored in a format that is compatible with the Forecast DeepAR+ algorithm. AWS S3 provides a highly available and cost-effective storage solution that is suitable for storing large volumes of data. [Prediction Subsystem] - This subsystem is responsible for generating accurate predictions of future electric load usage based on the collected data. The Forecast DeepAR+ algorithm is a state-of-the-art machine learning algorithm that is designed for time-series forecasting. The AWS Forecast service makes it easy to generate accurate predictions at scale. The output of this subsystem is a forecast of future electric load usage that can be used for planning and optimization purposes. [Query API Subsystem] - This subsystem provides a RESTful API that allows customers to retrieve the predicted electric load usage for a specified time period. The API is designed to be secure, scalable, and easy to use. Customers can send requests to the API with the necessary parameters, and the API will return the predicted electric load usage in a format that is easy to understand and use. [Web Page Subsystem] (Optional) - This subsystem provides a user-friendly web interface for accessing the predicted electric load usage. The web page is built on top of the query API and allows customers to easily select the time period they are interested in and view the predicted electric load usage in a graphical format. 
The web page is designed to be responsive, easy to use, and accessible from any device with a web browser. # Criterion for Success Accuracy: The system should generate accurate predictions of future electric load usage. The accuracy of the predictions should be high enough to enable effective planning and optimization of electric power usage. Scalability: The system should be capable of handling large volumes of data and generating predictions for a large number of electric load customers. The system should be able to scale up or down as the demand for electric power changes. Reliability: The system should be designed to be highly reliable and available. It should be able to handle failures gracefully and recover quickly from any disruptions in service. Security: The system should be designed to be secure and protect customer data from unauthorized access or disclosure. The system should use industry-standard encryption and access controls to protect customer data. Ease of Use: The system should be designed to be easy to use and accessible to a wide range of customers. The query API should be easy to understand and use, and the web page interface should be intuitive and user-friendly. Cost-Effectiveness: The system should be designed to be cost-effective and provide good value for money. The cost of running the system should be reasonable and should not be a significant barrier to adoption. # Distribution of Work Yihong Jin, Computer Engineering: As an [AWS Certified Solutions Architect - Professional](https://www.credly.com/badges/1e4aa7a1-3ee6-4dd8-94c5-c015a85c3b84/linked_in_profile), design and implement the software architecture of this solution based on AWS services. Responsible for building the data pipeline, which ingests raw data from the dedicated hardware and prepares it for Machine Learning model training. Liyang Qian, Computer Engineering: Train the DeepAR+ model with data stored in AWS S3 and build the API to enable customers to take advantage of the forecasting results. Ao Zhao, Ziwen Wang, Electrical Engineering: Design the hardware used to collect the data, and connect the smart meters and software through transmission devices or specific communication methods to realize data interaction between them. |
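A minimal local stand-in for the Query API subsystem's shape, using a fake forecast generator in place of the AWS Forecast (DeepAR+) backend; the route, parameters, and response fields are illustrative assumptions, not the team's final API:

```python
# Local stand-in for the Query API subsystem: a REST endpoint returning hourly load
# predictions for a requested time window; fake_forecast() is a placeholder for the
# call into the trained DeepAR+ predictor.
from datetime import datetime, timedelta
from flask import Flask, jsonify, request

app = Flask(__name__)

def fake_forecast(start: datetime, hours: int):
    # Placeholder shape: a flat base load with an evening peak.
    return [round(50 + 30 * ((start + timedelta(hours=h)).hour in range(18, 22)), 1)
            for h in range(hours)]

@app.route("/forecast")
def forecast():
    start = datetime.fromisoformat(request.args.get("start", "2024-01-01T00:00"))
    hours = int(request.args.get("hours", 24))
    values = fake_forecast(start, hours)
    return jsonify({"start": start.isoformat(), "interval": "1h", "load_kw": values})

if __name__ == "__main__":
    app.run(port=8080)   # e.g. GET /forecast?start=2024-01-01T00:00&hours=6
```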
||||||
29 | Dancing Scoring Robot |
Chuxuan Hu Heyue Wang Xiaohan Zhu Yiyuan Chen |
Tielong Cai | design_document1.pdf proposal1.pdf |
Gaoang Wang | |
TEAM MEMBERS: Chuxuan Hu (chuxuan3@illinois.edu 3190112151) Heyue Wang (heyuew2@illinois.edu 3190110843) Xiaohan Zhu (xzhu54@illinois.edu 3190110352) Yiyuan Chen (yiyuanc2@illinois.edu) PROBLEM: Today, most dance competitions are judged by a panel of human judges. This system is open to bias and subjectivity, and it can be difficult to ensure that the same standards are applied across all competitions. There is thus strong demand for a machine that provides a more consistent and objective approach to judging dance competitions. The dance scoring machine automates the process of evaluating dance performances. It is programmed with a set of criteria and a scoring system, allowing it to accurately compare performances against a standard. The machine can be adjusted to suit the individual needs of each competition, allowing for greater customization and accuracy of the scoring. The impact of this machine is that it provides a more reliable and objective method for judging dance competitions. It eliminates potential bias and subjectivity, ensuring that the same standards are applied across all competitions. In addition, the machine is designed to be user-friendly and intuitive, allowing for a streamlined and efficient judging process. This machine will ultimately provide a fairer and more reliable judging process for dance competitions, resulting in a more accurate and consistent ranking of performances. OBJECTIVE: This project proposes to design a robot for scoring dance performances. There are three primary objectives: -The robot must be able to track and score multiple dancers simultaneously. -The robot should use multiple modalities to score performances, such as audio and visual inputs. -The robot should provide a comprehensive score for each dancer, including an accuracy score and a performance score. SOLUTION OVERVIEW: Our solution is to score the dancers' performance by utilizing three distinct evaluation methods. The first step is to evaluate whether the dancer's movements match the standard movements; the second step is to assess how well the dancer's movements match the dance music; and the third step is to use a smartwatch to monitor the dancer's body condition in real time, analyze the intensity of their movements, and record the dancer's hand movements in greater detail. By taking these three aspects into account, we can create a comprehensive evaluation of the dancer's performance and display it on the screen (a minimal sketch of the pose-comparison idea is given below). SOLUTION COMPONENTS: SCORING SUBSYSTEM: -Camera for scanning the dancer's body movement. -Speakers for playing the music. -Smart bracelets for detecting the wearer's physical condition, including heart rate. OUTPUT SUBSYSTEM: -Display for showing the three-part scoring of the dancer's performance. CRITERIA FOR SUCCESS: -Clearly displays the different data collected from the three input subsystems. -Displays the data in a synchronized, cohesive manner. -The algorithm should effectively combine the three inputs to provide an accurate assessment of the current dancing performance. WORK DISTRIBUTION: EE Student Xiaohan Zhu -Design and implement comprehensive evaluation methods. -Integrate different evaluation standards and showcase the results on the display. ECE Student Chuxuan Hu -Design and implement the motion recognition framework. -Carry out unit tests to ensure software accuracy and efficiency. ME Student Heyue Wang -Develop hardware for the three subsystems. 
-Implement and test the fully assembled product with physical interactions, ensuring the successful completion of the project. ECE Student Yiyuan Chen -Focus on vital-sign collection and smartwatch adjustments. -Collect and integrate these data into the system. |
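As a rough illustration of how the scoring subsystem might turn camera input into the accuracy score, the sketch below assumes 2D pose keypoints for each frame are already available from a pose-estimation model; the normalization and cosine-similarity weighting are illustrative assumptions, not the team's specified method.

```python
# Minimal sketch: frame-by-frame pose comparison for the camera-based accuracy score.
# Assumes each frame gives an (N_joints x 2) array of 2D keypoints for the dancer and
# for the reference choreography; normalization and weighting are illustrative choices.
import numpy as np

def normalize_pose(keypoints: np.ndarray) -> np.ndarray:
    """Center a pose on its centroid and scale it to unit size so different body
    sizes and camera distances can be compared."""
    centered = keypoints - keypoints.mean(axis=0)
    scale = np.linalg.norm(centered)
    return centered / scale if scale > 0 else centered

def frame_similarity(dancer: np.ndarray, reference: np.ndarray) -> float:
    """Cosine similarity between two normalized poses, mapped into [0, 1]."""
    a = normalize_pose(dancer).ravel()
    b = normalize_pose(reference).ravel()
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return (cos + 1.0) / 2.0

def accuracy_score(dancer_frames, reference_frames) -> float:
    """Average the per-frame similarity over the routine and report it as 0-100."""
    sims = [frame_similarity(d, r) for d, r in zip(dancer_frames, reference_frames)]
    return 100.0 * float(np.mean(sims)) if sims else 0.0

# Illustrative usage with random stand-in keypoints (17 COCO-style joints).
rng = np.random.default_rng(0)
frames = [rng.random((17, 2)) for _ in range(30)]
print(accuracy_score(frames, frames))  # identical sequences -> 100.0
```

The same per-frame similarity could later be combined with the music-alignment and smartwatch measurements to form the displayed three-part score.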
||||||
30 | High-renewable microgrid for Railway Power Conditioner (RPC) |
Jiakai Lin Jiebang Xia Kai Zhang Yongcan Wang |
Yi Wang | design_document1.pdf design_document2.pdf design_document3.pdf design_document4.pdf proposal1.pdf |
Lin Qiu | |
# Team Members: - Yongcan Wang (Yongcan2) - Jiebang Xia (jiebang2) - Kai Zhang (kaiz5) - Jiakai Lin (jiakail2) # Problem: In real life, the external power supply system is three-phase, while the traction network of an electrified railway usually involves only two phases, Vdd and GND. If we randomly select two phases out of the three-phase power grid, there will be a mismatch in the power consumption on each power line, so we need a way to balance the power consumption on each line. During the operation of electrified railways, the environment is not stable and small disturbances may act on the system, which requires our system to be resilient in rejecting those disturbances. For example, the friction between the train and the track varies in different areas, and the train may sometimes climb a ramp. To make the train operate at a uniform velocity, we cannot simply apply uniform traction. The supply voltage also cannot remain perfectly constant; small fluctuations are inevitable. It is vital that the train functions properly regardless of the outside environment. # Solution overview: The overall plan is to connect the three phases of the power grid cyclically to the traction network at different sections of the railway. In other words, if the three phases are a, b, and c and we choose the phase pairs (a, b), (b, c), and (c, a) as the input voltage in turn every few kilometers, the power supply grid will be close to balanced on a large scale. To balance the power supply at the breakpoints of the traction network, where the selected phase pair changes, a Railway Power Conditioner (RPC, a hub of power conversion) is designed to dynamically balance the inter-phase active power. A microgrid is also connected to the RPC and plays the role of a reservoir: it will absorb extra power or supply backup power during disruptions, and it provides a way to utilize regenerative braking energy to increase energy efficiency. Control theory is applied to make the traction network stable and improve the quality of the power supply. # Solution components: ## 1. Railway Power Conditioner (RPC): This is the main subsystem of this project. It aims to dynamically balance the inter-phase active power, independently compensate the reactive power of each feeder, and suppress its harmonics. We will use converters, AC/DC transformers, and RC filters in this subsystem. Transformers are used to step down the high voltage of the traction network to a relatively safe low voltage to handle. AC/DC converters are used to eliminate the phase difference between the two sides of the breakpoint and make it possible to connect them. RC filters are used to suppress harmonics and provide a stable DC voltage. ## 2. Control system: This is the algorithm part of the project. We will use control theory to make the train run in a safe and stable manner. We will start with open-loop control and design a PID closed loop later (a minimal PID sketch is given below). We will also try data-driven maximum power point tracking control to extract maximum power from the solar panel. By generating the switching control signals from an Arduino, switches can be opened or closed to realize the intended behavior of those converters. ## 3. MTDC (multi-terminal DC transmission control) system: This is the bus that connects the RPC and other sources from the microgrid (such as photovoltaics, a battery, and even wind power). We need to design an MTDC that can satisfy our different expectations. 
For the RPC and the battery, we need a stable voltage reference, while for the solar panels, we need to change the DC voltage value to extract maximum power. This can be realized by designing a proper DC/DC converter for each component. ## 4. Three-phase voltage source and solar panel: These are the external voltage sources that provide power to the system. Instead of using the 110 kV/220 kV external power grid, we will use a much safer 220 V three-phase voltage source from sockets. We will borrow solar panels from the lab and connect them in parallel as the microgrid source to simulate a solar farm. # Criterion for success: - Two AC voltage sources with different phases are converted into the same DC voltage through the RPC. This implies that the two sides of the breakpoint can be connected, which demonstrates the functionality of the RPC. - The MTDC provides the required voltage for each connected component. We can use a voltmeter to test whether each port of the MTDC is at its desired voltage. The ports for the RPC and the battery should be stable, while the port for the solar panel should vary with time to track the maximum power point. - Apply a disturbance to the system to assess its stability. We can connect a resistance in series with the train load to simulate an increased traction demand. If the DC voltage at the RPC becomes stable again soon after we add the resistance, the system has relatively good resilience. In addition, the advantages and disadvantages of different algorithms can also be judged by comparing their results, such as whether maximum power point tracking can be achieved in the shortest possible time. # Distribution of work: 1. One group member (EE *Yongcan Wang*) will research the voltage level design for each wire of the system. He is also responsible for selecting appropriate transformers on the market that meet these voltage specs and installing them into the circuit. 2. One group member (EE *Jiebang Xia*) is responsible for the converters and filters. He needs to think about how to use appropriate semiconductor components to build the AC/DC and DC/DC converters that meet our requirements for the RPC and the MTDC. He is also responsible for selecting filter specifications that yield the desired time constant. 3. One group member (EE *Kai Zhang*) is responsible for clarifying the overall operating principle. He needs to understand how the whole system works and will also help the hardware-focused members with connections, selecting hardware specifications, etc. 4. One team member (ECE *Jiakai Lin*) is responsible for the software, especially designing the control algorithms for the AC/DC and DC/DC converters in different areas and the fast maximum power point tracking algorithm. Our project has some control programs that need to be completed by ECE students; most of the other parts involve the study and practical application of electrical-engineering hardware principles, as well as power supply and power grid knowledge. |
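As a rough illustration of the planned PID closed loop (Solution component 2), the sketch below regulates a DC-link voltage by adjusting a converter duty cycle against a very crude first-order stand-in for the converter and filter dynamics; the gains, the 48 V setpoint, and the plant model are illustrative assumptions, not the team's design values.

```python
# Minimal PID sketch: hold the RPC DC-link voltage at a setpoint by adjusting the
# converter duty cycle. The plant model, gains, and setpoint are illustrative only;
# on hardware this loop would drive the Arduino-generated switching signals.
class PID:
    def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(max(u, self.out_min), self.out_max)   # clamp to a valid duty cycle

pid = PID(kp=0.1, ki=2.0, kd=0.0, dt=1e-3)   # illustrative gains, 1 ms sampling
v_dc = 40.0                                  # initial DC-link voltage [V]
for _ in range(5000):                        # simulate 5 s
    duty = pid.update(setpoint=48.0, measured=v_dc)
    v_dc += (duty * 60.0 - v_dc) * 0.01      # crude stand-in for converter + RC filter
print(round(v_dc, 2))                        # settles near the 48 V setpoint
```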
||||||
31 | Raspberry Pet Pal |
Shuhan Guo Xiaomin Qiu Xiaoshan Wu Yirou Jin |
Yiqun Niu | design_document1.pdf proposal1.pdf |
Gaoang Wang | |
MEMBERS: Shuhan Guo [shuhan4] Yirou Jin [yirouj2] Xiaomin Qiu [xchou2] Xiaoshan Wu [xw50] PROBLEM: In today's fast-paced and high-pressure world, young people are experiencing both loneliness and poverty. They long for social engagement and relationships, but most social activities are costly and pose a risk of infection during the pandemic. As a result, many young people turn to pets for companionship. However, owning a pet can financially burden those struggling to make ends meet. Furthermore, cute pets can sometimes create chaos in their owners' homes. Thus, an affordable and functional electronic pet seems like a necessary solution. Unfortunately, the current products on the market are either too expensive or fail to match a real pet's abilities, such as voice and behavior interaction. Therefore, lonely and financially strapped young people require an affordable and functional electronic pet. SOLUTION OVERVIEW: The new electronic pet is equipped with cutting-edge technology and advanced features for an interactive and engaging experience. This electronic pet is designed to follow its owner's movements through target detection technology while displaying a range of expressions through a high-quality display screen. With its voice recognition and corresponding audio output, this pet can communicate with its owner and respond to commands. Additionally, it can assist in carrying objects through a built-in weighing system, track physical activity through its motion detection capabilities, and navigate its surroundings using obstacle avoidance technology. Furthermore, the pet's limbs are designed for interactive play and detection through infrared technology. With its advanced features, this electronic pet offers unparalleled interactivity and companionship for pet lovers of all ages. SOLUTION COMPONENTS: Camera subsystem for object detection: This subsystem will consist of a camera module attached to the Raspberry Pi that detects and follows objects. We can use OpenCV and TensorFlow libraries to perform object detection and tracking. Display subsystem for showing expressions: This subsystem will consist of a small display screen showing different expressions of the pet. We can use Python libraries like pygame or Tkinter to create a graphical user interface for displaying emotions and expressions. Speech recognition and synthesis subsystem: This subsystem will consist of a microphone and speaker attached to the Raspberry Pi. We can use Google's Speech Recognition API or CMU Sphinx to perform speech recognition and the pyttsx3 library to synthesize speech. Weighing subsystem for carrying things: This subsystem will consist of a small weighing scale attached to the pet. We can use an HX711 module to interface with the load cell and obtain weight readings. Motion sensing subsystem for step tracking: This subsystem will consist of an accelerometer or gyroscope sensor to detect the pet's movement and track its steps. We can use the Adafruit LSM9DS1 library to interface with the sensor. Obstacle avoidance subsystem: This subsystem will consist of ultrasonic or infrared sensors to detect obstacles and help the pet navigate around them. We can use the RPi.GPIO library to interface with the sensors. Behavior interaction subsystem: This subsystem will consist of an infrared or ultrasonic sensor that detects objects or hands’ proximity, and the pet can react with different motions or movements. We can use the RPi.GPIO library to interface with the sensors. 
CRITERION FOR SUCCESS: Functionality: The electronic pet should be able to perform all the desired functions reliably and accurately. It should be able to follow objects, display a range of emotions on its screen, recognize and respond to voice commands, carry and weigh objects, count steps, avoid obstacles, and interact through its limbs using infrared detection. User experience: The electronic pet should be easy to use and interact with. Users should be able to easily control and communicate with the pet through its display screen and voice recognition system. The pet should also respond to user interactions in a fun and engaging way. Durability and stability: The electronic pet should be built using durable and stable components to ensure that it can withstand regular use without breaking down. The pet should be stable enough to navigate different terrains and avoid obstacles without getting stuck or tipping over. Battery life: The electronic pet should have a long-lasting battery so that it can be used for extended periods without needing to be recharged, which is particularly important if the pet is used by children or in educational settings. Customizability: The electronic pet should be customizable to allow users to personalize their pet and add new features or functionality over time. This could include changing the pet's appearance, adding new voice commands, or integrating with other devices or systems. DISTRIBUTION OF WORK: ECE STUDENT 1 WU XIAOSHAN: Develop and implement the target detection algorithm for the pet's ability to follow its owner and avoid obstacles. Implement and test the motion control system for the pet. ECE STUDENT 2 QIU XIAOMIN: Develop and implement the facial expression display system for the pet's screen. Implement and test the speech recognition system for the pet's ability to interact with its owner. EE STUDENT JIN YIROU: Develop and implement the weighing system for the pet's ability to carry objects. Implement and test the step-counting system for the pet. ME STUDENT GUO SHUHAN: Design and manufacture the physical structure of the pet, including the casing and the mechanical components required for limb movement. Implement and test the infrared detection system for the pet's limb interaction. |
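As a rough illustration of the obstacle-avoidance subsystem described above, the sketch below reads an HC-SR04-style ultrasonic sensor with the RPi.GPIO library the team names; the specific sensor, the pin numbers, and the 20 cm threshold are assumptions, and the code only runs on a Raspberry Pi.

```python
# Minimal sketch: distance reading for obstacle avoidance with RPi.GPIO.
# Assumes an HC-SR04-style ultrasonic sensor; BCM pin numbers and the stop
# threshold are illustrative. Runs only on a Raspberry Pi.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24                  # assumed BCM pin assignments
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm() -> float:
    """Fire one ultrasonic ping and convert the echo time to centimeters."""
    GPIO.output(TRIG, True)
    time.sleep(10e-6)                # 10 us trigger pulse
    GPIO.output(TRIG, False)
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:     # wait for the echo pulse to start
        start = time.time()
    while GPIO.input(ECHO) == 1:     # wait for the echo pulse to end
        stop = time.time()
    return (stop - start) * 34300 / 2.0   # speed of sound ~343 m/s, round trip

try:
    while True:
        if read_distance_cm() < 20.0:
            print("Obstacle ahead - stop or turn")   # hook into the motion controller here
        time.sleep(0.1)
finally:
    GPIO.cleanup()
```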
||||||
32 | Observation Balloon For Testing Centers |
Jiajie Wang Shuaicun Qian Tunan Zhao Yichi Zhang |
Yutao Zhuang | design_document1.pdf proposal1.pdf |
Timothy Lee | |
GROUP MEMBERS Yichi Zhang NetID: yichi6 Tunan Zhao NetID: tunanz2 Jiajie Wang NetID: jiajie3 Shuaicun Qian NetID: sqian8 PROBLEM We propose a floating balloon drone that monitors students who are taking tests. It needs to be quiet and provide aerial observation of the students to make sure that they are not cheating. Normally, if we used a conventional drone to monitor students, the noise would heavily distract them and could even be dangerous. Therefore, we want to create a safer, quieter machine that achieves the same goal. We can remotely control the direction and the height of the balloon, and we can bring it to places that have no fixed cameras to monitor the students. SOLUTION OVERVIEW Our design is a low-noise balloon drone that can be remotely controlled by a human to monitor students taking an exam. The most important requirement is that it must not be as noisy as a normal drone, so we want to find a propulsion approach that avoids noisy motors. To make it more useful, the drone is equipped with several cameras that can send pictures to our cellphone over Bluetooth. If possible, we also want to use computer vision to automatically detect students suspected of cheating. SOLUTION COMPONENTS SUBSYSTEM #1: BALLOON DESIGN Design the overall structure of the balloon, including its shape and size. Choose which gas and materials to use based on weight, durability, and strength. Create a physical model of the balloon using 3D printing or laser cutting. SUBSYSTEM #2: FLIGHT CONTROL SYSTEM Design a control panel that includes a joystick and other necessary controls. Choose appropriate sensors for detecting the altitude, speed, and orientation of the aircraft. Implement algorithms for stabilizing the aircraft during flight and adjusting control surfaces for directional control. SUBSYSTEM #3: POWER AND PROPULSION Choose a suitably quiet motor and propeller to provide the necessary thrust. Design and integrate a battery system that can power the motor and control systems for a sufficient time. Implement a power management system that can monitor the battery voltage and ensure safe operation. SUBSYSTEM #4: IMAGE TRANSMISSION Buy a small camera module that can capture pictures. Manage the battery it uses. Transmit the pictures to the user's phone (a minimal transmission sketch is given below). If possible, automatically detect whether someone is cheating. CRITERION FOR SUCCESS 1. A floating balloon drone with flight controls built for indoor environments. 2. Able to capture suspicious student activity during testing. DISTRIBUTION OF WORK Yichi Zhang: POWER AND PROPULSION Tunan Zhao: IMAGE TRANSMISSION Jiajie Wang: BALLOON DESIGN and FLIGHT CONTROL SYSTEM Shuaicun Qian: BALLOON DESIGN and FLIGHT CONTROL SYSTEM |
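As a rough illustration of the image-transmission subsystem, the sketch below captures a frame with OpenCV and sends it as a length-prefixed JPEG over a socket; the receiving address is a placeholder, and a plain TCP socket stands in for the Bluetooth link the team mentions (an RFCOMM socket, e.g. via PyBluez, exposes a very similar send interface).

```python
# Minimal sketch: grab one camera frame and push it to the phone app as a
# length-prefixed JPEG. Address, port, and JPEG quality are placeholders; a
# Bluetooth RFCOMM socket could replace the TCP socket with minimal changes.
import socket
import struct
import cv2

PHONE_ADDR = ("192.168.1.50", 9000)          # assumed address of the receiving app

def send_one_frame(sock: socket.socket, cam: cv2.VideoCapture) -> None:
    ok, frame = cam.read()
    if not ok:
        return
    ok, jpeg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 70])
    if not ok:
        return
    data = jpeg.tobytes()
    sock.sendall(struct.pack(">I", len(data)))   # 4-byte big-endian length header
    sock.sendall(data)                           # then the JPEG payload

if __name__ == "__main__":
    cam = cv2.VideoCapture(0)                    # onboard camera
    with socket.create_connection(PHONE_ADDR) as sock:
        send_one_frame(sock, cam)
    cam.release()
```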
||||||
33 | MEMS-based Feedback Controller |
Jianbo Xu Peiheng Yao Zhiyuan Xie |
Xuyang Bai | design_document1.pdf design_document2.pdf proposal1.pdf |
Binbin Li | |
**TEAM MEMBERS:** Jianbo Xu[jianbox2] Peiheng Yao[peiheng5] Zhiyuan Xie[zx30] **TITLE OF THE PROJECT:** MEMS-based Feedback Controller **PROBLEM:** Seismic activity can cause significant damage to buildings and infrastructure. Vibrations caused by seismic activity can result in a range of problems, such as structural damage, equipment failure, and even collapse, which can lead to serious safety risks and financial losses. A feedback controller can help to mitigate these risks by providing a mechanism for actively controlling the vibrations of the building model. The controller receives feedback from sensors that measure the vibration of the building model and uses this information to adjust the input to the shake table. This allows the controller to actively suppress the vibrations of the building model, reducing the risk of damage or failure. **SOLUTION OVERVIEW:** We aim to develop a feedback controller for this purpose, which involves designing a control algorithm that can accurately and efficiently control the vibration of the building model. The model response is measured by a MEMS-based accelerometer, which is a type of accelerometer that uses micro-electromechanical systems (MEMS) technology to measure acceleration. **SOLUTION COMPONENTS:** Reading circuit: we need to design a readout circuit that can fully track how the capacitance of the MEMS accelerometer changes. We can use an RC circuit such as an RC latch to hold the signal and then transmit it elsewhere; our main goal is to minimize the delay of this process. Wireless system: it connects the circuit with our computer and allows us to import the data into the computer, where further data processing occurs. There are two candidate solutions for this wireless link; one is to transmit the data over Bluetooth, which is convenient and familiar to us, and an Arduino-compatible module such as the HC-05 or HC-42 can make this possible. The mechanical part: to imitate the structure shown in the description, we need to place four rods in the system and mount the MEMS accelerometer on the surface, keeping it stable to avoid intrinsic vibration, which would introduce error into our system. Data processing program: to simulate the vibration of the building or other vibrations, we need to precisely capture and analyze the collected data. This can be built using Simulink/MATLAB. After the computer receives the data, we process the signals by filtering out unwanted frequency components and extracting what we need to build the feedback controller (an equivalent Python sketch is given below). **CRITERIA FOR SUCCESS:** Reliability: the feedback controller can detect seismic excitation correctly, and its overall weight is no more than 10% of the floor mass. Accuracy: it can react to a slight vibration. Selectivity: the maximum acceleration of the floor response is no more than 0.01 g under various seismic excitations. |
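As a rough, Python-based stand-in for the MATLAB/Simulink data-processing step, the sketch below reads acceleration samples streamed over the HC-05 Bluetooth serial link and low-pass filters them; the port name, baud rate, sample rate, and cutoff frequency are assumptions.

```python
# Minimal sketch: read acceleration samples from the HC-05 serial link and
# low-pass filter them before they feed the feedback controller. Port, baud
# rate, sample rate, and cutoff are illustrative assumptions.
import numpy as np
import serial                                   # pyserial
from scipy.signal import butter, lfilter

def read_samples(port="/dev/rfcomm0", baud=9600, n=1024):
    """Collect n newline-terminated acceleration values (in g) from the link."""
    samples = []
    with serial.Serial(port, baud, timeout=1) as ser:
        while len(samples) < n:
            line = ser.readline().decode(errors="ignore").strip()
            try:
                samples.append(float(line))
            except ValueError:
                continue                        # skip malformed or empty lines
    return np.array(samples)

def lowpass(x, fs=200.0, cutoff=20.0, order=4):
    """4th-order Butterworth low-pass to suppress high-frequency noise."""
    b, a = butter(order, cutoff, btype="low", fs=fs)
    return lfilter(b, a, x)

if __name__ == "__main__":
    acc = read_samples()
    print("peak filtered acceleration [g]:", float(np.max(np.abs(lowpass(acc)))))
```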
||||||
34 | Robot for Gym Exercise Guidance |
Chang Liu Dalei Jiang Kunle Li Zifei Han |
Yiqun Niu | design_document1.pdf proposal1.pdf |
Gaoang Wang | |
TEAM MEMBERS Dalei Jiang (daleij2) Zifei Han (zifeih2) Chang Liu (changl12) Kunle Li (kunleli2) PROJECT TITLE Robot for Gym Exercise Guidance PROBLEM In modern society, daily fitness is an important lifestyle choice for staying healthy. When it comes to fitness, correct form is very important. However, hiring a coach exclusively for instruction is sometimes neither convenient nor economical. We think robots are perfectly capable of determining whether a person's movements are performed correctly. To this end, we propose to design a robot that can follow behind a person, use certain technologies to identify their movements while they exercise, compare them with existing action models, and give an evaluation. SOLUTION OVERVIEW Our solution is to design a robot that includes a driven chassis on the bottom and a computer and camera on top. With ultrasonic radar and cameras, the robot can follow the target. When the "motion assessment" module starts to operate, the camera will capture video and begin motion analysis at the same time. The analysis of human motion will be completed as quickly as possible and a standards-based evaluation of the motion will be given. At the same time, we will design some multimedia content, such as sound and video, to interact with the user. SOLUTION COMPONENTS Based on the introduction above, several systems need to be implemented to realize the solution. SUBSYSTEM 1: BOTTOM MOBILE PLATFORM PROGRAMMING We plan to make use of the EAI SMART robot platform as the base movement platform of the robot. We will do the programming based on the ROS system to realize automatic navigation, path planning, and object tracking. SUBSYSTEM 2: SKELETAL BINDING AND MOVEMENT ANALYSIS OF THE HUMAN BODY The most important part of this program is that we will use Mask R-CNN to do the skeletal binding to determine the human's movement. We will try to train an efficient model to help us realize fast analysis. SUBSYSTEM 3: MAN-MACHINE INTERACTIVE SYSTEM As a user-oriented product, we need to design a friendly human-computer interface that allows easy switching between functions. SUBSYSTEM 4: MOVEMENT STANDARD ALGORITHM We need to devise an algorithm to assess the deviation between the gymnast's movements and the standard (a minimal sketch is given below). This algorithm is very important for the final product's performance feedback. CRITERION FOR SUCCESS The robot can self-navigate to find people in the gym. The robot can monitor the person doing exercise and extract human poses. The robot can check whether the person is performing the exercise correctly. DISTRIBUTION OF WORK Dalei Jiang: Skeletal binding and movement analysis of the human body Zifei Han: Bottom mobile platform programming Chang Liu: Man-machine interactive system building Kunle Li: Movement standard algorithm design |
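As a rough illustration of the movement standard algorithm (Subsystem 4), the sketch below compares joint angles computed from detected keypoints against reference angles for an exercise; the keypoint indices, joint triples, reference angle, and tolerance are illustrative assumptions, not the team's chosen metric.

```python
# Minimal sketch: joint-angle deviation check for the movement standard algorithm.
# Keypoints would come from the skeletal-binding model; indices follow a COCO-style
# layout (hip=11, knee=13, ankle=15) and the tolerance is an illustrative choice.
import numpy as np

def joint_angle(a, b, c) -> float:
    """Angle at joint b (degrees) formed by points a-b-c, e.g. hip-knee-ankle."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def assess_frame(keypoints, reference_angles, joints, tolerance_deg=15.0):
    """Return the per-joint deviation from the reference and an overall pass/fail.
    `joints` maps a joint name to the (a, b, c) keypoint indices defining it."""
    deviations = {}
    for name, (ia, ib, ic) in joints.items():
        angle = joint_angle(keypoints[ia], keypoints[ib], keypoints[ic])
        deviations[name] = angle - reference_angles[name]
    ok = all(abs(d) <= tolerance_deg for d in deviations.values())
    return deviations, ok

# Illustrative usage with one squat frame: the right knee should be near 90 degrees.
keypoints = {11: (0.0, 0.0), 13: (0.0, 1.0), 15: (1.0, 1.0)}   # hip, knee, ankle
joints = {"right_knee": (11, 13, 15)}
print(assess_frame(keypoints, {"right_knee": 90.0}, joints))    # ({'right_knee': 0.0}, True)
```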
||||||
35 | A Direct Digitally Modulated Wireless Communication System |
Bingsheng Hua Dingkun Wang Luyi Shen Qingyang Chen |
Xuyang Bai | design_document1.pdf proposal3.pdf |
Shurun Tan | |
TEAM MEMBERS: Luyi Shen luyis2 Bingsheng Hua bhua5 Dingkun Wang dingkun2 Qingyang Chen qc20 PROJECT NAME: A Direct Digitally Modulated Wireless Communication System PROBLEM: Communication systems are closely related to our daily life. We measure communication systems primarily by their effectiveness and reliability. In fact, effectiveness and reliability are conflicting indicators that require a trade-off. We hope to improve the efficiency of the communication system while guaranteeing the accuracy of communication. SOLUTION OVERVIEW: The project is to design and implement a next-generation communication system that is much simpler than existing systems. The final version of the system is expected to transmit data such as images and videos. Our basic idea is that the information is sent in digital form to the metasurface; EM waves illuminate the metasurface and are scattered into space, and the information we want to transmit is carried on the scattered EM waves. Once the receiver receives the signal, it is decoded back into the original information. Basically, our project is an innovation, or re-creation, of an existing communication system. The biggest difference between our design and other systems is the method used to process the information. A key component of our design is the metasurface, which can adjust the phase, magnitude, polarization, and other significant properties of EM waves, allowing multiple bits to be sent at the same time (a conceptual sketch is given below). As for the feasibility of our project, we think it is an interesting trial and we are confident we can finish it, since plenty of research materials and reports are available for everything we need. Even if the final design is not directly applicable, we believe metasurface materials remain powerful for communication systems. SOLUTION COMPONENTS: Metasurface: it is used to adjust the phase, magnitude, and polarization, along with other significant properties of EM waves. Receiver: it is where the information is received and decoded. FPGA: it is where the information is prepared and sent to the metasurface. Signal emitter: sends the EM wave to the metasurface. CRITERION FOR SUCCESS 1. The system can be used to transmit data such as images and videos. 2. The system should demonstrate a clear improvement in communication efficiency. DISTRIBUTION OF WORK: Dingkun Wang & Qingyang Chen Responsible for the software part of the communication system, including processing the information sent by the computer, receiving and decoding at the receiver, the interface between software and hardware, etc. Bingsheng Hua & Luyi Shen Responsible for the design of the metasurface and the construction of the communication system hardware. |
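As a conceptual illustration of how discrete phase states on the scattered wave can carry several bits per symbol, the sketch below maps and demaps a QPSK-style constellation in plain numpy; it is a baseband simulation only, not a model of the metasurface hardware or the FPGA interface, and the Gray mapping and noise level are assumptions.

```python
# Conceptual baseband sketch: four phase states carry two bits per symbol, which is
# the kind of multi-bit-per-symbol behavior the phase-programmable metasurface is
# meant to provide. Mapping, noise level, and frame length are illustrative only.
import numpy as np

PHASES = {(0, 0): 0.25 * np.pi, (0, 1): 0.75 * np.pi,
          (1, 1): 1.25 * np.pi, (1, 0): 1.75 * np.pi}   # Gray-coded phase states

def modulate(bits):
    """Map bit pairs to unit-amplitude symbols with one of four phases."""
    pairs = zip(bits[0::2], bits[1::2])
    return np.array([np.exp(1j * PHASES[(int(a), int(b))]) for a, b in pairs])

def demodulate(symbols):
    """Pick the nearest phase state for each symbol and return the bit stream."""
    bits = []
    for s in symbols:
        best = min(PHASES, key=lambda p: abs(s - np.exp(1j * PHASES[p])))
        bits.extend(best)
    return bits

rng = np.random.default_rng(0)
tx_bits = [int(b) for b in rng.integers(0, 2, 200)]
symbols = modulate(tx_bits)
noise = 0.1 * (rng.standard_normal(len(symbols)) + 1j * rng.standard_normal(len(symbols)))
rx_bits = demodulate(symbols + noise)
print("bit errors:", sum(a != b for a, b in zip(tx_bits, rx_bits)))   # typically 0
```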
||||||
36 | Microgrids |
Ao Dong Bohao Zhang Kaijie Xu Yuqiu Zhang |
design_document1.pdf proposal1.pdf |
Lin Qiu | ||
TEAM MEMBERS: ●Kaiijie Xu (kaijiex3@illinois.edu), ●Bohao Zhang (bohaoz2@illinois.edu), ●Ao Dong (aodong2@illinois.edu), ●Yuqiu Zhang (yuqiuz2@illinois.edu) Microgrids PROBLEM: In recent years, the power system has faced challenges stemming from increasing load and transmission capacity, as well as high costs, operational difficulties, and weak regulation of large interconnected power grids with centralized generation and long-distance transmission. However, advances in new power electronics technology have led to the proliferation of distributed generation based on renewable sources such as wind, solar, and storage. Distributed power generation offers various advantages, including high energy utilization, low environmental pollution, high power supply flexibility, and low input cost. Developing and utilizing efficient, economical, flexible, and reliable distributed power generation technology presents an effective approach to addressing the energy crisis and environmental issues. The concept of microgrid, which aims to mitigate the impact of large-scale distributed power supply to the grid and leverage the benefits of distributed power generation technology, was introduced. The microgrid represents a promising solution to address the limited carrying capacity of the power system for the extensive penetration of distributed power supply. SOLUTION OVERVIEW: To verify the feasibility of the microgrid, we plan to build a small microgrid on a PCB board. This microgrid contains a power generation part (solar panels), a transmission part (wires), a power consumption part (light bulbs, fans, etc.), an energy storage device (batteries), and a device connected in parallel with the larger grid. The microgrid can perform most of the power system functions independently. It can also be connected to the larger grid and switched from islanding mode to parallel mode. SOLUTION COMPONENTS: Power Generation: The proposed solution involves integrating a solar panel with a printed circuit board (PCB) to serve as a primary power source for the microgrid. The solar panel will primarily generate energy to power the microgrid. In the event that power generation is insufficient to meet the microgrid's energy demands, it will be possible to draw electricity from the larger grid by establishing a connection between the microgrid and the grid. This hybrid power supply arrangement will ensure a stable and reliable power supply for the microgrid, even under variable weather conditions that may affect the solar panel's output. Energy Storage: To enable energy storage for the microgrid, a battery will be integrated into the PCB board. The battery will function as an energy storage device that can capture and store excess energy generated by the solar panel when the power generation exceeds the system load. Conversely, when the power generation is lower than the system load, the battery will discharge stored energy to supplement the power supply. This mechanism will contribute to a more stable and reliable power supply for the microgrid, reducing the potential for power outages or disruptions. Additionally, the battery's capacity and performance characteristics will be optimized to ensure efficient energy storage and discharge, and to prolong the battery's operational lifespan. Load: The intended loads for the microgrid are primarily light bulbs and electric fans, with the possibility of integrating cell phone charging devices at a later stage if budget and technological feasibility permit. 
These loads have been selected based on their low power requirements and the ability to provide immediate benefits to end-users. Additionally, they are expected to be relatively easy to implement, and can serve as a starting point for the development of more complex microgrid applications in the future. Nonetheless, the potential inclusion of cell phone charging devices as part of the microgrid's load profile requires a careful assessment of the system's technical capabilities, as well as a thorough evaluation of the costs and benefits associated with such an expansion. Control system: The control module employed for the microgrid is the DSP28377 chip, which receives analog control signals from other modules. To facilitate the desired control functionality, a C program will be implemented on the DSP28377 chip. This program will enable the control module to execute the necessary control algorithms to regulate the microgrid's power supply in accordance with the received signals. The use of the DSP28377 chip offers numerous advantages, including high performance, low power consumption, and flexible configurability. Moreover, the C programming language is well-suited for embedded systems, and can be used to develop efficient and reliable control programs that are tailored to specific system requirements. Connection with large power grid: The microgrid will be designed to enable seamless switching between islanding mode and parallel mode, with the objective of enhancing the system's flexibility and reliability. To facilitate this transition, a phase-locked loop (PLL) will be employed as the conversion device (a simulation sketch is given below). The PLL will serve to match the frequency and phase of the larger grid within a relatively short timeframe, thereby enabling safe and reliable connection of the microgrid to the grid. This technology offers numerous benefits, including efficient frequency synchronization and robust performance characteristics. Additionally, it offers the flexibility to accommodate varying grid conditions, including changes in frequency and phase, ensuring that the microgrid remains operational and stable under different operating scenarios. CRITERION FOR SUCCESS: The bus voltage is a critical parameter for our microgrid system, as it determines the feasibility of incorporating larger circuit components. To this end, the minimum bus voltage requirement has been established at 50 volts. This voltage level will enable the integration of larger circuit components and facilitate the implementation of more complex microgrid functionalities. Additionally, it will contribute to a more stable and reliable microgrid operation by ensuring that the voltage level remains above the minimum threshold required by the components. The power requirement for the microgrid circuitry is another important consideration, as it directly affects the feasibility of realizing the microgrid's intended functionalities. To meet this requirement, a minimum power threshold of 100 watts has been established. This power level will satisfy the requirements of the majority of the microgrid's components and will facilitate the implementation of the basic microgrid functions. By meeting this threshold, the microgrid will be able to operate effectively and efficiently, providing a reliable source of energy to end-users. The state transition time is a critical performance metric for the microgrid, particularly with respect to its ability to connect to the utility grid. 
To achieve this objective, a state transition time of less than 200 milliseconds has been established as a performance requirement. This time constraint reflects the need for a rapid and reliable state transition process, which will enable the microgrid to connect to the utility grid seamlessly and without disruption. By meeting this requirement, the microgrid will be able to operate in both islanding and parallel modes, providing a stable and reliable power supply to end-users. DISTRIBUTION OF WORK: EE STUDENT 1 Xu Kaijie: ●Develop and implement the PCB design for the microgrid and the power electronics converter. ●Implement the connections between the different subsystems, including maximum power point tracking control and basic battery management. EE STUDENT 2 Zhang Bohao: ●Model the microgrid system and converter in Simulink to verify the feasibility of the system. ●Implement the data-driven maximum power point tracking control. EE STUDENT 3 Dong Ao: ●Implement and test the basic battery management hardware to realize the V2G system. ●Build up the evaluation system to evaluate the efficiency and safety of the entire system. ME STUDENT Zhang Yuqiu: ●Design the mechanical structure to combine several components of our microgrid system, including the power converter and the basic battery management hardware. ●Implement the basic structure and shape of our converter with CAD. |
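As a rough illustration of the grid-synchronization step, the sketch below simulates a very basic single-phase PLL locking onto a 50 Hz grid waveform; the team plans to implement this in C on the DSP28377, so this Python version, its PI gains, and the simple multiplier phase detector (which leaves a double-frequency ripple that practical SRF/SOGI-PLL designs remove) are illustrative assumptions only.

```python
# Minimal PLL sketch: lock an internal phase estimate to the 50 Hz grid before the
# islanding-to-parallel transition. Gains, sample rate, and the simple multiplier
# phase detector are illustrative; the residual double-frequency ripple is expected.
import numpy as np

fs = 10_000.0                         # sample rate [Hz]
dt = 1.0 / fs
kp, ki = 100.0, 2000.0                # PI loop-filter gains (illustrative)
f_grid, grid_phase = 50.0, 1.2        # grid frequency and unknown phase offset

theta_hat, integ = 0.0, 0.0
n_steps = int(0.5 * fs)               # simulate 0.5 s
for k in range(n_steps):
    v_grid = np.sin(2 * np.pi * f_grid * k * dt + grid_phase)
    err = v_grid * np.cos(theta_hat)  # ~0.5*sin(phase error) + double-frequency term
    integ += ki * err * dt
    omega_hat = 2 * np.pi * f_grid + kp * err + integ
    theta_hat += omega_hat * dt

true_phase = 2 * np.pi * f_grid * n_steps * dt + grid_phase
residual = (true_phase - theta_hat + np.pi) % (2 * np.pi) - np.pi
print("residual phase error [rad]:", round(float(residual), 2))   # near zero up to ripple
```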
||||||
37 | Dental Health Monitoring System |
Linjie Tong Xin Wang Yichen Shi Zitai Kong |
Adeel Ahmed | design_document1.docx design_document2.pdf proposal1.pdf |
Zuozhu Liu | |
The project is to build an intelligent dental health monitoring system that can collect dental images, send the images to a cloud server through a mobile app, and perform tooth image segmentation and medical image analysis (2D image to 3D point cloud/mesh reconstruction) with cutting-edge AI algorithms. For the RFA, please refer to the link to the idea post below. | ||||||
38 | MassageMate: Smart Robot Masseur |
Jack Bai Ke Xu Wentao Yao Xiuyuan Zhou |
Yutao Zhuang | design_document1.pdf proposal1.pdf |
Liangjing Yang | |
# Team Members + Ke Xu [kex5] + Hao Bai [haob2] + Wentao Yao [wentaoy4] + Xiuyuan Zhou [xiuyuan5] # PROBLEM + High-intensity work tends to cause fatigue in people's necks and waists, so they need massage to relax their shoulders and necks, but frequent visits to massage parlors are time-consuming and expensive. + With the growing need for massage, the quantity and quality of human masseurs can hardly meet the demand, especially for personalized customization. + Some customers may not want to be touched by an unfamiliar person (i.e., a real human masseur), or they may have privacy concerns about their bodies; a robotic masseur can help in these cases. # SOLUTION OVERVIEW + The proposed solution involves utilizing a high-resolution Automatic Speech Recognition (ASR) module to accurately convert speech to text. The resulting text will be structured and analyzed using Codex, with the Code4Struct methodology proposed by Xingyao W. et al. applied for enhanced understanding. + Task slots will be generated through slot filling, a method commonly used in task-oriented dialog systems (a minimal sketch is given below), allowing for seamless integration with OpenCR, a powerful platform for robotics. OpenCR will be utilized to instantiate the robot tasks, with each task assigned to a Raspberry Pi device. + Finally, the Raspberry Pi device will be leveraged to physically activate the robot, enabling it to move and perform the assigned tasks. This comprehensive solution offers a professional and efficient method for integrating ASR technology and robotics for enhanced performance and automation. # SOLUTION COMPONENTS ## SUBSYSTEM I: MECHANICAL SYSTEM + OpenManipulator robotic arm + Custom robotic hands for the specific needs of massage ## SUBSYSTEM II: CONTROL SYSTEM + Raspberry Pi / control panel that is compatible with the robotic arm + PC with a GPT-3 API key # CRITERION FOR SUCCESS + The robotic arm can perform massage with appropriate strength and at the correct location under the control of the given instructions. + Build a phone app that can monitor the current massage status, including force, frequency, position, etc. + Successfully parse the natural language given by the human into a structured dialog state, and further back into natural language. # DISTRIBUTION OF WORK + Ke Xu: hardware (Raspberry Pi & OpenCR) + Wentao Yao: hardware-software interfaces (Raspberry Pi & OpenCR) + Hao Bai: slot filling, dialog management (API layer, Python strategies) + Xiuyuan Zhou: hardware (robotics) |
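As a rough illustration of the slot-filling step, the sketch below turns a transcribed request into a structured task that the control layer could dispatch; the project plans to use Codex / Code4Struct with slot filling, so this regex-based stand-in, its slot names, and its defaults are assumptions for illustration only.

```python
# Minimal sketch: fill massage-task slots from an ASR transcript. The slot names,
# defaults, and keyword rules are illustrative; the actual project plans an
# LLM-based (Codex / Code4Struct) approach rather than hand-written rules.
import re
from dataclasses import dataclass

@dataclass
class MassageTask:
    location: str = "back"        # body area to massage
    strength: str = "medium"      # light / medium / strong
    duration_min: int = 10        # session length in minutes

def fill_slots(utterance: str) -> MassageTask:
    task = MassageTask()
    text = utterance.lower()
    for loc in ("neck", "shoulder", "waist", "back"):
        if loc in text:
            task.location = loc
            break
    for word in ("light", "gentle", "medium", "strong", "hard"):
        if word in text:
            task.strength = {"gentle": "light", "hard": "strong"}.get(word, word)
            break
    match = re.search(r"(\d+)\s*min", text)
    if match:
        task.duration_min = int(match.group(1))
    return task

# e.g. ASR output -> structured task handed to the OpenCR / Raspberry Pi layer
print(fill_slots("please give my neck a gentle massage for 15 minutes"))
# MassageTask(location='neck', strength='light', duration_min=15)
```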