# Hall of Fame
# Multipurpose Temperature Controlled Chamber (for Consumer Applications)
Isaac Brorson, Stefan Sokolowski, Mitchell Stermer

**Award:** Best Overall Project ($1200)
## Team Members
- Stefan Sokolowski (stefans2)
- Mitchell Stermer (stermer2)
- Isaac Brorson (brorson2)

## Problem
Have you ever put a drink in the freezer to cool it down faster, only to forget about it and later find it frozen and burst? Have you wanted to cook a steak, but forgotten to move it from the freezer to the refrigerator the previous day? Or have you set food out overnight to prepare it for the next day, only to find that it didn't thaw as expected? We have done all of these things and more, and have always wished for a smart device that could quickly cool or warm food without freezing or cooking it.

## Solution
Our project is a programmable temperature-controlled chamber that lets a user set the temperature curve of a food item they plan to consume in the near future. The device can quickly heat or cool food to a desired temperature, then hold it there until the user is ready to use the food. To use the device, the user places a food item in its insulated chamber and closes the door. The user interface then presents several options: standard heating or cooling presets for common food items, temperature set-and-hold, or a detailed user-defined temperature curve. If you want to cool a drink to just above freezing, you select the corresponding menu option; the device lowers its chamber temperature to well below freezing, then slowly raises it to ensure the drink doesn't freeze. If you select the menu option to thaw a steak, the device raises the chamber temperature to just below the point at which meat begins to cook (roughly 105 degrees F), then slowly lowers it toward room temperature. The device could also be used for applications outside of cuisine.
Say you’re running an experiment to test the capacity of a battery at different temperatures. You could set a temperature curve that visits several temperatures and holds each one while your battery capacity tester runs its tests, automating an experiment that would otherwise require intermittent attention over several hours.

There are temperature-controlled chambers on the market, but they are exorbitantly expensive and too large for a household kitchen. We want to make a device that can sit on a countertop and is affordable to anyone with the budget for other standard kitchen appliances.

![pic](https://i.imgur.com/HJiCQsN.png)

## Power
We plan to use a dual-output DC power supply such as the RD-125B[1] to power both our digital electronics and the high-power heating and cooling elements. This power supply plugs directly into a 120V outlet and produces 5V and 24V DC outputs. According to its datasheet[1], the RD-125B's 24V output is rated to supply 4.6A, which equates to just over 110W. Based on our research of thermoelectric coolers and heating elements, this should be plenty of power for our application. The RD-125B's 5V output is rated to supply far more power than our 5V electronics could possibly draw.

## Mechanical Design
To reach temperatures below freezing with thermoelectric coolers, we'll need to thermally insulate the chamber very well. Since this insulation must also withstand the heat produced by the heating elements, we landed on Kaowool, a ceramic wool that insulates very well while being rated to over 1000℃[2]. Since our device is intended for food applications, the chamber must be waterproof and food safe, so we plan to purchase an off-the-shelf cooking pot such as this one[3].
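As a rough sanity check on the power budget above, the datasheet rating can be compared against load estimates. The heater and TEC wattages below are placeholder estimates for illustration, not measured figures:

```python
# Rough check of the 24V rail budget: datasheet rating vs. estimated loads.
RAIL_V = 24.0       # RD-125B 24V output voltage
RAIL_A = 4.6        # rated output current from the datasheet [1]
available_w = RAIL_V * RAIL_A          # ~110.4 W

# Placeholder load estimates, to be refined once parts are chosen:
heater_w = 60.0     # nichrome heating element (estimated)
tec_w = 40.0        # thermoelectric coolers + fans (estimated)

# The TECs may run (reversed) while the heater is on, so budget for both at once.
worst_case_w = heater_w + tec_w
headroom_w = available_w - worst_case_w
print(f"available {available_w:.1f} W, worst case {worst_case_w:.1f} W, "
      f"headroom {headroom_w:.1f} W")
```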
By fitting a smaller pot inside a slightly larger one, we can create an affordable and convenient way to insulate our chamber: we fill the gap between the pots with Kaowool and line the larger pot's lid with Kaowool to seal the top.

To heat the chamber, we plan to wrap a resistive heating element (such as nichrome wire) around the inner chamber. Since the inner chamber is an electrically conductive pot, we'll need to insulate the heating element from it to prevent shorting. This can be done with Kapton tape, which withstands temperatures from -269℃ to 400℃[4].

To cool the chamber, we plan to use thermoelectric cooling modules. These require a good thermal pathway to work well, so we'll need a material with high thermal conductivity to mount them to the chamber wall. We plan to ask the machine shop to machine aluminum mounts that adapt the curved outer surface of the inner pot to the flat faces of the thermoelectric cooling elements, and we'll use thermal grease to reduce the thermal resistance of the junctions. The thermoelectric coolers will require rectangular holes cut through the wall of the outer pot so they can pump heat out of the device.

We plan to mount our circuit board and user interface electronics in an E-box attached to the side of the outer pot, using standoff rods to keep the electronics from being heated or cooled too much by proximity to the chamber, though we expect our thermal insulation will be good enough for that not to be a concern.

## Heating Subsystem
As mentioned in the mechanical design, we plan to use a resistive heating element to heat the chamber. It will be powered by the power supply's higher-voltage DC rail (24V on the RD-125B), with a solid-state switch controlling the current through the element.
This lets us control its power with PWM, which is essential for keeping the chamber temperature below a prescribed level. The simplest and most cost-effective switching device would be an N-channel power MOSFET such as the Taiwan Semiconductor TSM170N06CH[5].

## Cooling Subsystem
We plan to use thermoelectric (Peltier) coolers to provide the cooling. These work as heat pumps, so we'll need heat sinks and cooling fans to dissipate the heat they produce. The thermoelectric coolers and fans will run off the same higher-voltage DC rail that powers the heating element. We also want the option to run the thermoelectric coolers in reverse while the chamber is heating, to prevent their heat sinks from cooling the chamber. To do this we'll power the thermoelectric coolers through an H-bridge so we can reverse their polarity. The H-bridge can be composed of two N-channel MOSFETs such as the one mentioned above[5] and two P-channel MOSFETs such as the Rectron Semiconductor RM15P55LD[6]. The H-bridge will be controlled by the STM32 microcontroller, letting us vary the power supplied to the thermoelectric coolers with PWM. We may or may not need gate drivers for the H-bridge: gate drivers are necessary for fast switching, but our application doesn't require high-frequency PWM.

## Temperature Measurement Subsystems
To be as precise as possible, we want distinct temperature sensors for the air in the chamber and for the item being warmed or cooled. Measuring the temperature of the food is made difficult by the insulating packaging of many food items (glass bottles, styrofoam containers, etc.). Since we want our device to work for as wide a range of food items as possible, we plan to let the user select from multiple interchangeable food temperature probes.
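A minimal sketch of the control logic tying the heating and cooling subsystems together is below. The gains, deadband, and return convention are hypothetical placeholders; the real firmware would run this loop on the STM32 and translate the duties into timer PWM and H-bridge direction pins:

```python
# Decide heater PWM duty and H-bridge direction from chamber temperature.
# Simple proportional control with a deadband; kp and deadband_c are placeholders.

def control_step(setpoint_c, chamber_c, kp=0.08, deadband_c=0.5):
    """Return (heater_duty, tec_duty, tec_forward), duties in [0, 1]."""
    error = setpoint_c - chamber_c
    if abs(error) <= deadband_c:          # close enough: hold
        return 0.0, 0.0, True
    if error > 0:                         # too cold: heat, and run the TECs
        duty = min(1.0, kp * error)       # reversed so their heat sinks don't
        return duty, 0.2, False           # fight the heater
    duty = min(1.0, kp * -error)          # too warm: cool
    return 0.0, duty, True

# Example: chamber at 25 C, target -2 C for rapid drink cooling.
heater, tec, forward = control_step(-2.0, 25.0)
print(heater, tec, forward)
```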
Temperature-sensing probes could include a meat thermometer, a flat metallic probe that could be placed on frozen meat, or a ring-shaped thermometer that could go around a bottle or can. Temperature sensing (thermocouple/thermopile) may require some basic analog electronics, such as an op-amp to amplify the small voltage produced by a thermocouple.

## User Interface Subsystem
We plan to use an STM32 microcontroller; for our purposes an STM32F103C8T6 would probably suffice in I/O and processing power, but a more capable F4 might be considered if we add more sensors. The microcontroller and user interface will require logic-level DC. We would most likely use an I2C LCD display, plus a bright external RGB LED so the user can see the machine's state from a distance. A push-button rotary encoder will let the user interact with the device, along with an ON/OFF switch and a "cancel" button. User feedback should be fairly simple; if time allows, we might connect the device to an external service to notify users of the status of their heating/cooling cycle. The user interface will have multiple interactive menus: one to select the device's behavior mode, one to set temperature and time values, one to show a temperature curve, and one to be displayed while the device is operating.

## Challenges & Considerations
- Everything inside the chamber will need to withstand the full temperature range.
- The electronics will need to be very well thermally insulated from the chamber if we want to use it as an oven.
- Since thermopiles operate off a temperature gradient, they require a stable case temperature; this means we'll need to keep the thermocouple's reference junction in a temperature-controlled environment.
- The chamber should ideally be watertight in case of a spill or leak.
- When making the mechanical design, we'll need to keep in mind that different materials expand and contract at different rates when heated or cooled.

## Criterion for Success
- The inside of the chamber should be able to reach 0 degrees Celsius at the low end and 40 degrees Celsius at the high end.
- Hold temperature to within ±5 degrees Celsius of the target.
- The user can set the target temperature, heating/cooling curve, and max/min temperature allowances through a GUI on an LCD display.
- Display the current temperature, and possibly a plot of temperature vs. time.
- Select the device's behavior from a provided menu of presets for different foods.
- (Stretch goal) Multiple methods to measure food temperature in addition to the ambient temperature (a stainless steel probe for the internal temperature of meats, a thermocouple for bottles and containers).

[1] Power supply: https://www.mouser.com/datasheet/2/260/RD_125_SPEC-1511572.pdf
[2] Kaowool: https://www.morganthermalceramics.com/media/llhhadih/5-14-205_kaowoolblankets_072018.pdf
[3] Aluminum pot: https://www.amazon.com/Winco-Winware-Aluminum-Stockpot-12-Quart/dp/B001CHMIQ4/ref=sr_1_10?crid=1VECOQHCN2UC2&keywords=aluminum%2Bpot&qid=1706684643&sprefix=aluminum%2Bpot%2Caps%2C93&sr=8-10&th=1
[4] Kapton tape: https://www.dupont.com/electronics-industrial/kapton-hn.html#:~:text=Kapton%C2%AE%20HN%20has%20been,C%20(752%C2%B0F).
[5] N-channel MOSFET: https://services.ts.com.tw/storage/resources/datasheet/TSM170N06CH_A2211.pdf
[6] P-channel MOSFET: https://www.mouser.com/datasheet/2/345/rm15p55ld-1396325.pdf
# Smart Glasses for the Blind
Siraj Khogeer, Abdul Maaieh, Ahmed Nahas

**Award:** ECE 445 Instructor's Award ($800)
## Team Members
- Ahmed Nahas (anahas2)
- Siraj Khogeer (khogeer2)
- Abdulrahman Maaieh (amaaieh2)

## Problem
The underlying motive behind this project is the heart-wrenching fact that, despite all the developments in science and technology, the visually impaired have been left with little more than a simple white cane: a stick among today's scientific novelties. Our overarching goal is to create a wearable assistive device for the visually impaired that gives them an alternative way of "seeing" through sound. The idea revolves around glasses/headset that allow the user to walk independently by detecting obstacles and notifying the user, creating a sense of vision through spatial awareness.

## Solution
Our objective is to create smart glasses/headset that allow the visually impaired to "see" through sound. The general idea is to map the user's surroundings through depth maps and a normal camera, then map both to audio that allows the user to perceive their surroundings. We'll use two low-power I2C ToF imagers to build a depth map of the user's surroundings, as well as an SPI camera for ML features such as object recognition. These cameras/imagers will be connected to our ESP32-S3 WROOM, which downsamples some of the input and offloads it to our phone app/webpage for heavier processing (object recognition, as well as the depth-map-to-sound algorithm, which will be quite complex and builds on research papers we've found).

---

## Subsystem 1: Microcontroller Unit
We will use an ESP32 as the MCU, mainly for its WiFi capabilities as well as its processing power, which is sufficient for our needs.
- ESP32-S3 WROOM: https://www.digikey.com/en/products/detail/espressif-systems/ESP32-S3-WROOM-1-N8/15200089

## Subsystem 2: ToF Depth Imagers/Cameras
This is the main sensor subsystem for gathering depth-map data. This data will be transformed into audio signals to allow a visually impaired person to perceive obstacles around them.
There will be two ToF sensors, connected to the ESP32 MCU through two I2C connections, to provide a wide combined FOV. Each sensor provides an 8x8 pixel array with a 63-degree FOV.
- 2x SparkFun Qwiic Mini ToF Imager - VL53L5CX: https://www.sparkfun.com/products/19013

## Subsystem 3: SPI Camera
This subsystem captures a color image of the user's surroundings, allowing us to implement egocentric computer vision, processed on the app. We will implement one ML feature as a baseline for this project (one of: scene description, object recognition, etc.). This feedback is only given when prompted by a button on the PCB: when the user clicks the button on the glasses/headset, they hear a description of their surroundings. Hence we don't need real-time object recognition (as low as 1 fps is enough), as opposed to the depth maps, which need a higher frame rate and lower latency. This is exciting because having such an input allows for other ML features/integrations that can scale drastically beyond this course.
- 1x Mega 3MP SPI Camera Module: https://www.arducam.com/product/presale-mega-3mp-color-rolling-shutter-camera-module-with-solid-camera-case-for-any-microcontroller/

## Subsystem 4: Stereo Audio Circuit
This subsystem converts the digital audio from the ESP32 and app into stereo output for earphones or speakers. This includes digital-to-analog conversion and voltage clamping/regulation. We may also add an adjustable volume option through a potentiometer.
- DAC circuit
- 2x op-amps for stereo output, TLC27L1ACP: https://www.ti.com/product/TLC27L1A/part-details/TLC27L1ACP
- SJ1-3554NG (AUX) connection to speakers/earphones: https://www.digikey.com/en/products/detail/cui-devices/SJ1-3554NG/738709
- Bone conduction transducer (optional, to be tested): would allow a bone-conduction audio output, easily integrated around the ear in place of earphones; to be tested for effectiveness and replaced with earphones otherwise. https://www.adafruit.com/product/1674

## Subsystem 5: App
- React Native app/webpage that connects directly to the ESP
- Does the heavy processing for the spatial-awareness algorithm as well as the object recognition or scene description algorithms (using libraries such as YOLO, OpenCV, TFLite)
- Sends audio output back to the ESP to be played through the stereo audio circuit

## Subsystem 6: Battery and Power Management
This subsystem is in charge of power delivery, voltage regulation, and battery management for the rest of the circuit and devices. It takes the unregulated battery voltage and steps it up or down according to each component's needs.
- Main power supply: lithium-ion battery pack
- Voltage regulators: linear, buck, and boost regulators for the MCU, sensors, and DAC
- Enclosure and routing: plastic enclosure for the battery pack

---

## Criterion for Success
**Obstacle Detection:**
- Identify the difference between an obstacle 1 meter away and an obstacle 3 meters away.
- Differentiate between obstacles on the right vs. the left side of the user.
- Perceive an object moving from left to right or right to left in front of the user.

**MCU:**
- Offload data from the sensor subsystems to the application over a WiFi connection.
- Control and receive data from the sensors (ToF imagers and SPI camera) using SPI and I2C.
- Receive audio from the application and pass it to the DAC for stereo out.
**App/Webpage:**
- Successfully connects to the ESP through WiFi or BLE.
- Processes data (ML and depth-map algorithms): processes images with ML for object recognition, and transforms the depth map into spatial audio.
- Sends audio back to the ESP for output.

**Audio:**
- Working stereo output on the PCB for use with wired earphones or built-in speakers.
- Bluetooth audio working in the app if a user wants wireless audio.
- Potentially add hardware volume control.

**Power:**
- Operate the device on battery power, with safe voltage levels and regulation (5.5V max).
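The proposal's core idea, turning a ToF depth frame into stereo audio cues, could be prototyped along these lines. This is a deliberately simplified sketch under assumed conventions (nearer obstacles get louder, and the column of the nearest obstacle sets the left/right balance); the real algorithm would follow the research papers the team cites:

```python
def depth_to_stereo_cue(depth_mm, max_range_mm=4000):
    """Map an 8x8 ToF depth frame to (left_gain, right_gain) in [0, 1].

    depth_mm: 8x8 list of lists of distances in millimeters.
    Nearer obstacles produce louder output; the nearest obstacle's
    horizontal position pans the sound left or right.
    """
    nearest = max_range_mm
    nearest_col = 3.5                        # center by default
    for row in depth_mm:
        for col, d in enumerate(row):
            if 0 < d < nearest:
                nearest, nearest_col = d, col
    loudness = 1.0 - nearest / max_range_mm  # 0 = far, 1 = touching
    pan = nearest_col / 7.0                  # 0 = far left, 1 = far right
    return (1.0 - pan) * loudness, pan * loudness

# Obstacle at 1 m, slightly left of center, in an otherwise empty 4 m scene:
frame = [[4000] * 8 for _ in range(8)]
frame[4][3] = 1000
left, right = depth_to_stereo_cue(frame)
```

The left/right gains would then scale the two DAC channels (or, on the app side, a rendered audio tone).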
# Automatic Cake Decorator
Rui Gong, Muye Yuan, James Zhu

**Award:** Honorable Mention
## Team Members
- Muye Yuan (muyey2)
- Rui Gong (ruigong5)
- James Zhu (tianyi9)

## Problem
The current challenge lies in the manual application of cream to cakes, prompting the need for an automated solution. Traditional methods often produce variations in cream thickness, coverage, and overall quality due to the nature of manual application. This demands skilled workers, increases production costs, and leaves room for human error; moreover, labor costs can be a significant factor in overall production costs.

## Solution
We decided to make an automatic cake decorator, which pipes cream shapes and curves around the edge of the top surface of a cake. By automating this process, we aim to eliminate the inconsistencies of manual application, improve the overall quality of decorated cakes, and reduce production costs. Ultimately, this device can offer a more efficient and cost-effective solution for the baking industry. The decorator moves along the edge of the cake as detected by the camera. The commanded movement is divided into x and y components, which drive the stepper motors to the appropriate positions. This system differs from existing food-printer solutions, which only print pixelated images on food: it leaves a vectorized, continuous trail of cream, so it requires a more dedicated CV algorithm to recognize the shape of the cake.

## Subsystem 1: Computer Vision and Detector
1x 1080p USB camera, laptop

A frame holds the camera above the decorator, looking down at the cake inside it. The camera connects to a laptop running our recognition program, which detects the edge of the cake with a CV algorithm. It can identify the cake even with other objects (like the machine itself) in view, and fits the edge to a set of waypoints for the cream extruder to follow. The program presents a preview for the user to confirm.
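Once the CV step (e.g. OpenCV contour detection) yields an ordered list of edge points, fitting them to evenly spaced waypoints might look like the following sketch (the edge-detection step itself is omitted here):

```python
import math

def resample_waypoints(edge_points, n):
    """Resample an ordered closed contour into n waypoints evenly spaced by arc length."""
    # Close the loop and compute the cumulative arc length along the contour.
    pts = list(edge_points) + [edge_points[0]]
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    # Walk the contour, emitting one point every total/n of arc length.
    waypoints, seg = [], 1
    for i in range(n):
        target = total * i / n
        while cum[seg] < target:
            seg += 1
        t = (target - cum[seg - 1]) / (cum[seg] - cum[seg - 1])
        (x0, y0), (x1, y1) = pts[seg - 1], pts[seg]
        waypoints.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return waypoints

# A square "cake edge" resampled into 8 waypoints:
square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(resample_waypoints(square, 8))
```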
The laptop connects to the MCU PCB over USB. Once a key is pressed, it sends the waypoints to the MCU and signals it to start moving the mechanical parts.

## Subsystem 2: MCU and PCB
1x ATmega328P MCU, 1x self-designed PCB with the MCU and the motor-driving circuit

Input: USB connection from the laptop. Output: control signals to the stepper motors driving the extruder and the cream syringe. Once a set of waypoints is received, the trajectory following the waypoints is decomposed into its projections on the x and y axes, and x and y position as functions of time are calculated (these calculations might be done on the laptop instead). The program on the MCU then drives the two sliding-rail motors, as well as the motor pushing the syringe.

## Subsystem 3: Mechanical Structure
3x 42-40 stepper motors, a cupcake-injector cake decorating tool, a rectangular frame, 2x linear rail guides, and a height-adjustable base for the cake

The structure of the machine resembles that of a cartesian robot or a 3D printer: two perpendicular sliding rails (powered by motors) connected to each other, able to move the tip to arbitrary x-y positions. A large syringe of cream is mounted at the tip, extruding cream uniformly when pushed by a motor.

## Q&A
### 1. Decide whether to implement a 2D or 3D movement system.
We want to implement the 3D movement system, but we don't yet know how complex it is. If the 3D system proves too complicated to implement, we will fall back to a 2D movement system.

### 2. Clarify the mechanisms you plan to use for x, y, and z movements. Will they be similar to those in a 3D printer, and how will you ensure smooth movements when working with a medium like cream?
Yes, it is similar to a 3D printer with two perpendicular sliding rails. We plan to attach a rubber hose to the syringe, with the end effector of the mechanism holding the other end of the hose, keeping the relatively heavy syringe static.
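The waypoint-to-motion conversion described in Subsystem 2 can be sketched as follows; the steps-per-millimeter and feed rate are placeholder values, since the real figures depend on the rails, pulleys, and microstepping configuration:

```python
import math

STEPS_PER_MM = 80        # placeholder: depends on pulley pitch and microstepping
FEED_MM_S = 20.0         # placeholder: constant tip speed in mm/s

def plan_segments(waypoints):
    """Convert waypoints (in mm) into per-segment x/y step counts and durations."""
    plan = []
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        plan.append({
            "steps_x": round((x1 - x0) * STEPS_PER_MM),   # x-axis motor
            "steps_y": round((y1 - y0) * STEPS_PER_MM),   # y-axis motor
            "seconds": dist / FEED_MM_S,                  # segment duration
        })
    return plan

segments = plan_segments([(0, 0), (30, 0), (30, 40)])
print(segments)
```

The MCU (or laptop) would interleave the x and y step pulses within each segment's duration so both axes arrive at the waypoint together.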
### 3. Determine the dimensions of the machine (syringe size, etc.). Are you considering a vertical actuator to push the cream out of the syringe? Detail all the electrical components required for this idea.
We want to start small, so the amount of cream will not be large. For example, we can start with a cupcake-injector decorating tool and a stepper motor pushing it to extrude the cream.

### 4. The incorporation of a camera for position detection adds complexity. How do you plan to convert the camera inputs into xyz positions? The code required to convert camera output into g-code (x, y, z) is critical.
The z position is fixed for a given cake: we first ask the user to set the cake's height manually so that its top surface is near the extruder. Later we might add an ultrasonic sensor and an automatically adjustable base. For the x and y coordinates, we might first mount the camera high enough that we can assume a planar projection from pixel coordinates to physical ones. We would fix the relative position of the machine and camera and calibrate the pixel-to-physical mapping manually. Later we could add marks on the edges of the machine so the camera can determine the linear transform automatically, without recalibrating every time. If the error from assuming a planar projection turns out to be too large, we can still determine the camera intrinsics and unproject with the full formulas.

## Criterion for Success
- The CV system recognizes the edge of the target successfully.
- The moving system successfully follows the input instructions.
- Cream is piped in a curve around the edge of the top surface of the cake.
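The manual pixel-to-physical calibration described in Q&A 4 reduces to a scale and offset per axis under the planar, axis-aligned assumption. A sketch with hypothetical mark positions:

```python
def make_pixel_to_mm(p0_px, p0_mm, p1_px, p1_mm):
    """Build a pixel->mm mapping from two reference marks.

    Assumes an axis-aligned planar projection (camera looking straight down,
    no rotation), so each axis needs only a scale and an offset. A rotated or
    tilted camera would need the full homography/unprojection instead.
    """
    sx = (p1_mm[0] - p0_mm[0]) / (p1_px[0] - p0_px[0])
    sy = (p1_mm[1] - p0_mm[1]) / (p1_px[1] - p0_px[1])
    def to_mm(px):
        return (p0_mm[0] + (px[0] - p0_px[0]) * sx,
                p0_mm[1] + (px[1] - p0_px[1]) * sy)
    return to_mm

# Hypothetical marks on the machine frame: pixel (100, 100) is the machine
# origin, and pixel (900, 700) sits 200 mm right and 150 mm forward of it.
to_mm = make_pixel_to_mm((100, 100), (0, 0), (900, 700), (200, 150))
print(to_mm((500, 400)))   # a detected edge pixel mapped into machine mm
```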
# Chess Playing Robot with Computer Vision
Zack Alonzo, Jose Flores, Joshua Hur

**Award:** Honorable Mention
## Team Members
jhur22, joseaf3, zalonzo2

## Problem
Our project's goal is to address the need for a tangible, interactive chess-playing device, enabling users to play against a chess AI in the physical world rather than relying on digital platforms. Designed for both beginners and advanced players, the chess-playing robot provides an engaging alternative to mobile apps, allowing for skill development and strategic thinking in a hands-on manner.

## Solution
We plan to develop an autonomous chess-playing robot that eliminates the need for a human opponent by incorporating our own chess algorithm with varying difficulty levels. Using a magnet and motors beneath the board, the computer opponent's chess pieces move autonomously, while the human player simply picks up and places their pieces. The robot then analyzes the current board position by capturing an image through a camera and identifies each piece on the board by its color. With this updated board state, it determines the optimal move based on the chosen difficulty level and current position. Our code then sends the necessary commands to the magnet-and-motor system beneath the board to move the intended piece and waits for the human player's next move (a button press will "submit" the player's move).

## Solution Components
The project contains three major subsystems:
- Magnetic chess board
- Computer-vision-based chess board visualizer
- AI chess algorithm

## Subsystem 1: Magnetic Chess Board
A version of this board already exists in the machine shop from a previous student project, so this part is mostly complete.
However, our goal is still to improve the design of the board, as the current board has issues with the main magnet's consistency in grabbing chess pieces. The chess board has three motors: two for one axis (AXIS1) and one for the other (AXIS2). Having two motors on AXIS1 prevents AXIS2 from tilting and becoming offset. Connected to AXIS2 is a magnet responsible for moving pieces on the computer's side of the board. When the computer executes a ply, code running on the microcontroller moves the magnet to the piece's starting position, drives a voltage high to make the magnet grab the piece, navigates the piece across the board to the desired end location, drives the voltage low, and finishes its ply.

Because the pieces slide flush with the board, the pieces or the board need to be modified: in chess, knights can move over other pieces, so to avoid collisions we plan to center all chess pieces in their respective tiles and guide moving pieces along the lines or borders of the board. To complete the solution, we considered two ideas:
- Method 1: Enlarge the board to grant the pieces more clearance when moving.
- Method 2: Reduce the size of the pieces to give them more space when moving.

The method we choose will depend on where we can store the board: we want it to be large, but not so big that we can't easily move it between the machine shop and the lab room.

Parts:
- Motor: Mercury Motor SM-42BYG011-25, 2-phase, 1.8°, 32/20 (x3) (already have)
- Large magnet (already have)
- Chess board with plastic sheet covering
- ESP32-S3 microcontroller (available from the ECE supplies instead of using our budget)

## Subsystem 2: Chess Board Visualizer with Computer Vision
This will be the main challenge of the project.
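Before detailing the visualizer, the border-guided routing idea from Subsystem 1 can be sketched. Coordinates here are in half-tile units so that tile centers and tile borders both land on integer points; this is an illustration of the idea, not the board's actual firmware:

```python
def border_route(start, end):
    """Plan magnet waypoints from tile `start` to tile `end` (file, rank in 0-7).

    In half-tile units, tile (f, r) has its center at (2f+1, 2r+1) and even
    coordinates lie on tile borders. The piece first slides to a corner,
    travels along borders (where no centered piece can sit), then enters
    its destination tile, so knights can pass occupied squares.
    """
    sx, sy = 2 * start[0] + 1, 2 * start[1] + 1
    ex, ey = 2 * end[0] + 1, 2 * end[1] + 1
    route = [(sx, sy), (sx + 1, sy + 1)]   # step onto the nearest corner
    route.append((ex + 1, sy + 1))         # travel along a horizontal border
    route.append((ex + 1, ey + 1))         # then along a vertical border
    route.append((ex, ey))                 # enter the destination tile
    return route

# Knight move b1 -> c3, i.e. tile (1, 0) to tile (2, 2):
print(border_route((1, 0), (2, 2)))
```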
First, we require an Arduino camera mounted above the chess board, giving us a top-down view of all the pieces. This camera uses a MIPI interface, allowing us to connect it to the CSI port of a Raspberry Pi, which runs all of our computer vision code (the Raspberry Pi will be mounted on our PCB as a Pi HAT). Next, each of the 32 magnetic chess pieces will be color coded. With six piece types, we will use the three primary colors (red, blue, and yellow) along with the three secondary colors between them (purple, green, and orange). To differentiate the two sides, the human player's pieces will use darker shades of these colors and the robot's pieces lighter shades.

Parts:
- Colored chess pieces with magnetic bottoms (x32): (will 3D print our own)
- Neodymium magnets (x32): https://www.amazon.com/dp/B0BVYFSDNS/ref=twister_B0C6X3LNB9?_encoding=UTF8&psc=1 [$13]
- Raspberry Pi: (SC0685 Raspberry Pi | Embedded Computers | DigiKey) [$60]
- MIPI camera: (SC0194(9) Raspberry Pi | Embedded Computers | DigiKey) [$55]

## Subsystem 3: AI Chess Algorithm
The artificial intelligence agent calculates moves of varying proficiency based on Subsystem 2's computer vision output. The agent's logic is built on Python's chess library, which calculates effective moves, checks their legality, and judges a game's outcome (a win, a loss, or a stalemate). To set the state of the chess board (e.g. piece positions), Python's chess library parses a string describing the board. The syntax of the string is Forsyth-Edwards Notation (FEN), which denotes the following:
- Piece locations
- Active color's ply
- Castling availability
- En passant possibilities
- Halfmove clock
- Fullmove number

An example piece-placement field in FEN is "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR".
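Generating the FEN piece-placement field from the visualizer's 8x8 grid is straightforward. A sketch, using `None` for empty squares and standard one-letter piece codes (which side gets uppercase is an arbitrary choice here):

```python
def grid_to_fen_placement(grid):
    """Serialize an 8x8 grid (rank 8 first) into the FEN piece-placement field.

    grid[r][f] is a one-letter piece code ('P', 'n', 'K', ...) or None,
    as produced by the color-based piece identification.
    """
    ranks = []
    for row in grid:
        field, empties = "", 0
        for square in row:
            if square is None:
                empties += 1            # FEN compresses runs of empty squares
            else:
                if empties:
                    field += str(empties)
                    empties = 0
                field += square
        if empties:
            field += str(empties)
        ranks.append(field)
    return "/".join(ranks)

# The standard starting position:
start = ([list("rnbqkbnr"), list("pppppppp")] + [[None] * 8 for _ in range(4)]
         + [list("PPPPPPPP"), list("RNBQKBNR")])
fen = grid_to_fen_placement(start)
print(fen)  # rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR
```

With the remaining FEN fields appended (side to move, castling rights, etc.), the string can be handed to the chess library's `Board` to drive move selection and legality checks.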
More details on parsing and other information can be found here: https://python-chess.readthedocs.io/en/latest/core.html

## Criteria for Success
- The computer vision algorithm correctly identifies piece positions on the board with high accuracy.
- The internal representation of the board is successfully updated.
- The magnet correctly grabs the intended piece without making it bump into others.
- The robot successfully detects when the human player cheats or performs an illegal move.
- The chess board moves pieces to the intended positions with high accuracy.

## Proposal for Expansion
A really fun expansion we'd like to pursue is making this a more universal game-playing robot rather than just a chess-playing robot, adding games like Checkers, Go, Sorry, etc. Once the base chess game works with the magnetic arm underneath and the CV, all we would have to do is 3D print more pieces, make a new sheet to put on top of the board, use other libraries for the other games' rules, and interface them with the magnetic arm's movement for each specific game.
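The color-coding scheme in Subsystem 2 implies a simple classifier: match each detected piece's average RGB value against a table of reference shades. A sketch with hypothetical reference colors; real values would be calibrated under the actual lighting:

```python
# Hypothetical calibrated reference shades; "human" = darker, "robot" = lighter.
REFERENCE = {
    ("rook", "human"): (120, 0, 0),        # dark red
    ("rook", "robot"): (255, 120, 120),    # light red
    ("knight", "human"): (0, 0, 120),      # dark blue
    ("knight", "robot"): (120, 120, 255),  # light blue
    # ... the remaining ten piece/side combinations
}

def classify_piece(rgb):
    """Return the (piece, side) whose reference shade is nearest in RGB space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REFERENCE, key=lambda k: dist2(REFERENCE[k], rgb))

print(classify_piece((200, 100, 100)))   # a washed-out red reads as a robot rook
```

Nearest-neighbor matching in plain RGB is sensitive to lighting; converting to HSV or a perceptual color space before matching would likely be more robust.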
# Monitor for Dough and Sourdough Starter
Jake Hayes, Abhitya Krishnaraj, Alec Thompson

**Award:** Honorable Mention
## Team Members
- Jake Hayes (jhayes)
- Abhitya Krishnaraj (abhitya2)
- Alec Thompson (alect3)

## Problem
Making bread at home, especially sourdough, has become very popular because it is an affordable way to get fresh-baked bread that's free of preservatives and other ingredients many people are not comfortable with. Sourdough also has other health benefits, such as a lower glycemic index and greater bioavailability of nutrients. However, the bulk fermentation process (letting the dough rise) can be tricky and requires a lot of attention, which leads many people to give up on making sourdough. Ideally the dough should be kept at around 80 degrees F, which is warmer than most people keep their homes, so many people look for a warm spot such as an oven with the light on; but it's hard to know whether the dough is being kept at a good temperature. Other steps must be taken once the dough has risen enough, but rise time varies greatly, so you can't just set a timer; and if you wait too long, the dough can start to shrink again. When activating dehydrated sourdough starter, this rise and fall is normal and must happen several times, and its peak volume is what tells you when it's ready to use.

## Solution
Our solution is a device with a distance sensor (probably ultrasonic) and a temperature sensor that attaches to the underside of most types of lids, probably with magnets. The sensors are controlled by a microcontroller, and a display (probably LCD) shows the minimum, current, and maximum heights of the dough along with the temperature. This way the user can see at a glance how much the dough has risen, whether it has already peaked and started to shrink, and whether the temperature is acceptable. There is no need to remove the container from its warm place and uncover it, introducing cold air; and there is no need to puncture the dough to measure its height or use some other awkward method.
The device would require a PCB, microcontroller, sensors, display, and maybe some type of wireless communication. Other features could be added, such as an audible alarm or a graph of dough height and/or temperature over time. # Solution Components ## Height and Temperature Sensors Sensors would be placed on the part of the device that attaches to the underside of a lid. A temperature sensor would measure the ambient temperature near the dough to ensure the dough is kept at an acceptable temperature. A proximity sensor or sensors would first measure the height of the container, then begin measuring the height of the dough periodically. If we can achieve acceptable accuracy with one distance sensor, that would be ideal; otherwise we could use 2-4 sensors. Possible temperature sensor: [Texas Instruments LM61BIZ/LFT3](https://www.digikey.com/en/products/detail/texas-instruments/LM61BIZ%252FLFT3/12324753) Proximity sensors could be ultrasonic, infrared LED, or VCSEL.\ Ultrasonic: [Adafruit ULTRASONIC SENSOR SONAR DISTANCE 3942](https://www.digikey.com/en/products/detail/adafruit-industries-llc/3942/9658069)\ IR LED: [Vishay VCNL3020-GS18](https://www.mouser.com/ProductDetail/Vishay-Semiconductors/VCNL3020-GS18?qs=5csRq1wdUj612SFHAvx1XQ%3D%3D)\ VCSEL: [Vishay VCNL36826S](https://www.mouser.com/ProductDetail/Vishay-Semiconductors/VCNL36826S?qs=d0WKAl%252BL4KbhexPI0ncp8A%3D%3D) ## MCU An MCU reads data from the sensors and displays it in an easily understandable format on the LCD display. It also reads input from the user interface and adjusts the operation and/or output accordingly. For example, when the user presses the button to reset the minimum dough height, the MCU sends a signal to the proximity sensor to measure the distance, then the MCU reads the data, calculates the height, and makes the display show it as the minimum height. 
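To illustrate the height bookkeeping the MCU would perform, here is a hedged sketch in Python (class and function names, the container-height convention, and the peak-detection slack are assumptions for illustration; the real firmware would run on the microcontroller and read the sensor over its actual interface):

```python
# Sketch of the dough-height logic described above (illustrative only).
SPEED_OF_SOUND_CM_PER_US = 0.0343  # at ~20 C; varies slightly with temperature

def echo_to_distance_cm(echo_us: float) -> float:
    """Convert an ultrasonic round-trip echo time (microseconds) to a one-way distance (cm)."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2

class DoughTracker:
    def __init__(self, container_height_cm: float):
        self.container_height_cm = container_height_cm
        self.min_height = None
        self.max_height = None
        self.current = None

    def reset_minimum(self, distance_cm: float) -> None:
        """User pressed the 'reset minimum' button: re-baseline all stats."""
        h = self.container_height_cm - distance_cm
        self.min_height = self.max_height = self.current = h

    def update(self, distance_cm: float) -> None:
        """Periodic measurement: the sensor hangs from the lid, so dough
        height is container height minus the measured distance."""
        h = self.container_height_cm - distance_cm
        self.current = h
        self.min_height = h if self.min_height is None else min(self.min_height, h)
        self.max_height = h if self.max_height is None else max(self.max_height, h)

    def has_peaked(self) -> bool:
        """True once the dough has shrunk noticeably below its maximum
        (the 0.5 cm slack is an assumed noise margin)."""
        return (self.max_height is not None
                and self.current is not None
                and self.current < self.max_height - 0.5)
```

The same min/current/max values would feed the LCD display, and `has_peaked()` is the condition that would drive the stretch-goal notification.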
Possible MCU: [STM32F303K8T6TR](https://www.mouser.com/ProductDetail/STMicroelectronics/STM32F303K8T6TR?qs=sPbYRqrBIVk%252Bs3Q4t9a02w%3D%3D) ## Digital Display - A [4x16 Character LCD](https://newhavendisplay.com/4x16-character-lcd-stn-blue-display-with-white-side-backlight/) would attach to the top of the lid and display the lowest height, current height, maximum height, and temperature. ## User Interface The UI would attach to the top of the lid and consist of a number of simple switches and push buttons to control the device. For example, a switch to turn the device on and off, a button to measure the height of the container, a button to reset the minimum dough height, etc. Possible switch: [E-Switch RA1113112R](https://www.digikey.com/en/products/detail/e-switch/RA1113112R/3778055)\ Possible button: [CUI Devices TS02-66-50-BK-160-LCR-D](https://www.digikey.com/en/products/detail/cui-devices/TS02-66-50-BK-160-LCR-D/15634352) ## Power - Rechargeable Lithium Ion battery capable of staying on for a few rounds of dough ([2000 mAh](https://www.microcenter.com/product/503621/Lithium_Ion_Battery_-_37v_2000mAh) or more) along with a USB charging port and the necessary circuitry to charge the battery. The two halves of the device (top and underside of lid) would probably be wired together to share power and send and receive data. ## (stretch goal) Wireless Notification System - Push notifications to a user’s phone whenever the dough has peaked. This would likely be an add-on achieved with a Raspberry Pi Zero, Gotify, and Tailscale. # Criterion For Success - Charge the battery and operate on battery power for at least 10 hours, but ideally a few days for wider use cases and convenience. - Accurately read (within a centimeter) and store distance values, convert distance to dough height, and display the minimum, maximum, and current height values on a display. - Accurately read and report the temperature to the display. 
- (stretch goal) Inform the user when the dough has peaked (visual, audio, or app based). | |
Mushroom Growing Tent Elizabeth Boyer, Cameron Fuller, Dylan Greenhagen |
Honorable Mention |
# Mushroom Growing Tent Project Team Members: - Elizabeth Boyer (eboyer2) - Cameron Fuller (chf5) - Dylan Greenhagen (dylancg2) # Problem Many people want to grow mushrooms in their own homes to experiment with safe cooking recipes, rather than relying on risky seasonal foraging, expensive trips to the store, or time- and labor-intensive DIY growing methods. However, living in remote areas or specific environments, or lacking experience, makes growing your own mushrooms difficult as well as dangerous. Without proper conditions and set-up, there are fire, electrical, and health risks. # Solution We would like to build a mushroom tent with humidity and temperature sensors that continuously monitor the internal conditions, plus heating and humidification systems to match user settings. There would be a visual interface to display the current temperature and humidity within the environment. It would be medium-sized (around 6 sq ft) and able to grow several batches at a time, with more success and less risk than a DIY mushroom tent. Some solutions to home-grown mushroom automation already exist; however, none yet encompasses all the problems we have outlined. Some operate at too small a scale and lack the heating/cooling power to yield consistent batches. Others give you a heater, a light set, and a humidifier, but leave it to the user to juggle all of these modules, which is not only difficult to balance and keep an eye on but also dangerous for an inexperienced user: spores can get released, heaters can overheat, and bacteria and mold can grow. Our solution offers an all-in-one, simple, user-friendly environment for bulk growing. 
# Solution Components ## Control Unit and User Interface The control unit and user interface are grouped together because the microcontroller is central to the design of both, and they are closely linked in function. The user interface will involve a display, such as an LCD, that shows measured or set values for different conditions (temperature, humidity, etc.), and buttons and/or knobs that allow the user to change values. The control unit will be centered around a microcontroller on our PCB with circuitry to connect to the other subsystems. Parts List: 1x Microcontroller 1x PCB, including small buttons and/or knobs, power circuitry 1x Display module 1x Power supply ## Temperature Sensing and Control The temperature sensing and control components will ensure that the grow box stays at the desired temperature that promotes optimal growth. The system will include one temperature sensor that will record the current temperature of the box and feed a data output back into our PCB. From there, the microcontroller in our control unit will read the data received and send the necessary adjustments to a Peltier module. The Peltier module will be able to increase the temperature of the box according to the difference between the current and set temperatures. Cooling will not be required, as maintaining a minimum temperature is more important than a maximum temperature for growth. Parts List: 1x Temperature Sensor 1x Peltier module ## Humidity Sensing and Control The humidity sensing and control system will work in a similar way to the temperature system, only with different ways to adjust the value. We will have one humidity sensor continually sending data to our PCB. From there, the PCB will determine whether the current value is where it should be or whether adjustments need to be made. If an increase in humidity is needed, the PCB will send a signal to activate our misting system. 
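As a rough illustration of the sense-and-adjust cycle, one control step for temperature and humidity could be sketched as below (function names, deadbands, and setpoints are placeholders rather than design values; humidity is lowered via the air cycling system described under Air Quality Control):

```python
# Illustrative bang-bang control step for the grow tent. The real control
# unit would read hardware sensors and drive the Peltier, mister, and fan
# through GPIO; deadbands here just prevent rapid on/off chatter.
def control_step(temp_c, humidity_pct, set_temp_c, set_humidity_pct,
                 deadband_temp=0.5, deadband_rh=2.0):
    """Return (heater_on, mister_on, fan_boost) for one control cycle.

    Heating only (no active cooling, per the design); humidity is raised
    by misting and lowered by cycling in drier outside air with the fan.
    """
    heater_on = temp_c < set_temp_c - deadband_temp
    mister_on = humidity_pct < set_humidity_pct - deadband_rh
    fan_boost = humidity_pct > set_humidity_pct + deadband_rh
    return heater_on, mister_on, fan_boost
```

Within the deadband, all actuators stay off, so the tent settles around the user's setpoints instead of oscillating.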
If a decrease is needed, a signal will be sent to our air cycling system to increase the rate of cycling, thereby decreasing the humidity within the box. Parts List: 1x Humidity Sensor 4x Misting heads Water tubing as needed ## Air Quality Control The air filtration system runs constantly, as healthy mushroom growth (free of bacteria) needs clean, fresh air, and mycelium requires and uses up oxygen as it grows. Additionally, this unit is connected to the humidity sensing unit: external humidity is in most cases lower than internal humidity, so cycling in new air can be used to decrease humidity. When high humidity is detected, the air filtration system will decrease the internal humidity by cycling in less humid air. Parts List: Flexible air duct, length as needed 1x Fan for promoting air cycling # Criteria For Success Our demo will show that each of our subsystems functions as expected and described below: For the control unit and user interface, we will demonstrate that the user can change the set temperature and humidity values through buttons or knobs. The humidity sensing and control system demo will show that introducing dry air into the device activates the misting system, which requires functional sensors and a water pump. The temperature sensing and control system demo will involve showing that the heater turns on when the measured temperature is below the set temperature. The air quality control system's success will be demonstrated as air movement from the fan enters the tent. | |
Oxygen Delivery Robot Aidan Dunican, Nazar Kalyniouk, Rutvik Sayankar |
Honorable Mention |
# Oxygen Delivery Robot Team Members: - Rutvik Sayankar (rutviks2) - Aidan Dunican (dunican2) - Nazar Kalyniouk (nazark2) # Problem Children's interstitial and diffuse lung disease (ChILD) is a collection of diseases or disorders. These diseases cause a thickening of the interstitium (the tissue that extends throughout the lungs) due to scarring, inflammation, or fluid buildup. This eventually affects a patient's ability to breathe and distribute enough oxygen to the blood. Numerous children affected by these diseases require supplemental oxygen for their daily activities, which hampers their mobility and freedom and diminishes their growth and confidence. Moreover, parents face an increased burden, not only caring for their child but also having to be directly involved in managing the oxygen tank as their child moves around. # Solution Given the absence of relevant solutions in the current market, our project aims to ease the challenges faced by parents and provide the freedom for young children to explore their surroundings. As a proof of concept for an affordable solution, we propose a three-wheeled omnidirectional mobile robot capable of supporting filled oxygen tanks in the size range of M-2 to M-9, weighing 1-6 kg (2.2-13.2 lbs) when full. Due to time constraints in the class and the objective of demonstrating the feasibility of a low-cost device, we plan to construct a robot at roughly 50% of the scale of the proposed solution. Consequently, our robot will handle simulated tanks/weights ranging from 0.5-3 kg (1.1-6.6 lbs). The robot will have a three-wheeled omni-wheel drive train, incorporating two localization subsystems to ensure redundancy and enhance child safety. The first subsystem focuses on the drivetrain and chassis of the robot, while the second subsystem utilizes ultra-wideband (UWB) transceivers for triangulating the child's location relative to the robot in indoor environments. 
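As background on the three-wheeled omni-wheel drive train, the inverse kinematics that turn a desired body-frame velocity into individual wheel speeds can be sketched as follows (wheel placement angles and the chassis radius are illustrative assumptions, not final design values):

```python
import math

# Illustrative "kiwi drive" inverse kinematics: body-frame velocity
# (vx, vy) in m/s and spin rate omega in rad/s map to three wheel
# surface speeds. Wheels are assumed evenly spaced at 0/120/240 degrees.
WHEEL_ANGLES = [math.radians(a) for a in (0, 120, 240)]
CHASSIS_RADIUS_M = 0.15  # center-to-wheel distance, assumed for the 50% scale

def wheel_speeds(vx, vy, omega):
    """Return the linear surface speed each omni wheel must produce (m/s)."""
    return [
        -math.sin(a) * vx + math.cos(a) * vy + CHASSIS_RADIUS_M * omega
        for a in WHEEL_ANGLES
    ]
```

A pure spin command drives all three wheels equally, which is the zero-degree turning this layout enables; a pure translation makes the rollers absorb the sideways component at each wheel.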
As for the final subsystem, we intend to use a camera connected to a Raspberry Pi and leverage OpenCV to improve directional accuracy in tracking the child. As part of the design, we intend to create a PCB in the form of a Raspberry Pi hat, facilitating convenient access to information generated by our computer vision system. The PCB will incorporate essential components for motor control, with an STM microcontroller serving as the project's central processing unit. This microcontroller will manage the drivetrain, analyze UWB localization data, and execute corresponding actions based on the information obtained. # Solution Components ## Subsystem 1: Drivetrain and Chassis This subsystem encompasses the drive train for the three-omni-wheel robot, featuring 3 H-bridges (L298N; each IC contains two H-bridges, so we plan to lay out the hardware such that we can switch to a four-omni-wheel drive train if need be) and 3 AndyMark 245 RPM 12V gearmotors equipped with 2-channel encoders. The microcontroller will control the H-bridges. The three-omni-wheel drive system facilitates zero-degree turning, simplifying the robot's design and reducing costs by minimizing the number of wheels. An omni-wheel is characterized by outer rollers that spin freely about axes in the plane of the wheel, enabling sideways sliding while the wheel propels forward or backward without slip. Alongside the drivetrain, the chassis will incorporate 3 HC-SR04 ultrasonic sensors (or three bumper-style limit switches, like a Roomba), providing a redundant system to detect potential obstacles in the robot's path. ## Subsystem 2: UWB Localization This subsystem suggests implementing a module based on the DW1000 Ultra-Wideband (UWB) transceiver IC, similar to the technology found in Apple AirTags. 
We opt for UWB over Bluetooth due to its significantly superior accuracy, attributed to UWB's precise distance-based approach using time-of-flight (ToF) rather than mere signal strength as in Bluetooth. This project will require three transceiver ICs, with two acting as "anchors" fixed on the robot. The distance to the third transceiver (referred to as the "tag") will always be calculated relative to the anchors. With the transceivers we are currently considering, at full transmit power they must be at least 18 inches apart to report the range; at minimum power, they work when they are at least 10 inches apart. For the "tag," we plan to create a compact PCB containing the transceiver, a small coin battery, and other essential components to ensure proper transceiver operation. This device can be attached to a child's shirt using Velcro. ## Subsystem 3: Computer Vision This subsystem involves using the OpenCV library on a Raspberry Pi equipped with a camera. By employing pre-trained models, we aim to enhance the reliability and directional accuracy of tracking a young child. The plan is to perform all camera-related processing on the Raspberry Pi and subsequently translate the information into a directional command for the robot if necessary. Given that most common STM chips feature I2C buses, we plan to communicate between the Raspberry Pi and our microcontroller through this bus. ## Division of Work Given that we already have a three-omni-wheel robot (slightly smaller than our 50% scale), we can immediately begin work on UWB localization and computer vision until a new iteration can be made. Simultaneously, we'll reconfigure the drive train to ensure compatibility with the additional systems we plan to implement and the ability to move the desired weight. To streamline the process, we'll allocate specific tasks to individual group members: one focusing on UWB, another on computer vision, and the third on the drivetrain. 
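To illustrate the two-anchor ranging geometry behind the UWB subsystem, here is a hedged sketch (coordinate frame and baseline are illustrative; the front/back mirror ambiguity it exposes is exactly why an extra cue such as the camera is useful):

```python
import math

# Two anchors mounted a known baseline apart on the robot, at
# (-baseline/2, 0) and (+baseline/2, 0). The two measured ranges to the
# tag locate it up to a reflection across the anchor axis.
def locate_tag(d1, d2, baseline):
    """Return (x, y) of the tag with y >= 0; the ambiguous mirror
    solution is (x, -y). d1/d2 are ranges from the left/right anchors."""
    x = (d1**2 - d2**2) / (2 * baseline)
    y_sq = d1**2 - (x + baseline / 2) ** 2
    y = math.sqrt(max(y_sq, 0.0))  # clamp small negatives from range noise
    return x, y
```

This comes directly from subtracting the two circle equations: the difference of squared ranges fixes the lateral offset `x`, and either range then fixes `|y|`.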
This division of work will allow parallel progress on the different aspects of the project. # Criterion For Success Omni-wheel drivetrain that can drive in a specified direction. Close-range object detection system working (can detect objects inside the path of travel). UWB localization down to an accuracy of < 1 m. ## Current considerations We are currently in discussion with Greg at the machine shop about switching to a four-wheeled omni-wheel drivetrain due to the increased weight capacity and integrity of the chassis. To address the safety concerns of this particular project, we plan to implement the following safety measures: - Limit the robot's max speed to < 5 MPH. - Use empty tanks/simulated weights; at no point will we ever work with compressed oxygen. Our goal is just to prove that we can build a robot that can follow a small human. - Design the base of the robot to be bottom-heavy and wide to prevent the tipping hazard. | |
Remotely Controlled Self-balancing Mini Bike Will Chen, Eric Tang, Jiaming Xu |
Honorable Mention |
# Remotely Controlled Self-balancing Mini Bike Team Members: - Will Chen hongyuc5 - Jiaming Xu jx30 - Eric Tang leweit2 # Problem Bike share and scooter share have become more popular all over the world in recent years. This mode of travel is gradually gaining recognition and support; Champaign also has a company, Veo, that provides this service. Short-distance traveling with shared bikes between school buildings and bus stops is convenient. However, since the bikes end up randomly parked around the entire city, we often need to look for where a bike is parked and walk to its location. The potential solutions are not ideal: collecting and redistributing all of the bikes once in a while is costly and inefficient, and deploying enough bikes to saturate the region is also very cost-inefficient. # Solution We think the best way to solve the above problem is to create a self-balancing, self-driving bike that users can call to their location. To make this solution possible we first need to design a bike that can self-balance. After that, we will add a remote-control feature to control the bike's movement. Since demonstrating with a real bike would be complicated, we will design a scaled-down mini bicycle to apply our self-balancing and remote-control functions. # Solution Components ## Subsystem 1: Self-balancing part The self-balancing subsystem is the most important component of this project: it will use one reaction wheel with a brushless DC motor to balance the bike based on readings from the accelerometer. MPU-6050 accelerometer/gyroscope sensor: it will measure the velocity, acceleration, orientation, and displacement of the object it attaches to, and, with this information, we can implement the corresponding control algorithm on the reaction wheel to balance the bike. Brushless DC motor: it will be used to rotate the reaction wheel. 
BLDC motors tend to have better efficiency and speed control than other motors. Reaction wheel: we will design the reaction wheel ourselves in Solidworks and ask the ECE machine shop to help us machine the metal part. Battery: it will be used to power the BLDC motor for the reaction wheel, the stepper motor for steering, and the drive motor for movement. We are considering using an 11.1 V LiPo battery. Processor: we will use the STM32F103C8T6 as the brain of this project to run the control algorithms and coordinate the various subsystems. ## Subsystem 2: Bike movement, steering, and remote control This subsystem will accomplish bike movement and steering with remote control. Servo motor for movement: it will be used to rotate one of the wheels to achieve bike movement. Stepper motor for steering: in general, stepper motors have better precision and provide higher torque at low speeds than other motors, which makes them well suited to steering the handlebar. ESP32 2.4GHz Dual-Core WiFi Bluetooth Processor: it has both WiFi and Bluetooth connectivity, so it can be used for receiving messages from remote controllers such as Xbox controllers or mobile phones. ## Subsystem 3: Bike structure design We plan to design the bike frame structure with Solidworks and have it printed with a 3D printer. At least one of our team members has previous experience in Solidworks and 3D printing, and we have access to a 3D printer. 3D printed parts: we plan to use PETG material to print all the bike structure parts. PETG is known to be stronger, more durable, and more heat resistant than PLA. 
PCB: the PCB will contain several parts mentioned above, such as the ESP32, MPU-6050, STM32, motor driver chips, and other electronic components. ## Bonus Subsystem 4: Collision check and obstacle avoidance To detect obstacles, we are considering using HC-SR04 ultrasonic sensors or a camera such as the OV7725 working with the STM32 and an obstacle detection algorithm. Based on the readings from these sensors, the bicycle could turn left or right to avoid obstacles. # Criterion For Success The bike can self-balance. The bike can recover from small external disturbances and maintain self-balance. The bike's movement and steering can be remotely controlled by the user. |
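As a sketch of the kind of control loop the self-balancing subsystem implies (reading the MPU-6050 and commanding the reaction-wheel motor), a complementary filter plus a PD law might look like the following; all gains and the filter coefficient are placeholders that would have to be tuned on the actual bike:

```python
# Illustrative balance loop: fuse the accelerometer's noisy-but-unbiased
# lean angle with the gyro's smooth-but-drifting rate estimate, then apply
# a PD law to command reaction-wheel torque.
class BalanceController:
    def __init__(self, kp=25.0, kd=1.5, alpha=0.98):
        self.kp, self.kd, self.alpha = kp, kd, alpha
        self.angle = 0.0  # estimated lean angle, radians

    def update(self, accel_angle, gyro_rate, dt):
        """accel_angle: lean angle derived from the accelerometer (rad);
        gyro_rate: angular rate from the gyro (rad/s); dt: loop period (s).
        Returns a torque command whose reaction pushes the bike upright."""
        # Complementary filter: trust the integrated gyro short-term,
        # the accelerometer long-term.
        self.angle = (self.alpha * (self.angle + gyro_rate * dt)
                      + (1 - self.alpha) * accel_angle)
        return -(self.kp * self.angle + self.kd * gyro_rate)
```

In hardware this loop would run at a fixed rate on the STM32, with the returned command mapped to the BLDC driver's input.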
Waste Bin Monitoring System Benjamin Gao, Matt Rylander, Allen Steinberg |
Most Commercially Promising Project ($500) Given to the project which shows the highest potential in its target market. |
# Team Members: - Matthew Rylander (mjr7) - Allen Steinberg (allends2) - Benjamin Gao (bgao8) # Problem Restaurants produce large volumes of waste every day, which, if not dealt with properly, can lead to many problems: overflowing waste bins, smelly trash cans, and customers questioning the cleanliness of a restaurant. Managers of restaurants value cleanliness as one of their top priorities. Not only is the cleanliness of restaurants required by law, but it is also intrinsically linked to their reputation. Customers can easily judge the worth of a restaurant by how clean it keeps its surroundings. A repulsive odor from a trash can, pests such as flies, roaches, or rodents drawn to a forgotten trash can, or even just the sight of a can overflowing with refuse can easily reduce the customer base of an establishment. With this issue in mind, many restaurant owners and managers would likely purchase a device that helps them monitor the cleanliness of aspects of their restaurants. Checking a trash can today means an employee must leave their station, walk to a can that may be out of sight or far away (possibly through bad weather), wash their hands, and return; a way to easily monitor the status of trash cans from the kitchen or another location would be convenient and save time for restaurant staff. Fullness isn't the only reason to change out the trash: a trash can may be mostly empty but extremely smelly. People are usually unable to tell if a trash can is smelly from sight alone, and would need to get close to it, open it up, and expose themselves to possible smells to determine whether the trash needs to be changed. # Solution Our project will have two components: 1. distributed sensor tags on each trash can, and 2. a central hub for collecting data and displaying the state of each trash can. 
The sensor tags will be mounted to the top of a waste bin to monitor fullness of the can with an ultrasonic sensor, the odor/toxins in the trash with an air quality/gas sensor, and also the temperature of the trash can as high temperatures can lead to more potent smells. The tags will specifically be mounted on the underside of the trash can lids so the ultrasonic sensor has a direct line of sight to the trash inside and the gas sensor is directly exposed to the fumes generated by the trash, which are expected to migrate upward past the sensor and out the lid of the can. The central hub will have an LCD display that will show all of the metrics described in the sensor tags and alert workers if one of the waste bins needs attention with a flashing LED. The hub will also need to be connected to the restaurant’s WiFi. This system will give workers one less thing to worry about in their busy shifts and give managers peace of mind knowing that workers will be warned before a waste bin overflows. It will also improve the customer experience as they will be much less likely to encounter overflowing or smelly trash cans. # Solution Components ## Sensor Tag Subsystem x2 Each trash can will be fitted with a sensor tag containing an ultrasonic sensor transceiver pair, a hazardous gas sensor, a temperature sensor, an ESP32 module, and additional circuitry necessary for the functionality of these components. The sensors will be powered with 3.3V or 5V DC from a wall adapter. A small hole will need to be drilled into the side of each trash can to accommodate the wall adapter output cord. They may also need to be connected to the restaurant’s WiFi. 
- 2x ESP32-S3-WROOM https://www.digikey.com/en/products/detail/espressif-systems/ESP32-S3-WROOM-1-N16R2/16162644 - 2x Air Quality Sensor (ZMOD4410) https://www.digikey.com/en/products/detail/renesas-electronics-corporation/ZMOD4410AI1R/8823799 - 2x Temperature/Humidity Sensor (DHT22) https://www.amazon.com/HiLetgo-Digital-Temperature-Humidity-Replace/dp/B01DA3C452?source=ps-sl-shoppingads-lpcontext&ref_=fplfs&psc=1&smid=A30QSGOJR8LMXA#customerReviews - 2x Ultrasonic Transmitter/Receiver https://www.digikey.com/en/products/detail/cui-devices/CUSA-R75-18-2400-TH/13687422 https://www.digikey.com/en/products/detail/cui-devices/CUSA-T75-18-2400-TH/13687404 ## Central Hub Subsystem The entire system will be monitored from a central hub containing an LCD screen, an LED indicator light, and additional I/O modules as necessary. It will be based around an ESP32 module that communicates with the sensor tags over the restaurant's WiFi or the ESP-NOW P2P protocol. The central hub will receive pings from the sensor tags at regular intervals, and if it determines that one or more of the values (height of trash, air quality index, or temperature) is too high, it will notify the user. This information will be displayed on the hub's LCD screen, and the LED indicator light on the hub will flash to alert the restaurant staff of the situation. - 1x ESP32-S3-WROOM https://www.digikey.com/en/products/detail/espressif-systems/ESP32-S3-WROOM-1-N16R2/16162644 - 1x LCD Screen https://www.amazon.com/Hosyond-Display-Compatible-Mega2560-Development/dp/B0BWJHK4M6/ref=sr_1_4?keywords=3.5%2Binch%2Blcd&qid=1705694403&sr=8-4&th=1 # Criteria For Success This project will be successful if the following goals are met: - The sensor tags can detect when a trash can is almost full (i.e. when trash is within a few inches of the lid) and activate the proper protocol in the central hub. 
- The sensor tags can detect when an excess of noxious fumes is being produced in a trash can and activate the proper protocol in the central hub. - The sensor tags can detect when the temperature in a trash can has exceeded a user-defined threshold and activate the proper protocol in the central hub. - The central hub can receive wireless messages from all sensor tags reliably and correctly identify which trash cans are sending the messages. |
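The success criteria above amount to per-can threshold checks on the central hub. A minimal sketch follows (field names and threshold values are assumptions for illustration, not from actual firmware):

```python
# Illustrative hub-side evaluation of one sensor-tag ping.
FULLNESS_DISTANCE_CM = 8.0   # trash within ~3 in of the lid counts as "full"
AQI_LIMIT = 150              # illustrative air-quality threshold
DEFAULT_TEMP_LIMIT_C = 35.0  # user-defined threshold per the criteria

def evaluate_ping(ping, temp_limit_c=DEFAULT_TEMP_LIMIT_C):
    """ping: dict with 'can_id', 'distance_cm' (lid to trash), 'aqi', 'temp_c'.
    Returns (can_id, alerts); a non-empty alert list would flash the LED
    and show the offending can on the LCD."""
    alerts = []
    if ping["distance_cm"] <= FULLNESS_DISTANCE_CM:
        alerts.append("full")
    if ping["aqi"] >= AQI_LIMIT:
        alerts.append("odor")
    if ping["temp_c"] >= temp_limit_c:
        alerts.append("hot")
    return ping["can_id"], alerts
```

Because each ping carries its `can_id`, the hub can attribute every alert to a specific trash can, satisfying the last criterion.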