Name | NetID | Section |
---|---|---|
Ryu Okubo | rokubo2 | ECE 120 |
Jiacheng Huang | jh59 | ECE 110 |
- Introduction
a. Statement of Purpose
"3D scanning is the process of analyzing a real-world object or environment to collect data on its shape and possibly its appearance (e.g. colour). The collected data can then be used to construct digital 3D models."
"A structured-light 3D scanner is a 3D scanning device for measuring the three-dimensional shape of an object using projected light patterns and a camera system."
Our goal is to build a structured-light 3D scanner that can be integrated with computer software to scan objects and construct 3D models simultaneously, which can be used to improve virtual reality, augmented reality, and other technologies that need 3D human-computer interaction. Together with our other team members working on the drone, our final aim is to combine the two projects into a drone-based 3D scanner.
b. Background Research
Projecting a narrow band of light onto a three-dimensionally shaped surface produces a line of illumination that appears distorted from other perspectives than that of the projector, and can be used for geometric reconstruction of the surface shape (light section).
A faster and more versatile method is the projection of patterns consisting of many stripes at once, or of arbitrary fringes, as this allows for the acquisition of a multitude of samples simultaneously. Seen from different viewpoints, the pattern appears geometrically distorted due to the surface shape of the object.
Although many other variants of structured-light projection are possible, patterns of parallel stripes are widely used. A single stripe projected onto a simple 3D surface appears geometrically deformed when viewed off-axis, and the displacement of the stripes allows exact retrieval of the 3D coordinates of any detail on the object's surface.
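In its simplest form, the stripe-displacement reconstruction described above is active triangulation: with the projector and camera separated by a known baseline, the angle at which a stripe leaves the projector and the angle at which the camera sees it together determine the depth of the surface point. A minimal sketch (the function name and the simplified planar geometry are our own illustration, not part of the proposal):

```python
import math

def depth_from_triangulation(baseline_m, proj_angle_rad, cam_angle_rad):
    """Depth z of a surface point by active triangulation.

    The projector sits at the origin and the camera at distance
    `baseline_m` along the x-axis; both angles are measured from the
    optical (z) axis, tilting toward each other. From the two ray
    equations x = z*tan(proj) and x - b = -z*tan(cam) it follows that
    z = b / (tan(proj) + tan(cam)).
    """
    return baseline_m / (math.tan(proj_angle_rad) + math.tan(cam_angle_rad))

# Example: 20 cm baseline, both rays 30 degrees off-axis -> z ~= 17.3 cm
z = depth_from_triangulation(0.20, math.radians(30), math.radians(30))
```

In a real scanner the camera angle comes from the pixel coordinate via the camera's calibrated intrinsics, and the projector angle from identifying which stripe illuminated the pixel.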
- Design Details
a. Block Diagram / Flow Chart
b. System Overview
By projecting patterns, such as stripes, onto the object we want to scan, two cameras can photograph the patterned object from two different angles. Because the projected pattern is distorted by the object's 3D features, we can use the distorted pattern to reconstruct a 3D model.
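For the reconstruction to work, each camera pixel must know which projected stripe it is seeing. One common approach (an assumption on our part; the proposal does not specify a coding scheme) is to project a sequence of Gray-coded binary stripe patterns, so every projector column receives a unique bit sequence that each pixel can decode from its bright/dark observations:

```python
def gray_to_binary(g):
    """Convert a Gray-code value back to its plain binary index."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

def decode_pixel(bits):
    """Decode one camera pixel's observations into a stripe index.

    `bits` holds the 0/1 (dark/bright) readings for that pixel across
    the projected pattern sequence, most significant bit first.
    """
    g = 0
    for bit in bits:
        g = (g << 1) | bit
    return gray_to_binary(g)

# A pixel that read bright-bright-bright (Gray code 111) lies in stripe 5.
stripe = decode_pixel([1, 1, 1])
```

Gray codes are preferred over plain binary because adjacent stripes differ in only one bit, so a decoding error at a stripe boundary shifts the index by at most one.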
- Parts
- Possible Challenges
The greatest challenge of this project may be finding a clear way to store the data coming from the sensors, and then using those data to build a 3D model.
- References
[1] "3D scanning," Wikipedia, 2020. [Online]. Available: https://en.wikipedia.org/wiki/3D_scanning. [Accessed: 13-Feb-2020].
[2] "Structured-light 3D scanner," Wikipedia, 2020. [Online]. Available: https://en.wikipedia.org/wiki/Structured-light_3D_scanner. [Accessed: 13-Feb-2020].
Attachments:
Comments:
Integrating data from a camera, lidar sensor, accelerometer, and gyroscope can be quite a challenging task requiring a deep mathematical background (especially if you're trying to combine it with a moving sensor). Do you have any libraries/pre-made programs in mind that can accomplish this or were you planning on writing the software yourself? What will you be using to process the data you receive and turn it into a 3D model? Will it be on a laptop, raspberry pi, arduino, or something else? This should be listed under your parts as well as your block diagram. Please also include the model # of each part or a link to purchase it under your parts list. I'd definitely recommend spending some more time doing research into 3D scanning and updating your proposal.
Posted by jamesw10 at Feb 15, 2020 19:21
|
I agree with James. This is a challenging task that would require a lot of math background in SLAM. It is possible to create a drone that scans a 3D model of a building with existing packages, but you would need to spend a lot more time researching how to implement that with your camera system. Mapping color onto the 3D model is ongoing research and would very likely be too challenging. I suggest that you focus first on creating a 3D map with a 3D camera (cheaper than lidar), and you might be able to create a map with just the camera, without accelerometer and gyroscope readings.
Posted by yuchenc2 at Feb 16, 2020 00:35
|
The feasibility of this project is pretty concerning to me, too. Johnny and James both have great points; what you're trying to accomplish is SLAM, which is an extremely complex, mathematically rigorous, and very difficult topic. It's very concerning that you don't have a good example of someone else doing this kind of project before (your only reference is the Wikipedia page on 3D scanning!) If you actually want to do this project, you're going to need to seriously overhaul this proposal and show us that you've thought this through. In its current state, your proposal does not do this. We would like to see specific prior work, specific software libraries, specific math/algorithms used, and more. This proposal is not sufficient given the extreme difficulty of the project.
Posted by fns2 at Feb 16, 2020 12:23
|
I don't want to sound redundant, so my recommendation is also not to worry about the portability of the whole thing at first. This is a project that would take multiple iterations, and I wouldn't spend any of your time now trying to foresee how to make this operation portable. That being said, lidar-equipped drones and their respective 3D technologies do exist. I know Intel sells an entire system based on scanning environments, and the scene is fairly developed for the agricultural industry. I would see if there are any shortcuts for the math with some open-source software or possibly even educational versions. This way you can focus on the electronics part of the lab and not so much the math and developing that part of the software. Also, many Arduino boards contain gyroscopes and accelerometers, so keep an eye out for those when selecting an Arduino or other microcontroller.
Posted by dbycul2 at Feb 16, 2020 20:00
|
I don't want to re-re-repeat too much. Lidar is expensive; a better alternative may be infrared proximity sensors. I believe they are accurate enough for this (especially with multiple measurements). Computing position from acceleration data isn't always easy: constant error in acceleration becomes quadratic error in position. Look into accounting for this. I like the idea and I think you should pursue it, but I would do research on other cheap approaches. Specifically, this <https://www.instructables.com/id/DIY-3D-scanner-based-on-structured-light-and-stere/> project looks interesting (using structured light).
Posted by weustis2 at Feb 16, 2020 21:49
|
Approved, Feb 20th.
Posted by fns2 at Feb 20, 2020 18:44
|