MAKER: mfx2 / Translated by: Fun Unlimited
Creative Inspiration
Before starting the project, I did some basic research. I found that many 3D scanners use a rotating platform together with sensors that measure the distance to the center, building up a model of the rotating object. Many others use dual cameras, similar to the Kinect.
My project is based on the Yscanner, a low-resolution laser scanner that uses a single camera; it is simple and practical.
The technique projects a laser line that is offset from the camera: the apparent position of the line in the image encodes the distance from the surface to the center, and you can clearly see the line in the picture.
Video demonstration:
Materials List
Design Principle
The core component of this project is a line laser emitter that projects a vertical line onto the object. A Raspberry Pi camera captures the projection, which is perspective-corrected and filtered before image processing. During image processing, the distance from each part of the laser line to the center of the object is collected.
In cylindrical coordinates, each image yields both r and z components; rotating the object then produces a new slice at a new angle, and the slices together form the three-dimensional model.
To realize this design, I used a Raspberry Pi as the central computing unit.
1. Connect the Raspberry Pi to the stepper motor and motor driver. Power them with an external 5 V supply and control them with the Raspberry Pi's GPIO pins.
2. Connect the line laser emitter to the Raspberry Pi's 3.3 V line, and connect the Raspberry Pi camera to the camera port.
3. Install a simple push button and an LED to display the system status to the user.
Points to Note:
1. Install the electronics in a laser-cut box assembled with T-slots and M3 hardware.
2. Hide the electronics in the bottom compartment and place the rotating tray on the lid for easy placement of objects. The lid keeps stray light out of the system, since such light would create noise in the final scan.
Shell Design
Using Autodesk Fusion 360, I designed a model of the shell. The design is simple: in essence, a box with a hinged lid.
The device is divided into two main layers, the electronics layer and the main layer, with holes between them for routing wires.
The panels were cut on an Epilog Zing 40 W laser cutter. As shown in the picture, the shell consists of the main layer, the electronics layer, two lid components, the front panel, the back panel, and two side panels.
The main layer has three cutouts: one for mounting the stepper motor, one for routing the laser's wires, and one for the Raspberry Pi camera's ribbon cable.
The base has mounting holes for fixing the Raspberry Pi, breadboard, and motor driver, as well as a larger cutout for the stepper motor. The lid components simply snap together to form a triangular lid, with a hinge whose width matches the diameter of the holes in the side panels.
The back panel and one of the side panels have slots on the side for easy access to the Raspberry Pi's ports (HDMI, USB, Ethernet, and power). The front panel is a simple piece that I drilled by hand to mount the button and LED.
All parts are held together with M3 screws, T-joints, and slots.
I used a laser cutter for most components because it is fast and convenient, but it is difficult to produce 3D geometry on a laser cutter, so two components were 3D printed.
The first is the bracket for the line laser emitter. This part mounts on the main layer at a 45-degree angle to the camera's line of sight and has a hole that holds the laser in place.
The second is a motor bracket, needed because the motor's shaft is too long. The bracket friction-fits against the laser-cut parts without obstructing them and lowers the plane the motor attaches to, so that the rotating platform sits flush with the main layer.
Files required for the project can be downloaded from the project file repository: https://make.quwj.com/project/204
Electronic Components
The hardware wiring part of this project is very simple; just connect the motor, button, LED, laser, and camera to the Raspberry Pi.
1. Connect a resistor in series with each pin to protect it. One GPIO pin is dedicated to the LED: when the device is ready, the LED is lit steadily, and when the device is running, the pin is pulsed with PWM.
2. Connect another GPIO pin to the button with a pull-up: it reads high when the button is not pressed and low when pressed.
3. Four GPIO pins are used to drive the stepper motor.
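As a rough illustration of this wiring in software, here is a minimal Python sketch using the RPi.GPIO library. The pin numbers and blink frequency are placeholders, not the project's actual values.

```python
# Minimal GPIO setup sketch (RPi.GPIO); pin numbers are placeholders.
import RPi.GPIO as GPIO

LED_PIN = 18                     # PWM-capable pin for the status LED
BUTTON_PIN = 23                  # button input: high idle, low when pressed
MOTOR_PINS = [4, 17, 27, 22]     # four control lines into the motor driver

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
for pin in MOTOR_PINS:
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

GPIO.output(LED_PIN, GPIO.HIGH)  # solid LED: ready to scan

# Block until the user presses the button (line pulled low)
GPIO.wait_for_edge(BUTTON_PIN, GPIO.FALLING)

led = GPIO.PWM(LED_PIN, 2)       # 2 Hz flashing while the scan runs
led.start(50)
```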
Stepper Motor
Since the motor in this project only needs to step by fixed amounts, with no speed control, I chose a very simple stepper motor driver (the L298N). The driver simply boosts the control lines to the level required at the motor's inputs.
To understand how the stepper motor operates at a low level, I referred to the L298N datasheet and the Arduino stepper library. A stepper motor's core is a rotor with magnetic poles of alternating polarity.
Four wires control two electromagnet coils, each coil driving an opposing pair of poles; by switching the polarity of the coils in sequence, the rotor is stepped around.
Once I understood how the stepper motor works, controlling it was much easier. Since the stepper's maximum current is about 0.8 A, which exceeds what the Raspberry Pi can supply, I powered it from a separate 5 V supply instead of from the Raspberry Pi.
Software Part
The software for this project consists of four parts: image processing, motor control, mesh creation, and embedded functionality.
1. As shown in the figure, on boot the Raspberry Pi automatically logs in and .bashrc launches the Python code. The system lights the status LED to tell the user it has started correctly and waits for the button to be pressed.
2. The user places the item to be scanned and closes the lid. After the button is pressed, the LED flashes to show that the device is working.
The device loops between image processing and motor control until a full rotation has been completed and all data for the item collected. Finally, it creates the mesh and sends the file to a pre-selected email address.
3. Pressing the button again restarts the loop for another scan.
Image Processing
The first step is to process the captured images, extracting the information they contain into a form usable for building an array of points in space.
First, a photo is taken of the object on the platform, along with all the background noise created where the laser hits the back of the box and scatters.
This raw image has two main problems: the object is photographed at a steep angle, and there is a lot of background noise. The perspective issue must be handled first, because the photo as captured cannot give the object a consistent height.
As shown in the figure, the inverted “L” shape has a uniform height; but because one side is longer than the other, the two sides appear to have different heights at the edge nearest the camera.
To solve this problem, I had to transform the workspace in the image from its trapezoidal shape into a rectangle. I used the code provided at the link below.
https://www.pyimagesearch.com/2014/08/25/4-point-opencv-getperspective-transform-example/
Given an image and four points, the code crops the image between the points and applies a compensating perspective transform, mapping the four points to a rectangle instead of the original trapezoid.
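For illustration, here is a condensed sketch of such a four-point transform with OpenCV; the corner coordinates are placeholders standing in for the measured corners of the workspace.

```python
# Sketch of four-point perspective correction with OpenCV.
import cv2
import numpy as np

def four_point_transform(image, pts):
    # pts: corners ordered top-left, top-right, bottom-right, bottom-left
    (tl, tr, br, bl) = pts
    width = int(max(np.linalg.norm(br - bl), np.linalg.norm(tr - tl)))
    height = int(max(np.linalg.norm(tr - br), np.linalg.norm(tl - bl)))
    dst = np.array([[0, 0], [width - 1, 0],
                    [width - 1, height - 1], [0, height - 1]],
                   dtype="float32")
    M = cv2.getPerspectiveTransform(pts.astype("float32"), dst)
    return cv2.warpPerspective(image, M, (width, height))

# Placeholder corners of the trapezoidal workspace in the raw photo
corners = np.array([[120, 80], [520, 80], [600, 400], [40, 400]],
                   dtype="float32")
rectified = four_point_transform(cv2.imread("capture.jpg"), corners)
```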
The next issue to resolve is the background noise formed by external light and by light reflected from the laser itself. To filter it, I used OpenCV's inRange() function, thresholding the image so that only red light above a certain level is kept.
To find the right values, I started with a lenient threshold and tightened it step by step until the only light picked up was the laser on the scanned object.
In the filtered image, I take the brightest pixel in each row, producing a one-pixel-wide line that traces the leftmost edge of the laser. Each of these pixels is then converted into a vertex in 3D space and stored in an array, as described in the mesh creation section.
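A minimal sketch of this filtering and line-extraction step might look like the following; the BGR threshold values and the file name are assumptions, since the real thresholds were found by trial.

```python
# Sketch: threshold for red with inRange(), then take the brightest
# remaining pixel in each row to trace the laser line.
import cv2
import numpy as np

image = cv2.imread("rectified.jpg")      # perspective-corrected frame
lower = np.array([0, 0, 150])            # assumed BGR lower bound
upper = np.array([80, 80, 255])          # assumed BGR upper bound
mask = cv2.inRange(image, lower, upper)  # keep only strong red pixels

red = cv2.bitwise_and(image, image, mask=mask)[:, :, 2]

line = []                                # one (row, column) point per row
for y in range(red.shape[0]):
    if red[y].max() > 0:                 # this row contains laser light
        line.append((y, int(np.argmax(red[y]))))  # leftmost brightest
```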
Motor Control
After successfully processing a single image into one slice of the object, I need to rotate the object to take new photos from other angles. To do this, I control the stepper motor under the platform that carries the scanned object.
I keep a variable that tracks the motor's state and step the four motor inputs through their drive sequence; this is the basis of the stepping functionality.
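Below is a minimal sketch of such full-step control, assuming an L298N wired to four placeholder GPIO pins; the exact pins and sequence in the project may differ.

```python
# Sketch of full-step control of a bipolar stepper through an L298N.
import time
import RPi.GPIO as GPIO

MOTOR_PINS = [4, 17, 27, 22]     # placeholder GPIO pins to the driver
SEQUENCE = [(1, 0, 1, 0),        # each tuple sets the polarity of the
            (0, 1, 1, 0),        # two coils for one full-step state
            (0, 1, 0, 1),
            (1, 0, 0, 1)]

GPIO.setmode(GPIO.BCM)
for pin in MOTOR_PINS:
    GPIO.setup(pin, GPIO.OUT)

state = 0                        # variable tracking the motor state

def step(n):
    """Advance the motor n steps, updating the tracked state."""
    global state
    for _ in range(n):
        state = (state + 1) % 4
        for pin, level in zip(MOTOR_PINS, SEQUENCE[state]):
            GPIO.output(pin, level)
        time.sleep(0.005)        # give the rotor time to settle
```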
Mesh Creation
To create a mesh from all the processed images, I first convert every white pixel in each processed image into a vertex in 3D space. Since each image is a single slice of an object captured by rotation, cylindrical coordinates are the natural representation.
The height in the image gives the z coordinate, the distance to the center of the rotating platform gives r, and the rotation of the stepper motor gives theta. Since the data are stored in cylindrical coordinates, each vertex must then be converted into Cartesian coordinates.
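The conversion itself is simple; here is a small illustrative sketch (the numbers in the usage example are made up):

```python
# Cylindrical-to-Cartesian conversion for one laser point: radius r
# (distance to the platform center), angle theta (stepper rotation),
# and height z (image row).
import math

def to_cartesian(r, theta, z):
    x = r * math.cos(theta)
    y = r * math.sin(theta)
    return (x, y, z)

# Example: a point 30 units from the center after 100 of 400 steps
theta = 2 * math.pi * 100 / 400          # a quarter turn
vertex = to_cartesian(30.0, theta, 12.5)
```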
The vertices from each image are stored in a list, and those lists are in turn stored in another list containing one vertex list per captured image. After all images have been processed and converted into vertices, the vertices are assembled into the final mesh.
It is best to include the top and bottom vertices, and the number of evenly distributed vertices kept per image is chosen based on the resolution. Since not all vertex lists have the same length, I find the list with the fewest vertices and remove vertices from all the others until every list is the same length.
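A sketch of that equalization step, assuming each slice is a plain Python list of vertices:

```python
# Evenly downsample every slice to the length of the shortest one so
# that each image contributes the same number of vertices.
def equalize(slices):
    target = min(len(s) for s in slices)
    result = []
    for s in slices:
        if target == 1:
            idx = [0]
        else:
            # pick `target` evenly spaced indices across the slice
            idx = [round(i * (len(s) - 1) / (target - 1))
                   for i in range(target)]
        result.append([s[i] for i in idx])
    return result
```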
After creating the vertex lists, I can build the mesh. I chose the .obj file format for the mesh because it is simple and supported by 3D printing software.
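As an illustration of the format, here is a minimal sketch of an .obj writer that connects neighboring slices with quad faces; it assumes the slices have already been equalized as above and is not the author's actual code.

```python
# Write a mesh in .obj format: "v" lines are vertices, "f" lines are
# faces referencing vertices by 1-based index.
def write_obj(path, slices):
    n = len(slices[0])                   # vertices per slice (equalized)
    with open(path, "w") as f:
        for s in slices:                 # vertex records
            for (x, y, z) in s:
                f.write("v {} {} {}\n".format(x, y, z))
        for i in range(len(slices)):     # quad faces between each slice
            j = (i + 1) % len(slices)    # and its neighbor (wraps around)
            for k in range(n - 1):
                a = i * n + k + 1        # .obj indices start at 1
                b = j * n + k + 1
                f.write("f {} {} {} {}\n".format(a, b, b + 1, a + 1))
```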
Embedded Functionality
Once the device was working, I polished it by adding full embedded functionality; this means removing the keyboard, mouse, and monitor, and sending the .obj file wirelessly after processing.
1. Change the startup configuration to automatically log in and launch the main Python program on boot: run sudo raspi-config, select “Console Autologin”, and add the line “sudo python /home/pi/finalProject/FINAL.py” to /home/pi/.bashrc.
2. Add a button and a status LED for user input and output. The button lets the user start a scan, and the LED indicates the machine's state.
If the LED is lit steadily, the device is ready to start scanning; if the LED flashes, the device is currently scanning; if the LED signals an error, there is a software error and the system must be restarted.
3. Finally, the device sends the .obj file out by email, using the smtplib and email libraries. This wireless delivery is convenient, letting the generated file reach the user and be opened on different platforms.
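A minimal sketch of that email step with smtplib and the email library; the addresses, server, and password are placeholders.

```python
# Email the finished .obj file as an attachment.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "3D scan result"
msg["From"] = "scanner@example.com"            # placeholder sender
msg["To"] = "user@example.com"                 # placeholder recipient
msg.set_content("Scan complete; .obj file attached.")

with open("scan.obj", "rb") as f:
    msg.add_attachment(f.read(), maintype="text",
                       subtype="plain", filename="scan.obj")

with smtplib.SMTP_SSL("smtp.example.com", 465) as server:  # placeholder
    server.login("scanner@example.com", "password")        # placeholder
    server.send_message(msg)
```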
Putting All Components Together
After completing the steps above, you can assemble the components.
1. Assemble the shell box.
2. Install the camera and laser in the box.
3. Install the other electronics.
4. At the back of the box, check that the Raspberry Pi's ports and the 5 V motor input are accessible.
5. Mount the button and its LED status indicator on the front of the device.
Completion
The laser 3D scanner scans objects with reasonable accuracy. Object features are distinct and recognizable, and the parts are easy to 3D print using slicing software such as Repetier.
The biggest finding from testing is the device's consistency: across multiple scans of the same object, even with slight changes in the object's position, the scanning program generates very similar .obj files.
As shown in the picture, the results of three scans are very similar, capturing the same details. The system's consistency is quite good.
Adjustable Aspects
One adjustable variable is the scan resolution. Since the stepper has 400 steps per revolution, choosing the step size ΔΘ per frame sets the angular resolution. By default I set the angular resolution to 20 iterations, meaning the motor rotates 20 steps per frame (400/20 = 20).
This choice mainly saves time: a scan done this way takes about 45 seconds. For a higher-quality scan, the number of iterations can be increased up to 400, which provides more points for the 3D reconstruction and a more detailed scan.
Besides angular resolution, the vertical resolution can also be adjusted, that is, the number of points sampled along each laser slice. To save time I set the default to 20, but the value can be increased for better results.
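For concreteness, the two knobs might look like this in code; the names are illustrative, not the author's actual variables.

```python
# Illustrative resolution settings (placeholder names).
TOTAL_STEPS = 400                 # stepper steps per full revolution
ANGULAR_RESOLUTION = 20           # slices (images) per scan; up to 400
STEPS_PER_SLICE = TOTAL_STEPS // ANGULAR_RESOLUTION   # 400/20 = 20
VERTICAL_RESOLUTION = 20          # points sampled along each laser slice
```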
As shown in the figure, changing the angular and spatial resolution parameters produces different scanning results; each label is in the form angular resolution x spatial resolution. At the default scan settings, the duck's features are recognizable but not distinct.
As the resolution increases, precise features emerge, including the duck's eyes, beak, tail, and wings. The highest-resolution scan takes about 5 minutes, and the result is very good.
Limitations
Despite the project's success, there are still limitations in the design and implementation. One issue is how the laser light interacts with the object's surface: semi-transparent objects, and very bright or dark ones, all cause defects. If the object is semi-transparent, the light is absorbed and scattered, making the slice readings very noisy; on bright or dark objects, the light is reflected or absorbed to the point that it is difficult to pick up.
Additionally, capturing the object's features with a camera brings the limitation of line-of-sight occlusion: recesses and sharp angles are often blocked by other parts of the object, which is why the little yellow duck's tail loses its curvature in the scan. The camera can also only detect the surface structure of an object; it cannot capture holes or internal geometry. This is a common issue with other scanners as well.
Areas for Improvement
Although the overall result of the project is good, there are areas that could be optimized:
1. Currently, the scan resolution can only be changed by editing the hard-coded resolution variables in the code. To make the project more fully embedded, a resolution potentiometer could be added so users could change the resolution without plugging a monitor and keyboard into the scanner.
2. The meshes the scanner creates can be rough. Mesh smoothing techniques could be applied to even out irregular and rough areas.
3. Pixel coordinates do not scale well to real-world dimensions: the meshes I created were six to seven times larger than the actual object.

