First, I needed a broken HoloLens (or, as it turned out, more than one).
About five or six years ago, a friend lent me a first-generation HoloLens, and that experience is still fresh in my mind.
HoloLens
HoloLens is a mixed reality headset developed by Microsoft. The headset tracks your head movements and gaze and projects matching virtual objects before your eyes. Because the device knows where you are and where you are looking, you can interact with virtual 3D objects using gestures, such as raising a hand in the air and "clicking" with your fingers.
I remember starting RoboRaid: before I could react, bizarre alien robots burst through my walls and attacked me. Following the game's audio prompts, I kept changing my firing direction and snapping my fingers like Thanos.
HoloLens gaming experience | Image provided by the author
These robots were not exquisitely modeled, but what struck me was the "realism". I distinctly remember that after the game ended, I went to touch the wall to check whether it really had been blown open. Even the gameplay videos from years ago still feel impressive today.
Compared with the Google Glass of that era, which displayed a flat "virtual screen", HoloLens creates 3D holographic images that blend into the real environment, and its binocular imaging fuses with the scene far better than a monocular display (Google Glass's 2D screen). Built on top of the real environment, it generates game scenes "real" enough to have varying sizes and occlusion, letting people interact with the scene. However, the price in China once soared to 70,000 yuan, which completely dashed my hopes.
Two years later, I happened to spot a "broken" HoloLens on Xianyu (a second-hand marketplace) for about 400 yuan. An idea struck me, and I bought it, rubbing my hands together in anticipation of the repair.
Fortunately, after disassembling it, I found that the core components (CPU, memory chips, and several main chips) were undamaged; the internal damage was mainly to the circuit, which could be repaired with flying wires (a method of directly connecting two breakpoints on the circuit board with wire).
Unfortunately, the lens was broken, but since the HoloLens optical lens is “modularly designed,” I only needed to replace the optical lens (yes, I bought another broken HoloLens with intact lenses for about 400 yuan).
Screen and the “modular” optical module | Image provided by the author
At first, I thought installing the lens would solve the problem, but the first time I put the headset on, the image was blurry. The second time, one eye's image sat higher than the other's. I kept taking it off and adjusting... During this endless calibration, the good lens also got damaged from repeated removal, and I almost gave up.
Repaired HoloLens, without the shell | Image provided by the author
Fortunately, at the moment it powered on, the feeling of playing HoloLens returned. More importantly, during the repair process, I gained some understanding of the imaging mechanism of the waveguide lens, which laid the groundwork for later developments.
Since they're called glasses, the optics are key.
Smart glasses are widely regarded as the "next-generation computing platform" and should serve as efficiency tools, meeting everyday needs like translation and navigation (this is also the direction many manufacturers have been pushing recently). "Everyday" means the device must be compact and portable, which was also my original motivation for making smart glasses.
A typical AR device consists of two main components: the processing chips and the optical module; the latter is often what distinguishes different AR glasses. The optical module of the AR glasses on the market mainly consists of three parts. Put simply, they project the image, magnify it, and redirect it.
"Projecting" means a screen is needed to form the image. "Magnifying" means enlarging that image. "Redirecting" uses different optical techniques (such as Google's prisms or HoloLens's waveguide lenses) to reflect, refract, or diffract the light so that it finally forms an image on the retina.
For instance, Google Glass projects images onto a small prism with an angled reflective surface cut into it. Waveguides, by contrast, improve significantly on size and viewing angle. To raise "optical efficiency", the key is minimizing loss as light travels; "total internal reflection" lets light advance through the waveguide by bouncing back and forth (like a slithering snake) without escaping out of it.
Different optical technology principles | Huachuang Securities
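Total internal reflection kicks in once light hits the waveguide's boundary at a steep enough angle, given by Snell's law. A minimal sketch of that condition, using illustrative refractive indices (ordinary glass, n ≈ 1.5, in air) rather than the actual values of any HoloLens or commercial waveguide:

```python
import math

def critical_angle(n_waveguide: float, n_outside: float = 1.0) -> float:
    """Angle of incidence (degrees, measured from the surface normal)
    above which light is totally internally reflected instead of
    escaping the waveguide. From Snell's law: sin(theta_c) = n2 / n1."""
    return math.degrees(math.asin(n_outside / n_waveguide))

# Illustrative values: glass (n ~ 1.5) surrounded by air (n = 1.0).
# Light striking the glass-air boundary at more than ~41.8 degrees
# from the normal stays trapped and zigzags along the waveguide.
print(round(critical_angle(1.5), 1))  # → 41.8
```

A denser waveguide material lowers this critical angle, which is one reason high-index glass is attractive for AR optics: more of the injected light satisfies the reflection condition.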
For my first attempt, I used the waveguide lens removed from the HoloLens. As for a device that covers the first two functions, everyone is familiar with one: an ordinary projector. So I bought a ready-made mini projector and used a Raspberry Pi as the processing terminal to build the first prototype. The downside is that projectors are large and power-hungry, which does not suit a mobile device.
The first version I made was very large | Image provided by the author
With my abilities at the time, building a sufficiently small projection light engine was unrealistic. But my thinking became clearer: to keep the build manageable, I should pick "raw materials" that cover the three functions above as completely as possible; after all, the waveguide lens from the HoloLens is no different from a traditional optical element, serving only to "change the direction of light propagation".
Thus, I abandoned the idea of using HoloLens lenses to make one and opted for more suitable and compact components.
I found the optical assembly of a broken Epson BT200 (the old rule: disassemble!), in which the light-redirecting and magnifying optics were already combined; I only needed to replace the internal imaging component (in other words, swap the screen). The BT200 uses free-form optics, which avoids the complex structure of a waveguide and makes it relatively easy to modify.
I removed the BT200's screen, replaced it with an OLED screen that accepts HDMI input, and connected it to a Raspberry Pi to create my first pair of "smart glasses".
BT200 version of the glasses; later I added a 3.5mm headphone jack | Image provided by the author
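Driving a small non-standard panel over HDMI from a Raspberry Pi usually means forcing a custom video mode, since the Pi may not detect the display at boot. A sketch of the relevant `/boot/config.txt` options on legacy (pre-KMS) Raspberry Pi OS; the 1024x768 timing here is an example, and a real micro-OLED panel would need its own native resolution:

```ini
# /boot/config.txt fragment (legacy Raspberry Pi OS display stack)
hdmi_force_hotplug=1   ; output HDMI even if no display is detected at boot
hdmi_group=2           ; DMT (computer monitor) timing group
hdmi_mode=87           ; use the custom mode defined by hdmi_cvt below
hdmi_cvt=1024 768 60   ; width height framerate (example values only)
```

On current Raspberry Pi OS with the KMS driver, the equivalent is a `video=` kernel parameter instead, so this fragment is only a sketch of the general approach.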
For the third attempt, I wanted a better version and spent 4,500 yuan on a commercial array-waveguide module. In the days that followed, I used it to watch "Animal World" and play "Animal Crossing"... Over HDMI I could "project" the game screen in front of my eyes, playing while sitting or lying down without worrying about the Switch hitting my face.
Watching videos through the glasses; filming artifacts aside, the image quality is good | Image provided by the author
Wearing my self-made cool “glasses” to watch stray cats.
Overall, the image quality satisfied me. Then I wondered, besides just a screen, could I achieve more?
I added USB as an expansion port, compatible with modules I designed later, and on top of the Raspberry Pi's Linux system I wrote applications corresponding to the different external modules. Practical problems quickly arose. I hadn't thought through what I ultimately wanted the glasses to be, so adding or removing features midway became difficult: more functions mean more accessories and interconnections, and the glasses themselves have limited space to accommodate them.
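The "one application per external module" idea can be sketched as a small dispatch table that maps a plugged-in USB device to the app that should handle it. The device IDs and handler names below are made up for illustration; they are not the actual IDs of any module I describe:

```python
# Hypothetical registry: (vendor_id, product_id) -> handler application.
HANDLERS = {
    ("1a2b", "0001"): "thermal_detector",  # hypothetical thermal camera module
    ("1a2b", "0002"): "audio_module",      # hypothetical audio add-on
}

def pick_handler(vendor_id: str, product_id: str) -> str:
    """Return the application to launch for a plugged-in module,
    falling back to a generic handler for unknown devices."""
    return HANDLERS.get((vendor_id, product_id), "unknown_module")

print(pick_handler("1a2b", "0001"))  # → thermal_detector
print(pick_handler("ffff", "0000"))  # → unknown_module
```

On a real Linux system this lookup would typically be wired to udev hotplug events rather than called by hand.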
In the end, I only built one application, "recognition and detection (of living beings)"; but the Raspberry Pi I was using only had USB 2.0 ports, which couldn't supply enough power, so the device couldn't drive the detection camera. The idea of extending the glasses' functionality nearly failed.
But I didn't want to stop there. Since I couldn't build a "Super Saiyan", I decided to split off the "recognition and detection" function and turn it into a standalone "sight scope". (On the glasses this function ran into too many problems to improve further; it also touches on subsequent patent applications for the "sight scope", so I won't elaborate here.)
Converted to a sight scope | Image provided by the author
Simply put, the “sight scope” enhances environmental perception based on thermal imaging combined with image algorithms, providing certain night vision capabilities; thermal imaging can penetrate smoke and detect objects with specific heat sources (useful for quickly searching for living targets).
I initially used a regular camera, but the performance of a visible light camera in low-light environments was very unsatisfactory, and adding extra lighting went against the portability and low power consumption goals. Later, I tried thermal imaging, which does not rely on visible light imaging and has a certain anti-interference capability; strong flashes are completely “ineffective” against thermal imaging.
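One of the simplest "image algorithms" useful on top of a thermal sensor is contrast stretching: raw sensor counts span a narrow band, and remapping them to the full display range makes small temperature differences visible. A minimal sketch in plain Python, operating on a made-up 1-D slice of readings (a real sensor delivers a 2-D frame, and real pipelines add denoising and colormapping):

```python
def normalize_frame(raw, out_max=255):
    """Stretch raw thermal sensor counts to the 0..out_max display
    range so small temperature differences become visible contrast."""
    lo, hi = min(raw), max(raw)
    if hi == lo:                 # flat scene: avoid division by zero
        return [0] * len(raw)
    return [(v - lo) * out_max // (hi - lo) for v in raw]

# Made-up readings: a warm body (3050) against a cooler background.
frame = [3000, 3005, 3010, 3050, 3002]
print(normalize_frame(frame))  # → [0, 25, 51, 255, 10]
```

After stretching, the warm target saturates at full brightness while the background stays dark, which is roughly why a person "pops" in a thermal view even at night.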
One reason for making it a sight scope is that I consider myself a half-hearted military enthusiast. The basic design continued the previous glasses’ concept, aiming to find solutions to many problems exposed by the glasses version.
The biggest problem was power consumption. The improved glasses already consumed far less than the first-generation "HoloLens version", but the Raspberry Pi's power management is mediocre: a built-in 2,000 mAh battery lasted only about 30 minutes. Can you imagine carrying a 10,000-20,000 mAh power bank for a "little gadget"?
In addition, the heat from the Raspberry Pi hurt the wearing experience over long sessions. For this improved (sight scope) version I personally didn't need video or multimedia functions, so cutting power consumption and heat became the primary goal. I therefore replaced the Raspberry Pi-based design with a low-power STM32 solution and upgraded the accessories, extending the runtime to about 7 hours.
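A quick back-of-envelope check of those runtime figures (ideal numbers, ignoring conversion losses and voltage details, which a real power budget must include):

```python
def runtime_hours(capacity_mah: float, avg_current_ma: float) -> float:
    """Ideal battery runtime: capacity divided by average draw,
    ignoring regulator losses and battery discharge behavior."""
    return capacity_mah / avg_current_ma

# 2000 mAh lasting ~0.5 h implies an average draw around 4000 mA
# for the Raspberry Pi version.
print(round(runtime_hours(2000, 4000), 1))  # → 0.5

# For ~7 h from a comparable 2000 mAh pack, the STM32 version would
# need an average draw of roughly 2000 / 7 ≈ 286 mA.
print(round(2000 / 7))  # → 286
```

That rough 14x gap between the two average draws is what moving from a Linux single-board computer to a microcontroller plus leaner accessories buys you.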
During the pandemic, when I rarely went out, I used the sight scope to observe outside and “detect” stray cats. By the way, long-wave thermal imaging cannot see gases, so I couldn’t see whether passersby were farting.
Piercing smoke experiment | Image provided by the author
AR is not a specific "device" but a technology for augmenting the real environment. HoloLens's "environment scanning" is based on SLAM (Simultaneous Localization and Mapping), which scans the surroundings and overlays the result onto reality. My device cannot do that yet.
But it begins to fulfill my initial “fantasy” of AR glasses—interacting with the environment.
At this point, I counted the costs and stopped.
Although I kept looking for solutions to the problems of AR glasses, development stagnated, because the cost of building them far exceeded the scope of DIY: at the time a single waveguide lens cost 4,500 yuan, and I had already spent over 10,000 yuan on parts.
Manufacturers developing AR glasses have also run into hard problems. The first is battery life: for a device billed as the next-generation computing platform, AR glasses last far less than a phone, and battery life determines practical value. The second is screen brightness: at present only monochrome screens are usable outdoors; color screens are not bright enough.
When the OPPO Air Glass was released, some asked why it wasn't in color. This is the answer: red and blue cannot reach sufficient brightness, and among the "three primaries" green consumes the least power at a given brightness. Color content is also larger, occupying more memory and drawing more power. Moreover, because of the optical solution OPPO chose, the content displayed on the lens is clearly visible to bystanders.
OPPO Air Glass display effect | Image from the internet
This reminds me that although Google Glass did have a color screen, it used a low resolution (640*480), a narrow viewing angle, and a small image to tame power consumption and battery life. Even so, continuous use did not exceed four hours. HoloLens, by contrast, is stunning enough (the image rivals a TV in size) but pays for the brightness a large image needs with power consumption.
I personally think Google made sound trade-offs in designing its AR glasses; the product was the result of balancing many factors. To some extent, it gave me a glimmer of hope for the concept device shown at this year's Google I/O. So how far are we from "consumer-grade AR"? It may depend on whether manufacturers chase the "coolness" of HoloLens or build something closer to Google Glass.
Author: Xiao Xinjie
Editor: Shen Zhihan
This article is from Guokr and may not be reproduced without authorization.
If you need to contact, please email [email protected]