Recently I have been working on a wearable camera project built around a Raspberry Pi. I had previously bought a Narrative Clip, a commercial wearable camera, but its quality was quite disappointing. Since I was about to travel through four countries in Europe, I spent some time building my own wearable camera instead; with a bit of hacking, it seemed entirely doable.
I cut a hole in the shoulder strap of my backpack and fitted the Raspberry Pi camera module inside it. The ribbon cable runs inside the strap up to the Raspberry Pi at the top of the backpack. Doesn't it look well hidden?
The main point of this setup is to keep rain off the electronics while holding the camera firmly against the bag, so it never needs repositioning (the Narrative Clip had to be readjusted constantly).
With these two issues resolved, everything became much better. The photo on the right shows what my bag looked like completely soaked in heavy rain.
Adding a GPS Module
For me, the main purpose of this camera is to take photos while on holiday, so I thought it would be worth adding a LinkIt ONE module. The LinkIt ONE is a wireless development board; I'm using it here because it has GPS and can feed position data to the Raspberry Pi, so each photo can automatically record its location in the EXIF data.
Many people know the quirks of GPS: sometimes the fix is spot on, other times it can be quite far off. Accuracy depends mainly on how well the receiver can acquire satellites, and since I wasn't going anywhere particularly remote, the fix should be good enough.
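To give a concrete idea of the EXIF-tagging step, here is a minimal sketch, assuming the LinkIt ONE streams standard NMEA sentences to the Pi over a serial link and that the pyserial, pynmea2 and piexif packages are installed. The port name and file name are illustrative, not the exact setup used in the project.

```python
import serial   # pip install pyserial
import pynmea2  # pip install pynmea2
import piexif   # pip install piexif

def to_dms_rational(deg):
    """Convert a decimal degree value into the EXIF rational DMS format."""
    deg = abs(deg)
    d = int(deg)
    m = int((deg - d) * 60)
    s = round((deg - d - m / 60) * 3600 * 100)
    return ((d, 1), (m, 1), (s, 100))

def read_fix(port="/dev/ttyUSB0"):
    """Block until a GGA sentence with a valid fix arrives on the serial link."""
    with serial.Serial(port, 9600, timeout=2) as link:
        while True:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line.startswith("$GPGGA"):
                continue
            try:
                msg = pynmea2.parse(line)
            except pynmea2.ParseError:
                continue  # skip partial or corrupted sentences
            if msg.gps_qual and msg.gps_qual > 0:
                return msg.latitude, msg.longitude

def tag_photo(path, lat, lon):
    """Write the fix into the photo's EXIF GPS IFD, modifying the file in place."""
    gps = {
        piexif.GPSIFD.GPSLatitudeRef: b"N" if lat >= 0 else b"S",
        piexif.GPSIFD.GPSLatitude: to_dms_rational(lat),
        piexif.GPSIFD.GPSLongitudeRef: b"E" if lon >= 0 else b"W",
        piexif.GPSIFD.GPSLongitude: to_dms_rational(lon),
    }
    exif = piexif.load(path)
    exif["GPS"] = gps
    piexif.insert(piexif.dump(exif), path)

if __name__ == "__main__":
    lat, lon = read_fix()
    tag_photo("latest.jpg", lat, lon)  # illustrative file name
```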
Building a Mobile Application
The most frustrating part of the Narrative Clip I used before was how little control it gave me: there was no real feedback when it took a photo, and I had no idea whether the shots were any good.
This problem can be solved with the Raspberry Pi 3's onboard WiFi chip running in AP mode. With the Pi acting as a WiFi hotspot, my phone connects to a small web application that displays the photos just taken. The application is built with the Flask web framework. This small feature makes the camera far more convenient: I can finally see what it is capturing, and delete or rename photos on the spot.
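As a rough idea of what such an app can look like, here is a minimal Flask sketch for listing, previewing and deleting photos. The photo directory, routes and port are illustrative, and it omits renaming and any authentication since it only listens on the Pi's private hotspot.

```python
import os
from flask import Flask, send_from_directory, redirect, url_for

PHOTO_DIR = "/home/pi/photos"  # assumed location of captured photos
app = Flask(__name__)

@app.route("/")
def index():
    """List the captured photos, newest first, with preview and delete links."""
    rows = []
    for name in sorted(os.listdir(PHOTO_DIR), reverse=True):
        if not name.lower().endswith(".jpg"):
            continue
        rows.append('<p><a href="{}">{}</a> <a href="{}">[delete]</a></p>'.format(
            url_for("photo", name=name), name, url_for("delete", name=name)))
    return "".join(rows) or "No photos yet."

@app.route("/photo/<name>")
def photo(name):
    """Serve a single photo so it can be previewed on the phone."""
    return send_from_directory(PHOTO_DIR, name)

@app.route("/delete/<name>")
def delete(name):
    """Remove a bad shot and go back to the listing."""
    os.remove(os.path.join(PHOTO_DIR, name))
    return redirect(url_for("index"))

if __name__ == "__main__":
    # Listen on all interfaces so a phone on the Pi's hotspot can reach it.
    app.run(host="0.0.0.0", port=8000)
```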
The web application has also held up well in practice, staying connected to the Raspberry Pi throughout the day. I'd call it one of the most successful parts of the whole project, since I originally had low expectations for it. I also set up a backup plan: an Apache instance that can serve as a basic file browser if the Flask application ever fails.
RTC Issues
Throughout this project, I encountered significant issues with the RTC (real-time clock). I knew I would face RTC problems because the Raspberry Pi does not come with an RTC chip, but I didn’t expect the issue to be this severe.
To keep an eye on this, I added both a server-side timestamp and a JavaScript timestamp to the web app, which makes it easy to compare the time the camera thinks it is with the time on my phone.
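A minimal sketch of that check, shown here as a standalone Flask route for brevity; in practice it would simply be another route in the camera's web app, and the route name is illustrative.

```python
import time
from flask import Flask

app = Flask(__name__)

@app.route("/clock")
def clock():
    """Show the Pi's clock next to the phone's clock so any drift is obvious."""
    server_ts = int(time.time())  # time as seen by the camera (server side)
    return """
      <p>Camera (server) time: {}</p>
      <p>Phone (browser) time: <span id="js"></span></p>
      <script>
        // Filled in by the phone's browser, i.e. the phone's own clock.
        document.getElementById("js").textContent =
            Math.floor(Date.now() / 1000);
      </script>
    """.format(server_ts)
```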
I found that if I switch the camera off and turn it back on the next morning, its clock resets to roughly where it was when it was last powered on, about 24 hours earlier. Since photo filenames are based on timestamps, starting to shoot right after such a reset means new photos overwrite old ones. That is a genuinely painful failure mode.
The simplest workaround is to keep the device powered at all times. However, my battery pack only lasts about 30 hours, so on a trip of several days I would still have to stop and recharge a few times. Fortunately, I was holidaying in Western Europe, where connectivity is good, so every day or two I could offload the photos I had taken.
If you find this problem intolerable, you can choose to purchase an RTC chip for just 6 euros.
Time-lapse Shooting Results
Time to show the results! The video above is assembled from time-lapse photos: the camera takes a photo at a fixed interval, and the photos are then stitched into a video. It took a while to weed out the bad frames, but the overall result is much better than the Narrative Clip.
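For reference, a time-lapse capture loop with the picamera library might look like the sketch below; the interval, resolution and output folder are illustrative rather than the project's actual settings, and the folder is assumed to exist.

```python
import time
from picamera import PiCamera

INTERVAL = 30  # seconds between frames (illustrative)

camera = PiCamera(resolution=(1640, 1232))
time.sleep(2)  # let exposure and white balance settle before the first shot

try:
    # {timestamp} in the pattern is expanded to the capture time, so each
    # frame gets a unique, sortable filename.
    for filename in camera.capture_continuous(
            "/home/pi/photos/img-{timestamp:%Y%m%d-%H%M%S}.jpg"):
        time.sleep(INTERVAL)
finally:
    camera.close()
```

The resulting frames can then be stitched into a video with a tool such as ffmpeg.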
The Magic of OpenCV
I came back from the vacation with a pile of photos. The image quality of a wearable rig like this is nothing special, but I tried to improve it with the OpenCV library; the image above, for instance, can be noticeably improved with a few simple adjustments. Due to time constraints I won't be applying those adjustments to the time-lapse video, but you get the idea of what a discreet wearable camera like this can do.
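The article doesn't spell out which adjustments were applied, but a simple contrast boost with CLAHE is the kind of correction that helps flat, washed-out frames; here is a minimal OpenCV sketch with illustrative file names.

```python
import cv2

def enhance(path_in, path_out):
    """Apply CLAHE to the lightness channel to lift flat, washed-out shots."""
    img = cv2.imread(path_in)
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    merged = cv2.merge((clahe.apply(l), a, b))
    cv2.imwrite(path_out, cv2.cvtColor(merged, cv2.COLOR_LAB2BGR))

enhance("walk-1234.jpg", "walk-1234-fixed.jpg")  # hypothetical file names
```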
Interested readers can also check out this project on GitHub.
*Source: manoj.ninja; translated by FreeBuf editor Lao Wang. Please credit FreeBuf (FreeBuf.COM) when reposting.