We often hear that the Streamr tech stack sounds interesting in theory, but it’s tricky to understand exactly where and how to deploy it in practice. Let’s take a break from Web3 and look at some subjects in need of a data upgrade to imagine how a decentralized real-time data protocol, crowdselling framework, and data marketplace could be of value—starting with the fascinating, but frustratingly foggy, topic of UFO/UAP research.
Yes. For many, 2020 will likely be remembered for the COVID-19 pandemic. For me, it was the year I realised there might be something to the UFO phenomenon after all!
The watershed moment was a clip of retired US Navy Top Gun pilot Commander David Fravor describing his encounter with a 40-foot cylindrical “Tic Tac” that he was ordered to intercept off the coast of San Diego in 2004, and which appeared to run rings around his state-of-the-art jet. If you’re not familiar with this case, it’s one of the best documented (multiple credible witnesses and data points) and hardest to explain in prosaic terms (misidentification, hoax, known technology or natural phenomena).
Whatever the base reality of such encounters turns out to be, listen to earnest accounts from people like Commander Fravor, learn that his is but one of thousands of reports with extraordinary commonalities stretching back at least 80 years, and factor in recent government admissions, and it becomes clear that this is not a subject to be easily dismissed. What it needs is wider, honest research, and here’s where Streamr might come in: BETTER DATA AVAILABILITY.
The UFO data problem
The closer you look into the UFO phenomenon, the further away any single explanation for it recedes. The topic is a labyrinth of competing hypotheses, security classifications, hoaxes, spurious attempts to debunk and shape the narrative, misidentifications, unverifiable video, and a lack of public physical evidence, yet with enough reports from “credible observers of relatively incredible things” to keep interest alive.
The problem with these observer reports is that they are difficult to substantiate, follow up on, and quantify for trend or pattern analysis. Even in notable cases of mass UFO sightings, such as those in 1952 over Washington DC, 1997 over Phoenix, Arizona, and 1989–90 over Belgium, where dozens of witnesses, including police officers and pilots, reported unusual activity in the skies over an extended period, it’s a messy process to piece together an accurate timeline of locations, shapes, sizes, altitudes, and apparent motion.
Whether flares, balloons, mass delusions, or something else, having a stream of openly verifiable data to match against the reports and sporadic video evidence could aid with identification.
An imagined solution: real-time sky scanning Data Union
The skies are a busy place, and civilian options for monitoring them are limited. However, taking inspiration from Raspberry Pi powered rigs such as those used by amateur astronomers in the Global Meteor Network, multiple individually owned cameras or sensors could each be set up to record a section of the sky and plugged into the Streamr Network to share and aggregate the data for real-time monitoring of anomalous objects and motion.
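To make this concrete, here is a minimal sketch of what one message from such a rig might look like. The schema (field names, units, the `cam-042` sensor ID) is entirely invented for illustration; a real deployment would publish something like this JSON payload to a stream via a Streamr client.

```python
import json
import time

def make_sky_reading(sensor_id, lat, lon, heading_deg, elevation_deg, detection=None):
    """Build one message for a hypothetical sky-scanning stream.

    Each camera rig reports where it is, where it is pointing,
    and anything it has detected in its field of view.
    """
    return {
        "sensorId": sensor_id,
        "timestamp": int(time.time() * 1000),  # epoch milliseconds
        "position": {"lat": lat, "lon": lon},
        "view": {"headingDeg": heading_deg, "elevationDeg": elevation_deg},
        "detection": detection,  # e.g. apparent position and angular speed, or None
    }

# A rig on the San Diego coast reports an object crossing its field of view.
reading = make_sky_reading(
    "cam-042", 32.72, -117.16, heading_deg=270.0, elevation_deg=45.0,
    detection={"azimuthDeg": 265.2, "altitudeDeg": 41.8, "angularSpeedDegS": 3.1},
)
payload = json.dumps(reading)  # the serialised message a client would publish
```

With a shared schema like this, data from many independently owned rigs can be aggregated into one stream without the operators having to coordinate directly.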
The kinds of anomalous motion to watch for were outlined in a 2021 report on UFOs from the US Office of the Director of National Intelligence. According to the report, a small number of the hardest-to-explain, and therefore most interesting, UFO sightings are those that appear to demonstrate the following unusual flight characteristics:
“Some UAP appeared to remain stationary in winds aloft, move against the wind, maneuver abruptly, or move at considerable speed, without discernable means of propulsion. In a small number of cases, military aircraft systems processed radio frequency (RF) energy associated with UAP sightings.”
An AI plugin could be trained to sift through the data stream and flag any such anomalous readings or signatures, using the apparent trajectory to estimate the likelihood of the object appearing on other nearby sensors, whose location and status can be published on a new stream. Known flight, satellite, or meteorological activity can be factored into the AI filter.
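As a toy illustration of the checks such a filter might run, here is a sketch that flags the three characteristics quoted from the ODNI report. Everything is simplified: tracks are 2-D positions in metres, wind is a single vector, and the thresholds (1 m/s "stationary", 900 m/s "considerable speed") are invented placeholders.

```python
import math

def flag_anomalies(track, wind_speed_ms, wind_dir_deg, max_known_speed_ms=900.0):
    """Flag ODNI-style anomalous flight characteristics in a simple 2-D track.

    track: list of (t_seconds, x_metres, y_metres) samples from one sensor.
    wind_dir_deg: the direction the wind blows towards, from the +x axis.
    """
    (t0, x0, y0), (t1, x1, y1) = track[0], track[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    speed = math.hypot(vx, vy)
    # Wind as a vector, to compare against the object's direction of travel.
    wx = wind_speed_ms * math.cos(math.radians(wind_dir_deg))
    wy = wind_speed_ms * math.sin(math.radians(wind_dir_deg))

    flags = []
    if speed > max_known_speed_ms:
        flags.append("considerable speed")
    if speed < 1.0 and wind_speed_ms > 10.0:
        flags.append("stationary in winds aloft")
    if speed >= 1.0 and vx * wx + vy * wy < 0:
        flags.append("moving against the wind")
    return flags

# An object nearly motionless in a 15 m/s wind over one minute:
hover = flag_anomalies([(0, 0.0, 0.0), (60, 5.0, 0.0)], 15.0, 0.0)
# An object covering 1.2 km in one second, straight into the wind:
sprint = flag_anomalies([(0, 0.0, 0.0), (1, -1200.0, 0.0)], 15.0, 0.0)
```

A real filter would of course work on angular observations from many sensors, cross-reference flight and weather feeds, and use something far better than two-point differencing, but the shape of the logic is the same: compute kinematics, compare against what known craft and weather can do, publish any flags to a new stream.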
A localised real-time alert could also be sent to anyone subscribed to the stream in the area, who can turn on their idle sensors, investigate, and check for witness statements or posts on social media. The reverse may also be achieved following a witness sighting, by combing through the historical stream data to see if nearby sensors recorded anything at the time.
For researchers, data captured in such an uncontrolled manner is unlikely to be of sufficient quality to stand alone as evidence to draw conclusions from. However, if combined with other sources—such as video or witness testimonies—this data could provide value.
For camera operators, this data might not have enough commercial viability to incentivise trade in the Data Unions crowdselling model, but exploring such an ambitious sky scanning network experiment may prove insightful (and fun!) for those intrigued by the mystery.
The UFO video problem
The inability to trust what you’re seeing in video or images, or to verify their context and authenticity, is a growing challenge for the 21st century. With UFOs, sensational footage is often dismissed as fabricated, and low-resolution images are left too open to interpretation.
An imagined solution: app for video capture with real-time metadata authentication
If video could be captured, uploaded without the possibility of edits, and watermarked in a way that links to the verified metadata (location, time, sensor data) from its capture, with an optional witness statement attached, then perhaps the billions of HD smartphones we walk around with could better aid in UFO identification?
There are three key components that would need to be solved here:
- The metadata itself cannot be falsified (for example, by spoofing location with a VPN). This might be achieved by streaming the metadata in real time, which might be harder to fake.
- The metadata watermark reference cannot be added to a different video than its original. Cryptographically signing a video frame data payload at the source ensures that the frame can be traced back to the device that generated it.
- The video cannot be edited or enhanced before being uploaded. This might be achieved by scrambling the video in the device, with the only option to unscramble being to match the video’s signature to the metadata’s, with each containing part of the other.
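The second point, binding a frame to its capture metadata at the source, can be sketched as follows. This is a minimal stand-in, assuming an HMAC with a per-device secret in place of the asymmetric (public-key) signature a real deployment would use so that anyone, not just the key holder, could verify.

```python
import hashlib
import hmac

# Stand-in for a device's private key; a real scheme would use e.g. ECDSA
# so that verification needs only the device's public key.
DEVICE_KEY = b"per-device-secret"

def sign_frame(frame_bytes, metadata_json):
    """Bind a frame payload and its capture metadata together in one tag."""
    digest = hashlib.sha256(frame_bytes + metadata_json.encode()).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_frame(frame_bytes, metadata_json, tag):
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(tag, sign_frame(frame_bytes, metadata_json))
```

Because the tag covers both the frame bytes and the metadata, moving the watermark to a different video, or pairing the original video with altered metadata, breaks verification.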
Let’s walk through the flow of how this might work as a hypothetical smartphone application:
A person wishes to capture and authenticate a video of something. They open the app, which connects, via the Streamr Network, to a source that verifies the device’s real-time location, time, sensor, and camera configuration. Perhaps triangulation from cell towers could be included in this to avoid reliance on internet data.
This metadata is broadcast in real-time to the Streamr Marketplace and a live link established to the device. The video capture begins, and the metadata is relayed back from the stream with a unique identifier that is embedded as one half of a signature into the video being recorded.
At the same time, a unique identifier from the video (perhaps a random cross-section of pixels or frame data payload) is broadcast in tandem to the stream on the Marketplace, where it is stored as the other half of the signature for a complementary verification of the recording. This is not the video recording itself, but a signature formed from it, which uses much less bandwidth and does not contain sensitive data.
When the recording is complete, the video is scrambled locally within the app, including its half of the signature, without an option to export or edit. The app owner can then decide to upload the video, and include with it a witness statement.
The video is uploaded and unscrambled by combining the half of the signature embedded in the video with the complementary half saved on the Marketplace. The complete signature is then imprinted as a QR-code-like scannable watermark on the upload, so anywhere the video is shown, its watermark links to the Marketplace reference stream with the metadata and witness statement from its capture.
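The split-signature idea in the steps above can be modelled in a few lines. This is a deliberately simplified sketch: plain hashes stand in for the scrambling, the “random cross-section of pixels” is a seeded sample of frame bytes, and the seed is assumed to be published alongside the metadata half on the stream.

```python
import hashlib
import random

def video_half(frame_bytes, seed):
    """Hash a pseudo-random cross-section of the frame bytes (the video's half)."""
    rng = random.Random(seed)  # the seed would be published with the metadata
    sample = bytes(frame_bytes[rng.randrange(len(frame_bytes))] for _ in range(64))
    return hashlib.sha256(sample).hexdigest()

def metadata_half(metadata_json):
    """Hash of the real-time metadata relayed back from the stream."""
    return hashlib.sha256(metadata_json.encode()).hexdigest()

def watermark(v_half, m_half):
    """The complete signature imprinted on the upload as a scannable watermark."""
    return hashlib.sha256((v_half + m_half).encode()).hexdigest()

def verify_upload(frame_bytes, seed, metadata_json, mark):
    """Recompute both halves and check them against the imprinted watermark."""
    return watermark(video_half(frame_bytes, seed), metadata_half(metadata_json)) == mark

frame = b"raw frame payload " * 16
meta = '{"lat": 32.72, "lon": -117.16, "t": 1625000000}'
mark = watermark(video_half(frame, seed=7), metadata_half(meta))
```

Editing the frame changes the sampled cross-section, so the recomputed video half, and with it the watermark, no longer matches; swapping in different metadata breaks the other half the same way.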
It would take a lot of work to build this and ensure that the metadata could not be faked, and crucially, that the watermark could not be added to a secondary or edited video. There are likely better ways to do this than outlined here, but the real-time transmission of the metadata, and a watermark formed out of the video content, should make forgery a greater challenge than offline alternatives.
Even with these or similar solutions built, data can still be manipulated by practical effects and other means, so no video or data stream is likely to stand alone as extraordinary evidence of extraordinary claims anytime soon. However, this line of thinking, aided by decentralized, trustless protocols like Streamr, could be a start towards a higher standard of video referencing and open data sharing that shines a little more light on where it is needed.
Are you a technically minded person who might want to experiment or build anything like the above using the Streamr tech stack? Get in touch by submitting an application to the Data Fund.
Let us know what areas to consider next in the Streamr Use Case series!
Want to learn more about Streamr and how to participate in the Network? Join the discussion and find your channel on Discord.