Dev Update May 2020

Welcome to the Streamr dev update for May 2020! Looking back at last month’s update, we’re happy to see that many major development strands that were still seeking solutions only a month ago have now found them and fallen nicely into place. Here are a few hand-picked highlights from May:

  • Solved all remaining problems blocking the upcoming Network whitepaper
  • Started testing the WebRTC flavour of the Network at scale
  • Got the Data Unions framework relatively stable in advance of entering public beta
  • Started planning a roadmap towards the next major Data Unions upgrade
  • Completed Phase 0 of token economics research with BlockScience

The Network whitepaper

For over nine months now, a few people in the Network team have been hard at work documenting and benchmarking the Network at scale. The deliverable of that effort is an academic paper, intended for deep tech audiences in both the enterprise and crypto spaces as a source of detailed information and metrics about the Network.

This blog post from September outlined the approaches and toolkit we were using to conduct the experiments, but the road to the goal turned out to be quite complicated. We’ve sort of learned to expect the unexpected, because pretty much everything we do is in uncharted territory, but trouble can still come in surprising shapes and sizes.

We worked steadily on setting up a sophisticated distributed network benchmarking environment based on the CORE network emulator, only to ditch it several months later because it was introducing inaccuracies and artifacts into our experiments at larger network sizes of 1000 nodes or more. We then activated Plan B, which meant running the experiments in a real-world environment instead of the emulator.

We chose 16 AWS data centres across the globe and ran between 1 and 128 nodes in each of them, creating Streamr Networks of 16 – 2048 nodes in size. The new approach was foolproof in the sense that the connections between nodes were real, actual internet connections, but running a large-scale distributed experiment across thousands of machines brought its own problems. To give some examples: first, it required fairly sophisticated orchestration to bring the whole setup up and tear it down between experiments. Second, accurately measuring latencies required the clocks of all machines to be synchronised to sub-millisecond precision. Third, the resulting logs needed to be collected from each machine and assembled for analysis. None of this was necessary in the earlier emulator approach, but the reward for the extra trouble was accurate, artifact-free results from real-world conditions, adding a lot of relevance and impact to the results.
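As a concrete illustration of why clock synchronisation matters, here’s a minimal sketch (with illustrative names and numbers, not our actual tooling) of how per-machine clock offsets are applied to recover a true one-way latency:

```javascript
// Sketch of offset-corrected one-way latency measurement. Each
// machine's clock error versus a reference (e.g. via NTP) is estimated
// as an offset in milliseconds; converting both local timestamps to
// reference time before subtracting yields the true propagation latency.
function oneWayLatencyMs(sentAt, receivedAt, senderOffsetMs, receiverOffsetMs) {
  // Subtract each machine's offset to map its local clock onto reference time.
  return (receivedAt - receiverOffsetMs) - (sentAt - senderOffsetMs);
}

// Sender clock runs 3 ms fast, receiver 2 ms slow. The raw difference
// (105 - 100 = 5 ms) is wrong; the corrected latency is 10 ms:
console.log(oneWayLatencyMs(100, 105, 3, -2)); // 10
```

Without the offset correction, the raw timestamp difference is off by the combined clock error of the two machines, which easily dwarfs the latencies being measured.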

During May, we finally got each and every problem solved, and managed to eliminate all unexpected artifacts in the measured results. Right now we are finalising the text around the experiments and their results, and we are expecting the paper to become available on the project website in July.

Network progress towards Brubeck

Working towards the next milestone, Brubeck, means making many important improvements. One of them is enabling nodes behind NATs to connect to each other, which will allow us to make each client application basically a node. This, in turn, helps achieve almost infinite scalability in the Network, because clients will then help propagate messages to other clients. The key to unlocking this is migrating from websocket connections to WebRTC connections between nodes. This work is now in its advanced stages, although we are still observing some issues when there are large numbers of connections per machine. Having developed the scalability testing framework for the whitepaper comes in handy here; the correct functioning of the WebRTC flavour of the Network can be validated by repeating the same experiments and checking that the results are in line with the ones we got with the websocket edition.

Another step towards the next milestone is making the tracker setup more elaborate. Trackers are utility nodes that help other nodes discover each other and form efficient and fair message broadcasting topologies. When the Corea milestone version launched, it supported only one tracker, statically configured in the nodes’ config files, making peer discovery in the Network a centralized single point of failure; if the tracker failed, message propagation in the Network would still function, but new nodes would have trouble joining, over time deteriorating the Network. Thanks to recent improvements, the nodes can now map the universe of streams to a set of trackers, which can be run by independent parties, allowing for decentralization. Trackers can now be discovered from a shared and secure source of truth, a smart contract on Ethereum mainnet, which in the future could be a token-curated registry (TCR) or a DAO-governed registry. The setup is somewhat analogous to the root DNS servers of the internet, governed by ICANN – only much more transparent and decentralized.

Ongoing work also includes improving the storage facilities of the Network. Storage is implemented by nodes with storage capabilities. They basically store messages in assigned streams into a local Cassandra cluster and use the stored data to serve requests for old messages (resends). The current way we store data in Cassandra has been problematic when it comes to high-volume streams, leading to uneven distribution of data across the Cassandra cluster, which in turn leads to query timeouts and failing resends. In the improved storage schema, data will be distributed more evenly, and such hotspot streams shouldn’t cause problems going forward. As a result, the Network will offer reliable and robust resends and queries for historical data.
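One common way to avoid such hotspots, sketched below, is to include a time bucket in the partition key so that a high-volume stream spreads across many Cassandra partitions instead of piling into one. The one-hour bucket size and the key layout here are illustrative assumptions, not the actual schema:

```javascript
// Sketch of a time-bucketed partition key. Keying all messages of a
// stream to a single partition turns a high-volume stream into a
// hotspot; including a time bucket in the key spreads its data over
// many partitions across the cluster.
const BUCKET_MS = 60 * 60 * 1000; // one hour per bucket (assumed)

function partitionKey(streamId, streamPartition, timestampMs) {
  const bucket = Math.floor(timestampMs / BUCKET_MS);
  return `${streamId}:${streamPartition}:${bucket}`;
}

// Two messages an hour apart land in different partitions:
console.log(partitionKey('stream-1', 0, 0));         // stream-1:0:0
console.log(partitionKey('stream-1', 0, BUCKET_MS)); // stream-1:0:1
```

A resend for a time range then only needs to touch the buckets that overlap the range, which keeps queries bounded even for very busy streams.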

There’s also ongoing work to upgrade the encryption capabilities of the Network – or more specifically the SDKs. The protocol and Network have actually supported end-to-end encryption since the Corea release, but the official SDKs (JS and Java so far) only implement end-to-end encryption with a pre-shared key. The manual step of pre-sharing the encryption key limits the usefulness of the feature. The holy grail here is to add a key exchange mechanism, which enables publishing and subscribing parties to automatically exchange the decryption keys for a stream. This feature is now in advanced stages of implementation, and effortless encryption should become generally available during the summer months.

Data Unions soon in public beta

The Data Unions framework is approaching a stable state. In the April update, we discussed some issues where the off-chain state of the DUs became corrupted, leading to lower than expected balances. All known issues were solved during May, and the system has been operating without apparent problems since then.

The Data Unions framework has been in private beta since late last year, with a couple of indie teams (Swash having made the most progress so far) building on top of it. During private beta, we’ve been working on stability, documentation, and frontend support for the framework. We’re now getting ready to push the DU framework into public beta, which means that everyone can soon start playing around with it. The goal of the public beta phase over the summer months is to get more developers hands-on with the framework, and to iron out remaining problems that might occur at larger-scale use (and abuse).

We’ve also started planning the first major post-release upgrade to the Data Unions architecture, which will improve the robustness and usability of the framework. We are currently working on a proof of concept, and we’ll be talking more about the upgrade over the course of the summer.

Phase 0 of token economics research completed

As was mentioned in one of the earlier updates, we started a collaboration with BlockScience to research and economically model the Streamr Network’s future token incentives. It’s a long road and we’ve only just started, but it’s worth sharing that in May we reached the end of Phase 0. This month-long phase was all about establishing a baseline: transferring information across teams, establishing a glossary, documenting the current Network state and future goal state, and writing down what we currently know as well as key open questions.

The work now continues with Phase 1, the goal of which is to define mathematical representations of the actors, actions, and rules in the Streamr Network. In future phases, the Network’s value flows will be simulated based on this mathematical modeling, to test alternative models and their parameters and to inform decisions that lead to incentive models sustainable at scale.

Looking forward

By the next monthly update, we should have Data Unions in public beta, and hopefully also the Network whitepaper released. Summer holidays will slow down development efforts over July and August, but judging by previous summers, this shouldn’t prevent us from making good progress.

To conclude this post, I’ll include a bullet-point summary of main development efforts in May, as well as a list of upcoming deprecations that developers building on Streamr should be aware of. As always, you’re welcome to chat about building with Streamr on the community-run dev forum or follow us on one of the Streamr social media channels.

Network

  • Experiments for the Network whitepaper have been completed. Finalising text content now
  • Java client connection handling issues solved. Everything running smoothly again, including canvases
  • The Network now supports any number of trackers
  • Brokers can now read a list of trackers from an Ethereum smart contract on startup
  • WebRTC version of the Network is ready for testing at scale
  • Token economics research Phase 0 completed
  • Working on a new Cassandra schema and related data migration for storage nodes
  • Working on key exchange in JS and Java clients to enable easy end-to-end encryption of data

Data Unions

  • Data Union developer docs are complete
  • Problems causing state corruption were fixed
  • Started planning a major architectural upgrade to Data Unions

Core app, Marketplace, Website

  • Streamr resource permissions overhaul is done
  • Buyer whitelisting feature for Marketplace is done
  • Working on adding terms of use, contact details, and social media links to Marketplace products
  • Working on a website update containing updates to the top page, a dedicated Data Unions page, and a Papers page to collect the whitepaper-like materials the project has published.

Deprecations and breaking changes

This section summarises deprecated features and upcoming breaking changes. Items with dates TBD are known already but will occur in the slightly longer term.

  • (Date TBD): Authenticating with API keys will be deprecated. As part of our progress towards decentralisation, we will eventually end support for authentication based on centralised secrets. Integrations with the API should instead authenticate with the Ethereum key-based challenge-response protocol, which is supported by the JS and Java libraries. At a later date (TBD), support for API keys will be dropped. Instructions for upgrading from API keys to Ethereum keys will be posted well in advance.
  • (Date TBD): Publishing unsigned data will be deprecated. Unsigned data on the Network is not compatible with the goal of decentralization, because untrusted nodes could easily tamper with data that is not signed. As the Streamr Network will be ready to start decentralizing at the next major milestone (Brubeck), support for unsigned data will end before that milestone is reached. Users should upgrade old client library versions to newer versions that support data signing, and use Ethereum key-based authentication (see above), which enables data signing by default.

Stay up to date

Get the latest Streamr news and articles delivered to your inbox