Developing Envoy with Rust

A few days ago, in the envoy-wasm Slack group, I saw Yan, a Google engineer, discussing bringing Rust into Envoy.

This time he mentioned Crubit, Google’s tool for C++/Rust interoperability, which looks quite interesting. Here are some quick notes on it.

As I recall, this has been discussed in the Envoy community for a while; even Matt Klein, the creator of Envoy, is a fan of Rust.

There are two directions: one is to write the core of Envoy in Rust, and the other is to write extensions.

The former is quite radical, and I haven’t seen anyone attempt it yet; the latter is relatively easier. At the least, there is a PR from the engineer snowp, who tried writing an echo demo in Rust: https://github.com/envoyproxy/envoy/pull/25409.

Haha, it’s somewhat similar to the echo-nginx-module that agentzh (Yichun Zhang) wrote for Nginx: https://github.com/openresty/echo-nginx-module.

However, that attempt has since been abandoned; making Rust interoperate with a large C++ project like Envoy is not that simple.

Why Not Wasm

Rust can be compiled to Wasm, and Wasm is already integrated into Envoy. So why do some people still want to mess with native Rust extensions?

The engineers in the group have discussed quite a bit, and I’ll share my understanding:

  1. Wasm requires memory copying.

    Wasm’s memory-safety model confines the guest to reading and writing linear memory inside the VM, so data crossing the host boundary must be copied. In practical implementations, a single read may even involve multiple copies.

  2. Limited API capabilities.

    Host capabilities must be wrapped into an API layer for Wasm to call (as proxy-wasm-cpp-sdk does), which makes for a long encapsulation chain and limits extensions to whatever that API exposes.

  3. Insufficient support for Rust language features.

    For example, asynchronous functions are poorly supported: the Wasm VM itself would need to support an asynchronous runtime.

What Does Crubit Solve?

Crubit is a C++/Rust interoperability tool; see the official documentation: https://github.com/google/crubit.

Specifically, it includes two points:

  1. Mutual function calls.

    C++ and Rust functions can call each other directly.

  2. Mutual memory access.

    C++ and Rust data structures can be read and written from the other language.

The hoped-for result is that developing extensions in Rust becomes as convenient as doing so in C++.

In comparison, the simplifications include:

  1. No need to wrap an FFI API for cross-language calls, which is a problem faced by existing non-C++ extensions.
  2. No need to manually write memory struct mappings; you can even read and write memory across languages directly.

How It Works

Simply put, it analyzes the source code and automatically generates FFI wrapper code.

For example, from this C++ struct:

```cpp
struct Position {
  int x;
  int y;
};
```

It will automatically generate the following Rust struct:

```rust
pub struct Position {
    pub x: ::core::ffi::c_int,
    pub y: ::core::ffi::c_int,
}
```

This way, when writing Rust code, you can manipulate C++ structs as if they were Rust, which is quite convenient.

Especially for a large C++ project like Envoy, with its many nested structs, writing these mappings by hand is a significant undertaking, and maintaining them costs just as much.

What’s the Future?

Firstly, the Crubit project is still in its initial “MVP” version, and from the documentation, it seems there are still quite a few limitations.

Moreover, Yan’s ideas are still just thoughts; he hasn’t started working on it yet, so there’s a long way to go.

If it really comes to fruition, it should be much more convenient to use, and we might see new extensions implemented in Rust.

Furthermore, if this step is successful, it’s possible that more core code in Envoy will be written in Rust in the future.

Let’s wait and see!

Still Hard Problems Ahead

However, even if the interoperability between C++ and Rust is resolved, there’s still a tough problem ahead: asynchronous scheduling.

Envoy has a single-threaded asynchronous concurrency model, and Rust has its own asynchronous abstractions; how can the two work together smoothly?

In simple terms, the solutions above address how Rust’s synchronous functions run on Envoy; what remains is how Rust’s asynchronous functions run on Envoy.

Can We Use Tokio?

Can we directly use an asynchronous runtime like Tokio? The answer is that it’s not easy.

Tokio is a multi-threaded runtime with its own scheduling mechanism, which we can simply compare to Golang’s runtime scheduling.

An asynchronous function could be scheduled to run on different threads, which would break Envoy’s single-threaded concurrency model.

(Of course, it can theoretically be solved by binding a single-threaded Tokio runtime to each Envoy thread.)

Moreover, Tokio needs to drive its own main loop around epoll_wait, which conflicts with the epoll_wait loop Envoy already runs.

Unless we merge the two, or Envoy creates its own Rust asynchronous runtime.

The workload involved in this is quite daunting.

What’s the Difference with MoE?

Haha, my day job is MoE (MOSN on Envoy), so naturally I have to make a comparison.

MoE embeds Golang into Envoy, while the Rust embedding discussed here is a different approach to the same problem.

Firstly, we encounter the same issues:

  1. Cross-language function calls and memory operations require some wrapper glue code.
  2. Envoy has a single-threaded concurrency model convention.

However, the problem-solving approaches are quite different.

Interoperability

MoE simply writes the wrapper code by hand: since it is scoped to extension development, the set of APIs needed is not large, so the total amount of glue is limited.

The Rust solution, however, has the potential to enter the Envoy Core code later, so expectations are higher.

If a more general solution emerges, it’s not impossible that C++ code in Envoy Core could gradually be replaced by Rust.

Single-threaded Constraints

MoE cleverly sidesteps the single-threaded constraint: Goroutines may be scheduled onto other threads, so no extra restrictions are imposed on the Golang runtime.

This allows us to support the standard Golang runtime, and existing Golang libraries can be used directly without modification.

Meanwhile, our wrapper framework code still upholds Envoy’s single-threaded concurrency convention, so no concurrency issues arise.

Rust’s asynchronous mechanism, on the other hand, is not as fully encapsulated as Golang’s Goroutines; it is closer to Lua’s cooperative coroutines.

Moreover, Rust leaves the asynchronous runtime to third-party libraries (unlike Lua, which builds resume and yield scheduling APIs into the language).

So, Rust has the opportunity to combine language asynchronous scheduling with the host’s event loop, similar to OpenResty.

Final Effects

A major highlight of MoE is its support for native Golang, allowing existing Golang libraries to be used directly without modification.

If the Rust solution succeeds, its generality may even surpass what OpenResty achieved by embedding Lua.

In OpenResty, Lua’s non-blocking libraries had to be rewritten on top of Nginx’s event loop; network libraries, for example, must be built on cosocket.

However, if Rust’s asynchronous runtime can be integrated with Envoy’s event loop, existing asynchronous implementations could potentially run unmodified; whether that holds in practice will depend on the specific implementation.

Although I’m not very familiar with Rust, I can still confidently say this challenge is not small, and it won’t be resolved quickly.

In Conclusion

I’m really looking forward to Rust entering Envoy. If it becomes a reality, it will add a Rust extension mechanism to compete with the Golang extension mechanism.

However, based on my superficial understanding of Rust and Golang, I believe the development paths of the two are quite different: Rust leans more towards system programming, while Golang is more focused on business programming.

This means Rust is more suitable for the core of Envoy, while Golang is more suitable for Envoy extensions.

Of course, this is just my personal opinion; I welcome everyone to engage in technical exchanges!
