Multi-module support in mod_wasm v0.10.0
We are excited to announce the release of mod_wasm v0.10.0. This version improves performance and includes support for running multiple modules, allowing you to mix and match applications written in different languages.
When we introduced mod_wasm back in October, our goal was to bring WebAssembly to the Apache web server, so developers could adopt Wasm in a popular environment they are familiar with.
In that first v0.1.0 release, we created an architecture that combined an Apache extension module written in C with a WebAssembly runtime written in Rust. We also created a workflow to wire incoming HTTP requests from the Apache server to a function within a WebAssembly module.
We recently announced a WASI port of PHP. Combined with mod_wasm, it allows Apache to serve traditional PHP applications while leveraging WebAssembly's security capabilities. We demonstrated this combination by running WordPress with mod_wasm.
With the initial version, we proved what was possible. With this new version, we focused on adding functionality and improving performance. These are the highlights from the new mod_wasm v0.10.0:
- Multi-module support (#6)
- Shared Wasm modules with different configurations (#7)
- Improved performance on stdout buffers (#16)
Multi-module support
Previously, you could specify in `httpd.conf` the minimum information needed to load and instantiate a Wasm module with a specific WASI context. This setup was enough to support running the WordPress demo mentioned earlier:

```apache
WasmMapDir /home /var/www/hello
```
When trying different modules, or different WASI contexts, you had to comment/uncomment the configuration blocks and restart the Apache server. It worked, but it was not optimal. We needed a way to define different modules and their configurations. This would allow developers to serve, for instance, PHP and Python applications in Apache using only one extension module (mod_wasm), instead of a different module for each runtime. Below you can find the new `httpd.conf` syntax in v0.10.0 for defining Wasm modules and their configurations per location:
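As an illustrative sketch of the per-location syntax (the paths, location names, and the `SetHandler` value are assumptions for illustration, not taken verbatim from the release), one Apache instance could serve a PHP app and two Python apps side by side:

```apache
LoadModule wasm_module modules/mod_wasm.so

<Location /php-app>
  SetHandler wasm-handler          # assumption: mod_wasm's handler name
  WasmModule /var/www/modules/php8.2.wasm
  WasmMapDir /home /var/www/php-app
</Location>

<Location /python-app>
  SetHandler wasm-handler
  WasmModule /var/www/modules/python3.11.wasm
  WasmMapDir /home /var/www/python-app
</Location>

<Location /python-app-2>
  SetHandler wasm-handler
  WasmModule /var/www/modules/python3.11.wasm
  WasmMapDir /home /var/www/python-app-2
</Location>
```

Each `<Location>` block carries its own module and WASI configuration, so routes can be backed by different runtimes without loading extra Apache modules.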
🚨 This feature introduces a breaking change: the `WasmRoot` directive has been removed, and `WasmModule` now accepts the full file path.
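A sketch of the migration (the directory path is an illustrative assumption):

```apache
# Before v0.10.0: the module file name was resolved relative to WasmRoot
WasmRoot   /var/www/wasm_modules
WasmModule python3.11.wasm

# From v0.10.0 on: WasmRoot is gone; WasmModule takes the full file path
WasmModule /var/www/wasm_modules/python3.11.wasm
```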
Shared Wasm modules with different configurations
In the `httpd.conf` example above, two different applications use the same `python3.11.wasm` module. The simplest implementation would manage each `<Location>` directive group as a unique, isolated piece of configuration. That would mean the file is loaded twice, increasing Apache's initialization time and doubling memory consumption. As an example, the Python runtime embedded into `python3.11.wasm` weighs 25 MB.
Loading a Wasm file from disk and compiling it into memory is a heavyweight process. And since a Wasm file is essentially composed of read-only instructions, it makes sense to cache these modules once they are loaded and compiled, just before they are instantiated with their specific WASI context.
In v0.10.0 we have added a cache mechanism for Wasm modules. Automatically and transparently, when the same Wasm file is referenced in more than one `WasmModule` directive, mod_wasm will reuse the already loaded and compiled Wasm module. The improvement is noticeable in Apache's initialization time and memory footprint.
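Concretely, a configuration like the following sketch (location names and paths assumed for illustration) triggers only one load-and-compile:

```apache
# Same file path in both directives: one load + compile,
# two instantiations with their own WASI contexts.
<Location /app-a>
  WasmModule /var/www/modules/python3.11.wasm
</Location>

<Location /app-b>
  WasmModule /var/www/modules/python3.11.wasm
</Location>
```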
Improved performance on stdout buffers
So far, we have been discussing features that provide developers with more flexibility and an enhanced experience, while at the same time reducing mod_wasm's impact on Apache's initialization time and memory footprint. What can we do to make mod_wasm execution faster?
The Apache server can run hybrid multi-threaded multi-process modules (MPMs), in two main variants: worker and event. In our initial version, our multi-threading support was very simple. A big bottleneck was the buffer needed for the WASI `stdout`. Since `libwasm_runtime.so` is a shared library loaded by `mod_wasm.so`, this buffer was originally modeled as a static variable so it could survive between different function invocations, until the module execution was done with its output. The buffer was mutex-protected, so new incoming HTTP requests had to wait until the previous request was done.
In the new mod_wasm v0.10.0 design, a new entity, `WasmExecutionCtx`, has been defined. For each HTTP request, a new instance is created that inherits the configuration for its dispatching route and provides a unique WASI `stdout` buffer, without the need for mutexes between different executions. This way, we reuse the Apache thread that dispatches the request to build the WASI context, instantiate the Wasm module, run the Wasm function, and return the `stdout` buffer as the HTTP response. This new design boosted throughput, since the main bottleneck around Wasm execution has been removed.
Thanks to everyone who provided feedback and participated in the design and implementation of mod_wasm v0.10.0! We continue working towards a production-ready mod_wasm v1.0!
To see this new version of mod_wasm in action:
Run the container:

```shell
docker run -p 8080:8080 ghcr.io/vmware-labs/httpd-mod-wasm:latest
```

And open the browser at:
| Demo | Runtime | URL |
|---|---|---|
| HTTP Request Viewer | Python 3.11 | http://localhost:8080/http-request-viewer |
That's it. We are looking forward to your feedback! If you like the work we are doing with mod_wasm, don't forget to give us a star on our GitHub repo!