
Reflecting architectural and technology choices

Lucca Greschner edited this page Jul 14, 2023 · 22 revisions

Golang

Concurrency

Goroutines

Although the application does not perform many concurrent tasks, Go provided the required tooling to make concurrency easy when needed. To start a concurrent task, one just prepends the keyword go to a function call, which starts a so-called goroutine. This makes concurrent actions very easy to write and read.

Channels

Besides that, Go brings the concept of channels, which can be used to communicate between goroutines. Both sending and receiving use the <- operator: ch <- value sends, value := <-ch receives. Receiving blocks until a value is available, and sending blocks until a receiver is ready unless the channel is buffered and has free capacity. This came in really handy with the publisher-subscriber pattern.
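
A tiny sketch of the send/receive syntax (function name is illustrative):

```go
package main

import "fmt"

// roundTrip sends s through an unbuffered channel from a goroutine
// and returns what the receiving side got. The send blocks until the
// receive in the last line is ready to take the value.
func roundTrip(s string) string {
	ch := make(chan string)
	go func() { ch <- s }() // send: channel on the left of <-
	return <-ch             // receive: channel on the right of <-
}

func main() {
	fmt.Println(roundTrip("hello from goroutine"))
}
```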

The sync package

The sync package is part of the Go standard library and provides simple and easy to use functionality for handling concurrency. Most notably, it provides mutexes to lock data from being modified concurrently.

Another handy feature it brings is the sync.Once struct, whose Do method ensures that a certain action is executed only once during the lifetime of the initialized struct.

Memory management

Having memory management handled by a garbage collector in an otherwise fairly low-level language makes it harder to screw up. Especially since we hadn't dealt with manual memory management much before, it was a relief that this is not something to worry about. However, the Go garbage collector has its drawbacks, one being that it makes the executable larger.

Compiled language

In comparison to Java which is run through the Java Virtual Machine and JavaScript which is interpreted by various engines, Go is compiled into a native binary.

On the one hand, this provides us with more direct system access than the aforementioned languages. However, interacting with C libraries still needs a binding layer in between (see the use of the pam bindings by Mike Steinert in this project).

On the other hand, this confronted us with problems we hadn't had with higher-level languages before. For instance, concurrent for loops can potentially make your entire system unresponsive when done incorrectly. In our experience this isn't the case for Java, where the JVM would crash the application before this happens, and in the case of JavaScript the engine would probably die before the system locks up.

Large binaries

A drawback of Go with respect to binary size is that its compiler is designed to statically link the entire runtime (see the Go documentation). This leads to our application being ~22 MB in size when no compiler flags are modified. However, adding the linker flags -w and -s (which drop the DWARF debugging information and the symbol table) reduces this to ~13 MB.

Further size improvements can be made with UPX, which compresses the binary. However, this may lead to the binary being flagged by anti-malware software. Also, a compressed binary is loaded into memory as a whole, whereas for uncompressed binaries only the needed parts are loaded.

Testing

Go provides a powerful but simple testing framework in its standard library. Tests can be run through the go test command which also provides neat tools like coverage checking.

testing.T

A Go test function receives a pointer to a testing.T struct as its parameter. testing.T has several methods that control the test's result and how it is run. The Fail method marks the test as failed and continues its execution, while FailNow stops the test immediately; however, it does not stop other goroutines, and other tests are still run. The Error method logs a message before calling Fail, and the Fatal method does the same but calls FailNow.

Parameterized testing is possible through table-driven tests: the Run method runs a subtest function under a given name and, combined with a for loop over a table of cases, starts one subtest per entry.
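
A typical table-driven test might look like this sketch (the function under test and the case names are illustrative):

```go
package main

import "testing"

func Add(a, b int) int { return a + b }

func TestAdd(t *testing.T) {
	cases := []struct {
		name string
		a, b int
		want int
	}{
		{"positive", 1, 2, 3},
		{"negative", -1, -2, -3},
		{"zero", 0, 0, 0},
	}
	for _, tc := range cases {
		// Run starts one named subtest per table entry.
		t.Run(tc.name, func(t *testing.T) {
			if got := Add(tc.a, tc.b); got != tc.want {
				t.Errorf("Add(%d, %d) = %d, want %d", tc.a, tc.b, got, tc.want)
			}
		})
	}
}
```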

testing.M

The testing.M struct can be used to configure the tests of an entire package before any of them are run: a function named TestMain(m *testing.M) runs instead of the tests directly and decides when to call m.Run().

Considerations

The Go testing framework certainly is bare-bones, but it is also very powerful. Because the tests are located in the same package as the code, all package-level identifiers are visible to the tests. Unlike in object-oriented programming languages like Java, where fields can be shielded by access modifiers even against access from tests, in Go we can access any identifier inside the package. This makes it easier to structure code for convenient testing.

While the methods for controlling tests are certainly enough, there are no built-in helpers for asserting equality. All such checks need to be written by hand, which can be pretty tedious. Therefore, this project uses testify, a small library that provides assert functions similar to Java's JUnit.

What about mocking? Mocking isn't really a thing you need to do in Go. Simply implement the interface with your own testing implementation as seen in the tests for the logging wrapper and you're good to go.

All in all, testing in Go has been as good as in any other language I have used before. However, the way errors are handled in Go certainly made it easier not to miss anything. And the best thing is that all of this can be done with just the standard library.

Network stack

A reason we chose Go for our backend is its extensive network stack. It is very easy to set up a working HTTP server within minutes. It also provides huge flexibility through the concept of http.HandlerFunc, a type that is just a function accepting an HTTP request and a writer to send the response to. Using this concept it is very easy to implement routers or middleware yourself.

Besides HTTP, the net package and its subpackages, which essentially contain Go's complete network stack, provide tools for other protocols such as TCP, UDP, IP, SMTP (net/smtp) and RPC (net/rpc); TLS support lives in the neighboring crypto/tls package. There is also a supplementary module, x/net, which extends the standard library and additionally provides support for HTTP/2, WebSocket, DNS, ICMP and WebDAV.

This powerful network stack allowed us to almost exclusively use built-in functionality for HTTP, WebSocket and RPC connections in our project. Only for WebSocket upgrades and CORS did we need additional libraries.

When to use and when not to use Go?

Go is a fascinating language when searching for a middle ground between low level APIs and higher level language features. Go doesn't provide much syntactical sugar. However, most ideas implemented in Go make for an excellent programming experience.

However, Go - like every programming language - isn't a tool for every purpose. Go shines when it comes to its great network stack and handling concurrency.

However, the lack of functional programming features such as map, filter and reduce makes manipulation of data oftentimes tedious and harder than it could be. When dealing with lots of data, Rust or Python might be a better choice.

It also does not shine when it comes to quick-and-dirty approaches. Most built-in functionality is crude and rudimentary by design, so that the programmer can adjust it to their needs. This means that most things won't work out of the box but need to be combined with your own solutions, wrappers and ideas. While this flexibility brings great advantages in long-term projects, where things like logging and networking can be tailored to the needs of the application, for a quick project an interpreted/scripting language like Python, JavaScript or even Bash would do a better job.

tl;dr - Use Go with network-related stuff, highly concurrent applications, microservices or general backend tasks. Stay away from it when it comes to easy handling of data, quick and dirty projects and frontend tasks.

Configuration libraries

The most popular library for configuring a Go application seems to be viper. However, due to its abundant feature set it is quite a big dependency.

That's the reason I switched to koanf, which lets you import configuration providers (i.e. different ways to configure your application) as needed, making it much more lightweight compared to viper.

It also allows to easily combine multiple sources for configuration by just loading them into the same config object.
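
This is not koanf's actual API, but the layering idea - later sources override earlier ones in the same config object - can be sketched with plain maps (the keys and the APP_PORT variable are hypothetical):

```go
package main

import (
	"fmt"
	"os"
)

// merge copies src over dst, so later layers override earlier ones.
func merge(dst, src map[string]string) {
	for k, v := range src {
		dst[k] = v
	}
}

func main() {
	conf := map[string]string{}

	merge(conf, map[string]string{"port": "8080", "log": "info"}) // defaults
	merge(conf, map[string]string{"log": "debug"})                // e.g. from a config file

	// Environment variables win over everything (hypothetical variable name).
	if v, ok := os.LookupEnv("APP_PORT"); ok {
		merge(conf, map[string]string{"port": v})
	}

	fmt.Println(conf["port"], conf["log"])
}
```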

Publisher-Subscriber-Pattern

Our intent for this project was to experiment with modularity. Thus, we established the rule 'Modularity at all costs'. This led us to choose a publisher-subscriber pattern as the heart of our backend application.

Pro Pub-Sub

Using a publisher-subscriber pattern provided us with the flexibility to keep adding modules and even make them loadable as RPC plugins, which was very easy to do thanks to the decoupling the pattern brought to our application.

Also, it allowed us to save all measured data in our database. Had we not used a Pub-Sub system, this would have been more difficult.
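
The shape of the pattern can be sketched in a few lines; this is not the project's actual implementation, and the topic name is made up:

```go
package main

import (
	"fmt"
	"sync"
)

// Broker is a minimal publish-subscribe hub. Subscribers get buffered
// channels; a production broker would also handle unsubscribing and
// slow consumers.
type Broker struct {
	mu   sync.Mutex
	subs map[string][]chan string
}

func NewBroker() *Broker {
	return &Broker{subs: make(map[string][]chan string)}
}

func (b *Broker) Subscribe(topic string) <-chan string {
	b.mu.Lock()
	defer b.mu.Unlock()
	ch := make(chan string, 8)
	b.subs[topic] = append(b.subs[topic], ch)
	return ch
}

func (b *Broker) Publish(topic, msg string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, ch := range b.subs[topic] {
		ch <- msg // buffered, so this only blocks if a subscriber lags far behind
	}
}

func main() {
	b := NewBroker()
	cpu := b.Subscribe("metrics/cpu")
	b.Publish("metrics/cpu", "42%")
	fmt.Println(<-cpu) // 42%
}
```

The decoupling is visible here: publishers and subscribers only know the broker and a topic name, never each other.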

Contra Pub-Sub

No communication between user and modules

However, our design of the publisher subscriber pattern does not allow the user to communicate with the modules. The only thing an API consumer can do is to request or subscribe to some resource.

This means that certain use cases can't be met with our application. For example, there is no way of searching for specific information using a module. It can only publish all data it has gathered and the filtering needs to be done on the client side. Also, we could not have a plugin that can be used to update software or even run a terminal in the browser (as seen in Cockpit).

While doing updates or running a terminal isn't a thing pure monitoring software does, having the possibility to configure modules and filter what data the user wants, would be a nice feature. One way to circumvent this limitation would be to add the possibility of configuring the expected returned data through the request's value field and manipulating the JSON output on the server side (as seen in the HIST request handling). However, this would add additional overhead.

This idea illustrates the question we asked ourselves throughout the development of our application and our protocol: "How do we circumvent our own barriers?"

High latency

When requesting a resource via GET, the user has to wait until the next measurement comes in. This, too, is due to the limitation of not having a direct way of communicating with the modules. It could be somewhat circumvented by just returning the last entry in the database. However, this would give the frontend outdated information - and the REPLY message has no way of showing the time the data was captured at.

tl;dr

The Pub-Sub pattern certainly is useful. However, it should only be used where it is really beneficial, especially in highly object-oriented programming languages, where it can assist in de-tangling spaghetti code.

The Pub-Sub pattern really shines when it comes to handling events (like in the Excubitor-Frontend) or whenever multiple parts of a software need to be informed about a certain message. Bidirectional communication in the way we would have needed it, however, makes things much more complicated, and in that case you should ask yourself whether there is no other solution to your requirements.

WebSocket

We opted to use WebSocket for transferring the actual data between our frontend and backend. This decision was made because WebSocket allows for full-duplex communication which came in handy with our subscriber mechanism.

Gobwas/ws

For realizing WebSockets in Go we used the gobwas/ws library. Before switching to gobwas/ws we used the x/net package it is based on. However, x/net does not implement the HTTP/1.1 Upgrade mechanism, which was essential for us as we wanted to use a reverse proxy to secure the WebSocket connection. gobwas/ws also contains the wsutil package, which brings a high-level API for sending and receiving WebSocket messages and made our code much more maintainable and readable.

HTTP/1.1 Upgrade

Some may consider HTTP/1.1 legacy or even end-of-life now that HTTP/2 and HTTP/3 are out and provide a multitude of performance improvements, especially for mobile use. However, HTTP/2 and HTTP/3 don't provide a suitable alternative to HTTP/1.1's Upgrade mechanism, and the proposed method of tunneling WebSocket through an HTTP/2 stream (see RFC 8441) is not widely implemented.
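
The heart of that Upgrade handshake is small enough to show: per RFC 6455, the server concatenates the client's Sec-WebSocket-Key with a fixed GUID, hashes it with SHA-1 and echoes the base64 result back as Sec-WebSocket-Accept.

```go
package main

import (
	"crypto/sha1"
	"encoding/base64"
	"fmt"
)

// acceptKey computes the Sec-WebSocket-Accept value for a given
// Sec-WebSocket-Key, as specified in RFC 6455.
func acceptKey(key string) string {
	const magic = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
	h := sha1.Sum([]byte(key + magic))
	return base64.StdEncoding.EncodeToString(h[:])
}

func main() {
	// Example key/accept pair from RFC 6455, section 1.3.
	fmt.Println(acceptKey("dGhlIHNhbXBsZSBub25jZQ=="))
	// s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
}
```

Libraries like gobwas/ws wrap exactly this exchange (plus the Connection: Upgrade and Upgrade: websocket headers) behind their upgrader APIs.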

As we required encrypted WebSocket connections and did not want to implement encryption in our application, we needed to fall back to HTTP/1.1. And since monitoring software is rarely used in a mobile setting, HTTP/2 and HTTP/3 would only have been nice-to-haves.

When to use Gobwas/ws?

gobwas/ws is a really good library for WebSockets, although x/net is more than enough for most use cases. gobwas/ws's WebSocket upgrader, however, is exceptionally easy, flexible and comfortable to use. So, in theory, one could even use x/net for the actual WebSocket connection and gobwas/ws just for the upgrade.

tl;dr - Use Gobwas/ws whenever you want a more high level API for WebSockets, use x/net when you need low level access to the WebSocket connection. Gobwas/ws can also provide this low level access and you can always fall back to x/net methods but we figured, the overhead is not worth it.

go-plugin

go-plugin is the project we used to implement our plugin system. It is made by hashicorp and can be found in this GitHub-Repository.

Pro

With go-plugin and due to our flexible architecture it was exceptionally easy to implement a working RPC plugin system.

Both the consumer (our backend application) and the provider (a plugin) need access to an interface that a plugin needs to implement. In our case we just created the package shared within our project and every Excubitor plugin needs to implement this interface and serve the implementation through go-plugin. After adding some boilerplate code for an RPC Server and RPC Client, everything was ready and working - sort of. It even supports using gRPC instead of Go's own RPC implementation so that one can implement plugins using any gRPC-supported programming language.
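
go-plugin's own API is not reproduced here, but the mechanism it builds on - Go's net/rpc with an interface both sides agree on - can be sketched with the standard library alone (the Monitor service is made up; net.Pipe stands in for the socket between the two processes):

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

// Monitor is the shared service contract. net/rpc requires exported
// methods of the form: Method(args T, reply *U) error.
type Monitor struct{}

func (Monitor) Ping(name string, reply *string) error {
	*reply = "pong from " + name
	return nil
}

func call(name string) string {
	// net.Pipe gives us an in-memory connection; a real plugin system
	// runs the server in a separate plugin process.
	srvConn, cliConn := net.Pipe()

	srv := rpc.NewServer()
	if err := srv.Register(Monitor{}); err != nil {
		log.Fatal(err)
	}
	go srv.ServeConn(srvConn)

	client := rpc.NewClient(cliConn)
	defer client.Close()

	var reply string
	if err := client.Call("Monitor.Ping", name, &reply); err != nil {
		log.Fatal(err)
	}
	return reply
}

func main() {
	fmt.Println(call("backend"))
}
```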

Note: You can find information about creating an Excubitor plugin in the respective section of this wiki.

Contra

With go-plugin it is recommended to use hashicorp's logging framework hclog. But by the time we implemented the plugin system, we had already written our own logger. Luckily, since hclog defines a logger interface and Go implements interfaces implicitly, we could write a wrapper (or adapter) that translates hclog's statements to our own logging layer.
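
hclog's real interface is much larger, but the adapter idea relies only on Go's implicit interface satisfaction; a reduced sketch with hypothetical names:

```go
package main

import "fmt"

// ThirdPartyLogger stands in for the (much larger) hclog.Logger interface.
type ThirdPartyLogger interface {
	Info(msg string, args ...interface{})
}

// ourLogger stands in for the project's own logging layer.
type ourLogger struct{}

func (ourLogger) Log(level, msg string) {
	fmt.Printf("[%s] %s\n", level, msg)
}

// adapter translates third-party log calls into our logger. Because Go
// satisfies interfaces implicitly, adapter is a ThirdPartyLogger without
// declaring so anywhere.
type adapter struct {
	inner ourLogger
}

func (a adapter) Info(msg string, args ...interface{}) {
	if len(args) > 0 {
		msg = fmt.Sprintf("%s %v", msg, args)
	}
	a.inner.Log("INFO", msg)
}

func main() {
	var l ThirdPartyLogger = adapter{} // compiles: interface satisfied implicitly
	l.Info("plugin loaded")
}
```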

It is understandable that a company with this kind of library ecosystem likes to work with their respective tool chain. But a bit more flexibility would have been really nice.

Another drawback of go-plugin lies in its documentation, which is somewhat all over the place. There are multiple small README files and an examples folder, but there is no single place to get all the information about how to use go-plugin. Therefore, we needed to reverse engineer the examples to understand how the plugin system works. This was the single biggest issue in using go-plugin.

When to use go-plugin

go-plugin is incredibly good for adding plugin functionality to applications. Even though it lacks in documentation and flexibility of choice regarding the logger, I would always use go-plugin when using Go. Especially the possibility to use gRPC instead of Go's own RPC implementation makes it appealing to use go-plugin and use multiple programming languages for implementing plugins.

Frontend Architecture

To adjust to the modular nature of the backend, the frontend is built with a microfrontend architecture. A web application with microfrontends is split into many fully independent and self-contained modules, the microfrontends.

In this project, the frontend is split into the loader and the components. The loader is responsible for authentication, communication with the backend, loading the components, delivering data to the components and navigation. The components or microfrontends display the data from the backend plugins. The communication between the loader and the microfrontends is done through events on the window object.

Modularity and flexibility

The microfrontend architecture allows for a high degree of flexibility: every microfrontend is an independent web component that can use its own set of libraries or even frameworks and only needs to comply with the predefined communication protocol between components. This also makes it possible to load the microfrontends at runtime in cases where the exact set of needed modules is not known at compile time.

Security

Security is a huge issue for dynamically loaded microfrontends. Because the authentication needs to be managed from a central instance, in this case the loader, there is no way to limit the scope that the microfrontends can access. This is especially problematic in projects handling sensitive data or altering the system.

Overhead

Because every web component is isolated, shared libraries and styles have to be included in every microfrontend. This leads to huge project sizes and can lead to inconsistencies in the long term.

When to use Microfrontends

Only if you absolutely have to! The overhead and security issues make them impractical for most real world use cases. In some instances you can't avoid them, especially if the frontend to an application has to adjust to the backend at runtime. For most scenarios, a better approach would be to build an optimized frontend during install, build or start time, if you need to adjust the frontend to a plugin architecture in the backend.

Svelte

Svelte was used for the microfrontends. In general it was a pleasant experience and in many ways much more elegant than frameworks like React. The highlights include:

.svelte Files

JavaScript, HTML and CSS are all written in one file by default, although splitting components into multiple files is possible if needed. This makes smaller components much more compact and clearer than in other frameworks. The .svelte files use normal HTML instead of JSX, which allows for better separation of UI and logic. For functionality that is normally expressed with JSX, there are special logic blocks in Svelte's HTML, including {#if}, {#each} and {#await}.

Reactivity and global state

Variables in Svelte are reactive by default, so there is little need to think about state. Management of global state is included by default, which makes libraries like Redux redundant. In my opinion, it is also much easier to understand than Redux.

Web Components

Despite all the benefits, I would highly recommend not building web components with Svelte. As of version 3 there is no control over the lifecycle functions, and control over the shadow DOM is only possible with hacks and comes with substantial limitations.

When to use svelte

Svelte was very pleasant to work with and I will definitely use it again. The documentation is great, and it can do everything other lightweight frameworks like React or Vue.js can do, most of the time in even more elegant ways.

10/10 would recommend (if you don't want to build web components)

Svelte-Kit

Svelte-Kit is the fullstack framework in the Svelte ecosystem; we used it in the loader. It is quite a bit more complex than standalone Svelte and was mainly used for its included directory-based router.

When to use Svelte-Kit

If I have to build a fullstack application, I can definitely see myself exploring Svelte-Kit further, especially for the many build options that include server-side rendering for many platforms and the "batteries included" approach.

TailwindCSS

Tailwind comes in handy when aiming for rapid, consistent styling in a web application. It also lightens the burden of having to know the multitude of CSS properties as it combines them into concise and plausible classes.

Compared to component based CSS frameworks like Bootstrap, it allows for a much higher degree of flexibility and ability to customize the looks of a resulting web application. Besides that, it is easier to style self-made components.

Tailwind, which runs as a PostCSS plugin, minimizes the resulting file sizes by only shipping the CSS classes that are actually used. Through directives like @apply it helps keep the HTML clear by avoiding repeated, long lists of classes.

When to use TailwindCSS

I would recommend using it in any application. It provides aid where aid is needed and doesn't interfere with my way of coding.
