March 10, 2021

Micronetia vs Competitors

I mentioned the Micronetia project on Larry Sanger's decentralization forum and was asked there how it compared to a device called "Helm".

TL;DR

In summary, it's quite a similar idea. We are certainly looking at solving much the same problem, although their product is aimed at email first and other things later, while Micronetia is aimed at blogging first and other things later. In the main I'd say the difference is build vs buy, and to a significant degree that reflects a difference in philosophy.

This project is about building something from components you get yourself; Helm is "buy a device and let someone else do all the hard work". The result is that you can't expand a Helm to do things Helm doesn't want you to do or hasn't implemented yet, whereas the Micronetia project gives users choices and options to expand and tweak to their heart's content.

A closer competitor to Micronetia is Yunohost, which I also discuss below.

Vendor Lock-in

The problem with the Helm approach is that you are reliant on Helm. Helm appears to be quite closed and proprietary in terms of OS and hardware, so if Helm hasn't implemented a particular integration then you can't have it. For example, one Micronetia user has added calibre and an RPG server to his Pi. It doesn't look like you could do that on the Helm box: a quick look at Helm's community forum shows that requests to add things like an RSS feed aggregator are answered with "we'll look into it" rather than with a "here's how you do it: ... sudo apt install ...".
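To make that concrete, here's roughly what adding something like calibre looks like on a Micronetia Pi. This is just a sketch: the package comes from the standard Debian/Raspbian repos, and the port and library path are whatever you choose.

    # install calibre straight from the distro repos
    sudo apt update
    sudo apt install calibre
    # serve an e-book library over the network with calibre's built-in server
    calibre-server --port 8080 /home/pi/calibre-library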

The lock-in is particularly pronounced on the hardware side. If you buy a Helm device and it doesn't work you have to get another one from them. If you want more storage than what is offered, you discover you can't add it. The Micronetia project is based on Raspberry Pi hardware, and there are numerous suppliers/resellers of Pis. If you want to add a couple of USB SSD drives you can; if you can only afford a Pi 3B+ you can use that. Heck, if you are willing to tinker you can run some of the project on a Pi Zero (I'm using one to test the backup feature, to be released real soon now). However nothing in my project requires even a Pi; it just requires a Debian-like OS (and I expect, with a small amount of work, not even that). It would be very simple to port this to an Intel/AMD platform like a NUC, and probably pretty simple to create a Jenkins or similar builder that takes a vanilla Ubuntu or Debian server and adds all the relevant packages.
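As a sketch of what such a builder might do (the package list here is illustrative, not the project's actual manifest), the port really does boil down to "install the packages, then let ghost-cli do its thing":

    #!/bin/sh
    # hypothetical bootstrap for a vanilla Debian/Ubuntu server
    sudo apt update && sudo apt upgrade -y
    sudo apt install -y nginx nodejs npm ufw fail2ban
    # Ghost's installer is an npm package
    # (Ghost wants a recent node, so you may need the nodesource repo rather than the distro's nodejs)
    sudo npm install -g ghost-cli
    # create a site directory you own and let ghost-cli handle the rest
    sudo mkdir -p /var/www/blog && sudo chown "$USER" /var/www/blog
    cd /var/www/blog && ghost install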

Then there's the internet connectivity part. Helm mandates using Helm's cloud for backup and various other things, including domain management. Micronetia has no such dependency: you get a domain from any registrar and so on. Yes, it uses Cloudflare's DNS and caching now, but nothing requires Cloudflare; it can be moved to a different service like ngrok if/when Cloudflare decides to change its offerings. One of my intended near-future developments (and by near future I mean in the next week or so) is the ability to have a second Pi as an offsite backup (or more than one if you are that paranoid) that you can have a friend put on their network. Unlike Helm, this remote Pi remains under the control of you and your friend, so your data isn't exposed to anyone's cloud. And you get to decide what encryption you want, etc.
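A minimal sketch of how that could work, assuming the friend's Pi is reachable over SSH (the hostname and paths below are placeholders, and you could swap rsync for something like restic if you want the data encrypted at rest on the remote end):

    # push the blog's content directory to the remote Pi over ssh
    rsync -az --delete /var/www/blog/content/ \
        pi@friends-house.example.net:/home/pi/backups/blog-content/

    # or run it nightly from cron:
    # 0 3 * * * rsync -az --delete /var/www/blog/content/ pi@friends-house.example.net:/home/pi/backups/blog-content/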

Self Reliance

With the Micronetia approach you are not reliant on anyone. The project is a binding together of a variety of existing open-source software/tools and it is, itself, up on GitLab for all to see and fork. In fact there's no need to install the Ghost blog on a Micronetia server at all. The base server is a solid recent version of Raspbian with a number of server-related packages, and there will no doubt be use cases where you use that for something without Ghost. If you want a dedicated chat server (say), or a backup server, or something else like a DNS server, you can do that. Mind you, if you haven't set up Ghost, some of the Cloudflare configuration will need to be tweaked and run manually, but you can do it.

That self-reliance also extends to the admin and setup processes, where the expectation is that the user gets exposed to some Linux sysadmin concepts. That's another intentional difference, because I think people should be able to tinker and should do some maintenance themselves. Rather like being able to change a tire or check the battery in a car.

Another related difference is the use of containers, which Helm touts as a feature. Although one of my applications (the chat server) uses a snap, and I recommend building the project using a Docker instance, the core functionality (blog, comments, and file storage/backup once I deliver that) does not use snaps, Docker, or other containers, and that is intentional.

Yunohost

The use of containers is in fact a key difference between Micronetia and Yunohost, which I looked at before deciding to go my own way. There is a lot to like about Yunohost and I agree with a lot of their philosophy. The problem I have with them is that they wrap things up too much. Yunohost has a (lovely) web UI and it uses Docker a lot, and both of these make things simple at the expense of control and flexibility.

The web UI is a problem because it abstracts away the OS and limits the flexibility of what you can do. I like shell CLIs for administration because they give you the power to do what you want and force you to take responsibility for what you do. The Yunohost install has SSH available, but they don't recommend using it except in "advanced" cases or when the web UI isn't working for some reason. In other words, you can do everything in the CLI that you can do in the web UI, but not everything you can do in the CLI can be done in the web UI. I don't like that, because it means a regular user is forced up a steep learning curve to do things like troubleshooting at precisely the time when the user doesn't want to learn new things, but just wants to get the damn thing working (again). If the user has been exposed to the CLI and to running scripts from the beginning, there's far less of a learning curve.

The catch with containers is that the supply chain is opaque: you don't know who built them or what is in them. Now, it is true that Micronetia uses node.js and thus installs a ton of npm-hosted packages, but it is possible to search the lists and directories to see what is installed and to verify versions and so on. Containers also tend to use more system resources than native installations. Even an 8GB Pi 4 is comparatively limited in resources, and it seems to me those resources are better spent on something other than running multiple containers.
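For example, from the blog's install directory you can dump and audit the whole npm dependency tree in a couple of commands (npm ls --all is the npm 7+ spelling; older versions use --depth=Infinity), which is exactly the kind of inspection an opaque container image doesn't give you:

    # list every installed package, including transitive dependencies
    npm ls --all
    # check installed versions against the public advisory database
    npm audit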

With Micronetia I made a conscious decision to limit the use of containers and make them optional. I also made a deliberate decision to limit the number of languages and platforms, and to avoid ones with a poor security history (so PHP is out, as is WordPress). My intent is to have scripts in node.js only and, where possible, to use compiled applications. These days more and more things are being written in Go, and I like that as a choice.

Commercialization

Going back to the Helm product: one other difference is that it is a commercial product. There's nothing wrong with that concept, but it adds a lot more dependencies to the whole tree. The simplest question is: what happens if the company fails? Does your Helm become a brick?

Then there's the number of third-party scripts that the Helm webpage wants to load. For a company that wants to get people off "the cloud", it certainly includes a lot of cloud, which is concerning from a tracking point of view. The Micronetia Ghost install may require gstatic.com (Google Fonts) to deliver fonts (and there's a ghost.org SVG somewhere on this site), but that's it.