kaniini's blog!

One of my areas of interest in multimedia coding has always been writing audio visualizers. Audio visualizers are software which take audio data as input, run various equations on it and use the results of those equations to render visuals.

You may remember from your childhood using WinAmp to listen to music. The MilkDrop plugin and AVS plugin included in WinAmp are examples of audio visualizers. AVS is a classical visualization engine that operates on evaluating a scene graph to composite the visuals. MilkDrop on the other hand defines the entire visual as a Direct3D scene and uses the hardware to composite the image.

MilkDrop is a very powerful visualizer, and a clone of it was made called projectM. projectM, unfortunately, has some stability issues (and frankly design issues, like the insistence on loading fonts to render text that will never actually be rendered) and so we do not include it in Audacious at the moment. The newest versions of MilkDrop and projectM even support pixel shaders, which allow for many calculations to be run in parallel for each pixel in the final image. It is a very impressive piece of software.

But, with all that said about MilkDrop, I feel like AVS is closer to what I actually like in a visualization engine. But AVS has a lot of flaws too. Describing logic in a scene graph is a major pain. However, in 2019, the situation is a lot different than when AVS was created — JavaScript engines are ubiquitous and suitably performant, so what if we could develop a programming language based on JavaScript that is domain-specific for visualization coding?

And so, that is what LVis is. LVis stands for Lapine Visualizer (I mean, I'm egotistical, what is there to say?) and uses the same underlying tech that QML apps use to glue together native pieces of code that result in beautiful visuals.

LVis rendering a complex waveform

But why JavaScript? People already know it, and the fundamentals are easy enough that anyone can pick it up. I already hear the groans of “hacker” “news” at this justification, but expecting people to learn something like Rust to do art is simply not realistic — p5.js is popular for a reason: it's quick and easy to get something going.

And the LVis variant of JavaScript is indeed quite easy to learn: you get the Math functions from JavaScript and a bunch of other components that you can pull into your presets. That's all you get, nothing else.

LVis rendering another complex waveform, with bloom

There are quite a few details we need to fill in, like documenting the specifics of the JavaScript-based DSL used by the LVis engine, so people can confidently write presets. We also need to add additional effects, like video echo and colour remapping.

I think it is important to talk about what LVis is not. LVis is not MilkDrop or projectM. It is based on raster graphics, and provides an architecture not dissimilar to SVGA-based graphics cards in the 90s. Everything is a surface, and surfaces are freely allocatable, but the world is not an OpenGL scene graph. In this way, it is very similar to AVS.

Right now, the LVis source kit includes a plugin for Audacious and a more fully-featured client that allows for debugging and running specific presets. The client, however, requires PulseAudio, as it monitors the system audio output. I am open to adding patches for other audio systems.

You can download the sources from git.sr.ht.

LVis demonstration of complex waveforms

I think LVis is now at a point where others can start playing with it and contributing presets. The core language DSL is basically stable; I don't expect to change anything in a way that would cause breakage. So, please download it and send me your presets!

I've been taking a break from focusing on fediverse development for the past couple of weeks — I've done some things, but it's not my focus right now because I'm waiting for Pleroma's develop tree to stabilize enough to branch it for the 1.1 stable releases. So, I've been doing some multimedia coding instead.

The most exciting aspect of this has been libreplayer, which is essentially a generic interface between replayer emulation cores and audio players. A replayer emulation core targets libreplayer on one side, an audio player targets it on the other, and the two work together to produce audio.

The first release of libreplayer will drop soon. It will contain a PSF1/PSF2 player that is free of binary blobs. This is an important milestone because the only other PSF1/PSF2 replayer that is blob-free has many emulation bugs due to the use of incorrectly modified CPU emulation code from MAME. Highly Experimental's dirty secret is that it contains an obfuscated version of the PS2 firmware that has been stripped down.

And so, the name libreplayer is apt in two ways. It is self-descriptive: libreplayer obviously conveys that it's a library for accessing replayers. And because the emulation cores included in the source kit are blob-free, it also implies that the replayer emulation cores we include are free as in freedom, which is also important to me.

What does this mean for Audacious? Well, my intention is to replace the uglier replayer emulation plugins in Audacious with a libreplayer client and clean-room implementations of each replayer core. I also intend to introduce replayer emulation cores that are not yet well supported in Audacious.

Hopefully this allows the emulation community to be more effective stewards of their own emulation cores, while allowing projects like Audacious to focus on their core competencies. I also hope that having high-quality clean-room implementations of emulator cores, written to modern coding practice, will help to improve the security of the emulation scene in general. Time will tell.

(I Shitpost Therefore I Am)

This blog has been a long time coming, because we have a lot we need to talk about. Every day, too many people try to claim tutelage over a perpetually growing dung heap. I've written before about the flawed security model that was adopted in the ensuing rush to get real-world ActivityPub implementations out the door. This is not one of those posts.

In the interests of avoiding outright cancellation (which will happen anyway), I will just note that the next sections should be taken with an extreme content warning: many of the sections dissect and examine various incidents that intersect outright harassment or direct examples of white nationalism that have gone entirely unnoticed by the “cancel crew.”

Arguably, it's time to cancel the “cancel crew”: they're not protecting us as promised, and in the absence of funding to purchase security services from Prolexic, they will be completely unequipped for the future that they've largely created for us all.

What is the Fediverse anyway?

It's 2019, we're in Web 5.0 or whatever the current buzzword is. Social is dead, Facebook stock is in freefall, and for the last few years the idea of an independent, federated social network has been growing a new life, largely catalyzed by the launch of the Mastodon platform in late 2016.

It has been said that the Internet is a series of tubes, and that services like Netflix are clogging them up. I have a different perspective: I argue that the collective Sidekiq and Oban instances of Mastodon and Pleroma nodes are the slow-moving garbage-laden trucks shipping around untold terabytes per day of trash. And that trash? That trash is what we lovingly call the fediverse.

If you want to get technical, the fediverse is the federation of servers running the OStatus and ActivityPub protocols. Numerous software projects implement these protocols: Mastodon, Pleroma, Hubzilla, Friendica, GNU Social and PixelFed are good examples. They serve various niches, but have some level of interoperability.

Defenders of the fediverse say that the growth of the fediverse is the fruit of cooperation and collaboration. However, they rarely mention how this cooperation is achieved: name-calling, mischaracterization, disinformation and “cancellation”.

Death By Shitposting

How does an open-world network based on anarchy police itself? Cancellation and tribalism of course, but at least there's the ACAB emoji. Like in proprietary social media, clout on the fediverse is derived from elevating one's reputation at the expense of others. Sometimes this happens for good reason, but usually nobody actually knows the reason it is happening. Like the AMBER alerts you receive on your phone, you just know it's time to get your shotgun and join the mob!

Whenever there's a design flaw in the protocol, it's best to blame the software implementations for disagreeing on how far they will go to cover up the design flaw, instead of the design flaw itself. As is frequently observed, software other than the user's software of choice is seen as problematic, even though the user's own software created the flawed security model in the first place.

A Social Network Free Of Nazis

Content warning: We're going to talk about actual nazis. If this bothers you, you may want to skip this section.

One of the main advertising points of the Mastodon software is that the Mastodon Network is free of nazis. Of course, the Mastodon Network is the fediverse, an open-world federated network, and Mastodon itself is free software licensed under AGPL, all of which means that this claim is technically infeasible to enforce. So, how have they been doing with this?

Well, if you use Pleroma or other software that is not Mastodon and doesn't completely buy into the (broken) Mastodon security model, you're a nazi according to many Mastodon users. So, that's part of the point, but not really, and it's not even what I am getting at.

The real question is: how is Mastodon doing at having a nazi-free network? Well, Gab and KiwiFarms joined the fediverse lately, and much of the fediverse is completely anxious about these developments. There are certainly arguments for blocking both of those instances, but that's still not what I'm talking about. This is, however, the ball the Mastodon people have been keeping their eyes on.

Nazis? In the fediverse? It's more likely than you think.

The easiest way to find actual bona fide nazis on the fediverse is to look at Pieville. Pieville is an instance operated by people associated with StormFront, a self-described “White Nationalist Community.” Users openly share videos and messages from key people in the white nationalist movement, such as Billy Roper and William Pierce. Other neo-nazi figures like Alex Linder have an account there. Oh, and Pieville runs Mastodon v2.7.4 at the time of writing.

Whatever you think of Gab or KiwiFarms, Pieville is on a completely different level, and it's surprising to see nobody discussing them as a threat, while cooking up all sorts of threat scenarios about Gab and KiwiFarms. This is not a defense of either of those instances, but it makes me wonder why our eyes aren't on the real ball.

Pieville isn't the only one. There are others, but Pieville has recently blocked fediverse.space from crawling their instance.

The Scriptkiddie-ification Of The Fediverse

Nazis aren't the only problem. There is also the security model: data is distributed to as many nodes as humanly possible, without properly verifying that relationships exist with those nodes before sharing data with them.

This leads to numerous incidents where instances you don't expect to have copies of your data have copies of your data.

But even that is not the real problem. The real problem is the script kiddies abusing these implementation flaws, and the lack of audience restriction capabilities in the software, which lead people to post things publicly when they probably shouldn't.

Oh, and by the way, there is already a fediverse-wide search engine, which was built in public view while everyone was fighting in order to gain clout.

So, how do we fix the fediverse?

We need to transition the security model away from one that is cooperative to one with border-oriented security. The Internet itself is a federated network, but BGP defines clear boundaries and policy. OCAP or other capability-based systems will do the same for the fediverse. Instead of cancelling each other, we should concentrate on building real security tools and deploying a real security model.

The good news is that progress is being made on this front. Hopefully by 2020, we will have some real solutions widely deployed and people can go back to taking it easy.

We are approaching the end of the merge window for the 1.0.5 release of Pleroma, which will likely be cut next Tuesday (August 13). I have been trying to aim for bi-weekly updates to the Pleroma releases, so that communities tracking stable have the latest security fixes as well as minimally impacting feature additions.

How to get features into the stable release branch?

As the stable release branches are largely frozen, you have to request that a feature be included in the master branch. Stable releases are always cut from master. To do so, open an issue on the Pleroma GitLab or comment on the relevant MR so that a maintainer may tag it with a backport request.

When cutting new releases, a branch is created, such as release/1.0.5, which contains the proposed release. Users are encouraged to test this branch and report whether any problems exist in the proposed release. These branches contain manual backports done by me while preparing the release, building on any backports others have already made to master. So tracking master instead of a release tag will also get you some of the backports, provided they were done using an MR and a feature branch.

Bugfixes

Mastodon API: Set follower/following counters to 0 when hiding followers/following is enabled by @rin@patch.cx

Pleroma reports follower/following counts as 0 in the ActivityStreams 2.0 representations when the user requests to hide their social network. This change adjusts the Mastodon API responses to also return 0 when this setting is enabled.

(backport 409bcad5 to release/1.0.5)

Mastodon API: Fix thread mute detection by @rin@patch.cx

Fix a logic error where CommonAPI.thread_muted? was being called in the wrong context, leading it to always report as false.

(backport 0802a088 to release/1.0.5)

Mastodon API: Return profile URL when available instead of actor URI for MastodonAPI mention URL by @Thib@social.sitedethib.com

Return the profile URL specified in the actor object instead of the actor's IRI when possible in Mastodon API responses. This makes our behaviour consistent with how Mastodon returns profile URLs.

(backport 9c0da100..a10c840a to release/1.0.5)

Correctly style anchor tags in templates by @lanodan@queer.hacktivis.me

Correctly style anchor tags in templates so they match the rest of the template design.

(backport a035ab8c to release/1.0.5)

Do not re-embed ActivityStreams objects after updating them in the IR by @rin@patch.cx

Pleroma's current internal representation (IR) uses a split log of activities (the activities table) and underlying AS2 objects (the objects table). For storage efficiency, the IR refers to child objects by their stable IRI. In some cases, updates of child objects would result in the child object being re-embedded in the parent activity.

(backport 73d8d5c4..4f1b9c54 to release/1.0.5)

Strip IR-specific fields including likes from incoming and outgoing activities by @sergey@pleroma.broccoli.si

In some cases, IR fields would be shared with peer instances. This caused occasional problems, as some of the IR fields would be serialized in ways that would be inappropriate. Accordingly, we remove all IR-specific vocabulary from incoming and outgoing activities before processing them further.

(backport 0c1d72ab..fa59de5c to release/1.0.5)

Fix --uploads-dir in instance gen task by @lanodan@queer.hacktivis.me

Due to a typo, --uploads-dir is not correctly respected when using the CLI to deploy an instance.

(backport 977c2d04 to release/1.0.5)

Fix documentation for invite gen task by @lanodan@queer.hacktivis.me

Fix typos in the documentation of this task for the --max-use and --expires-at options; underscores were used instead of dashes.

(backport 8815f070 to release/1.0.5)

Handle MRF rejections of incoming AP activities by @sergey@pleroma.broccoli.si

Previously, MRF rejections would be logged to the error log as a crash. This allows for MRF rejections to be more gracefully handled.

(backport d61c2ca9 to release/1.0.5)

New Features

Add relay list task by @kaniini@pleroma.site

Adds the relay list task that has been missing since relay support was implemented. Multiple people have observed that this task was missing for a long time, but nobody got around to writing it until now.

(backport cef3af55 to release/1.0.5)

Add listener port and ip option for 'instance gen' task by @sachin@bikeshed.party

Adds the --listener-port and --listener-ip options to the instance gen task. This is primarily useful for automated deployments of Pleroma.

(backport 6d0ae264 to release/1.0.5)

Add wildcard domain matches to MRF simple policy by @alexs@bikeshed.party

Adds the ability for mrf_simple to match using wildcards, for example, against *.example.com instead of just example.com.

(backport 54832360..e886ade7 to release/1.0.5)
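The wildcard matching added to mrf_simple is essentially shell-style glob matching against instance domains. Pleroma itself is written in Elixir, but the concept can be sketched in Python using the standard library (this is an illustration of the matching idea, not the real implementation):

```python
from fnmatch import fnmatch

def domain_matches(domain: str, pattern: str) -> bool:
    # "*.example.com" covers any subdomain of example.com;
    # a bare "example.com" matches only that exact domain.
    return fnmatch(domain, pattern)

print(domain_matches("social.example.com", "*.example.com"))  # True
print(domain_matches("example.com", "*.example.com"))         # False
print(domain_matches("example.com", "example.com"))           # True
```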

With all of the recent hullabaloo with Gab, and then, today Kiwi Farms joining the fediverse, there has been a lot of people asking questions about how data flows in the fediverse and what exposure they actually have.

I'm not really particularly a fan of either of those websites, but that's beside the point. The point here is to provide an objective presentation of how instances federate with each other and how these federation transactions impact exposure.

How Instances Federate

To start, let's describe a basic model of a federated network. This network will have five actors in it:

  • alyssa@social.example
  • bob@chatty.example
  • chris@photos.example
  • emily@cat.tube
  • sophie@activitypub.dev

(yeah yeah, I know, I'm not that good at making up fake domains.)

Next, we will build some relationships:

  • Sophie follows Alyssa and Bob
  • Emily follows Alyssa and Chris
  • Chris follows Emily and Alyssa
  • Bob follows Sophie and Alyssa
  • Alyssa follows Bob and Emily

Here's what that looks like as a graph:

A graph of social relationships.
A graph of social relationships.

Broadcasts

Normally, posts flow through the network in the form of broadcasts. A broadcast-type post is one that is sent to, and only to, a pre-determined set of targets, typically your followers collection.

So, this means that if Sophie makes a post, chatty.example is the only server that gets a copy of it. It does not matter that chatty.example is peered with other instances (social.example).

This is, by far, the majority of traffic inside the fediverse.
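The delivery rule above can be sketched in Python using the example network (the data structure and function names are invented for illustration; real servers resolve followers collections over HTTP):

```python
# Hypothetical sketch: broadcast delivery targets are derived solely
# from the author's followers collection; peering between other
# servers plays no role.

FOLLOWERS = {
    "sophie@activitypub.dev": {"bob@chatty.example"},
    "alyssa@social.example": {
        "sophie@activitypub.dev",
        "emily@cat.tube",
        "chris@photos.example",
        "bob@chatty.example",
    },
}

def delivery_servers(author: str) -> set:
    """Hostnames that receive a broadcast post from `author`."""
    return {actor.split("@", 1)[1] for actor in FOLLOWERS.get(author, set())}

print(delivery_servers("sophie@activitypub.dev"))  # {'chatty.example'}
```

Sophie's post reaches only chatty.example, while a post by Alyssa reaches every other server in the example network.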

Relaying

The other kind of transaction is easily described as relaying.

To extend our example above, let's say that Bob chooses to Announce (Mastodon calls this a boost, Pleroma calls this a repeat) the post Sophie sent him.

Because Bob is followed by Sophie and Alyssa, both of them receive a copy of the Announce activity (an activity is a message which describes a transaction). Relay activities refer to the original message by its unique identifier, and recipients of Announce activities use that identifier to fetch the referred message.

For now, we will assume that Alyssa's instance (social.example) succeeded in fetching the original post, because there is presently no access control in practice on fetching posts in ActivityPub.

This now means that Sophie's original post is present on three servers:

  • activitypub.dev
  • chatty.example
  • social.example

Relaying can cause perceived problems when an instance blocks another instance, but these problems are actually caused by a lack of access control on object fetches.

Replying

A variant on the broadcast-style transaction is a Create activity that references an object as a reply.

Let's say Alyssa responds to Sophie's post that was boosted to her. She composes a reply that references Sophie's original post with the inReplyTo property.

Because Alyssa is followed by actors on the entire network, now the entire network goes and fetches Sophie's post and has a copy of it.

This too can cause problems when an instance blocks another. And like in the relaying case, it is caused by a lack of access control on object fetches.

Metadata Leakage

From time to time, people talk about metadata leakage with ActivityPub. But what does that actually mean?

Some people erroneously believe that the metadata leakage problem has to do with public (without access control) posts appearing on instances which they have blocked. While that is arguably a problem, that problem is related to the lack of access controls on public posts. The technical term for a publicly available post is as:Public, a reference to the security label that is applied to such posts.

The metadata leakage problem is an entirely different problem. It deals with posts that are not labelled as:Public.

The metadata leakage problem is this: if Sophie composes a post addressed to her followers collection, then only Bob receives it. So far, so good: no leakage. However, because of bad implementations (and other problems), if Bob replies back to Sophie, his post will be sent not only to Sophie, but also to Alyssa. Based on that, Alyssa now knows that Sophie posted something, but has no actual idea what that something was. That's why it's called a metadata leakage problem: metadata about the existence of one of Sophie's objects, and hints at its contents (based on the text of the reply), are leaked to Alyssa.

This problem is the big one. It's not technically ActivityPub's fault, either, but a problem in how ActivityPub is typically implemented. But at the same time, it means that followers-only posts can be risky. Mastodon covers up the metadata leakage problem by hiding replies to users you don't follow, but that's all it is, a cover up of the problem.
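The leak can be demonstrated with the example network. In the sketch below (Python, with invented data structures standing in for real followers collections), Bob's reply is addressed to his own followers rather than to the audience of the post he is replying to:

```python
# Hypothetical sketch of the metadata leak: the reply's audience is
# computed from the reply author's followers, so it can include actors
# who never received the original post.

FOLLOWERS = {
    "sophie@activitypub.dev": {"bob@chatty.example"},
    "bob@chatty.example": {"sophie@activitypub.dev", "alyssa@social.example"},
}

def reply_leak(original_author: str, reply_author: str) -> set:
    """Actors who see the reply but never received the original post."""
    saw_original = FOLLOWERS[original_author] | {original_author}
    sees_reply = FOLLOWERS[reply_author]
    return sees_reply - saw_original

print(reply_leak("sophie@activitypub.dev", "bob@chatty.example"))
# {'alyssa@social.example'}
```

Alyssa is in the leak set: she receives Bob's reply and learns that Sophie's post exists, even though she cannot read it.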

Solution?

The solution to the metadata leakage problem is to have replies be forwarded to the OP's audience. But to do this, we need to rework the way the protocol works a bit. That's where proposals like moving to an OCAP-based variant of ActivityPub come into play. In those variants, doing this is easy. But in what we have now, doing this is difficult.

Anyway, I hope this post helps to explain how data flows through the network.

OCAP refers to Object CAPabilities. Object capabilities are one of many possible ways to achieve capability-based security. OAuth Bearer Tokens, for example, are an OCAP-style implementation.

In this context, OCAP refers to an adaptation of ActivityPub which utilizes capability tokens.

But why should we care about OCAP? OCAP is a more flexible approach that allows for more efficient federation (considerably reduced cryptography overhead!) as well as conditional endorsement of actions. The latter enables things like forwarding Create activities using tokens that would not normally be authorized to do such things (think of this like sudo, but inside the federation). Tokens can also be used to authorize fetches allowing for non-public federation that works reliably without leaking metadata about threads.

In short, OCAP fixes almost everything that is lacking about ActivityPub's security, because it defines a rigid, robust and future-proof security model for the fediverse to use.

How does it all fit together?

This work is being done in the LitePub (maybe soon to be called SocialPub) working group. LitePub is to ActivityPub what the WHATWG is to HTML5. The examples I use here don't necessarily completely line up with what is really in the spec, because they are meant to just be a basic outline of how the scheme works.

So the first thing that we do is extend the AS2 actor description with a new endpoint (capabilityAcquisitionEndpoint) which is used to acquire a new capability object.

Example: Alyssa P. Hacker's actor object
{
  "@context": "https://social.example/litepub-v1.jsonld",
  "id": "https://social.example/~alyssa",
  "capabilityAcquisitionEndpoint": "https://social.example/caps/new",
  [...]
}

Bob has a server which lives at chatty.example. Bob wants to exchange notes with Alyssa. To do this, Bob's instance needs to acquire a capability that he uses to federate in the future by POSTing a document to the capabilityAcquisitionEndpoint and signing it with HTTP Signatures:

Example: Bob's instance acquires the inbox:write and objects:read capabilities
{
  "@context": "https://chatty.example/litepub-v1.jsonld",
  "id": "https://chatty.example/caps/request/9b2220dc-0e2e-4c95-9a5a-912b0748c082",
  "type": "Request",
  "capability": ["inbox:write", "objects:read"],
  "actor": "https://chatty.example"
}

It should be noted here that Bob's instance itself makes the request, using an instance-specific actor. This is important because capability tokens are scoped to their actor. In this case, the capability token may be invoked by any child actor of the instance, because it's an instance-wide token. But the instance could request the token strictly on Bob's behalf by using Bob's actor and signing the request with Bob's key.

Alyssa's instance responds with a capability object:

Example: A capability token
{
  "@context": "https://social.example/litepub-v1.jsonld",
  "id": "https://social.example/caps/640b0093-ae9a-4155-b295-a500dd65ee11",
  "type": "Capability",
  "capability": ["inbox:write", "objects:read"],
  "scope": "https://chatty.example",
  "actor": "https://social.example"
}

There are a few peculiar things about this object that I'm sure you've noticed. Let's look at this object together:

  • The scope describes the actor which may use the token. Implementations check the scope for validity by merging it against the actor referenced in the message.

  • The actor here describes the actor which granted the capability. Usually this is an instance-wide actor, but it may also be any other kind of actor.

In traditional ActivityPub the mechanism through which Bob authenticates and later authorizes federation is left undefined. This is the hole that got filled with signature-based authentication, and is being filled again with OCAP.
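As a rough sketch of how a scope check might look for an instance-wide token (Python; the function name and origin-comparison rule are my own illustration, and the actual LitePub semantics may differ):

```python
from urllib.parse import urlparse

def actor_in_scope(actor_iri: str, scope: str) -> bool:
    # An instance-wide token is valid for any actor on the same origin
    # as the scope; a per-user token would compare the full IRI instead.
    actor, scope_uri = urlparse(actor_iri), urlparse(scope)
    return (actor.scheme, actor.netloc) == (scope_uri.scheme, scope_uri.netloc)

print(actor_in_scope("https://chatty.example/~bob", "https://chatty.example"))    # True
print(actor_in_scope("https://evil.example/~mallory", "https://chatty.example"))  # False
```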

But how do we invoke the capability to exchange messages? There's a couple of ways.

When pushing messages, we can simply reference the capability by including it in the message:

Example: Pushing a note using a capability
{
  "@context": "https://chatty.example/litepub-v1.jsonld",
  "id": "https://chatty.example/activities/63ffcdb1-f064-4405-ab0b-ec97b94cfc34",
  "capability": "https://social.example/caps/640b0093-ae9a-4155-b295-a500dd65ee11",
  "type": "Create",
  "object": {
    "id": "https://chatty.example/objects/de18ad80-879c-4ad2-99f7-e1c697c0d68b",
    "type": "Note",
    "attributedTo": "https://chatty.example/~bob",
    "content": "hey alyssa!",
    "to": ["https://social.example/~alyssa"]
  },
  "to": ["https://social.example/~alyssa"],
  "cc": [],
  "actor": "https://chatty.example/~bob"
}

Easy enough, right? Well, there's another way we can do it as well, which is to use the capability as a bearer token (because it is one). This is useful when fetching objects:

Example: Fetching an object with HTTP + capability token
GET /objects/de18ad80-879c-4ad2-99f7-e1c697c0d68b HTTP/1.1
Accept: application/activity+json
Authorization: Bearer https://social.example/caps/640b0093-ae9a-4155-b295-a500dd65ee11

HTTP/1.1 200 OK
Content-Type: application/activity+json

[...]

Because we have a valid capability token, the server can make decisions on whether or not to disclose the object based on the relationship associated with that token.
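On the server side, this decision might be sketched like so (Python; the token store and function name are invented for illustration, and a real implementation would also consult the relationship data behind the token):

```python
# Hypothetical server-side check: resolve the bearer token to a stored
# capability and only disclose the object if it grants objects:read.

CAPABILITIES = {
    "https://social.example/caps/640b0093-ae9a-4155-b295-a500dd65ee11": {
        "scope": "https://chatty.example",
        "capability": ["inbox:write", "objects:read"],
    },
}

def may_fetch_object(bearer_token: str) -> bool:
    cap = CAPABILITIES.get(bearer_token)
    return cap is not None and "objects:read" in cap["capability"]

print(may_fetch_object(
    "https://social.example/caps/640b0093-ae9a-4155-b295-a500dd65ee11"))  # True
print(may_fetch_object("https://social.example/caps/unknown"))            # False
```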

This is basically OCAP in a nutshell. It's simple and easy for implementations to adopt and gives us a framework for extending it in the future to allow for all sorts of things without leakage of cryptographically-signed metadata.

If this sort of stuff interests you, drop by #litepub on freenode!

A little over two years ago, Pleroma was started. At the time, Pleroma was largely developed by one person, who was busy working toward an MVP. This led to an interesting post being noted in dzuk's controversial blocklist advisory post.

Of course, time has moved on, and Pleroma has gained moderation tools that, in the hands of a skilled admin, provide the best possible moderation experience on the fediverse today. But getting to where we are now from 2 years ago has been a long journey.

moderator role

A few months after the post where lain said he was still working on basic functionality, Pleroma got its first moderation tool, around December 2017. You can set the moderator role on a user using the CLI:

$ MIX_ENV=prod mix pleroma.user set kaniini --moderator

Moderators have the ability to do a few things, namely delete any post from the local instance. For a while, this got the job done for most Pleroma instances because this was a reasonably quiet period of existence for the fediverse.

April 2018: birth of the Message Rewrite Facility

In April 2018, a new instance called Switter launched in response to the FOSTA/SESTA bill, which unfairly targeted sex workers. This led to some new problems in the fediverse, because the fediverse had largely never been exposed to an instance designed around advertising before. There were also many cultural conflicts, which led to many fights during the launch.

Eventually, Switter modified Mastodon so that their posts would federate in a way that ensured media was always marked sensitive, without requiring their local users to mark their media sensitive themselves, but this was a point of contention for several months.

In the meantime, the very first version of MRF was written and integrated into Pleroma, allowing for admins to force incoming posts from Switter to be unconditionally marked sensitive.

This version of MRF was very limited compared to the MRF we know today. For example, it only allowed one policy module to be loaded at any given time. It also did not implement a proper Elixir behaviour that would let the compiler validate policy modules for correctness. It did get the job done, however.

May 2018: MRF begins to resemble the framework we have today

The original version of MRF was a minimal patch intended to allow instance admins to be able to block content from a configured set of instances, but the implementation lacked flexibility. href (the admin of pleroma.fr) came along and expanded upon my initial patch by allowing policies to be chained. This was a serious advancement in terms of enabling MRF to turn into the fully-fledged framework we enjoy today.

June 2018: Accept lists

Some instances on the fediverse operate on an accept list basis, where your server has to be explicitly granted permission to federate with the instance. An example of this would be awoo.space.

Based on a request, this functionality was added to Pleroma's MRF in June. This allows admins to set up an instance operating on an accept list basis without having to do any major changes in the code.

December 2018: Large thread filter

Extremely large threads (colloquially referred to as “hellthreads”) cause significant resource consumption problems for instances and were abused by some people to be very annoying. Pleroma implemented a large thread filter in the form of the mrf_hellthread module, which blocks these threads based on a configurable threshold.

January 2019: Anti-followbot module

Followbots are unpopular among many users in the fediverse because they are perceived as being a data mining vector, or perhaps just downright creepy. At one point, they were necessary to help bootstrap new instances and get them well-federated, but this niche has been better solved through the use of relays. As a mitigation for these concerns, an anti-followbot mitigation was introduced to MRF.

February 2019: Keyword module, user tags and tag module

Sometimes it is necessary to mark posts as sensitive if they contain certain keywords. On most platforms, this work has to be done manually, and it can take up a lot of time. As a solution to this problem, a module which matches messages based on keywords was added.
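A configuration sketch of the keyword module follows. The option names are from memory and may differ between versions; the idea is that matched posts can be rejected outright, kept off the federated timeline, or have the matching words rewritten.

```elixir
# Sketch of mrf_keyword: act on posts containing listed keywords.
config :pleroma, :mrf_keyword,
  reject: ["spamword"],
  federated_timeline_removal: ["keep-local-only"],
  replace: [{"matched phrase", "replacement"}]
```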

We also added an API which allowed for users to be labelled with various classifiers. This was leveraged inside the MRF framework with a module that acted based on the presence of specific user tags.

April 2019: Pleroma FE integration, Reporting

In April, we added integration for the moderation tools exposed by MRF into Pleroma FE. This mostly consisted of tagging users with the appropriate tags using the user tagging API, but it allows moderation work to be done efficiently.

We also added support for a report system which allows people to report spam and other TOS violations to their admins.

Future

As can be seen, most initiatives involving moderation revolve around the MRF framework, and its future is bright. We are already planning to rework MRF after the Pleroma 1.x release to make it more cleanly behaved. This work involves splitting MRF into classifiers, mutators and subchains.

The idea is that you have modules which detect whether messages meet certain criteria, and if so, they attach classifiers to the message. Mutators then act on the message, making whatever modifications are requested. This flow is controlled by the use of conditional subchains: if classifier X is present, then process the message through subchain Y.
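Sketched as hypothetical Elixir, the flow might look like this. None of these module or function names exist in Pleroma today; this is just one way the design could be expressed.

```elixir
# Hypothetical sketch of the classifier/mutator/subchain design.
defmodule MRF.Pipeline do
  # Classifiers label the message; subchains of mutators run only
  # when their triggering classifier label is present.
  def run(message, classifiers, subchains) do
    labels =
      classifiers
      |> Enum.filter(fn classifier -> classifier.matches?(message) end)
      |> Enum.map(fn classifier -> classifier.label() end)

    Enum.reduce(subchains, message, fn {label, mutators}, msg ->
      if label in labels do
        # Thread the message through each mutator in this subchain.
        Enum.reduce(mutators, msg, fn mutator, m -> mutator.mutate(m) end)
      else
        msg
      end
    end)
  end
end
```

A nice property of this shape is that today's policy modules can be wrapped as a classifier plus a single-mutator subchain, which is what makes the backward compatibility mentioned below plausible.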

I'll be writing more about this design in the near future, but it is promising because it allows for backward compatibility with policy modules written against MRF today.

Some fediverse developers approach project management from the philosophy that they are building a product in its own right instead of a tool. But does that approach really make sense for the fediverse?

It's that time again: patches have been presented which improve Mastodon's compatibility with the rest of the fediverse. However, the usual suspect has expressed disinterest in clicking the merge button. The users protest loudly about this unilateral decision, as the astute reader would expect. Threats of hard forks are made. GitHub's emoji reactions start to arrive, mostly negative. The usual suspect fires back saying that the patches do not fit into his personal vision, leading to more negative reactions. But why?

I believe the main issue at stake is whether the fediverse software is the product, or whether it is the instances themselves which are the product. Yes, both the software and the instances are products, but the question, really, is which one is more impactful?

Gargron (the author of Mastodon), for whatever reason, sees Mastodon itself as the core product. This is obvious based on the marketing copy he writes to promote the Mastodon software and the 300,000+ user instance he personally administrates where he is followed by all new signups by default. It is also obvious based on the dictatorial control he exerts over the software.

But is this view aligned with reality? Mastodon has very few configurable options, but admins have made modifications to the software, which add configuration options that contradict Gargron's personal vision. These features are frequently deployed by Mastodon admins and, to an extent, Mastodon instances compete with each other on various configuration differences: custom emoji, theming, formatting options and even the maximum length of a post. This competition, largely, has been enabled by the existence of “friendly” forks that add the missing configuration options.

My view is different. I see fediverse software as a tool that is used to build a community which optionally exists in a community of communities (the fediverse). In my view, users should be empowered to choose an instance which provides the features they want, with information about what features are available upfront. In essence, it is the instances themselves which are competing for users, not the software.

Monoculture harms competitiveness: there are thousands of Mastodon instances to choose from, but how many of them are truly memorable? How many are shipping stock Mastodon with the same old default color scheme and theme?

Outside of Mastodon, the situation is quite different. Most of us see the software we work on as a tool for facilitating community building. Accordingly, we try to do our best to give admins as many ways as possible to make their instance look and feel as they want. They are building the product that actually matters, we're just facilitating their work. After all, they are the ones who have to spend time customizing, promoting and managing the community they build. This is why Pleroma has extensive configuration and theming options that are presented in a way that is very easy to leverage. Likewise, Friendica, Hubzilla and even GNU Social can be customized in the same way: you're in control as the admin, not a product designer.

But Mastodon is still problematic when it comes to innovation in the fediverse at large. Despite the ability that other fediverse software gives users and admins to present their content in whatever form they want, Mastodon presently fails to render that content correctly:

Mastodon presents lists in an incorrect way.

The patches I referred to earlier correct this problem by changing how Mastodon processes posts from remote instances. They also provide a path toward improving usability in the fediverse by allowing us to work toward phasing out the use of Unicode mathematical alphanumeric characters as a substitute for proper formatting. The majority of fediverse microblogging software has supported this kind of formatting for a long time, many implementations predating Mastodon itself. Improved interoperability with other fediverse implementations sounds like a good thing, right? Well, it's not aligned with the Mastodon vision, so it's rejected.

The viewpoint that the software itself is primarily what matters is stifling fediverse development. As developers, we should be working together to improve the security and expressiveness of the underlying technology. This means that some amount of flexibility is required. Quoting RFC 791:

In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior.

There is no God of the fediverse. The fediverse exists and operates smoothly because we work together, as developers, in concert with the admin and user community at large. Accomplishing this requires compromise, not unilateral decision making.

CRTNet was an experiment to create an IRC network by and for the greater fediverse community.

Unfortunately, the project hasn't worked out in a desirable way. So, the network will be sunset effective March 15.

The rest of this post will be an examination of reasons why the project failed.

Software

CRTNet used what was believed to be a reasonably stable combination of UnrealIRCd and Atheme services. While there were many personal reasons I chose Atheme for the project (like having previously written it), the choice of UnrealIRCd was largely a poor one.

A feature of CRTNet was integration with a bot called viera, which allowed for linking IRC services accounts to fediverse profiles. This feature depended on functional WHOX support, which UnrealIRCd did not provide. So, I found a module which provided WHOX support. All seemed well until a few months later, when I observed that UnrealIRCd was using 13 GB of RAM.

This led us to discuss switching to another software, InspIRCd. Unfortunately, we had standardized on using SPKIFP fingerprints to authenticate the network's servers with each other. Switching to InspIRCd meant abandoning SPKIFP support, so this proposal fizzled out. Meanwhile, my modified UnrealIRCd continued to consume large amounts of RAM.

From a technical perspective, however, the final nail in the coffin was not software-related, but rather the result of IPv4 exhaustion: I needed to move the primary hub, but could not, because I was unable to coordinate access to the secondary hub. The reasons for that are complicated and not very interesting to discuss, so we will just leave it explained as a communications failure.

Cultural problems

The vision behind the project was to create a network for fediverse communities, much in the same way as Snoonet was started for reddit communities.

Unfortunately, what we discovered is that creating such a network results in fediverse 'meta' drama and gossip becoming the primary topic of discussion. With that as the primary topic, the network provided no value to the userbase, so we were unable to gain traction with users.

Finally, structuring the network in an ad hoc way instead of the way a traditional IRC network is structured (CRTNet had no shared responsibility for routing, etc.) led to the final set of technical problems.

Accordingly we are left with a network that has little value and little usage, and so I am sunsetting the project by terminating the primary hub on March 15.

To my knowledge, the main channel still on CRTNet is moving to their own server, irc.catgirl.network. I suggest giving that network a try instead.

Contrary to public perception, CommonsPub is no longer a fork of Pleroma and has not been for some time. They hired professional Elixir developers who rewrote the codebase from scratch, badly in my opinion.

CommonsPub began as a fork of Pleroma in July 2018 with the intention of enabling the creation of a generic platform for federated apps. This was, needless to say, confusing to us: the entire point of the Pleroma project itself is to create a generic platform for federated apps — this is, in fact, why it is called Pleroma: that is a reference to the omnipresent nature of a generic federated app platform. We have also been talking about federated apps for several years now, prior to the announcement of CommonsPub.

It should also be mentioned that at no time did the CommonsPub developers ever make any attempt to talk with or coordinate with us. While it is true that they are free to fork our code at any time, for any reason, it was quite disappointing that they forked our code and then portrayed our project in a light that was misleading at best — they described CommonsPub as existing for the purpose of providing this generic backend and Pleroma as not, when in reality Pleroma has been a generic backend all along.

At any rate, there is not much Pleroma code (but there still is some) remaining in CommonsPub, so I wouldn't classify it as a fork.

CommonsPub is not a generic ActivityPub server, but Pleroma is.

CommonsPub is not built on the ActivityPub protocol. While ActivityPub is used for federation, CommonsPub does not directly model itself on ActivityPub or ActivityStreams 2.0, instead using a custom graph model optimized for GraphQL usage.

Pleroma is built on ActivityPub in all ways: federation, on-disk storage and internal representation. Pleroma walks AS2 object trees as a proper RDF-style graph. Pleroma supports ActivityPub C2S and ActivityPub S2S protocols, as well as API emulations. CommonsPub does not support ActivityPub C2S.

CommonsPub does not even deliver on generic federated apps. Pleroma does.

MoodleNet, the primary application built on CommonsPub, is directly bolted into the CommonsPub server itself.

Pleroma in contrast does not have any application logic directly bolted into the core: federated apps on Pleroma contain all application logic directly in the client or in the API emulations they consume if they are not native ActivityPub C2S clients.

CommonsPub components which remain and have been derived from Pleroma do not provide copyright attribution to Pleroma and thus violate the AGPL license Pleroma is made available to them under. This lack of documented legal provenance is another strong reason to not use CommonsPub in your project: if they do not attribute the code they borrowed from us, how can you know that there are not other missing attributions?