kaniini's blog!

A little over 2 years ago, Pleroma was started. At the time, Pleroma was largely developed by one person, who was busy working toward an MVP. This led to an interesting post being noted in dzuk's controversial blocklist advisory post.

Of course, time has moved on, and Pleroma has gained moderation tools that, in the hands of a skilled admin, provide the best possible moderation experience on the fediverse today. But getting to where we are now from 2 years ago has been a long journey.

moderator role

A few months after the post where lain said that he was still working on basic functionality, Pleroma gained its first moderation tool, around December 2017. You can set the moderator role on a user using the CLI:

$ MIX_ENV=prod mix pleroma.user set kaniini --moderator

Moderators have the ability to do a few things, most notably delete any post from the local instance. For a while, this got the job done for most Pleroma instances, because this was a reasonably quiet period in the fediverse's existence.

April 2018: birth of the Message Rewrite Facility

In April 2018, a new instance called Switter launched in response to the FOSTA/SESTA bill, which unfairly targeted sex workers. This led to some new problems in the fediverse, largely because the fediverse had never before been exposed to an instance designed around advertising. There were also many cultural conflicts, which led to many fights during the launch.

Eventually, Switter modified Mastodon so that their posts would federate in a way that ensured media was always marked sensitive, without requiring their local users to mark their media sensitive themselves, but this was a point of contention for several months.

In the meantime, the very first version of MRF was written and integrated into Pleroma, allowing for admins to force incoming posts from Switter to be unconditionally marked sensitive.

This version of MRF was very limited compared to the MRF we know today. For example, it allowed only one policy module to be loaded at any given time. It also did not implement a proper Elixir behaviour, so the compiler could not validate policy modules for correctness. It did get the job done, however.

May 2018: MRF begins to resemble the framework we have today

The original version of MRF was a minimal patch intended to allow instance admins to be able to block content from a configured set of instances, but the implementation lacked flexibility. href (the admin of pleroma.fr) came along and expanded upon my initial patch by allowing policies to be chained. This was a serious advancement in terms of enabling MRF to turn into the fully-fledged framework we enjoy today.
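Pleroma's real policies are Elixir modules, but the chaining idea itself is simple enough to sketch in Python. Everything below is an illustrative model, not Pleroma's actual API: each policy receives a message and returns it, possibly modified, and the chain is just the policies applied in order.

```python
# Illustrative model of MRF policy chaining (not Pleroma's real Elixir API).
# Each policy takes a message dict and returns it, possibly modified;
# a policy may raise Rejected to drop the message outright.

class Rejected(Exception):
    """Raised by a policy to drop a message."""

def force_sensitive(target_domains):
    """Policy: unconditionally mark posts from certain instances sensitive."""
    def policy(message):
        domain = message["actor"].split("/")[2]
        if domain in target_domains:
            return {**message, "sensitive": True}
        return message
    return policy

def run_policies(message, policies):
    """Apply each configured policy in order."""
    for policy in policies:
        message = policy(message)
    return message

policies = [force_sensitive({"switter.at"})]
msg = {"actor": "https://switter.at/users/someone", "sensitive": False}
print(run_policies(msg, policies)["sensitive"])  # True
```

The original MRF supported exactly one such policy; the chaining change described above is what made the `policies` list possible.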

June 2018: Accept lists

Some instances on the fediverse operate on an accept list basis, where your server has to be explicitly granted permission to federate with the instance. An example of this would be awoo.space.

Based on a request, this functionality was added to Pleroma's MRF in June. This allows admins to set up an instance operating on an accept list basis without having to do any major changes in the code.

December 2018: Large thread filter

Extremely large threads (colloquially referred to as “hellthreads”) caused significant problems for resource consumption on instances and were abused by some people to be very annoying. Pleroma implemented a large thread filter in the form of the mrf_hellthread module, which blocks these threads based on a configurable threshold.

January 2019: Anti-followbot module

Followbots are unpopular among many users in the fediverse because they are perceived as a data mining vector, or perhaps just downright creepy. At one point, they were necessary to help bootstrap new instances and get them well-federated, but this niche has since been better solved through the use of relays. As a mitigation for these concerns, an anti-followbot module was introduced to MRF.

February 2019: Keyword module, user tags and tag module

Sometimes it is necessary to mark posts as sensitive if they contain certain keywords. On most platforms, this work has to be done manually, and it can take up a lot of time. As a solution to this problem, a module which matches messages based on keywords was added.
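The idea can be sketched roughly as follows; this is an illustrative Python model, not the actual module implementation or its configuration format.

```python
# Hypothetical sketch of keyword-based marking, in the spirit of the
# keyword-matching module described above (names and config are made up).

SENSITIVE_KEYWORDS = {"lewd", "nsfw"}

def mark_sensitive_on_keyword(message):
    """Mark a post sensitive if its content matches a configured keyword."""
    content = message.get("content", "").lower()
    if any(word in content for word in SENSITIVE_KEYWORDS):
        return {**message, "sensitive": True}
    return message

post = {"content": "new NSFW artwork is up!", "sensitive": False}
print(mark_sensitive_on_keyword(post)["sensitive"])  # True
```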

We also added an API which allowed for users to be labelled with various classifiers. This was leveraged inside the MRF framework with a module that acted based on the presence of specific user tags.

April 2019: Pleroma FE integration, Reporting

In April, we added integration for the moderation tools exposed by MRF into Pleroma FE. This mostly consisted of tagging users with the appropriate tags using the user tagging API, but it allows efficient moderation work to be done.

We also added support for a report system which allows people to report spam and other TOS violations to their admins.


As can be seen, most initiatives involving moderation center on the MRF framework, and the future of MRF is bright. We are already planning to rework the MRF framework after the Pleroma 1.x release to make it more cleanly behaved. This work involves splitting MRF into classifiers, mutators and subchains.

The idea is that you have modules which detect if messages meet certain criterion, and if so, they attach classifiers to the message. Mutators then act on the message, making whatever modifications are requested. This flow is controlled by the use of conditional subchains: if classifier X is present, then process the message through subchain Y.
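As a rough sketch of that planned flow, with entirely hypothetical names (the real rework may differ):

```python
# Hypothetical sketch of the classifier/mutator/subchain design
# (all names invented; the actual rework may look quite different).

def hellthread_classifier(message):
    # classifiers attach labels when a message meets some criterion
    return ["hellthread"] if len(message.get("to", [])) > 20 else []

def drop_mutator(message):
    # mutators act on the message; returning None means "drop it"
    return None

SUBCHAINS = {"hellthread": [drop_mutator]}

def process(message, classifiers):
    labels = [label for c in classifiers for label in c(message)]
    for label in labels:                          # conditional subchains:
        for mutator in SUBCHAINS.get(label, []):  # label X -> subchain Y
            message = mutator(message)
            if message is None:
                return None
    return message

huge = {"to": [f"https://example.social/users/u{i}" for i in range(30)]}
print(process(huge, [hellthread_classifier]))  # None (dropped)
```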

I'll be writing more about this design in the near future, but it is promising because it allows for backward compatibility with policy modules written against MRF today.

Some fediverse developers approach project management from the philosophy that they are building a product in its own right instead of a tool. But does that approach really make sense for the fediverse?

It's that time again: patches have been presented which improve Mastodon's compatibility with the rest of the fediverse. However, the usual suspect has expressed disinterest in clicking the merge button. The users protest loudly about this unilateral decision, as the astute reader would expect. Threats of hard forks are made. GitHub's emoji reactions start to arrive, mostly negative. The usual suspect fires back, saying that the patches do not fit his personal vision, leading to more negative reactions. But why?

I believe the main issue at stake is whether the fediverse software is the product, or whether it is the instances themselves which are the product. Yes, both the software and the instances are products, but the real question is which one is more impactful.

Gargron (the author of Mastodon), for whatever reason, sees Mastodon itself as the core product. This is obvious based on the marketing copy he writes to promote the Mastodon software and the 300,000+ user instance he personally administrates where he is followed by all new signups by default. It is also obvious based on the dictatorial control he exerts over the software.

But is this view aligned with reality? Mastodon has very few configurable options, but admins have made modifications to the software, which add configuration options that contradict Gargron's personal vision. These features are frequently deployed by Mastodon admins and, to an extent, Mastodon instances compete with each other on various configuration differences: custom emoji, theming, formatting options and even the maximum length of a post. This competition, largely, has been enabled by the existence of “friendly” forks that add the missing configuration options.

My view is different. I see fediverse software as a tool that is used to build a community which optionally exists in a community of communities (the fediverse). In my view, users should be empowered to choose an instance which provides the features they want, with information about what features are available upfront. In essence, it is the instances themselves which are competing for users, not the software.

Monoculture harms competitiveness: there are thousands of Mastodon instances to choose from, but how many of them are truly memorable? How many are shipping stock Mastodon with the same old default color scheme and theme?

Outside of Mastodon, the situation is quite different. Most of us see the software we work on as a tool for facilitating community building. Accordingly, we try to do our best to give admins as many ways as possible to make their instance look and feel as they want. They are building the product that actually matters, we're just facilitating their work. After all, they are the ones who have to spend time customizing, promoting and managing the community they build. This is why Pleroma has extensive configuration and theming options that are presented in a way that is very easy to leverage. Likewise, Friendica, Hubzilla and even GNU Social can be customized in the same way: you're in control as the admin, not a product designer.

But Mastodon is still problematic when it comes to innovation in the fediverse at large. Despite the ability that other fediverse software gives users and admins to present their content in whatever form they want, Mastodon presently fails to render that content correctly:

Mastodon presents lists in an incorrect way.

The patches I referred to earlier correct this problem by changing how Mastodon processes posts from remote instances. They also provide a path toward improving usability in the fediverse by allowing us to work toward phasing out the use of Unicode mathematical characters as a substitute for proper formatting. The majority of fediverse microblogging software has supported this kind of formatting for a long time, with many implementations predating Mastodon itself. Improved interoperability with other fediverse implementations sounds like a good thing, right? Well, it's not aligned with the Mastodon vision, so it's rejected.

The viewpoint that the software itself is primarily what matters is stifling fediverse development. As developers, we should be working together to improve the security and expressiveness of the underlying technology. This means that some amount of flexibility is required. Quoting RFC 791:

In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior.

There is no God of the fediverse. The fediverse exists and operates smoothly because we work together, as developers, in concert with the admin and user community at large. Accomplishing this requires compromise, not unilateral decision making.

CRTNet was an experiment to create an IRC network by and for the greater fediverse community.

Unfortunately, the project hasn't worked out in a desirable way. So, the network will be sunset effective March 15.

The rest of this post will be an examination of reasons why the project failed.


CRTNet used what was believed to be a reasonably stable combination of UnrealIRCd and Atheme services. While there were many personal reasons I chose to use atheme for the project (like having previously written them), the choice of UnrealIRCd was largely a poor one.

A feature of CRTNet was integration with a bot called viera, which allowed linking IRC services accounts to fediverse profiles. This feature depended on functional WHOX support, which UnrealIRCd did not provide, so I found a module which provided WHOX support. All seemed well until a few months later, when I observed UnrealIRCd using 13 GB of RAM.

This led us to discuss switching to other software, InspIRCd. Unfortunately, we had standardized on using SPKIFP fingerprints to authenticate the network's servers with each other, and switching to InspIRCd meant abandoning SPKIFP support, so this proposal fizzled out. Meanwhile, my modified UnrealIRCd continues to consume large amounts of RAM.

From a technical perspective, the final nail in the coffin, however, was not software-related but the result of IPv4 exhaustion: I needed to move the primary hub but could not, due to being unable to coordinate access to the secondary hub. The reasons for that are complicated and not very interesting to discuss, so we will just leave it explained as a communications failure.

Cultural problems

The vision behind the project was to create a network for fediverse communities, much in the same way as Snoonet was started for reddit communities.

Unfortunately, what we discovered is that creating such a network results in fediverse 'meta' drama and gossip becoming the primary topic of discussion. With that as the primary topic, the network provided no value to the userbase, so we were unable to gain traction with users.

Finally, structuring the network in an ad-hoc way instead of the way a traditional IRC network is structured (CRTNet had no shared responsibility for routing, etc.) led to the final set of technical problems.

Accordingly, we are left with a network that has little value and little usage, so I am sunsetting the project by terminating the primary hub on March 15.

To my knowledge, the main channel still on CRTNet is moving to their own server, irc.catgirl.network. I suggest giving that network a try instead.

Contrary to public perception, CommonsPub is no longer a fork of Pleroma and has not been for some time. They hired some professional Elixir developers who rewrote the codebase from scratch, in my opinion badly.

CommonsPub began as a fork of Pleroma in July 2018 with the intention of enabling the creation of a generic platform for federated apps. This was, needless to say, confusing to us: the entire point of the Pleroma project is to create a generic platform for federated apps. This is, in fact, why it is called Pleroma: the name is a reference to the omnipresent nature of a generic federated app platform. We had also been talking about federated apps for several years, prior to the announcement of CommonsPub.

It should also be mentioned that at no time did the CommonsPub developers ever make any attempt to talk with or coordinate with us. While it is true that they are free to fork our code at any time, for any reason, it was quite disappointing that they forked our code and then contrasted our project in a light that was misleading at best — they discussed CommonsPub as existing for the purpose of providing this generic backend and Pleroma as not, while in reality Pleroma has been a generic backend all along.

At any rate, there is not much Pleroma code (but there still is some) remaining in CommonsPub, so I wouldn't classify it as a fork.

CommonsPub is not a generic ActivityPub server, but Pleroma is.

CommonsPub is not built on the ActivityPub protocol. While ActivityPub is used for federation, CommonsPub does not directly model itself on ActivityPub or ActivityStreams 2.0, instead using a custom graph model optimized for GraphQL usage.

Pleroma is built on ActivityPub in all ways: federation, on-disk storage and internal representation. Pleroma walks AS2 object trees as a proper RDF-style graph. Pleroma supports ActivityPub C2S and ActivityPub S2S protocols, as well as API emulations. CommonsPub does not support ActivityPub C2S.

CommonsPub does not even deliver on generic federated apps. Pleroma does.

MoodleNet, the primary application built on CommonsPub, is directly bolted into the CommonsPub server itself.

Pleroma in contrast does not have any application logic directly bolted into the core: federated apps on Pleroma contain all application logic directly in the client or in the API emulations they consume if they are not native ActivityPub C2S clients.

CommonsPub components which remain and have been derived from Pleroma do not provide copyright attribution to Pleroma and thus violate the AGPL license Pleroma is made available to them under. This lack of documented legal provenance is another strong reason to not use CommonsPub in your project: if they do not attribute the code they borrowed from us, how can you know that there are not other missing attributions?

This is the third article in a series of articles about ActivityPub detailing the challenges of building a trustworthy, secure implementation of the protocol stack.

This entry also takes a significant technical deep dive into informally specifying a set of protocol extensions to ActivityPub. Formal specification of these extensions will be done in the Litepub working group and will likely see some amount of change, so this blog entry should be considered non-normative in its entirety.

Over the past few years of creating and revising ActivityPub, many people have made a push for the inclusion of a capability-based security model as the core security primitive (instead, the core security primitive is “this section is non-normative,” but I'm not salty), but what would that look like?

There are a few different proposals in the works, at varying stages of development, that could be used to retrofit capability-based security onto ActivityPub:

  • OCAP-LD, which adds a generic object capabilities framework for any consumer of JSON-LD (such as the Linked Data Platform, or the neutered version of LDP that is described as part of ActivityPub),

  • Litepub Capability Enforcement, which is preliminarily described by this blog post, and

  • PolaPub aka CapabilityPub which is only an outline stored in an .org document. It is presumed that PolaPub or CapabilityPub or whatever it is called next week will be built on OCAP-LD, but in fairness, this is pure speculation.

Why capabilities instead of ACLs?

ActivityPub, like the fediverse in general, is an open-world system. Traditional ACLs fail to scale to the possibility of hundreds of millions of accounts across millions of instances. Object capabilities, on the other hand, are opaque tokens which allow the bearer to exercise a set of permissions.

The capability enforcement proposals presently proposed would be deployed as a hybrid approach: capabilities to provide horizontal scalability for the large number of accounts and instances, and deny lists to block specific interactions from actors. The combination of capabilities and deny lists provides for a highly robust permissions system for the fediverse, and mimics previous work on federated open world systems.
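A toy model of that hybrid approach might look like this in Python; the token format, storage and API here are all assumptions for illustration, not part of any concrete proposal.

```python
# Toy model of the hybrid approach: opaque capability tokens grant a set
# of permissions to whoever bears them, while a deny list blocks specific
# actors outright. Entirely illustrative; not from any concrete proposal.

import secrets

CAPS = {}  # token -> set of permitted activity types
DENY = {"https://bad.example/users/mallory"}

def mint_capability(permissions):
    token = secrets.token_urlsafe(16)  # opaque and hard to forge
    CAPS[token] = set(permissions)
    return token

def may_invoke(token, activity_type, actor):
    if actor in DENY:  # deny list is checked first
        return False
    return activity_type in CAPS.get(token, set())

cap = mint_capability({"Like", "Create"})
print(may_invoke(cap, "Like", "https://example.social/users/alyssa"))      # True
print(may_invoke(cap, "Announce", "https://example.social/users/alyssa"))  # False
```

Note that the token itself reveals nothing about what it grants, which is what gives the scheme its horizontal scalability: no central ACL lookup is needed, only possession of the token.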

Drawing inspiration from previous work: the Second Life Open Grid Protocol

I've been following large scale interactive communications architectures for many years, which has allowed me to learn many things about the design and implementation of open world horizontally-scaled systems.

One of the projects that I followed very closely was started in 2008, as a collaboration between Linden Lab, IBM and some other participants: the Open Grid Protocol. While the Open Grid Protocol itself ultimately did not work out for various reasons (largely political), a large amount of the work was recycled into a significant redesign of the Second Life service's backend, and the SL grid itself now resembles a federated network in many ways.

OGP was built on the concept of using capability tokens as URIs, which would either map to an active web service or a confirmation. Since the capability token was opaque and difficult to forge, it provided sufficient proof of authentication without sharing any actual information about the authorization itself: the web services act on the session established by the capability URIs instead of on an account directly.

Like ActivityPub, OGP is an actor-centric messaging protocol: when logging in, the login server provides a set of “seed capabilities”, which allow use of the other services. From the perspective of the other services, invocation of those capability URIs is seen as an account performing an action. Sound familiar in a way?

The way Linden Lab implemented this part of OGP was by having a capabilities server which handled routing the invoked capability URIs to other web services. This step is not in and of itself required; an OGP implementation could handle consumption of the capability URIs directly, as OpenSim does, for example.

Bringing capability URIs into ActivityPub as a first step

So, we have established that capability URIs are an opaque token that can be called as a substitute for whatever backend web service was going to be used in the first place. But, what does that get us?

The simplest way to look at it is this way: there are activities which are relayable and others which are not relayable. Both can become capability-enabled, but require separate strategies.

Relayable activities

Create (in this context, thread replies) activities are relayable. This means the capability can simply be invoked by treating it as an inbox, and the server the capability is invoked on will relay the side effects forward. The exact mechanism for this is not yet defined, as it will require prototyping and verification, but it's not impossible. Capability URIs for relayable activities can likely be directly aliased to the sharedInbox if one is available, however.

Intransitive activities

Intransitive activities (ones which act on a pre-existing object that is not supplied) like Announce, Like, Follow will require proofs. We can already provide proofs in the form of an Accept activity:

  {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://example.social/proofs/fa43926a-63e5-4133-9c52-36d5fc6094fa",
    "type": "Accept",
    "actor": "https://example.social/users/bob",
    "object": {
      "id": "https://example.social/activities/12945622-9ea5-46f9-9005-41c5a2364f9c",
      "type": "Announce",
      "object": "https://example.social/objects/d6cb8429-4d26-40fc-90ef-a100503afb73",
      "actor": "https://example.social/users/alyssa",
      "to": ["https://example.social/users/alyssa/followers"],
      "cc": ["https://www.w3.org/ns/activitystreams#Public"]
    }
  }

This proof can optionally be signed with LDS in the same way as OCAP-LD proofs. Signing the proof is not covered here. The proof must be fetchable, as anybody looking to distribute intransitive actions on objects known to be security-labelled must be able to validate the proof somehow.
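The exact validation flow is not yet specified, but one plausible shape is to re-fetch the proof from its origin and compare it against the copy you were handed. A minimal Python sketch, where fetch_json stands in for an HTTP GET that returns parsed JSON:

```python
# Plausible shape for proof validation: re-fetch the Accept from its
# origin and compare it against the copy we were handed. fetch_json is
# a stand-in for an HTTP GET that returns parsed JSON.

def verify_proof(accept, fetch_json):
    if accept.get("type") != "Accept":
        return False
    fetched = fetch_json(accept["id"])  # the proof must be fetchable
    # the authoritative copy must wrap the same intransitive activity
    return fetched.get("object") == accept.get("object")

proof = {
    "id": "https://example.social/proofs/fa43926a-63e5-4133-9c52-36d5fc6094fa",
    "type": "Accept",
    "object": {"type": "Announce",
               "id": "https://example.social/activities/1"},
}
store = {proof["id"]: proof}  # pretend this is the origin server
print(verify_proof(proof, lambda uri: store[uri]))  # True
```

Verifying an attached LDS signature instead of re-fetching would be the offline variant of the same check.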

Object capability discovery

A security-labelled object has a new field, capabilities, which is an Object containing the set of allowed actions and the corresponding capability URI for each:

  {
    "@context": [...],
    "capabilities": {
      "Announce": "https://example.social/caps/4f230498-5a01-4bb5-b06b-e3625fc03947",
      "Create": "https://example.social/caps/d4c4d96a-36d9-4df5-b9da-4b8c74e02567",
      "Like": "https://example.social/caps/21a946fb-1bad-48ae-82c1-e8d1d2ab28c3"
    }
  }

Example: Invoking a capability

Bob makes a post, which he allows liking and replying to, but not announcing. That post looks like this:

  {
    "@context": [...],
    "capabilities": {
      "Create": "https://example.social/caps/d4c4d96a-36d9-4df5-b9da-4b8c74e02567",
      "Like": "https://example.social/caps/21a946fb-1bad-48ae-82c1-e8d1d2ab28c3"
    },
    "id": "https://example.social/objects/d6cb8429-4d26-40fc-90ef-a100503afb73",
    "type": "Note",
    "content": "I'm really excited about the new capabilities feature!",
    "attributedTo": "https://example.social/users/bob"
  }
As you can tell, the capabilities object does not include an Announce grant, which means that a proof will not be provided for Announce objects.

Alyssa wants to like the post, so she creates a normal Like activity and sends it to the Like capability URI. The server responds with an Accept object that she can forward to her recipients:

  {
    "@context": [...],
    "id": "https://example.social/proofs/fa43926a-63e5-4133-9c52-36d5fc6094fa",
    "type": "Accept",
    "actor": "https://example.social/users/bob",
    "object": {
      "id": "https://example.social/activities/12945622-9ea5-46f9-9005-41c5a2364f9c",
      "type": "Like",
      "object": "https://example.social/objects/d6cb8429-4d26-40fc-90ef-a100503afb73",
      "actor": "https://example.social/users/alyssa",
      "to": ["https://example.social/users/alyssa/followers"]
    }
  }

Bob can be removed from the recipient list, as he already processed the side effects of the activity when he accepted it. Alyssa can then forward this object on to her followers, which can verify the proof by fetching it, or alternatively verifying the LDS signature if present.

Example: Invoking a relayable capability

Some capabilities, like Create, result in the server hosting the invoked capability relaying the message forward instead of using proofs.

In this example, the post being relayed is assumed to be publicly accessible. Instances where a post is not publicly accessible should create a capability URI which returns the post object.

Alyssa decides to post a reply to the message from Bob she just liked above:

  {
    "@context": [...],
    "to": ["https://example.social/users/alyssa/followers"],
    "cc": ["https://www.w3.org/ns/activitystreams#Public"],
    "type": "Create",
    "actor": "https://example.social/users/alyssa",
    "object": {
      "capabilities": {
        "Create": "https://example.social/caps/97706df4-86c0-480d-b8f5-f362a1f45a01",
        "Like": "https://example.social/caps/6db4bec5-619d-45a2-b3d7-82e5a30ce8a5"
      },
      "type": "Note",
      "content": "I am really liking the new object capabilities feature too!",
      "attributedTo": "https://example.social/users/alyssa"
    }
  }

An astute reader will note that the capability set is the same as the parent's. This is because the parent's author reserves the right to reject any post which requests more rights than were in the parent post's capability set.

Alyssa POSTs this message to the Create capability from the original message and gets back a 202 Accepted status from the server. The server will then relay the message to her followers collection by dereferencing it remotely.
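That invocation step can be sketched as follows; invoke_capability is a hypothetical name, and post stands in for an HTTP client such as requests.post:

```python
# Sketch of the invocation step: POST the Create activity to the
# capability URI and treat 202 Accepted as "the origin will relay it".
# invoke_capability is a hypothetical name; post stands in for an HTTP
# client such as requests.post.

def invoke_capability(cap_uri, activity, post):
    status = post(cap_uri, json=activity)
    if status == 202:
        return "accepted-for-relay"  # origin relays to the audience
    raise RuntimeError(f"capability invocation failed: {status}")

create = {"type": "Create", "actor": "https://example.social/users/alyssa"}
fake_post = lambda uri, json: 202  # simulate a 202 Accepted response
print(invoke_capability(
    "https://example.social/caps/97706df4-86c0-480d-b8f5-f362a1f45a01",
    create, fake_post))  # accepted-for-relay
```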

A possible extension here would be to allow the Create message to become intransitive and combined with a proof. This could be done by leaving the to and cc fields empty, and specifying audience instead or something along those lines.

Considerations with backwards compatibility

It goes without saying that an ActivityPub 1.0 implementation can ignore these capabilities and do whatever it wants. Thus, it is suggested that messages with security labelling contrary to what is considered normal for ActivityPub 1.0 not be sent to ActivityPub 1.0 servers.

Determining what servers are compatible ahead of time is still an area that needs significant research activity, but I believe it can be done!

This is the second article in a series that will be a fairly critical review of ActivityPub from a trust & safety perspective. Stay tuned for more.

In our previous episode, I laid out some personal observations about implementing an AP stack from scratch over the past year. When we started this arduous task, there were only three other AP implementations in progress: Mastodon, Kroeg and PubCrawl (the AP transport for Hubzilla), so it has been a pretty significant journey.

I also described how ActivityPub was a student of the 'worse is better' design philosophy. Some people felt a little hurt by this, but they shouldn't have: after all, UNIX (of which modern Linux and BSD systems are a derivative) is also a student of the 'worse is better' philosophy. And much like the unices of yesteryear, ActivityPub right now has a lot of missing pieces. But that's alright, as long as the participants in this experiment understand the limitations.

For the first time in decades, the success of ActivityPub, in part by way of its aggressive adoption of the 'worse is better' philosophy (which enabled the project to actually ship something), has gained traction that has inspired people to believe that perhaps we can take back the Web and make it open again. This in itself is a wonderful thing, and we must do our best to seize this opportunity and run with it.

As I mentioned, there have been a huge number of projects looking to implement AP in some way or another, many not yet public but seeking guidance on how to write an AP stack. My DMs have been quite busy with questions about ActivityPub over the past couple of months.

Let's talk about the elephant in the room. Actually, no, not that one.

ActivityPub has been brought this far by the W3C Social CG. This is a Community Group that was chartered by the W3C to advance the Social Web.

While they did a good job of getting some of the best minds into the same room to talk about building a federated social web, a lot of decisions were already predetermined (using pump.io as a basis) or left underspecified to satisfy other groups inside the W3C. Finally, the ActivityPub specification itself claimed that pure JSON could be used to implement ActivityPub, but the W3C kept pushing for layered specs on top, like JSON-LD Linked Data Signatures, a spec that is not yet finalized and that depends on JSON-LD.

LDS has a lot of problems, but I have covered them already; you can read about some of those problems by reading up on a mitigation known as Blind Key Rotation. Anyway, this isn't really about the W3C pushing for the use of LDS in AP; that is just one illustrative example of trying to bundle JSON-LD and its dependencies into ActivityPub to make JSON-LD a de facto requirement.

Because of this bundling issue, we established a new community group called LitePub, meant to be a workspace for people actually implementing ActivityPub stacks, so that they could get documentation and support for using ActivityPub without JSON-LD, or with JSON-LD in a safe way. To date, the LitePub community is one of the best resources for asking questions about ActivityPub and getting real answers that can be used in production today.

But to build the next generation of ActivityPub, the LitePub group isn't enough. Is the W3C still interested? Unfortunately, from what I can tell, not really: they are pursuing another system, developed in-house, called SOLID, which is built on the Linked Data Platform. Since SOLID is being developed by the W3C top brass, I would assume that they aren't interested in stewarding a new revision of ActivityPub. And why would they be? SOLID is essentially a semantic-web retread of ActivityPub, which gives the W3C top brass exactly what they wanted in the first place.

In some ways, I would argue that the W3C's perceived disinterest in Social Web technologies other than SOLID largely has to do with fediverse projects having had a very lukewarm response to JSON-LD and LDS.

The good news is that there have been some initial conversations between a few projects on what a working group to build the next generation of ActivityPub would look like, how it would be managed, and how it would be funded. We will be having more of these conversations over the next few months.

ActivityPub: the present state

In the first blog post, I went into a little detail about the present state of ActivityPub. But is it really as bad as I said?

I am going to break down a few examples of faults in the protocol and talk about their current state as well as what we are doing for short-term mitigations and where we are doing them.

Ambiguous addressing: is it a DM or just a post directly addressed to a circle of friends?

As Osada and Hubzilla started to get attention, Mastodon and Pleroma users started to see weird behavior in their notifications and timelines: messages from people they didn't necessarily follow which were directly addressed to them. These are messages sent to a group of selected friends, but which can otherwise be forwarded (boosted/repeated/announced) to other audiences.

In other words, they do not have the same semantic meaning as a DM. But due to the way they were addressed, Mastodon and Pleroma saw them as a DM.

Mastodon fixed this issue in 2.6 by adding heuristics: if a message has recipients in both the to and cc fields, then it's a public message that is addressed to a group of recipients, and not a DM. Unfortunately, Mastodon treats it similarly to a followers-only post and does not infer the correct rights.

Meanwhile, Pleroma and Friendica came up with the idea to add a semantic hint to the message with the litepub:directMessage field. If this is set to true, it should be considered as a direct message. If the field is set to false, then it should be considered a group message. If the field is unset, then heuristics are used to determine the message type.

Pleroma has a branch in progress which adds both support for the litepub:directMessage field as well as the heuristics. It should be landing shortly (it needs a rebase and I need to fix up some of the heuristics).
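The heuristics and the semantic hint described above can be sketched as follows. This is an illustrative composition of the rules, assuming the litepub:directMessage field has been expanded to a plain `directMessage` key; it is not Pleroma's actual code.

```python
# Sketch: classify an incoming activity as a DM or not.
# An explicit litepub:directMessage hint wins; otherwise fall back
# to addressing heuristics similar to Mastodon 2.6's.

AS_PUBLIC = "https://www.w3.org/ns/activitystreams#Public"

def is_direct_message(activity: dict) -> bool:
    # Explicit semantic hint (expanded from litepub:directMessage).
    hint = activity.get("directMessage")
    if hint is not None:
        return bool(hint)

    to = activity.get("to", [])
    cc = activity.get("cc", [])

    # Anything addressed to the public collection is clearly not a DM.
    if AS_PUBLIC in to or AS_PUBLIC in cc:
        return False

    # Heuristic: recipients in both `to` and `cc` indicate a post
    # addressed to a circle of friends, not a DM.
    if to and cc:
        return False

    return bool(to)
```

The key design point is precedence: the explicit flag, when present, always overrides the heuristics, so implementations that set it never depend on guesswork.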

So overall, the issue is reasonably mitigated at this point.

Fake direction attacks

Several months ago, Puckipedia did some fake direction testing against mainstream ActivityPub implementations. Fake direction attacks are especially problematic because they allow spoofing to happen.

She found vulnerabilities in Mastodon, Pleroma and PixelFed, as well as, more recently, in a couple of other fediverse projects.

The vulnerabilities she reported in Mastodon, Pleroma and PixelFed have been fixed, but, as she observes, the class of vulnerability keeps appearing.

In part, we can mitigate this by writing excellent security documentation and referring people to read it. This is something that I hope the LitePub group can do in the future.

But for now, I would say this issue is not fully mitigated.

Leakage caused by Mastodon's followers-only scope

Software which is directly compatible with the Mastodon followers-only scope has a few problems, which I am grouping together here:

  • New followers can see content that was posted before they were authorized to view any followers-only content

  • Replies to followers-only posts are addressed to their own followers instead of the followers collection of the OP at the time the post was created (which creates metadata leaks about the OP)

  • Software which does not support the followers-only scope can dereference the OP's followers collection in any way they wish, including interpreting it as as:Public (this is explicitly allowed by the ActivityStreams 2.0 specification, you can't even make this up)

Mitigation of this is actually incredibly easy, which makes me question why Mastodon didn't do it to begin with: simply expand the followers collection when preparing to send the message outbound.

An implementation of this will be landing in Pleroma soon to harden the followers-only scope as well as fix followers-only threads to be more usable.

Implementation of this mitigation also brings the followers-only threads to Friendica and Hubzilla in a safe and compatible way: all fediverse software will be able to properly interact with the threads.
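The mitigation is simple enough to sketch in a few lines: at delivery time, replace the opaque followers-collection URI with the concrete list of follower actor URIs. This is a hypothetical illustration of the idea, not Pleroma's implementation.

```python
# Sketch: expand a followers-collection URI into explicit recipients
# before sending the activity outbound. This freezes the audience at
# post time, so later followers cannot see earlier posts, and software
# that cannot dereference the collection still delivers correctly.

def expand_followers(activity: dict, followers_uri: str,
                     followers: list) -> dict:
    expanded = dict(activity)
    for field in ("to", "cc"):
        recipients = expanded.get(field, [])
        if followers_uri in recipients:
            expanded[field] = (
                [r for r in recipients if r != followers_uri] + followers
            )
    return expanded
```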

The “don't @ me” problem

Some of this interpretation of Zot may be slightly wrong; it is based on my reading of the specifications for Zot and Zot 6.

Other federated protocols such as DFRN, Zot and Zot 6 provide a rich framework for defining what interactions are allowed with a given message. ActivityPub doesn't.

DFRN provides UI hints on each object that hint at what may be done with the object, but uses a capabilities system under the hood. Capability enforcement is done by the “feed producer,” which either accepts your request or denies it. If you comment on a post in DFRN, it is the responsibility of the parent “feed producer” to forward your post onward through the network.

Zot uses a similar capabilities system but provides a magic signature in response to consuming the capability, which you then forward as proof of acceptance. Zot 6 uses a similar authentication scheme, except using OpenWebAuth instead of the original Zot authentication scheme.

For ActivityPub, my proposal is to use a system of capability URIs and proof objects that are cross-checked by the receiving server. Cryptographic signatures are not a component of this proof system; it is strictly capability-based. Cryptographic verification could be provided by leveraging HTTP Signatures to sign the response, if desired. I am still working out the details of precisely how this will work, and that will probably be what the next blog post is about.

As a datapoint: in Pleroma, we already use this cross-checking technique to verify objects which have been forwarded to us due to ActivityPub §7.1.2. This allows us to avoid JSON-LD and LDS signatures and is the recommended way to verify forwarded objects in LitePub implementations.
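The cross-checking technique can be sketched like this: instead of trusting a forwarded object as delivered, re-fetch it from its canonical `id` and use the origin's copy. Here `fetch_json` stands in for an HTTP client, and the names are illustrative rather than Pleroma's actual API.

```python
# Sketch: verify a forwarded object by dereferencing its `id` at the
# origin server. The fetched copy is authoritative; the forwarded
# payload is discarded, defeating spoofed or tampered forwards.

from typing import Callable, Optional
from urllib.parse import urlparse

def verify_forwarded_object(
    forwarded: dict,
    fetch_json: Callable[[str], Optional[dict]],
) -> Optional[dict]:
    object_id = forwarded.get("id")
    if not object_id or urlparse(object_id).scheme not in ("http", "https"):
        return None

    fetched = fetch_json(object_id)

    # Reject if the origin does not vouch for this exact object.
    if fetched is None or fetched.get("id") != object_id:
        return None
    return fetched
```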

Unauthenticated object fetching

Right now, due to the nature of ActivityPub and the design motivations behind it, fetching public objects is entirely unauthenticated.

This has led to a few incidents where fediverse users have gotten upset that their posts still arrived at servers they had blocked, which they quite naturally did not expect.

Mastodon has implemented an extension for post fetching where fetching private posts is authenticated using the HTTP Signature of the user who is fetching the post. This is a possible way of solving the authentication problem: instances can be identified based on which actor signed the request.

However, I don't think fetching private posts this way is a good idea (such fetches should always fail), and I wouldn't recommend it. With that said, a more generalized approach based on using HTTP Signatures to fetch public posts could be workable.

But I do not think the AP server should use a random user's key to sign the requests: instead there should be an AP actor which explicitly represents the whole instance, and the instance actor's key should be used to sign the fetch requests instead. That way information about individual users isn't leaked, and signatures aren't created without the express consent of a random instance user.

Once object fetches are properly authenticated in a way that instances are identifiable, then objects can be selectively disclosed. This also hardens object fetching via third parties such as crawlers.
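As a sketch of what an instance-actor fetch could look like under the draft-cavage HTTP Signatures scheme: the helper names, the key URI, and the choice of signed headers here are assumptions for illustration, not a description of any particular implementation.

```python
# Sketch: build headers for a GET request signed with a dedicated
# instance actor's key, so the fetch identifies the instance rather
# than any individual user. `sign_rsa_sha256` is a stand-in for a
# real cryptographic signing call over the signing string.

import base64
from datetime import datetime, timezone
from urllib.parse import urlparse

def build_signed_fetch_headers(url: str, instance_key_id: str,
                               sign_rsa_sha256) -> dict:
    parsed = urlparse(url)
    date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")

    # The signing string covers the request-target pseudo-header plus
    # the Host and Date headers, in the order declared below.
    signing_string = (
        f"(request-target): get {parsed.path}\n"
        f"host: {parsed.netloc}\n"
        f"date: {date}"
    )
    signature = base64.b64encode(
        sign_rsa_sha256(signing_string.encode())
    ).decode()

    return {
        "Host": parsed.netloc,
        "Date": date,
        "Signature": (
            f'keyId="{instance_key_id}",'
            f'algorithm="rsa-sha256",'
            f'headers="(request-target) host date",'
            f'signature="{signature}"'
        ),
    }
```

Because the signature covers the Date header, the receiving server can also reject stale requests, which limits replay of a captured fetch.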


In this particular blog entry, I discussed why ActivityPub is still the hero we need despite being designed with the 'worse is better' philosophy, outlined some early plans for cross-project collaboration on a next-generation ActivityPub-based protocol, and examined a few of the common problem areas with ActivityPub and how we can mitigate them in the future.

And with that, despite the present issues we face with ActivityPub, I will end this by borrowing a common saying from the cryptocurrency community: the future is bright, the future is decentralized.

This is the first article in a series that will be a fairly critical review of ActivityPub from a trust & safety perspective. Stay tuned for more.

In the modern day, I and many other developers working on libre software have been exposed to a protocol design philosophy that emphasizes safety and correctness. That philosophy can be summarized with these goals:

  • Simplicity: the protocol must be simple to implement. It is more important for the protocol to be simple than the backend implementation.

  • Correctness: the protocol must be verifiably correct. Incorrect behavior is simply not allowed.

  • Safety: the protocol must be designed in a way that is safe. Behavior and functionality which risks safety is considered incorrect.

  • Completeness: the protocol must cover as many situations as is practical. All reasonably expected cases must be covered. Simplicity is not a valid excuse to reduce completeness.

Most people would correctly refer to these as good characteristics and overall the right way to approach designing protocols, especially in a federated and social setting. In many ways, the Diaspora protocol could be considered an example of this philosophy of design.

The “worse is better” approach to protocol design is only slightly different:

  • Simplicity: the protocol must be simple to implement. It is important for the backend implementation to be equally simple as the protocol itself. Simplicity of both implementation and protocol are the most important considerations in the design.

  • Correctness: the protocol must be correct when tested against reasonably expected cases. It is more important to be simple than correct. Inconsistencies between real implementations and theoretical implementations are acceptable.

  • Safety: the protocol must be safe when tested against basic use cases. It is more important to be simple than safe.

  • Completeness: the protocol must cover reasonably expected cases. It is more important for the protocol to be simple than complete. Under-specification is acceptable when it improves the simplicity of the protocol.

OStatus and ActivityPub are examples of the “worse is better” approach to protocol design. I have intentionally portrayed this design approach in a way to attempt to convince you that it is a really bad approach.

However, I do believe that this approach has better survival characteristics, even though it is a considerably worse way to design protocols and creates technologies that people cannot fully trust or use with confidence in their safety.

To understand why, we have to look at both what expected security features of federated social networks are, and what people mostly use social networks for.

When you ask people what security features they expect of a federated social networking service such as Mastodon or Pleroma, they usually reply with a list like this:

  • I should be able to interact with my friends.

  • The messages I share only with my friends should be handled in a secure manner. I should be able to depend on the software to not compromise my private posts.

  • Blocking should work reasonably well: if I block someone, they should disappear from my experience.

These requirements sound reasonable, right? And of course, ActivityPub mostly gets the job done. After all, the main use of social media is shitposting, posting selfies of yourself and sharing pictures of your dog. But would they be better served by a different protocol? Absolutely.

See, the thing is, ActivityPub is like a virus. The protocol is simple enough to implement that people can actually do it. And they are, aren't they? There are over 40 applications presently in development that use ActivityPub as the basis of their networking stack.

Why is this? Because, despite the design flaws in ActivityPub, it is generally good enough: you can interact with your friends, and in compliant implementations, addressing ensures that nobody else except for those you explicitly authorize will read your messages.

But it's not good enough: for example, people have expressed that they want others to be able to read messages, but not reply to them.

Had ActivityPub been a capability-based system instead of a signature-based system, this would never have been a concern to begin with: replies to the message would have gone to a special capability URI and then accepted or rejected.

There are similar problems with things like the Mastodon “followers-only” posts and general concerns like direct messaging: these types of messages imply specific policy, but there is no mechanism in ActivityPub to convey these semantics. (This is in part solved by the LitePub litepub:directMessage flag, but that's a kludge to be honest.)

I've also mentioned before that a large number of cases where there has been discourse about Mastodon versus Pleroma were actually caused by outright design failures of ActivityPub.

An example of this is blocked instances still being able to see threads from your instance: somebody from a third instance interacts with the thread, and then the software (either Mastodon or Pleroma) reconstructs the entire thread. Since there is no authentication requirement for retrieving a thread, these blocked instances can successfully reconstruct the threads they weren't allowed to receive in the first place. The only difference between Mastodon and Pleroma here is that Pleroma allows the general public to view the shared timelines without using a third-party tool, which exposes the leaks caused by ActivityPub's bad design.

In an ideal world, the number of ActivityPub implementations would be zero. But of course this is not an ideal world, so that leaves us with the question: “where do we go from here?”

And honestly, I don't know how to answer that yet. Maybe we can save ActivityPub by extending it to be properly capability-based and eventually dropping support for the ActivityPub of today. But this will require coordination between all the vendors. And with 40+ projects out there, it's not going to be easy. And do we even care about those 40+ projects anyway?

ActivityPub uses cryptographic signatures, mainly for the purpose of authenticating messages. This is largely for the purpose of spoofing prevention, but as any observant person would understand, digital signatures carry strong forensic value.

Unfortunately, while ActivityPub uses cryptographic signatures, the types of cryptographic signatures to use have been left unspecified. This has led to various implementations having to choose on their own which signature types to use.

The fediverse has settled on using not one but two types of cryptographic signature:

  • HTTP Signatures: based on an IETF Internet-Draft, HTTP Signatures provide cryptographic validation of the headers, including a Digest header which provides some information about the underlying object. HTTP Signatures are an example of detached signatures. HTTP Signatures also generally sign the Date header, which provides a de facto validity period.

  • JSON-LD Linked Data Signatures: based on a W3C community draft, JSON-LD Linked Data Signatures provide an inline cryptographic validation of the JSON-LD document being signed. JSON-LD Linked Data Signatures are commonly referred to as LDS signatures or LDSigs because frankly the title of the spec is a mouthful. LDSigs are an example of inline signatures.

Signatures and Deniability

When we refer to deniability, what we're talking about is forensic deniability, or put simply the ability to plausibly argue in a court or tribunal that you did not sign a given object. In essence, forensic deniability is the ability to argue plausible deniability when presented with a piece of forensic evidence.

Digital signatures are by their very nature harmful with regard to forensic deniability because they are digital evidence showing that you signed something. But not all signature schemes are made equal; some are less harmful to deniability than others.

A good signature scheme which does not harm deniability has the following basic attributes:

  • Signatures are ephemeral: they only hold validity for a given time period.

  • Signatures are revocable: they can be invalidated during the validity period in some way.

Both HTTP Signatures and LDSigs have weaknesses: neither scheme allows for future revocation of a signature, but LDSigs are even worse because LDSigs are intentionally valid forever.

Mitigating the revocability problem with Blind Key Rotation

Blind Key Rotation is a mitigation that builds on the fact that ActivityPub implementations must fetch a given actor again when signature authentication fails, using that requirement to provide some level of revocability.

The mitigation works as follows:

  1. You delete one or more objects in a short time period.

  2. Some time after the deletions are processed, the instance rekeys your account. It does not send any Update message or similar because signing your new key with your old key defeats the purpose of this exercise.

  3. When you next publish content, signature validation fails and the instance fetches your account's actor object again to learn the new keys.

  4. With the new keys, signature validation passes and your new content is published.

It is important to emphasize that in a Blind Key Rotation, you do not send out an Update message with the new keys. The reason is that you do not want to create a cryptographic relationship between the keys. By creating a cryptographic relationship, you introduce new digital evidence which can be used to prove that you held the original keypair at some time in the past.
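On the receiving side, the rotation is invisible: verification simply falls back to a single re-fetch of the actor. A hedged sketch of that receive path follows; all function names are hypothetical.

```python
# Sketch: verify an activity's signature against a cached key; on
# failure, re-fetch the actor once to pick up a (blindly) rotated key
# and retry. No Update message ever links the old and new keys.

def verify_with_refetch(activity: dict, cached_key,
                        fetch_actor_key, verify) -> bool:
    if verify(activity, cached_key):
        return True

    # Signature failed: the actor may have rotated keys. Refresh once.
    fresh_key = fetch_actor_key(activity.get("actor"))
    if fresh_key is None or fresh_key == cached_key:
        return False
    return verify(activity, fresh_key)
```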


If you still have questions, contact me on the fediverse: @kaniini@pleroma.site

A lot of people make assumptions about my position on whether or not JSON-LD is actually good. The reality is that my view is more nuanced than that: there are great uses for JSON-LD, but it's not appropriate in the scenario in which ActivityPub uses it.

What is JSON-LD anyway?

JSON-LD stands for JSON Linked Data. Linked Data is a “Big Data” technique which involves creating large graphs of interlinked pieces of data, intended to help enrich data sets with more semantic context, as well as additional data linked by URI (hence why it's called linked data). The Linked Data concept can be extremely powerful for data analysis when used in the appropriate context. A good example of where linked data is useful is healthcare.gov, where they use it to help compare performance and value versus cost of US health insurance plans.

ActivityPub and JSON-LD

Another example where JSON-LD is ostensibly used is ActivityPub. ActivityPub inherits its JSON-LD dependency from ActivityStreams 2.0, which is a data format that enjoys wide use outside of the ActivityPub ecosystem: for example, Twitter, Instagram, Facebook and Tumblr all use variations of ActivityStreams 2.0 objects in various places inside their APIs.

These services find the JSON-LD concept useful because their advertising customers can leverage JSON-LD to optimize their advertising campaigns (at Facebook, the Open Graph concept they frequently pitch to advertisers is built in part on top of JSON-LD).

But does JSON-LD provide any value in a social networking environment which does not have advertising? In my opinion, not really: it's just an artifact of the “if you're not the customer, you're the product” nature of the proprietary social networking services. As previously stated, the primary advantage of JSON-LD and the linked data philosophy in general is data enrichment, and data enrichment is largely useful to two groups: advertisers and intelligence (public or private).

Since the federated social networking services don't have advertising, that just leaves intelligence.

Private intelligence and social networking, how data enrichment can impact your credit score

There are various kinds of private intelligence firms out there which collect information about you, me, and everyone else. You've probably heard of some of them, and some of the products they sell: companies like Experian, InfoCheckUSA and Equifax sell various products like FICO credit scores and background reports which determine everything from whether or not you can rent or buy a car or house to whether or not you can get a job.

But did you know these companies crawl your use of the proprietary social networking services? There are companies like FriendlyScore which sell credit-related data based on how you utilize social networking services. Those “social” credit scores are directly enabled by technology such as JSON-LD and ActivityStreams 2.0.

Public intelligence and social networking, how data enrichment can get you killed

We've all heard about Predator drones and drone strikes in the news. In the past decade, drone strikes have been used to attack countless targets. But how do our public intelligence agencies determine who is a target? It's very similar to how the private intelligence agencies determine whether you should own a house or have a job: they use big data methods to analyze all of the metadata they collected.

If you write a post on a social networking service and attach GPS data to it, they can use that information to determine a general pattern of when and where you are, and then feed it into a machine learning algorithm to determine when and where you will likely be in the future. They can also use this metadata analysis to prove certain assertions about your identity to a level of certainty which determines if you become a target, even if you're not really the same person they are trying to find.

Conclusion: safety is more important than data enrichment

These techniques that are used both in the public and private sector are what the press tend to refer to as “Big Data” techniques. JSON-LD is a “Big Data” technology that can be leveraged in these ways. But at the same time, we can leverage some “Big Data” techniques in such a way that JSON-LD parsers will automatically do what we want them to do.

In my opinion, it is a critical obligation of federated social networking service developers to ensure that handling of data is done in the most secure way possible, built on proven fundamentals. I view the inclusion of JSON-LD in the ActivityPub and ActivityStreams 2.0 standards to be harmful toward that obligation.

Pleroma and JSON-LD

As you may know, there are two mainstream ActivityPub servers that are in wide use: Mastodon and Pleroma. Mastodon uses JSON-LD and Pleroma does not. But they are able to interoperate just fine despite this. This is largely because Pleroma provides JSON-LD attributes in the messages it generates without actively using them itself.

Handling ActivityPub in a world without JSON-LD

The origin of the Transmogrifier name

Instead of processing JSON-LD, Pleroma has a module called Transmogrifier that translates between real ActivityPub and our internal ActivityPub representation. The use of AP constructs in our internal representation is the origin of the statement that Pleroma uses ActivityPub internally, and to an extent it is a very truthful statement: our internal representation and object graph are directly derived from an earlier ActivityPub draft. But it's not quite the same, and there have been a few bugs where things were not translated correctly, which resulted in leaks and other problems.

Besides the Transmogrifier, we have two functions which fetch new pieces into the graphs we build: Object.normalize() and Activity.normalize(). This could be considered to be a similar approach to JSON-LD except that it's explicit instead of implicit. The explicit fetching of new graph pieces is a security feature: it allows us to validate that we actually trust what we're fetching before we do it. This helps us to prevent various “fake direction” attacks which can be used for spoofing.

LitePub and JSON-LD

LitePub is a recent initiative that was started between Pleroma and a few other ActivityPub implementations to slim down the ActivityPub standard into something that is minimalist and secure. While LitePub itself does not require JSON-LD, LitePub implementations follow some JSON-LD like behaviors where it makes sense, and LitePub provides a @context which allows JSON-LD parsers to transparently parse LitePub messages.

Leveraging Linked Data for Object Capability Enforcement

The main principle LitePub is built on is leveraging the linked data paradigm to perform object capability enforcement. This can work either explicitly (as is done in Pleroma) or implicitly (as is done in Mastodon when parsing a LitePub activity).

We do this by treating every Object ID in LitePub as a capability URI. When processing messages that reference a capability URI, we check to make sure the capability URI is still valid by re-fetching the object. If fetching the object fails, then the capability URI is no longer valid. This prevents zombie activities.
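Sketched in code, the check is simple: before acting on an activity that references an object (an Announce, say), dereference the object; a failed fetch means the capability was revoked. Here `fetch_json` is a stand-in for an HTTP client and the function names are illustrative, not LitePub's actual API.

```python
# Sketch: treat every object ID as a capability URI. An activity that
# references a deleted object fails re-fetch and is dropped, which is
# what prevents zombie activities.

from typing import Callable, Optional

def capability_still_valid(object_id: str,
                           fetch_json: Callable[[str], Optional[dict]]) -> bool:
    # A capability URI is valid iff the object still dereferences
    # to itself at its origin.
    fetched = fetch_json(object_id)
    return fetched is not None and fetched.get("id") == object_id

def process_announce(announce: dict, fetch_json) -> Optional[dict]:
    target = announce.get("object")
    if isinstance(target, dict):
        target = target.get("id")
    if not target or not capability_still_valid(target, fetch_json):
        return None  # zombie activity: referenced object is gone
    return fetch_json(target)
```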

A note on Zombie Activities

There are two primary ways of securing ActivityPub implementations with digital signatures: JSON Linked Data Signatures (LDSigs) and the construction built on HTTP Signatures that is leveraged in LitePub. These can be referred to as inline signatures and transient signatures, respectively.

The problem with inline signatures is that they are valid forever. LDSig signatures have no expiration and have no revocation method. Because of this, if an Object is deleted, it can come back to life. The solution created by the LDSig advocates is to use Tombstone objects for all deletions, but that creates a potential metadata leak that proves a post once existed which harms plausible deniability.

The LitePub approach on the other hand is to treat all objects as capability URIs. This means when an object is deleted, future attempts to access the capability URI fail and thus the object cannot come back to life through boosting or other means.


Hopefully this clarifies my views on JSON-LD and its applications in the fediverse. Feel free to ask me questions if you have any.

This is my new blog which replaces the old Jekyll-based one. Long-form content which is best appreciated in blog format will be published here. I decided to use Write Freely as a test and ultimately it does seem to fit my requirements the best, so I am going to stick with using it.

If you're using Pleroma then this will appear as a nicely rendered Article. If you're using some other fediverse software, your mileage may vary.

Stay tuned for a few blog posts about various things, such as:

  • ActivityPub, and why JSON-LD is harmful in a server-to-server context
  • More detailed discussion of various security postures in Fediverse development
  • Posts about LitePub, the specification which intends to bring ActivityPub back to something simple and robust with good security properties

Ok, bye.