kaniini's blog!

With all of the recent hullabaloo around Gab, and then Kiwi Farms joining the fediverse today, a lot of people have been asking questions about how data flows in the fediverse and what exposure they actually have.

I'm not particularly a fan of either of those websites, but that's beside the point. The point here is to provide an objective presentation of how instances federate with each other and how these federation transactions impact exposure.

How Instances Federate

To start, let's describe a basic model of a federated network. This network will have five actors in it:

  • alyssa@social.example
  • bob@chatty.example
  • chris@photos.example
  • emily@cat.tube
  • sophie@activitypub.dev

(yeah yeah, I know, I'm not that good at making up fake domains.)

Next, we will build some relationships:

  • Sophie follows Alyssa and Bob
  • Emily follows Alyssa and Chris
  • Chris follows Emily and Alyssa
  • Bob follows Sophie and Alyssa
  • Alyssa follows Bob and Emily

Here's what that looks like as a graph:

A graph of social relationships.

Broadcasts

Normally, posts flow through the network in the form of broadcasts. A broadcast-style post is one that is sent to, and only to, a pre-determined set of targets, typically your followers collection.

So, this means that if Sophie makes a post, chatty.example is the only server that gets a copy of it. It does not matter that chatty.example is peered with other instances (such as social.example).
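
To make this concrete, here is a rough sketch of what Sophie's broadcast could look like on the wire. The identifiers and ~user paths are invented for illustration, and real software will differ in the details.

Example: Sophie's post, addressed only to her followers collection
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://activitypub.dev/activities/f1f2a0f6-8bbb-4cb6-85ac-c5b4fdedbe95",
  "type": "Create",
  "actor": "https://activitypub.dev/~sophie",
  "to": ["https://activitypub.dev/~sophie/followers"],
  "object": {
    "id": "https://activitypub.dev/objects/91c43c1f-1f56-4db6-9c0a-ad6ed0f2fd8c",
    "type": "Note",
    "attributedTo": "https://activitypub.dev/~sophie",
    "content": "has everyone remembered to water their ferns today?",
    "to": ["https://activitypub.dev/~sophie/followers"]
  }
}

Only the members of that followers collection (here, just Bob) are targeted for delivery.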

This is, by far, the majority of traffic inside the fediverse.

Relaying

The other kind of transaction is easily described as relaying.

To extend our example above, let's say that Bob chooses to Announce (Mastodon calls this a boost, Pleroma calls this a repeat) the post Sophie sent him.

Because Bob is followed by Sophie and Alyssa, both of them receive a copy of the Announce activity (an activity is a message which describes a transaction). Relay activities refer to the original message by its unique identifier, and recipients of Announce activities use that unique identifier to fetch the referenced message.
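
As a rough sketch (again with hypothetical identifiers), the Announce that Bob's server delivers might look something like this. Note that the object field carries only the identifier of Sophie's post, not the post itself, which is why recipients have to fetch it:

Example: Bob announces Sophie's post
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://chatty.example/activities/6aa03a89-57a7-4b45-a1a1-cb26c5ed7f4a",
  "type": "Announce",
  "actor": "https://chatty.example/~bob",
  "object": "https://activitypub.dev/objects/91c43c1f-1f56-4db6-9c0a-ad6ed0f2fd8c",
  "to": ["https://chatty.example/~bob/followers"]
}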

For now, we will assume that Alyssa's instance (social.example) succeeded in fetching the original post, because there is presently no access control in practice on fetching posts in ActivityPub.

This now means that Sophie's original post is present on three servers:

  • activitypub.dev
  • chatty.example
  • social.example

Relaying can cause perceived problems when an instance blocks another instance, but these problems are actually caused by a lack of access control on object fetches.

Replying

A variant on the broadcast-style transaction is a Create activity that references an object as a reply.

Let's say Alyssa responds to Sophie's post that was boosted to her. She composes a reply that references Sophie's original post with the inReplyTo property.
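
Sketched out with the same hypothetical identifiers as before, Alyssa's reply might look something like this:

Example: Alyssa replies to Sophie's post
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://social.example/activities/0f63b0aa-74a5-4e33-b744-2a0e127acd05",
  "type": "Create",
  "actor": "https://social.example/~alyssa",
  "to": ["https://social.example/~alyssa/followers"],
  "object": {
    "id": "https://social.example/objects/9a4e1b7c-0c59-4a21-a5c1-8112e3f20d0f",
    "type": "Note",
    "attributedTo": "https://social.example/~alyssa",
    "inReplyTo": "https://activitypub.dev/objects/91c43c1f-1f56-4db6-9c0a-ad6ed0f2fd8c",
    "content": "my ferns are thriving, thanks for asking!",
    "to": ["https://social.example/~alyssa/followers"]
  }
}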

Because Alyssa is followed by actors on the entire network, now the entire network goes and fetches Sophie's post and has a copy of it.

This too can cause problems when an instance blocks another. And like in the relaying case, it is caused by a lack of access control on object fetches.

Metadata Leakage

From time to time, people talk about metadata leakage with ActivityPub. But what does that actually mean?

Some people erroneously believe that the metadata leakage problem has to do with public (access-control-free) posts appearing on instances which they have blocked. While that is arguably a problem, it is a problem caused by the lack of access controls on public posts. The technical term for a publicly available post is an as:Public post, a reference to the security label applied to such posts.

The metadata leakage problem is an entirely different problem. It deals with posts that are not labelled as:Public.

The metadata leakage problem is this: if Sophie composes a post addressed to her followers collection, then only Bob receives it. So far, so good: no leakage. However, because of bad implementations (and other problems), if Bob replies back to Sophie, his reply is sent not only to Sophie but also to Alyssa. Alyssa now knows that Sophie posted something, but has no actual idea what that something was. That's why it's called a metadata leakage problem: metadata about the existence of one of Sophie's objects, and hints about its contents (based on the text of the reply), are leaked to Alyssa.
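
Here is a hedged sketch of the problematic reply, with invented identifiers: a naive implementation addresses Bob's reply to both Sophie and Bob's own followers collection, even though Sophie's original post never went to Bob's followers.

Example: Bob's reply leaks metadata about Sophie's followers-only post
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Create",
  "actor": "https://chatty.example/~bob",
  "to": [
    "https://activitypub.dev/~sophie",
    "https://chatty.example/~bob/followers"
  ],
  "object": {
    "type": "Note",
    "attributedTo": "https://chatty.example/~bob",
    "inReplyTo": "https://activitypub.dev/objects/56b3e49e-9c44-4b3c-9a77-8a3e12f0c3b1",
    "content": "@sophie sounds good, count me in!",
    "to": [
      "https://activitypub.dev/~sophie",
      "https://chatty.example/~bob/followers"
    ]
  }
}

Because Alyssa follows Bob, she receives this reply, including the inReplyTo reference to an object she cannot fetch, which is exactly the leak described above.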

This problem is the big one. It's not technically ActivityPub's fault, either, but a problem in how ActivityPub is typically implemented. At the same time, it means that followers-only posts can be risky. Mastodon covers up the metadata leakage problem by hiding replies to users you don't follow, but that's all it is: a cover-up of the problem.

Solution?

The solution to the metadata leakage problem is to have replies forwarded to the OP's audience. But to do this, we need to rework the way the protocol works a bit. That's where proposals like moving to an OCAP-based variant of ActivityPub come into play: in those variants, doing this is easy, but in what we have now, it is difficult.

Anyway, I hope this post helps to explain how data flows through the network.

OCAP refers to Object CAPabilities. Object capabilities are one of many possible ways to achieve capability-based security; OAuth Bearer Tokens are one example of an OCAP-style implementation.

In this context, OCAP refers to an adaptation of ActivityPub which utilizes capability tokens.

But why should we care about OCAP? OCAP is a more flexible approach that allows for more efficient federation (considerably reduced cryptography overhead!) as well as conditional endorsement of actions. The latter enables things like forwarding Create activities using tokens that would not normally be authorized to do such things (think of this like sudo, but inside the federation). Tokens can also be used to authorize fetches allowing for non-public federation that works reliably without leaking metadata about threads.

In short, OCAP fixes almost everything that is lacking about ActivityPub's security, because it defines a rigid, robust and future-proof security model for the fediverse to use.

How does it all fit together?

This work is being done in the LitePub (maybe soon to be called SocialPub) working group. LitePub is to ActivityPub what the WHATWG is to HTML5. The examples I use here don't necessarily completely line up with what is really in the spec, because they are meant to just be a basic outline of how the scheme works.

So the first thing that we do is extend the AS2 actor description with a new endpoint (capabilityAcquisitionEndpoint) which is used to acquire a new capability object.

Example: Alyssa P. Hacker's actor object
{
  "@context": "https://social.example/litepub-v1.jsonld",
  "id": "https://social.example/~alyssa",
  "capabilityAcquisitionEndpoint": "https://social.example/caps/new"
  [...]
}

Bob has a server which lives at chatty.example. Bob wants to exchange notes with Alyssa. To do this, Bob's instance needs to acquire a capability that he uses to federate in the future by POSTing a document to the capabilityAcquisitionEndpoint and signing it with HTTP Signatures:

Example: Bob's instance acquires the inbox:write and objects:read capabilities
{
  "@context": "https://chatty.example/litepub-v1.jsonld",
  "id": "https://chatty.example/caps/request/9b2220dc-0e2e-4c95-9a5a-912b0748c082",
  "type": "Request",
  "capability": ["inbox:write", "objects:read"],
  "actor": "https://chatty.example"
}

It should be noted here that Bob's instance itself makes the request, using an instance-specific actor. This is important because capability tokens are scoped to their actor. In this case, the capability token may be invoked by any child actor of the instance, because it's an instance-wide token. But the instance could request the token strictly on Bob's behalf by using Bob's actor and signing the request with Bob's key.

Alyssa's instance responds with a capability object:

Example: A capability token
{
  "@context": "https://social.example/litepub-v1.jsonld",
  "id": "https://social.example/caps/640b0093-ae9a-4155-b295-a500dd65ee11",
  "type": "Capability",
  "capability": ["inbox:write", "objects:read"],
  "scope": "https://chatty.example",
  "actor": "https://social.example"
}

There are a few peculiar things about this object that I'm sure you've noticed. Let's look at this object together:

  • The scope describes the actor which may use the token. Implementations validate the scope by checking it against the actor referenced in the message.

  • The actor here describes the actor which granted the capability. Usually this is an instance-wide actor, but it may also be any other kind of actor.

In traditional ActivityPub the mechanism through which Bob authenticates and later authorizes federation is left undefined. This is the hole that got filled with signature-based authentication, and is being filled again with OCAP.

But how do we invoke the capability to exchange messages? There's a couple of ways.

When pushing messages, we can simply reference the capability by including it in the message:

Example: Pushing a note using a capability
{
  "@context": "https://chatty.example/litepub-v1.jsonld",
  "id": "https://chatty.example/activities/63ffcdb1-f064-4405-ab0b-ec97b94cfc34",
  "capability": "https://social.example/caps/640b0093-ae9a-4155-b295-a500dd65ee11",
  "type": "Create",
  "object": {
    "id": "https://chatty.example/objects/de18ad80-879c-4ad2-99f7-e1c697c0d68b",
    "type": "Note",
    "attributedTo": "https://chatty.example/~bob",
    "content": "hey alyssa!",
    "to": ["https://social.example/~alyssa"]
  },
  "to": ["https://social.example/~alyssa"],
  "cc": [],
  "actor": "https://chatty.example/~bob"
}

Easy enough, right? Well, there's another way we can do it as well, which is to use the capability as a bearer token (because it is one). This is useful when fetching objects:

Example: Fetching an object with HTTP + capability token
GET /objects/de18ad80-879c-4ad2-99f7-e1c697c0d68b HTTP/1.1
Accept: application/activity+json
Authorization: Bearer https://social.example/caps/640b0093-ae9a-4155-b295-a500dd65ee11

HTTP/1.1 200 OK
Content-Type: application/activity+json

[...]

Because we have a valid capability token, the server can make decisions on whether or not to disclose the object based on the relationship associated with that token.

This is basically OCAP in a nutshell. It's simple and easy for implementations to adopt and gives us a framework for extending it in the future to allow for all sorts of things without leakage of cryptographically-signed metadata.

If this sort of stuff interests you, drop by #litepub on freenode!

A little over two years ago, Pleroma was started. At the time, Pleroma was largely developed by one person, who was busy working toward an MVP. This led to an interesting post being noted in dzuk's controversial blocklist advisory post.

Of course, time has moved on, and Pleroma has gained moderation tools that, in the hands of a skilled admin, provide the best possible moderation experience on the fediverse today. But getting to where we are now from 2 years ago has been a long journey.

moderator role

A few months after the post where lain said that he was still working on basic functionality, Pleroma got its first moderation tool, around December 2017. You can set the moderator role on a user using the CLI:

$ MIX_ENV=prod mix pleroma.user set kaniini --moderator

Moderators have the ability to do a few things, most notably deleting any post from the local instance. For a while, this got the job done for most Pleroma instances, as this was a reasonably quiet period in the fediverse's existence.

April 2018: birth of the Message Rewrite Facility

In April 2018, a new instance called Switter launched in response to the FOSTA/SESTA bill, which unfairly targeted sex workers. This led to some new problems in the fediverse, because the fediverse had largely never been exposed to an instance designed around advertising before. There were also many cultural conflicts, which led to many fights during the launch.

Eventually, Switter modified Mastodon so that their posts would federate in a way that ensured media was always marked sensitive, without requiring their local users to mark their media sensitive themselves, but this was a point of contention for several months.

In the meantime, the very first version of MRF was written and integrated into Pleroma, allowing admins to force incoming posts from Switter to be unconditionally marked sensitive.

This version of MRF was very limited compared to the MRF we know today. For example, it only allowed one policy module to be loaded at any given time. It also did not implement a proper Elixir behaviour, so the compiler could not validate policy modules for correctness. It did get the job done, however.

May 2018: MRF begins to resemble the framework we have today

The original version of MRF was a minimal patch intended to allow instance admins to be able to block content from a configured set of instances, but the implementation lacked flexibility. href (the admin of pleroma.fr) came along and expanded upon my initial patch by allowing policies to be chained. This was a serious advancement in terms of enabling MRF to turn into the fully-fledged framework we enjoy today.
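
For readers who have never seen one, an MRF policy is just an Elixir module that takes a message map and either passes it through (possibly modified) or rejects it. The sketch below shows roughly what a chained policy looks like; the behaviour and module names are written from memory and the target domain is hypothetical, so treat this as an illustration rather than copy-paste-ready code.

defmodule Pleroma.Web.ActivityPub.MRF.MarkMediaSensitivePolicy do
  # Illustrative sketch of an MRF policy: mark all incoming posts from a
  # hypothetical instance as sensitive. Behaviour and module names are from
  # memory and may not match the current Pleroma codebase exactly.
  @behaviour Pleroma.Web.ActivityPub.MRF

  @impl true
  def filter(%{"actor" => actor, "object" => %{}} = message) do
    if String.starts_with?(actor, "https://ads.example/") do
      {:ok, put_in(message, ["object", "sensitive"], true)}
    else
      {:ok, message}
    end
  end

  # Anything else (activities without an embedded object, etc.) passes through.
  def filter(message), do: {:ok, message}
end

Because policies can be chained, an admin can combine several such modules, each responsible for one narrow decision.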

June 2018: Accept lists

Some instances on the fediverse operate on an accept list basis, where your server has to be explicitly granted permission to federate with the instance. An example of this would be awoo.space.

Based on a request, this functionality was added to Pleroma's MRF in June. This allows admins to set up an instance operating on an accept list basis without having to make any major changes to the code.

December 2018: Large thread filter

Extremely large threads (colloquially referred to as “hellthreads”) cause significant resource consumption problems for instances and were abused by some people to be very annoying. Pleroma implemented a large thread filter in the form of the mrf_hellthread module, which blocked these threads based on a configurable threshold.

January 2019: Anti-followbot module

Followbots are unpopular among many users in the fediverse because they are perceived as a data-mining vector, or perhaps just downright creepy. At one point, they were necessary to help bootstrap new instances and get them well-federated, but this niche has since been better served by relays. As a mitigation for these concerns, an anti-followbot module was introduced to MRF.

February 2019: Keyword module, user tags and tag module

Sometimes it is necessary to mark posts as sensitive if they contain certain keywords. On most platforms, this work has to be done manually, and it can take up a lot of time. As a solution to this problem, a module which matches messages based on keywords was added.

We also added an API which allowed for users to be labelled with various classifiers. This was leveraged inside the MRF framework with a module that acted based on the presence of specific user tags.

April 2019: Pleroma FE integration, Reporting

In April, we added integration for the moderation tools exposed by MRF into Pleroma FE. This mostly consists of tagging users with the appropriate tags using the user tagging API, but it allows efficient moderation work to be done.

We also added support for a report system which allows people to report spam and other TOS violations to their admins.

Future

As can be seen, most moderation initiatives revolve around the MRF framework, and the future of MRF is bright. We are already planning to rework the framework after the Pleroma 1.x release to make it more cleanly behaved. This work involves splitting MRF into classifiers, mutators and subchains.

The idea is that you have modules which detect whether a message meets certain criteria and, if so, attach classifiers to it. Mutators then act on the message, making whatever modifications are requested. This flow is controlled by conditional subchains: if classifier X is present, then process the message through subchain Y.
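
To make the proposed flow a little more concrete, here is a purely illustrative sketch of what a classifier/mutator/subchain pipeline could look like. None of these module or function names exist in Pleroma today; this is only meant to convey the shape of the design.

defmodule MRF.Pipeline do
  # Run every classifier over the message and collect the labels they attach.
  def classify(message, classifiers) do
    Enum.flat_map(classifiers, fn classifier -> classifier.labels(message) end)
  end

  # subchains is a list of {label, [mutator]} pairs: if a label was attached
  # by a classifier, the message is processed through that subchain's mutators.
  def process(message, classifiers, subchains) do
    labels = classify(message, classifiers)

    Enum.reduce(subchains, message, fn {label, mutators}, acc ->
      if label in labels do
        Enum.reduce(mutators, acc, fn mutator, m -> mutator.mutate(m) end)
      else
        acc
      end
    end)
  end
end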

I'll be writing more about this design in the near future, but it is promising because it allows for backward compatibility with policy modules written against MRF today.

Some fediverse developers approach project management from the philosophy that they are building a product in its own right instead of a tool. But does that approach really make sense for the fediverse?

It's that time again, patches have been presented which improve Mastodon's compatibility with the rest of the fediverse. However, the usual suspect has expressed disinterest in clicking the merge button. The users protest loudly about this unilateral decision, as is expected by the astute reader. Threats of hard forks are made. GitHub's emoji reactions start to arrive, mostly negative. The usual suspect fires back saying that the patches do not fit into his personal vision, leading to more negative reactions. But why?

I believe the main issue at stake is whether the fediverse software is the product, or whether the instances themselves are the product. Yes, both the software and the instance itself are products, but the question, really, is which one is actually more impactful?

Gargron (the author of Mastodon), for whatever reason, sees Mastodon itself as the core product. This is obvious based on the marketing copy he writes to promote the Mastodon software and the 300,000+ user instance he personally administrates where he is followed by all new signups by default. It is also obvious based on the dictatorial control he exerts over the software.

But is this view aligned with reality? Mastodon has very few configurable options, but admins have made modifications to the software, which add configuration options that contradict Gargron's personal vision. These features are frequently deployed by Mastodon admins and, to an extent, Mastodon instances compete with each other on various configuration differences: custom emoji, theming, formatting options and even the maximum length of a post. This competition, largely, has been enabled by the existence of “friendly” forks that add the missing configuration options.

My view is different. I see fediverse software as a tool that is used to build a community which optionally exists in a community of communities (the fediverse). In my view, users should be empowered to choose an instance which provides the features they want, with information about what features are available upfront. In essence, it is the instances themselves which are competing for users, not the software.

Monoculture harms competitiveness: there are thousands of Mastodon instances to choose from, but how many of them are truly memorable? How many are shipping stock Mastodon with the same old default color scheme and theme?

Outside of Mastodon, the situation is quite different. Most of us see the software we work on as a tool for facilitating community building. Accordingly, we try to do our best to give admins as many ways as possible to make their instance look and feel as they want. They are building the product that actually matters, we're just facilitating their work. After all, they are the ones who have to spend time customizing, promoting and managing the community they build. This is why Pleroma has extensive configuration and theming options that are presented in a way that is very easy to leverage. Likewise, Friendica, Hubzilla and even GNU Social can be customized in the same way: you're in control as the admin, not a product designer.

But Mastodon is still problematic when it comes to innovation in the fediverse at large. Despite the ability that other fediverse software gives users and admins to present their content in whatever form they want, Mastodon presently fails to render that content correctly:

Mastodon presents lists in an incorrect way.

The patches I referred to earlier correct this problem by changing how Mastodon processes posts from remote instances. They also provide a path toward improving usability in the fediverse by allowing us to work toward phasing out the use of Unicode mathematical constants as a substitute for proper formatting. The majority of fediverse microblogging software has supported this kind of formatting for a long time, many implementations predating Mastodon itself. Improved interoperability with other fediverse implementations sounds like a good thing, right? Well, it's not aligned with the Mastodon vision, so it's rejected.

The viewpoint that the software itself is primarily what matters is stifling fediverse development. As developers, we should be working together to improve the security and expressiveness of the underlying technology. This means that some amount of flexibility is required. Quoting RFC 791:

In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior.

There is no God of the fediverse. The fediverse exists and operates smoothly because we work together, as developers, in concert with the admin and user community at large. Accomplishing this requires compromise, not unilateral decision making.

CRTNet was an experiment to create an IRC network by and for the greater fediverse community.

Unfortunately, the project hasn't worked out in a desirable way. So, the network will be sunset effective March 15.

The rest of this post will be an examination of reasons why the project failed.

Software

CRTNet used what was believed to be a reasonably stable combination of UnrealIRCd and Atheme services. While there were many personal reasons I chose to use Atheme for the project (like having previously written it), the choice of UnrealIRCd was largely a poor one.

A feature of CRTNet was integration with a bot called viera, which allowed for linking IRC services accounts to fediverse profiles. This feature depended on functional WHOX support, which UnrealIRCd did not provide. So, I found a module which provided WHOX support. All seemed well until a few months later when I observed UnrealIRCd was using 13 GB of RAM.

This led us to discuss switching to other software, InspIRCd. Unfortunately, we had standardized on using SPKIFP fingerprints to authenticate the network's servers with each other. Switching to InspIRCd meant abandoning SPKIFP support, so this proposal fizzled out. Meanwhile, my modified UnrealIRCd continues to consume large amounts of RAM.

From a technical perspective, however, the final nail in the coffin was not software-related but the result of IPv4 exhaustion: I needed to move the primary hub, but could not, due to being unable to coordinate access to the secondary hub. The reasons for that are complicated and not very interesting to discuss, so we will just leave it explained as a communications failure.

Cultural problems

The vision behind the project was to create a network for fediverse communities, much in the same way as Snoonet was started for reddit communities.

Unfortunately, what we discovered is that creating such a network results in fediverse 'meta' drama and gossip being the primary topic of discussion. With that as the primary topic, the network provided little value to its userbase, and we were unable to gain traction with users.

Finally, structuring the network in an ad-hoc way instead of the way a traditional IRC network is structured (CRTNet had no shared responsibility for routing, etc.) led to the final set of technical problems.

Accordingly we are left with a network that has little value and little usage, and so I am sunsetting the project by terminating the primary hub on March 15.

To my knowledge, the main channel still on CRTNet is moving to their own server, irc.catgirl.network. I suggest giving that network a try instead.

Contrary to the public's perceptions, CommonsPub is no longer a fork of Pleroma and has not been for some time. They hired some professional Elixir developers who rewrote the codebase from scratch, in my opinion, badly.

CommonsPub began as a fork of Pleroma in July 2018 with the intention of enabling the creation of a generic platform for federated apps. This was, needless to say, confusing to us: the entire point of the Pleroma project itself is to create a generic platform for federated apps — this is, in fact, why it is called Pleroma: that is a reference to the omnipresent nature of a generic federated app platform. We have also been talking about federated apps for several years now, prior to the announcement of CommonsPub.

It should also be mentioned that at no time did the CommonsPub developers ever make any attempt to talk with or coordinate with us. While they are of course free to fork our code at any time, for any reason, it was quite disappointing that they forked our code and then portrayed our project in a light that was misleading at best: they described CommonsPub as existing for the purpose of providing this generic backend and Pleroma as not, while in reality Pleroma has been a generic backend all along.

At any rate, there is not much Pleroma code (but there still is some) remaining in CommonsPub, so I wouldn't classify it as a fork.

CommonsPub is not a generic ActivityPub server, but Pleroma is.

CommonsPub is not built on the ActivityPub protocol. While ActivityPub is used for federation, CommonsPub does not directly model itself on ActivityPub or ActivityStreams 2.0, instead using a custom graph model optimized for GraphQL usage.

Pleroma is built on ActivityPub in all ways: federation, on-disk storage and internal representation. Pleroma walks AS2 object trees as a proper RDF-style graph. Pleroma supports ActivityPub C2S and ActivityPub S2S protocols, as well as API emulations. CommonsPub does not support ActivityPub C2S.

CommonsPub does not even deliver on generic federated apps. Pleroma does.

MoodleNet, the primary application built on CommonsPub, is directly bolted into the CommonsPub server itself.

Pleroma in contrast does not have any application logic directly bolted into the core: federated apps on Pleroma contain all application logic directly in the client or in the API emulations they consume if they are not native ActivityPub C2S clients.

The CommonsPub components which remain derived from Pleroma do not provide copyright attribution to Pleroma, and thus violate the AGPL license under which Pleroma is made available to them. This lack of documented legal provenance is another strong reason not to use CommonsPub in your project: if they do not attribute the code they borrowed from us, how can you know there are not other missing attributions?

This is the third article in a series of articles about ActivityPub detailing the challenges of building a trustworthy, secure implementation of the protocol stack.

In this case, it also takes a significant technical deep dive into informally specifying a set of protocol extensions to ActivityPub. Formal specification of these extensions will be done in the Litepub working group, and they will likely see some amount of change, so this blog entry should be considered non-normative in its entirety.

Over the past few years of creating and revising ActivityPub, many people have made a push for the inclusion of a capability-based security model as the core security primitive (instead, the core security primitive is “this section is non-normative,” but I'm not salty), but what would that look like?

There are a few different proposals in the works, at varying stages of development, that could be used to retrofit capability-based security into ActivityPub:

  • OCAP-LD, which adds a generic object capabilities framework for any consumer of JSON-LD (such as the Linked Data Platform, or the neutered version of LDP that is described as part of ActivityPub),

  • Litepub Capability Enforcement, which is preliminarily described by this blog post, and

  • PolaPub aka CapabilityPub which is only an outline stored in an .org document. It is presumed that PolaPub or CapabilityPub or whatever it is called next week will be built on OCAP-LD, but in fairness, this is pure speculation.

Why capabilities instead of ACLs?

ActivityPub, like the fediverse in general, is an open-world system. Traditional ACLs fail to scale properly to the possibility of hundreds of millions of accounts across millions of instances. Object capabilities, on the other hand, are opaque tokens which allow the bearer to possibly consume a set of permissions.

The capability enforcement proposals currently on the table would be deployed as a hybrid approach: capabilities to provide horizontal scalability for the large number of accounts and instances, and deny lists to block specific interactions from actors. The combination of capabilities and deny lists provides a highly robust permissions system for the fediverse, and mimics previous work on federated open-world systems.

Drawing inspiration from previous work: the Second Life Open Grid Protocol

I've been following large scale interactive communications architectures for many years, which has allowed me to learn many things about the design and implementation of open world horizontally-scaled systems.

One of the projects that I followed very closely was started in 2008, as a collaboration between Linden Lab, IBM and some other participants: the Open Grid Protocol. While the Open Grid Protocol itself ultimately did not work out for various reasons (largely political), a large amount of the work was recycled into a significant redesign of the Second Life service's backend, and the SL grid itself now resembles a federated network in many ways.

OGP was built on the concept of using capability tokens as URIs, which would either map to an active web service or a confirmation. Since the capability token was opaque and difficult to forge, it provided sufficient proof of authentication without sharing any actual information about the authorization itself: the web services act on the session established by the capability URIs instead of on an account directly.

Like ActivityPub, OGP is an actor-centric messaging protocol: when logging in, the login server provides a set of “seed capabilities”, which allow use of the other services. From the perspective of the other services, invocation of those capability URIs is seen as an account performing an action. Sound familiar in a way?

The way Linden Lab implemented this part of OGP was by having a capabilities server which handled routing the invoked capability URIs to other web services. This step in and of itself is not strictly required; an OGP implementation could handle consumption of the capability URIs directly, as OpenSim does, for example.

Bringing capability URIs into ActivityPub as a first step

So, we have established that capability URIs are an opaque token that can be called as a substitute for whatever backend web service was going to be used in the first place. But, what does that get us?

The simplest way to look at it is this way: there are activities which are relayable and others which are not relayable. Both can become capability-enabled, but require separate strategies.

Relayable activities

Create (in this context, thread replies) activities are relayable. This means the capability can simply be invoked by treating it as an inbox, and the server the capability is invoked on will relay the side effects forward. The exact mechanism for this is not yet defined, as it will require prototyping and verification, but it's not impossible. Capability URIs for relayable activities can likely be directly aliased to the sharedInbox if one is available, however.

Intransitive activities

Intransitive activities (ones which act on a pre-existing object that is not supplied), like Announce, Like and Follow, will require proofs. We can already provide proofs in the form of an Accept activity:

{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://example.social/proofs/fa43926a-63e5-4133-9c52-36d5fc6094fa",
  "type": "Accept",
  "actor": "https://example.social/users/bob",
  "object": {
    "id": "https://example.social/activities/12945622-9ea5-46f9-9005-41c5a2364f9c",
    "type": "Announce",
    "object": "https://example.social/objects/d6cb8429-4d26-40fc-90ef-a100503afb73",
    "actor": "https://example.social/users/alyssa",
    "to": ["https://example.social/users/alyssa/followers"],
    "cc": ["https://www.w3.org/ns/activitystreams#Public"]
  }
}

This proof can be optionally signed with LDS in the same way as OCAP-LD proofs. Signing the proof is not covered here, and the proof must be fetchable, as somebody looking to distribute their intransitive actions on objects known to be security labeled must validate the proof somehow.

Object capability discovery

A security-labelled object has a new field, capabilities, which is an Object containing a set of allowed actions and the corresponding capability URI for each:

{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    "https://litepub.social/litepub/lice-v0.0.1.jsonld"
  ],
  "capabilities": {
    "Announce": "https://example.social/caps/4f230498-5a01-4bb5-b06b-e3625fc03947",
    "Create": "https://example.social/caps/d4c4d96a-36d9-4df5-b9da-4b8c74e02567",
    "Like": "https://example.social/caps/21a946fb-1bad-48ae-82c1-e8d1d2ab28c3"
  },
  [...]
}

Example: Invoking a capability

Bob makes a post which he allows liking and replying to, but not announcing. That post looks like this:

{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    "https://litepub.social/litepub/lice-v0.0.1.jsonld"
  ],
  "capabilities": {
    "Create": "https://example.social/caps/d4c4d96a-36d9-4df5-b9da-4b8c74e02567",
    "Like": "https://example.social/caps/21a946fb-1bad-48ae-82c1-e8d1d2ab28c3"
  },
  "id": "https://example.social/objects/d6cb8429-4d26-40fc-90ef-a100503afb73",
  "type": "Note",
  "content": "I'm really excited about the new capabilities feature!",
  "attributedTo": "https://example.social/users/bob"
}

As you can tell, the capabilities object does not include an Announce grant, which means that a proof will not be provided for Announce objects.

Alyssa wants to like the post, so she creates a normal Like activity and sends it to the Like capability URI. The server responds with an Accept object that she can forward to her recipients:

{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    "https://litepub.social/litepub/lice-v0.0.1.jsonld"
  ],
  "id": "https://example.social/proofs/fa43926a-63e5-4133-9c52-36d5fc6094fa",
  "type": "Accept",
  "actor": "https://example.social/users/bob",
  "object": {
    "id": "https://example.social/activities/12945622-9ea5-46f9-9005-41c5a2364f9c",
    "type": "Like",
    "object": "https://example.social/objects/d6cb8429-4d26-40fc-90ef-a100503afb73",
    "actor": "https://example.social/users/alyssa",
    "to": [
      "https://example.social/users/alyssa/followers",
      "https://example.social/users/bob"
    ]
  }
}

Bob can be removed from the recipient list, as he already processed the side effects of the activity when he accepted it. Alyssa can then forward this object on to her followers, which can verify the proof by fetching it, or alternatively verifying the LDS signature if present.

Example: Invoking a relayable capability

Some capabilities, like Create, result in the server hosting the invoked capability relaying the message forward instead of using proofs.

In this example, the post being relayed is assumed to be publicly accessible. In cases where a post is not publicly accessible, the instance should create a capability URI which returns the post object.

Alyssa decides to post a reply to the message from Bob she just liked above:

{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    "https://litepub.social/litepub/lice-v0.0.1.jsonld"
  ],
  "to": ["https://example.social/users/alyssa/followers"],
  "cc": ["https://www.w3.org/ns/activitystreams#Public"],
  "type": "Create",
  "actor": "https://www.w3.org/users/alyssa",
  "object": {
    "capabilities": {
      "Create": "https://example.social/caps/97706df4-86c0-480d-b8f5-f362a1f45a01",
      "Like": "https://example.social/caps/6db4bec5-619d-45a2-b3d7-82e5a30ce8a5"
    },
    "type": "Note",
    "content": "I am really liking the new object capabilities feature too!",
    "attributedTo": "https://example.social/users/alyssa"
  }
}

An astute reader will note that the capability set is the same as the parent's. This is because the parent reserves the right to reject any post which requests more rights than were in the parent post's capability set.

Alyssa POSTs this message to the Create capability from the original message and gets back a 202 Accepted status from the server. The server will then relay the message to her followers collection by dereferencing it remotely.

A possible extension here would be to allow the Create message to become intransitive and combined with a proof. This could be done by leaving the to and cc fields empty, and specifying audience instead or something along those lines.

Considerations with backwards compatibility

Obviously, it goes without saying that an ActivityPub 1.0 implementation can ignore these capabilities and do whatever it wants. Thus, it is suggested that messages with security labelling contrary to what is considered normal for ActivityPub 1.0 not be sent to ActivityPub 1.0 servers.

Determining what servers are compatible ahead of time is still an area that needs significant research activity, but I believe it can be done!

This is the second article in a series that will be a fairly critical review of ActivityPub from a trust & safety perspective. Stay tuned for more.

In our previous episode, I laid out some personal observations about implementing an AP stack from scratch over the past year. When we started this arduous task, there were only three other AP implementations in progress: Mastodon, Kroeg and PubCrawl (the AP transport for Hubzilla), so it has been a pretty significant journey.

I also described how ActivityPub was a student of the 'worse is better' design philosophy. Some people felt a little hurt by this, but they shouldn't have: after all, UNIX (of which modern Linux and BSD systems are a derivative) is also a student of the 'worse is better' philosophy. And much like the unices of yesteryear, ActivityPub right now has a lot of missing pieces. But that's alright, as long as the participants in this experiment understand the limitations.

For the first time in decades, the success of ActivityPub, in part by way of its aggressive adoption of the 'worse is better' philosophy (which enabled its authors to actually ship something), has gained enough traction to inspire people to believe that perhaps we can take back the Web and make it open again. This in itself is a wonderful thing, and we must do our best to seize this opportunity and run with it.

As I mentioned, there have been a huge number of projects looking to implement AP in some way or another, many not yet public but seeking guidance on how to write an AP stack. My DMs have been quite busy with questions about ActivityPub over the past couple of months.

Let's talk about the elephant in the room, actually no not that one.

ActivityPub has been brought this far by the W3C Social CG. This is a Community Group that was chartered by the W3C to advance the Social Web.

While they did a good job at getting some of the best minds into the same room and talking about building a federated social web, a lot of decisions were already predetermined (using pump.io as a basis) or left underspecified to satisfy other groups inside W3C. Finally, the ActivityPub specification itself claimed that pure JSON could be used to implement ActivityPub, but the W3C kept pushing for layered specs on top like JSON-LD Linked Data Signatures, a spec that is not yet finalized but depends on JSON-LD.

LDS has a lot of problems, but I have already covered them elsewhere. You can read about some of those problems by reading up on a mitigation known as Blind Key Rotation. Anyway, this isn't really about the W3C pushing for use of LDS in AP; that is just one illustrative example of trying to bundle JSON-LD and its dependencies into ActivityPub to make JSON-LD a de facto requirement.

Because of this bundling issue, we established a new community group called LitePub. It was meant to be a workspace for people actually implementing ActivityPub stacks, so that they could get documentation and support for using ActivityPub without JSON-LD, or for using JSON-LD in a safe way. To date, the LitePub community is one of the best resources for asking questions about ActivityPub and getting real answers that can be used in production today.

But to build the next generation of ActivityPub, the LitePub group isn't enough. Is the W3C still interested? Unfortunately, from what I can tell, not really: they are pursuing another system, developed in-house, called SOLID, which is built on the Linked Data Platform. Since SOLID is being developed by the W3C top brass, I would assume that they aren't interested in stewarding a new revision of ActivityPub. And why would they be? SOLID is essentially a semantic-web retread of ActivityPub, which gives the W3C top brass exactly what they wanted in the first place.

In some ways, I argue that the W3C's perceived disinterest in Social Web technologies other than SOLID largely has to do with fediverse projects having a very lukewarm response to JSON-LD and LDS.

The good news is that there have been some initial conversations between a few projects on what a working group to build the next generation of ActivityPub would look like, how it would be managed, and how it would be funded. We will be having more of these conversations over the next few months.

ActivityPub: the present state

In the first blog post, I went into a little detail about the present state of ActivityPub. But is it really as bad as I said?

I am going to break down a few examples of faults in the protocol and talk about their current state as well as what we are doing for short-term mitigations and where we are doing them.

Ambiguous addressing: is it a DM or just a post directly addressed to a circle of friends?

As Osada and Hubzilla started to get attention, Mastodon and Pleroma users started to see weird behavior in their notifications and timelines: messages from people they didn't necessarily follow which were directly addressed to them. These are messages sent to a group of selected friends, but which can otherwise be forwarded (boosted/repeated/announced) to other audiences.

In other words, they do not have the same semantic meaning as a DM. But due to the way they were addressed, Mastodon and Pleroma saw them as a DM.

Mastodon fixed this issue in 2.6 by adding heuristics: if a message has recipients in both the to and cc fields, then it's a public message that is addressed to a group of recipients, and not a DM. Unfortunately, Mastodon treats it similarly to a followers-only post and does not infer the correct rights.

Meanwhile, Pleroma and Friendica came up with the idea to add a semantic hint to the message with the litepub:directMessage field. If this is set to true, it should be considered as a direct message. If the field is set to false, then it should be considered a group message. If the field is unset, then heuristics are used to determine the message type.
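
Reusing the hypothetical actors and litepub context from the earlier capability examples, a hinted direct message might look roughly like this; the exact context document and term mapping are defined by the LitePub work, so consider this illustrative:

Example: a Create activity carrying the litepub:directMessage hint
{
  "@context": "https://chatty.example/litepub-v1.jsonld",
  "type": "Create",
  "actor": "https://chatty.example/~bob",
  "litepub:directMessage": true,
  "to": ["https://social.example/~alyssa"],
  "cc": [],
  "object": {
    "type": "Note",
    "attributedTo": "https://chatty.example/~bob",
    "content": "@alyssa are you coming to the meetup on friday?",
    "to": ["https://social.example/~alyssa"]
  }
}

A receiver that understands the hint treats this as a DM regardless of heuristics; one that doesn't falls back to the addressing-based guesswork described above.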

Pleroma has a branch in progress which adds both support for the litepub:directMessage field as well as the heuristics. It should be landing shortly (it needs a rebase and I need to fix up some of the heuristics).

So overall, the issue is reasonably mitigated at this point.

Fake direction attacks

Several months ago, Puckipedia did some fake direction testing against mainstream ActivityPub implementations. Fake direction attacks are especially problematic because they allow spoofing to happen.

She found vulnerabilities in Mastodon, Pleroma and PixelFed, and more recently in a couple of other fediverse projects.

The vulnerabilities she reported in Mastodon, Pleroma and PixelFed have been fixed, but, as she observes, the class of vulnerability keeps appearing.

In part, we can mitigate this by writing excellent security documentation and referring people to read it. This is something that I hope the LitePub group can do in the future.

But for now, I would say this issue is not fully mitigated.

Leakage caused by Mastodon's followers-only scope

Software which is directly compatible with the Mastodon followers-only scope has a few problems; I am grouping them together here:

  • New followers can see content that was posted before they were authorized to view any followers-only content

  • Replies to followers-only posts are addressed to their own followers instead of the followers collection of the OP at the time the post was created (which creates metadata leaks about the OP)

  • Software which does not support the followers-only scope can dereference the OP's followers collection in any way they wish, including interpreting it as as:Public (this is explicitly allowed by the ActivityStreams 2.0 specification, you can't even make this up)

Mitigation of this is actually incredibly easy, which makes me question why Mastodon didn't do it to begin with: simply expand the followers collection when preparing to send the message outbound.
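
In other words, instead of delivering an activity whose addressing only names the opaque collection, the sending server resolves the collection into concrete actors at send time. A hedged sketch of the difference (the actor URIs are hypothetical, and whether the collection URI is kept alongside the concrete actors is an implementation detail):

Example: addressing before and after expanding the followers collection
Before expansion:

  "to": ["https://example.social/users/bob/followers"]

After expansion, at delivery time:

  "to": [
    "https://example.social/users/bob/followers",
    "https://chatty.example/users/alyssa",
    "https://photos.example/users/chris"
  ]

Since every recipient is now named explicitly at the time the post is created, later changes to the followers collection cannot retroactively grant access, and replies can be addressed back to exactly that audience.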

An implementation of this will be landing in Pleroma soon to harden the followers-only scope as well as fix followers-only threads to be more usable.

Implementation of this mitigation also brings the followers-only threads to Friendica and Hubzilla in a safe and compatible way: all fediverse software will be able to properly interact with the threads.

The “don't @ me” problem

Some of this interpretation of Zot may be slightly wrong; it is based on reading the specifications for Zot and Zot 6.

Other federated protocols such as DFRN, Zot and Zot 6 provide a rich framework for defining what interactions are allowed with a given message. ActivityPub doesn't.

DFRN provides UI hints on each object that hint at what may be done with the object, but uses a capabilities system under the hood. Capability enforcement is done by the “feed producer,” which either accepts your request or denies it. If you comment on a post in DFRN, it is the responsibility of the parent “feed producer” to forward your post onward through the network.

Zot uses a similar capabilities system but provides a magic signature in response to consuming the capability, which you then forward as proof of acceptance. Zot 6 uses a similar authentication scheme, except using OpenWebAuth instead of the original Zot authentication scheme.

For ActivityPub, my proposal is to use a system of capability URIs and proof objects that are cross-checked by the receiving server. Cryptographic signatures are not a component of the proof objects themselves; the system is strictly capability-based. Cryptographic verification could be provided by leveraging HTTP Signatures to sign the response, if desired. I am still working out the details of how precisely this will work, and that will probably be what the next blog post is about.

As a datapoint: in Pleroma, we already use this cross-checking technique to verify objects which have been forwarded to us due to ActivityPub §7.1.2. This allows us to avoid JSON-LD and LDS signatures and is the recommended way to verify forwarded objects in LitePub implementations.
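
The cross-checking idea itself is simple enough to sketch: rather than trusting the JSON body that was forwarded to us, refetch the object from its canonical id and use that copy. The helper below is illustrative only; the fetch function is supplied by the caller and none of these names are real Pleroma APIs.

defmodule ForwardedObjects do
  # Origin cross-checking for forwarded objects (AP §7.1.2), sketched.
  # `fetch` is a function that retrieves and decodes the object by its id.
  def handle_forwarded(%{"id" => id}, fetch) when is_binary(id) do
    case fetch.(id) do
      # Only accept the copy we fetched ourselves, and only if its id matches.
      {:ok, %{"id" => ^id} = canonical} -> {:ok, canonical}
      _ -> {:error, :could_not_verify}
    end
  end

  def handle_forwarded(_object, _fetch), do: {:error, :no_id}
end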

Unauthenticated object fetching

Right now, due to the nature of ActivityPub and the design motivations behind it, fetching public objects is entirely unauthenticated.

This has led to a few incidents where fediverse users have gotten upset that their posts still arrive at servers they have blocked, since they naturally expect blocking to prevent that.

Mastodon has implemented an extension for post fetching where fetching private posts is authenticated using the HTTP Signature of the user who is fetching the post. This is a possible way of solving the authentication problem: instances can be identified based on which actor signed the request.

However, I don't think that fetching private posts in this way is a good idea (such fetches should simply always fail), and I wouldn't recommend it. With that said, a more generalized approach based on using HTTP Signatures to fetch public posts could be workable.

But I do not think the AP server should use a random user's key to sign the requests: instead there should be an AP actor which explicitly represents the whole instance, and the instance actor's key should be used to sign the fetch requests instead. That way information about individual users isn't leaked, and signatures aren't created without the express consent of a random instance user.
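
Sketching that out in the style of the HTTP Signatures draft (the instance-actor URI and key id are hypothetical, and the Signature header is wrapped across lines here purely for readability):

Example: an object fetch signed with an instance actor's key
GET /objects/d6cb8429-4d26-40fc-90ef-a100503afb73 HTTP/1.1
Host: example.social
Date: Tue, 22 Jan 2019 18:30:00 GMT
Accept: application/activity+json
Signature: keyId="https://chatty.example/actor#main-key",
  algorithm="rsa-sha256",
  headers="(request-target) host date accept",
  signature="<base64-encoded signature>"

example.social can now map the key back to chatty.example as a whole and decide whether to disclose the object, without learning anything about which individual user on chatty.example wanted it.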

Once object fetches are properly authenticated in a way that instances are identifiable, then objects can be selectively disclosed. This also hardens object fetching via third parties such as crawlers.

Conclusion

In this particular blog entry, I discussed why ActivityPub is still the hero we need despite being designed with the 'worse is better' philosophy, as well as discussed some early plans for cross-project collaboration on a next generation ActivityPub-based protocol, and discussed a few of the common problem areas with ActivityPub and how we can mitigate them in the future.

And with that, despite the present issues we face with ActivityPub, I will end this by borrowing a common saying from the cryptocurrency community: the future is bright, the future is decentralized.

This is the first article in a series that will be a fairly critical review of ActivityPub from a trust & safety perspective. Stay tuned for more.

In the modern day, myself and many other developers working on libre software have been exposed to a protocol design philosophy that emphasizes safety and correctness. That philosophy can be summarized with these goals:

  • Simplicity: the protocol must be simple to implement. It is more important for the protocol to be simple than the backend implementation.

  • Correctness: the protocol must be verifiably correct. Incorrect behavior is simply not allowed.

  • Safety: the protocol must be designed in a way that is safe. Behavior and functionality which risks safety is considered incorrect.

  • Completeness: the protocol must cover as many situations as is practical. All reasonably expected cases must be covered. Simplicity is not a valid excuse to reduce completeness.

Most people would correctly refer to these as good characteristics and overall the right way to approach designing protocols, especially in a federated and social setting. In many ways, the Diaspora protocol could be considered as an example of this philosophy of design.

The “worse is better” approach to protocol design is only slightly different:

  • Simplicity: the protocol must be simple to implement. It is important for the backend implementation to be equally simple as the protocol itself. Simplicity of both implementation and protocol are the most important considerations in the design.

  • Correctness: the protocol must be correct when tested against reasonably expected cases. It is more important to be simple than correct. Inconsistencies between real implementations and theoretical implementations are acceptable.

  • Safety: the protocol must be safe when tested against basic use cases. It is more important to be simple than safe.

  • Completeness: the protocol must cover reasonably expected cases. It is more important for the protocol to be simple than complete. Under-specification is acceptable when it improves the simplicity of the protocol.

OStatus and ActivityPub are examples of the “worse is better” approach to protocol design. I have intentionally portrayed this design approach in a way to attempt to convince you that it is a really bad approach.

However, I do believe that this approach, even though it is a considerably worse approach to protocol design, one which creates technologies that people simply cannot fully trust or have confidence in the safety of, has better survival characteristics.

To understand why, we have to look at both what expected security features of federated social networks are, and what people mostly use social networks for.

When you ask people what security features they expect of a federated social networking service such as Mastodon or Pleroma, they usually reply with a list like this:

  • I should be able to interact with my friends.

  • The messages I share only with my friends should be handled in a secure manner. I should be able to depend on the software to not compromise my private posts.

  • Blocking should work reasonably well: if I block someone, they should disappear from my experience.

These requirements sound reasonable, right? And of course, ActivityPub mostly gets the job done. After all, the main uses of social media are shitposting, posting selfies and sharing pictures of your dog. But would users be better served by a different protocol? Absolutely.

See, the thing is, ActivityPub is like a virus. The protocol is simple enough to implement that people can actually do it. And they are, aren't they? There are over 40 applications presently in development that use ActivityPub as the basis of their networking stack.

Why is this? Because, despite the design flaws in ActivityPub, it is generally good enough: you can interact with your friends, and in compliant implementations, addressing ensures that nobody else except for those you explicitly authorize will read your messages.

But it's not good enough: for example, people have expressed that they want others to be able to read messages, but not reply to them.

Had ActivityPub been a capability-based system instead of a signature-based system, this would never have been a concern to begin with: replies to the message would have gone to a special capability URI and then accepted or rejected.

There are similar problems with things like the Mastodon “followers-only” posts and general concerns like direct messaging: these types of messages imply specific policy, but there is no mechanism in ActivityPub to convey these semantics. (This is in part solved by the LitePub litepub:directMessage flag, but that's a kludge to be honest.)

I've also mentioned before that a large number of the cases where there has been discourse about Mastodon versus Pleroma have actually been caused by complete design failures of ActivityPub.

An example of this is instances you've banned still being able to see threads from your instance: what happens is that somebody from a third instance interacts with the thread, and then the software (either Mastodon or Pleroma) reconstructs the entire thread. Since there is no authentication requirement to retrieve a thread, these blocked instances can successfully reconstruct the threads they weren't allowed to receive in the first place. The only difference between Mastodon and Pleroma here is that Pleroma allows the general public to view the shared timelines without using a third-party tool, which exposes the leaks caused by ActivityPub's bad design.

In an ideal world, the number of ActivityPub implementations would be zero. But of course this is not an ideal world, so that leaves us with the question: “where do we go from here?”

And honestly, I don't know how to answer that yet. Maybe we can save ActivityPub by extending it to be properly capability-based and eventually dropping support for the ActivityPub of today. But this will require coordination between all the vendors. And with 40+ projects out there, it's not going to be easy. And do we even care about those 40+ projects anyway?

ActivityPub uses cryptographic signatures, mainly for the purpose of authenticating messages. This is largely for the purpose of spoofing prevention, but as any observant person would understand, digital signatures carry strong forensic value.

Unfortunately, while ActivityPub uses cryptographic signatures, the types of cryptographic signatures to use have been left unspecified. This has led to various implementations having to choose on their own which signature types to use.

The fediverse has settled on using not one but two types of cryptographic signature:

  • HTTP Signatures: based on an IETF internet-draft, HTTP signatures provide a cryptographic validation of the headers, including a Digest header which provides some information about the underlying object. HTTP Signatures are an example of detached signatures. HTTP Signatures also generally sign the Date header which provides a defacto validity period.

  • JSON-LD Linked Data Signatures: based on a W3C community draft, JSON-LD Linked Data Signatures provide an inline cryptographic validation of the JSON-LD document being signed. JSON-LD Linked Data Signatures are commonly referred to as LDS signatures or LDSigs, because frankly the title of the spec is a mouthful. LDSigs are an example of inline signatures; a sketch of one is shown below.
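
For contrast with the detached HTTP Signature, here is roughly the shape of an inline LDSig as attached to an activity by current fediverse software. The field values are invented, and the exact context and fields may vary between implementations:

Example: an inline JSON-LD Linked Data Signature
{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    "https://w3id.org/security/v1"
  ],
  "id": "https://example.social/activities/0dbd5d55-5d33-4c0c-9a29-9e357bd670b0",
  "type": "Create",
  "actor": "https://example.social/users/bob",
  "signature": {
    "type": "RsaSignature2017",
    "creator": "https://example.social/users/bob#main-key",
    "created": "2019-02-22T04:20:00Z",
    "signatureValue": "<base64-encoded signature>"
  },
  [...]
}

Unlike the HTTP Signature, which lives in the transport and disappears once the request completes, this blob stays attached to the object wherever it is forwarded, which matters for the deniability discussion below.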

Signatures and Deniability

When we refer to deniability, what we're talking about is forensic deniability, or put simply the ability to plausibly argue in a court or tribunal that you did not sign a given object. In essence, forensic deniability is the ability to argue plausible deniability when presented with a piece of forensic evidence.

Digital signatures are by their very nature harmful with regard to forensic deniability, because they are digital evidence showing that you signed something. But not all signature schemes are made equal: some are less harmful to deniability than others.

A good signature scheme which does not harm deniability has the following basic attributes:

  • Signatures are ephemeral: they only hold validity for a given time period.

  • Signatures are revocable: they can be invalidated during the validity period in some way.

Both HTTP Signatures and LDSigs have weaknesses. Specifically, neither implementation allows for the possibility of future revocation of the signature, but LDSigs are even worse, because LDSigs are intentionally valid forever.

Mitigating the revocability problem with Blind Key Rotation

Blind Key Rotation is a mitigation that builds on the fact that ActivityPub implementations must fetch a given actor again in the event that signature authentication fails, by using this fact to provide some level of revocability.

The mitigation works as follows:

  1. You delete one or more objects in a short time period.

  2. Some time after the deletions are processed, the instance rekeys your account. It does not send any Update message or similar because signing your new key with your old key defeats the purpose of this exercise.

  3. When you next publish content, signature validation fails and the instance fetches your account's actor object again to learn the new keys.

  4. With the new keys, signature validation passes and your new content is published.

It is important to emphasize that in a Blind Key Rotation you do not send out an Update message with the new keys. The reason is that you do not want to create a cryptographic relationship between the keys. By creating a cryptographic relationship, you introduce new digital evidence which can be used to prove that you held the original keypair at some point in the past.
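
From the receiving side, what makes this work is simply "refetch the actor and try again" when signature validation fails. A minimal sketch, assuming the caller supplies the fetch and validation functions (these are not real Pleroma APIs):

defmodule BlindRotation do
  # `fetch_actor` takes an actor id plus :cached or :refetch and returns the
  # actor document; `validate` checks the request against that actor's key.
  def verify(conn, actor_id, fetch_actor, validate) do
    case validate.(conn, fetch_actor.(actor_id, :cached)) do
      :ok ->
        :ok

      {:error, _reason} ->
        # The sender may have blindly rotated their keys:
        # bypass the cache, refetch the actor, and retry once.
        validate.(conn, fetch_actor.(actor_id, :refetch))
    end
  end
end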

Questions?

If you still have questions, contact me on the fediverse: @kaniini@pleroma.site