And I guess this question is in two parts: 1. regarding the current Lemmy implementation, and 2. the ActivityPub protocol in general
Speaking of scale only, bigger instances are certainly better. More, smaller instances increase the coordination overhead significantly: remember that your instance saves and serves a copy of any remote post, so in the extreme case every server needs to hold a copy of everything on every other server. Also, the more instances there are, the more peers each server has to ask for an update.
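To make that concrete, here is a rough back-of-envelope sketch (illustrative numbers only, not measurements of Lemmy) of how stored copies and delivery messages grow in the extreme case where every instance mirrors every other instance:

```python
# Rough back-of-envelope model of the extreme case described above:
# every instance subscribes to content from every other instance.
# Numbers are made up for illustration, not measurements of Lemmy.

def federation_cost(num_instances: int, posts_per_instance: int) -> dict:
    """Estimate stored copies and delivery messages in the worst case."""
    total_posts = num_instances * posts_per_instance
    # Each post is stored on its home instance plus every subscribing peer.
    stored_copies = total_posts * num_instances
    # Each new post must be delivered to every other instance once.
    deliveries = total_posts * (num_instances - 1)
    return {"stored_copies": stored_copies, "deliveries": deliveries}

# One big instance vs. many small ones, same total post volume:
print(federation_cost(num_instances=10, posts_per_instance=1000))
print(federation_cost(num_instances=1000, posts_per_instance=10))
```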
Many small instances have other benefits though, among them higher resilience and independence.
I believe that in the fediverse, updates are almost always sent (pushed), not requested
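For what it’s worth, a very simplified sketch of that push model might look like the following. Real servers also sign these requests with HTTP Signatures and discover the inbox URL from the actor document; the URLs and payload here are made up for illustration:

```python
# Minimal sketch of ActivityPub's push model: the origin server POSTs an
# activity to a follower's inbox; peers do not poll for updates.
# Simplified: real servers sign requests (HTTP Signatures) and use real
# actor/inbox URLs discovered from the actor document.
import requests

activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example-instance.org/u/alice",  # hypothetical actor
    "object": {
        "type": "Note",
        "content": "Hello fediverse!",
        "attributedTo": "https://example-instance.org/u/alice",
    },
}

# Push the activity to a (hypothetical) remote inbox.
resp = requests.post(
    "https://remote-instance.example/inbox",
    json=activity,
    headers={"Content-Type": "application/activity+json"},
    timeout=10,
)
print(resp.status_code)
```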
Every complex system (and federated systems like Lemmy qualify) has more than one potential bottleneck that can become a problem under different conditions.
- Right now, the common performance bottleneck for Lemmy instances is heavy database reads caused by users browsing. Many of these queries are written inefficiently and can be optimized, and there are things that can be done in Postgres to scale as well. Browse traffic is one kind of workload that can reach limits, and it gets stressed when lots of users are active on one big instance.
- Federated networks CAN experience federated replication load when there are lots of instances to deliver federation messages to. If I comment on this post, and the server hosting the community has to deliver the comment to (pinky to mouth) one million instances… that’s a different kind of workload and it gets stressed when there are lots of different instances subscribed to a single community.
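As a rough sketch of what that delivery fan-out looks like (simplified; ActivityPub’s optional sharedInbox lets the sender deliver once per remote instance rather than once per subscriber, and the follower data here is illustrative):

```python
# Sketch of the fan-out workload described above: the instance hosting a
# community must deliver each new comment to every subscribed instance.
# ActivityPub's optional sharedInbox lets it deliver once per instance
# instead of once per subscriber. Follower data here is illustrative.
import requests

def deliver_to_followers(activity: dict, followers: list[dict]) -> None:
    # Deduplicate to one shared inbox per remote instance where available.
    inboxes = set()
    for actor in followers:
        endpoints = actor.get("endpoints", {})
        inboxes.add(endpoints.get("sharedInbox", actor["inbox"]))
    # The delivery count still grows linearly with the number of instances.
    for inbox in inboxes:
        requests.post(
            inbox,
            json=activity,
            headers={"Content-Type": "application/activity+json"},
            timeout=10,
        )
```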
The Goldilocks zone is where there is a medium number of medium-sized instances. Then each federation message can efficiently power browse traffic for a lot of users, and no one instance gets overwhelmed with browse traffic.
In practice, this is not how networks organize. There will be instances that are “too large” as well as lots of small instances. Right now, the Lemmy network is small and federation traffic is not a meaningful bottleneck. Browse traffic is, and that’s what the devs are working on. But with time, the limits of both these things can be pushed further out, improving the scalability of the network in both directions.
It’s a great question! To know this, we’d need to look into not just what the ActivityPub protocol says, but also exactly how the code base implements it, and how the server actually performs on the computers it’s deployed on.
We might look specifically at:
1. How many requests (and of what types) does a typical end user send to their local instance?
2. How many requests (etc.) does an instance send to its peer instances?
3. How is (2) controlled by the number of subscriptions, posts, or other variables?
4. How does instance performance respond to different kinds of request load?
5. How have instance operators tuned the Lemmy server or its backends to manage different loads?
Because ActivityPub is built on HTTP, different types of request are expressed as different URL paths and HTTP methods. This should make it straightforward to characterize a server’s behavior under different kinds of load (e.g. lots of local posts, lots of remote posts, lots of new user accounts, etc.).
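As a sketch of what that characterization could look like, assuming a combined-format access log from a reverse proxy in front of the server, and guessing at which path prefixes correspond to client-API vs. federation traffic (the prefixes are assumptions, not confirmed Lemmy routes):

```python
# Sketch: bucket requests by method and path prefix to see what kind of
# load a server is handling. Assumes a combined-format access log from a
# reverse proxy in front of the Lemmy server; the path prefixes below are
# illustrative guesses at client-API vs. federation routes, not confirmed.
import re
from collections import Counter

LOG_LINE = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+) HTTP/[\d.]+"')

def classify(method: str, path: str) -> str:
    if path.startswith("/api/"):
        return "client API (browse/post)"
    if "inbox" in path:
        return "federation delivery (incoming)"
    if method == "GET":
        return "object fetch / page load"
    return "other"

counts = Counter()
with open("access.log") as f:
    for line in f:
        m = LOG_LINE.search(line)
        if m:
            counts[classify(m["method"], m["path"])] += 1

for bucket, n in counts.most_common():
    print(f"{bucket}: {n}")
```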
Doing this “for serious” as an engineering project would require some amount of testing infrastructure, e.g. the ability to replay various kinds of traffic against a Lemmy server while monitoring its performance.
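A minimal replay harness could start as small as this (the base URL, port, and paths.txt input are hypothetical; a real test would also replay POSTs and federation traffic and watch server-side metrics like CPU and Postgres load at the same time):

```python
# Minimal sketch of a replay harness: re-issue recorded GET paths against a
# test instance and record latency. The base URL and "paths.txt" input are
# hypothetical placeholders.
import time
import statistics
import requests

BASE_URL = "http://localhost:8536"    # assumed local test instance

latencies = []
with open("paths.txt") as f:          # one recorded request path per line
    for path in f:
        start = time.perf_counter()
        requests.get(BASE_URL + path.strip(), timeout=30)
        latencies.append(time.perf_counter() - start)

print(f"requests: {len(latencies)}")
print(f"median latency: {statistics.median(latencies) * 1000:.1f} ms")
print(f"p95 latency: {statistics.quantiles(latencies, n=20)[-1] * 1000:.1f} ms")
```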
I had a similar curiosity… Like if I make my own instance but it’s just myself, is that even a net positive to the network? Now there’s a new instance pulling everything I want to it, rather than another, bigger instance that might have shared those subscriptions among many users…