---
title: "Do Servers Matter on Mastodon? Data-driven Design for Decentralized Social Media"
short-title: Mastodon Recommendations
authors:
  - name: Carl Colglazier
    affiliation:
      name: Northwestern University
      city: Evanston
      state: Illinois
      country: United States
    corresponding: true
bibliography: references.bib
format:
  acm-html:
    comments:
      hypothesis: false
  acm-pdf:
    output-file: mastodon-recommendations-acm.pdf
    keep-md: true
    include-in-header:
      - text: |
          \usepackage{siunitx}
acm-metadata:
  # comment this out to make submission anonymous
  anonymous: true
  # comment this out to build a draft version
  #final: true

  # comment this out to specify detailed document options
  # acmart-options: sigconf, review

  # acm preamble information
  copyright-year: 2018
  acm-year: 2018
  copyright: acmcopyright
  doi: XXXXXXX.XXXXXXX
  conference-acronym: "Conference acronym 'XX"
  conference-name: |
    Make sure to enter the correct
    conference title from your rights confirmation email
  conference-date: June 03--05, 2018
  conference-location: Woodstock, NY
  price: "15.00"
  isbn: 978-1-4503-XXXX-X/18/06

  # if present, replaces the list of authors in the page header.
  shortauthors: Colglazier

  # The code below is generated by the tool at http://dl.acm.org/ccs.cfm.
  # Please copy and paste the code instead of the example below.
  ccs: |
    \begin{CCSXML}
    <ccs2012>
    <concept>
    <concept_id>10003120.10003130.10003233.10010519</concept_id>
    <concept_desc>Human-centered computing~Social networking sites</concept_desc>
    <concept_significance>500</concept_significance>
    </concept>
    <concept>
    <concept_id>10002951.10003317.10003338</concept_id>
    <concept_desc>Information systems~Retrieval models and ranking</concept_desc>
    <concept_significance>300</concept_significance>
    </concept>
    <concept>
    <concept_id>10010405.10010497.10010498</concept_id>
    <concept_desc>Applied computing~Document searching</concept_desc>
    <concept_significance>300</concept_significance>
    </concept>
    <concept>
    <concept_id>10003120.10003130</concept_id>
    <concept_desc>Human-centered computing~Collaborative and social computing</concept_desc>
    <concept_significance>300</concept_significance>
    </concept>
    </ccs2012>
    \end{CCSXML}

    \ccsdesc[500]{Human-centered computing~Social networking sites}
    \ccsdesc[300]{Information systems~Retrieval models and ranking}
    \ccsdesc[300]{Applied computing~Document searching}
    \ccsdesc[300]{Human-centered computing~Collaborative and social computing}
keywords:
  - decentralized online social networks
abstract: |
  When trying to join Mastodon, a decentralized collection of interoperable social networking servers, new users face the dilemma of choosing a home server. Using trace data from millions of new Mastodon accounts, we show that new accounts are less likely to remain active on the network's largest general instances compared to others. Additionally, we observe a trend of users migrating from larger to smaller servers. Addressing the challenge of onboarding and server selection, this paper proposes a decentralized recommendation system for servers based on hashtags and the Okapi BM25 algorithm. This system leverages servers' top hashtags and their frequency to create a recommendation mechanism that respects Mastodon's decentralized ethos. Simulations demonstrate that such a tool can be effective even with limited data on each local server.
execute:
  echo: false
  error: false
  warning: false
  message: false
  freeze: false
  cache: true
fig-width: 6.75
knitr:
  opts_knit:
    verbose: true
code-block-border-left: false
code-block-bg: false
---

```{r}
#| label: setup

my_profile <- Sys.getenv("QUARTO_PROFILE", unset = "acm")
if (my_profile == "acm") {
  class_wide <- ".column-body"
} else {
  class_wide <- ".column-page"
}

library(here)

get_here <- function(file) {
  here::here(file)
}

envs <- Sys.getenv()

library(modelsummary)
# Revert to old modelsummary system for now.
options(modelsummary_factory_default = 'kableExtra')
```

# Introduction

Following Twitter's 2022 acquisition, Mastodon---an open-source, decentralized social network and microblogging community---saw an increase in activity and attention as a potential Twitter alternative [@heFlockingMastodonTracking2023; @lacavaDriversSocialInfluence2023]. While millions of people set up new accounts, significantly increasing the size of the network, many newcomers found the process confusing and many accounts did not remain active. Unlike centralized social media platforms, Mastodon is a network of independent servers with their own rules and norms [@nicholsonMastodonRulesCharacterizing2023]. Servers communicate with one another using the shared ActivityPub protocol, and accounts can move between Mastodon servers, but the local experience can vary widely from server to server.

<!-- Further, many Mastodon servers have specific norms which people coming from Twitter may find confusing, such as local norms around content warnings [@nicholsonMastodonRulesCharacterizing2023]. -->

Although attracting and retaining newcomers is a key challenge for online communities [@krautBuildingSuccessfulOnline2011 p. 182], Mastodon's onboarding process has not always been straightforward. Variation among servers can also present a challenge for newcomers, who may not even be aware of the specific rules, norms, or general topics of interest on the server they are joining [@diazUsingMastodonWay2022]. Various guides and resources for people trying to join Mastodon offer mixed advice on choosing a server. Some suggest that the most important thing is to simply join any server and work from there [@krasnoffMastodon101How2022; @silberlingBeginnerGuideMastodon2023], while others have created tools and guides to help people find potential servers of interest by size and location [@thekinrarMastodonInstances2017; @kingMastodonMe2024].

Mastodon's decentralized design has long been in tension with the disproportionate popularity of a small set of large, general-topic servers within the system [@ramanChallengesDecentralisedWeb2019a]. Analyzing the activity of new accounts that join the network, we find that users who sign up on such servers are less likely to remain active after 91 days. We also find that many users who move accounts tend to gravitate toward smaller, more niche servers over time, suggesting that established users may also find additional utility from such servers.

In response to these findings, we propose a potential way to create server and tag recommendations on Mastodon. This recommendation system could both help newcomers find servers that match their interests and help established accounts discover "neighborhoods" of related servers to enable further discovery.

# Background

## Empirical Setting

The Fediverse is a set of decentralized online social networks which interoperate using shared protocols like ActivityPub. Mastodon is a software program used by many Fediverse servers and offers a user experience similar to the TweetDeck client for Twitter. It was first created in late 2016 and saw a surge in interest in 2022 during and after Elon Musk's Twitter acquisition.

Mastodon features three kinds of timelines. The primary timeline is a "home" timeline which shows all posts from accounts followed by the user. Mastodon also supports a "local" timeline which shows all public posts from the local server and a "federated" timeline which includes all posts from users followed by other users on their server. The local timeline is unique to each server and can be used to discover new accounts and posts from the local community. On larger servers, this timeline can be unwieldy; however, on smaller servers, it presents the opportunity to discover new posts and users of potential interest.

Discovery has been challenging on Mastodon. Text search, for instance, was impossible on most servers until support for this feature was added on an opt-in basis using Elasticsearch in late 2023 [@rochkoMastodon2023]. Recommendation systems remain a relatively novel problem in the context of decentralized online social networks. @trienesRecommendingUsersWhom2018 developed a recommendation system for finding new accounts to follow on the Fediverse which used collaborative filtering based on BM25 in an early example of a content discovery system on Mastodon.

Individual Mastodon servers can have an effect on the end experience of their users. For example, some servers may choose to federate with certain servers but not others, altering the topology of the Fediverse network for their users. At the same time, each account is tied to one specific server. Because of Mastodon's data portability, users can move their accounts freely between servers while retaining their followers, though their post history remains with their original account.

## The Mastodon Migrations

Mastodon saw a surge in interest in 2022 and 2023, particularly after Elon Musk's Twitter acquisition. Four events in particular drove measurable increases in new users to the network: the announcement of the acquisition (April 14, 2022), the closing of the acquisition (October 27, 2022), a day when Twitter suspended a number of prominent journalists (December 15, 2022), and a day when Twitter experienced an outage and started rate limiting accounts (July 1, 2023). Many Twitter accounts announced they were setting up Mastodon accounts and shared links to their new accounts with their followers, often using tags like `#TwitterMigration` [@heFlockingMastodonTracking2023], driving interest in Mastodon in a process @lacavaDriversSocialInfluence2023 found consistent with social influence theory.

Some media outlets have framed reports on Mastodon [@hooverMastodonBumpNow2023] through what @zulliRethinkingSocialSocial2020 calls the "Killer Hype Cycle", whereby the media finds a new alternative social media platform, declares it a potential killer of some established platform, and later calls it a failure if it does not displace the existing platform. Such framing fails to take systems like the Fediverse seriously on their own merits: completely replacing existing commercial systems is not the only way to measure success, nor does it account for the real value the Fediverse provides for its millions of active users.

Mastodon's approach to onboarding has also changed over time. For much of 2020 and early 2021, the Mastodon developers closed sign-ups to their flagship server and linked to an alternative server, which saw increased sign-ups during this period. They also linked to a list of servers on the "Join Mastodon" webpage [@mastodonggmbhServers], where all servers are pre-approved and follow the Mastodon Server Covenant, which guarantees certain content moderation standards and data protections. Starting in 2023, the Mastodon developers shifted toward making the flagship server the default when people sign up on the official Mastodon Android and iOS apps [@rochkoNewOnboardingExperience2023; @rothItGettingEasier2023].

## Newcomers in Online Communities

Onboarding newcomers is an important part of the life cycle of online communities. Any community can expect a certain amount of turnover, and so it is important for the long-term health and longevity of the community to be able to bring in new members [@krautBuildingSuccessfulOnline2011 p. 182]. However, the process of onboarding newcomers is not always straightforward.

The series of migrations of new users into Mastodon in many ways reflects folk stories of "Eternal Septembers" on previous communication networks, where a large influx of newcomers challenged the existing norms [@driscollWeMisrememberEternal2023; @kieneSurvivingEternalSeptember2016]. Many Mastodon servers do have specific norms which people coming from Twitter may find confusing, such as local norms around content warnings [@nicholsonMastodonRulesCharacterizing2023]. Variation among servers can also present a challenge for newcomers who may not even be aware of the specific rules, norms, or general topics of interest on the server they are joining [@diazUsingMastodonWay2022]. Mastodon servers open to new accounts must thus be accommodating to newcomers while at the same time ensuring the propagation of their norms and culture, either through social norms or through technical means.

## Recommendation Systems and Collaborative Filtering

Recommender systems help people filter information to find resources relevant to some need [@ricciRecommenderSystemsHandbook2022]. The development of these systems as an area of formal study harkens back to information retrieval (e.g. @saltonIntroductionModernInformation1987) and foundational works imagining the role of computing in human decision-making (e.g. @bushWeMayThink1945). Early work on these systems produced more effective ways of filtering and sorting documents in searches, such as the probabilistic models that motivated the creation of the Okapi BM25 relevance function [@robertsonProbabilisticRelevanceFramework2009]. Many contemporary recommendation systems use collaborative filtering, a technique which produces new recommendations for items based on the preferences of a collection of similar users [@korenAdvancesCollaborativeFiltering2022].

Collaborative filtering systems build on top of a user-item-rating ($U-I-r$) model where there is a set of users who each provide ratings for a set of items. The system then uses the ratings from other users to predict the ratings of a user for an item they have not yet rated and uses these predictions to create an ordered list of the best recommendations for the user's needs [@ekstrandCollaborativeFilteringRecommender2011 pp. 86-87]. Collaborative filtering recommender systems typically produce better results as the number of users and items in the system increases; however, they must also deal with the "cold start" problem, where limited data makes recommendations unviable [@lamAddressingColdstartProblem2008]. The cold start problem has three possible facets: bootstrapping new communities, dealing with new items, and handling new users [@schaferCollaborativeFilteringRecommender2007 pp. 311-312]. In each case, limited data on the entity makes it impossible to find similar entities without some way of building a profile. Further, uncorrected collaborative filtering techniques often produce a bias where more broadly popular items receive more recommendations than more obscure but possibly more relevant items [@zhuPopularityOpportunityBiasCollaborative2021]. Research on collaborative filtering has also shown that the quality of recommendations can be improved by using a combination of user-based and item-based collaborative filtering [@sarwarItembasedCollaborativeFiltering2001]. <!-- TODO: check this -->

Although all forms of collaborative filtering use some combination of users and items, there are two main approaches to collaborative filtering: memory-based and model-based. Memory-based approaches use the entire user-item matrix to make recommendations, while model-based approaches use a reduced form of the matrix. This is particularly useful because the matrix of items and users tends to be extremely sparse; e.g., in a movie recommender system, most people have not seen most of the movies in the database. Singular value decomposition (SVD) is one such dimension reduction technique, which transforms an $m \times n$ matrix $M$ into the form $M = U \Sigma V^{T}$ [@paterekImprovingRegularizedSingular2007]. SVD is particularly useful for recommendation systems because it can be used to find the latent factors which underlie the user-item matrix and use these factors to make recommendations.
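
To make the model-based approach concrete, the toy sketch below (with invented ratings, not data from this study) shows how a truncated SVD can fill in unobserved cells of a small user-item matrix; the chunk is illustrative and not evaluated.

```{r}
#| eval: false
# Toy example of model-based collaborative filtering with a truncated SVD.
# The ratings are invented for illustration; zeros mark unrated items.
ratings <- matrix(
  c(5, 3, 0, 1,
    4, 0, 0, 1,
    1, 1, 0, 5,
    0, 0, 5, 4),
  nrow = 4, byrow = TRUE,
  dimnames = list(paste0("user", 1:4), paste0("item", 1:4))
)

global_mean <- mean(ratings[ratings > 0])
decomp <- svd(ratings - global_mean)    # simple global-mean centering

k <- 2                                  # keep the top k latent factors
reconstructed <- decomp$u[, 1:k] %*% diag(decomp$d[1:k]) %*% t(decomp$v[, 1:k])
predicted <- reconstructed + global_mean
dimnames(predicted) <- dimnames(ratings)

predicted["user2", "item2"]             # predicted rating for an unrated cell
```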

While researchers in the recommendation system space often focus on ways to design the system to produce good results mathematically, human-computer interaction researchers also consider various human factors which contribute to the overall system. Crucially, McNee et al. argued “being accurate is not enough”: user-centric evaluations, which consider multiple aspects of the user experience, are necessary to evaluate the full system. HCI researchers have also contributed pioneering recommender systems in practice. For example, GroupLens researchers @resnickGrouplensOpenArchitecture1994 created a collaborative filtering system for Usenet and later produced advancements in system evaluation and explanation of movie recommendations [@herlockerEvaluatingCollaborativeFiltering2004; @herlockerExplainingCollaborativeFiltering2000]. @cosleySuggestBotUsingIntelligent2007 created a system to match people with tasks on Wikipedia to encourage more editing. This prior work shows that recommender systems can be used to help users find relevant information in a variety of contexts.

## Evaluation of Recommendation Systems

Evaluating recommender systems can be tricky because a measure of good performance must take into account various dimensions [@zangerleEvaluatingRecommenderSystems2022]. A measure of accuracy must be paired with a question of “accuracy toward what?” Explainability requires a transparent means of showing the user why a certain item was recommended.

It is often important both to start with an end goal in mind and to keep evaluation integrated throughout the entire process of creating a recommender system, from conceptualization to optimization. There are several considerations to keep in mind, such as the trade-off between optimizing suggestions and the risks of over-fitting. For example, a system designed to maximize the propensity that the user will like its suggestions may struggle with reduced diversity in those suggestions.

Recommender systems can be evaluated using three broad categories of techniques: offline evaluation, online evaluation, and user studies. Offline evaluation uses pre-collected data and a measure to describe the performance of the system, assuming that the time elapsed between data collection and the present does not meaningfully affect relevance. Online evaluation uses a deployed, live system, e.g., A/B testing; in this case, the user is often unaware of the experiment. In contrast, user studies involve subjects who are aware they are being studied.

# Data

```{r}
#| label: fig-account-timeline
#| fig-cap: "Accounts in the dataset created between January 2022 and March 2023. The top panels show the proportion of accounts still active 45 days after creation, the proportion of accounts that have moved, and the proportion of accounts that have been suspended. The bottom panel shows the count of accounts created each week. The dashed vertical lines in the bottom panel represent the announcement day of the Elon Musk Twitter acquisition, the acquisition closing day, a day when Twitter suspended a number of prominent journalists, and a day when Twitter experienced an outage and started rate limiting accounts."
#| fig-height: 2.75
#| fig-width: 6.75
#| fig-env: figure*
#| fig-pos: tb!

library(here)
source(here("code/helpers.R"))
account_timeline_plot()
```

Mastodon has an extensive API which allows for the collection of public posts and account information. We collected data from the public timelines of Mastodon servers using the Mastodon API with a crawler which runs once per day. We also collected account information from the opt-in public profile directories on these servers.

```{r}
#| label: data-counts
#| cache: true

library(arrow)
library(tidyverse)
library(here)
source(here("code/helpers.R"))

accounts <- load_accounts(filt = FALSE) %>%
  filter(created_at >= "2020-08-14") %>%
  filter(created_at < "2024-01-01")

tag_posts <- "data/scratch/all_tag_posts.feather" %>%
  arrow::read_ipc_file(., col_select = c("host", "acct", "created_at")) %>%
  filter(created_at >= as.Date("2023-05-01")) %>%
  filter(created_at < as.Date("2023-08-01"))

text_format <- function(df) {
  return(format(nrow(df), big.mark = ","))
}

num_tag_posts <- tag_posts %>% text_format()
num_tag_accounts <- tag_posts %>% distinct(host, acct) %>% text_format()
num_tag_servers <- tag_posts %>% distinct(host) %>% text_format()

num_accounts_unfilt <- accounts %>% text_format()
num_account_bots <- accounts %>% filter(bot) %>% text_format()
num_account_nostatuses <- accounts %>% filter(is.na(last_status_at)) %>% text_format()
num_account_suspended <- accounts %>% mutate(suspended = replace_na(suspended, FALSE)) %>% filter(suspended) %>% text_format()
num_accounts_moved <- accounts %>% filter(has_moved) %>% text_format()
num_account_limited <- accounts %>% filter(limited) %>% text_format()
num_account_samedaystatus <- accounts %>% filter(last_status_at <= created_at) %>% text_format()
num_account_filt <- load_accounts(filt = TRUE) %>% text_format()
```

**Mastodon Profiles**: We collected accounts using data previously collected from posts on public Mastodon timelines from October 2020 to August 2023. We then queried for up-to-date information on those accounts, including their most recent status and whether the account had moved, as of February 2024. Through this process, we discovered a total of `r num_accounts_unfilt` accounts created between August 14, 2020 and January 1, 2024. We then filtered out accounts which were bots (`r num_account_bots` accounts), had been suspended (`r num_account_suspended` accounts), had been marked as moved to another account (`r num_accounts_moved` accounts), had been limited by their local server (`r num_account_limited` accounts), had no statuses (`r num_account_nostatuses` accounts), or had posted their last status on the same day as their account creation (`r num_account_samedaystatus` accounts). This gave us a total of `r num_account_filt` accounts which met all the filtering criteria. Note that because we retrieved updated information on each account, we include only accounts on servers which still existed at the time of our profile queries and which returned records for the account.

**Tags**: Mastodon supports hashtags, which are user-generated metadata tags that can be added to posts. Clicking the link for a tag shows a stream of posts with that tag drawn from the federated timeline, which includes posts from accounts on the same server and from accounts followed by accounts on the local server. We collected `r num_tag_posts` statuses containing at least one hashtag, posted by `r num_tag_accounts` accounts on `r num_tag_servers` unique servers between May and July 2023.

# Analysis and Results

```{r}
# Calculate how "general" a server is based on the similarity matrix.
library(tidyverse)
library(igraph)
library(arrow)

sim_servers <- "data/scratch/server_similarity.feather" %>%
  arrow::read_ipc_file() %>%
  rename("weight" = "Similarity")
#sim_net <- as.network(sim_servers)
g <- graph_from_data_frame(sim_servers, directed = FALSE)

g_strength <- log(sort(strength(g)))
normalized_strength <- (g_strength - min(g_strength)) / (max(g_strength) - min(g_strength))

server_centrality <- enframe(normalized_strength, name = "server", value = "strength")
server_centrality %>% arrow::write_ipc_file("data/scratch/server_centrality.feather")
```

## Survival Model

*Are accounts on suggested general servers less likely to remain active than accounts on other servers?*

```{r, cache.extra = tools::md5sum("code/survival.R")}
#| cache: true
#| label: fig-survival
#| fig-env: figure
#| fig-cap: "Survival probabilities for accounts created during May 2023."
#| fig-width: 3.375
#| fig-height: 2.5
#| fig-pos: h!

library(here)
source(here("code/survival.R"))
plot_km
```

### Kaplan–Meier Estimator

```{r}
#| label: table-coxme
library(ehahelper)
library(broom)

cxme_table <- tidy(cxme) %>%
  mutate(conf.low = exp(conf.low), conf.high = exp(conf.high)) %>%
  mutate(term = case_when(
    term == "factor(group)1" ~ "Join Mastodon",
    term == "factor(group)2" ~ "General Servers",
    term == "small_serverTRUE" ~ "Small Server",
    TRUE ~ term
  )) %>%
  mutate(exp.coef = paste("(", round(conf.low, 2), ", ", round(conf.high, 2), ")", sep = "")) %>%
  select(term, estimate, exp.coef, p.value)
```

Using `r text_format(sel_a)` accounts created from May 1 to June 30, 2023, we create a Kaplan–Meier estimator for the probability that an account will remain active based on whether the account is on one of the largest general instances[^1] featured at the top of the Join Mastodon webpage, or otherwise whether it is on a server in the Join Mastodon list. Accounts are considered active if they made at least one post after the censoring period of `r active_period` days following account creation.

[^1]: `r paste(general_servers, collapse=", ")`
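
To make the setup concrete, the estimator can be expressed with the `survival` package roughly as follows; the outcome column names are placeholders for the variables prepared in `code/survival.R`, and the chunk is shown for illustration only.

```{r}
#| eval: false
# Illustrative sketch of the Kaplan-Meier estimator described above.
# Column names are assumptions; the actual model is fit in code/survival.R.
library(survival)

km <- survfit(
  Surv(time = days_active, event = became_inactive) ~ factor(group),
  data = sel_a
)
summary(km, times = c(30, 61, 91))
```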

::: {#tbl-cxme .column-body}
```{r}
if (knitr::is_latex_output()) {
  cxme_table %>% knitr::kable(format = "latex", booktabs = TRUE, digits = 3)
} else {
  cxme_table %>% knitr::kable(digits = 3)
}
```

Coefficients for the Cox Proportional Hazard Model with Mixed Effects. The model includes a random effect for the server.

:::

### Mixed Effects Cox Proportional Hazard Model

We also construct a mixed effects Cox proportional hazard model:

$$
h(t_{ij}) = h_0(t) \exp\left(\begin{aligned}
&\beta_1 \text{Join Mastodon} \\
&+ \beta_2 \text{General Servers} \\
&+ \beta_3 \text{Small Server} \\
&+ b_{j}
\end{aligned}\right)
$$

where $h(t_{ij})$ is the hazard for account $i$ on server $j$ at time $t$, $h_0(t)$ is the baseline hazard, $\beta_1$ is the coefficient for whether the account is on a server featured on Join Mastodon, $\beta_2$ is the coefficient for whether the account is on one of the largest general instances, $\beta_3$ is the coefficient for whether the account is on a small server with fewer than 100 accounts, and $b_{j}$ is the random effect for server $j$.
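
An illustrative specification of this model with the `coxme` package is sketched below; the outcome columns are assumptions, and the fitted object used in this document (`cxme`) is produced in `code/survival.R`.

```{r}
#| eval: false
# Sketch of the mixed-effects Cox model described above. The grouping and
# server variables mirror the reported terms; outcome columns are assumed.
library(coxme)

cxme_sketch <- coxme(
  Surv(days_active, became_inactive) ~
    factor(group) + small_server + (1 | server),
  data = sel_a
)
```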

<!-- with coefficients for whether the account is on a small server (less than a hundred accounts), and whether the account in featured on JoinMastodon or is featured as one of the largest general instances. -->

We again find that accounts on the largest general instances are less likely to remain active than accounts on other servers, while accounts created on smaller servers are more likely to remain active.

### Logistic Regression

First, we calculate a continuous measure for the generality of the server based on the item-item similarity between servers. We then use this measure to predict whether an account will remain active after 91 days using a logistic regression model.
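
The specification can be sketched as follows; the data frame and column names are illustrative assumptions rather than the project's actual code.

```{r}
#| eval: false
# Illustrative specification of the logistic regression described above.
logit_sketch <- glm(
  active_after_91_days ~ generality,
  family = binomial(),
  data = accounts_with_generality
)
```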

```{r}
modelsummary(logit)
```

The results of this analysis again suggest that the generality of the server is negatively associated with the likelihood that an account will remain active after 91 days.

## Moved Accounts

*Do accounts tend to move to larger or smaller servers?*

Mastodon users can move their accounts to another server while retaining their connections (but not their posts) to other Mastodon accounts. This feature, built into the Mastodon software, offers data portability and helps avoid lock-in.

```{r}
#| label: table-ergm-table
#| echo: false
#| warning: false
#| message: false
#| error: false

library(here)
library(modelsummary)
library(kableExtra)
library(tinytable)
library(purrr)
library(stringr)
load(file = here("data/scratch/ergm-model-early.rda"))
load(file = here("data/scratch/ergm-model-late.rda"))

if (knitr::is_latex_output()) {
  my_format <- "latex_tabular"
} else {
  my_format <- "html"
}

x <- modelsummary(
  list("Coef." = model.early, "Std.Error" = model.early, "Coef." = model.late, "Std.Error" = model.late),
  estimate = c("{estimate}", "{stars}{std.error}", "{estimate}", "{stars}{std.error}"),
  statistic = NULL,
  gof_omit = ".*",
  coef_rename = c(
    "sum" = "Sum",
    "nonzero" = "Nonzero",
    "diff.sum0.h-t.accounts" = "Smaller server",
    "nodeocov.sum.accounts" = "Server size\n(outgoing)",
    "nodeifactor.sum.registrations.TRUE" = "Open registrations\n(incoming)",
    "nodematch.sum.language" = "Languages match"
  ),
  align = "lrrrr",
  stars = c('*' = .05, '**' = 0.01, '***' = .001),
  output = my_format
) |> group_tt(j = list("Model A" = 2:3, "Model B" = 4:5))
```

:::: {#tbl-ergm-table `r class_wide`}

```{r}
x
```

Exponential family random graph models for account movement between Mastodon servers. Accounts in Model A were created in May 2022 and moved to another account at some later point. Accounts in Model B were created at some earlier point and moved after October 2023.

::::

To corroborate our findings, we also use data from thousands of accounts which moved between Mastodon servers, taking advantage of the data portability of the platform. Conceiving of these moved accounts as edges within a weighted directed network where nodes represent servers, edges represent accounts, and weights represent the number of accounts that moved between servers, we construct an exponential family random graph model (ERGM) with terms for server size, open registrations, and language match between servers. We find that accounts are more likely to move from larger servers to smaller servers.
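
A specification consistent with the terms reported in @tbl-ergm-table is sketched below. The network object and its edge attribute name (`move_net`, `weight`) are assumptions; the fitted models themselves are loaded from `data/scratch/` above.

```{r}
#| eval: false
# Approximate valued-ERGM specification matching the reported terms.
# `move_net` is assumed to be a directed network of servers whose edge
# weights count the accounts that moved between them.
library(ergm.count)

model_sketch <- ergm(
  move_net ~ sum + nonzero +
    diff("accounts", dir = "h-t") +  # moves toward smaller servers
    nodeocov("accounts") +           # size of the origin server
    nodeifactor("registrations") +   # destination has open registrations
    nodematch("language"),           # origin and destination share a language
  response = "weight",
  reference = ~Poisson
)
```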

# Proposed Recommendation System

*How can we build an opt-in, low-resource recommendation system for finding Fediverse servers?*

Based on these findings, we suggest a need for better ways for newcomers to find servers and propose a viable way to create server and tag recommendations on Mastodon. This system could both help newcomers find servers that match their interests and help established accounts discover "neighborhoods" of related servers.

## Constraints and Evaluation

The decentralized web presents unique challenges for recommendation systems. Centralized recommendation systems can collect data from all users and use this data to make recommendations. However, this is less desirable on the decentralized web, where data is spread across many servers and users may not want to share their data with a central authority. Instead, we propose a system where servers report the top hashtags by the number of unique accounts on the server using them during the last three months. Such a system would be opt-in and require few additional server resources since tags already have their own database table. Because each server only reports aggregated counts of publicly posted hashtags, this also reduces the risk of privacy violations.
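
As a sketch of what each participating server would compute, this report amounts to a single aggregation over recent public posts. The table and column names below (`statuses`, `tag`, `acct`) are illustrative rather than Mastodon's actual schema.

```{r}
#| eval: false
# Per-server report: top hashtags ranked by the number of unique local
# accounts that used them in the last three months.
library(dplyr)
library(lubridate)

tag_report <- statuses %>%
  filter(visibility == "public", created_at >= today() - months(3)) %>%
  distinct(tag, acct) %>%
  count(tag, name = "accounts", sort = TRUE) %>%
  slice_head(n = 256)
```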

In the Mastodon context, the cold start problem has two possible facets: there is no information on new servers and there is also no information on new users. New servers are thus particularly prone to popularity bias: there is simply more data on larger servers. A common strategy to deal with new users is to ask for some initial preferences to create a workable user profile. In the case of this system, we ask the user to provide a set of tags they are interested in. We then use these tags to find the top servers which match these tags.

We plan to evaluate the system in part using the accounts which moved between servers: based on their posting history (e.g., hashtags), can the recommendation system predict where they will move?

As our recommender system operates under the assumption that smaller, more topic-focused servers are better, it follows that a diverse set of niche results which match only a few tags is more helpful than a set of results which match a larger and broader set of tags. The system therefore presents results sorted in a manner which encourages a higher diversity of results.

One current limitation of our system is that it does not account for the relationship between tags, e.g. “union” and “unions” are essentially the same tag, and “furry” and “fursuit” are highly related tags which sit in similar areas of the embedding space. In future revisions, we hope to account for the relationship between similar tags and pull the top servers from clusters of highly related tags, with the top priority going to clusters based on their number of selected tags. This system could be implemented efficiently in $O(nt)$ time given a minimum cluster size of $t$.

<!--
The choice of evaluation criteria follows from the goal or user need to provide relevant, specific, and plausible good servers for a set of tags. We test the relevance of the system based on the posting patterns of users who chose to move from one server to another. Crucially, these users were previously familiar with Mastodon before setting up their next account and, as shown in the previous section, these users tend to move toward smaller, more niche servers. We evaluate the recommender system by measuring the rank k of their destination server. We use the formula...
-->

## Recommendation System Design

We use Okapi BM25 to construct a term frequency-inverse document frequency (TF-IDF) model to associate the top tags with each server, using counts of tag-account pairs from each server for the term frequency and the number of servers that use each tag for the inverse document frequency. We then L2 normalize the vectors for each tag and calculate the cosine similarity between the tag vectors for each server.

$$
tf = \frac{f_{t,s} \cdot (k_1 + 1)}{f_{t,s} + k_1 \cdot (1 - b + b \cdot \frac{|s|}{avgstl})}
$$

where $f_{t,s}$ is the number of accounts using the tag $t$ on server $s$, $k_1$ and $b$ are tuning parameters, $|s|$ is the total number of tag-account pairs on server $s$, and $avgstl$ is the average of $|s|$ across servers. For the inverse document frequency, we use the following formula:

$$
idf = \log \frac{N - n + 0.5}{n + 0.5}
$$

where $N$ is the total number of servers and $n$ is the number of servers where the tag appears as one of the top tags. We then apply L2 normalization:

$$
tf \cdot idf = \frac{tf \cdot idf}{\| tf \cdot idf \|_2}
$$

We then use the normalized TF-IDF matrix to produce recommendations using SVD, where the relationship between tags and servers can be represented as $A = U \Sigma V^{T}$. We then use the similarity matrix to find the top servers which match the user's selected tags. We can also suggest related tags to users based on the similarity between tags, $U \Sigma$.
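
The pipeline above can be sketched in a few lines of R. Here `tag_counts` is assumed to be a server-by-tag matrix of account counts (so servers play the role of documents and server embeddings come from $U \Sigma$); the parameter values and normalization axis are illustrative choices rather than the exact settings of our implementation.

```{r}
#| eval: false
# Sketch of the BM25 TF-IDF weighting, L2 normalization, SVD, and
# similarity steps described above. `tag_counts` is an assumed
# server x tag matrix of account counts.
bm25_tfidf <- function(tag_counts, k1 = 1.2, b = 0.75) {
  server_totals <- rowSums(tag_counts)        # |s|: tag-account pairs per server
  avgstl <- mean(server_totals)
  tf <- (tag_counts * (k1 + 1)) /
    (tag_counts + k1 * (1 - b + b * server_totals / avgstl))
  n <- colSums(tag_counts > 0)                # servers on which each tag appears
  idf <- log((nrow(tag_counts) - n + 0.5) / (n + 0.5))
  tfidf <- sweep(tf, 2, idf, `*`)
  tfidf / sqrt(rowSums(tfidf^2))              # L2 normalization (here by server)
}

A <- bm25_tfidf(tag_counts)
decomp <- svd(A)                              # A = U %*% diag(d) %*% t(V)
server_factors <- decomp$u %*% diag(decomp$d) # server embeddings
tag_factors <- decomp$v %*% diag(decomp$d)    # tag embeddings

# Cosine similarity between servers in the latent space
norms <- sqrt(rowSums(server_factors^2))
server_similarity <- (server_factors %*% t(server_factors)) / (norms %o% norms)
```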

## Applications

### Server Similarity Neighborhoods

Mastodon provides two feeds in addition to a user's home timeline populated by accounts they follow: a local timeline with all public posts from their local server and a federated timeline which includes all posts from users followed by other users on their server. We suggest a third kind of timeline, a *neighborhood timeline*, which filters the federated timeline by topic.

We calculate the pairwise similarity between two servers with TF-IDF vectors $A$ and $B$ using cosine similarity:

$$
\text{similarity}(A, B) = \frac{A \cdot B}{\|A\| \|B\|}
$$

For an example of how this might work in practice, consider a use case for someone who is a researcher in the field of human-computer interaction. They might be situated with an account on `hci.social`, but also interested in discovering posts from accounts on similar, related servers. We can use the similarity matrix to find the top five servers most similar to `hci.social`, as shown in @tbl-sim-servers.

::: {#tbl-sim-servers}

```{r}
#| label: table-sim-servers
library(tidyverse)
library(arrow)

sim_servers <- "data/scratch/server_similarity.feather" %>% arrow::read_ipc_file()
server_of_interest <- "hci.social"
server_table <- sim_servers %>%
  arrange(desc(Similarity)) %>%
  filter(Source == server_of_interest | Target == server_of_interest) %>%
  head(5) %>%
  pivot_longer(cols = c(Source, Target)) %>%
  filter(value != server_of_interest) %>%
  select(value, Similarity) %>%
  rename("Server" = "value")

if (knitr::is_latex_output()) {
  server_table %>% knitr::kable(format = "latex", booktabs = TRUE, digits = 3)
} else {
  server_table %>% knitr::kable(digits = 3)
}
```

Top five servers most similar to hci.social

:::

### Tag Similarity

We also calculate the similarity between tags using the same method. This can be used to suggest related tags to users based on their interests.

```{r}
#| eval: false
#| fig-cap: "100 popular hashtags visualized in two dimensions using a principal component analysis (PCA) on the transformed singular value decomposition (SVD) matrix."
library(tidyverse)
library(arrow)
library(ggrepel)
library(here)
library(jsonlite)

top_tags <- "data/scratch/tag_svd.feather" %>%
  arrow::read_ipc_file() %>%
  as_tibble() %>%
  mutate(s = variance * log(count)) %>%
  arrange(desc(s))

top_tags %>%
  select(tag, index) %>%
  jsonlite::write_json(here("recommender/data/top_tags.json"))

top_tags %>%
  head(100) %>%
  ggplot(aes(x = x, y = y, label = tag)) +
  geom_text_repel(size = 2.5, max.overlaps = 20) +
  #geom_point() +
  theme_minimal()
```

### Server Discovery

Given a set of popular tags and a list of servers, we build a recommendation system in which users select the tags that interest them and receive server suggestions. The system first creates a subset of vectors based on the TF-IDF matrix which represents the top clusters of topics. After a user selects the top tags of interest to them, it suggests servers which match their preferences.
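
A minimal sketch of this selection step, reusing the server-by-tag matrix `A` from the earlier sketch, scores each server by summing its weights for the selected tags; the tag names in the example call are placeholders.

```{r}
#| eval: false
# Score servers against a user's selected tags and return the top matches.
recommend_servers <- function(A, selected_tags, n = 5) {
  tags <- intersect(selected_tags, colnames(A))
  scores <- rowSums(A[, tags, drop = FALSE])
  head(names(sort(scores, decreasing = TRUE)), n)
}

recommend_servers(A, c("hci", "research", "opensource"))
```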

## Evaluation

### Server Recommendations for Users (Offline)

#### Time-based

For evaluation, we plan to use data from posts on accounts during a different time period from the one we used to train the recommender system. The goal of the system is to suggest the best servers for these accounts.

```{r}
library(tidyverse)
library(arrow)

# Create a histogram
recc_evals <- arrow::read_ipc_file("data/scratch/svd50_eval.feather")
recc_evals %>%
  ggplot(aes(x = svd)) +
  geom_histogram(binwidth = 5) +
  labs(title = "Distribution of SVD Ranks for Server Recommendations", x = "Rank", y = "Count")
```

The SVD system predicts the server with a median rank of `r median(recc_evals$svd)` and a mean rank of `r round(mean(recc_evals$svd))`.

#### Movement-based

In parallel with the analysis of server survival, we also take an interest in users who moved servers, since we can assume that these users found a server they liked better than their original server. We can use the recommender system to predict where these users moved and use these predictions to evaluate the system.

### Online Evaluation

_We have also given some thought to online evaluation. Could we use an alternative version of the front-end to produce recommendations for interesting servers from existing accounts?_

### Robustness to Limited Data

```{r}
#| label: fig-simulations-rbo
#| fig-env: figure*
#| cache: true
#| fig-width: 6.75
#| fig-height: 3
#| fig-pos: tb
library(tidyverse)
library(arrow)
simulations <- arrow::read_ipc_file("data/scratch/simulation_rbo.feather")

simulations %>%
  group_by(servers, tags, run) %>%
  summarize(rbo = mean(rbo), .groups = "drop") %>%
  mutate(ltags = as.integer(log2(tags))) %>%
  ggplot(aes(x = factor(ltags), y = rbo, fill = factor(ltags))) +
  geom_boxplot() +
  facet_wrap(~servers, nrow = 1) +
  #scale_y_continuous(limits = c(0, 1)) +
  labs(x = "Tags (log2)", y = "RBO", title = "Rank Biased Overlap with Baseline Rankings by Number of Servers") +
  theme_minimal() +
  theme(legend.position = "none")
```

A challenge for a federated recommendation system like the one we propose is that it needs buy-in from a sufficient number of servers to provide value. There is also a trade-off between the number of tags each server exposes and potential concerns about exposing too much data.

We simulated various scenarios that limit both the servers that report data and the number of tags they report. We then used rank-biased overlap (RBO) to compare the outputs from these simulations to the baseline with more complete information from all tags on all servers [@webberSimilarityMeasureIndefinite2010]. In particular, we gave a higher weight to suggestions with a higher rank, with weights decaying geometrically in rank with a persistence parameter of $p = 0.80$. @fig-simulations-rbo shows how the average agreement with the baseline scales; the baseline takes the top 256 tags from each server.
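
A minimal sketch of the truncated RBO measure used for this comparison is shown below; $p = 0.80$ matches the persistence parameter described above, and the example rankings are placeholders.

```{r}
#| eval: false
# Truncated rank-biased overlap between two rankings.
rbo <- function(a, b, p = 0.80, k = min(length(a), length(b))) {
  agreement <- vapply(seq_len(k), function(d) {
    length(intersect(a[seq_len(d)], b[seq_len(d)])) / d
  }, numeric(1))
  (1 - p) * sum(p^(seq_len(k) - 1) * agreement)
}

rbo(c("a.social", "b.online", "c.cafe"), c("b.online", "a.social", "d.town"))
```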

# Discussion

This work provides a first step toward building a recommendation system for finding servers on the Fediverse based on empirical evidence from trace data from thousands of Fediverse newcomers. We find that servers matter and that users tend to move from larger servers to smaller servers. Our recommender system considers constraints in a novel context where data is decentralized and privacy is a major concern. We propose a federated recommendation system which can be implemented with minimal resources and which can provide value to users by helping them find servers which match their interests.

The analysis could also be improved by additionally focusing on the factors that lead to accounts remaining active or dropping out, with a particular focus on the actual activity of accounts over time. For instance, do accounts that interact with other users more remain active longer? Are there particular markers of activity that are more predictive of account retention? Future work could use these to suggest ways to help newcomers during the onboarding process.

The observational nature of the data limits some of the causal claims we can make. It is unclear, for instance, if accounts on general servers are less likely to remain active because of the server itself or because of the type of users who join such servers. For example, it is conceivable that the kind of person who spends more time researching which server to join is more invested in their Mastodon experience than one who simply joins the first server they find.

## Future Work

Future work is necessary to determine how well the recommendation system helps users find servers that match their interests. This may involve user studies and interviews to determine how well the system works in practice.

While the work presented here is based on observed posts on the public timelines, simulations may be helpful in determining the robustness of the system to targeted attacks. Due to the decentralized nature of the system, it is feasible that a bad actor could set up zombie accounts on servers to manipulate the recommendation system. Simulations could help determine how well the system can resist such attacks and ways to mitigate this risk.

# Conclusion

Based on analysis of trace data from millions of new Fediverse accounts, we find evidence that suggests that servers matter and that users tend to move from larger servers to smaller servers. We then propose a recommendation system that can help new Fediverse users find servers with a high probability of being a good match based on their interests. Based on simulations, we demonstrate that such a tool can be effectively deployed in a federated manner, even with limited data on each local server.

# References {#references}

# Appendix {.appendix}

::: {.content-visible when-format="html"}

```{r}
library(tidyverse)
library(arrow)
library(ggrepel)

"data/scratch/server_svd.feather" %>%
  arrow::read_ipc_file() %>%
  as_tibble() %>%
  ggplot(aes(x = x, y = y, label = server)) +
  geom_text_repel(size = 2, max.overlaps = 10) +
  #geom_point() +
  theme_minimal()
```

:::

## Glossary {.appendix}

*ActivityPub*: A decentralized social networking protocol based on the ActivityStreams 2.0 data format.

*Fediverse*: A set of decentralized online social networks which interoperate using shared protocols like ActivityPub.

*Mastodon*: An open-source, decentralized social network and microblogging community.

*Hashtag*: A user-generated metadata tag that can be added to posts.

*Federated timeline*: A timeline which includes all posts from users followed by other users on their server.

*Local timeline*: A timeline with all public posts from the local server.

## More Evaluation

### User Stories

We also illustrate the potential value of such a system with three user stories:

**User Story 1**: Juan is a human-computer interaction researcher looking for a server to connect with colleagues and also share his projects. He is interested in finding a server with a focus on research and technology. Juan inputs the tags "research", "academia", and "technology" into the system and receives a list of servers which match his interests: `synapse.cafe`, `sciences.social`, `mathstodon.xyz`, `mastodon.social`, `mastodon.education`.

**User Story 2**: Arthur just wants to connect with friends and family. He clicks every single major category and receives the suggestions: `mas.to`, `library.love`, `mastodon.world`, `mstdn.social`.

**User Story 3**: Tracy has run a niche fandom blog on Tumblr for the last eight years and is curious about migrating to the Fediverse. She inputs the tags "doctorwho", "fanart", and "fanfiction" and receives the suggestions: `blorbo.social`, `mastodon.nz`, `sakurajima.moe`, `toot.kif.rocks`, `mastodon.scot`.