Recommender systems, a form of artificial intelligence (AI) technology, are the secret architects of our digital experiences, distilling the vast expanse of online content into individualised packages.

Whether it's the next video on YouTube, a product on Amazon, or a news article on a media platform, these systems guide our online decisions in ways both visible and covert.

As we navigate the Digital Services Act (DSA), a complex piece of legislation regulating digital services in the EU, our attention is increasingly drawn to its provisions on recommender systems. These ubiquitous algorithms, which curate individualised content for users, have long operated in something of a legal vacuum. The DSA aims to fill this gap by imposing new governance frameworks.

The DSA defines a 'recommender system' as a:

"fully or partially automated system used by an online platform to suggest in its online interface specific information to recipients of the service or prioritise that information, including as a result of a search initiated by the recipient of the service or otherwise determining the relative order or prominence of information displayed."

In essence, recommender systems are fully or partially automated algorithms used by online platforms to prioritise or suggest content. Recommendations may be based on a variety of factors, including user behaviour, demographics, and broader trends captured through machine learning. These systems are particularly important given their role in shaping user experience and driving engagement across platforms.
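
To make this concrete, here is a minimal, purely illustrative sketch in Python of how such a system might combine signals into a ranking. The signals, weights, and function names are hypothetical, not drawn from the DSA or any actual platform:

    from dataclasses import dataclass

    @dataclass
    class Item:
        title: str
        popularity: float   # aggregate engagement across all users, 0..1
        affinity: float     # match to this user's inferred interests, 0..1
        recency: float      # freshness of the content, 0..1

    # Hypothetical weights; a real system would typically learn these from data.
    WEIGHTS = {"popularity": 0.3, "affinity": 0.5, "recency": 0.2}

    def score(item: Item) -> float:
        """Combine the signals into a single ranking score."""
        return (WEIGHTS["popularity"] * item.popularity
                + WEIGHTS["affinity"] * item.affinity
                + WEIGHTS["recency"] * item.recency)

    def recommend(items: list[Item], k: int = 10) -> list[Item]:
        """Return the k highest-scoring items, i.e. the 'relative order or prominence'."""
        return sorted(items, key=score, reverse=True)[:k]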

Plain Language

The DSA requires providers of online platforms that employ recommender systems to articulate, in "plain and intelligible language," the main parameters driving these systems. This language must be embedded within the platform's terms and conditions. The measure aims to cut through the technical jargon that often surrounds algorithms, making these systems comprehensible to the average user. The practical upshot is two-fold: legal compliance requires translating complex algorithmic processes into layman's terms, and platforms must offer clear communication channels through which users can understand how their content is being curated.

Unveiling the 'Why' Behind Content Suggestions

The DSA puts transparency front and centre by requiring that the "main parameters" of recommender systems be not only identified but also justified. This involves explaining (a) the criteria that most significantly affect the information presented to users, and (b) the reasons for the relative importance of those criteria. This is a kind of algorithmic accountability that goes beyond mere transparency: providers will not just have to reveal which variables are at play; they will have to rationalise why those variables matter. It offers a powerful tool for those who want to scrutinise how these systems may perpetuate systemic biases or misinformation, for example.
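
As an illustration only, a platform might maintain its main parameters as structured data that feeds both the terms and conditions and internal documentation. The DSA prescribes the substance of the disclosure, not any particular format, and the criteria, importance levels, and reasons below are invented:

    # Hypothetical disclosure structure; all entries are illustrative.
    MAIN_PARAMETERS = [
        {
            "criterion": "similarity to watch history",
            "relative_importance": "high",
            "reason": "Past viewing is treated as the strongest indicator of relevance.",
        },
        {
            "criterion": "content recency",
            "relative_importance": "medium",
            "reason": "Newer items are prioritised to keep the feed current.",
        },
        {
            "criterion": "overall popularity",
            "relative_importance": "low",
            "reason": "Broad engagement is used as a mild quality signal.",
        },
    ]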

User-Centric Functionality

Tired of always getting recommendations to watch old Dawson's Creek clips? The DSA enhances user autonomy: where several options are available, providers must offer a functionality that allows users to "select and modify at any time their preferred option." Moreover, this functionality must be "directly and easily accessible" from the specific section of the platform where information is being prioritised. This not only amplifies users' control over their own digital experience but also adds an intriguing layer of engagement, allowing the user to interact with the algorithm itself. It could open a new avenue of competition among platforms: not just who has the best algorithm, but who offers the best user control over that algorithm.
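
A sketch of what that functionality might look like under the hood, assuming a hypothetical feed with three selectable ranking options (the option names and signals are invented):

    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        recency: float     # 0..1, newer is higher
        popularity: float  # 0..1, aggregate engagement
        affinity: float    # 0..1, match to the user's inferred interests

    # Hypothetical options the platform exposes directly within the feed.
    RANKING_OPTIONS = {
        "personalised": lambda p: p.affinity,
        "most_recent": lambda p: p.recency,
        "most_popular": lambda p: p.popularity,
    }

    def rank_feed(posts: list[Post], preference: str = "personalised") -> list[Post]:
        """Order the feed by whichever option the user currently has selected.
        The selection is meant to be changeable at any time, in place."""
        return sorted(posts, key=RANKING_OPTIONS[preference], reverse=True)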

The Push for Non-Profiling Options

The DSA also introduces an obligation for providers of "very large online platforms" (VLOPs) and "very large online search engines" (VLOSEs) to offer at least one option for each of their recommender systems that is not based on profiling. "Profiling", as defined in the GDPR, means any automated processing of personal data used to evaluate certain personal aspects of a natural person, such as their behaviour, preferences, or location.

This provision elevates the role of user agency, presenting users with an alternative to the often opaque, data-driven models that define their online experience. By requiring platforms to offer an option not based on profiling, the DSA acts as a counterweight to growing concerns about data protection and the ethical considerations around predictive algorithms. Moreover, this provision could catalyse technological innovation, as companies may be inclined to develop new methods of content recommendation that do not rely on personal profiling.
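
A minimal sketch of what a profiling-free option can look like, assuming the only inputs are item-level signals (publication time and aggregate view counts) and no personal data about the individual user:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Video:
        title: str
        published: datetime
        total_views: int  # aggregate signal, not tied to any individual

    def chronological_rank(videos: list[Video]) -> list[Video]:
        """Reverse-chronological order: the classic profiling-free feed."""
        return sorted(videos, key=lambda v: v.published, reverse=True)

    def popularity_rank(videos: list[Video]) -> list[Video]:
        """An alternative profiling-free option based on aggregate engagement."""
        return sorted(videos, key=lambda v: v.total_views, reverse=True)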

Risk Assessments and Algorithmic Design

Providers must carry out comprehensive risk assessments that focus on systemic risks associated with their platforms. The evaluation must consider how the design of their recommender systems and any other relevant algorithmic systems influence these risks. This point is fascinating in that it integrates the technical aspects of recommender systems directly into the risk management equation. Providers are tasked with examining whether risks such as the spread of misinformation or other harmful content are amplified by the algorithmic systems they use, including recommender systems.
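
Purely as an illustration of the kind of question such an assessment has to answer (the metric below is invented, not prescribed by the DSA): does content flagged as harmful receive more exposure than its share of the catalogue would predict?

    def amplification_factor(flagged_impressions: int,
                             flagged_share_of_catalogue: float,
                             total_impressions: int) -> float:
        """Hypothetical metric: the ratio of the exposure flagged content
        actually receives to the exposure its catalogue share would predict.
        Values above 1.0 suggest the recommender is amplifying it."""
        expected = flagged_share_of_catalogue * total_impressions
        return flagged_impressions / expected if expected else 0.0

    # Example: flagged content is 2% of the catalogue but draws 6% of
    # impressions: amplification_factor(60_000, 0.02, 1_000_000) == 3.0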

The DSA obliges platforms to analyse potential vulnerabilities arising from "intentional manipulation" of the service, including inauthentic use and automated exploitation: bots or fraudulent activity that could distort information flows or user perceptions. This lays the groundwork for a holistic review process, one that considers not just algorithmic flaws but also the ways human actors might exploit these systems for nefarious ends.

Going forward, providers of VLOPs are required to report annually on their efforts to mitigate systemic risks, including those arising from the operation of their recommender systems. This reporting must include information on any algorithmic changes and their impact on systemic risks.

Shadowbanning

Shadowbanning refers to the practice of restricting a user's visibility on a platform without their knowledge, a technique that can be facilitated through manipulations in recommender systems. The DSA recitals explicitly express concern over such covert activities that can limit the discoverability of content or users.

Recital 55 identifies 'restriction of visibility' as a category inclusive of several practices that shape user interaction on online platforms. These include demotion in ranking systems and alterations in recommender algorithms. The recitals also explore the economic dimensions of these practices, such as suspending or terminating the advertising revenue linked to specific content.
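
In ranking terms, a restriction of visibility can be as simple as a demotion multiplier applied to an item's score. The sketch below is purely illustrative; the statuses and multiplier values are invented:

    # Hypothetical visibility statuses and demotion multipliers.
    DEMOTION = {"none": 1.0, "reduced": 0.25, "hidden": 0.0}

    def visible_score(base_score: float, restriction: str = "none") -> float:
        """Scale an item's ranking score by its visibility status. Under the
        DSA, such restrictions are to be disclosed to the affected user
        rather than applied covertly."""
        return base_score * DEMOTION[restriction]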

Interestingly, Recital 55 incorporates an exception for 'deceptive high-volume commercial content disseminated through intentional manipulation of the service.' In simpler terms, the provision acknowledges the need for stricter governance when it comes to fake accounts, bot activity, or other deceptive practices that could disrupt or exploit the platform's functions.

Adding another layer of complexity, the recital stresses the user's right to legal remedy in accordance with national law. This suggests that there should be tangible, real-world avenues for users to seek redress if they believe their rights have been violated by these practices.

Conclusion

In bringing the obscure mechanics of recommender systems into the light of regulatory scrutiny, the DSA sets a precedent for a new era of transparent and accountable technology.

The DSA's recommender systems provisions are an attempt to achieve a balanced governance of one of the most pervasive yet poorly understood aspects of our digital lives.

As legal frameworks like the DSA evolve, so must the dialogues between technologists, legal experts, and policymakers. The DSA is much more than a regulatory regime; it represents a paradigm shift in how we approach the governance of AI and other advanced technologies.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.