Guest post by Sabrina Tosi

Influencer Law Clinic series

12/3/2021 · 3 min read

Meta, the newly formed company previously known as Facebook Inc., has announced a policy change that has sparked a broad debate on targeted advertising on social media among its users, commercial entities and lawmakers. As of January 19, 2022, the company will limit certain forms of targeted advertising across its major platforms, including Facebook, Instagram, Messenger and third-party apps. But what exactly is Meta’s plan? The plan is to remove detailed ad-targeting options based on users’ interaction with content related to “sensitive” topics such as health, race or ethnicity, political affiliation, religion, or sexual orientation.[1] For example, advertisers would no longer be able to target users based on topics such as chemotherapy, World Diabetes Day or same-sex marriage.
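Purely as an illustrative sketch (Meta's actual systems are proprietary; the category names and data structures below are hypothetical), the measure amounts to removing any option tagged with a sensitive topic from the menu of detailed-targeting options advertisers can select:

```python
# Hypothetical sketch of the policy: filter sensitive categories out of the
# detailed-targeting catalogue. Names and structures are illustrative only.
SENSITIVE_TOPICS = {
    "health", "race or ethnicity", "political affiliation",
    "religion", "sexual orientation",
}

def available_options(options):
    """Keep only detailed-targeting options not tagged with a sensitive topic."""
    return [opt for opt in options if opt["topic"] not in SENSITIVE_TOPICS]

catalogue = [
    {"name": "World Diabetes Day", "topic": "health"},
    {"name": "same-sex marriage", "topic": "sexual orientation"},
    {"name": "hiking", "topic": "recreation"},
]

print([opt["name"] for opt in available_options(catalogue)])  # ['hiking']
```

On this reading, the ads themselves are not banned; only the advertiser's ability to single out audiences by these attributes is removed.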

Targeted Advertising in short

Broadly speaking, targeted advertising is a form of Internet advertising that delivers promotional messages to a customer according to their specific traits, interests and preferences.[2] The practice is widely used on social media, where platforms reach a precise audience by using profile data to deliver personalised advertisements. This is done by tracking the information that consumers provide on the platform and focusing on general targeting attributes such as location, interests, behaviour or socio-psychographic characteristics.[3]
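As a minimal illustration of the matching step described above (real ad-delivery systems are far more complex; all attribute names here are hypothetical), an ad becomes eligible for a user when every attribute the advertiser targets is present in that user's profile:

```python
# Illustrative sketch only: attribute names and profile shape are hypothetical.
def matches_target(user_profile: dict, targeting: dict) -> bool:
    """An ad is eligible if every targeted attribute (location, interests,
    behaviour, ...) matches the user's profile."""
    for attribute, wanted in targeting.items():
        value = user_profile.get(attribute)
        if isinstance(value, (list, set)):
            if wanted not in value:       # multi-valued attribute: membership
                return False
        elif value != wanted:             # single-valued attribute: equality
            return False
    return True

user = {
    "location": "NL",
    "interests": {"cycling", "cooking"},
    "behaviour": {"frequent traveller"},
}

ad_targeting = {"location": "NL", "interests": "cycling"}
print(matches_target(user, ad_targeting))  # True: both attributes match
```

The more attributes a platform can track, the more precisely such a filter can carve out an audience, which is what makes the practice both effective and controversial.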

Have you ever had the feeling that Instagram or Facebook knows what you like or what you are interested in better than you do yourself? If so, that is proof of how effective the practice is at reaching its objectives.

The New Measure under the Magnifying Lens

Online targeted advertising has been criticised for a number of reasons. Firstly, privacy concerns have been raised, since the practice requires a large amount of personal data to be collected, traded between companies and analysed, most of the time without the individual being fully aware of it. The Cambridge Analytica scandal is a practical example. Secondly, the practice has been accused of enabling advertisers to discriminate against or target vulnerable groups. Linked to this is the accusation that it increases the circulation of harmful disinformation.[4] This refers to the phenomenon of online consumers becoming isolated because the information they see is limited to what is targeted at them. This consequence is harmful because it discourages public debate and the development of individuals’ opinions on the matter.

In the post announcing the new measure, Meta argued that it would prevent abuse of its features, thus limiting “negative experiences for people in underrepresented groups”.[5] Indeed, the company was already accused in 2019 of failing to limit disinformation on the platform and to stop advertisers from targeting users with discriminatory ads.[6] However, while it is understandable that the company is adopting these measures to avoid repeating past mistakes, is this really the best way to address those lacunae? In other words, is limiting detailed targeting on a number of sensitive topics sufficient, or capable of making a substantial difference?

At first glance, the measure might seem arbitrary both in the topics it covers and in its timing. Why is the measure being taken only now, and why will it apply only as of next year? And do the topics categorised as sensitive really have more negative effects on underrepresented groups than other topics do? For example, there is nothing wrong with a health-conscious person, keen to learn more about diabetes and about ways to get checked, receiving an advertisement about World Diabetes Day. Such an ad would not imply any greater negative effect on her, or because of her belonging to an underrepresented group (such as people sensitive to diseases or genetically prone to diabetes), than it would on anyone else. Rather, if she were entirely denied the chance of seeing such information sponsored on her social media, she might remain unaware of the event and miss an opportunity to get her blood checked.

The choice is a difficult one: different opinions on the matter have been presented and competing interests have had to be balanced. Ultimately, however, it is the company that has the power to self-regulate the content shared on its platforms, and its discretion over what counts as a sensitive topic is almost exclusive. For now, all that is left for us to do is wait for the measure to take effect and hope for a positive outcome, paving the way for a beneficial and harmless targeted-advertising experience for users on social media platforms.






[6] For further readings,