Social media: Disinformation expert offers 3 safety tips in a time of fake news and dodgy influencers

Social networks have revolutionised the way we communicate, stay informed and share moments of our daily lives. We use platforms like Facebook, Twitter, Instagram and TikTok to keep in touch with friends and family, share our experiences, follow the news, and express our opinions.

But beyond these personal and often superficial uses, social networks play a much more complex and sometimes troubling role in society. This raises several questions: what impact do social networks have on societal security? How can these tools influence or even destabilise society? And how can individual users mitigate the risks?

Societal security risks refer to threats that can undermine the social fabric and stability of a community or nation. These risks often arise from issues such as political instability, economic inequality, social unrest, or large-scale migration. For instance, widespread unemployment can lead to social unrest, jeopardising societal stability. A more specific example is the spread of misinformation and disinformation. Misinformation is the unintentional spread of falsehoods, while disinformation is the calculated dissemination of lies intended to deceive.

False information circulating through social media and other channels can polarise societies, erode trust in institutions, and incite violence or discrimination.

I study interactions within organisations, with a focus on the impact of new technologies and human dynamics. In a recent article I attempted to answer these questions about the risks that social networks introduce. To do so, I analysed various aspects of the interactions between social networks and public security. In short, I found that the societal security risk posed by social networks is complex, multifaceted and dynamic. It requires ongoing research, careful regulation and, above all, that all users learn to understand and navigate digital environments critically.

Here, I offer three tips to help individual users minimise the risks of social networks while not losing the benefits:

build your digital literacy

avoid algorithmic traps

be quick to report and block suspicious information or problematic content.

A range of risks

Videos and testimonials shared on social media platforms can help spread the word about events far beyond a single geographical area. Take, for example, the police killing of George Floyd, an African-American man, in 2020. Although the events took place far away, they had a considerable impact in France, where I was living until a few months ago, generating demonstrations of support.

Floyd’s death also reignited the debate on police violence and racism in France. These events were taken up by associations defending Black people’s rights in France, rapidly creating a phenomenon of transnational solidarity.

The flip side is that sometimes, videos and testimonies can also contribute to the circulation of unverified or even false information, amplifying confusion and anger. Research has shown that fake news spreads six times faster than real information on platforms such as X, formerly called Twitter.

Social networks have also become formidable tools of influence. For example, they allow political leaders and parties to interact directly with their voters, bypass traditional media and control their message by targeting an often young audience.

However, this power to influence can be used maliciously to manipulate information. There is no shortage of examples of disinformation campaigns on platforms such as Twitter or Facebook, whether unfounded rumours, fake accounts or political trolls.

This phenomenon is part of a wider trend of increasing disinformation in Africa: the Africa Center for Strategic Studies reported in March 2024 that "disinformation campaigns seeking to manipulate African information systems have surged nearly fourfold since 2022".

Given that young people are heavy users of these platforms, they become prime targets for misinformation and manipulation.

This is especially worrying since states have increasingly begun to use social networks as a battleground for “information wars”. These battles are fought with true or false information rather than with traditional weapons. They aim to influence public opinion, destabilise political opponents and promote national interests. Electoral interference via social networks has become commonplace, with accusations of orchestrated disinformation campaigns to influence election results.

Read more: AI propaganda campaign in Rwanda has been pushing pro-Kagame messages – a dangerous new trend in Africa

The potentially dangerous influence of social networks does not stop at politics or misinformation. Online platforms have become fertile ground for spreading extremist rhetoric. This is because they are so easy to access and offer the opportunity to contact individuals directly.

Research shows that extremist organisations have used these platforms to spread their ideologies, often targeting vulnerable young people and exploiting their sense of exclusion or their search for identity. (Social networks are not the only factor in radicalisation – it is a complex process – but their role should not be ignored.)

Of course, governments and technology companies can make a major contribution to solving these problems. They can work together to develop effective strategies to detect and counter misinformation and disinformation, ensuring that social media platforms remain reliable sources of information and do not become tools for manipulation and deception.

But there is also plenty that individual users can do to make online spaces safer for themselves.

Three tips

1. Develop your digital literacy: My research has shown that learning how to manage information is a prerequisite for combating disinformation. Users can learn how to critically evaluate and verify information, and how to identify reliable sources. There are initiatives to support this learning, such as WhatsApp’s collaboration with the NASSCOM Foundation in India, which aims to train users to spot fake news.

Fact-checking tools and platforms like Libération’s CheckNews or Africa Check can be used to verify the accuracy of information circulating online.

2. Avoid algorithmic traps: Be aware of algorithmic biases. I and others have shown that algorithms are never neutral, because of inherent biases in their construction and the opaque nature of these systems. These biases can trap users in filter bubbles and amplify misinformation, fuelling disinformation. It is essential to diversify your sources of information and follow accounts that offer varied perspectives.

Read more: Algorithms are moulding and shaping our politics. Here’s how to avoid being gamed

3. Don’t hesitate to report and block: If you encounter suspicious information or problematic content, use platforms’ reporting features to alert moderators. It is also advisable to block persistent sources of disinformation to guard yourself against further exposure.

Fabrice Lollia does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.