Authors
Olivia AOUATI, Pietro FREGUGLIA, Raffael HEISS, Sophie PATRAS, Panagiota PAVLOU, Florent PELSY & Margaux TRUC

Pages: 72
Published in: Belgium
How to reduce the impact of disinformation on Europeans' health

Abstract

This paper provides a broad overview of the emerging challenges connected with disinformation in the area of health, how it spreads and the damage it causes. The report highlights proposed or implemented measures at national, European and international level to address health-related disinformation, accompanied by recommendations to mitigate its impact on the well-being of Europeans.

This document was provided by the Policy Department for Economic, Scientific and Quality of Life Policies at the request of the Subcommittee on Public Health (SANT).

This document was requested by the European Parliament's Subcommittee on Public Health (SANT).
AUTHORS
Olivia AOUATI, Milieu Consulting
Pietro FREGUGLIA, Milieu Consulting
Raffael HEISS, MCI | The Entrepreneurial School
Sophie PATRAS, Milieu Consulting
Panagiota PAVLOU, Milieu Consulting
Florent PELSY, Milieu Consulting
Margaux TRUC, Milieu Consulting

ADMINISTRATOR RESPONSIBLE
Christian KURRER

EDITORIAL ASSISTANT
Irene VERNACOTOLA

LINGUISTIC VERSIONS
Original: EN

ABOUT THE EDITOR
Policy departments provide in-house and external expertise to support European Parliament committees and other parliamentary bodies in shaping legislation and exercising democratic scrutiny over EU internal policies.

To contact the Policy Department or to subscribe for email alert updates, please write to:
Policy Department for Economic, Scientific and Quality of Life Policies
European Parliament
L-2929 - Luxembourg
Email: Poldep-Economy-Science@ep.europa.eu

Manuscript completed: July 2024
Date of publication: Month 2024
© European Union, 2024

This document is available on the internet at:
http://www.europarl.europa.eu/supporting-analyses

DISCLAIMER AND COPYRIGHT
The opinions expressed in this document are the sole responsibility of the authors and do not necessarily represent the official position of the European Parliament.

Reproduction and translation for non-commercial purposes are authorised, provided the source is acknowledged and the European Parliament is given prior notice and sent a copy.

For citation purposes, the publication should be referenced as: Aouati, O., Freguglia, P., Heiss, R., Patras, S., Pavlou, P., Pelsy, F., and Truc, M., 2024, How to reduce the impact of disinformation on Europeans' health, publication for the Subcommittee on Public Health, Policy Department for Economic, Scientific and Quality of Life Policies, European Parliament, Luxembourg.

© Cover image used under licence from Adobe Stock
CONTENTS

LIST OF ABBREVIATIONS 4
LIST OF BOXES 6
LIST OF FIGURES 6
EXECUTIVE SUMMARY 7
1. UNDERSTANDING HEALTH DISINFORMATION 10
1.1. Definition and distinguishing factors 10
1.2. The 'infodemic' phenomenon 11
2. DOMAINS OF HEALTH DISINFORMATION 13
2.1. Historical roots and evolution 13
2.2. Main domains of health disinformation 17
3. CAUSES AND TARGETS OF HEALTH DISINFORMATION 20
3.1. Causes of health disinformation 20
3.2. Targets of health disinformation 23
4. CHANNELS OF HEALTH DISINFORMATION 26
4.1. Online and offline channels 26
4.1.1. Forms and trends of online and offline disinformation 26
4.1.2. The dual role of AI in the spread of disinformation content 27
4.1.3. Industry disinformation and market strategies 29
4.1.4. Hybrid threats 31
4.2. Role of social media 32
4.2.1. Social media as a channel of disinformation 32
4.2.2. Social media as a channel to tackle disinformation 35
5. IMPACT OF HEALTH DISINFORMATION 37
5.1. Information integrity and trust 37
5.2. Social impacts of health disinformation 40
6. EU INITIATIVES AND EMERGING CHALLENGES 43
6.1. EU efforts to counter health disinformation 43
6.2. Challenges in fighting health disinformation 44
7. FUTURE STEPS AND RECOMMENDATIONS 47
REFERENCES 54
ANNEX I – STAKEHOLDERS INTERVIEWED 67
LIST OF ABBREVIATIONS

AI        Artificial intelligence
APA       American Psychological Association
APSA      American Political Science Association
AVMSD     Audiovisual Media Services Directive
CDC       US Centers for Disease Control and Prevention
CERLIS    Centre de recherche sur les liens sociaux, Paris
COVID-19  Coronavirus disease 2019
DSA       Digital Services Act
DTC       Direct-to-consumer
ECtHR     European Court of Human Rights
ECHR      European Convention on Human Rights
EDMO      European Digital Media Observatory
EFCSN     European Fact-Checking Standards Network
EPRS      European Parliamentary Research Service
ERGA      European Regulators Group for Audiovisual Media Services
ESMH      European Science-Media Hub
ETUI      European Trade Union Institute for Research
EU        European Union
Eurostat  European Statistical Office
FRA       European Union Agency for Fundamental Rights
GDPR      General Data Protection Regulation
Gen Y     Generation Y or Millennials, i.e. people born between 1981 and 1996
Gen Z     Generation Z, i.e. people born between 1997 and the early 2010s
ITU       International Telecommunication Union
NCDs      Non-communicable diseases
NGO       Non-governmental organisation
OECD      Organisation for Economic Co-operation and Development
RAS       Rapid Alert System
STOA      European Parliament Panel for the Future of Science and Technology
UK        United Kingdom
UN        United Nations
UNESCO    United Nations Educational, Scientific and Cultural Organisation
US        United States of America
WHO       World Health Organization
XAI       Explainable artificial intelligence
LIST OF BOXES

Box 1: Examples of disinformation identification measures 11
Box 2: Examples of educational initiatives to combat health disinformation 25
Box 3: Examples of governance measures to combat health disinformation (I) 39
Box 4: Examples of governance measures to combat health disinformation (II) 41

LIST OF FIGURES

Figure 1: Share of people seeking health information on the internet, by age group, 2015-2023 14
Figure 2: Share of people seeking health information on the internet, by place of living, 2015-2023 14
Figure 3: Individuals with low or no digital skills, 2021 and 2023 15
Figure 4: Check of truthfulness of information or content, 2021 and 2023 16
Figure 5: Limits and risks altering AI capacity to moderate disinformation content 29
EXECUTIVE SUMMARY

Background

In the rapidly evolving landscape of information dissemination, the pervasiveness of health disinformation poses a significant threat to public well-being. This report first introduces the concept of health disinformation, unravelling its intricacies and providing a foundational knowledge base. It then explores the domains of health disinformation, shedding light on the specific areas where misinformation tends to proliferate, and identifies the most vulnerable targets. The report investigates the causes of health disinformation and the underlying factors that contribute to its dissemination. Crucially, it examines the channels through which health disinformation spreads in order to comprehend its reach and impact. The subsequent sections scrutinise the profound impacts of health disinformation on the well-being of Europeans. The report recognises the role of the European Union (EU) in mitigating this crisis and, accordingly, investigates existing frameworks and initiatives. Challenges in combating health disinformation are identified, paving the way for forward-looking insights, future steps and recommendations to strengthen Europeans' resilience against the insidious influence of health disinformation.

This study is based on desk research and interviews with relevant stakeholders at EU level, including research institutes and individual academic experts, EU associations, public (health) institutions and non-governmental organisations (NGOs).
Key findings

The report deals with the complexities of defining and distinguishing terms such as 'misinformation', 'fake news' and 'disinformation'. It identifies disinformation as the deliberate and strategic dissemination of false or incomplete information, accompanied by the element of intent. The report underscores the crucial reliance on trusted sources. It delves into the 'infodemic' phenomenon during health crises, where an overflow of information, both accurate and misleading, impacts public health and societal perceptions. At the same time, it debates the suitability of the term 'infodemic', emphasising the need for research to better understand the impact of misinformation in this area.

The report unveils the diverse challenges of health disinformation. In the lifestyle domain, profit-driven entities distort public knowledge by promoting unhealthy products. Vaccination-related disinformation, fuelled by anti-vaccine movements, obstructs vaccination efforts, posing a threat to public health. Alternative medicine influences public opinion on treatments, with influencers advocating untested supplements. Global variations in regulations pose challenges for prescription drugs, emphasising the necessity for international collaboration. Diagnostic tools, including direct-to-consumer (DTC) services, raise concerns about misleading self-diagnosis. In disaster scenarios, health-related disinformation manipulates individuals and complicates response efforts. The report stresses the significance of collaborative international efforts to tackle these challenges.

The complexity of health disinformation is rooted in various societal elements. It is shaped by a variety of profit-driven business models, tools of dissemination, and communal identity within disinformation communities, all of which foster an environment conducive to the perpetuation of health disinformation. Intercultural conflicts and societal divisions amplify the spread, fostering confirmation bias and forming echo chambers. Commercial interests in the health industry incentivise the dissemination of misleading information, exploiting consumer vulnerability.

The media landscape, driven by attention-seeking and economic motives, accelerates the spread of unverified health information. Political and ideological motivations complicate matters, eroding trust in health institutions, with internal motivations, such as personal beliefs and a desire for chaos, also contributing. The coronavirus (COVID-19) pandemic exacerbated the issue, capitalising on oversimplified narratives and the search for scapegoats, thriving on the lack of information and uncertainty.
Disinformation occurs both offline (e.g. leaflets) and online (e.g. posts or videos). However, it spreads much more quickly online than offline. More specific channels of disinformation, such as artificial intelligence (AI), industrial marketing strategies, and hybrid threats, contribute to this phenomenon.

Health disinformation poses a significant threat to people's health, targeting various segments of society through channels such as social media. The general public is susceptible to misleading content, especially on emotional issues. Patients are vulnerable, facing potential harm and compromised outcomes. Specific demographics are particularly at risk, notably older people and those with pre-existing conditions. Cultural, religious, and socioeconomic factors also influence perspectives and contribute to the spread of inaccurate information within communities. Research indicates that higher socioeconomic status reduces susceptibility, but vulnerabilities exist across all age groups. Younger generations are not immune, facing additional concerns beyond the COVID-19 pandemic, such as unverified diets and skincare products.

Health disinformation significantly erodes public trust in information systems, triggering a cycle of uncertainty and scepticism, as described by the concept of the 'liar's dividend'. The general attitude of scepticism prevalent during crises such as the COVID-19 pandemic amplifies vulnerability to disinformation. The impact extends to heightened hate speech and xenophobia, polarised public debates, and strained democratic institutions. Successful disinformation campaigns exploit health vulnerabilities, fostering fear and anxiety, undermining economic stability, and impeding effective public health measures. Consequences include adverse effects on mental health, obstacles to the development of health literacy, inappropriate use of medical products, and limitations on participation in public debate. This report underscores the need for nuanced governance measures to counter health disinformation, emphasising the broader societal and individual repercussions.

The EU has undertaken substantial efforts to combat health disinformation, establishing frameworks, codes of practice, and initiatives such as the Rapid Alert System (RAS) and the European Digital Media Observatory (EDMO). The Code of Practice on Disinformation was strengthened in 2022 and involves commitments from online platforms and advertisers to counter disinformation. The European Democracy Action Plan and the Digital Services Act (DSA) reinforce the EU's commitment to addressing disinformation comprehensively. Despite these endeavours, challenges persist. Fact-checkers face an overwhelming volume of information during crises like COVID-19, compounded by the lack of a multidisciplinary approach. Varying approaches to disinformation between the EU institutions and Member States, the lack of a common definition, and enforcement limitations of the DSA all contribute to the complexity of the issue. Regulatory measures struggle to address alternative platforms and the rise of AI-generated content, which complicates detection and undermines trust. Achieving a balanced approach that upholds freedom of expression while countering disinformation remains a key challenge for regulators.

Combating health disinformation and mitigating its impact on public health requires a comprehensive, multi-stakeholder approach, with international collaboration and interventions at policy, organisational, and social levels.

It necessitates sustained attention to disinformation, promoting media and digital literacy, fostering collaboration, engaging healthcare professionals, supporting research and fact-checking, encouraging responsible journalism, ensuring transparent communication, and utilising behavioural science insights. Tailored approaches should be developed for smaller communities, empowering community leaders and fostering critical thinking skills. An effective and resilient response will be built on continuous evaluation and adaptation of strategies, core and sustainable funding, and preparedness for future disinformation crises.
1. Understanding health disinformation

In today's fast-paced information environment, intentionally or unintentionally generated misleading information can circulate swiftly, exerting significant impacts on people's lives, particularly in the realm of health. The phenomenon of disinformation shows recurring patterns, often closely following the main sources of information, including major news stories and political agendas. However, defining the concept of misleading information poses a considerable challenge. Various terms are associated with the dissemination of misleading information, including 'fake news', 'false news', 'disinformation', 'misinformation' and 'propaganda'. These terms frequently overlap, making their distinctions challenging to discern.

1.1. Definition and distinguishing factors

Disinformation involves the intentional and strategic dissemination of false, inaccurate or incomplete information, with the intent to mislead. The key elements that qualify 'disinformation' are the deliberate spread of false information and the intent to manipulate or deceive. However, the practical application of that definition is challenging, due to the inherent difficulty in determining intent. Indeed, the term 'disinformation' itself faces a significant challenge in differentiating between genuine concerns and deliberate attempts to erode trust, as evidenced by instances of anti-vaccine propaganda. To distinguish health disinformation from reliable information, it is crucial to rely on trusted sources such as national and international authorities and reputable health organisations.

In seeking to differentiate disinformation from other similar concepts, the analysis explores the elements of deception. Deception is defined as the intentional act of misleading, with identifiable actors' prior intentions resulting in corresponding attitudinal or behavioural outcomes. Although distinct from lying and lack of knowledge, it may align with both, while encompassing various methods, such as withholding information, using strategic ambiguity, diversion, and generating conditional or counterfactual versions of events. Observing these tactics makes it possible to identify how they contribute to successful deception. Examples span harmful advertising to political campaigns.

Box 1: Examples of disinformation identification measures
1.2. The 'infodemic' phenomenon

During health crises, such as the COVID-19 pandemic, and during disasters, there is a notable increase in data from a variety of origins. The sometimes excessive flow of data from diverse sources, the variable quality of information, and the rapid dissemination of new data collectively generate social and health-related consequences. In addition, public debates between conflicting scientists can be particularly confusing for non-experts in the general public. This surge of information flow has been termed an 'infodemic', referring to a mix of online information containing both accurate content and misleading or false information. In 2020, the World Health Organization (WHO) declared a worldwide infodemic based on concerns that 'a global epidemic of misinformation - spreading rapidly through social media platforms and other outlets - poses a serious problem for public health'.

The concept of the infodemic goes beyond health, with excessive information evident across all domains. It encompasses both the overwhelming amount of information and the complexity of how that information is presented.

This flood of data – including both valid and misleading information – extends beyond online platforms, for example to traditional media. The impact of the infodemic can affect people's mental health by instigating feelings of anxiety and a lack of control. In addition, the widespread dissemination of misinformation erodes trust in institutions, influencing societal perceptions and potentially undermining the foundations of democratic societies.

In response to the surge of misinformation and disinformation during the recent COVID-19 pandemic and other health emergencies, there has been a notable increase in research on infodemics. Numerous studies are investigating the effects of infodemics and misinformation on societal behaviours, with a specific focus on evaluating their impact on individuals' lives and communities, as well as the frequency and predominant sources of unreliable data.

The term 'infodemic' is itself the subject of debate, as it draws an analogy between a medical phenomenon (a virus) and a social phenomenon. While the term was valuable during the COVID-19 pandemic, it may no longer be the most suitable term to describe current disinformation phenomena, especially when addressing health disinformation more broadly.
2. Domains of health disinformation
2.1. Historical roots and evolution

The evolution of health disinformation is deeply rooted in historical and societal developments. Its historical roots extend from movements such as the flat earth theory to the denial of scientific consensus on climate change, often driven by narrow political or economic motives. The resurgence of the anti-vaccine movement, driven by both historical concerns and contemporary online platforms, underscores the lasting impact of misinformation on public health.

More recently, the dissemination of health-related misinformation has adapted as a function of the communications technologies available. The internet and social media platforms have amplified the rapid spread of pseudoscientific claims and unfounded health information. This problem worsened during the COVID-19 pandemic, with anti-vaccine groups using social media to spread distrust, conspiracy theories, and concerns about the safety of COVID-19 vaccines. Before the COVID-19 pandemic, other health crises, including Ebola outbreaks, the HIV/AIDS epidemic, and influenza pandemics, had also triggered the spread of health disinformation. Finally, political motivations and economic interests further fuelled the spread of health disinformation during the COVID-19 pandemic (see section 2.2).
These aspects are coupled with the ongoing digitalisation of the ways in which people search for information. Statistical data on internet use EU-wide show that in recent years, growing numbers of people are consistently using the internet to look for health information. Figure 1 shows the share of the population using the internet to source health information, by age group. While young people aged 25-34 years are the group that uses the internet most for this purpose (growing from 56% in 2015 to 65% in 2023), the sharpest increase is evident among older people aged 65-74 years (growing from 25% in 2015 to 42% in 2023). The youngest group (people aged 16-19 years) shows relatively moderate use of the internet for health information, as well as the lowest growth between 2015 and 2023 (from 40% to 47%).
Figure 1: Share of people seeking health information on the internet, by age group, 2015-2023

Source: Authors' calculations based on Eurostat, information and communications technology (ICT) usage in households and by individuals (dataset code: isoc_ci_ac_i).
Figure 2 shows that the use of the internet to look for health information is most common among those living in cities, increasing from 50% to more than 60% over the 2015-2023 period. People living in towns and suburbs are somewhat less likely to use the internet for health information (46% in 2015 and 55% in 2023), while those living in rural areas are least likely to source health information via the internet (38% in 2015 and 51% in 2023). This difference in the use of the internet likely reflects different factors, such as ease of access to a good internet connection, differences in digital literacy levels, and the higher concentration of young people in towns and suburbs.
Figure 2: Share of people seeking health information on the internet, by place of living, 2015-2023

Source: Authors' calculations based on Eurostat, ICT usage in households and by individuals (dataset code: isoc_ci_ac_i).
Despite technological developments and widespread access to the internet, individuals do not always possess the necessary sophistication, knowledge, and literacy to effectively navigate and recognise accurate health information. The gap between individuals' increased access to information and their capacity to critically engage with health-related content remains a significant challenge. This phenomenon is transgenerational, as both younger and older people are provided with increasing means and tools to access a wide range of information and sources online, but sometimes lack sufficient literacy skills to use them properly. In this context, the term 'digital natives' is particularly misleading. The term refers to young users who grew up with significant exposure to digital technologies, such that they are familiar with computers and the internet. While the term suggests that those young users are skilled in and accustomed to digital tools, their critical thinking or media literacy does not necessarily keep pace.

Data from Eurostat show that, notwithstanding access to digital content, the ability to interpret information properly remains an issue among various age groups. Figure 3 shows the share of individuals with low or no digital skills in 2021 and 2023 at EU level, by age group. It uses a composite indicator of five specific areas (information and data literacy, communication and collaboration, digital content creation, safety, and problem solving) as a proxy for individuals' digital skills.

The indicator sheds some light on recent trends in the use of the internet and software in everyday life. Broadly speaking, the levels of digital literacy across groups follow the expected trends (i.e. the share with low or no digital skills increases in older groups). These recent data show that for most age groups, the number of people with no or low digital skills continued to decrease over time. However, the most senior (55-74 years) and most junior (16-19 years) groups show an increase in the share of people with low or no digital skills.
Figure 3: Individuals with low or no digital skills, 2021 and 2023

Source: Authors' calculations based on Eurostat, Individuals' level of digital skills (dataset code: isoc_sk_dskl_i21).
In addition to individuals' ability to properly use digital resources to find information, it is worth considering how often people actually check the truthfulness of that information. Figure 4 shows a general increase in the share of people who have checked (in one way or another) the reliability of information found online. Information-checking is carried out either on the internet or offline, although there is a trend towards checking online. At the same time, there has been an increase in the share of people not checking the truthfulness of information or content. The largest increase is among those not checking the truthfulness of the information because they already knew the information or because they recognised that the content or source was not reliable.
Figure 4: Check of truthfulness of information or content, 2021 and 2023

Source: Authors' calculations based on Eurostat, Individuals' level of digital skills (dataset code: isoc_sk_edic_i21).
2.2. Main domains of health disinformation

Health disinformation spans various domains, each presenting unique challenges for public health.

One significant domain is lifestyle, which is the single most important cause of non-communicable diseases (NCDs) in the EU. For-profit companies and individuals often promote unhealthy products, including ultra-processed food, nutrition products, or substances such as alcohol, tobacco, and vapes, in ways that conceal their potential harms. In doing so, they push their own narratives and distort public knowledge. This deliberate manipulation of information can have severe repercussions for individual choices and community well-being.

Vaccination-related disinformation is a particularly pressing concern, fuelled by anti-vaccination movements and sometimes even framed by actors within the healthcare industry. Disinformation campaigns are not a new phenomenon. The internet and social media platforms have become breeding grounds for disinformation, contributing to vaccine hesitancy and the resurgence of preventable diseases.

The COVID-19 pandemic highlighted the impact of these campaigns, which hindered vaccination efforts and compromised public health responses. The significant volume of anti-vaccine messages and the new questions raised about vaccine safety during the pandemic may now spill over to other important established immunisation programmes, including measles, for which vaccination rates declined after the pandemic.
Alternative medicine also plays a role in disseminating health disinformation. Providers can influence public opinion on diagnoses and treatments, both unintentionally and intentionally. For instance, numerous influencers promote simple nutrition supplements as miracle cures, using unvalidated claims about their effectiveness and thereby contributing to the dissemination of health disinformation. These influencers, often with large online followings, may lack scientific expertise and provide misleading information that can influence individuals' perceptions and decisions about health and wellness.

This domain is characterised by intentional bias and misinformation, often spread by those who distrust conventional medicine. The promotion of alternative treatments, especially during critical events such as the COVID-19 pandemic, poses a risk, as individuals may opt for unregulated supplements or undergo potentially harmful treatments.
Prescription drugs constitute a significant domain of health disinformation. The sensitivity of this issue is evident in the varying rules governing direct-to-consumer (DTC) advertising. In the EU, such advertising for prescription drugs is largely banned, due to concerns about the inherent conflict of interest arising from manufacturing potentially harmful drugs for profit. However, in countries such as the United States (US), this practice is legal and pharmaceutical companies often collaborate with social media influencers for promotion. The global reach of influencers, coupled with weak and outdated national regulations in certain regions, exacerbates the challenge. Despite the EU's stringent regulations on marketing these products, they prove insufficient in the dynamic and expansive landscape of global social media, highlighting the need for collaborative efforts to address this issue on an international scale.
Diagnostic tools, especially DTC diagnostic tools, present another domain of disinformation. The promotion of self-diagnosis can be deceptive, with profit motives becoming a concern when industries, influencers, and other information disseminators collaborate. As the accessibility of health information increases through the internet and DTC services, individuals may fall prey to misleading claims and inaccurate interpretations of test results, leading to potentially harmful decisions about their health.

In times of disaster, health-related disinformation can spread rapidly, exacerbating the challenges faced by affected populations. Visual content, false rumours and misleading articles can manipulate individuals into taking inappropriate actions or ignoring genuine warnings, complicating disaster response efforts.
3. Causes and targets of health disinformation
- Health disinformation has become a pervasive and multifaceted challenge. Its multiple sources encompass business models, dissemination tools, and the shifting focus of disinformation communities. These communities find a sense of identity and emotional fulfilment in spreading misleading narratives, often within groups that amplify their beliefs.
- Intercultural conflicts play a pivotal role in shaping the landscape of health information. Cultural differences contribute to varying interpretations of health practices, leading to the circulation of misinformation. The divergence in beliefs, norms and perceptions of medical treatments creates a breeding ground for myths and unverified health advice, particularly in a globally connected world where information transcends borders.
- Similarly, societal divisions, including social, cultural, economic and ideological disparities, amplify the spread of health disinformation and create an environment where individuals are more susceptible to confirmation bias. People tend to gravitate towards information that aligns with their existing beliefs or values, reinforcing preconceived notions and hindering critical evaluation of health-related content. This polarisation not only perpetuates the dissemination of misinformation but also contributes to the formation of so-called echo chambers, limiting exposure to diverse perspectives.
- ‘Echo chamber’ refers to an environment in which individuals are exclusively exposed to information or opinions that mirror and bolster their pre-existing beliefs. Online platforms, guided by algorithms designed to maximise user engagement, create digital echo chambers. Content exposure is determined by individual users’ behaviour and platform algorithms, shielded from public scrutiny. As a consequence, individual users’ exposure to potential disinformation is amplified by digital tools, making it more challenging for them to engage with opposing viewpoints and discussions on complex subjects.
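The self-reinforcing dynamic described above can be illustrated with a deliberately simplified simulation (the three topics and the click rule are hypothetical, not any real platform's algorithm): a recommender that greedily maximises engagement quickly converges on serving only belief-consistent content.

```python
from collections import defaultdict

# Toy model: the user engages only with content matching a pre-existing belief;
# the recommender greedily serves whatever earned the best click-through rate.
TOPICS = ["vaccine_sceptic", "mainstream_health", "fitness"]  # hypothetical topics
user_belief = "vaccine_sceptic"

clicks = defaultdict(int)
impressions = defaultdict(int)

def recommend():
    # Show each topic once first, then exploit the highest click-through rate.
    unseen = [t for t in TOPICS if impressions[t] == 0]
    if unseen:
        return unseen[0]
    return max(TOPICS, key=lambda t: clicks[t] / impressions[t])

feed = []
for _ in range(10):
    topic = recommend()
    impressions[topic] += 1
    if topic == user_belief:  # user clicks only belief-consistent content
        clicks[topic] += 1
    feed.append(topic)

print(feed)  # after one pass over all topics, only the belief topic is served
```

After a single round of exploration, every subsequent recommendation matches the user's belief: engagement-maximising feedback alone, with no malicious intent, is enough to narrow exposure.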
- Commercial interests within the health industry introduce a potent incentive for the dissemination of misleading information. Companies producing and selling health-related products or services may prioritise financial gains over accurate information. In pursuit of profit, these companies may engage in marketing tactics that exaggerate the benefits of their products or downplay potential risks, leading to the propagation of unverified health claims. The spread of such claims can exploit consumers’ vulnerability, contributing to a feeling of distrust and disappointment in the healthcare system. Several studies indicate that exposure to misleading information can significantly disturb individuals' perceptions, leading to fear and confusion. This can escalate into panic, with symptoms ranging from fatigue, stress, insomnia and anger to more severe manifestations such as anxiety and depression. The impact of misinformation thus affects individuals' mental well-being and can exacerbate existing mental health conditions.
- Driven by the pursuit of attention and engagement, the contemporary media landscape plays a pivotal role in disseminating health disinformation, often at the expense of accuracy. Sensationalised headlines, exaggerated claims and clickbait strategies contribute to the rapid spread of unverified health information. The prioritisation of sensational narratives over responsible reporting exacerbates the challenges associated with health disinformation. In parallel, economic motives play a significant role, as the monetisation of emotionally charged content incentivises the spread of disinformation. Social media platforms play a central role here, as content on these platforms is distributed globally in a largely unregulated ‘black box’ environment, characterised by a lack of transparency and accountability. In response to these challenges, the DSA aims to establish a safer online space by setting out regulations that govern content moderation, transparency, and user safeguarding. By imposing related obligations on platforms, the DSA aims to mitigate the spread of misinformation, harmful content, and the unregulated flow of information, thereby promoting a more secure and trustworthy digital environment for all users.
- Health issues are frequently entangled with political and ideological motivations, further complicating the information landscape.
- Governments and/or interest groups may manipulate health narratives to suit their agendas, whether to downplay the severity of a health crisis, promote specific treatments, or discredit opposing viewpoints. This calculated manipulation, often witnessed in times of crises, aims to strengthen political movements by sowing division and influencing public opinion, ultimately jeopardising the integrity of democratic processes. Such politicisation erodes public trust in health institutions, impeding effective communication and response efforts.
- Internal motivations also have a role to play. Personal beliefs may be influenced by unfounded notions about health conditions such as fertility or autism and shared with others, in good faith, to raise awareness. Some instances of health disinformation are driven by internal motivations for entertainment, where individuals may share misleading content simply because they find it amusing. The ‘Need for Chaos’ scale from US academic literature introduces the concept that some individuals find satisfaction in creating chaos. With the advent of social media and interconnected platforms, these individuals, previously isolated, can now connect more widely and sow turmoil more effectively.
- The surge in health disinformation during the COVID-19 pandemic can be attributed to various factors, primarily the global nature of the crisis. The mandatory vaccination campaigns and lockdown restrictions on movement created fertile ground for disinformation to flourish. Disinformation and conspiracy theories capitalised on two key pillars: firstly, oversimplification of the truth, framing situations as good versus bad, offered cognitive comfort in an inherently intricate world; secondly, the search for a scapegoat – often the government in pandemic response scenarios – became a source of comfort for those grappling with uncertainty.
- Disinformation also thrived due to a lack of reliable information. In the absence of credible information, the human reflex to seek reassurance led individuals to embrace false narratives to fill the gap. For example, initial contradictions from the Ministry of Health in Italy suggesting that masks might be detrimental created confusion and uncertainty. In such circumstances, people may turn to misinformation to fill the information gap, undermining effective prevention and mitigation strategies.
- Health disinformation poses a significant threat to public well-being, targeting various segments of society through different channels. The general public is often exposed to health disinformation through social media, websites and various online platforms. Misleading content can spread easily, reaching a wide audience and contributing to the proliferation of false health narratives. Emotional subjects, such as health-related issues, which affect everyone individually, tend to be more susceptible to both mis- and disinformation. Such emotional matters are fuelled by sensationalism, making health-related news more prone to disinformation.
- Patients emerge as a vulnerable target in the field of health disinformation, as they face potential harm from misleading narratives that affect their decision-making and well-being. Disinformation, often disseminated through online platforms, websites and social media, can influence patients' perceptions, leading to poor health choices and compromised outcomes.
- Within the patient population, certain demographic groups are particularly vulnerable, such as older people, children, pregnant women and people with pre-existing health conditions. By exploiting their specific concerns and fears, misinformation can prompt poor health decisions, deter key treatments and contribute to an overall decline in well-being. For instance, the increased tendency of older adults to share fake news on social media compared to younger adults is attributed not to cognitive decline but, rather, to their lower level of digital literacy. The consequences of such misinformation can be profound, affecting not only individual health outcomes but also challenging patients' trust in health professionals and institutions.
- Health disinformation often targets specific communities based on cultural, religious or socioeconomic factors. For instance, cultural health beliefs play a crucial role in shaping individuals' perspectives on their health. These beliefs influence their attitudes toward health problems, determining when they will refer to a healthcare service, as well as the type of healthcare service or professional to which they will turn. This impacts people’s responses to suggestions for lifestyle modifications, healthcare interventions and adherence to treatment. Similarly, religious factors can significantly influence people’s perspectives on health-related issues, contributing to the spread of misleading or inaccurate health-related information within religious communities. For instance, the COVID-19 pandemic affected traditional religious practices, leading some communities not to comply with restrictions and contributing to the spread of the virus. This resistance created a conflict between religion and evidence-based science, as some relied on faith over scientific advice.
- Conversely, certain religious communities successfully integrated religious guidance and scientific recommendations, reducing the risk of viral transmission.
- Individuals with higher socioeconomic status, including education and income, are generally less likely to accept health misinformation than those in lower socioeconomic groups. Tested in numerous studies, the knowledge gap hypothesis highlights the role of social class in the effectiveness of information dissemination, including health-related information, with higher-status individuals showing greater health literacy and being more inclined to follow recommended preventive measures. Individuals who are less educated or older tend to display a heightened propensity to distrust authorities, making them more susceptible to oversimplified narratives. However, it is essential to recognise that disinformation adapts to its target audience, exploiting contextual vulnerabilities. While it is true that older or less educated individuals may exhibit a higher susceptibility to explicit disinformation, the complex nature of the disinformation landscape implies that vulnerabilities exist across all age groups. Younger generations, such as Gen Y (so-called Millennials, born between 1981 and 1996) or Gen Z (born between 1997 and the early 2010s), are not immune, particularly in the era of information overload or infodemics. Concerns extend to issues beyond the pandemic, encompassing areas such as unverified diets, skincare products and self-diagnosis habits among younger individuals.
- Box 2: Examples of education initiatives to combat health disinformation
- Disinformation affects healthcare professionals, compromising their ability to provide evidence-based care. Health practitioners often experience feelings of confusion when confronted with health-threatening disinformation. For instance, individuals with chronic conditions such as diabetes may deviate from prescribed treatment plans due to misinformation. This confusion among medical practitioners can lead to inconsistent treatment approaches and compromise patient outcomes. In addition, patients do not always trust the expertise of healthcare professionals. This lack of confidence can be attributed to various factors, including reliance on alternative information sources such as social media, concerns about healthcare professionals’ experience, and constraints on practitioners’ time and availability to patients, particularly during the COVID-19 pandemic.
- Health disinformation can also influence political and policy decision-makers, leading to the formulation of ineffective policies that impact the overall health of a population. In the same context, it can create confusion within the scientific community, challenging established scientific evidence and impeding the progress of research and the development of evidence-based health practices. Finally, during pandemics, health disinformation poses a severe threat to global health efforts, including public health measures as responses to health crises.
- 4. Channels of health disinformation
- 4.1. Online and offline channels
- 4.1.1. Forms and trends of online and offline disinformation
- 4.1.2. The dual role of AI in the spread of disinformation content
- 4.1.3. Industry disinformation and market strategies
- 4.1.4. Hybrid threats
- 4.2. Role of social media
- 4.2.1. Social media as a channel of disinformation
- 4.2.2. Social media as a channel to tackle disinformation
- Disinformation occurs both offline (e.g. leaflets) and online (e.g. posts or videos), but spreads much more quickly online than offline (see section 4.1.1). Additionally, more specific channels of disinformation such as AI (see section 4.1.2), industrial marketing strategies (see section 4.1.3) and hybrid threats (see section 4.1.4) contribute to this phenomenon.
- The main channels of offline disinformation are traditional media such as TV, newspapers and radio, books and encyclopaedias, or social circles such as friends and relatives. Stakeholders also pointed to communities such as religious groups or students’ university networks as channels of offline disinformation.
- Although there is a conceptual difference between online and offline channels, in practice this dichotomy is blurred. Firstly, online disinformation is quantitatively more prevalent due to the amplifying features of the internet. Secondly, both online and offline channels are connected, as all information transits from one to the other. However, offline channels are more impactful on public opinion, as they remain historically more trusted and reliable. The circulation or retransmission of disinformation through offline channels can therefore aggravate disinformation outbreaks by giving it more visibility and importance than it would have received had it remained exclusively online.
- Offline communications (i.e. non-internet-based communications) have proven effective in tackling disinformation by delivering reliable information to individuals through private messages such as emails or applications (apps). The public often rely on traditional media as trustworthy sources of information and generally tend to fact-check information found online through these channels.
- Online disinformation is quantitatively more significant due to its viral nature. It spreads faster and reaches a wider audience thanks to the accessibility and immediacy of the internet, as well as its business model based on content popularity, automated profiling and advertising revenue.
- AI has an increasing impact on mass disinformation. AI tools have been used not only to create false and misleading content but also to spread it rapidly and widely, strengthening users’ engagement and adding misinformation to disinformation. Once the public is exposed to disinformation and incited to share and react to such posts on social media, disinformation spreads through users republishing messages without any intent to disseminate false information.
- Among the different types of AI tools and methods, deepfakes, chatbots, micro-targeting and tracking are the main techniques used by disinformation sources. Deepfakes involve digitally manipulated audio or visual content to create realistic, deceptive material. They can fabricate false information and present it as truth, or manipulate genuine information to make it appear fake. Both techniques can deceive human eyes and machines. Chatbots are software robots that autonomously engage in social media activities and pass as real users by mimicking human language and behaviours. Micro-targeting consists of using AI-powered tools to identify users' preferences and tailor content recommendations based on their preferences and interests, while tracking uses cookies or browser fingerprinting to follow users across websites, gathering usage data or unique browser and device features for identification purposes.
- One American study revealed that, with a single publicly available language-based AI tool, 102 blog articles containing more than 17 000 words of disinformation related to vaccines and vaping were generated within 65 minutes. These publications included fake testimonials and scientific-looking references and targeted specific groups sensitive to health issues, including young parents and pregnant women, older people, and people suffering from chronic health conditions.
- Conversely, AI also has the potential to address the issue, especially by automatically detecting disinformation and moderating or suppressing misleading content.
- AI detection tools use machine learning based on large datasets and pattern-recognition functions to detect factual inaccuracies and identify authors of disinformation.
- They can also support human fact-checking with automated tools such as filters, retrieval of evidence and pre-analysis of user-reported content, or assess the veracity of scientific claims.
- New algorithms are being developed to directly moderate content, using AI to filter, label, block, remove or deprioritise disinformation content to limit users’ exposure at an early stage. Initiatives are also developing more transparent and explainable AI. To date, fact-checking measures have primarily relied on manual human intervention. As the volume of disinformation continues to grow, however, manual fact-checking is becoming increasingly ineffective.
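As a rough illustration of the pattern-recognition approach mentioned above, misleading content can be flagged from word statistics learned on labelled examples. This is a toy Naive Bayes classifier over hand-made example claims (the training sentences and labels are invented for illustration, not taken from any deployed detection system):

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayesDetector:
    """Toy Naive Bayes text classifier: word frequencies per label."""
    def __init__(self):
        self.word_counts = {"misleading": Counter(), "reliable": Counter()}
        self.label_counts = Counter()
        self.vocab = set()

    def train(self, samples):
        for text, label in samples:
            self.label_counts[label] += 1
            for w in tokenize(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def classify(self, text):
        total = sum(self.label_counts.values())
        scores = {}
        for label in self.word_counts:
            # log prior + log likelihoods with add-one smoothing
            score = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in tokenize(text):
                score += math.log((self.word_counts[label][w] + 1) / denom)
            scores[label] = score
        return max(scores, key=scores.get)

# Invented training examples, purely illustrative
training = [
    ("miracle cure doctors hate this secret remedy", "misleading"),
    ("vaccines cause harm big pharma hides the truth", "misleading"),
    ("clinical trial results published in peer reviewed journal", "reliable"),
    ("health authority issues updated vaccination guidance", "reliable"),
]

detector = NaiveBayesDetector()
detector.train(training)
print(detector.classify("secret remedy doctors hate"))       # misleading
print(detector.classify("peer reviewed clinical guidance"))  # reliable
```

Production systems use far larger models and datasets, but the principle is the same: statistical patterns learned from labelled examples, which is also why such tools inherit their training data's blind spots and require the human oversight discussed in this section.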
- AI could thus play an important role in tackling disinformation. Currently, however, the power of AI tools to counter disinformation remains quite limited and bears greater risks for fundamental rights and democracy than disinformation itself (see section 4.5).
- Figure 5 summarises the limits and risks affecting AI’s capacity to moderate disinformation content.
- Figure 5: Limits and risks affecting AI’s capacity to moderate disinformation content
- Source: Authors’ scheme based on literature review.
- Although AI is efficient in supporting human debunking efforts, it is not yet powerful enough to counter disinformation autonomously. Using AI to debunk disinformation still requires considerable human intervention, making it less effective at tackling disinformation than at creating automated disinformation content. Overall, more effort is needed to counter the harmful potential of AI in generating and spreading disinformation. Countermeasures could include guidance documents promoting self-regulation and guidelines for leveraged implementation of the DSA, as well as regulatory measures or supporting documents encouraging social media and platforms to configure their algorithms to identify and limit the spread of health misinformation transparently. The creation of a platforms’ vigilance duty against disinformation could also be considered (see section 7).
- Industrial communication and marketing strategies can be a channel of health disinformation, although rules exist within the EU to prevent such abuses.
- Companies can be tempted to intentionally spread inaccurate or misleading messages to attract and retain customers and enhance their market power through different communication strategies.
- For instance, undertakings can combine several communication channels to convey misleading information about the healthiness of their products and entice consumers to buy them. Such communication strategies can be relatively effective.
- One study revealed that public exposure to positive messages advertising products commonly known to be unhealthy increased their attractiveness, especially to young people. In the field of advertising, consumers have become more wary and tend to engage more with verification of advertisers’ claims compared to other forms of disinformation, such as realistic fake or ambiguous photos and videos, or marketing campaigns through influencers.
- Another strategy is to sponsor research and communication. The pharmaceutical and food industries, at both international and national levels, provide funds to researchers and opinion leaders to influence scientific discourse and public perception. Studies revealed that sponsored clinical research influences physicians’ practices and that sponsorship can favourably influence the outcome of such research to the advantage of the sponsor. Evidence shows that certain companies also use strategic health communications and political networks to influence public health policies. These trends are paralleled in the pharmaceutical, alcohol, tobacco and vaping, and fossil fuel industries. Other businesses interact directly with patients or customers to enhance their reputation, using either subtle persuasion strategies or social media influencers to strengthen relationships with regular customers while responding to unfavourable customer views.
- This channel of disinformation seems more limited in Europe due to the existing EU regulatory framework. For instance, the Audiovisual Media Services Directive (AVMSD), the DSA, the General Data Protection Regulation (GDPR) and the EU Regulation on the provision of food information to consumers limit companies’ scope to promote unhealthy products or practices and false claims. Although their scope is limited and they may not directly apply to certain sources of disinformation per se, they provide a framework that contributes to the containment of disinformation. Stakeholder interviews confirmed the effectiveness of the existing EU legal framework in relation to industrial disinformation, as well as its limits in respect of product sponsorship and subliminal advertisement on social media. Despite limited empirical data, the US-centred literature describing companies’ disinformation practices seems to indicate, by contrast, that the EU regulatory framework successfully contributes to limiting disinformation within the EU.
- Hybrid threats are state or non-state actions aiming to exploit the vulnerabilities of the EU or its Member States to their own advantage by using a combination of diplomatic, military, economic or technological measures while remaining below the threshold of formal warfare. Such destabilisation threats have been identified by the European Council as a source of disinformation that can particularly affect the EU in times of crisis, such as the COVID-19 pandemic. Indeed, foreign powers can have an interest in spreading health disinformation to establish or enhance a climate of fear, as fear can lead to instability and weaken public authorities’ actions. Health disinformation can be particularly harmful: health affects everyone, and health disinformation can therefore reach a wide range of people and create a general atmosphere of panic, slowing public authorities’ initiatives. For example, in 2020, state-sponsored disinformation campaigns from Russia, Iran and China conveying conspiracy theories in relation to the COVID-19 pandemic were identified as hybrid threats targeting EU Member States.
- However, there is little publicly available evidence establishing the existence of successful pandemic-related health disinformation campaigns from foreign powers within EU territory.
- Social media is an essential channel of information today. As a result, it is also a major channel of disinformation, especially in times of great uncertainty, such as the COVID-19 pandemic (see section 4.2.1). However, public and private initiatives have emerged to tackle disinformation through social media, and platforms have also taken reactive measures against disinformation outbreaks (see section 4.2.2). Social media is defined here as any digital platform that facilitates social interaction online, including social networks such as Instagram and Facebook, video-sharing platforms such as YouTube, and instant messaging services such as WhatsApp and Snapchat that enable user connections and content-sharing.
- Poor-quality health information spreads rapidly through platforms, particularly affecting those with lower health or media literacy. This phenomenon stems from four key factors: (1) the shift of information consumption from traditional mass media to online social media and platforms; (2) an increased demand for information in times of crisis; (3) the relatively limited editorial oversight exercised over social media publications; and (4) the technical and behavioural features of online platforms, which make them prone to spreading disinformation on a wider scale. The role of health influencers on social media is also relevant.
- Social media has become one of users’ primary sources of information. Gen Y increasingly consumes news via YouTube and Facebook. Some groups opposing immunisation have been identified spreading disinformation on platforms such as WhatsApp and Facebook, contributing to vaccine hesitancy. This shift from traditional media sources to social media news is primarily due to hyperconnectivity and easy access to health disinformation online via smartphones and other devices. In addition, certain types of news found online, including disinformation, are often available for free, whereas more reliable news outlets often operate on the pay-per-view model that replaced subscriptions to printed newspapers.
- This increased use of social media as a primary source of information is intensified by people’s greater need for information in times of crisis. This is particularly the case during pandemics or health scandals, as the uncertainties surrounding everyone’s health drive people to consume news more intensely in search of reliable or confirmatory information.
- Information available online is not always subject to the same editorial rules and ethics as traditional media. However, the recently adopted DSA, if effectively enforced, would facilitate the reporting and de-ranking of disinformation content, introducing a form of editorial oversight. Questions remain about platforms’ legitimacy and ability to exercise such oversight, as it may lead to potential abuses or a chilling effect on freedom of speech. Additionally, the DSA does not fully apply to private messaging apps and smaller platforms, identified as niches. Smaller chat groups and platforms such as Discord, Telegram or MeWe attract partisan communities who adhere to or share disinformation theories because they encounter less opinion diversity and opposition on these social media. Accordingly, they are chosen by disinformation sources to avoid mainstream platforms’ stricter policies and spread disinformation more easily.
- Social media is characterised by structural elements that make it prone to content going viral, especially in anxiogenic times such as outbreaks and disasters. Users’ exposure to a high volume of information flowing through a single channel combining news and non-news elements makes it more difficult to identify disinformation. Platforms’ infrastructure and business models also enhance the spread of viral disinformation.
- News and content on online social media are designed to engage users and entice them to click and share posts, which generates financial revenue for authors and advertisers.
- Social behaviours online also contribute to the spread of disinformation. Filter bubbles and echo chambers are generally believed to facilitate the spread of disinformation online. The particular infrastructure of social media allows disinformation to reach people who could not have been reached before and to link like-minded people more easily than traditional networks. It also provides disinformation authors and believers with tools to spread disinformation more efficiently. Social media features encourage users to share disinformation even when they know it is inaccurate, for various reasons such as instant gratification, community approval, a certain need for chaos, or to seek confirmation from people they know.
- Influencers and other opinion leaders play a role in the spread of disinformation online. As individuals, they are not bound by editorial processes (e.g. as in a news organisation) or professional norms (e.g. in the medical community). They are often driven by commercial interests, either through collaboration with industry or through the pursuit of their own business goals (e.g. selling their own products). They may also hold and express personal views that are not sufficiently supported by scientific evidence. Some of these influencers have millions of followers. There are various types of health influencers, from ordinary people or physicians sponsored by the pharmaceutical industry for strategic communications, to celebrities and politicians, trusted figures who bear a special responsibility when spreading disinformation. Their moral authority can lend more credibility to disinformation, drawing people to false or misleading health information. Empirical research has demonstrated that disinformation originating from political figures constituted 20% of all misleading claims about COVID-19 but generated 69% of total social media engagement. In other words, online disinformation from political figures is relatively limited in volume but appears much more engaging than ordinary users’ posts spreading similar content.
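Taking the two cited percentages at face value, and assuming claims and engagement are counted over the same corpus, the implied per-claim amplification can be worked out as follows:

```python
# Figures from the study cited above: political figures authored 20% of
# misleading COVID-19 claims but drew 69% of the related engagement.
political_share_of_claims = 0.20
political_share_of_engagement = 0.69

# Engagement per claim, relative to the corpus-wide average (= 1.0)
political_rate = political_share_of_engagement / political_share_of_claims          # 3.45
ordinary_rate = (1 - political_share_of_engagement) / (1 - political_share_of_claims)  # ~0.39

amplification = political_rate / ordinary_rate
print(f"A political figure's claim drew roughly {amplification:.1f}x "
      f"the engagement of an ordinary user's claim")
```

Under these assumptions, a single misleading claim from a political figure attracted on the order of nine times the engagement of an equivalent claim from an ordinary user, which is what makes trusted public figures such effective vectors.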
- Although social media can be instrumentalised to rapidly spread disinformation online, it can also be used to share reliable information and counter online disinformation. During the pandemic, social media platforms developed their own initiatives against disinformation.
- Public authorities are increasingly making use of social media networks and platforms as an effective channel for reliable information campaigns. During the COVID-19 pandemic, governments and reliable sources used social media to address disinformation through information campaigns. For instance, the UNESCO campaign #ThinkBeforeSharing was used to counter the spread of vaccine-related disinformation. Member States’ regulation and cooperation with social media platforms to coordinate actions against online disinformation was also supported by NGOs, which simultaneously warned about the need to avoid ‘monopoly of truth’ abuses.
- Platforms also developed their own initiatives, which took different forms. While some platform providers underwent a stricter self-regulation process following the reform of the EU 2022 Strengthened Code of Practice on Disinformation, others adopted a lighter approach, focusing only on some aspects of the Code. Most of these initiatives relied on cooperation with fact-checking organisations, the building of internal fact-checking teams, or automated content moderation technologies to detect and remove false information. Some platforms also banned ads for medical masks and respirators.
- Although these initiatives were welcomed by public authorities, mainstream platforms such as Meta and X now appear to be dismantling their fact-checking teams and/or reducing their efforts to tackle disinformation. The reasons for this are twofold: firstly, platforms are cutting staff and turning to AI to support their fact-checking measures in order to save money; secondly, AI-supported disinformation continues to outpace human fact-checking efforts, calling their effectiveness into question despite their positive impact on information quality.
- Platforms’ efforts to moderate content have also raised concerns that they could lead to violations of freedom of speech. Future access to platforms’ data envisaged by the DSA should enable fact-checking organisations to better identify the targets and common trends of online disinformation.
- 5. Impact of health disinformation
- 5.1. Information integrity and trust
- 5.2. Social impacts of health disinformation
- Like any form of disinformation, health disinformation ultimately affects public trust in the information system as a whole. It pushes people to question the integrity of information and sources of information, leading to a vicious circle of uncertainty. Rayan-Moseley’s concept of the ‘liar’s dividend’ illustrates this vicious circle. He demonstrated how the public’s wariness of the existence of fabricated information makes people more sceptical of true information, especially in times of instability when false information runs rampant. When related to health issues, especially in the context of a pandemic or pharmaceutical scandal, this vicious circle can add uncertainty to an already anxiogenic context, amplifying peoples’ vulnerability to disinformation content. Similarly, disinformation can affect audiences’ vulnerability to manipulation and prevent people from making informed choices. Repetitive exposure to misleading and contradictory health information, combined with online targeted communications and recommendations on social media, can also affect people’s capacity to make informed decisions and find trustworthy information within a significant volume of news,,. Health disinformation misleads public perceptions of scientific evidence in order to gain economic, geopolitical, financial, or personal advantage from their decisions. 39
- At the same time, health disinformation tends to polarise public debates and accentuate opposition between groups. In the context of COVID-19, in countries such as Austria and Italy, health concerns and disinformation intensified hate speech and xenophobia against refugees and the Chinese community.
- New actors are now confronted with communication challenges, as health disinformation breeds distrust and scepticism towards pharmaceutical companies, health authorities and physicians. In the context of vaccine hesitancy during the COVID-19 pandemic, doctors and scientists had to learn how to engage in adapted communications to share reliable, understandable and appealing health information with the public. They were also invited to take part in public debates to counteract disinformation discourses and limit the after-effects on public health.
- Disinformation may lead to other information issues, such as misinformation and infodemics (see Chapter 4). The engaging nature of disinformation content, combined with repeated recommendations by platforms’ algorithms and audiences’ need for confirmatory opinions, contributes to its virality. Yet this also accentuates audiences’ insecurity and need for confirmatory information, leading to a vicious circle that spreads disinformation further.
- Certain counter-measures can also harm public trust and information integrity. For instance, content moderation, if not accompanied by adapted communication that engages in healthy debate, can feed conspiracy theories and drive its authors towards less-regulated channels.
- Abuses surrounding the definition of truth may also affect information-related fundamental rights such as media pluralism, freedom of speech and the right to information. There is a risk of overextending the qualification of disinformation and blocking factually true content, or of disproportionately using that qualification to censor information that might ‘offend, shock or disturb the State or any sector of the population’. Fact-checkers interviewed confirmed that the scope of their task was limited to checking factual accuracy. Yet the line between merely controlling factual accuracy to uphold the rule of law and disproportionately infringing freedom of speech through the definition of truth can be blurred. To date, the European Court of Human Rights (ECtHR) has never directly been asked to balance national measures tackling disinformation against the safeguards of Articles 9 and 10 of the European Convention on Human Rights (ECHR) as such. Its case-law, however, provides several relevant elements. Firstly, the Court observed in 2023 that while platform users holding a social media account for private purposes cannot be expected to fully moderate the comments reacting to their posts, they cannot be fully exempted from all liability either, as this might ‘facilitate or encourage abuse and misuse, including hate speech and calls to violence, but also manipulation, lies and disinformation’. The Court has also acknowledged, on several occasions, the existence of a form of ‘citizen journalism’ in parallel with traditional journalism. Via online tools, citizens share content that might otherwise have been ignored by traditional media, contributing to users’ right to information. Blocking such content could constitute a disproportionate interference with the rights protected under Article 10 ECHR.
- This risk was particularly evident in the debates surrounding the establishment of the criminal offence of scaremongering in Hungarian law during the COVID-19 pandemic.
- Box 2: Examples of governance measures to combat health disinformation (I)
- Health disinformation has a wide range of negative social impacts. It hampers national authorities’ efforts to secure high standards of public health and economic stability, and affects individuals’ daily life and health.
- At national and international level, health disinformation campaigns exploit health vulnerabilities to destabilise the functioning of society. These campaigns nurture a climate of fear and anxiety. They deepen people’s distrust of public institutions and hamper public authorities’ ability to take important decisions in times of crisis. This is accentuated when disinformation campaigns lead public figures such as politicians to make erroneous statements. Two situations can be distinguished here: firstly, disinformation can feed opposition political discourse and become part of a political agenda to undermine the governing party’s authority and credibility; and secondly, the genuine (unwitting) relaying of false information by political figures broadly engages the public but erodes trust in State authorities once it is refuted.
- Through this public distrust of public institutions, health disinformation affects the rule of law and the functioning of democratic institutions. Mistrust of public institutions and insufficiently informed decisions hamper public participation in democracy. The use of AI plays a particular role in this context: as users may genuinely believe realistic AI-generated disinformation online and spread it as a form of contribution to the public debate, disinformation may then influence health and political views in real life.
- Box 3: Examples of governance measures to combat health disinformation (II)
- On a broader scale, health disinformation leads to public health issues. During the COVID-19 pandemic, behaviours such as self-medication, anti-vaccination beliefs, the use of bogus medicine and homemade cures (some of which were fatal or aggravated symptoms), reluctance to adhere to social distancing and lockdown rules, and refusals to wear protective masks cost lives. Such reactions also had consequences for healthcare services and government actions, as they undermined the effectiveness of containment strategies, delayed the administration of proper treatment or the diagnosis of a given disease, and aggravated existing health issues.
- Academic literature notes that health disinformation also leads to economic turbulence, such as panic purchasing of medical supplies, a more difficult economic recovery, increases in social security expenses, and higher pressure on healthcare systems.
- At individual level, health disinformation has an adverse effect on health behaviours and well-being. Distorted perceptions of the truth and people’s difficulties in finding reliable information can cause physical harm and inappropriate protective behaviours. For example, disinformation can lead to both underuse and overuse of medical products and services. Disinformation on the safety of vaccines can lead to significant and dangerous underuse of established immunisation programmes, while disinformation about alternative medicine (e.g. homeopathy or untested dietary supplements) can divert resources from more helpful, evidence-based treatments. Extensive and inappropriate use of screening or direct-to-consumer (DTC) tools can also lead to overdiagnosis and overtreatment, with all their negative consequences, such as physical disorders from inappropriate medications, or financial and psychological distress.
- Health disinformation can have a negative impact on mental health. The anxiogenic context, amplified by the struggle to identify reliable information, increases mental health disorders such as depression and exacerbates negative feelings. Health disinformation prevents people from finding clear, consistent and certain information on health prevention or disease management. This intensifies anxiety and reduces people’s ability to cope with imminent health threats. In the most difficult times, such as the COVID-19 pandemic, global health threats compound these phenomena, creating generalised panic.
- Health disinformation hinders the development of adequate levels of health literacy, which is needed to address non-communicable diseases (NCDs) such as cancer or type 2 diabetes. People need to be well informed about how diet or substance use relates to their risk of NCDs, yet health disinformation distracts people from accurate prevention campaigns. Disinformation content can also praise the merits of certain products or methods without sufficient information on their appropriateness to the users’ health condition or their potential side effects.
- Information overload and misleading information prevent individuals from participating in public debates, creating anger and frustration.
- Finally, disinformation can lead to certain public order issues. For example, during the COVID-19 pandemic, disinformation claiming that radio waves emitted by 5G towers made people more vulnerable to COVID-19 resulted in arson, vandalism and harassment against telecom technicians in the United Kingdom (UK). Similar arson offences were recorded in the Netherlands.
- 6. EU initiatives and emerging challenges
- 6.1. EU efforts to counter health disinformation
- 6.2. Challenges in fighting health disinformation
- In recent years, the EU has actively addressed the challenge of disinformation through a series of strategic initiatives. In 2018, the European Commission introduced a comprehensive framework, articulated in the Communication on tackling online disinformation: a European approach, outlining essential principles and goals to guide actions against disinformation.
- Subsequently, in October 2018, representatives from online platforms, tech companies and the advertising industry collectively endorsed a self-regulatory Code of Practice on Disinformation. The Code includes commitments from online platforms, social networks and advertising industry players to combat the spread of disinformation. While not specifically health-focused, it addresses disinformation across various topics, including health. Monitored by the European Commission, which published an assessment in 2020 followed by guidance, the Code was strengthened in 2022 and endorsed by 34 signatories. It covers demonetisation, transparency and access to data, supported by a robust monitoring framework and a transparency centre to enhance accountability. In line with the DSA, the Code obliges very large online platforms and search engines to regularly assess systemic risks, including the potential misuse of their services for disinformation campaigns.
- In December 2018, the European Commission released the Action Plan Against Disinformation, a set of actions designed to strengthen cooperation between the EU and its Member States in order to proactively tackle the issue of disinformation.
- In March 2019, the EU launched the Rapid Alert System (RAS), in which EU institutions and Member States work together to streamline the exchange of information on disinformation campaigns and orchestrate coordinated responses. Based on open-source information, the system harnesses insights from diverse sources, such as academia, fact-checkers, online platforms and international partners. This collaborative approach is a key feature of the EU strategy against disinformation.
- The European Digital Media Observatory (EDMO) was launched in 2020 and engages in monitoring and combating disinformation across a spectrum of topics, including health.
- In December 2020, the European Commission presented the European Democracy Action Plan, which aimed to step up the fight against disinformation in a dynamic threat landscape. Rooted in European values, this initiative seeks to preserve freedom of expression and individuals’ right to access legal content. It promotes election integrity, enhances citizen participation, fosters civic engagement, and cultivates trust in democracy. The Action Plan recognises the profound impact of disinformation on European democracy, particularly during elections, and underscores the need for coordinated efforts beyond national or local level. It proposes the development of a comprehensive toolbox to counter foreign interference in the information space and mobilises financial support through the Citizens, Equality, Rights and Values programme, Creative Europe, Erasmus+, Horizon Europe, and Cohesion funds. The plan also reinforces news media organisations within the EU and globally by providing funding through Creative Europe, Digital Europe, and the Global Europe Human Rights and Democracy Programme.
- The 2022 DSA harmonises rules for intermediary service providers, addressing the risks associated with the propagation of disinformation. It imposes clear obligations on platforms in a bid to mitigate the spread of misleading or false information that can manipulate public opinion, exacerbate social divisions, and undermine trust in institutions. Through mechanisms for fact-checking, content labelling, and enforcement, the DSA incentivises platforms to prioritise the integrity of their content moderation efforts, fostering a more resilient digital environment for all.
- The massive amount of information that needs to be fact-checked, monitored and reported presents a significant burden for investigators. The issue is even more acute in the context of international crises, such as COVID-19, where fact-checkers have to operate across different countries and languages. This challenge is compounded by the lack of multidisciplinary knowledge among officials responsible for combating disinformation, which makes it difficult to systematically distinguish between misleading and accurate content. EDMO has set up an ad hoc task force to combat the spread of disinformation in Ukraine, comprising experts with different backgrounds and expertise. The scalability of such initiatives in the face of multiple crises remains uncertain, however.
- Differences in approaches to disinformation between EU institutions and Member States hamper coherent action. Some Member States may downplay the issue or face political pressure to limit support for EU-level efforts against disinformation, leading to inconsistent approaches to information-sharing. Agreeing on thresholds for identifying and combating disinformation is another obstacle.
- For example, if disinformation is circulating on the internet but is limited to niche platforms and has limited virality, it is not always clear whether action should be taken.
- All of this is exacerbated by the lack of a common definition of disinformation among EU institutions, Member States and stakeholders. Arguably, the concept of disinformation is ‘used as a catch-all term that does not help EU institutions to define different areas of problematic behaviour’. However, finding a common definition is complicated by the need to preserve a public sphere in which freedom of expression is assured. The EU Charter of Fundamental Rights requires that efforts to combat disinformation must not undermine freedom of expression; any measures that interfere with communication should therefore be necessary and proportionate, as well as anchored in the legal and normative framework of human rights.
- Regulators should therefore focus not only on the content itself, but on the underlying message and intent. A focus on content alone would inevitably raise freedom of expression issues. Indeed, the majority of fake news is not accidental (which would be misinformation) but driven by profit rather than influence. When the goal of the message becomes political, fake news begins to resemble more insidious content. A key issue is enforcing platforms' content moderation policies to combat health disinformation. This approach, however, faces challenges due to the varying and evolving policies of different online platforms. Despite notable improvements in platform policies and regulations on health disinformation, particularly during the COVID-19 pandemic, inconsistencies remain. In addition, some platforms have reduced their policy and safety teams over time, limiting their ability to effectively identify and address misleading content.
- While the DSA is considered an important tool for regulating online content, its enforcement is limited. Smaller online platforms, unlike the very large ones, often spread disinformation because they have lenient moderation policies and are still insufficiently regulated. Instant messaging apps such as Telegram, for example, play an important role in facilitating private and group conversations but fall outside the scope of the DSA.
- Finally, the emergence of AI-generated content poses a significant challenge in the fight against disinformation. AI systems capable of generating content are sometimes trained on disinformation, complicating detection efforts and undermining trust and information integrity. Understanding the evolving dynamics of AI-generated disinformation is therefore crucial to mitigating its profound impact on trust and information integrity.
- 7. Future steps and recommendations
- Reducing health disinformation and its impact on public health requires a multifaceted approach involving various stakeholders. Given the global nature of the disinformation challenge, it is crucial for all stakeholders to collaborate in their efforts to counter disinformation, encompassing the policy, organisational and social levels. The OECD has also endorsed the promotion of multistakeholder partnerships and international coordination.
- At policy level, governments need to enforce regulations and sanctions against platforms’ passiveness towards false information, striving for a balance between fundamental rights and moderation. Transparency policies, including algorithmic transparency and content moderation practices, must contribute to building trust during crises, e.g. through open data sources. Educational initiatives promoting media and health literacy are crucial, alongside support for high-quality media providers and independent journalism. Equipping the public with verification skills is essential in countering disinformation. Cyber experts can contribute to the detection of false narratives. Support for independent and ethical journalism is considered the ‘best antidote against disinformation’ and is central to guaranteeing the sustainability of journalism as ‘a public good’. Certain media institutions could also be designated trusted sources. Various authors have suggested redirecting users to evidence-based websites, for example by attaching warning labels to headlines or entire news pieces, as well as flagging content by platform owners.
- From an organisational perspective, schools, companies and civil society organisations are pivotal actors in enhancing media literacy through dedicated programmes. School-based initiatives should specifically foster media comprehension and the ability to discern misinformation. Additionally, employers can contribute to combating disinformation by implementing various measures in the workplace. They can support vaccination campaigns by disseminating information and resources through their intranet platforms, thereby encouraging employees to make informed decisions about their health. Employers can also incorporate misinformation awareness, digital literacy, and health literacy training into their continuous learning programmes, equipping their workforce with the skills necessary to critically evaluate information and identify misinformation. Employers can leverage the expertise of company physicians, if available, to preemptively address and debunk vaccination or health-related myths. By taking proactive steps and investing in education and resources, employers can significantly contribute to the fight against disinformation and promote a more informed and resilient workforce.
- Finally, at social level, citizens can play an active role and authenticate, confront, correct, and deliberate on information, actively engaging in the battle against false narratives. This entails fostering an environment where individuals exchange facts and opinions in a civil manner, while consciously tempering negative emotions such as anger and hatred. This requires better education and increased digital health literacy among the populace. It also implies an environment that encourages deliberative discourse. Social media platforms, for instance, must promote environments where content is driven not solely by emotional reactions but by the civil exchange of ideas.
- The following recommendations emerge from the literature and consultations with stakeholders:
- Educational initiatives:
- Integrate critical thinking skills into educational curricula and initiatives at all ages and levels. This aligns with general efforts to enhance media and digital health literacy, empowering individuals to question information, verify claims, and make informed decisions about their health. Particular attention should be paid to young users who, despite being familiar with digital tools, remain exposed to health disinformation and are not yet sufficiently aware of the risks. To that end, new networks should be built and existing networks extended to digital communities;
- Apply the inoculation method to develop audiences’ resilience to health disinformation before any outbreak. As disinformation campaigns are difficult to predict, a promising approach could consist of pre-emptively confronting audiences with disinformation content, together with explanations and methods to raise awareness of reliable sources of information, fact-checking techniques, and reporting tools. Interactive tools can be effective in engaging younger users;
- Train healthcare professionals to effectively communicate health information, debunk myths, and address patient concerns. Healthcare providers can play a crucial role in building trust and disseminating accurate information. Efforts should first concentrate on general practitioners and family physicians, as the public’s first contact for health matters and a trustworthy, authoritative source of health information. Measures could be taken to help them identify disinformation content and prepare them to deconstruct disinformation narratives when consulted by patients. Proportionality would be needed to balance the beneficial impact of communication against health disinformation with the subsequent burden on practitioners of engaging with patients during and outside consultations; and
- Develop tailored approaches for smaller communities, including specific messaging and interventions based on community characteristics. Although evaluated as effective and necessary, this measure would be more efficiently implemented at national (rather than EU) level. The specificity and diversity of each community’s needs and sensitivity to health disinformation, paired with limited means to implement such tailored approaches on a wide scale, could lower the practical EU added value of such measures. Grassroots actions involving community representatives and local authorities should be preferred to a broader top-down approach.
- Support for civil society:
- Work with opinion leaders and influencers to disseminate accurate health information and good practices within their communities. Building trust at community level can help to combat health disinformation. Many influencers have a large audience and a certain authority over younger audiences. Influencer training and partnerships could thus contribute to the dissemination of reliable information and/or good practices in relation to the public consumption of health information. Appropriate safeguards should be envisaged to ensure that opinion leaders can engage in healthy debates and effective communications without feeding conspiracy theories affecting the EU and/or individuals;
- Ensure sustained and permanent funding for civil society organisations engaged in combating disinformation. This would give them the financial security to maintain, update and expand research, thereby fostering cooperation and advancing solutions. To ensure the sustainability of this measure and safeguard EU fundamental values such as freedom of speech and media pluralism, this funding should be conditional on effectiveness checks. To ensure that resources are not diverted away from well-organised organisations abiding by appropriate ethical and professional standards, mechanisms should be put in place to assess the proportionality and results obtained by funded organisations. Candidate organisations combating disinformation could respond to tender offers setting out the precise eligibility criteria, expected results, professional standards and the relevant effectiveness assessments to be applied ex post;
- Encourage collaboration and interdisciplinarity in civil society efforts. Partnerships should be fostered between stakeholders, public authorities, and fact-checking organisations. To reinforce efficiency, collaboration should be supported with international organisations, such as the WHO, to share conclusions and strategies for combating disinformation, especially concerning global issues like the COVID-19 pandemic. Cooperation should also be considered with actors engaged in tackling foreign interference in democratic processes and information manipulation. Common grounds, methods and appropriate forums should be developed and refined to enable professionals with different backgrounds and mindsets to work together;
- Collaborate with social media and platforms to configure algorithms to identify and limit the spread of health misinformation. Transparency by design in content moderation processes should be incentivised, ensuring that users understand how misinformation is detected and addressed online. This measure could also grant users more autonomy by making social media settings more open to users’ preferences;
- Foster projects that promote science-based health content and improve reporting on health issues to combat misinformation. Popularisation of evidence-based research articles written in plain language could make health information more accessible to the public while encouraging academic research. This would not only reduce the panic effects related to pandemics and health crises, but also deepen constructive public and academic debates on health issues;
- Promote responsible reporting among social media and journalists. Journalists are subject to stricter editorial standards than content shared on mainstream social media, but they are losing their traditional influence. Efforts should be made to encourage accurate, evidence-based reporting on health issues and to discourage sensationalism, which may contribute to the spread of misinformation. Journalist-oriented fact-checking training should be supported to raise awareness of the importance and correct ways of sharing sound scientific information; and
- Support the independence of national public service and media service providers by providing guidelines and resources to avoid any form of censorship or foreign interference and to support media pluralism. Although traditional media are subject to stricter editorial rules, their resources and business models are losing momentum in the face of social media. Supporting sustainable, ethical and science-based journalism could contribute to safeguarding media pluralism and fostering the circulation of evidence-based information.
- Preparedness and evaluation:
- Adopt a proactive and systematic approach to counter disinformation, focusing on preventive, rather than solely reactive, measures. Constant monitoring should be set up to anticipate health disinformation before it causes harm. Disinformation strategies, content and channels should be analysed, recorded, and documented to better identify future threats and the most effective ways to address them;
- Maintain sustained attention on disinformation issues. This measure contributes to a proactive approach and includes ongoing education, resource provision, and regular information updates to the public. Disinformation, including health disinformation, is a multidimensional problem requiring comprehensive solutions. Continuous awareness-raising initiatives addressing disinformation concerns before outbreaks should be used to reinforce society’s preparedness and resilience;
- Continuously assess the effectiveness of interventions and adjust strategies based on emerging trends in health misinformation. This involves regularly updating educational programmes and campaigns to stay relevant and effective, as well as assessing the efficient implementation of existing legal frameworks, notably the DSA;
- Integrate multidimensional research on the causes and impacts of health misinformation into policy responses addressing disinformation. This includes leveraging insights from behavioural science to understand and address the cognitive biases that contribute to the acceptance or voluntary spread of misinformation, and acknowledging the economic impacts of disinformation, especially on overall vaccination campaigns and social security costs. This approach should strengthen both the consistency and the efficiency of policy measures to tackle disinformation;
- Advocate for greater social media platform transparency and data-sharing with researchers beyond the DSA. Research can help to identify and understand the nature of misinformation at an early stage and to assess the impact of interventions in real-world settings. Greater access to data would thus allow higher research quality and fast-tracked responses;
- Ensure up-to-date, transparent and clear communication from public health authorities. Uncertain, contradictory and ambiguous communication can have counterproductive effects on public health and public trust, whereas timely and accurate information can counteract the spread of misinformation during health crises. Particular attention should be paid to effective communication and to engaging with the public and with detractors to enhance public authorities’ trustworthiness and address conspiracy theories;
- Address the stigma and fear associated with health issues on a regular basis, as these emotions can make individuals more susceptible to misinformation. Promote open conversations that reduce anxiety and provide accurate information. Interventions should be tailored to align with human decision-making processes to increase public acceptance of public measures and their effectiveness; and
- Develop existing task forces and issue recommendations to address specific disinformation crises. This includes providing specific training and solutions for different scenarios and investing in media literacy to mitigate the costs associated with disinformation. Task forces could be entrusted with monitoring disinformation campaigns to identify threats at an early stage and to evaluate the effectiveness of public and private countermeasures.
- Regulatory measures:
- The existing regulatory framework already contains effective provisions to tackle disinformation at EU level. As a result, additional regulatory measures should focus on leveraging and enhancing effective implementation of the existing legal framework, rather than adopting major new EU legislation. Such an implementation-focused approach could also contribute to identifying and addressing remaining gaps in the legislation.
- Strengthen the implementation of the existing framework for health communication, ensuring that information disseminated to the public is accurate, evidence-based and follows ethical standards. This includes leveraging existing key legislation, such as the DSA and the Audiovisual Media Services Directive (AVMSD). The DSA contains several relevant provisions that, if implemented effectively, would tackle disinformation. The development of practical guidelines and a close follow-up of DSA implementation, combined with public health risk assessments, should therefore be a priority;
- Mandate social media platforms to engage further in content moderation. Platforms should be given further obligations or tailored practical guidelines on implementing the DSA to provide effective reporting systems for harmful disinformation, label disinformation, moderate content, or deplatform malicious actors. Guidelines could constitute a first step, enabling the EU to identify the effectiveness of the measures and the resources needed to monitor compliance. Particular attention should be paid to the timeframe and proportionality of content moderation actions;
- Promote a principle of transparency by design in technology companies. When imposing obligations on social media platforms, transparency, proportionality and good administration practices should be respected. This would require significant efforts and resources to engage with platform service providers to change their business models;
- Encourage and strengthen self-regulation measures, such as the Code of Practice on Disinformation, to counter platforms’ and social media service providers’ unwillingness to invest in fighting disinformation, or their withdrawal from the exemplary efforts undertaken during the COVID-19 pandemic. These measures should establish a dialogue with the relevant stakeholders to better understand the burden and effectiveness of these efforts. Incentives should also be developed to support self-regulation and enhance public trust in platforms’ capacity to tackle such disinformation;
- Adapt social media platforms’ algorithms to ensure that they prioritise reliable sources of information. This measure poses several challenges, as it implies changing platforms’ business models, which are based on revenue generated by popular content.
- Firstly, platform providers might be unwilling to significantly change their business models, which would require substantial negotiation efforts from policymakers to foster a culture and mindset shift. Secondly, at EU level, technical capacity is needed to monitor actors’ compliance and to evaluate the continuing effectiveness of such a change. Thirdly, well-defined, clear and transparent legal criteria should be applied to determine reliable sources. These criteria should also be subject to a proportionality test to avoid chilling effects, media source mainstreaming, and breaches of fundamental rights such as freedom of speech and media pluralism. Lastly, to prevent suspicion of the EU and national authorities, collaborative models based on diversity and adequate judicial review should be established to implement this measure;
- Mandate platforms to conduct and publish risk assessments of the spread of health misinformation through their services and to implement mitigation strategies based on those assessments. To ensure their reliability and effectiveness, the results of these risk assessments should be open to debate and/or involve independent experts;
- Similar to the transparency reports required by Articles 15, 24 and 42 of the DSA, oblige platforms to publish regular reports detailing efforts to combat health misinformation, including statistics on misinformation detected, actions taken, and the effectiveness of those actions. These reports should be results-driven and followed by appropriate implementing actions to address issues. Proactive initiatives should be acknowledged and encouraged. This measure could also be extended to other EU priorities beyond health disinformation;
- Empower trusted flaggers and independent researchers to verify the efficacy claims of health products and test misinformation. Certification mechanisms could be developed to provide trusted flaggers and experts with the necessary authority to intervene, ultimately ensuring accountability in intervention design and implementation;
- Mandate broadcasters and on-demand services to issue corrections or counternarratives for content identified as health misinformation, especially when such misinformation poses a significant risk to public health. Although this measure could be extended beyond health disinformation, the matter would be better addressed at national level; and
- Involve cyber experts and innovators in the regulatory process and allow them to report and explore technological solutions based on the principle of safety by design.
- In conclusion, the recommended measures converge on a collaborative approach across education, technology, healthcare, and policy measures to counter health disinformation. By supporting fact-checkers, promoting media literacy, and fostering scepticism, these efforts aim to prevent the spread of health disinformation. Together, they contribute to building a more informed and resilient public, reducing the impact of health misinformation on individuals, communities and society.
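The recommendation to adapt ranking algorithms so that reliable sources are prioritised can be illustrated with a minimal sketch. The function, field names, weights and reliability scores below are entirely hypothetical, intended only to show the general re-weighting idea, not any platform's actual ranking system:

```python
def rerank(items, reliability, alpha=0.6):
    """Order feed items by a blend of engagement and source reliability.

    alpha controls the weight given to reliability; alpha=0 reproduces a
    purely engagement-driven (popularity-based) ranking.
    """
    def score(item):
        w = reliability.get(item["source"], 0.0)  # unknown sources get no boost
        return (1 - alpha) * item["engagement"] + alpha * w
    return sorted(items, key=score, reverse=True)

# Hypothetical feed: a popular but unverified post vs. an official source.
feed = [
    {"source": "unverified-blog", "engagement": 0.9},
    {"source": "health-ministry", "engagement": 0.4},
]
reliability = {"health-ministry": 1.0, "unverified-blog": 0.1}

ranked = rerank(feed, reliability)
# With alpha=0.6 the health-ministry item outranks the more "popular" blog post.
```

The sketch also makes the policy tension in the text concrete: the choice of `alpha` and of the reliability scores is exactly where the legal criteria, proportionality test and judicial review discussed above would have to apply.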
- REFERENCES
- Adams, Z., Osman, M., Bechlivanidis, C. and Meder, B., 2023, ‘(Why) Is misinformation a problem?’, Perspectives on Psychological Science, Vol. 18, No 6, p. 1450. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10623619/.
- Albarracin, D., Romer, D., Jones, C., Hall Jamieson, K. and Jamieson, P., 2018, ‘Misleading claims about tobacco products in YouTube videos: experimental effects of misinformation on unhealthy attitudes’, Journal of Medical Internet Research, Vol. 20, No 6, e229. Available at: https://pubmed.ncbi.nlm.nih.gov/29959113/.
- Alemanno, A. and Garde, A. (Eds.), 2015, ‘Regulating lifestyles: Europe, Tobacco, Alcohol and Unhealthy Diets’, in Regulating Lifestyle Risks: The EU, Alcohol, Tobacco and Unhealthy Diets, Cambridge University Press, pp. 1-20. Available at: https://doi.org/10.1017/CBO9781107478114.003.
- Almond, D., Du, X. and Papp, A., 2022, ‘Favourability towards natural gas relates to funding source of university energy centres’, Nature Climate Change, Vol. 12, No 12, pp. 1122-1128. Available at: https://www.researchgate.net/publication/365299920_Favourability_towards_natural_gas_relates_to_funding_source_of_university_energy_centres.
- American Psychological Association, 2023, Using Psychological Science to Understand and Fight Health Misinformation, An APA Consensus Statement. Available at: https://www.apa.org/pubs/reports/misinformation-recommendations.pdf.
- Arceneaux, K., Gravelle, T. B., Osmundsen, M., Bang Petersen, M., Reifler, J. and Scotto, T. J., 2021, ‘Some people just want to watch the world burn: the prevalence, psychology and politics of the “Need for Chaos”’, Philos Trans R Soc Lond B Biol Sci, Vol. 376, No 1822. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7934998/.
- Ayub, S., Anugwom, G. O., Basiru, T., Sachdeva, V., Muhammad, N., Bachu, A., Trudeau, M., Gulati, G., Sullivan, A., Ahmed, S. and Jain, L., 2023, ‘Bridging science and spirituality: The intersection of religion and public health in the COVID-19 pandemic’, Frontiers in Psychiatry, Vol. 14. Available at: https://doi.org/10.3389/fpsyt.2023.1183234.
- Baeten, R., 2009, Social developments in the European Union 2009: EU pharmaceutical policies: direct-to-consumer advertising, ETUI. Available at: https://www.etui.org/sites/default/files/chap8_Rita%20Baeten.pdf.
- Bayer, J., Holznagel, B., Lubianiec, K., Pintea, A., Schmitt, J. B., Szakács, J. and Uszkiewicz, E., 2021, Disinformation and propaganda: impact on the functioning of the rule of law and democratic processes in the EU and its Member States – 2021 update, Publication for the Special Committee on Foreign Interference in all Democratic Processes in the European Union, including Disinformation, Policy Department for External Relations, European Parliament, Brussels, p. 109. Available at: https://www.europarl.europa.eu/thinktank/en/document/EXPO_STU(2021)653633.
- Bang Petersen, M., Osmundsen, M. and Arceneaux, K., 2023, ‘The “Need for Chaos” and Motivations to Share Hostile Political Rumours’, American Political Science Review, Vol. 117, No 4, pp. 1486-1505. Available at: https://www.cambridge.org/core/journals/american-political-science-review/article/need-for-chaos-and-motivations-to-share-hostile-political-rumors/7E50529B41998816383F5790B6E0545A.
- Bernard, R., Bowsher, G., Sullivan, R. and Gibson-Fall, F., 2021, ‘Disinformation and epidemics: Anticipating the next phase of biowarfare’, Health Security, Vol. 19, No 1. Available at: https://www.liebertpub.com/doi/full/10.1089/hs.2020.0038.
- Bianco, V. and Tomsa, S., 2020, Countering online misinformation resource pack, UNICEF Regional Office for Europe and Central Asia. Available at: https://www.unicef.org/eca/media/13636/file.
- Boesch, G., 2023, What is pattern recognition? A gentle introduction (2024), viso.ai. Available at: https://viso.ai/deep-learning/pattern-recognition/.
- Bontridder, N. and Poullet, Y., 2021, ‘The role of artificial intelligence in disinformation’, Data & Policy, Vol. 3, No 32, Cambridge University Press. Available at: https://www.cambridge.org/core/journals/data-and-policy/article/role-of-artificial-intelligence-in-disinformation/7C4BF6CA35184F149143DE968FC4C3B6.
- Borges do Nascimento, I. J., Pizarro, A. B., Almeida, J. M., Azzopardi-Muscat, N., Gonçalves, M. A., Björklund, M. and Novillo-Ortiz, D., 2022, ‘Infodemics and health misinformation: A systematic review of reviews’, Bulletin of the World Health Organization, Vol. 100, No 9, pp. 544-561. Available at: http://doi.org/10.2471/BLT.21.287654.
- Botero Arcila, B. and Griffin, R., 2023, Social media platforms and challenges for democracy, rule of law and fundamental rights, Publication for the Special Committee on Civil Liberties, Justice and Home Affairs, Directorate General for Internal Policies, European Parliament, Brussels. Available at: https://www.europarl.europa.eu/thinktank/en/document/IPOL_STU(2023)743400.
- Brandt, A. M., 2012, ‘Inventing conflicts of interest: a history of tobacco industry tactics’, Am J Public Health, Vol. 102, No 1. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3490543/.
- Brest, A., 2020, Filter bubbles and echo chambers, Fondation Descartes. Available at: https://www.fondationdescartes.org/en/2020/07/filter-bubbles-and-echo-chambers/.
- Britannica, Science & Tech: Confirmation Bias. Available at: https://www.britannica.com/science/confirmation-bias.
- Brown, S., 2021, Machine learning, explained, Artificial intelligence, MIT Sloan School of Management. Available at: https://mitsloan.mit.edu/ideas-made-to-matter/topics/artificial-intelligence.
- Canadian Red Cross, 2017, Tech Talk: Why misinformation can be dangerous in disasters. Available at: https://www.redcross.ca/blog/2017/4/tech-talk-why-misinformation-can-be-dangerous-in-disasters.
- Carcioppolo, N., Di Lun, D. and McFarlane, S. J., 2022, ‘Exaggerated and Questioning Clickbait Headlines and Their Influence on Media Learning’, Journal of Media Psychology, Vol. 34, No 1, pp. 30-41. Available at: https://econtent.hogrefe.com/doi/10.1027/1864-1105/a000298.
- Castle, G., Gathani, R. and Spivey, D., 2023, ‘Inside EU Life Sciences: Updates on legal developments in the EU Life Sciences Industry’, EU Pharma Legislation Review Series: Advertising Updates Reflect Evolution Rather than Revolution, Covington. Available at: https://www.insideeulifesciences.com/2023/05/03/eu-pharma-legislation-review-series-advertising-updates-reflect-evolution-rather-than-revolution/.
- Centers for Disease Control and Prevention (CDC), 2023, Cancer Diagnosis and Treatment: Complementary and Alternative Medicine. Available at: https://www.cdc.gov/cancer/survivors/patients/complementary-alternative-medicine.htm.
- Chadwick, A. and Stanyer, J., 2022, ‘Deception as a bridging concept in the study of disinformation, misinformation, and misperceptions: Toward a holistic framework’, Communication Theory, Vol. 32, No 1, pp. 1-24. Available at: https://doi.org/10.1093/ct/qtab019.
- Chou, Y. S., Gaysynsky, A. and Cappella, J. N., 2020, ‘Where we go from here: Health misinformation on social media’, American Journal of Public Health, Vol. 110, S273-S275. Available at: https://doi.org/10.2105/AJPH.2020.305905.
- Ciampaglia, G. L., Menczer, F. and the Conversation US, 2018, ‘Biases make people vulnerable to misinformation spread by social media: Researchers have developed tools to study the cognitive, societal and algorithmic biases that help fake news spread’, Scientific American. Available at: https://www.scientificamerican.com/article/biases-make-people-vulnerable-to-misinformation-spread-by-social-media/.
- Clifford, T., 2020, ‘Biohaven Pharmaceuticals lands Khloe Kardashian as influencer of new migraine drug’, CNBC, 15 July 2020. Available at: https://www.cnbc.com/2020/07/15/khloe-kardashian-teams-with-biohaven-as-influencer-of-new-migraine-drug.html.
- Collier, R., 2018, ‘Containing health myths in the age of viral misinformation’, Canadian Medical Association Journal, Vol. 190, No 19. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5953573/.
- Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on the European Democracy Action Plan, COM/2020/790 final. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2020%3A790%3AFIN&qid=1607079662423.
- Coombes, R., 2023, ‘Row over ultra-processed foods panel highlights conflicts of interest issue at heart of UK science reporting’, BMJ, Vol. 383. Available at: https://www.bmj.com/content/383/bmj.p2514/rr.
- Council of the European Union, 2020, Council conclusions on strengthening resilience and countering hybrid threats, including disinformation in the context of the COVID-19 pandemic. Available at: https://www.consilium.europa.eu/en/press/press-releases/2020/12/15/council-calls-for-strengthening-resilience-and-countering-hybrid-threats-including-disinformation-in-the-context-of-the-covid-19-pandemic/.
- Csaba, G., 2020, Fighting Fake News or Fighting Inconvenient Truths? – On the Amended Hungarian Crime of Scaremongering, Verfassungsblog. Available at: https://verfassungsblog.de/fighting-fake-news-or-fighting-inconvenient-truths/.
- Dallo, I., Elroy, O., Fallou, L., Komendantova, N. and Yosipov, A., 2023, ‘Dynamics and characteristics of misinformation related to earthquake predictions on Twitter’, Scientific Reports, Vol. 13, 13391. Available at: https://doi.org/10.1038/s41598-023-40399-9.
- Davidson, B. I., Wischerath, D., Racek, D., Parry, D. A., Godwin, E., Hinds, J., van der Linden, D., Roscoe, J. F., Ayravainen, L. and Cork, A. G., 2023, ‘Platform-controlled social media APIs threaten open science’, Nature Human Behaviour, Vol. 7, No 12, pp. 2054-2057. Available at: https://doi.org/10.1038/s41562-023-01750-2.
- De Regt, A., Montecchi, M. and Lord Ferguson, S., 2020, ‘A false image of health: how fake news and pseudo-facts spread in the health and beauty industry’, Journal of Product & Brand Management, Vol. 29, No 2, pp. 168-179. Available at: https://doi.org/10.1108/JPBM-12-2018-2180.
- Directive (EU) 2018/1808 of the European Parliament and of the Council of 14 November 2018 amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) in view of changing market realities, OJ L 303, 28.11.2018, pp. 69-92. Available at: https://eur-lex.europa.eu/eli/dir/2018/1808/oj.
- ECtHR, Judgment of 15 May 2023, Sanchez v. France, no. 45581/15, paragraph 185. Available at: https://hudoc.echr.coe.int/#{%22fulltext%22:[%22\%22disinformation\%22%22],%22sort%22:[%22kpdate%20Descending%22],%22documentcollectionid2%22:[%22GRANDCHAMBER%22,%22CHAMBER%22],%22itemid%22:[%22001-224928%22]}.
- ECtHR, Plenary, Judgment of 7 December 1976, Handyside v. the United Kingdom, no. 5493/72, paragraph 49. Available at: https://hudoc.echr.coe.int/#{%22itemid%22:[%22001-57499%22]}.
- ECtHR, KEY THEME Article 10 – Contribution to public debate: Journalists and other actors (last updated: 31/08/2023), ECHR-KS, p. 2. Available at: https://ks.echr.coe.int/web/echr-ks/article-10.
- EDMO, At a glance. Available at: https://edmo.eu/edmo-at-a-glance/.
- Engel, E., Gell, S., Heiss, R. and Karsay, K., 2023, ‘Social media influencers and adolescents’ health: A scoping review of the research field’, Social Science & Medicine, Vol. 340, 116387. Available at: https://doi.org/10.1016/j.socscimed.2023.116387.
- ERGA, 2020, Notions of disinformation and related concepts. Available at: https://erga-online.eu/wp-content/uploads/2021/03/ERGA-SG2-Report-2020-Notions-of-disinformation-and-related-concepts-final.pdf.
- Charter of Fundamental Rights of the European Union. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:12012P/TXT.
- European Commission, 2022, The 2022 Code of Practice on Disinformation. Available at: https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation.
- European Commission, 2018, Action Plan on disinformation: Commission contribution to the European Council, 13-14 December 2018. Available at: https://commission.europa.eu/publications/action-plan-disinformation-commission-contribution-european-council-13-14-december-2018_en.
- European Commission, 2018, Communication on tackling online disinformation: a European approach, 26 April 2018. Available at: https://digital-strategy.ec.europa.eu/en/library/communication-tackling-online-disinformation-european-approach.
- European Commission, 2018, Shaping Europe’s digital future: 2018 Code of Practice on Disinformation. Available at: https://digital-strategy.ec.europa.eu/en/library/2018-code-practice-disinformation.
- European Commission, 2019, Priorities: Protecting democracy, Documents on European Democracy Action Plan. Available at: https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/new-push-european-democracy/protecting-democracy_en#documents.
- European Commission, 2019, Protecting Democracy. Available at: https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/new-push-european-democracy/protecting-democracy_en#documents.
- European Commission, 2020, Assessment of the Code of Practice on Disinformation – Achievements and areas for further improvement, SWD(2020) 180 final, Brussels, 10 September 2020. Available at: https://digital-strategy.ec.europa.eu/en/library/assessment-code-practice-disinformation-achievements-and-areas-further-improvement.
- European Commission, 2021, Guidance on Strengthening the Code of Practice on Disinformation, 26 May 2021, COM(2021) 262 final. Available at: https://digital-strategy.ec.europa.eu/en/library/guidance-strengthening-code-practice-disinformation.
- European Commission, 2022, Countering Hybrid Threats, Factsheet. Available at: https://defence-industry-space.ec.europa.eu/eu-defence-industry/hybrid-threats_en.
- European Commission, Open Source Observatory (OSOR), Digital Services Act Transparency Database. Available at: https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/dsa-transparency-database.
- European Commission, Shaping Europe’s Digital Future: The Digital Services Package. Available at: https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package.
- European Parliament, 2023, A New Push for European Democracy, Legislative Train Schedule: European Democracy Action Plan. Available at: https://www.europarl.europa.eu/legislative-train/theme-a-new-push-for-european-democracy/file-european-democracy-action-plan.
- EPRS, 2020, At a glance: Countering the health 'infodemic'. Available at: https://www.europarl.europa.eu/RegData/etudes/ATAG/2020/649369/EPRS_ATA(2020)649369_EN.pdf.
- European Union External Action, 2019, Rapid Alert System, Factsheet. Available at: https://www.eeas.europa.eu/node/59644_en.
- European Union, 2022, The 2022 Strengthened Code of Practice on Disinformation. Available at: https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation.
- Eurostat, 2013, Individuals’ level of digital skills (from 2021 onwards) (isoc_sk_dskl_i21) ESMS Indicator Profile. Available at: https://ec.europa.eu/eurostat/cache/metadata/en/isoc_sk_dskl_i21_esmsip2.html.
- Fabbri, A., Lai, A., Grundy, Q. and Bero, L. A., 2018, ‘The influence of industry sponsorship on the research agenda: a scoping review’, American Journal of Public Health, Vol. 108, No 11. Available at: https://pubmed.ncbi.nlm.nih.gov/30252531/.
- Fidler, D. P., 2019, Disinformation and Disease: Social Media and the Ebola Epidemic in the Democratic Republic of the Congo, Council on Foreign Relations. Available at: https://www.cfr.org/blog/disinformation-and-disease-social-media-and-ebola-epidemic-democratic-republic-congo.
- FRA, 2020, ‘Coronavirus pandemic in the EU – fundamental rights implications’, Bulletin No 4: Country report – Hungary. Available at: https://fra.europa.eu/en/publication/2020/covid19-rights-impact-july-1#country-related.
- Galanti, G. A., 2000, ‘An introduction to cultural differences’, West J Med, Vol. 172, No 5. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1070887/.
- Garett, R. M. and Young, S. D., 2022, ‘The Impact of misinformation and health literacy on HIV prevention and service usage’, Journal of the Association of Nurses in AIDS Care, Vol. 33, No 1, pp. e1-e5. Available at: https://journals.lww.com/janac/citation/2022/02000/the_impact_of_misinformation_and_health_literacy.11.aspx.
- GCF Global, Digital Media Literacy - What is an Echo Chamber?. Available at: https://edu.gcfglobal.org/en/digital-media-literacy/what-is-an-echo-chamber/1/.
- Golder, S., Garry, J. and McCambridge, J., 2020, ‘Declared funding and authorship by alcohol industry actors in the scientific literature: a bibliometric study’, European Journal of Public Health, Vol. 30, No 6. Available at: https://academic.oup.com/eurpub/article/30/6/1193/5906182.
- Greenhalgh, S., 2019, ‘Making China safe for Coke: How Coca-Cola shaped obesity science and policy in China’, BMJ, Vol. 364. Available at: https://www.bmj.com/content/364/bmj.k5050/rapid-responses.
- Grimes, D. R., 2021, ‘Medical disinformation and the unviable nature of COVID-19 conspiracy theories’, PLOS One. Available at: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0245900.
- Gunning, D., Vorm, E., Yunyan Wang, J. and Turek, M., 2021, ‘DARPA’s explainable AI (XAI) program: A retrospective’, Applied AI Letters, Vol. 2, No 4. Available at: https://onlinelibrary.wiley.com/doi/full/10.1002/ail2.61.
- Harvey, A., 2021, ‘Combatting Health Misinformation and Disinformation: Building an Evidence Base’, Health Affairs Blog. Available at: https://www.healthaffairs.org/content/forefront/combatting-health-misinformation-and-disinformation-building-evidence-base.
- Heiss, R., Bode, L., Bradshaw, S. R., MacCarthy, M., Porter, E., Engel, E. and Gell, S., 2023, Countering Misinformation on Social Media: A Socio-Ecological Response Model, Presentation to the Information Technology & Politics Division at the Annual Meeting of the American Political Science Association (APSA), August 31-September 3, Los Angeles, USA. Available at: https://arxiv.org/pdf/2107.02775.
- Heiss, R., Nanz, A. and Matthes, J., 2023, ‘Social media information literacy: Conceptualization and associations with information overload, news avoidance and conspiracy mentality’, Computers in Human Behavior, Vol. 148. Available at: https://doi.org/10.1016/j.chb.2023.107908.
- Himmelfarb Health Sciences Library, n.d., Correcting Misinformation with Patients: Misinformation and cultural competency: Cross-cultural differences and misinformation. Available at: https://guides.himmelfarb.gwu.edu/correcting-misinformation/misinformation-and-cultural-competency.
- Hungarian Act XII of 2020 on the containment of Coronavirus (2020. évi XII. törvény a koronavírus elleni védekezésről), 2020. Available at: https://njt.hu/jogszabaly/2020-12-00-00.
- Nielsen-Bohlman, L., Panzer, A. M. and Kindig, D. A. (Eds.), 2004, Health Literacy: A Prescription to End Confusion, Institute of Medicine (US) Committee on Health Literacy, Washington (DC): National Academies Press (US). Available at: https://www.ncbi.nlm.nih.gov/books/NBK216037/.
- Ismail, N., Kbaier, D., Farrell, T. and Kane, A., 2022, ‘The Experience of Health Professionals With Misinformation and Its Impact on Their Job Practice: Qualitative Interview Study’, JMIR Form Res, Vol. 6, No 11. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9635441/.
- Jacob, C., Hausemer, P., Zagoni-Bogsch, A. and Diers-Lawson, A., 2023, The effect of communication and disinformation during the COVID-19 pandemic, European Parliament. Available at: https://www.europarl.europa.eu/thinktank/en/document/IPOL_STU(2023)740063.
- Jiang, J., Ren, X. and Ferrara, E., 2021, ‘Social media polarisation and echo chambers in the context of COVID-19: Case study’, JMIRx Med, Vol. 2, No 3. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8371575/.
- Kalichman, S. C., Eaton, L. A., Earnshaw, V. A. and Brousseau, N., 2022, ‘Faster than warp speed: early attention to COVID-19 by anti-vaccine groups on Facebook’, J Public Health (Oxf), Vol. 7, No 44, pp. e96-e102. Available at: https://pubmed.ncbi.nlm.nih.gov/33837428/.
- Karanasios, S. and Hayes, P., 2022, In disasters, people are abandoning official info for social media. Here’s how to know what to trust, PreventionWeb. Available at: https://www.preventionweb.net/news/disasters-people-are-abandoning-official-info-social-media-heres-how-know-what-trust.
- Kertysova, K., 2018, ‘Artificial intelligence and disinformation – how AI changes the way disinformation is produced, disseminated, and can be countered’, Security and Human Rights, Vol. 29, No 1-4, pp. 55-81. Available at: https://brill.com/view/journals/shrs/29/1-4/article-p55_55.xml.
- Koval, R., Dorrler, N. and Schillo, B., 2023, ‘Tobacco industry advertising: efforts to shift public perception of big tobacco with paid media in the USA’, BMJ Journals: Tobacco Control, Vol. 32, No 6. Available at: https://tobaccocontrol.bmj.com/content/32/6/801.
- Lexchin, J., Bero, L. A., Djulbegovic, B. and Clark, O., 2003, ‘Pharmaceutical industry sponsorship and research outcome and quality: systematic review’, BMJ, Vol. 326, No 7400. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC156458/.
- Lieneck, C., Heinemann, K., Patel, J., Huynh, H., Leafblad, A., Moreno, E. and Wingfield, C., 2022, ‘Facilitators and barriers of COVID-19 vaccine promotion on social media in the United States: a systematic review’, Healthcare (Basel), Vol. 10, No 2, p. 321. Available at: https://doi.org/10.3390/healthcare10020321.
- Lin, T. H., Chang, M. C., Chang, C. C. and Chou, Y. H., 2022, ‘Government-sponsored disinformation and the severity of respiratory infection epidemics including COVID-19: A global analysis, 2001-2020’, Soc Sci Med, Vol. 296. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8789386/.
- Martín-Neira, J.-I., Trillo-Dominguez, M. and Olvera-Lobo, M.-D., 2023, ‘Ibero-American journalism in the face of scientific disinformation: Fact-checkers’ initiatives on the social network Instagram’, Profesional de la información, Vol. 32, No 5. Available at: https://doi.org/10.3145/epi.2023.sep.03.
- McKee, M., Gugushvili, A., Koltai, J. and Stuckler, D., 2021, ‘Are populist leaders creating the conditions for the spread of COVID-19? Comment on "A scoping review of populist radical right parties’ influence on welfare policy and its implications for population health in Europe"’, International Journal of Health Policy and Management, Vol. 10, No 8, pp. 511-515. Available at: https://doi.org/10.34172/ijhpm.2020.124.
- MedlinePlus, 2022, What are the benefits and risks of direct-to-consumer genetic testing? National Library of Medicine. Available at: https://medlineplus.gov/genetics/understanding/dtcgenetictesting/dtcrisksbenefits/.
- MedlinePlus, 2022, What is direct-to-consumer genetic testing? National Library of Medicine. Available at: https://medlineplus.gov/genetics/understanding/dtcgenetictesting/directtoconsumer/.
- Menz, B., Modi, N., Sorich, M. and Hopkins, A. M., 2023, ‘Health disinformation use case highlighting the urgent need for artificial intelligence vigilance – Weapons of mass disinformation’, JAMA Internal Medicine 2024, Vol. 184, No 1, pp. 92-96. Available at: https://jamanetwork.com/journals/jamainternalmedicine/article-abstract/2811333.
- Muhammed, T. S. and Mathew, S. K., 2022, ‘The disaster of misinformation: a review of research in social media’, Int J Data Sci Anal, Vol. 13, No 4, pp. 271-285. Available at: https://doi.org/10.1007%2Fs41060-022-00311-6.
- Maani, N., van Schalkwyk, M. C. I., Filippidis, F. T., Knai, C. and Petticrew, M., 2022, ‘Manufacturing doubt: Assessing the effects of independent vs industry-sponsored messaging about the harms of fossil fuels, smoking, alcohol, and sugar sweetened beverages’, SSM - Population Health, Vol. 17. Available at: https://www.sciencedirect.com/science/article/pii/S2352827321002846.
- Nan, X., Wang, Y. and Thier, K., 2022, ‘Why do people believe health misinformation and who is at risk? A systematic review of individual differences in susceptibility to health misinformation’, Social Science & Medicine, Vol. 314, 115398. Available at: https://doi.org/10.1016/j.socscimed.2022.115398.
- Ng, J. Y., Liu, S., Maini, I., Pereira, W., Cramer, H. and Moher, D., 2023, ‘Complementary, alternative, and integrative medicine-specific COVID-19 misinformation on social media: A scoping review’, Integrative Medicine Research, Vol. 12, No 3. Available at: https://www.sciencedirect.com/science/article/pii/S2213422023000549.
- O’Connor, A., Gilbert, C. and Chavkin, S., 2023, ‘The food industry pays influencer dietitians to shape your eating habits’, The Washington Post, 13 September 2023. Available at: https://www.washingtonpost.com/wellness/2023/09/13/dietitian-instagram-tiktok-paid-food-industry/.
- OECD, n.d., Voluntary Transparency Reporting Framework. Available at: https://www.oecd.org/digital/vtrf/.
- OECD, 2020, Transparency, communication and trust: The role of public communication in responding to the wave of disinformation about the new Coronavirus, OECD Policy Response to Coronavirus (COVID-19). Available at: https://www.oecd.org/coronavirus/policy-responses/transparency-communication-and-trust-the-role-of-public-communication-in-responding-to-the-wave-of-disinformation-about-the-new-coronavirus-bef7ad6e/%23section-d1e325.
- OECD, 2020, Combatting COVID-19 disinformation on online platforms, OECD Policy Responses to Coronavirus (COVID-19). Available at: https://www.oecd.org/coronavirus/policy-responses/combatting-covid-19-disinformation-on-online-platforms-d854ec48/.
- Pamment, J., 2020, The EU’s Role in Fighting Disinformation: Taking Back the Initiative, Carnegie Endowment for International Peace. Available at: https://www.jstor.org/stable/pdf/resrep25788.1.pdf.
- Pan, W., Liu, D. and Fang, J., 2021, ‘An Examination of Factors Contributing to the Acceptance of Online Health Misinformation’, Frontiers in Psychology, Vol. 12. Available at: https://doi.org/10.3389/fpsyg.2021.630268.
- Pergolizzi, J. V., LeQuang, J. A., Taylor, R., Wollmuth, C., Nalamachu, M., Varrassi, G., Christo, P., Breve, F. and Magnusson, P., 2021, ‘Four pandemics: lessons learned, lessons lost’, Signa Vitae, Vol. 17, No 1, pp. 1-5. Available at: https://doi.org/10.22514/sv.2020.16.0096.
- Pian, W., Chi, J. and Ma, F., 2021, ‘The causes, impacts and countermeasures of COVID-19 “Infodemic”: A systematic review using narrative synthesis’, Information Processing & Management, Vol. 58, No 6, 102713. Available at: https://doi.org/10.1016/j.ipm.2021.102713.
- Pírková, E., 2020, Fighting Misinformation and defending free expression during COVID-19: recommendations for States, Australian Human Rights Institute. Available at: https://www.humanrights.unsw.edu.au/news/fighting-misinformation-and-defending-free-expression-during-covid-19-recommendations-states.
- Posetti, J. and Bontcheva, K., 2020, Disinfodemic – Deciphering COVID-19 disinformation, Policy brief 1, UNESCO, Paris. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000374416.
- Quinn, E. K., Fenton, S., Ford-Sahibzada, C. A., Harper, A., Marcon, A. R., Caulfield, T., Fazel, S. S. and Peters, C. E., 2022, ‘COVID-19 and Vitamin D Misinformation on YouTube: Content Analysis’, JMIR Infodemiology, Vol. 2, No 1. Available at: https://doi.org/10.2196/32452.
- Ryan-Mosley, T., 2023, ‘How generative AI is boosting the spread of disinformation and propaganda’, MIT Technology Review, 4 October 2023. Available at: https://www.technologyreview.com/2023/10/04/1080801/generative-ai-boosting-disinformation-and-propaganda-freedom-house/.
- Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119, 4.5.2016, pp. 1-88. Available at: https://eur-lex.europa.eu/eli/reg/2016/679/oj.
- Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act), PE/30/2022/REV/1, OJ L 277, 27.10.2022, pp. 1-102. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32022R2065.
- Regulation (EU) No 1169/2011 of the European Parliament and of the Council of 25 October 2011 on the provision of food information to consumers, amending Regulations (EC) No 1924/2006 and (EC) No 1925/2006 of the European Parliament and of the Council, and repealing Commission Directive 87/250/EEC, Council Directive 90/496/EEC, Commission Directive 1999/10/EC, Directive 2000/13/EC of the European Parliament and of the Council, Commission Directives 2002/67/EC and 2008/5/EC and Commission Regulation (EC) No 608/2004, OJ L 304, 22.11.2011, pp. 18-63. Available at: https://eur-lex.europa.eu/legal-content/FR/TXT/?uri=celex%3A32011R1169.
- Rocha, Y. M., de Moura, G. A., Desidério, G. A., de Oliveira, C. H., Lourenço, F. D. and de Figueiredo Nicolete, L. D., 2021, ‘The impact of fake news on social media and its influence on health during the COVID-19 pandemic: a systematic review’, Journal of Public Health, Vol. 1, No 10. Available at: https://doi.org/10.1007/s10389-021-01658-z.
- Roozenbeek, J., Culloty, E. and Suiter, J., 2023, ‘Countering misinformation: evidence, knowledge gaps, and implications of current interventions’, European Psychologist. Available at: https://econtent.hogrefe.com/doi/10.1027/1016-9040/a000492.
- Ross Arguedas, A., Robertson, C. T., Fletcher, R. and Nielsen, R. K., 2022, Echo chambers, filter bubbles, and polarisation: a literature review, Reuters Institute, University of Oxford. Available at: https://reutersinstitute.politics.ox.ac.uk/echo-chambers-filter-bubbles-and-polarisation-literature-review.
- Rubin, R., 2022, ‘When physicians spread unscientific information about COVID-19’, JAMA, Vol. 327, No 10, pp. 904-906. Available at: https://jamanetwork.com/journals/jama/fullarticle/2789369.
- Sadiq, M.T. and Saji, K.M., 2022, ‘The disaster of misinformation: a review of research in social media’, International Journal of Data Science and Analytics, Vol. 13, No 4, pp. 271-285. Available at: https://link.springer.com/article/10.1007/s41060-022-00311-6.
- Scherer, L. D. and Pennycook, G., 2020, ‘Who Is susceptible to online health misinformation?’, Am J Public Health, Vol. 110, Supplement 3. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7532320/.
- Schillinger, D., Tran, J., Mangurian, C. and Kearns, C., 2016, ‘Do sugar-sweetened beverages cause obesity and diabetes? Industry and the manufacture of scientific controversy’, Ann Intern Med, Vol. 165, No 12, pp. 865-867. Available at: https://pubmed.ncbi.nlm.nih.gov/27802504/.
- Sessa, M.G., 2023, Connecting the disinformation dots: insights, lessons, and guidance from 20 EU Member States, EU DisinfoLab. Available at: https://www.disinfo.eu/publications/connecting-the-disinformation-dots/.
- SOAS, University of London, 2023, Is disinformation during natural disasters an emerging vulnerability? Available at: https://www.soas.ac.uk/study/blog/disinformation-during-natural-disasters-emerging-vulnerability.
- Stanczyk, F. Z., Mandelbaum, R. S. and Lobo, R. A., 2022, ’Potential pitfalls of reproductive direct-to-consumer testing’, F&S Reports, Vol. 3, No 1, pp. 3-7. Available at: https://doi.org/10.1016/j.xfre.2022.01.007.
- Sternlicht, A., 2023, ‘TikTok’s hidden “Ozempic economy” where weight loss influencers trade referrals for drug discounts and cash’, Fortune, 19 September 2023. Available at: https://fortune.com/2023/09/19/inside-ozempic-tiktok-weight-loss-influencers-hidden-economy-telehealth-cash-drug-discounts-mounjaro-glp1-wegovy/.
- Stolle, L. B., Nalamasu, R., Pergolizzi, J. V., Varrassi, G., Magnusson, P., LeQuang, J., Breve, F. and the NEMA Research Group, 2020, ‘Fact vs fallacy: The anti-vaccine discussion reloaded’, Advances in Therapy, Vol. 37, pp. 4481-4490. Available at: https://link.springer.com/article/10.1007/s12325-020-01502-y.
- Suarez-Lledo, V. and Alvarez-Galvez, J., 2021, ‘Prevalence of Health Misinformation on Social Media: Systematic Review’, J Med Internet Res, Vol. 23, No 1. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7857950/.
- Suenaga, H. and Vicente, M. R., 2021, ‘Online and offline health information seeking and the demand for physician services’, European Journal of Health Economics, Vol. 3, pp. 337-356. Available at: https://link.springer.com/article/10.1007/s10198-021-01352-7.
- Sule, S., DaCosta, M. C., DeCou, E., Gilson, C., Wallace, K. and Goff, S. L., 2023, ‘Communication of COVID-19 misinformation on social media by physicians in the US’, JAMA Netw Open, Vol. 6, No 8. Available at: https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2808358.
- Tan, A.S.L. and Bigman, C.A., 2020, ‘Misinformation about commercial tobacco products on social media – implications and research opportunities for reducing tobacco-related health disparities’, Am J Public Health, Vol. 110, Supplement 3. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7532322/.
- Tasnim, S., Mahbub Hossain, M. and Mazumder, H., 2020, ‘Impact of rumours and misinformation on COVID-19 in social media’, Journal of Preventive Medicine & Public Health, Vol. 53, No 3, pp. 171-174. Available at: https://www.jpmph.org/journal/view.php?number=2079.
- Thomas, E. and Reyes, K. D., 2023, Commercial Disinformation, Institute for Strategic Dialogue. Available at: https://www.isdglobal.org/explainers/commercial-disinformation-product-service/.
- Tokojima Machado, D. F., de Siqueira, A. F. and Gitahy, L., 2020, ‘Natural stings: Selling distrust about vaccines on Brazilian YouTube’, Frontiers in Communication, Vol. 5, 577941. Available at: https://www.frontiersin.org/articles/10.3389/fcomm.2020.577941/full.
- UNESCO, 2020, European social media campaign to address disinformation on COVID-19 & #ThinkBeforeSharing. Available at: https://www.unesco.org/en/articles/european-social-media-campaign-address-disinformation-covid-19-thinkbeforesharing.
- Velo, G. and Moretti, U., 2008, ‘Direct-to-consumer information in Europe: The blurred margin between promotion and information’, British Journal of Clinical Pharmacology, Vol. 66, No 5, pp. 626-628. Available at: https://doi.org/10.1111/j.1365-2125.2008.03283.x.
- Vériter, S.L., Kaminska, M., Broeders, D. and Koops, J., 2021, Responding to the COVID-19 ‘infodemic’ – National countermeasures against information influence in Europe, The Hague Programme for Cyber Norms. Available at: https://www.thehaguecybernorms.nl/research-and-publication-posts/responding-to-the-covid-19-infodemic-national-countermeasures-against-information-influence-in-europe.
- Walter, N., Brooks, J. J., Saucier, C. J. and Suresh, S., 2021, ‘Evaluating the impact of attempts to correct health misinformation on social media: A meta-analysis’, Health Communication, Vol. 36, No 13, pp. 1776-1784. Available at: https://doi.org/10.1080/10410236.2020.1794553.
- Wang, L.L., 2023, ‘Using machine learning to verify scientific claims’, Artificial Intelligence in Science – Challenges, Opportunities and the Future of Research, OECD iLibrary, pp. 121-128. Available at: https://www.oecd-ilibrary.org/sites/ab9d235c-en/index.html?itemId=/content/component/ab9d235c-en#.
- Wang, Y., McKee, M., Torbica, A. and Stuckler, D., 2019, ‘Systematic literature review on the spread of health-related misinformation on social media’, Social Science & Medicine, Vol. 240. Available at: https://doi.org/10.1016/j.socscimed.2019.112552.
- Wardle, C. and Derakhshan, H., 2017, Information disorder: Towards an interdisciplinary framework for research and policy making, Council of Europe report DGI (2017) 09. Available at: https://edoc.coe.int/en/media/7495-information-disorder-toward-an-interdisciplinary-framework-for-research-and-policy-making.html.
- Weidner, K., Beuk, F. and Bal, A., 2020, ‘Fake news and the willingness to share: a schemer schema and confirmatory bias perspective’, Journal of Product & Brand Management, Vol. 9, No 2, pp. 181-182. Available at: https://www.researchgate.net/publication/333323027_Fake_news_and_the_willingness_to_share_a_schemer_schema_and_confirmatory_bias_perspective.
- WHO, 2020, Managing the COVID-19 infodemic: Promoting healthy behaviours and mitigating the harm from misinformation and disinformation, Joint Statement by WHO, UN, UNICEF, UNDP, UNESCO, UNAIDS, ITU, UN Global Pulse, and IFRC. Available at: https://www.who.int/news/item/23-09-2020-managing-the-covid-19-infodemic-promoting-healthy-behaviours-and-mitigating-the-harm-from-misinformation-and-disinformation.
- WHO, 2022, Infodemics and misinformation negatively affect people’s health behaviours, new WHO review finds, Press release, 1 September 2022. Available at: https://www.who.int/europe/news/item/01-09-2022-infodemics-and-misinformation-negatively-affect-people-s-health-behaviours--new-who-review-finds.
- Whyte, C., 2020, ‘Deepfake news: AI-enabled disinformation as a multi-level public policy challenge’, Journal of Cyber Policy, Vol. 5, No 2, pp. 199-217. Available at: https://www.tandfonline.com/doi/abs/10.1080/23738871.2020.1797135.
- Willis, E. and Delbaere, M., 2022, ‘Patient influencers: the next frontier in direct-to-consumer pharmaceutical marketing’, Journal of Medical Internet Research, Vol. 24, No 3. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8924782/.
- Wong, C., 2024, ‘Measles outbreaks cause alarm: what the data say’, Nature, News Explainer. Available at: https://www.nature.com/articles/d41586-024-00265-8.
- Wood, B., Williams, O., Nagarajan, V. and Sacks, G., 2021, ‘Market strategies used by processed food manufacturers to increase and consolidate their power: a systematic review and document analysis’, Globalisation and Health, Vol. 17, No 17. Available at: https://globalizationandhealth.biomedcentral.com/articles/10.1186/s12992-021-00667-7.
- Zielinski, C., 2021, ‘Infodemics and infodemiology: a short history, a long future’, Pan American Journal of Public Health. Available at: https://doi.org/10.26633/RPSP.2021.40.
- Żuk, P. and Żuk, P., 2020, ‘Right-wing populism in Poland and anti-vaccine myths on YouTube: Political and cultural threats to public health’, Global Public Health, Vol. 15, No 6, pp. 790-804. Available at: https://pubmed.ncbi.nlm.nih.gov/31964228/.
- ANNEX I – Stakeholders interviewed