Lesson 6
Algorithms, Bots, and Deepfakes: How worried should we be about technology?
Big Question:
Understanding the role of tech in the spread of disinformation, how can users be cautious yet remain optimistic about technology?
Against the backdrop of an ongoing “techlash” (a backlash against digital technologies over worsening problems of misinformation, privacy, and digital addiction, among others), is there a reason to remain optimistic about the future of humankind? Is technology really going to make the world a better place? In this lesson, we will dissect the design of social media platforms to understand why manipulative content thrives on them, look into AI-generated fake news, and think about counter-disinformation solutions through technology and beyond.
Lesson Overview
By the end of this lesson, the student will be able to…
- Explain how modern technologies like social media contribute to the spread of mis-/disinformation
- Reflect on their own relationship with media technologies
- Adopt different digital hygiene habits to minimize the harm of mis-/disinformation
Keywords:
Algorithms, Filter bubbles, Microtargeting, Bots, Astroturfing, Artificial Intelligence (AI), Deepfake, Moral panics, Techlash, Big Tech, Digital hygiene, Technological determinism
Duration:
90 minutes
Materials:
- Slide deck
- Explainer video
- Handouts
- Social media and disinformation
- Performance task: Explain X to Y
- Worksheet
Preparation:
Prior to the session, instruct the students to watch the following video explainers: “Should we be worried about technology?” (The Economist, 2021) and “Real or Fake? How can one identify a deepfake?” (GMA Integrated News, 2023).
Lesson Proper
Begin the lesson with a human spectrogram activity. Lay colored tape across an open floor to serve as the spectrogram. Mark one end of the tape “strongly agree” and the other end “strongly disagree”. Read one statement at a time and ask the students to physically position themselves along the spectrogram, based on the extent to which they agree or disagree with the statement. Below are statements you can use:
- Digital technologies make the world a better place.
- AI-created disinformation will only get better, and there will be no way to solve it.
- I feel that digital platforms know me better than I know myself.
- If social media algorithms are fixed, our disinformation problem will be fixed.
Between each statement and positioning, ask a few participants to explain why they chose their positions. Select different students each time (to get the most people to talk) and pick people on opposite ends of the spectrum. You may also allow the students to rethink and change their positions after hearing their peers’ viewpoints.
Introduce the lesson topic and objectives and invite students to think about the ways in which modern technologies affect their own lives and society at large. More specifically, how do social media and other digital technologies today contribute to the spread of misinformation?
As you discuss social media, artificial intelligence (AI), and their role in misinformation, invite the students to reflect on their own media use and habits. Allow them to share their experiences with algorithms, bots, and AI. If possible, ask them to access their phones while in class and let them observe how social media algorithms and AI tools work on their own devices.
You may watch the following videos to learn more about how algorithms, bots, and other features of social media platforms enable the spread of mis-/disinformation:
- Big Think (2018, December 19). How news feed algorithms supercharge confirmation bias | Eli Pariser | Big Think [Video]. YouTube. https://www.youtube.com/watch?v=prx9bxzns3g
- DW News (2018, April 23). Hidden code: Algorithms in social networks | DW English [Video]. YouTube. https://www.youtube.com/watch?v=OuU3UfRM2pE
- DW News (2018, April 26). Manipulative social bots | DW English [Video]. YouTube. https://www.youtube.com/watch?v=e14aK8s4QIA
Social media and Misinformation
While false and misleading stories are hardly new and, as discussed in Lesson 3, misinformation has always been part of our history, today’s media and information landscape is vastly different from before. In the age of digital platforms, everyone has the power to create media content. Now more than ever, it is easier for anyone to produce, distribute, and even profit from misinformation.
The same technologies that have given people access to more information have also made it more time-consuming and confusing for people to determine its quality and source. Social media platforms, which have become a primary choice for entertainment and information for many Filipinos, have distinct features that make them fertile ground for disinformation. Here are the basics:
Do you ever wonder how and why a specific set of content ends up in your social media feed?
Social media algorithms determine the content that users see on social media platforms, often based on what users like, share, and comment on. Because these algorithms are designed to curate content that aligns with each user’s preferences and digital behaviors, they can amplify sensational, controversial, or false content when users frequently interact with this kind of material. They can also foster a digital environment referred to as a filter bubble, where users only connect with like-minded individuals, leading to confirmation bias, polarization, and a potential distortion of reality.
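To make this dynamic concrete, here is a highly simplified, purely illustrative sketch. It is not any platform’s actual algorithm; the posts, counts, and weights are invented. It shows how ranking a feed by engagement alone can push the most provocative item to the top:

```python
# Toy illustration (not a real platform's algorithm): an engagement-based
# ranking that tends to surface the most provocative posts.
posts = [
    {"title": "City council passes budget", "likes": 40, "shares": 5, "comments": 8},
    {"title": "SHOCKING claim about vaccines!", "likes": 900, "shares": 700, "comments": 450},
    {"title": "Local library extends hours", "likes": 25, "shares": 2, "comments": 3},
]

def engagement_score(post):
    # Shares and comments are weighted more heavily than likes,
    # mirroring how platforms prize "active" engagement.
    return post["likes"] + 3 * post["shares"] + 2 * post["comments"]

# Sort the feed so the highest-engagement post appears first.
feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(engagement_score(post), post["title"])
```

Notice that nothing in the scoring function checks whether a post is true; it only measures how strongly people reacted, which is exactly the gap disinformation exploits.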
How do politicians maximize social media to spread disinformation?
One way to ensure that disinformation reaches its target audiences is through microtargeting on social media. Disinformation peddlers tailor ads and content to specific demographics, locations, or behaviors, aiming to increase engagement. This way, personalized content delivery may lead to a skewed presentation of information that echoes target users’ existing beliefs. It can leave individuals more susceptible to disinformation that aligns with their viewpoints.
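As a toy illustration of the idea, microtargeting can be thought of as filtering a user database down to only the profiles a campaign wants to reach. The user records and targeting rule below are entirely invented for this example:

```python
# Hypothetical microtargeting sketch: select only the users whose profile
# data matches a campaign's chosen audience. All data here is made up.
users = [
    {"name": "Ana", "age": 62, "city": "Cebu", "interests": ["gardening", "news"]},
    {"name": "Ben", "age": 19, "city": "Manila", "interests": ["gaming", "politics"]},
    {"name": "Carla", "age": 23, "city": "Manila", "interests": ["politics", "music"]},
]

def matches_audience(user):
    # Target young adults in Manila who engage with political content.
    return (
        user["city"] == "Manila"
        and 18 <= user["age"] <= 30
        and "politics" in user["interests"]
    )

targeted = [u["name"] for u in users if matches_audience(u)]
print(targeted)  # ['Ben', 'Carla']
```

Everyone outside the filter never sees the ad at all, which is why microtargeted disinformation is hard for outsiders (including fact-checkers) to detect.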
How does online disinformation create an illusion of truth?
Bots are automated accounts on social media that interact with users or other bots. Bots are used to engage with disinformative content to trick social media algorithms into perceiving the content as popular, leading to its wider dissemination. Bots can also be used to disseminate large volumes of biased or misleading content rapidly, creating an illusion of consensus or credibility, also referred to as astroturfing. This can manipulate public opinion and influence decision-making.
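A minimal sketch, with made-up numbers, of how a botnet can manufacture the appearance of popularity that astroturfing relies on:

```python
# Toy simulation (purely illustrative, not based on real platform data):
# a small botnet repeatedly engages with one post so that a ranking
# system mistakes coordinated activity for genuine popularity.
engagements = {"organic_post": 120, "astroturfed_post": 15}

NUM_BOTS = 300
for bot_id in range(NUM_BOTS):
    # Each automated account adds one like and one share to the target post.
    engagements["astroturfed_post"] += 2

# The manufactured post now looks far more "popular" than the organic one.
ranked = sorted(engagements, key=engagements.get, reverse=True)
print(ranked)       # ['astroturfed_post', 'organic_post']
print(engagements)  # {'organic_post': 120, 'astroturfed_post': 615}
```

The key point of the sketch: an engagement-counting system cannot tell 300 bots from 300 people, so coordinated fakery and genuine consensus look identical from the outside.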
To understand how disinformation works, it is not enough to know how certain pieces of content twist facts and hook our attention. We must also be aware of how the technologies we use are designed in ways that allow such content to thrive. The Center for Humane Technology (2021) writes: “TikTok isn’t addictive just because creators are funny; it’s addictive because it uses one of the most sophisticated persuasive algorithms on the planet to choose videos that will keep you watching.”
Sources:
- Ünver, A. (2023). The Role of Technology: New Methods of Information, Manipulation and Disinformation. EDAM. https://edam.org.tr/en/cyber-governance-digital-democracy/the-role-of-technology-new-methods-of-information-manipulation-and-disinformation
- Center for Humane Technology (2021). Persuasive Technology: How does technology use design to influence my behavior? https://www.humanetech.com/youth/persuasive-technology#question-2
Watch these explainer videos that discuss what deepfakes are, how they are produced, and the threats and potential risks they bring. After watching, discuss with the students their insights about AI and disinformation. Use the following guide questions:
- Have you tried using AI? If yes, for what purposes? How helpful is it to you?
- Do you find it easy to distinguish deepfake from real images and videos?
- Do the benefits of AI outweigh the harms?
Explainer videos
- GMA Integrated News (2023, November 30). Real or Fake? How can one identify a deepfake? | Facts Talk [Video]. YouTube. https://www.youtube.com/watch?v=T3MdMYYkoTE
- Al Jazeera English (2021, June 21). What are deepfakes and are they dangerous? | Start Here [Video]. YouTube. https://www.youtube.com/watch?v=pkF3m5wVUYI
Deepfakes and AI-generated misinformation
Deepfake technologies are becoming increasingly familiar to the public through fun face-swap apps that let users swap their faces with those of celebrities, or animate old photos of deceased relatives to make them “look alive”. While these seem to be harmless and fun ways to enjoy Artificial Intelligence (AI) technologies, bad actors are using them for more sinister purposes, like generating false and misleading audiovisual information.
Besides generating realistic-looking fake videos and images, AI is also being used to automate the creation of text-based disinformative content. According to NewsGuard, an organization that tracks misinformation, websites that host AI-created false articles increased by more than 1,000 percent in 2023, ballooning from 49 sites to more than 600 (Verma, 2023). AI is being described as a “misinformation superspreader” because of the sheer volume of content that these technologies can automatically generate, in a variety of formats that become more and more difficult to detect.
From discussing the role of social media platforms in the spread of misinformation, shift to a broader discussion of the relationship between humans and technology. Here you will reframe the question so that it is no longer about whether technology is all good or all bad. We already know it can be both; this is the point of the widely affirmed idea of technology as a double-edged sword.
Watch this explainer video from The Economist that traces how humans have developed a kind of love-hate relationship with technology across history. Use the following guide questions:
- What’s a technology from a science-fiction movie/ book that you want to have or experience in your lifetime?
- What’s a technology from a science-fiction movie/ book that you fear or wish you will not have to live with?
- Cite examples of fears/anxieties and examples of perks/joys of people about some of today’s technologies like ride-hailing apps (e.g., Grab, Angkas), mobile wallet apps (e.g., GCash, Maya), and dating apps (e.g., Tinder, Bumble).
- Why are we seeing a shift from optimism to pessimism about technology in recent years?
Love-Hate Relationship with Technology
Writing, printing, and the flow of knowledge
- </3: For most of human history, people shared ideas and culture through speech and song. The advent of writing, and later of printing technologies, led to the gradual neglect of many oral traditions.
- </3: Philosophers feared that because of writing, content would be separated from its source; hence, information can be misinterpreted easily or be twisted for ulterior motives.
- <3: The invention of the printing press by Johannes Gutenberg is arguably one of the greatest and most influential inventions in human history. Shifting away from handwritten books to printed ones, this technology allowed information to be delivered accurately and at speed.
- <3: As the number of newspapers, journals, and books increased, so did people’s literacy levels. As literary texts became much more readily available, more people took interest in reading and education.
Social media and activism
- <3: In the early 2010s, a wave of pro-democracy uprisings called the Arab Spring took place in the Middle East and North Africa. Many activists used Facebook and Twitter to organize, amplify their demands, and raise local and global awareness of the ongoing events. Dubbed “social media revolutions”, the Arab Spring uprisings brought about a lot of optimism about the power of social media to break down barriers and spread democracy.
- </3: A few years after the initial enthusiasm about the Arab Spring and the promise of social media as a force for good, changes were short-lived and many countries in the Arab region collapsed into civil wars and fell back into repressive governments. Social media platforms were used to monitor activists and suppress dissent.
- <3: Activists around the world continue to benefit from social media, using hashtags and trending topics to campaign for their causes (e.g., #MeToo and #BlackLivesMatter); online groups and messaging apps for organizing events; and video-streaming for real-time coverage of protest activities that do not appear on mainstream outlets.
- </3: Social media could indeed spotlight issues that might otherwise never cross people’s radar. However, its pitfalls include oversimplification of complex issues and failure to bring about actual change on the ground. “Slacktivism” is a term coined to refer to low-effort online activism that does not lead to meaningful engagement or real-world impact.
Our relationship with technology swings like a pendulum between love and hate
Understanding its role in the spread of disinformation, how can we be cautious yet remain optimistic about technology?
Techlash and moral panics over technology
Technologies are indeed socially disruptive forces; they can radically influence our personal and social lives. Right now, we are seeing growing concern over misinformation, privacy, digital addiction, and a host of other contemporary “moral panics”: widespread feelings of fear and concern over the perceived threats of technology to the values and norms of society.
These moral panics feed into today’s so-called “Techlash”, a backlash against technology and Big Tech, the collective term that refers to the most dominant and largest tech companies in the world such as Alphabet (Google), Amazon, Apple, Meta (Facebook), and Microsoft.
The pessimism and the Techlash are good in the sense that they trigger important conversations in our society. But if we are to learn the lessons of history, we should not overdo our pessimism and surrender to the idea that this is all terrible and there’s nothing we can do about it. Knowing the history of our relationship with technology can guide us in how we should deal with the problem of mis- and disinformation in the Digital Era.
Sources:
- Bowman, N., & Cohen, E. (2019). Technologies of Mass Deception? War of the Worlds, Twitter, and a History of Fake and Misleading News in the United States. In E. Downs (Ed.), The Dark Side of Media and Technology: A 21st Century Guide to Media and Technological Literacy (pp. 25-36). NY: Peter Lang.
- The Economist (2021, August 6) Should we be worried about technology? [Video]. YouTube. https://www.youtube.com/watch?v=Zeza83WlXxg
Wrap up the session with an optional hands-on activity where the students can work with their devices to analyze and adjust their own app settings. Do this exercise with the consent of the students.
Taking control of technology
Understanding the power of technology is a crucial step to finding the right solutions to disinformation. As technologies continue to be more and more complex, users must become more conscious of the ways that technology affects their lives. Here are some digital hygiene habits that we can practice to minimize risks of tech-enabled disinformation (Center for Humane Technology, 2021):
- Setting boundaries
- Turn off notifications and alerts that are designed to draw your attention back to your phone
- Reduce or remove harmful apps
- Eliminate outrage and clickbait by unfollowing accounts and groups that intentionally trigger hate
- Staying balanced and mindful
- Follow voices you disagree with and be open to new perspectives
- Think first about your intentions before opening an app
- Be compassionate and do not be so quick to publicly argue or block people you disagree with
Besides improving our digital hygiene, digital rights advocates also enjoin the public to participate in conversations and initiatives that put pressure on tech companies to ensure that digital tools and platforms are designed to bring more good than harm to society.
Source: Center for Humane Technology (n.d.) Take Control Toolkit. https://www.humanetech.com/take-control
While this lesson has focused on the role of technology in the spread of mis-/disinformation, it is important to emphasize that mis-/disinformation is not all about tech. At the end of the lesson, remind the students not to fall into the kind of thinking called “technological determinism”, which views tech as the absolute driving force of culture and society.
Scholars remind us that to solve disinformation, we must not limit ourselves to tech-based solutions. Without downplaying the power of technology, we should not fall into the kind of thinking that puts technology on a pedestal, assuming that it is the absolute driving force of culture and society. This perspective is referred to as “technological determinism”. It assumes that disinformation is primarily a tech issue and hence needs only tech-based solutions. It forgets that disinformation is complex and involves political structures, economic incentives, psychology, and cultural factors that interact with the dynamics of technology.
While there may be many reasons to worry about the risks posed by digital technology, we should also not run out of reasons to remain optimistic about finding interdisciplinary and intersectoral solutions to disinformation.
Play the Lesson 6 video explainer to recap the main takeaways of the lesson. For possible next topics to discuss in class, check out the complete list of PH Disinfo Hub lessons here.
Performance Task and Other Activities
Explain X to Y
Most young people dive into a new device without going through its user manual. Most people create accounts in apps without reading their lengthy terms and conditions. Understanding how technology works can be easy or difficult depending on a person’s level of interest, prior knowledge, and exposure to the digital world. But surely, people of all ages, walks of life, and backgrounds will benefit from learning how everyday technologies work and influence our personal and social lives.
Your task is to explain a given topic that is covered in this lesson to a relative or acquaintance of yours who is either a child between 6 and 12 years old or a senior citizen who is 60 years old and above. Choose someone who has experience in using or trying social media and is curious enough and interested to participate in a digital literacy activity.
Record a 5-minute video of a conversation between you and your partner about the topic assigned to you. The conversation has to demonstrate your skill in explaining the topic in a way that is creative, understandable, and personally relevant to your partner. Meanwhile, your partner has to express their understanding of and interest in your topic by sharing their own thoughts or questions. You may use visual aids, props, songs, storytelling, or any strategy you find appropriate.
Topics:
- How are social media algorithms and filter bubbles connected to disinformation?
- How are bots used to spread disinformation?
- How are deepfakes created?
- What is techlash, and why does it matter?
- What are moral panics, and how should they be dealt with?
Download the Performance Task rubrics here.
Main readings:
- The Economist (2021, August 6) Should we be worried about technology? [Video]. YouTube. https://www.youtube.com/watch?v=Zeza83WlXxg
- GMA Integrated News (2023, November 30). Real or Fake? How can one identify a deepfake? | Facts Talk [Video]. YouTube. https://www.youtube.com/watch?v=T3MdMYYkoTE
Additional references:
- Al Jazeera English (2021, June 21). What are deepfakes and are they dangerous? | Start Here [Video]. YouTube. https://www.youtube.com/watch?v=pkF3m5wVUYI
- Bowman, N., & Cohen, E. (2019). Technologies of Mass Deception? War of the Worlds, Twitter, and a History of Fake and Misleading News in the United States. In E. Downs (Ed.), The Dark Side of Media and Technology: A 21st Century Guide to Media and Technological Literacy (pp. 25-36). NY: Peter Lang.
- Center for Humane Technology (2021). Persuasive Technology: How does technology use design to influence my behavior? https://www.humanetech.com/youth/persuasive-technology#question-2
- Center for Humane Technology (n.d.) Take Control Toolkit. https://www.humanetech.com/take-control
- DW News (2018, April 23). Hidden code: Algorithms in social networks | DW English [Video]. YouTube. https://www.youtube.com/watch?v=OuU3UfRM2pE
- DW News (2018, April 26). Manipulative social bots | DW English [Video]. YouTube. https://www.youtube.com/watch?v=e14aK8s4QIA
- Grohmann, R., & Corpus Ong, J. (2024). Disinformation-for-Hire as Everyday Digital Labor: Introduction to the Special Issue. Social Media + Society, 10(1). https://doi.org/10.1177/20563051231224723
- Tactical Tech (2021). Hide and Seek on your Feed: How algorithms influence your information. https://datadetoxkit.org/en/wellbeing/filterbubbles/
- Ünver, A. (2023). The Role of Technology: New Methods of Information, Manipulation and Disinformation. EDAM. https://edam.org.tr/en/cyber-governance-digital-democracy/the-role-of-technology-new-methods-of-information-manipulation-and-disinformation
- Verma, P. (2023, December 17). The rise of AI fake news is creating a ‘misinformation superspreader’. The Washington Post. https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/
Use this lesson in the Grade 11/12 subject Media and Information Literacy (MIL), and align it with the following learning competencies:
- Shares to class media habits, lifestyles and preferences. MIL11/12IMIL-IIIa-4
- Demonstrates ethical use of information. MIL11/12IL-IIIc-9
- Demonstrates proper conduct and behavior online (netiquette, virtual self). MIL11/12LESI-IIIg-18
- Enumerates opportunities and challenges in media and information. MIL12LESI-IIIg-23
- Realizes opportunities and challenges in media and information. MIL11/12OCP-IIIh-24
- Evaluates current trends in media and information and how it will affect/how they affect individuals and the society as a whole. MIL11/12CFT-IIIi-26
Use this lesson in the Grade 11/12 subject Empowerment Technologies, and align it with the following learning competencies:
- Compare and contrast the nuances of varied online platforms, sites, and content to best achieve specific class objectives or address situational challenges. CS_ICT11/12-ICTPT-Ia-b-1
- Apply online safety, security, ethics, and etiquette standards and practice in the use of ICTs as it would relate to their specific professional tracks. CS_ICT11/12-ICTPT-Ia-b-2
- Share anecdotes of how he/she has used ICTs to be part of a social movement, change, or cause to illustrate aspects of digital citizenship. CS_ICT11/12-ICTPT-IIl-15
- Create a reflexive piece or output using an ICT tool, platform, or application of choice on the learning experience undergone during the semester. CS_ICT11/12-ICTPT-IIt-23
Use this lesson in the Grade 11/12 subject Understanding Culture, Society, and Politics (UCSP), and align it with the following learning competencies:
- Identify new challenges faced by human populations in contemporary societies. UCSP11/12CSCIIh-33
- Describe how human societies adapt to new challenges in the physical, social, and cultural environment. UCSP11/12CSCIIi-34
For school-based student governments, youth-based organizations, or youth councils (Sangguniang Kabataan):
Are you interested in hosting a public interactive exhibition in your school or community that looks at the role of technology in misinformation? At the Philippine MIL Summit held in October 2023 at the University of the Philippines Diliman, Out of The Box partnered with Berlin-based international NGO Tactical Tech to set up “The Glass Room Misinformation Edition” exhibit. Originally launched in 2020 and updated in 2022, Tactical Tech’s Glass Room exhibit explores how social media and the web have changed the way we read information and react to it. The exhibition consists of posters, video animations, and apps.
If you are interested in hosting this wonderful resource in your community, you may contact Out of The Box at info@ootbmedialiteracy.org for your queries.
© This resource is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This means you are free to copy and redistribute the material, remix, transform or build upon it so long as you attribute Sigla Research Center and Out of The Box Media Literacy Initiative as the original source. View detailed license information at https://creativecommons.org/licenses/by-nc-sa/4.0/