Science fiction movies like “Blade Runner” and “The Terminator” have defined the perception of artificial intelligence within popular culture. For most people, the term AI conjures up images of a dystopian future dominated by humanoid robots that have taken over the world. This common conception leads to the dismissal of the technology as impossible, or at least far off in the future. Few people realize that we are already entering a world dominated by AI, and it’s nothing like “The Terminator.”

The actual risks posed by artificial intelligence have nothing to do with killer robots; they relate to the machine-learning algorithms that recommend content on the internet. These algorithms have radically altered the way we receive information, which has resulted in the degradation of the truth and, in turn, of our democracy.

When you endlessly scroll through the Instagram explore page or shamelessly watch that 10th YouTube video from the sidebar, you are experiencing the work of an AI system. It’s no fluke that this recommended content is so compelling. Billions of dollars and countless hours of work have gone into the development of sophisticated algorithms specifically designed to attract your attention. Almost every information-sharing website on the internet utilizes some version of this technology.

YouTube’s recommendation system, for instance, was developed by Google’s artificial intelligence team, Google Brain. The technology is a deep-learning AI built on roughly one billion parameters, trained using hundreds of billions of examples and fed signals such as watch history and user demographics.
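
To make that less abstract, here is a deliberately tiny sketch in Python of how a learned ranking model of this kind works. To be clear, this is not YouTube’s code: the feature names, the network size and the random weights below are all invented placeholders standing in for a system with a billion trained parameters. But the basic shape is the same: turn user and video signals into numbers, score each candidate video and surface the highest scorers.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input signals: overlap with watch history, demographic
# match and video freshness. Real systems use vastly more features.
def featurize(user, video):
    return np.array([
        len(set(user["watched_topics"]) & set(video["topics"])),
        1.0 if user["age_group"] == video["target_age_group"] else 0.0,
        video["hours_since_upload"] / 24.0,
    ])

# A tiny two-layer network standing in for a model with ~1 billion learned
# parameters; these random weights are placeholders, not trained values.
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
w2, b2 = rng.normal(size=8), 0.0

def predicted_watch_time(features):
    hidden = np.maximum(features @ W1 + b1, 0.0)  # ReLU hidden layer
    return hidden @ w2 + b2                       # predicted minutes watched

def recommend(user, candidates, k=3):
    # Rank every candidate video by predicted watch time; keep the top k.
    return sorted(candidates,
                  key=lambda v: predicted_watch_time(featurize(user, v)),
                  reverse=True)[:k]

# Example with made-up data:
user = {"watched_topics": ["politics"], "age_group": "18-24"}
videos = [
    {"topics": ["politics"], "target_age_group": "18-24", "hours_since_upload": 5},
    {"topics": ["cooking"], "target_age_group": "35-44", "hours_since_upload": 48},
]
print(recommend(user, videos, k=1))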

This level of complexity is utterly incomprehensible to the human brain, but even if it were understandable, the AI changes entirely from one moment to the next, constantly building on itself and learning from the vast amounts of data it steadily collects. The algorithm is essentially boundless, a perpetually moving target.

Estimates show that this inscrutable algorithm is responsible for recommending around 70% of the videos watched on YouTube. Artificial intelligence largely controls what people watch.

The algorithm is designed to optimize a singular metric: watch time. That’s it. The longer people watch videos, the longer they watch ads and the more ad space YouTube can sell. It’s a rather simple objective, but the algorithm’s cold, calculated pursuit of watch time has created an environment primed for disinformation to run rampant.
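
To see how single-minded that objective is, consider a hypothetical toy feedback loop in Python. The video names and watch times below are invented for illustration; the point is structural. Notice that truth appears nowhere in the update rule. Only minutes watched do.

# Running estimates of how long each video keeps a user watching.
watch_time_estimates = {"news_clip": 2.0, "conspiracy_vid": 2.0, "cat_video": 2.0}

def update(video_id, observed_minutes, lr=0.3):
    # Nudge the estimate toward what the user actually watched.
    est = watch_time_estimates[video_id]
    watch_time_estimates[video_id] = est + lr * (observed_minutes - est)

# Suppose sensational content reliably holds attention longer:
for _ in range(10):
    update("conspiracy_vid", observed_minutes=9.0)
    update("news_clip", observed_minutes=3.0)

# The system now favors whatever kept eyes on the screen longest.
print(max(watch_time_estimates, key=watch_time_estimates.get))  # conspiracy_vid

After a handful of iterations, the sensational video wins the ranking, not because it is accurate, but because accuracy was never part of the objective.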

The technology has no capability of understanding the greater effects of the videos it recommends. The content of each video is valued equally. The algorithm unflinchingly feeds political lies to QAnon supporters and vaccine skepticism to COVID-19 deniers, driving viewers further into their respective echo chambers. Viral videos, or those with high rates of engagement, are prioritized and can spread misinformation past their initial target audience or bubble. Videos claiming the Earth is flat first grab a user’s attention; then the algorithm sends them down an endless, spiraling rabbit hole of fallacies. Algorithms are the messengers that deliver us the information we often accept as fact, but the message may be a lie.

The media ecosystem propagated by inhuman recommendation algorithms can easily be weaponized by bad-faith actors, and there are clear examples of such tactics being used for anti-democratic ends. In the 2016 United States presidential election, Russian operatives created fake social media accounts with the goal of exacerbating political polarization and increasing social discord. These accounts pushed provocative and misleading statements that intentionally targeted delicate subjects, particularly issues regarding race. Controversial, highly engaging content like this is exactly what recommendation algorithms favor, and there is no doubt the technology helped circulate Russian disinformation, which ultimately affected the results of the election.

Former President Donald Trump himself triggered an anti-democratic movement on the internet, one helped along by virality and AI algorithms. Since the beginning of his political career, he has continuously fanned the flames of conspiracy theories and supplied the public with “alternative facts.” After his loss in the 2020 presidential election, the former president’s “alternative facts” took on an anti-democratic pitch. He spread baseless accusations of widespread voter fraud that, he claimed, amounted to a stolen election. His supporters, within their insular communities bolstered by the positive feedback loop of recommendation algorithms, were fed nothing but these “alternative facts,” and they ultimately accepted the former president’s narrative as reality. A certain subset of these people believed the lie so strongly that they attacked the United States Capitol in an attempt to halt the certification of the presidential election.

The algorithm will seek to reinforce a user’s opinions rather than challenge them, no matter how fallacious those opinions may be. The truth is irrelevant to the algorithm’s programmed goal. Without truth, however, the ideals of freedom and democracy are unattainable. Voters must be able to make informed decisions based on facts; otherwise, we will fail to address the most pressing issues facing our nation.

This past week, the Biden administration pressured Facebook to crack down on the vast amount of COVID-19 misinformation on its platform. When asked about the company’s role in spreading this type of content, President Joe Biden said, in rather stark terms, “they’re killing people.” While the statement was admittedly simplistic, and Biden later walked it back, his words accurately reflect the dangers of Facebook’s unhealthy media ecosystem. Public health information is a matter of life and death.

Facebook’s vice president of integrity, Guy Rosen, responded in a blog post, citing facts and figures meant to convey the breadth of the company’s effort to address COVID-19 misinformation: chiefly, the removal of 18 million posts and the fact-checking of 167 million more. While these numbers sound impressive and are undoubtedly a step in the right direction, Facebook’s efforts fail to match the scope of the issue.

The issue with Facebook (and social media platforms like it) is structural, and it revolves around the recommendation algorithm. Although there are 18 million fewer instances of COVID-19 misinformation, the mechanisms that allow this type of content to spread remain intact.

The bottom line is that social media companies have no incentive to truly address the issues posed by their technologies. Our attention is the commodity, so the longer we stare at screens, the bigger the profit margin. The same algorithm responsible for proliferating misinformation is equally responsible for making billions of dollars, so until the priorities of social media companies are fundamentally rearranged, our world will sink further into a post-truth reality.

The true danger of artificial intelligence has nothing to do with humanoid robots that develop a will of their own and revolt against their tyrannical creators. AI is dangerous due to its merciless application of ill-conceived algorithms without concern for the wreckage left in its wake. In its dogged pursuit of optimizing watch time, artificial intelligence poses an existential risk to democratic ideals.

Brian Byrne is concerned about the role AI plays in blurring the line between fact and fiction.
