THIS MONTH’S BEST READS
Making Sense of the Facebook Menace, The New Republic, 5 January 2021
The story of our interactions with Facebook—how Facebook affects us and how we affect Facebook—is fascinating and maddeningly complicated. Scholars around the world have been working for more than a decade to make sense of the offline social impact of the platform’s propaganda functions. And we are just beginning to get a grasp of it. The leaders of Facebook seem not to have a clue how it influences different people differently and how it warps our collective perceptions of the world. They also seem to lack anything close to a full appreciation of how people have adapted Facebook to serve their needs and desires in ways the engineers never intended.
Researchers find machine learning models still struggle to detect hate speech, VentureBeat, 6 January 2021
Detecting hate speech is a task even state-of-the-art machine learning models struggle with. That’s because harmful speech comes in many different forms, and models must learn to differentiate each one from innocuous turns of phrase. Historically, hate speech detection models have been evaluated by measuring their performance on test data with metrics like accuracy. But this makes it tough to identify a model’s weak points and risks overestimating a model’s quality due to gaps and biases in hate speech datasets.
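The evaluation gap described here is easy to see in miniature. Below is a hypothetical Python sketch (not from the article or the research it covers; the labels are invented for illustration) showing how a headline accuracy number can hide near-total failure on the rare hateful class, while per-class metrics expose the weak point.

```python
# Hypothetical illustration: overall accuracy can mask a hate speech
# model's failure on the rare positive class. All labels are invented.
from sklearn.metrics import accuracy_score, classification_report

# 1 = hateful, 0 = innocuous; hateful examples are scarce, as in many datasets.
y_true = [0] * 90 + [1] * 10
# A model that labels almost everything innocuous still scores well overall.
y_pred = [0] * 90 + [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]

print(accuracy_score(y_true, y_pred))         # 0.92 -- looks strong
print(classification_report(y_true, y_pred))  # recall on class 1 is only 0.20
```

This is one reason researchers in this space have been moving toward finer-grained, behavior-based test suites rather than a single aggregate score.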
Facebook, Twitter could face punishing regulation for their role in U.S. Capitol siege, Democrats say, The Washington Post, 8 January 2021
In the months to come, some Democrats now are promising to use their powerful new perches — and their control of the White House and Congress starting in a matter of days — to proffer the sort of tough new laws and other punishments that tech giants have successfully fended off for years. Their seething anger could result in major repercussions for the industry, opening the door for a wide array of policy changes that could hold Facebook, Google and Twitter newly liable for their missteps.
This explains how social media can both weaken and strengthen democracy, The Washington Post, 7 January 2021
One important question is whether and how democratic societies will use legal regulation to limit this emerging threat. As this debate continues to unfold, an understanding of how exactly social media threatens — and supports — democracy will be crucial in making sure policy changes have their desired effect. Democracies must be aware that any attempt to regulate the Internet may veer dangerously close to the censorship they deride in autocracies. For example, it is probably no accident that Russia was among the first to copy Germany’s new law threatening fines for social media companies that fail to adequately restrict hate speech online.
AI Weekly: The future of tech policy after an attempted coup, VentureBeat, 8 January 2021
Video of the U.S. Capitol breach shows that Trump supporters were permitted to violate multiple federal laws, desecrate the people’s house, undermine national security, and violently oppose the largest exercise of the right to vote since the founding of this nation. People died. Then the mob went home. It’s one of the most public displays of white privilege I’ve seen in my lifetime.
In conversation with Trisha Ray, Associate Fellow at the Observer Research Foundation, Analytics India Magazine, 15 January 2021
Ray’s research focuses on Geotech, the implications of emerging technology, AI governance and norms, and Lethal Autonomous Weapons Systems (LAWS). Analytics India Magazine caught up with Ray to understand India’s position on autonomous weapons and her views on LAWS.
AI and International Stability: Risks and Confidence-Building Measures, CNAS, 12 January 2021
Militaries around the world believe that the integration of machine learning methods throughout their forces could improve their effectiveness. From algorithms to aid in recruiting and promotion to those designed for surveillance and early warning, to those used directly on the battlefield, applications of artificial intelligence (AI) could shape the future character of warfare. These uses could also generate significant risks for international stability. These risks relate to broad facets of AI that could shape warfare, limits to machine learning methods that could increase the risks of inadvertent conflict, and specific mission areas, such as nuclear operations, where the use of AI could be dangerous. To reduce these risks and promote international stability, we explore the potential use of confidence-building measures (CBMs), constructed around the shared interests that all countries have in preventing inadvertent war. Though not a panacea, CBMs could create standards for information-sharing and notifications about AI-enabled systems that make inadvertent conflict less likely.
How Deepfakes are Wreaking Havoc on Democracy, Analytics Insight, 4 January 2021
The biggest threat deepfake videos are likely to pose is that they add another layer of distrust to legitimate video and news. Knowing that deepfakes exist is somewhat destabilising in itself, as is evident in politicians already claiming that authentic videos are deepfakes created to discredit them.
How the Defense Department wants to measure the success of its AI hub, C4ISRNET
This year, the JAIC expanded its mission focus to include joint war fighting, an important mission given the military services’ focus on multidomain operations — a concept that will require artificial intelligence to increase the speed at which data flows and commanders make decisions. In calendar year 2021, the JAIC will focus on war fighter integration and the creation of an artificial intelligence ecosystem, Groen said, building on the work each respective service is doing.
COVID-19, digital rights and Nigeria’s emerging surveillance state, GlobalVoices, 19 January 2021
During the COVID-19 pandemic, governments took extraordinary measures to leverage technology to fight the virus. In addition to lockdowns, many African countries, including Nigeria, followed a global trend to use contact-tracing measures to track those who come into contact with an infected person. On the surface, the proliferation of contact-tracing apps — both public and private — appears harmless and noble. But Nigeria's history of surveillance raises serious questions about how the state may further its capabilities to track and target citizens during the pandemic using such technologies.
Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match, The New York Times, 6 January 2021
In February 2019, Nijeer Parks was accused of shoplifting candy and trying to hit a police officer with a car at a Hampton Inn in Woodbridge, N.J. The police had identified him using facial recognition software, even though he was 30 miles away at the time of the incident. He is the third person known to be falsely arrested based on a bad facial recognition match. In all three cases, the people mistakenly identified by the technology have been Black men.
What Buddhism can do for AI ethics, MIT Technology Review, 6 January 2021
Buddhism proposes a way of thinking about ethics based on the assumption that all sentient beings want to avoid pain. Thus, the Buddha teaches that an action is good if it leads to freedom from suffering. The implication of this teaching for artificial intelligence is that any ethical use of AI must strive to decrease pain and suffering. For example, facial recognition technology should be used only if it can be shown to reduce suffering or promote well-being.
How Tech and Media Enabled a White Supremacist Coup, Points, 13 January 2021
These present-day lynch mobs have been incited to violence over the last four years through a network of right-wing platforms, such as Parler, and media outlets, such as Fox News, America’s most-watched cable news channel, which are funded by billionaires like Rebekah Mercer and Rupert Murdoch. The current tech/media ecosystem, combined with violent white supremacy, is what’s truly unprecedented about the current moment.
Deplatforming Our Way to the Alt-Tech Ecosystem, Knight First Amendment Institute at Columbia University, 11 January 2021
This points to a larger lesson. Building a healthy social media ecosystem will be full of tradeoffs, and it’s important to understand and highlight them, not because the changes are necessarily wrong, but because examining and responding to tradeoffs will be crucial to ensuring well-meaning changes don’t cause us to take one step forward and two steps back. Alt-tech presents powerful questions about speech online. Is it better to exile toxic speech from popular platforms if it risks making communities even more extreme? If toxic speech becomes harder to study and track? How do we ensure that deplatforming toxic speech isn’t weaponized to silence any dissenting point of view? These questions are beyond the scope of this post, but are worth considering as this space develops.