AI FOR PEACE NEWSLETTER

Your monthly dose of news and the latest developments in AI for Peace

JANUARY 2021

Spotlight on social media: white supremacy, hate speech, deplatforming, and content moderation, and their impact on freedom and democracy.

For more resources on Democracy, Free Speech & AI, see our Special Edition Newsletter, curated by Maanya Vaidyanathan, AI for Peace Research Fellow.

If someone has forwarded this to you and you want our Newsletter delivered to you every month, you can subscribe here:
Subscribe

THIS MONTH’S SPOTLIGHT

Measurementality - Defining What Counts in the Algorithmic Age


Measurementality is a new series of podcasts, webinars, and reports created by the IEEE Standards Association (IEEE SA) in collaboration with The Radical AI Podcast, focused on defining what counts in the Algorithmic Age. While it's critical that Artificial Intelligence Systems (AIS) are transparent, responsible, and trustworthy, Measurementality will explore the deeper issue of which measures of success we're optimizing for in the first place.


The first ten episodes of the Measurementality podcast, sponsored by IEEE SA, will be hosted on the Radical AI site as a way to help cross-pollinate awareness between the Radical AI and IEEE SA communities. Episodes begin in January 2021 and will be posted at the beginning of each month. You can register to follow all webinars here.

THIS MONTH’S BEST READS 


Making Sense of the Facebook Menace, The New Republic, 5 January 2021

The story of our interactions with Facebook—how Facebook affects us and how we affect Facebook—is fascinating and maddeningly complicated. Scholars around the world have been working for more than a decade to make sense of the offline social impact of the platform’s propaganda functions. And we are just beginning to get a grasp of it. The leaders of Facebook seem not to have a clue how it influences different people differently and how it warps our collective perceptions of the world. They also seem to lack anything close to a full appreciation of how people have adapted Facebook to serve their needs and desires in ways the engineers never intended.


Researchers find machine learning models still struggle to detect hate speech, Venture Beat, 6 January 2021

Detecting hate speech is a task even state-of-the-art machine learning models struggle with. That's because harmful speech comes in many different forms, and models must learn to differentiate each one from innocuous turns of phrase. Historically, hate speech detection models have been tested by measuring their performance on held-out data using metrics like accuracy. But this makes it tough to identify a model's weak points, and it risks overestimating a model's quality because of gaps and biases in hate speech datasets.
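To make the evaluation point concrete, here is a minimal sketch of how an aggregate accuracy score can look passable while hiding systematic failures on particular kinds of content. The keyword classifier, examples, and category labels below are all invented for illustration; they are not from the research the article covers.

```python
# Hypothetical sketch: why a single accuracy number can hide a model's weak points.
# The model, examples, and category tags below are invented for illustration.
from collections import defaultdict

# (text, true_label, category): label 1 = hateful, 0 = not hateful.
examples = [
    ("slur-based attack A",          1, "slurs"),
    ("slur-based attack B",          1, "slurs"),
    ("violent threat",               1, "threats"),
    ("reclaimed slur, in-group use", 0, "counter-speech"),
    ("quoting a slur to condemn it", 0, "counter-speech"),
    ("neutral sentence A",           0, "neutral"),
    ("neutral sentence B",           0, "neutral"),
    ("neutral sentence C",           0, "neutral"),
]

def naive_model(text: str) -> int:
    """A crude stand-in classifier that flags any mention of a slur."""
    return 1 if "slur" in text else 0

correct = 0
per_category = defaultdict(lambda: [0, 0])  # category -> [correct, total]
for text, label, category in examples:
    hit = int(naive_model(text) == label)
    correct += hit
    per_category[category][0] += hit
    per_category[category][1] += 1

print(f"overall accuracy: {correct / len(examples):.2f}")  # 0.62 -- looks passable
for category, (c, n) in sorted(per_category.items()):
    print(f"  {category:>14}: {c}/{n}")
# counter-speech: 0/2 and threats: 0/1 -- failures the aggregate number hides.
```

Breaking results out by category rather than reporting one accuracy number is what surfaces the weak points, which is the motivation behind functional-test approaches to evaluating hate speech classifiers.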


Facebook, Twitter could face punishing regulation for their role in U.S. Capitol siege, Democrats say, The Washington Post, 8 January 2021

In the months to come, some Democrats now are promising to use their powerful new perches — and their control of the White House and Congress starting in a matter of days — to proffer the sort of tough new laws and other punishments that tech giants have successfully fended off for years. Their seething anger could result in major repercussions for the industry, opening the door for a wide array of policy changes that could hold Facebook, Google and Twitter newly liable for their missteps.


This explains how social media can both weaken and strengthen democracy, The Washington Post, 7 January 2021

One important question is whether and how democratic societies will use legal regulation to limit this emerging threat. As this debate continues to unfold, an understanding of how exactly social media threatens — and supports — democracy will be crucial in making sure policy changes have their desired effect. Democracies must be aware that any attempt to regulate the Internet may veer dangerously close to the censorship they deride in autocracies. For example, it is probably no accident that Russia was among the first to copy Germany's new law threatening fines for social media companies that fail to adequately restrict online hate speech.


AI Weekly: The future of tech policy after an attempted coup, Venture Beat, 8 January 2021

Video of the U.S. Capitol breach shows that Trump supporters were permitted to violate multiple federal laws, desecrate the people’s house, undermine national security, and violently oppose the largest exercise of the right to vote since the founding of this nation. People died. Then the mob went home. It’s one of the most public displays of white privilege I’ve seen in my lifetime.


In conversation with Trisha Ray, Associate Fellow at the Observer Research Foundation, Analytics India Magazine, 15 January 2021

Ray’s research focuses on Geotech, the implications of emerging technology, AI governance and norms, and Lethal Autonomous Weapons Systems (LAWS). Analytics India Magazine caught up with Ray to understand India’s position on autonomous weapons and her views on LAWS.


AI and International Stability: Risks and Confidence-Building Measures, CNAS, 12 January 2021

Militaries around the world believe that the integration of machine learning methods throughout their forces could improve their effectiveness. From algorithms to aid in recruiting and promotion to those designed for surveillance and early warning, to those used directly on the battlefield, applications of artificial intelligence (AI) could shape the future character of warfare. These uses could also generate significant risks for international stability. These risks relate to broad facets of AI that could shape warfare, limits to machine learning methods that could increase the risks of inadvertent conflict, and specific mission areas, such as nuclear operations, where the use of AI could be dangerous. To reduce these risks and promote international stability, we explore the potential use of confidence-building measures (CBMs), constructed around the shared interests that all countries have in preventing inadvertent war. Though not a panacea, CBMs could create standards for information-sharing and notifications about AI-enabled systems that make inadvertent conflict less likely.


How Deepfakes are Wreaking Havoc on Democracy, Analytics Insight, 4 January 2021

The biggest threat deepfake videos are likely to pose is that they add another layer of distrust to legitimate video and news. Knowing that deepfakes exist is itself destabilising, as is evident in politicians already claiming that authentic videos are deepfakes created to discredit them.


How the Defense Department wants to measure the success of its AI hub, C4ISRNET

This year, the JAIC expanded its mission focus to include joint warfighting, an important mission given the military services' focus on multidomain operations — a concept that will require artificial intelligence to increase the speed at which data flows and commanders make decisions. In calendar year 2021, the JAIC will focus on warfighter integration and the creation of an artificial intelligence ecosystem, JAIC director Lt. Gen. Michael Groen said, building on the work each respective service is doing.


COVID-19, digital rights and Nigeria’s emerging surveillance state, GlobalVoices, 19 January 2021

During the COVID-19 pandemic, governments took extraordinary measures to leverage technology to fight the virus. In addition to lockdowns, many African countries, including Nigeria, followed a global trend of using contact-tracing measures to track those who came into contact with an infected person. On the surface, the proliferation of contact-tracing apps — both public and private — appears harmless and noble. But Nigeria's history of surveillance raises serious questions about how the state may expand its capabilities to track and target citizens during the pandemic using such technologies.


Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match, The New York Times, 6 January 2021

In February 2019, Nijeer Parks was accused of shoplifting candy and trying to hit a police officer with a car at a Hampton Inn in Woodbridge, N.J. The police had identified him using facial recognition software, even though he was 30 miles away at the time of the incident. He is the third person known to be falsely arrested based on a bad facial recognition match. In all three cases, the people mistakenly identified by the technology have been Black men.


What Buddhism can do for AI ethics, MIT Technology Review, 6 January 2021

Buddhism proposes a way of thinking about ethics based on the assumption that all sentient beings want to avoid pain. Thus, the Buddha teaches that an action is good if it leads to freedom from suffering. The implication of this teaching for artificial intelligence is that any ethical use of AI must strive to decrease pain and suffering. Facial recognition technology, for example, should be used only if it can be shown to reduce suffering or promote well-being.


How Tech and Media Enabled a White Supremacist Coup, Points, 13 January 2021

These present-day lynch mobs have been incited to violence over the last four years through a network of right-wing platforms, such as Parler, and media outlets, such as Fox News, America's most-watched cable news channel, which are funded by billionaires like Rebekah Mercer and Rupert Murdoch. The current tech/media ecosystem, combined with violent white supremacy, is what's truly unprecedented about the current moment.


Deplatforming Our Way to the Alt-Tech Ecosystem, Knight First Amendment Institute at Columbia University, 11 January 2021

This points to a larger lesson. Building a healthy social media ecosystem will be full of tradeoffs, and it’s important to understand and highlight them, not because the changes are necessarily wrong, but because examining and responding to tradeoffs will be crucial to ensuring well-meaning changes don’t cause us to take one step forward and two steps back. Alt-tech presents powerful questions about speech online. Is it better to exile toxic speech from popular platforms if it risks making communities even more extreme? If toxic speech becomes harder to study and track? How do we ensure that deplatforming toxic speech isn’t weaponized to silence any dissenting point of view? These questions are beyond the scope of this post, but are worth considering as this space develops.

THIS MONTH’S PODCAST EPISODE CHOICE


YOUR UNDIVIDED ATTENTION - Two Million Years in Two Hours: A Conversation with Yuval Noah Harari, January 15, 2021

Yuval Noah Harari is one of the rare historians who can give us a two-million-year perspective on today’s headlines. In this wide-ranging conversation, Yuval explains how technology and democracy have evolved together over the course of human history, from Paleolithic tribes to city states to kingdoms to nation states. So where do we go from here? “In almost all the conversations I have,” Yuval says, “we get stuck in dystopia and we never explore the no less problematic questions of what happens when we avoid dystopia.” We push beyond dystopia and consider the nearly unimaginable alternatives in this special episode of Your Undivided Attention.


BIG TECH - Joan Donovan On How Platforms Enabled the Capitol Hill Riot, January 21, 2021

In this episode of Big Tech, Taylor Owen speaks with Joan Donovan, research director of the Shorenstein Center on Media, Politics and Public Policy at Harvard University. Donovan studies social movements and their use of media and technology to spread their message. Social media platforms provide tools for individuals and groups to share information and organize, which has been valuable for societal movements such as Black Lives Matter and Standing Rock. But those same tools have been harnessed by bad actors aiming to incite destructive actions. In the case of the Make America Great Again movement, her team could see that those leading the Stop the Steal campaign were setting the stage for the Capitol attack. According to Donovan, “It’s about creating the conditions by which people feel [that] if they don’t do something, nothing will change.”


ICYMI

Decoding Hate Speech: Technology and hate speech, friend or foe? MIGS Institute, 8 October 2020

Can information and communication technologies still be used for positive change and democracy, and if so, how? How can we prevent Big Tech from profiting from online harm and ensure technology once again becomes a tool for positive change? What tools, mechanisms and approaches can be used by states, civil society and the private sector to counter online hate?

THIS MONTH’S WEBINARS

Tech and data recommendations for the new Administration, Atlantic Council, 6 January 2021

Government interventions to ensure a firm recovery from the pandemic will be essential. Creating working groups, councils, and alliances to develop and distribute vaccines and guarantee food security is key to promoting security and peace. Such coalitions would be best initiated at the local and state level by creating groups that represent people in an authentic way. Philanthropic donors and organizations should be incorporated into solutions to these challenges.


ALL TECH IS HUMAN - Improving Social Media: Content Moderation & Democracy, 21 January 2021

Content moderation is a hot issue in 2021, given how greatly it shapes the way people communicate on social media. What content should be allowed, and what shouldn't? How open and transparent should platforms be about how they make these decisions? What about the wellbeing of moderators who view disturbing content? Democracy in peril meets the power of big tech. Where do we go from here? Watch our discussion with two leading experts on content moderation: Sarah T. Roberts, PhD (co-founder and co-director of the UCLA Center for Critical Internet Inquiry and author of Behind the Screen: Content Moderation in the Shadows of Social Media) and Murtaza Shaikh, PhD (Senior Advisor on Hate Speech, Social Media and Minorities to the UN Special Rapporteur on Minority Issues), moderated by David Ryan Polgar, founder and director of All Tech Is Human.


Measurementality: Defining What Counts in the Algorithmic Age, 28 January, 2021

Join John C. Havens of the IEEE Standards Association as he interviews Jess and Dylan, co-hosts of the popular podcast Radical AI. John, Jess and Dylan will discuss the Measurementality content series, including topics such as: How is success measured today in the world of Artificial Intelligence Systems (AIS)? What is the positive future we're working to build with AIS? And what are the measures of success for that future? We'll also be discussing how the Measurementality series features a call to action for listeners and the wider AIS community to respond to these questions and contribute to two reports helping to define and frame 'what counts in the algorithmic age.'


The Way Forward: Tech Policy Recommendations for the Biden Administration, Georgetown, 14 January 2021.

As President-elect Biden prepares to take office and the 117th Congress begins, CSET scholars offer recommendations for addressing critical issues affecting U.S. and overseas development of artificial intelligence. Their observations will build on briefing papers that were provided to officials with the Biden and Trump camps and then published online this past September. Continued leadership in artificial intelligence will require an alliance-centered strategy, targeted export controls and support for the U.S. research community that attracts global talent while defending against security threats.


ICYMI

Lessons Learned from the Practical Implementations of AI in the Humanitarian Sector, NetHope Solutions Center, 8 December 2020

In this webinar, you will have the opportunity to learn about two practical implementations of artificial intelligence/machine learning (AI/ML) in the humanitarian sector focused on displacement and meeting the needs of refugees.

WELCOME TO OUR SPRING 2021 INTERN, ZAHRA SOMJI

Zahra Somji is joining AI for Peace as a Spring 2021 intern. She helped curate this edition of the AI for Peace Newsletter and is working on our February Special Edition on Technology, Racial Bias, and Racial Justice. Stay tuned!

Follow Us
Follow on LinkedIn
Follow on X (Twitter)

Online Library 

On our website, AI for Peace, you can find even more great content: podcasts, articles, white papers, and book suggestions to help you navigate the fields of AI and peace. Check out our online library!

LIBRARY

Share on social

Share on Facebook | Share on X (Twitter) | Share on Pinterest

This email was created with Wix.