AI FOR PEACE NEWSLETTER

Your monthly dose of news and the latest developments in AI for Peace

MAY 2020

Spotlight on AI and facial recognition in protests, “digital colonialism” in Africa, and more on AI and the pandemic

For more news and resources on AI, privacy, surveillance, and contact tracing, look at our Special Edition Newsletter on COVID-19 and Privacy

If someone has forwarded this to you and you want to get our Newsletter delivered to you every month, you can subscribe here:
Subscribe

THIS MONTH’S BEST READS 

Facial Recognition Is Law Enforcement’s Newest Weapon Against Protesters, 2 June 2020, OneZero

As protests engulf the country following the murders of George Floyd and Breonna Taylor at the hands of police, law enforcement agencies with extensive facial recognition capabilities are now asking the public for footage of activists. Police in Seattle, Austin, and Dallas, as well as the FBI, have all asked for video or images that can be used to identify acts of violence and destruction during the weekend’s protests. But because there are no federal or state laws requiring transparency for government use of facial recognition technology, there is no way to know how the technology is being used or which law enforcement departments have access to it.

 

Using Neural Networks to Predict Droughts, Floods and Conflict Displacements in Somalia, 6 May 2020, Omdena

Millions of people are forced to leave their area of residence or community due to resource shortages and natural disasters such as droughts and floods. Our project partner, UNHCR, provides assistance and protection for those who are forcibly displaced inside Somalia. The goal of this challenge was to create a solution that uses satellite image analysis and neural networks to quantify the influence of climate-change anomalies on forced displacement and/or violent conflict in Somalia.

 

How the rise of ‘digital colonialism’ in the age of AI threatens Africa’s prosperity, 8 May 2020

The 21st century colonisation of Africa does not involve armies, argues cognitive scientist Abeba Birhane, but the mass harvesting of valuable data. What value do you put on all the data gathered about you on any given week, or any given day? Technologies such as AI, facial recognition software and now contact-tracing apps are increasingly being deployed all over the world.

 

From Cairo to Cambridge: One scientist’s quest to humanise AI, 22 May 2020

Scientist, entrepreneur and author Rana el Kaliouby discusses her journey from Egypt to working on emotion AI at the MIT Media Lab. Rana el Kaliouby is co-founder and CEO of Affectiva, a Boston-based emotion-recognition tech firm that grew out of MIT’s Media Lab. Having recently launched her memoir, Girl Decoded, we got the chance to chat to her about how she got to this point in her career and why she believes in a future “where AI and technology can make us more human, not less”.

 

Google will not develop artificial intelligence for oil, 25 May 2020

Google will no longer develop custom AI tools to accelerate oil and gas extraction, the company said, distancing itself from cloud-computing rivals Microsoft and Amazon. The announcement came after a report published on Tuesday documented how the three tech companies harness artificial intelligence and computing power to help oil companies find and access oil and gas fields in the United States and around the world.

 

Facebook’s AI is still largely baffled by covid misinformation, 12 May 2020, MIT Technology Review

The tech giant says 88.8% of all the hate speech it removed this quarter was detected by AI, up from 80.2% in the previous quarter. The AI can remove content automatically if the system has high confidence that it is hate speech, but most is still checked by a human being first.

 

Artificial Intelligence is not the cure for the COVID-19 infodemic, 9 May 2020, The Hill

More than 3 billion people, around 50 percent of the world’s population, engage with and post content online. Some of that content is misleading and potentially harmful, whether by design or as a side effect of its spread and manipulation. With the billions of daily active users on social media platforms, even if a mere 0.1 percent of total content contains mis- or disinformation, there is a vast volume of content to review. In response to this challenge, automated content review technologies have emerged as an enticing and scalable solution to help triage mis/disinformation online. Yet, while many technology companies and social media platforms have promoted artificial intelligence (AI) as an omnipotent tactic to address mis/disinformation, AI is not a panacea for information challenges.

 

Armed drones contentious in German disarmament debate, 13 May 2020, Euractiv

Germany has reopened a controversial debate over whether its armed forces should be trusted to operate armed drones. While an agreement seems far off, the debate could soon get a European twist. A first step was taken on Monday (11 May), when the defence ministry invited experts, representatives of civil society and members of parliamentary groups in the Bundestag to a public hearing on what it said was meant to be an “open debate on potential armament”.

 

Using Drones to Fight COVID-19 is the Slipperiest of All Slopes, 5 May 2020, EFF

Any current buy-up of drones would constitute a classic example of how law enforcement and other government agencies often use crises in order to justify the expenditures and negate the public backlash that comes along with buying surveillance equipment. For years, the LAPD, the NYPD, and other police departments across the country have been fighting the backlash from concerned residents over their acquisitions of surveillance drones. These drones present a particular threat to free speech and political participation.

 

Ethics of Acquiring Disruptive Technologies: Artificial Intelligence, Autonomous Weapons, And Decision Support Systems – Analysis, 15 May 2020, Eurasia Review

Last spring, Google announced that it would not partner with the Department of Defense (DOD) on “Project Maven,” which sought to harness the power of artificial intelligence (AI) to improve intelligence collection and targeting, because its employees did not want to be “evil.” Later that fall, the European Union called for a complete ban on autonomous weapons systems. In fact, a number of AI-related organizations and researchers have signed a “Lethal Autonomous Weapons Pledge” that expressly prohibits development of machines that can decide to take a human life.

 

ICYMI in 2019

Building ethical AI approaches in the African context, 18 August 2019, Global Pulse

Countries worldwide are at different stages of designing and implementing AI strategies and policies to seize the opportunities of this technology. In Africa, Kenya, Tunisia, South Africa, Ghana, and Uganda are already working to develop data protection and ethics strategies. The critical question now is: which ethical approaches are relevant in the context of the African continent?

THIS MONTH’S PODCAST EPISODE CHOICE   

Artificial Intelligence for Global Good with Fred Werner, 14 May 2020

How do we make the Sustainable Development Goals (SDGs) a reality by 2030? Through community building and by using emerging technology like AI to bridge the gap. Fred shares how the United Nations is working to create a better world and bring communities together, and, more importantly, how we can all help in this mission.

 

NEW AMERICA Audio Interview: Fei-Fei Li on AI and the Future of Work, Policy and Geopolitics

Fei-Fei Li’s life bridges two countries and two industries. She moved to the U.S. from China when she was 16 years old and, just a few years later, graduated from Princeton with an undergraduate degree in physics. Fast-forward to today: Dr. Li is the Co-Director of Stanford’s Human-Centered AI Institute. During her sabbatical from Stanford in 2017, she also served as Vice President at Google and as Chief Scientist of AI at Google Cloud.

 

How to Make AI A Top National Security Priority, 26 May 2020

Katharina McFarland is a former Assistant Secretary of Defense for Acquisition and a commissioner at the National Security Commission on Artificial Intelligence. At the commission, Ms. McFarland’s line of effort focuses on how to accelerate the application of AI in the Defense Department. She talked about the commission’s recommendation that the Department of Defense and the Office of the Director of National Intelligence establish a steering committee on emerging technology to ensure that AI for national security gets top priority in the years ahead.

THIS MONTH’S WEBINAR CHOICE   

Emerging Technologies: AI Ethics Webinar Series, NetHope Solutions Center

NetHope, USAID, MIT D-Lab, and Plan International are working together to deliver a set of webinars to equip the social impact sector with the information it needs to implement AI responsibly and ethically. The webinars planned for May and June are listed below. Recordings and presentation materials will be posted on these pages following each webinar.

THIS MONTH’S PUBLICATIONS

Decision Points in AI Governance: Three Case Studies Explore Efforts to Operationalize AI Principles, 5 May 2020, Center for Long-Term Cybersecurity

The Center for Long-Term Cybersecurity (CLTC) has issued a new report that takes an in-depth look at recent efforts to translate artificial intelligence (AI) principles into practice. The report, Decision Points in AI Governance, authored by CLTC Research Fellow and AI Security Initiative (AISI) Program Lead Jessica Cussins Newman, provides an overview of 35 efforts already under way to implement AI principles, ranging from tools and frameworks to standards and initiatives that can be applied at different stages of the AI development pipeline.

 

 

Report on Disinformation, Deepfakes & Democracy, Alliance of Democracies

Our new report Disinformation, Deepfakes & Democracy, part of our collaboration with Microsoft, explores the issue by providing an in-depth explanation of how and why new information technologies challenge our democracies. Author and research fellow Christoffer Waldemarsson addresses the question “What can be done to mitigate the effects of disinformation campaigns?” by mapping out the European response to digital election interference and the voluntary measures taken by social media platforms, as well as providing recommendations for governments and policymakers. The report also explains how deepfakes, digitally fabricated video and audio of, for example, prominent politicians, can endanger our free and democratic elections.

OTHER NEWSLETTERS WE RECOMMEND 

Montreal AI Ethics Institute – Monthly Newsletter

AI Ethics #10: Truth decay, fighting hate speech, data custodians, trends in ML scholarship, future of privacy and security in ML, and more ...

EVENTS TO FOLLOW 

  • Contact Tracing and Technology: A Deep Dive, 17 June 2020

Join us on Wednesday, June 17th at 11 am ET for “Contact Tracing and Technology: A Deep Dive,” organized by the COVID Tech Task Force and co-sponsored by the Berkman Klein Center, NYU’s Alliance for Public Interest Technology, TechCrunch, Betaworks Studios, and Hangar. Register here.

 

  • Flynn Coleman - Candid Conversations: How Technology Is Redefining Who We Are

Fast-moving technologies have so much potential for humanity, including transforming how we work as well as our health and well-being, but what does that mean during times of rapid change and uncertainty? During this conversation, Flynn Coleman joins moderator Ashton Marra, teaching assistant professor at the Reed College of Media at West Virginia University, to explore what’s happening now and what’s to come with technological advances that are changing not only how we live during a pandemic, but how we will continue to change and grow as individuals and as a society.

 

  • COVID + AI: The Road Ahead, 1 June 2020

The Stanford Institute for Human-Centered Artificial Intelligence’s second virtual conference on COVID-19 will bring together scholars from across disciplines to discuss their research using AI, data science and/or informatics to help us understand how we emerge from this crisis. Sessions will examine, among other topics: preparing for the 2020 election, protecting privacy during contact tracing, and assessing COVID-19 infections.

 

Follow Us
Follow on LinkedIn
Follow on X (Twitter)

Online Library 

On our website, AI for Peace, you can find even more awesome content, podcasts, articles, white papers and book suggestions that can help you navigate the fields of AI and peace. Check out our online library!

LIBRARY
