AI FOR PEACE NEWSLETTER

Your monthly dose of news and the latest developments in AI for Peace

APRIL 2020

Spotlight on AI in armed conflict, human rights protection, atomic AI, AI in humanitarian assistance and disaster response. 

For more news and resources on AI and the coronavirus, see our

Special Edition Newsletter on Covid19

THIS MONTH’S BEST READS 

AI and Machine Learning Symposium: Artificial Intelligence and Machine Learning in Armed Conflict, 27 April 2020

Artificial intelligence (AI) systems are computer programs that carry out tasks – often associated with human intelligence – that require cognition, planning, reasoning or learning. Machine learning systems are AI systems that are “trained” on and “learn” from data, which ultimately define the way they function. Both are complex software tools, or algorithms, that can be applied to many different tasks. However, AI and machine learning systems are distinct from the “simple” algorithms used for tasks that do not require these capacities. The potential implications for armed conflict – and for the International Committee of the Red Cross’ (ICRC) humanitarian work – are broad. There are at least three overlapping areas that are relevant from a humanitarian perspective.

 

These Tech Companies Managed to Eradicate ISIS Content. But They're Also Erasing Crucial Evidence of War Crimes, 11 April 2020, Time

…It was an example of how even one mistaken takedown can make the work of human rights defenders more difficult. Yet this is happening on a wider scale: of the 1.7 million YouTube videos preserved by Syrian Archive, a Berlin-based non-profit that downloads evidence of human rights violations, 16% have been removed. A huge chunk were taken down in 2017, just as YouTube began using algorithms to flag violent and extremist content. And useful content is still being removed on a regular basis.

 

AI and Security – VB Special Issue, VB

Both AI and cybersecurity are nearly omnipresent in our daily lives, and the intersection of the two is of increasing importance as our world becomes more connected, more “intelligent,” and more reliant on online or automated systems. AI technology can impact existing problems in cybersecurity, national security, physical safety, and even media consumption.

 

Emerging from AI utopia, 3 April 2020, Science

A future driven by artificial intelligence (AI) is often depicted as one paved with improvements across every aspect of life—from health, to jobs, to how we connect. But cracks in this utopia are starting to appear, particularly as we glimpse how AI can also be used to surveil, discriminate, and cause other harms. What existing legal frameworks can protect us from the dark side of this brave new world of technology?

 

What all policy analysts need to know about data science, 20 April 2020, Brookings

With qualifications, data science offers a powerful framework to expand our evidence-based understanding of policy choices, as well as directly improve service delivery. Even for public servants who never write code themselves, it will be critical to have enough data science literacy to meaningfully interpret the proliferation of empirical research.

 

The Different Challenges and Approaches to AI by Country, 10 April 2020, Unite.AI

With artificial intelligence (AI) technologies poised to transform society, it is important to look at the different approaches being taken by countries around the globe. Whether for reasons of prosperity or surveillance, there is no doubt that nations are increasingly investing in AI. Read more details about China, the US, Germany, the United Kingdom, France, and Canada.

 

AI researchers propose ‘bias bounties’ to put ethics principles into practice, 17 April 2020, VB

Researchers from Google Brain, Intel, OpenAI, and top research labs in the U.S. and Europe joined forces this week to release what the group calls a toolbox for turning AI ethics principles into practice. The kit for organizations creating AI models includes the idea of paying developers for finding bias in AI, akin to the bug bounties offered in security software.

 

Inside the grave new world of Atomic AI, 13 April 2020, Asia Times

Unmanned aerial vehicles, unmanned underwater vehicles and space planes are likely to be “the AI-enabled weapons of choice for future nuclear delivery,” a leading military think tank revealed during a recent seminar in Seoul. AI, or artificial intelligence, enables faster decision-making than humans and can replace humans in the decision matrix at a time when leadership reacts too slowly – or is dead. The Stockholm International Peace Research Institute, or SIPRI, released its report The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk Volume II: East Asian Perspectives in a forum hosted by the Swedish Embassy in Seoul.

THIS MONTH’S PODCAST EPISODE CHOICE   

Deep Learning, NLP & AI Engineers w/ Pujaa Rajan, Ep. 7

By Applied AI Pod, 13 April 2020

In this conversation with Deep Learning Engineer Pujaa Rajan, topics include Covid-19, NLP metrics, deep learning’s popularity, and balancing quantity against quality. Tune in to hear about the next thing to beat a human at. :) Find Pujaa, Deep Learning Engineer @nodeio and Women in AI USA Ambassador/SF Founder, at pujaarajan.com, and check out her latest Covid-19 volunteer work for covidnearyou.org. As many AI researchers have paused their regular work to contribute their skills, special kudos to Pujaa for doing just that.

 

“App-solutely necessary? Technology as a way out of the coronavirus crisis”, Mark Leonard's World in 30 Minutes, 15 April 2020

By European Council on Foreign Relations

Word on the street seems to suggest that technology will be the way out of the coronavirus crisis and the lockdowns in many European countries. This seems to be confirmed by a multitude of projects such as the EU’s Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT), the aim of which is to make it possible to interrupt new chains of infection with the coronavirus. Through apps and data sharing, we will be able to track the spread of the virus, identifying both those infected and those who have instead developed a degree of immunity to the disease and can thus return to participating normally in society. As good as it sounds, however, the issue comes with its own set of profound ethical questions regarding individual rights such as privacy and consent, as well as collective privacy. Our host Mark Leonard is joined by ECFR experts Ulrike Franke and Anthony Dworkin, as well as independent researcher and broadcaster Stephanie Hare, to break down the current discourse around tech in the age of corona and its implications.

 

ICYMI

 

AI for Good, S1E6

By ElementAI

Charles C Onu is using AI to detect birth asphyxia in babies. His story is inspiring because of its impact on society and the field of healthcare (in 2016, 1,000,000 babies died from asphyxia), but also because of his humble beginnings. In this episode, Charles shows us that a passion for solving problems can help you overcome many obstacles. Host Alex Shee also sits down with Rediet Abebe, Co-Founder and Co-Organizer of Black in AI, to expand on how others are using AI to change not just their industry, but the world.

 

AI in Humanitarian Assistance and Disaster Response

By Carnegie Mellon University

Ritwik Gupta, a machine learning research scientist in the SEI’s Emerging Technology Center, discusses the use of AI in humanitarian assistance and disaster response (HADR) efforts. “We are going to be working on finding what is new and what is next and just being ambassadors for bringing this world of AI and HADR together.”

THIS MONTH’S PUBLICATIONS

National Artificial Intelligence Strategies and Human Rights: A Review, 15 April 2020, Stanford Global Digital Policy Incubator

Global Partners Digital and Stanford’s Global Digital Policy Incubator have published a report examining governments’ National AI Strategies from a human rights perspective. The report looks at existing strategies adopted by governments and regional organisations since 2017. It assesses the extent to which human rights considerations have been incorporated and makes a series of recommendations to policymakers looking to develop or revise AI strategies in the future.

 

Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims, 20 April 2020, Cornell University

This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. We analyze ten mechanisms for this purpose – spanning institutions, software, and hardware – and make recommendations aimed at implementing, exploring, or improving those mechanisms.

 

Corona Pan(Dem)Ic: The Gateway To Global Surveillance? 4 April 2020, ICT4Peace

The paper observes that panic over the physical Covid-19 illness may lead societies to rush into expanding global surveillance technologies. This haste is unprecedented, potentially path-dependent, infringes on human rights, and is therefore dangerously unreflective. We must shift back from emotionality towards reason in order to live up to our duty to question national legislation and the course of action recently taken by governments.

 

ICYMI

Human Rights and Technology Discussion Paper, December 2019, Australian Human Rights Commission

This Discussion Paper sets out the Australian Human Rights Commission’s preliminary views on protecting and promoting human rights amid the rise of new technologies.

New technologies are changing our lives profoundly—sometimes for the better, and sometimes not.

We have asked a fundamental question: how can we ensure these new technologies deliver what Australians need and want, not what we fear?

DIRECTLY FROM AI FOR PEACE

  • AI for Peace at Hack the Crisis, the Netherlands, 3-5 April 2020. AI for Peace joined forces with thousands of people around the world to help solve the Covid-19 crisis. The online hackathon focused on solutions in fields such as patient care, protection for medical and sanitary staff at hospitals, virus containment, and digital solutions for people and businesses in quarantine. Coming up with solutions required motivated people with a wide range of skill sets, ranging from health professionals to artists, from developers to communication specialists, and from supply chain to blockchain and AI experts (and anything else, really). You can read more about the winning projects here.

 

  • AI for Peace Founding Director “zoomed” to Mexico on 29 April 2020 as a guest speaker at “Geopolitics and Technology Change: The Future Today”. She took this collective journey with Professor Nikola Zivkovic and his students at the Tecnológico de Monterrey, looking at AI through the lens of the Covid-19 pandemic and at how the decisions we make today can shape our potential futures. Topics ranged from big data, machine learning, NLP, and deep learning in prediction, diagnosis, treatment, and protection of health workers, to questions of privacy, security, surveillance, disinformation, protecting vulnerable populations, responding to ethical dilemmas and challenges, and even existential risks, AGI, and superintelligence. Look for the recording in our next newsletter!

 

  • AI for Peace participated in the very first Collective Action Summit, organized by Robert Bosch Stiftung, Ashoka, the Obama Foundation, the Global Shapers Community, and many others. It gathered more than 100 changemakers and organizations from more than 50 countries, sharing their knowledge, expertise, and best practices in tackling Covid-19. AI for Peace presented its strategy for using AI to protect human security.

 

  • Blog: “When poverty and hunger kill before coronavirus” – the impoverished have been hit hardest by pandemics, and Covid-19 will be no different.

EVENTS TO FOLLOW IN MAY

  • AI and data-driven policy: the power to protect the world’s most vulnerable, May 6, 11am PST, by AI for Peace and Omdena at the Global Digital Development Forum. AI for Peace, SH4P, and Omdena are hosting a discussion at the first-ever global virtual #ICT4D conference on May 6. We are thrilled to shape the discussion on AI and data-driven policy in coronavirus response and in protecting the most vulnerable, with Laura Clark Murray, Branka Panic, Chris P. Lara, Rudradeb Mitra, Reem A. Mahmoud, and some of the 4,000 registered participants. Read more about the Forum here.

 

  • Cyber Policy Center Online Series | National AI Strategies and Human Rights: New Urgency in the Era of COVID-19, May 6, 2020, 10-11am PST, Stanford Cyber Policy Center. The discussion will feature Eileen Donahoe, Executive Director of the Global Digital Policy Incubator (GDPi) at Stanford’s Cyber Policy Center, and Megan Metzger, Associate Director for Research, also at GDPi. Joining them will be Mark Latonero, Senior Researcher at Data & Society; Richard Wingfield of Global Partners Digital; and Gallit Dobner, Director of the Centre for International Digital Policy at Global Affairs Canada. The session will be moderated by Kelly Born, Executive Director of the Cyber Policy Center.

 

Follow Us
Follow on LinkedIn
Follow on X (Twitter)

Online Library 

On our website, AI for Peace, you can find even more awesome content, podcasts, articles, white papers and book suggestions that can help you navigate through AI and peace fields. Check our online library!

LIBRARY
