AI FOR PEACE NEWSLETTER

Your monthly dose of news and the latest developments in AI for Peace

DECEMBER 2020

Spotlight on campaign against killer drones in Germany, AI and the death of nuclear scientist in Iran, facial recognition and surveillance of Uyghurs in China, EU's push into biometric technologies

For more resources on Democracy, Free Speech & AI, look at our Special Edition Newsletter, curated by Maanya Vaidyanathan, AI for Peace Research Fellow.

If someone has forwarded this to you and you want to get our Newsletter delivered to you every month, you can subscribe here:
Subscribe

THIS MONTH'S SPOTLIGHT - THE GOOD AI

The Good AI is empowering organizations and talent using AI to address the Sustainable Development Goals.

 

“Artificial Intelligence is a powerful technology that will shape the future of our world like no other technology before. Whether it will be a force to enable us and not to undermine us, whether it will help us address recent and old challenges depends on our ability to use it wisely and responsibly. At The Good AI, we believe this is possible. We believe AI will help achieve the Sustainable Development Goals adopted by the UN in 2015: improve our lives, protect the planet, ensure equity and fairness for all, maintain peace. The Good AI wants to be the catalyst of the AI Revolution for Good. To make it happen, we need more innovative AI solutions, and we need a framework for ethical, trustworthy, and responsible use of AI. The Good AI mission is to support organizations and talent using AI to address the Sustainable Development Goals through visibility, knowledge, and community.”

 

Find out about the latest news in the industry, join courses and programs, read guidelines & principles, and much, much more! Join The Good AI community from all around the globe. Visit The Good AI here.

THIS MONTH’S BEST READS 

If AI is so good, why hasn’t it changed the way we deliver humanitarian aid? The Good AI, 3 December 2020

Artificial intelligence advocates and experts make bold promises about the ability of AI to do more with less and create efficiencies, allocating tasks like data analysis, inventory, and record keeping to machines, freeing up time for humans to tackle bigger challenges. But can AI revolutionise the way in which humanitarian aid is delivered? And if it can, why hasn’t it done so already?

 

Sci-fi surveillance: Europe's secretive push into biometric technology, 10 December 2020

EU science funding is being spent on developing new tools for policing and security. But who decides how far we need to submit to artificial intelligence? Billions of euros in public funding flow annually into research on controversial security technologies, and at least €1.3bn more will be released over the next seven years.

 

Companies collaborating with global experts to help eradicate human trafficking using technology, December 2020

Human trafficking is a complex, thriving crime that impacts every country: There are an estimated 40 million people worldwide subjected to some form of modern slavery. Given the widespread nature of this crime and the complexity of tackling it, increased engagement from all stakeholders, including and especially the private sector, is vital. Through their expertise, capacity for innovation, and global reach, technology companies can play a major role in preventing and disrupting human trafficking and in empowering survivors. Digital information and communication technologies offer opportunities for a step change in tackling this crime.

 

Facebook, Twitter, and YouTube must end the attack on critical voices in MENA, 17 December 2020

Access Now and 42 human rights organizations, journalists and activists from across the globe are voicing frustration and dismay at how platform policies and content moderation procedures often lead to the silencing and erasure of critical voices from marginalized and oppressed communities across the Middle East and North Africa. The coalition urges Facebook, Twitter, and YouTube not to be complicit in the censorship and erasure of oppressed communities’ narratives and histories, and asks that they implement the requested measures. Read more here.

 

Alibaba facial recognition tech can identify Uighurs: Report, 17 December 2020

Technology giant Alibaba Group Holding Ltd has facial recognition technology that can specifically pick out members of China’s Uighur minority, surveillance industry researcher IPVM said in a report. The report comes as human rights groups accuse China of forcing more than one million Muslim Uighurs into labour camps, and calls out firms suspected of complicity.

 

Facial Recognition Technology and the Death of an Iranian Nuclear Scientist: An International Humanitarian Law Perspective, 21 December 2020

On 27 November 2020, Mohsen Fakhrizadeh was killed while traveling in his car just east of Tehran. It is understood that Fakhrizadeh was Iran’s foremost nuclear scientist and that he led the country’s efforts to develop a nuclear bomb. Iran claimed that facial recognition technology was used to facilitate the killing via an unmanned, vehicle-mounted machine gun ‘equipped with an intelligent satellite system’ which zoomed in on Fakhrizadeh and shot him.

 

Microsoft’s iron cage: Prison surveillance and e-carceration, 21 December 2020

Fyodor Dostoyevsky, author of “Crime and Punishment”, once wrote, “The degree of civilisation in a society is revealed by entering its prisons.” Updated for the 21st century, our “degree of civilisation” might be revealed by the technology used inside them. For Microsoft, prisons represent a market. In recent years, the company and its business partners have started providing an array of surveillance and Big Data analytics solutions to prisons, courts and community supervision programmes.

 

Activists and Parliamentarians Join Together to Prevent Armed Drones in Germany, 24 December 2020

In an historic development that will undoubtedly save the lives and sanity of many people in Mali and Afghanistan (a development in which a number of U.S. citizens participated), the German military establishment was forced by a surprising surge of opposition among the MPs of the Social Democratic Party (SPD) in the German Bundestag (parliament) to delay plans, at least for now, to arm the Heron TP drones that Germany has been leasing from Israel since 2018. Instead, further discussions of the ethical and legal ramifications of deploying armed drones are to take place in Germany.

 

The Geopolitics of Artificial Intelligence, 24 December 2020

As artificial intelligence technologies become more powerful and deeply integrated in human systems, countries around the world are struggling to understand the benefits and risks they might pose to national security, prosperity and political stability. That AI is deeply embedded in the discourse of geopolitical competition is well established. The belief that AI will be the key to military, economic and ideological dominance has found voice in a proliferation of grand AI mission statements by the US, China, Russia and other players.

 

EU Fundamental Rights Agency Issues Report On AI Ethical Considerations, 25 December 2020

The European Union’s Fundamental Rights Agency (FRA) has recently published a report on AI that asks what ethical considerations must guide the development of the technology. The report, published under the title ‘Getting The Future Right’, draws on interviews with over a hundred public administration officials and private company staff in an effort to answer that question.

 

What if autonomous weapons are more ethical than humans? 29 December 2020

The emergence of machines capable of aiding warfare through artificial intelligence was sobering and worrisome before the more recent evolution of “lethal autonomous weapons,” or LAWs. These weapons are defined by their ability to “think through” decisions on the battlefield, or whatever arena of combat they are in, and make the “decision” about whether or not to use lethal force against a human or group of humans. Once designers started building prototypes of such machines, which could target and kill combatants without explicit authorization on a case-by-case basis, humanity entered a new paradigm of warfare.

 

ICYMI

How Can Artificial Intelligence Help Curb Deforestation in the Amazon? IPI Global Observatory, November 23, 2020

The estimated loss in revenue from illegal logging alone costs timber producing countries between $10 billion and $15 billion per year. Stolen wood is estimated to depress world timber prices by up to 16 percent each year. In spite of these impacts, effective strategies to curb illegal deforestation are hard to find. Part of the problem is a lack of adequate forest monitoring, which is complicated by the challenges to obtaining accurate and consistent spatial data on deforestation. Even when greater accuracy and reliability are achieved—for instance, with the support of satellite technologies that allow for real-time tracking and increasingly detailed surveillance of forest canopies—filtering large amounts of data can be slow, labor intensive, and expensive. The enormous troves of data that can now be gathered through the deployment of drones pose similar challenges.

THIS MONTH’S PODCAST EPISODE CHOICE

HUMANITARIAN AI – Alexa Prize Winners: Jinho Choi, Sarah Fillwock and James Finch from Emory University, 14 December 2020

Humanitarian AI Today’s guest host Fay Schofield speaks with Jinho Choi, Sarah Fillwock and James Finch from Emory University about “Emora,” their team’s Alexa Prize-winning socialbot, as well as conversational AI and humanitarian applications of socialbots and emerging digital assistants like Alexa.

 

BERLIN SECURITY BEAT - Conflicts We Can (Not) Predict: Katharina Emschermann and Nils Metternich discuss what can and cannot be predicted in international security.

In the second episode of the “Berlin Security Beat”, the Centre for International Security's podcast, Dr. Katharina Emschermann, Deputy Director at the Centre, talks to Dr. Nils Metternich, Associate Professor in International Relations at University College London and an expert on civil conflicts and the prediction of their dynamics. They discuss what we can and cannot predict in international security, why a Nobel Peace Prize winner went to war in Ethiopia, the role of forecasting in the policymaking process, and what conflicts to watch in 2021.

 

ICYMI

Maria Ressa on How Social Media Can Destabilize Democracy and Journalism, September 2020

Journalism’s role in balancing power in democracy is being undermined by the spread of disinformation on social media platforms. By allowing any content to be posted online, regardless of its validity, platforms are enabling autocrats to destabilize democratic institutions.

THIS MONTH’S WEBINARS

AI for human rights research and documentation, 10 December 2020

Coinciding with International Human Rights Day, this installment of the series on Responsible AI centers on using AI as a tool for human rights research and documentation. It examines how AI can be used to build a better world and help us stand up for human rights. This series is supported by @Hitachi.USA. Take a look at the recording of a discussion about AI applications that have the potential to benefit human rights causes. From early warning systems that can detect the likelihood of abuse, to image recognition that can identify and process refugees faster and more efficiently when they seek asylum, to machine learning tools that scan websites and newsfeeds for human trafficking operations and alert the proper authorities, there are a number of ways in which AI might remedy the shortcomings of existing technologies or scientific tools. The half-hour program also considers the risk that AI technologies could undermine human rights and what steps need to be taken to ensure violations do not occur.

 

PeaceCon 2020 – Alliance for Peacebuilding, 7-9 December 2020

Over the last 10 years, PeaceCon has grown to become the largest global gathering of peacebuilders held in the United States and provides a dynamic platform for peacebuilders to engage the global affairs community. With the move to an entirely virtual format, PeaceCon 2020 aims to attract an even more diverse set of voices, expertise, and ideas from across the world. You can see some of the video recordings here. We highly recommend: Media Literacy as a Tool of Peacebuilding and Countering Misinformation Sessions by IREX and New America, Technology and Peacebuilding by Red Dot Foundation, Countering Online Disinformation and Hate Speech in Myanmar by FHI 360 and Burma Monitor, and Data for Peacebuilding and Prevention: Sustaining Peace in the World of Emerging Technologies by NYU Center on International Cooperation.

THIS MONTH’S PUBLICATIONS

Technologies for Liberation – Toward Abolitionist Futures, Astraea Foundation

Technologies for Liberation: Toward Abolitionist Futures is rooted in the groundwork of visionary abolitionists who fight to end policing, criminalization, and carceral logics and technologies in all their forms. This report emerged out of the need to better understand the ways in which Queer, Trans, Two-Spirit, Black, Indigenous, and People of Color (QT2SBIPOC) communities are disproportionately impacted by surveillance and criminalization at all levels—from the state-endorsed to the corporate-led—and to resource these communities to push back. Technologies for Liberation: Toward Abolitionist Futures is based on rich interviews and engagement with movement technologists, organizers, researchers, and policy advocates about what liberation from surveillance and criminalization can actually look like.

It includes: 1) Key findings about how movements in the U.S. and Puerto Rico are pushing back against criminalizing technologies and building sustainable, community-centered alternatives; 2) Recommendations for funders looking to support communities proactively transforming and re-imagining new futures outside of criminalization.

 

Locked in and locked out: the impact of digital identity systems on Rohingya populations

This Briefing Paper contextualises Rohingya human rights and protection concerns within the global trajectory towards legal identities for all and the increased digitisation of identification systems. The paper relates Rohingya experiences of registration systems to wider human rights challenges around racial and xenophobic discrimination, digital technologies and borders, as articulated in a recent report by the UN Special Rapporteur on Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance.

 

ICYMI

The weaponisation of synthetic media: what threat does this pose to national security? July 2020

The article deals with the national security implications generated by the weaponisation of AI-manipulated digital content. In particular, it outlines the threats and risks associated with the employment of hyper-realistic synthetic video, audio, images or texts (generally known as ‘synthetic media’ or ‘deepfakes’) to compromise or influence targeted decision-making processes which have national security relevance. It argues that synthetic media would most likely be employed within information and influence operations targeting public opinion or predefined social groups. Other potential targets of national security relevance (specific individuals or organisations that have responsibility for national security) should be adequately equipped to deal with the threat in question, with appropriate procedures, technologies and organisational settings in place.

Follow Us
Follow on LinkedIn
Follow on X (Twitter)

Online Library 

On our website, AI for Peace, you can find even more awesome content, podcasts, articles, white papers and book suggestions that can help you navigate the AI and peace fields. Check our online library!

LIBRARY
