AI FOR PEACE NEWSLETTER

Your monthly dose of news and the latest developments in AI for Peace

APRIL 2022

Spotlight on Twitter and human rights, AI and geopolitics, satellite companies, facial recognition in war and more

For more resources on the war in Ukraine, see our

Special Edition Newsletter

If someone has forwarded this to you and you want to get our Newsletter delivered to you every month, you can subscribe here:
Subscribe

THIS MONTH’S BEST READS 

Human rights groups raise hate speech concerns after Musk's takeover of Twitter, 25 April 2022

Twitter is not just another company, human rights advocates noted. "Regardless of who owns Twitter, the company has human rights responsibilities to respect the rights of people around the world who rely on the platform. Changes to its policies, features, and algorithms, big and small, can have disproportionate and sometimes devastating impacts, including offline violence," Deborah Brown, a digital rights researcher and advocate at Human Rights Watch, told Reuters in an email.

 

Elon Musk’s Twitter buyout must not come at the expense of human rights, 25 April 2022

Musk has declared himself a “free speech absolutist” and indicated his intentions to minimize content moderation on the platform — a position that puts millions of people at risk and increases the likelihood of Twitter being used as a tool for inciting violence, hate, and harassment – what’s been dubbed Toxic Twitter.

 

Adversarial AI and the dystopian future of tech, VB, 3 April 2022

In an adversarial AI attack, AI is used to manipulate or deceive another AI system maliciously. Most AI programs learn, adapt and evolve through behavioral learning. This leaves them vulnerable to exploitation because it creates space for anyone to teach an AI algorithm malicious actions, ultimately leading to adversarial results. Cybercriminals and threat actors can exploit this vulnerability for malicious purposes.
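As a toy illustration of the idea, a fast-gradient-sign-style attack perturbs an input in exactly the direction that most degrades a model's confidence. The weights, input values, and perturbation size below are all hypothetical, chosen only to make the mechanism visible on a two-feature logistic classifier:

```python
import math

# Toy logistic "classifier": predicts P(benign) from two input features.
# The weights are hypothetical, picked only to illustrate the attack.
W = [2.0, -1.5]
B = 0.5

def predict(x):
    """Probability the model assigns to the 'benign' class."""
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, eps=0.3):
    """Fast-gradient-sign-style attack: step each feature in the
    direction that increases the loss for the true class (label 1)."""
    p = predict(x)
    # For logistic loss with true label 1: d(loss)/d(x_i) = (p - 1) * w_i
    grad = [(p - 1.0) * w for w in W]
    return [xi + eps * (1 if g > 0 else -1) for xi, g in zip(x, grad)]

x = [1.0, 0.5]
print(predict(x))       # high confidence before the attack (~0.85)
x_adv = fgsm_perturb(x)
print(predict(x_adv))   # confidence drops after the perturbation (~0.67)
```

Even this tiny, carefully aimed step knocks the model's confidence in the correct class down noticeably; real attacks apply the same idea to deep networks, which offer many more input dimensions to exploit.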

 

Artificial intelligence is already upending geopolitics, 6 April 2022

These problems are especially acute with AI, as the means by which learning algorithms arrive at their conclusions are often inscrutable. When undesirable effects come to light, it can be difficult or impossible to determine why. Systems that constantly learn and change their behavior cannot be constantly tested and certified for safety.

 

Satellite companies join the hunt for Russian war crimes, 6 April 2022

They are cueing their satellites to pinpoint mass graves, bombed-out hospitals and shattered schools. They are helping to identify military units that have targeted civilians. And their real-time data is being used to deploy investigators, such as those from the International Criminal Court and United Nations, to collect more physical evidence or personal testimony from witnesses on the ground in Ukraine.

 

Developing countries are being left behind in the AI race – and that's a problem for all of us, 13 April 2022

The developed world has an inevitable edge in making rapid progress in the AI revolution. With greater economic capacity, these wealthier countries are naturally best positioned to make large investments in the research and development needed for creating modern AI models. In contrast, developing countries often have more urgent priorities, such as education, sanitation, healthcare and feeding the population, which override any significant investment in digital transformation. In this climate, AI could widen the digital divide that already exists between developed and developing countries.

 

‘Regulation has to be part of the answer’ to combating online disinformation, Barack Obama said at Stanford event, 25 April 2022

Obama told a packed audience of more than 600 people in CEMEX auditorium – as well as more than 250,000 viewers tuning in online – that everyone is part of the solution to make democracy stronger in the digital age and that all of us – from technology companies and their employees to students and ordinary citizens – must work together to adapt old institutions and values to a new era of information. “If we do nothing, I’m convinced the trends that we’re seeing will get worse,” he said.

 

Can Cyber Nukes Usher-In Peace in The Global Digital Space? 22 April 2022

However, the complexity of the circumstances should not deter regulators from scaling back offensive cybersecurity technology, much as happened with nuclear technologies. Cyber-dominant countries like China and Russia have spent years developing cyber-offensive technologies to deploy when the time comes. Deterring Russia's recent cyber warfare, however, is a different story: Ukraine has spent the last ten years defending against cyber-attacks.

 

How Democracies Spy on Their Citizens, 18 April 2022

Establishing strict rules about who can use commercial spyware is com­plicated by the fact that such technology is offered as a tool of diplomacy. The results can be chaotic. The Times has reported that the C.I.A. paid for Djibouti to acquire Pegasus, as a way to fight terrorism. According to a previously unreported investigation by WhatsApp, the technology was also used against members of Djibouti’s own government, including its Prime Minister, Abdoulkadar Kamil Mohamed, and its Minister of the Interior, Hassan Omar.

 

The Future of War in the Age of Disruptive Technologies, 26 April 2022

From the battlefields of Yemen and Ukraine to Syria, Armenia and Azerbaijan, war has reinforced its centrality in the 21st century. These modern conflicts have spread across the land, air, maritime, and cyber domains, fuelled by ethnic antagonisms, territorial claims and geopolitical competition. But, more significantly, they have demonstrated the critical role of disruptive technologies in shaping military doctrines and influencing future battlefield tactics.

 

Bringing facial recognition to war is a bad idea, 29 April 2022

Ukraine’s Ministry of Defense has not said how it will use the technology, according to Reuters, which first reported on the news citing Clearview Chief Executive Officer Hoan Ton-That as its main source. Ukraine’s government has also not confirmed that it was using Clearview, but Reuters reported that its soldiers could potentially use the technology to weed out Russian operatives at checkpoints. Out of Clearview’s database of 10 billion faces, more than 2 billion come from Russia’s most popular social-media network, Vkontakte, allowing the company to theoretically match many Russian faces to their social profile.

THIS MONTH’S WEBINARS AND CONFERENCES

Disaster Mobility Data Network Meeting 9: Mobility Data, Displacement, and the War in Ukraine, 5 April 2022

Since the beginning of the Russian invasion of Ukraine more than 2.5 million people have been displaced as refugees, and nearly 2 million more have been displaced within their country. An estimated 50% of those who were forced to flee Ukraine are children. Mobility data has played an important role in helping to estimate changing areas and rates of population influx and dispersal into many different parts of Europe. Humanitarian agencies and policy makers are using this data in part to make decisions about resource allocation and planning to meet the needs of millions of Ukrainians and others displaced by the war. At the same time, technology providers, governments, and multilateral organizations have made a set of rapid and dramatic decisions about data protection and responsibility, shaping how mobility data can, and in most cases cannot, be accessed and shared about communities in Ukraine.

 

This conversation features representatives from UN OCHA, UNICEF, and the Jackson School of International Affairs at Yale, who discuss data responsibility, displacement analysis, and the limits of mobility data during the war in Ukraine. 

 

Is artificial intelligence the future of warfare? 22 April 2022

We discuss the risks behind autonomous weapons and their role in our everyday lives. “If we’re looking for that one terminator to show up at our door, we’re maybe looking in the wrong place,” says Matt Mahmoudi, Amnesty International artificial intelligence researcher. “What we’re actually needing to keep an eye out for are these more mundane ways in which these technologies are starting to play a role in our everyday lives.”

 

Laura Nolan, a software engineer and a former Google employee now with the International Committee for Robot Arms Control, agrees. “These kinds of weapons, they’re very intimately bound up in surveillance technologies,” she says of lethal autonomous weapons systems or LAWS.

 

Artificial Intelligence and Human Rights Forum

From surveillance and misinformation to facial recognition and foreign influence, advances in Artificial Intelligence have been rapid, raising questions about potential impacts on the rights of people around the globe. Join the AI and Human Rights Forum to hear from some of the world’s top experts on disinformation, online hate and freedom of speech, authoritarian tech, AI ethics and governance, and global cooperation. The event will take place over five days and will feature panel discussions from leading global experts working at the intersection of AI and human rights.

 

Conflict in the Age of the Internet: Mitigating Human Rights Abuses in Russia and Ukraine, 28 April 2022

In the age of the internet, the need for digital platform transparency and accountability is essential in mitigating human rights abuses. These platforms, such as Facebook and Twitter, have so far used a piecemeal response to suppress and amplify different types of content regarding the status of the conflict between Ukraine and Russia — highlighting the real need for researchers and lawmakers to be granted greater access to platform data for research and oversight.

 

ICYMI

From Clickbait to Conflict Resilience: Collaboration Between Tech and Peacebuilding Actors to Build Social Cohesion Online, at the Peacecon@10

Building on recent research and the development of a Council on Technology and Social Cohesion, this panel will focus on sharing lessons learned, elevating examples of digital peacebuilding and positive peacetech, and identifying the opportunities for the tech and peacebuilding sectors to work collaboratively and creatively to improve the impact of technology platforms on social cohesion. This session will feature peacebuilders, tech sector experts, and others working at the intersection of social cohesion and technology.

THIS MONTH’S REPORTS AND PUBLICATIONS

Can Emerging Technologies Lead a Revival of Conflict Early Warning/Early Action? Lessons from the Field, April 2022

The early warning/early action (EWEA) community has been working for decades on analytics to help prevent conflict. The field has evolved significantly since its inception in the 1970s and 80s. The systems have served with variable success to predict conflict trends, alert communities to risk, inform decision makers, provide inputs to action strategies, and initiate a response to violent conflict. Present systems must now address the increasingly complex and protracted nature of conflicts in which factors previously considered peripheral have become core elements in conflict dynamics.

 

The Hard Problem of Prediction for Conflict Prevention, 26 April 2022

In this article we propose a framework to tackle conflict prevention, an issue which has received interest in several policy areas. A key challenge of conflict forecasting for prevention is that outbreaks of conflict in previously peaceful countries are rare events and therefore hard to predict. To make progress on this hard problem, this project summarizes more than four million newspaper articles using a topic model. The topics are then fed into a random forest to predict conflict risk, which is then integrated into a simple static framework in which a decision maker decides on the optimal number of interventions to minimize the total cost of conflict and intervention. According to the stylized model, cost savings compared to not intervening pre-conflict are over US$1 trillion even with relatively ineffective interventions, and US$13 trillion with effective interventions.
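The decision stage of a framework like this can be caricatured in a few lines: intervene wherever the forecast risk, times the conflict cost an intervention is expected to avert, exceeds the cost of intervening. All numbers below are hypothetical placeholders, not figures from the study:

```python
# Minimal sketch of a static intervention rule. Risks stand in for the
# output of a forecasting model (e.g. topics fed into a random forest);
# costs and effectiveness are illustrative placeholders.

def expected_saving(risk, conflict_cost, effectiveness):
    """Expected conflict cost averted by intervening in one country."""
    return risk * conflict_cost * effectiveness

def choose_interventions(risks, conflict_cost, intervention_cost, effectiveness):
    """Intervene in every country where expected savings beat the cost."""
    return [i for i, r in enumerate(risks)
            if expected_saving(r, conflict_cost, effectiveness) > intervention_cost]

# Forecast risks for five hypothetical countries (not real data).
risks = [0.02, 0.10, 0.40, 0.05, 0.70]
picked = choose_interventions(risks, conflict_cost=100.0,
                              intervention_cost=5.0, effectiveness=0.25)
print(picked)  # → [2, 4]: only the high-risk countries clear the threshold
```

The interesting trade-off the paper studies lives in exactly these parameters: cheap or highly effective interventions push the risk threshold down, so more countries qualify and total expected savings grow.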

THIS MONTH’S PODCAST CHOICE 

AI for Peace at On AiR: IR in the age of AI

Medlir and Chris start off our third season talking with AI4Peace's Branka Panic about how AI can contribute to peace and conflict prevention. Plus see our bonus episode this week on YD's AI movie picks, which also led to a lively second discussion.

 

AI and International Criminal Law

Medlir and Chris interview Marta Bo on Autonomous Weapons Systems and accountability under international criminal law, while Young Diogenes talks about the trolley dilemma, the moral machine, and that time when Dwight and Michael end up in a lake.

Follow Us
Follow on LinkedIn
Follow on X (Twitter)

Online Library 

On our website, AI for Peace, you can find even more awesome content, podcasts, articles, white papers and book suggestions that can help you navigate the AI and peace fields. Check our online library!

LIBRARY
