AI FOR PEACE NEWSLETTER

Your monthly dose of news and the latest developments in AI for Peace

FEBRUARY 2022

Spotlight on tech and the war in Ukraine, crimes in the Metaverse, mass surveillance in Latin America, the cybersecurity arms race, and more

For more resources on the war in Ukraine, see our

Special Edition Newsletter

If someone has forwarded this to you and you want to get our Newsletter delivered to you every month, you can subscribe here:
Subscribe

THIS MONTH’S BEST READS 

Tech's role in the Ukraine war, Protocol, 25 February 2022

Nothing feels very far away anymore. CNN brought once-distant wars into our living rooms, but TikTok and YouTube and Twitter have put them in our pockets. Following along with what’s happening is now easier than ever, though that’s often fraught with misinformation and lack of context, and social networks are quickly having to figure out what to take down and what to leave up. The war is affecting all of us, whether we know it or not.

 

For more information, see our special edition newsletter on the war in Ukraine and AI, cybersecurity, disinformation on social media, and more.

 

Artificial intelligence technologies have a climate cost, 4 February 2022

The climate impact of AI comes in a few forms: the energy used to train and operate large AI models is one. In 2020, digital technologies accounted for between 1.8 per cent and 6.3 per cent of global emissions. At the same time, AI development and adoption across sectors have skyrocketed, as has the demand for processing power associated with larger and larger AI models. Paired with the fact that governments of developing countries see AI as a silver bullet for solving complex socio-economic problems, we could see a growing share of AI in technology-linked emissions in the coming decades.

 

What Should Be Considered a Crime in the Metaverse? Wired, 28 January 2022

All this raises crucial issues about the ethics of near-term virtual worlds. How should users act in a virtual world? What’s the difference between right and wrong in such a space? And what does justice look like in these societies? Let’s start with virtual worlds that exist already. Perhaps the simplest case is that of single-player video games. You might think that with nobody else involved, these games are free of ethical concerns, but ethical issues still sometimes arise.

 

Meta Wouldn’t Tell Us How It Enforces Its Rules In VR, So We Ran A Test To Find Out, BuzzFeed, 11 February 2022

Meta has said it recognizes this trade-off and has pledged to be transparent about its decision-making. So, to better understand how it is approaching VR moderation, BuzzFeed News sent Meta a list of 19 detailed questions about how it protects people from child abuse, harassment, misinformation, and other harms in virtual reality. The company declined to answer any of them. Instead, Meta spokesperson Johanna Peace provided BuzzFeed News a short statement: “We’re focused on giving people more control over their VR experiences through safety tools like the ability to report and block others. We’re also providing developers with further tools to moderate the experiences they create, and we’re still exploring the best use of AI for moderation in VR. We remain guided by our Responsible Innovation Principles to ensure privacy, security and safety are built into these experiences from the start.”

 

Stop normalizing mass surveillance in Latin America, Access Now, 4 February 2022

In many cities around the world, when you go out in public, you are unknowingly exposing yourself to surveillance, including the use of mass surveillance tools that record, analyze, and store your personal biometric data — your face, your voice, the way you walk, and more. Even those who know they may be under surveillance mostly have no idea how their personal data is being used or who has access to it. And in countries across Latin America, both governments and the companies that develop this type of technology refuse to be transparent, leaving citizens in the dark about the privacy violations and threats they face.

 

How AI is shaping the cybersecurity arms race, The Conversation, 23 February 2022

There are two main ways AI is bolstering cybersecurity. First, AI can help automate many tasks that a human analyst would often handle manually. These include automatically detecting unknown workstations, servers, code repositories and other hardware and software on a network. It can also determine how best to allocate security defenses. These are data-intensive tasks, and AI has the potential to sift through terabytes of data much more efficiently and effectively than a human could ever do.
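The article stays at a high level, but the first task it names (spotting unfamiliar hosts and unusual activity in large volumes of telemetry) can be illustrated with a small, hypothetical sketch. This is not from the article: it assumes per-host features have already been extracted from network data and uses scikit-learn's IsolationForest as one plausible anomaly detector; the feature names, host names, and contamination value are all assumptions.

```python
# Hypothetical sketch: flag hosts whose behaviour looks anomalous on a network.
# Assumes simple per-host features (MB sent per hour, distinct ports contacted,
# failed logins) have already been extracted from network telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

host_names = ["ws-01", "ws-02", "ws-03", "srv-01", "unknown-host"]
host_features = np.array([
    [12.0,   5,  0],
    [15.0,   6,  1],
    [11.5,   4,  0],
    [14.2,   7,  0],
    [950.0, 143, 37],   # heavy traffic, wide port fan-out, many failed logins
])

# Unsupervised anomaly detection; contamination is the expected share of outliers.
detector = IsolationForest(contamination=0.2, random_state=0)
labels = detector.fit_predict(host_features)  # 1 = looks normal, -1 = anomalous

for name, label in zip(host_names, labels):
    if label == -1:
        print(f"Flag {name} for analyst review")
```

In practice this is only the triage step the article describes: the model narrows terabytes of telemetry down to a short list a human analyst can actually review.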

 

In this age of climate crisis, humanitarians need to learn to love tech, TNH, 23 February 2022

In some places, local actors are already taking advantage of 21st century advances that have yielded new ways to help predict climate events before they become crises, or that have transformed when, how, and where humanitarian aid can be delivered. As a sector, we have long been suspicious of tech, wary of the risks it poses and sceptical of the opportunities it brings. There can be good reasons for this – the recent Red Cross hack brought to light the dangers of storing vulnerable people’s identities online. Limited ethical frameworks and metrics for success have also fostered an understandable reticence.

 

Technology can speed up humanitarian action, 23 February 2022

Technology today is evolving at an extraordinary and accelerating pace and is changing the very way we live and work. Its ability to assist humanitarian action in low-income countries has alerted donors, practitioners and governments to its potential. Southern Africa is the current focus of humanitarian concern and a number of Anticipation and Disaster Risk Financing systems, for instance, are being deployed to avert potential crises.

 

Cyberattacks: a real threat to NGOs and nonprofits, ReliefWeb, 22 February 2022

The recent cyberattack affecting the International Committee of the Red Cross (ICRC) has put a media spotlight on the threat to the humanitarian sector. Sadly, our experience shows that cyberattacks in this sector are not rare. We look at the risk to NGOs and how they can prepare and defend against the growing proliferation of cyberthreats.

THIS MONTH’S WEBINARS AND CONFERENCES

ENP Webinar: Humanitarian Negotiation and Technologies - Threats and Opportunities, 15 February 2022

While much has been written about the role of technology in humanitarian action, its implications for humanitarian negotiation remain understudied. The Humanitarian Negotiation and Technologies – Threats and Opportunities webinar offers a platform for a diverse panel to hold an exploratory discussion about how the rapidly changing digital landscape affects humanitarian negotiation now and in the future. Panelists will also discuss the significance of tech-centered tools in negotiation, the challenges and opportunities they present, and areas of exploratory research and action for the future.

 

Mapping Technologies for Peace and Conflict: From the Weaponization of Social Media to Digital Peacebuilding and Peacetech, ThinkND, 1 February 2022

This presentation will provide a survey of the best and worst impacts of technology on peacebuilding. Beginning with an overview of how social media platforms’ profit models and designs amplify hate and undermine democracy, the lecture will explore social media’s impact in a dozen countries in the Global South that offer insight into the devastating effects of technology.

 

A Social Media Analysis Toolkit for Mediators and Peacebuilders, 9 February 2022

This toolkit is a practical how-to guide for mediators and peacebuilders who want to conduct their own social media analysis, offering an overview of what is possible, a practical guide to a handful of technology tools, and suggestions on analysis methods. Developed in collaboration with the Centre for Humanitarian Dialogue, its objective is to offer a pathway for peacebuilders and mediators to go from social media data to programming insights. Access the toolkit here.
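The toolkit itself walks through specific tools and methods; as a rough, hypothetical illustration of the kind of first-pass step it points toward (going from raw posts to a simple programming insight), the sketch below counts hashtag frequencies in a handful of invented posts. The posts, hashtags, and the choice of frequency counting are assumptions for illustration, not content taken from the toolkit.

```python
# Hypothetical sketch: a first-pass hashtag frequency count over collected posts,
# the sort of simple signal a mediation team might examine before deeper analysis.
import re
from collections import Counter

posts = [
    "Dialogue session announced for next week #peacetalks #ceasefire",
    "Rumours spreading again about the border incident #ceasefire",
    "Community leaders call for calm #peacetalks",
]

hashtag_counts = Counter(
    tag.lower()
    for post in posts
    for tag in re.findall(r"#\w+", post)
)

for tag, count in hashtag_counts.most_common():
    print(tag, count)
```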

THIS MONTH’S REPORTS AND PUBLICATIONS

Taking Stock of Early Warning for Atrocity Prevention, February 2022

Adapting to COVID-19 restrictions, the 2021 Sudikoff Seminar took the form of three webinars, each focused on a particular aspect of early warning: statistical methods for risk assessment and early warning, qualitative early warning assessments, and communicating about risk. Read the rapporteurs’ reports on each of the seminars. Selected seminar participants reflect on key themes from the seminar and offer recommendations for the future direction of the field. Read more here.

 

State of AI Ethics, Volume 6, MAIEI, February 2022

The State of AI Ethics Report (Volume 6) is MAIEI’s most comprehensive report yet, running nearly 300 pages. Our goal with these chapters is to provide an in-depth analysis of each area (though by no means exhaustive, given the richness of each of these subdomains), along with a breadth of coverage for those looking to save hundreds of hours parsing the latest research and reporting in the domain.

 

The Role of AI in the Battle Against Disinformation, 2022

Detecting and countering disinformation grows increasingly important as efficient disinformation campaigns lead to negative real-world consequences on a global scale, both in politics and in society. Machine learning (ML) methods have demonstrated their potential for at least partial automation of disinformation detection and analysis. In this report, we review current and emerging artificial intelligence (AI) methods that are used or can be used to counter the spread and generation of disinformation, and briefly reflect on ongoing developments in anti-disinformation legislation in the EU. This overview will shed light on some of the tools that disinformation-countering practitioners could use to make their work easier.
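The report surveys a wide range of methods; as a minimal, hypothetical illustration of the simplest of them (supervised text classification), the sketch below trains a TF-IDF plus logistic regression pipeline on a tiny invented dataset. The example texts and labels are made up for illustration and are not drawn from the report; real systems rely on far larger multilingual datasets and human review.

```python
# Hypothetical sketch: a minimal supervised classifier for flagging posts that
# may warrant fact-checking. Shows only the basic pattern, not a deployable system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Official health agency publishes updated vaccination schedule",
    "Local council confirms road closure for scheduled repairs",
    "Secret cure suppressed by doctors, share before it is deleted",
    "Leaked proof the election results were fabricated, spread this now",
]
train_labels = [0, 0, 1, 1]  # 0 = likely reliable, 1 = likely disinformation

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["Share this hidden cure they do not want you to see"]))
```

Output like this would only ever be a first filter; the report's framing is that such tools make practitioners' work easier, not that they replace human judgment.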

THIS MONTH’S PODCAST CHOICE 

The Existential Hope Podcast: Christine Peterson | On a Positive Turning Point for Human Longevity, Foresight Institute, 14 February 2022

In the first episode of the Existential Hope Podcast, we interviewed Christine Peterson, co-founder and former President of Foresight Institute. We talked about everything from cryopreserved pets and sci-fi reading recommendations to the future of longevity.

 

ICYMI

David Chalmers on Reality+: Virtual Worlds and the Problems of Philosophy, Future of Life, January 2022

David Chalmers, Professor of Philosophy and Neural Science at NYU, joins us to discuss his newest book Reality+: Virtual Worlds and the Problems of Philosophy. Topics discussed in this episode include: Virtual reality as genuine reality; Why you can live a good life in VR; Why we can never know whether we’re in a simulation; Consciousness in virtual realities; The ethics of simulated beings.

EVENT ANNOUNCEMENTS 

AI for Peace at MozFest - ’Do no harm’ in the algo age, March 9, 2022, 1-2pm ET

Preventing the outbreak, escalation, and continuation of conflict and harm while using new tools (‘do no harm’) is a huge task, and a more integrated, strategic, and coherent approach across different sectors and actors is needed to sustain peace and protect vulnerable populations. This session will offer a place for practitioners from both the AI and peacebuilding fields to discuss how existing ‘do no harm’ and ‘conflict sensitivity’ mechanisms can be “upgraded” for the algorithmic age, to guide the ethical programming of organizations planning to utilize AI and related technologies in their programs in conflict- and violence-affected countries. Register here.

 

Participatory Action Research, Polarization, and Social Media: Ongoing lessons from the Digital MAPS project, March 16, 2022, 9 am ET

Digital Media Arts for an Inclusive Public Sphere (Digital MAPS) is an innovative program funded by the British Council, supporting partners to conduct social media mapping and analysis and to design and implement pilot interventions that counter polarization and promote inclusivity and openness in the networked public sphere. Join us for this webinar, which will share the participatory action research process and select findings from the social media mapping and analysis done with 19 partnering organizations and social media content creators in Iraq, Jordan, Lebanon, Libya, the Occupied Palestinian Territories, Syria, Yemen, and Tunisia. We'll also share opportunities, challenges, and lessons from this practitioner-led social media analysis process.

Follow Us
Follow on LinkedIn
Follow on X (Twitter)

Online Library 

On our website, AI for Peace, you can find even more awesome content: podcasts, articles, white papers, and book suggestions to help you navigate the AI and peace fields. Check our online library!

LIBRARY
