AI FOR PEACE NEWSLETTER
Your monthly dose of news and the latest developments in AI for Peace
JULY 2021
Spotlight on AI and climate action, "AI Cold War", AI and civil society, AI in Mexico, India, Syria, Myanmar, Ethiopia, and more…
For more resources on Technology, Bias, and Racial Justice, see our Special Edition Newsletter, curated by Amanda Luz, Jeremy Pineda, Loren Crone, and Stephanie Hilton.
If someone has forwarded this to you and you want our Newsletter delivered to you every month, you can subscribe here:
BY AI FOR PEACE

New Blog Entry: Conflict due to climate change: a univariate causal analysis, by Yared Hurisa, AI for Peace Adviser
“In this series of blog posts, we will investigate the role of climate change in the conflict in Ethiopia using various quantitative methods. Our research approach will move from simple to complex: we first implement a descriptive and correlation analysis between climate change, as approximated by changes in a drought index (the Evaporative Stress Index), and conflict, measured as the occurrence of armed conflict events over a period of time. In future posts, we will relax this assumption to examine the effect of various factors, such as demographic and economic variables, on conflict by implementing advanced mathematical and machine learning algorithms.” Read more here.

ICYMI Blog Entry: Reflections on AI for Peace, by Irene Bratsis, Women in AI New York City
“…But I’m an optimist at heart. I believe we can, and that we really really should, find ways to combine the common good with these exciting new technical capabilities. For every initiative that uses the power of data to, say, maximize ad revenue, there ought to be one that’s using the power of data to educate citizens, empower democracies and enable global peace. Humanitarian aid, peacebuilding and peacekeeping are areas that are laden with bureaucratic red tape; this is precisely why they’re ripe for the kind of revolution AI can enable, particularly if it’s powered by grassroots efforts...” Read more here.
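The blog's first step, a simple correlation between a drought index and armed conflict events, can be sketched in a few lines. This is an illustrative example with made-up numbers, not the blog's actual data or code:

```python
import numpy as np

# Hypothetical monthly series for one region: a drought index
# (e.g. the Evaporative Stress Index, where lower means drier)
# and counts of armed conflict events in the same months.
esi = np.array([0.9, 0.7, 0.5, 0.4, 0.3, 0.35, 0.6, 0.8])
conflict_events = np.array([2, 3, 5, 6, 8, 7, 4, 2])

# Pearson correlation between the two series; a strong negative
# value would suggest conflict rises as the drought index falls.
r = np.corrcoef(esi, conflict_events)[0, 1]
print(f"correlation: {r:.2f}")
```

Of course, correlation alone says nothing about causation, which is why the blog series goes on to add demographic and economic controls and more advanced methods.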
THIS MONTH’S BEST READS

The world’s social media giants admit they can’t protect women online, Quartz, 1 July 2021
Facebook, Twitter, TikTok, and YouTube made their first joint commitment to curb the harassment women face on their platforms, according to the World Wide Web Foundation. The social media giants pledged on July 1 to give users more granular control over who interacts with their posts and to improve their reporting processes, for example by giving users the ability to track their harassment reports during each stage of review.

Purdue Launches Nation’s First Tech Tank Focused on Intersection of Technology and Diplomacy, 7 July 2021
The Purdue Research Foundation launched the Center for Tech Diplomacy at Purdue (CTDP), a new think tank at the intersection of technology and U.S. foreign policy. CTDP sees 21st-century diplomacy as uniquely driven by technology capabilities, which should advance freedom, democracy, and human rights, as well as U.S. national security and prosperity.

Why tech needs to focus on the needs of marginalized groups, WEF, 8 July 2021
When discussing the impact of technology on marginalized groups, all too often the focus is on “fixing technology” to address harms against those groups. This narrative is fundamentally flawed: it is based on the premise that technology is imposed on marginalized groups by some third party, at best unaware, at worst indifferent, to how the technology will affect these groups.

If a killer robot were used, would we know? 4 July 2021
The likely best way to verify autonomous weapons use is by inspecting the weapon itself. If investigators retrieve a weapon, they can study it through digital forensic techniques and look for evidence in data logs, such as flight path data, photo or video processing records, received orders, or other indicators that the weapon allows autonomous control of firing decisions and that such control was used. While such a forensic investigation could positively verify use, it could not prove the opposite: that an autonomous attack had not occurred. What if a seized weapon was used manually, but others in a battle were used autonomously? Furthermore, investigators may have no access to the weapon: an autonomous gun turret like South Korea’s SGR-A1 may stay in the control of the military that used it; a used weapon may not be recovered; the data on the weapon may be corrupted, spoofed, or deliberately wiped; or the weapon may be destroyed in the attack.

Facebook often removes evidence of atrocities in countries like Syria and Myanmar – but we can preserve it, 15 July 2021
Nearly half of the world’s population owns a smartphone. For those living in conflict zones or suffering human rights violations, these devices are crucial. They help ordinary people record and share the atrocities they witness, alerting the world to their plight and holding to account those responsible for crimes against humanity. Yet when they come to post this vital digital evidence on social media platforms, citizens often find their posts censored and permanently removed. Companies such as Facebook have no obligation to preserve the evidence, and have been accused of rushing to moderate content on an ad hoc, sometimes incoherent basis.

How Mexico’s traditional political espionage went high-tech, 21 July 2021
In 2017, investigators discovered traces of Pegasus spyware on the phones of several Mexican journalists and civic activists. The government acknowledged it had used Pegasus, but only, officials said, to fight criminals. Amid the backlash, the Justice Ministry stopped using the surveillance tool. Four years later, Pegasus has become the most prominent symbol of an explosion of high-tech political spying in Mexico. And yet the mystery around its use has only deepened.
Why Civil Society Needs to Pay Attention to AI, NPQ, 15 July 2021
A refugee and a Black woman, Gebru is also passionate about social justice, and as she moved to the center of the AI field, she was seeing its role in the growth of injustice. Her response to this growing awareness of the harms: “I’m not worried about machines taking over the world. I’m worried about groupthink, insularity, and arrogance in the AI community. If many are actively excluded from its creation, this technology will benefit a few while harming the great many.”

India sent Twitter the most requests for account information, 15 July 2021
Twitter’s transparency report noted a 26% increase in India’s requests to remove content from the accounts of verified journalists and media outlets. India made the most requests to Twitter seeking information about accounts, the company’s transparency report for July to December 2020 said on Wednesday. Twitter said it was the first time since it started publishing the report in 2012 that the United States was not at the top of the list. Japan (17%) and France (14%) followed India and the US in seeking information about accounts, according to the report.

Don’t Let Police Arm Autonomous or Remote-Controlled Robots and Drones, EFF, 16 July 2021
It’s no longer science fiction or unreasonable paranoia. Now, it needs to be said: no, police must not be arming land-based robots or aerial drones. That’s true whether these mobile devices are remote-controlled by a person or autonomously controlled by artificial intelligence, and whether the weapons are maximally lethal (like bullets) or less lethal (like tear gas).

What’s happening in Tigray? Internet shutdowns avert accountability, Access Now, 29 July 2021
Access Now and the #KeepItOn coalition are calling on all parties in Ethiopia’s Tigray conflict to cease any attempts to censor the population and conceal war crimes through internet shutdowns.
Since the start of the conflict in November 2020, internet and telecommunication shutdowns and website blocking have been used as weapons of information control and censorship by the parties involved. To date, broadband and mobile internet remain off, and Tigray remains blacked out.

Four signs Mexico is embracing digital authoritarianism, Access Now, 28 July 2021
The Pegasus revelations are making headlines across the globe. But in Mexico, the use of NSO Group’s spyware to attack civil society is relatively old news, and there is a lot more to worry about when it comes to digital rights. Lawmakers are implementing harmful public policies, excluding civil society organizations (CSOs) from most conversations, and taking an offensive stance toward critics. This has resulted in a digital rights crisis in Mexico, with lawmakers imitating the policies and tactics of authoritarian regimes. Below, we explore four key developments and explain why the international community should be paying attention to what’s going on.
THIS MONTH’S REPORTS AND PUBLICATIONS

Publication: Is there an AI cold war? Hertie School Centre for Digital Governance, July 2021
This new publication by Joanna Bryson and Helena Malikova documents and addresses the claims of a new AI cold war: a binary competition between the United States and China. Bryson and Malikova argue that while some of the claims of this narrative are based at least in part on genuine security concerns and important unknowns, evidence for its extreme binary nature is lacking. Read the full policy brief here.

National Power After AI, CSET, July 2021
AI technologies will likely alter great power competitions in foundational ways, changing both how nations create power and their motives for wielding it against one another. This paper is a first step toward thinking more expansively about AI and national power and seeking pragmatic insights for long-term U.S. competition with authoritarian governments.

Machine Learning and Mobile Phone Data Can Improve the Targeting of Humanitarian Assistance, NBER, July 2021
The COVID-19 pandemic has devastated many low- and middle-income countries (LMICs), causing widespread food insecurity and a sharp decline in living standards. In response to this crisis, governments and humanitarian organizations worldwide have mobilized targeted social assistance programs. Here we show that non-traditional “big” data from satellites and mobile phone networks can improve the targeting of anti-poverty programs. Our approach uses traditional survey-based measures of consumption and wealth to train machine learning algorithms that recognize patterns of poverty in non-traditional data; the trained algorithms are then used to prioritize aid to the poorest regions and mobile subscribers.
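The NBER paper's targeting pipeline (train on a small survey sample, predict poverty for everyone from phone data, then prioritize the predicted poorest) can be sketched roughly as follows. All data here is synthetic, and the plain linear model is a simplified stand-in for the paper's actual machine learning algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: for a small survey sample we observe both
# mobile-phone features (e.g. top-up amounts, call volume) and a
# survey-based consumption measure. All numbers are illustrative.
n_survey = 200
phone_features = rng.normal(size=(n_survey, 3))
true_weights = np.array([0.8, -0.5, 0.3])
consumption = phone_features @ true_weights + rng.normal(scale=0.1, size=n_survey)

# Fit a simple linear model mapping phone features to consumption.
X = np.column_stack([phone_features, np.ones(n_survey)])
weights, *_ = np.linalg.lstsq(X, consumption, rcond=None)

# Apply the trained model to all subscribers (no survey needed) and
# prioritize the predicted-poorest 10% for assistance.
n_all = 5000
all_features = rng.normal(size=(n_all, 3))
predicted = np.column_stack([all_features, np.ones(n_all)]) @ weights
cutoff = np.quantile(predicted, 0.10)
targeted = predicted <= cutoff
print(f"subscribers prioritized for aid: {targeted.sum()}")
```

The key design idea is that the expensive, slow signal (household surveys) is only needed for training; once fitted, the model scales the targeting to an entire subscriber base at negligible marginal cost.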
THIS MONTH’S WEBINARS AND CONFERENCES

Using machine learning to forecast & understand forced displacement; Forecasting forced displacement, at NYU CIC Conflict Early Warning/Early Action Practitioners Workshops, 18-21 May 2021

PITCH 1: The Danish Refugee Council
The Foresight analysis platform resulting from this partnership is designed to inform strategic planning and scenario-building exercises by providing accurate forecasts of the total number of forcibly displaced people from a given country 1 to 3 years into the future, based on a machine learning model, while also providing a Bayesian network model that analyses the interlinkages between key drivers of displacement. This session showcased the tool and its results, as well as how it can be used and accessed.

PITCH 2: Kimetrica
Insight into the dynamics of forced displacement over time and space is important for supporting humanitarian and development decisions in affected countries. Kimetrica’s model is an attempt to solve the predictive problem in the space of forced displacement. It uses Site Assessment (SA) tracking data from IOM’s Displacement Tracking Matrix (DTM), which tracks the number and multisectoral needs of internally displaced persons (IDPs) on an approximately quarterly basis.

2021 Data Fellows Programme Showcase, OCHA Centre for Humanitarian Data, 29 July 2021
The fourth year of the Centre’s Data Fellows Programme was held remotely in June and July 2021. In this year’s Data Fellows Programme Showcase, hear from:
- Julia Janicki on developing a data story about the climate crisis in the Sahel;
- Roberta Rocca on researching potential applications for complex system models to better understand and respond to humanitarian needs;
- Murray Garrard on designing a strategic communications campaign to support the adoption of data responsibility within OCHA;
- Kasia Chmielinski on exploring new ways to communicate the quality of data on the Humanitarian Data Exchange.
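To make the forecasting idea behind tools like Foresight concrete: given a yearly series of displacement totals, even a minimal autoregressive model can be rolled forward 1 to 3 years. The numbers below are invented, and this one-lag linear fit is a deliberately simplified stand-in for the platform's actual machine learning and Bayesian network models:

```python
import numpy as np

# Illustrative yearly totals (in thousands) of forcibly displaced
# people from one country. Synthetic numbers, not real DRC/IOM data.
displaced = np.array([120, 135, 150, 180, 210, 260, 300], dtype=float)

# Minimal autoregressive model: predict next year's total from this
# year's via a least-squares fit of y[t+1] against y[t].
x, y = displaced[:-1], displaced[1:]
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

# Roll the fitted model forward to forecast 1 to 3 years ahead.
forecasts, last = [], displaced[-1]
for _ in range(3):
    last = slope * last + intercept
    forecasts.append(round(last))
print(forecasts)
```

Real systems add many exogenous drivers (conflict intensity, economic and demographic indicators) and quantify uncertainty; the value of pairing such a forecast with a Bayesian network, as Foresight does, is that the network explains *why* the numbers move, not just where they are headed.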
Data for Peace Dialogue: Using AI for Fighting Modern Slavery – Lessons for Peacebuilding, NYU CIC, 27 July 2021
Over 40 million people today are trapped in modern slavery and conditions of severe exploitation worldwide. One in four of them are children, and almost 71 percent are women and girls. Governments are already behind on their commitment to eradicate modern slavery and achieve UN Sustainable Development Goal 8.7 by 2030. Data about this form of human rights abuse and vulnerability can be difficult to collect, and patterns of exploitation difficult to see. On top of that, armed conflict, natural disasters, and other humanitarian settings increase vulnerability to certain forms of forced labor, modern slavery, human trafficking, and child labor. In response, some organizations and researchers are looking into available data and the potential of computational science, artificial intelligence, and machine learning tools to help stop modern slavery.
THIS MONTH’S PODCAST CHOICE

YOUR UNDIVIDED ATTENTION – A Facebook Whistleblower
In September 2020, on her last day at Facebook, data scientist Sophie Zhang posted a 7,900-word memo to the company’s internal site. In it, she described the anguish and guilt she had experienced over the previous two and a half years. She’d spent much of that time almost single-handedly trying to rein in fake activity on the platform by nefarious world leaders in small countries. Sometimes she received help and attention from higher-ups; sometimes she got silence and inaction. “I joined Facebook from the start intending to change it from the inside,” she said, but “I was still very naive at the time.”

ARE YOU A ROBOT – S7E3: How to Use Data as a Tool of Empowerment // Sarah Williams
In our interview with Sarah, we discuss whether we can fully remove biases from data sets, and how we can decolonise data structures. Sarah has worked on multiple amazing projects that do just that, for example Digital Matatus, which shows how to harness the omnipresence of mobile technology in developing countries to collect data for infrastructure. This data created a new, successful transit map for Nairobi and has sparked similar projects in other cities.
EVENT ANNOUNCEMENTS

Deadline for agenda submissions extended to August 13 for NetHope’s 20th Anniversary Summit
Do you have a powerful story of partnership or collaboration that you think will benefit the sector? August 13 is the new deadline for agenda submissions for NetHope’s Virtual Summit, which runs November 15-19. Don’t miss this opportunity to join our lineup of outstanding speakers and share your knowledge with an engaged audience of NGO leaders.

Peacebuilding Responses to Online Harm – New Research on Social Media’s Challenges, from Analyzing Conflict Dynamics to Addressing Harms to Users, 17 August, 11am EDT
Where does most violent online content occur? How do people interact with it, and why do some engage its perpetrators? Moreover, how do we understand social media dynamics in a particular country context? What are the attributes that make it dangerous, or that make it a tool for peacebuilding? SFCG will present their report “Handling Harmful Content Online: Cross-National Perspectives of Users Affected by Conflict” and Mercy Corps will present their report “Social Media and Conflict: Understanding Risks and Resilience.” SFCG and Mercy Corps are both institutional members of the Digital Peacebuilding Community of Practice, coordinated by the Alliance for Peacebuilding.

Artificial Intelligence: The New Frontier of Business and Human Rights, Asser Institute, The Hague, 7-8 September 2021
What should businesses be doing to respect human rights in the development and use of AI? A great lineup of scholars from around the world will share their insights on various aspects of corporate responsibility in the context of AI, from human rights due diligence to quantum AI. Registration is free; just send an email to c.l.lane@rug.nl. Organised by the Business and Human Rights Working Group of the Netherlands Network for Human Rights Research and the Asser Institute.
BOOK RECOMMENDATION

Living in Data: A Citizen’s Guide to a Better Information Future, by Jer Thorp
To live in data is to be incessantly extracted from; to be classified and categorized, statisti-fied, sold, and surveilled. Data (our data) is mined and processed for profit, power, and political gain. Our clicks and likes and footsteps feed new digital methods of control. In Living in Data, Thorp asks a crucial question of our time: how do we stop passively inhabiting data, and become active citizens of it? In this provocative book, Thorp brings his work as a data artist to bear on an exploration of our current and future relationship with data, transcending facts and figures to find new, more visceral ways to engage with it. Threading a data story through hippo attacks, glaciers, and school gymnasiums; around colossal rice piles and over active minefields, Living in Data keeps humanity front and center. Thorp reminds us that the future of data is still wide open; that there are stories to be told about how data can be used, and by whom.
On our website, AI for Peace, you can find even more great content: podcasts, articles, white papers, and book suggestions to help you navigate the fields of AI and peace. Check out our online library!