AI FOR PEACE NEWSLETTER
Your monthly dose of news and the latest developments in AI for Peace
OCTOBER 2021
Spotlight on AI as a threat to democracy, the Facebook papers, data journalism, misinformation, Ethiopia, climate and conflict data, and more
If someone has forwarded this to you and you would like our newsletter delivered to you every month, you can subscribe here:
THIS MONTH’S BEST READS

Facebook’s role in Myanmar and Ethiopia under new scrutiny, Guardian, 7 October 2021
Whistleblower Frances Haugen’s testimony to US senators on Tuesday shone a light on violence and instability in Myanmar and Ethiopia in recent years, and on long-held concerns about links with activity on Facebook. “What we saw in Myanmar and are now seeing in Ethiopia are only the opening chapters of a story so terrifying, no one wants to read the end of it,” Haugen said in her striking testimony. She warned that Facebook was “literally fanning ethnic violence” in places such as Ethiopia because it was not policing its service adequately outside the US.

The covid tech that is intimately tied to China’s surveillance state, Technology Review, 11 October 2021
Dahua was just one of the Chinese companies able to capitalize on the pandemic. As covid began to move beyond China’s borders in early 2020, a group of medical research companies owned by the Beijing Genomics Institute, or BGI, radically expanded, establishing 58 labs in 18 countries and selling 35 million covid-19 tests to more than 180 countries. In March 2020, companies such as Russell Stover Chocolates and US Engineering, a Kansas City, Missouri-based mechanical contracting company, bought $1.2 million worth of tests and set up BGI lab equipment in University of Kansas Medical System facilities.

Alfred Nobel, Technology, and the End of War, The Diplomat, 8 October 2021
This week, the Nobel Prizes are being announced. The prizes, generally considered to be among the most prestigious international awards in the fields of science, literature, and peace, are named for Swedish industrialist Alfred Nobel, who made a number of important advances in martial technology, including dynamite and smokeless gunpowder. Toward the end of his life, Nobel expressed the view that his efforts had been directed at making war so horrifying that the world would come together and ban it. More than a century later, not only does war remain unbanned, but the way that martial technologies shape our imagination has fundamentally changed.

Americans Need a Bill of Rights for an AI-Powered World, Wired, 8 October 2021
Our country should clarify the rights and freedoms we expect data-driven technologies to respect. What exactly those are will require discussion, but here are some possibilities: your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties; your freedom from being subjected to AI that hasn’t been carefully audited to ensure that it is accurate and unbiased and has been trained on sufficiently representative data sets; your freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace; and your right to meaningful recourse if the use of an algorithm harms you.

What's stunning about the misinformation trend -- and how to fix it, CNN, 7 October 2021
Today, living in society means also swimming in an ocean of misinformation. As Facebook's whistleblower told Congress, the company, on its own, has not been able to cope with the scale and complexity of the problem. What is different today, and by that we mean unprecedented in human history, is the volume and velocity with which anyone can spread misinformation around the world. On a bad day, misinformation about the coronavirus from the Russian and Chinese governments, for example, can reach almost a billion social media accounts and get better engagement and circulation numbers than content from credible news sources. As a result, we got not only foreign interference in the 2016 and 2020 US elections; there were also mobs in India, South Sudan, Myanmar, and Mexico attacking and killing innocent people because of rumors and misinformation spread on Facebook, Snapchat, and WhatsApp.

"Artificial Intelligence is a threat to democracy," says 'iHUMAN' director Tonje Hessen, Free Press Journal, 10 October 2021
We made iHUMAN to highlight the ways in which AI can be used for good, and also to show the world how dangerous things can turn out to be if it is used to spread falsehood. We have seen how deepfakes are used, even by political parties, to manipulate people and push their agenda. AI is just a tool like any other; we can use it for good or bad. It is the power structure behind AI we need to be careful about. We have a tech elite operating like a Mafia, with unlimited money. There is no transparency and no accountability, as they operate with no international regulations. All of this combined is extremely terrifying. The basic question we need to ask is: how much power and influence should a private corporation have?

Pandora Papers & Data Journalism: how investigative journalists use tech, Moonshot, 4 October 2021
Millions of leaked documents and the biggest journalism partnership in history have uncovered the financial secrets of 35 current and former world leaders, more than 330 politicians and public officials in 91 countries and territories, and a global lineup of fugitives, con artists, and murderers. The leaked records reveal that many of the power players who could help bring an end to the offshore system instead benefit from it, stashing assets in covert companies and trusts while their governments do little to slow a global stream of illicit money that enriches criminals and impoverishes nations.

How AI is rising up the ranks of the military, Axios, 23 October 2021
Military dominance in the future won't be decided just by the size of a nation's army, but by the quality of its algorithms. The U.S. still leads on integrating AI into defense, but some competitors, such as China, have advantages of their own, and they are catching up. The Defense Department plans to spend $874 million on AI-related technologies and aims to increase the number of AI-related projects to more than 600, up 50% from current efforts.
THIS MONTH’S REPORTS AND PUBLICATIONS

Empowering Local Communities Using Artificial Intelligence, 5 October 2021
Previous work in citizen science has identified methods of using AI to engage the public in research, such as sustaining participation, verifying data quality, classifying and labeling objects, predicting user interests, and explaining data patterns. That work investigated how scientists design AI systems for citizens to participate in research projects at a large geographic scale in a generalizable way, for example by building applications that citizens around the world can use to complete tasks. In contrast, we are interested in an area that receives significantly less attention: how scientists co-design AI systems "with" local communities to influence a particular geographical region, as in community-based participatory projects. Specifically, this article discusses the challenges of applying AI in Community Citizen Science, a framework for creating social impact through community empowerment at an intensely place-based local scale. We provide insights into this under-explored area to connect scientific research closely to social issues and citizen needs.

Remote Sensing and Artificial Intelligence in the Mine Action Sector, October 2021
Remote sensing and artificial intelligence (AI) technologies are included in discussions of how technology and innovation can improve humanitarian action and international peacekeeping. These technologies have the potential to improve the capacity to assess needs and to monitor changes on the ground, and they can be useful for both the mine action (MA) sector and the broader humanitarian sector. Even though remote sensing and AI are not a silver bullet for MA and come with several challenges (e.g., operational and data protection), the International Committee of the Red Cross (ICRC) and the Geneva International Centre for Humanitarian Demining (GICHD) believe that integrating remote sensing and AI into the MA sector will enhance evidence-based decision making, help determine priorities for survey and clearance of contaminated areas, and enable the scarce resources available for MA activities worldwide to be appropriately directed and used as efficiently as possible.
THIS MONTH’S WEBINARS AND CONFERENCES

Data for Peace Dialogue: Climate & Conflict - Big Data Applications for Climate-Conflict Research & Action
The October 2021 Data for Peace dialogue discussed the results of the Ecological Threat Report 2021: Understanding Ecological Threats, Resilience, and Peace, recently published by the Institute for Economics and Peace, as well as the different ways the peacebuilding and prevention community can use data and data-driven approaches for climate-conflict research, prediction, and prevention.

TEC Talks: Machine Learning and Power, ThinkND, 11 October 2021
Machine learning purports to make accurate predictions and decisions about everything from prison recidivism rates to cancer diagnoses to mortgage approvals. But time and time again, scholars, activists, and journalists have demonstrated that machine learning algorithms often digitize and replicate inaccuracies, historical prejudices, and institutional harms. With so much on the line, machine learning models and the data used to train them may deserve more scrutiny.

Berlin Climate and Security Conference: Making Sense of Climate Data for Peacebuilding, 1 October 2021
It is well established that climate change impacts can pose risks to peace by undermining human security and amplifying other drivers of conflict and fragility. Thanks to the growing availability of high-quality data and computational capacity, our ability to understand the complex, context-specific impacts of climate change on current and future security risks is also growing. Providing practitioners with data-driven insights about the full range of those impacts is crucial to improving anticipatory action to avoid and reduce those risks. In this panel discussion, speakers presented climate data products from the AGRICA project, which provides accessible climate information for operational responses, and discussed the products' contribution and suitability for achieving risk-informed operations with practitioners.

Berlin Climate and Security Conference: How to Get Policy-makers at Different Governance Levels to Conflict Prevention Action, 7 October 2021
This event focused on how to move from identifying water- and climate-related conflict risks and forecasting current and future climate and water conflict hotspots to designing and implementing conflict prevention and mitigation action. Its main aim was to investigate how the various analytical tools available for understanding climate and water security risks can be used in an effective and targeted manner, not only to inform but to actually trigger action on the ground by actors at the local, national, and international levels. It did so by discussing how the analytical tools available have already guided, or can in the future guide, climate and water security interventions, ultimately calling for a move from analysis to action.
THIS MONTH’S PODCAST CHOICE

Facebook is under new scrutiny for its role in Ethiopia's conflict, NPR, 11 October 2021
Hate and division on Facebook are not just a problem in the U.S. That's one of the messages whistleblower Frances Haugen took to Congress last week, where she accused Facebook's algorithms of, quote, "literally fanning ethnic violence in Ethiopia," a country that has endured nearly a year of civil war.

On AiR: IR in the Age of AI - S2E1: Russia, AI, and Emerging Tech, 3 October 2021
Medlir and Chris speak with Leonid Kovachich about Russia’s approach to AI and emerging technology (with YD asking a couple of questions about privacy, data governance, and further improving audit trails in the age of AI).
EVENT ANNOUNCEMENTS

Content & Conflict: The Case of Ethiopia, November 10, 9-10 AM Pacific Time
Join us next week, on November 10th, for a public briefing on "Content & Conflict: The Case of Ethiopia", organized by the Program on Democracy and the Internet's Content Policy & Society Lab (CPSL). Ethiopia's Afar, Amhara, and Tigray regions have been witnessing an armed conflict, with concerns about ethnically charged violence expressed by several human rights groups and institutions. In addition, the communications blackout and the difficulty human rights organizations face in accessing the area have led millions of citizens in Ethiopia and abroad to rely heavily on social media for information about the conflict. At the same time, numerous organizations have been warning against the widespread circulation of unmoderated harmful discourse and content related to the conflict.

2021 HAI Fall Conference, Stanford University, 9-10 November 2021
This year's virtual fall conference features a novel format. We will present and discuss four policy proposals that respond to the issues and opportunities created by artificial intelligence. Each policy proposal will be a radical challenge to the status quo, capable of having a significant and far-reaching positive impact on humanity. The proposals will be presented to a panel of experts from multiple disciplines and backgrounds, who will vet, debate, and judge the merits of each proposal. We will also encourage audience participation throughout.

Emerging Technologies in Peacebuilding and Prevention: Lessons Learned from Humanitarian Actors, Practitioners Virtual Workshop, 1-2 December 2021
This virtual workshop will provide an opportunity for attendees to discuss current and future applications of emerging technologies in peacebuilding and to learn from humanitarian actors and their experiences. The workshop will create a space for sharing both successes and failures, while envisioning together how to increase the use and impact of data and data-driven approaches in peacebuilding and conflict prevention. Mark your calendars to join colleagues and practitioners across the globe as they share their work and lessons learned from emerging technologies in the humanitarian and peacebuilding fields. Registration will be open to all interested participants in late October. Register here.
On our website, AI for Peace, you can find even more great content: podcasts, articles, white papers, and book suggestions that can help you navigate the AI and peace fields. Check our online library!