AI FOR PEACE NEWSLETTER
Your monthly dose of news and the latest developments in AI for Peace

JANUARY 2022
Spotlight on the metaverse, AI fighter pilots, surveillance tech, humanitarian digital ethics, indigenous AI, PeaceCon, MozFest and more…

If someone has forwarded this to you and you want to get our Newsletter delivered to you every month, you can subscribe here:
BY AI FOR PEACE

AI FOR PEACE at Basel Peace Forum, 20-21 January 2022
AI for Peace Founding Director Branka Panic joined the 2022 edition of the Basel Peace Forum, organized by Swisspeace. She spoke at the panel discussion “Navigating Fact and Fiction: How to Fight Fake News”. Fake news proliferates easily, particularly in times of political turbulence and instability. Recent examples from the COVID pandemic, as well as from specific contexts such as Myanmar and India, show that although these phenomena are not new, modern technologies have a considerable impact on the scale and speed at which dangerous information reaches audiences. Digital technologies that amplify the spread of harmful information are therefore seen as a growing risk and challenge in places affected by conflict. How can peacebuilders fight misinformation, disinformation, and hate speech in a digitalized world? You can find all the Forum recordings here.
THIS MONTH’S BEST READS

To catch an insurrectionist, Vox, 6 January 2022
A few days after the Capitol insurrection last January, the FBI got two tips identifying an Ohio man named Walter Messer as a participant; both cited his social media posts about being there. To verify those tips, the FBI turned to three companies that held a large amount of damning evidence against Messer, simply as a result of his normal use of their services: AT&T, Facebook, and Google.

Privacy Is Power: How Tech Policy Can Bolster Democracy, Foreign Affairs, 19 January 2022
In reality, few of the policymakers who were present at the creation of the Internet predicted that the hypertext transfer protocol used to load webpages would prove dominant, and even fewer considered what it might take to govern the Internet at scale. Present-day Web users are living with the consequences of their inaction: weaponized social media, cyber-intrusions that prey on the vulnerabilities of Internet architecture, the buying and selling of informed predictions about individual Internet users’ future behavior, and information monopolies that threaten democratic discourse online.

Facebook critics say its metaverse could quickly become a virtual hellscape, Wired, 18 January 2022
Only 6 percent of Arabic hate content is flagged as such when a user posts on Instagram, Politico reported in October. The Facebook algorithms used to identify terrorist content in Arabic wrongly take down posts 77 percent of the time, WIRED reported the same month. And Facebook employs just 766 Arabic-speaking moderators to check posts by 220 million Arabic-speaking users, French daily Le Monde revealed.

Blocking access to Twitter in Nigeria is a flagrant violation of fundamental rights, Access Now, 13 January 2022
After seven months of deliberately blocking access to Twitter, authorities in Nigeria have today lifted the ban on the social media platform. “Ending the ban on Twitter in Nigeria is the right thing to do, but it is incredibly unfortunate that it took the authorities so many months to do so,” said Felicia Anthonio, Campaigner and #KeepItOn Lead at Access Now. “The ban was an unnecessary attack on fundamental rights, while costing the country’s economy over a billion USD.”

The Rise of A.I. Fighter Pilots, The New Yorker, 17 January 2022
A fighter plane equipped with artificial intelligence could eventually execute tighter turns, take greater risks, and get off better shots than human pilots. But the objective of the ACE program is to transform a pilot’s role, not to remove it entirely. As DARPA envisions it, the A.I. will fly the plane in partnership with the pilot, who will remain “in the loop,” monitoring what the A.I. is doing and intervening when necessary. According to the agency’s Strategic Technology Office, a fighter jet with autonomous features will allow pilots to become “battle managers,” directing squads of unmanned aircraft “like a football coach who chooses team members and then positions them on the field to run plays.”

Shifting the narrative: not weapons, but technologies of warfare, ICRC, 20 January 2022
In this post, Klaudia Klonowska, a researcher with the Asser Institute’s DILEMA project, calls for a dramatic shift in what we consider an important tool of warfare: not weapons, but all technologies of warfare. She argues that we need to acknowledge that the choice of technologies may influence offensive capabilities just as much as the choice of weapons.

ICYMI

Stop exported surveillance tech from being used in human rights abuses, The Straits Times
Authoritarian countries use digital technologies, such as facial recognition tools, to suppress and monitor dissidents and control speech. Japan, the United States, and Europe must prevent their technologies from proliferating and being misused for human rights abuses. The United States has announced an initiative to create an international framework to manage exports of surveillance technologies, together with Australia, Denmark, and Norway.

If AI Is Predicting Your Future, Are You Still Free? Wired
These predictive analytics are conquering more and more spheres of life. And yet no one has asked your permission to make such forecasts. No governmental agency is supervising them. No one is informing you about the prophecies that determine your fate. Even worse, a search through the academic literature on the ethics of prediction shows it is an underexplored field of knowledge. As a society, we haven’t thought through the ethical implications of making predictions about people—beings who are supposed to be infused with agency and free will.
THIS MONTH’S WEBINARS AND CONFERENCES

2nd ICT4D Partnerships Conference – January 25-27, 2022
An inclusive VIRTUAL gathering of 700+ global actors involved in digital development and the use of ICT for international development and humanitarian response, to explore and learn about best practices, opportunities, and challenges when working in partnership with others.

PEACECON@10: COVID, CLIMATE, & CONFLICT: RISING TO THE CHALLENGES OF A DISRUPTED WORLD, 26-28 January 2022
This year’s conference comes at a time when the COVID-19 pandemic, climate change, displacement, disinformation, and democratic backsliding are just a few of the disruptions facing the peacebuilding field, and peacebuilders are rising to meet these challenges. The impacts of the global COVID-19 pandemic continue to be felt across the world. Vaccines have not been made widely available to vulnerable communities, while disinformation has fueled vaccine hesitancy among populations globally.

ICYMI

Emerging Technologies in Peacebuilding and Prevention Workshop – Lessons Learned from Humanitarian Actors
Now available on YouTube: 20+ panel sessions and lightning talks from the 2021 Emerging Technologies in Peacebuilding and Prevention Workshop. In December 2021, over 600 participants and speakers came together to have crucial conversations around data ethics and responsibility, advancing peace with GIS, predictive analytics and models for famine, migration, and weather risks, big data in migration and forced displacement, the role of the private sector, and much more.
THIS MONTH’S REPORTS AND PUBLICATIONS

Humanitarian Digital Ethics: A Foresight and Decolonial Governance Approach, Carr Center Discussion Paper Series, Aarathi Krishnan, 20 January 2022
Just as rights are not static, neither is harm. The humanitarian system has long been critiqued as arguably colonial and patriarchal. As these systems increasingly intersect with Western, capitalist technology systems in the race for ‘for good’ technology, how do governance systems ethically anticipate harm, not just now but into the future? Can humanitarian governance systems design mitigation or subversion mechanisms so as not to lock people into future harm, future inequity, or future indebtedness because of technology design and intervention? Instead of looking at digital governance in terms of control, weaving in foresight and decolonial approaches might liberate our digital futures so that they become a space of safety and humanity for all, and through this, birth new forms of digital humanism.

Biometric data flows and unintended consequences of counterterrorism, ICRC, Katja Lindskov Jacobsen, 26 January 2022
Examining the unintended consequences of the making and processing of biometric data in counterterrorism and humanitarian contexts, this article introduces a two-fold framework through which it analyzes biometric data-making and flows in Afghanistan and Somalia. It combines Tilley’s notion of the “living laboratory” and Larkin’s notion of infrastructure into a framework that attends to the conditions under which biometric data is made and to the subsequent flows of such data through data-sharing agreements or unplanned access. To explore such unintended consequences, attention needs to be paid to the variety of actors using biometrics for different purposes, with data flowing across those differences. Accordingly, the article introduces the notion of digital intervention infrastructures, with biometric databases as one dimension.
Gender and Feminist Considerations in Artificial Intelligence from a Developing-World Perspective, with India as a Case Study, SSRN, 6 January 2022
This manuscript discusses the relationship between women and technology, its manifestations, and likely prospects in the developing world. Using India as a case study, the paper discusses how the ontological and epistemological views employed in AI (Artificial Intelligence) and robotics will affect women’s prospects in developing countries. Women in developing countries, notably in South Asia, are perceived as doing domestic work and are underrepresented in high-level professions. They are disproportionately underemployed and face prejudice in the workplace. The purpose of this study is to determine whether the introduction of AI will exacerbate the already precarious situation of women in the developing world or serve as a liberating force. While studies on the impact of AI on women have been undertaken in developed countries, there has been less research in developing countries. This manuscript attempts to fill that gap.
THIS MONTH’S PODCAST CHOICE

MACHINE ETHICS PODCAST – 2021 in review with Merve Hickok, 3 January 2022
This podcast brings together interviews with academics, authors, business leaders, designers, and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology’s impact on society. In this episode, Ben and Merve chat about 2021: EU AI legislation and harmonising AI product markets through policy, the UNESCO principles, systemic dogma, AI ethics in defence, the Reith Lectures and lethal autonomous weapons, demonstrating values and principles, and much more…

TRUMANITARIAN PODCAST – Arms Race for Data, 14 January 2022
AI is transforming the world and will have profound implications for humanitarian action. But how? Will it lend itself to authoritarian regimes controlling their populations, and will humanitarian organisations be complicit in this, creating additional vulnerabilities for the populations we serve? Will it help us create a better user experience for “consumers” of humanitarian aid, and will it help us ensure that we get spare parts for the generator just in time? Listen in as Sarah Spencer from humanitarianai.org and Lars Peter Nissen discuss these and many more questions.

AUGMENTED HUMANITY – Indigenous A.I., 3 January 2022
On this program we’re joined by Michael Running Wolf (Northern Cheyenne, Lakota, and Blackfeet), who was raised in a rural prairie village in Montana with intermittent water and electricity; naturally, he has a Master’s of Science in Computer Science, is a former engineer for Amazon’s Alexa, and is an instructor at Northeastern University. He was raised with a grandmother who spoke only his tribal language, Cheyenne, which, like many indigenous languages, is near extinction. By leveraging his advanced degree and professional engineering experience, Michael hopes to strengthen the ecology of thought represented by indigenous languages.
EVENT ANNOUNCEMENTS

Artificial Intelligence and the Past, Present and Future of Democracy – Thursday, February 3, 2022, 4-5 pm EST
Towards Life 3.0: Ethics and Technology in the 21st Century is a talk series organized and facilitated by Dr. Mathias Risse, Director of the Carr Center for Human Rights Policy and Berthold Beitz Professor in Human Rights, Global Affairs, and Philosophy. Drawing inspiration from the title of Max Tegmark’s book, Life 3.0: Being Human in the Age of Artificial Intelligence, the series draws upon a range of scholars, technology leaders, and public interest technologists to address the ethical aspects of the long-term impact of artificial intelligence on society and human life.

A Social Media Analysis Toolkit for Mediators and Peacebuilders, Build Up – February 9, 2022, 9am EST
Along with our partners at the Centre for Humanitarian Dialogue, we introduce a new toolkit for mediators and peacebuilders on social media analysis, full of case studies, technical tips, and analysis tools and methods. What is possible with this toolkit? What questions do you still have about how social media analysis can be used in peacebuilding initiatives?

MozFest Virtual, 7-11 March 2022
MozFest is a unique hybrid: part art, tech, and society convening, part maker festival, and the premier gathering for activists in diverse global movements fighting for a more humane digital world. Join us online, 7-11 March 2022, to connect with others around the world who share a single mission: a better, healthier internet.
On our website, AI for Peace, you can find even more awesome content: podcasts, articles, white papers, and book suggestions to help you navigate the AI and peace fields. Check out our online library!