AI FOR PEACE NEWSLETTER
Your monthly dose of news and the latest developments in AI for Peace
SEPTEMBER 2020
Spotlight on democracy in the digital world, AI and gender equality, AI Ethics, AI as an existential threat, Responsible Tech, and Digital Pathways for Peace
For more resources on Democracy, Mis/Disinformation & AI, see our Special Edition Newsletter curated by Rachel Brooks, AI for Peace Research Fellow.
If someone has forwarded this to you and you would like our newsletter delivered every month, you can subscribe here:
THIS MONTH'S SPOTLIGHT

AI FOR PEACE INCLUDED IN THE “Guide to Responsible Tech: How to Get Involved & Build a Better Tech Future”
We are honored that AI for Peace is included in All Tech Is Human's Responsible Tech Guide! The guide is a comprehensive look at the vibrant Responsible Tech ecosystem. Aimed at college students, grad students, and young professionals, the "Responsible Tech Guide" is a mix of advice, career profiles, education journeys, and organizations in the space. It was developed by All Tech Is Human, an organization committed to informing and inspiring the next generation of responsible technologists and changemakers.

“Our goal is to make sure that social media companies take responsibility for combating hate speech, fighting disinformation, and empowering social movements to support democracy and protect human rights. Algorithms that reinforce racism and AI-enabled biased decision making are not only an issue of privacy and discrimination but of peace and security, which we now see more clearly than ever with facial recognition systems used in policing. We, at AI for Peace, advocate for responsible tech, for safeguarding against misuse and unintended consequences, and for protecting not only national security but the human security of all individuals and communities around the world.” – Branka Panic, AI for Peace Founder
BY AI FOR PEACE

AI FOR PEACE AT THE DATA FOR AI WEEK – “The Ethical Side of Data Usage”, 14 September
AI for Peace Founder joined the Data for AI conference, 14-18 September 2020, as a panelist at “The Ethical Side of Data Usage”. Machine learning requires data, and organizations of all types and across industries hold lots of data that is useful for many important tasks. However, enterprises are finding that there are concerns and restrictions on how that data can be used, shared, and applied. The panel explored the ethical side of data usage from an industry perspective.

AI NON-TECHNICAL GUIDE FOR POLICYMAKERS – You can now find and download the Guide at SlideShare
We created the Policymakers Guide to AI with a human-centered approach, explaining AI basics in an engaging way to policymakers and to all interested individuals without expertise in this field. Our goal is to demystify what AI is and to demonstrate how it is already altering our lives and the societies we live in. The Guide offers explanations and additional resources (videos, articles, papers, and tutorials) to help policymakers prepare for current and future AI developments and impacts. It serves as an open resource, welcoming comments and suggestions to make it better and inviting a continuing dialogue on explaining AI and keeping up with its developments. You can also find the Guide in the AI for Peace Website Library.
THIS MONTH’S BEST READS

How democracies can claim back power in the digital world, MIT Tech Review, September 29, 2020
Should Twitter censor lies tweeted by the US president? Should YouTube take down COVID-19 misinformation? Should Facebook do more against hate speech? Such questions, which crop up daily in media coverage, can make it seem as if the main technologically driven risk to democracies is the curation of content by social-media companies. Yet these controversies are merely symptoms of a larger threat: the depth of privatized power over the digital world… There’s a long list of ways in which technology companies govern our lives without much regulation. In areas from building critical infrastructure and defending it (or even producing offensive cyber tools) to designing artificial intelligence systems and government databases, decisions made in the interests of business set norms and standards for billions of people.

‘The Social Dilemma’ Will Freak You Out—But There’s More to the Story, Singularity Hub, September 29, 2020
Dramatic political polarization. Rising anxiety and depression. An uptick in teen suicide rates. Misinformation that spreads like wildfire. The common denominator of all these phenomena is that they are fueled in part by our seemingly innocuous participation in digital social networking. But how can simple acts like sharing photos and articles, reading the news, and connecting with friends have such destructive consequences? These are the questions explored in the new Netflix docu-drama The Social Dilemma. Directed by Jeff Orlowski, it features several former Big Tech employees speaking out against the products they once helped build.

Big Brother Turns Its Eye on Refugees, Foreign Policy, September 2, 2020
By adopting biometric registration systems, aid agencies efficiently provide refugees with an official identity, prevent fraud, and improve the dignity of the refugee aid process. Yet despite those benefits, biometrics also threaten the security of refugees, especially women, in three ways: through a greater risk of false matches; by increasing the potential for discrimination; and by threatening exploitation. The first step in correcting these problems is recognizing them.

Biometrics collection from refugees and vulnerable people questioned by analysts, Biometric Update, September 9, 2020
The use of biometrics and digital identity systems in the humanitarian sector remains highly problematic when viewed through a data justice lens, according to a pair of researchers from the ERC Global Data Justice project at TILT, Tilburg University. The researchers’ paper, ‘Exclusion and inclusion in identification: regulation, displacement and data justice’, explores the situation by examining the effects on displaced populations in Uganda and Bangladesh. The two case studies offer divergent approaches to inclusivity policy, with Uganda shifting towards inclusiveness while Bangladesh restricts participation.

How to put AI ethics into practice: a 12-step guide, WEF, September 11, 2020
Policy-makers and industry actors are increasingly aware of both the opportunities and risks associated with AI. Yet there is a lack of consensus about the oversight processes that should be introduced to ensure the trustworthy deployment of AI systems – that is, making sure that the behaviour of a given AI system is consistent with a set of specifications that could range from legislation (such as the EU's Non-Discrimination Law) to a set of organizational guidelines.

UNESCO completes major progress on establishing foundation of ethics for AI, IT Brief, September 21, 2020
UNESCO’s Member States have announced ‘major progress’ in the development of a global normative instrument for the ethics of artificial intelligence (AI). In November 2019, United Nations Secretary-General Antonio Guterres congratulated the organisation for taking up this challenge, declaring that AI is a critical frontier issue for the whole UN system and the whole world. In March this year, UNESCO asked 24 experts with multidisciplinary experience in the ethics of artificial intelligence to develop a draft recommendation on the ethics of AI. UNESCO then launched a wide process of consultations to obtain the points of view of many stakeholders. This involved experts from 155 countries, members of the public (through a global online survey), United Nations agencies, major stakeholders from the sector such as Google, Facebook and Microsoft, and academia, including Stanford University and the Chinese Academy of Sciences. Read more here.

Is AI an Existential Threat? AI Unite, September 27, 2020
There are many challenges to AI; fortunately, we still have many years to collectively figure out the future path that we want AGI to take. In the short term we should focus on creating a diverse AI workforce that includes as many women as men, and as many ethnic groups with diverse points of view as possible. We should also create whistleblower protections for researchers working on AI, and we should pass laws and regulations that prevent widespread abuse of state or company-wide surveillance. Humans have a once-in-a-lifetime opportunity to improve the human condition with the assistance of AI; we just need to ensure that we carefully create a societal framework that best enables the positives while mitigating the negatives, which include existential threats.

AI ethics must be a continuous practice, says a director at the Oxford Internet Institute, TechJuice, September 30, 2020
“When we think of digital technologies, we cannot disregard their social impact, with respect to the ethical values and principles that underpin our societies. If there is friction between these values and principles and technological innovation, the latter will not be adopted, and it is also likely that this friction will lead to strict policies and regulation.” The process gets trickier and harder to judge when a change in technology, or an increment in the evolution of a specific type of technology, comes along; given the speed at which technological advancements are taking place in this modern era, this problem arises far more often than you would think.
THIS MONTH’S PODCAST EPISODE CHOICE

FUTURE OF LIFE INSTITUTE – Andrew Critch on AI Research Considerations for Human Existential Safety, September 15, 2020
In this episode of the AI Alignment Podcast, Andrew Critch joins us to discuss a recent paper he co-authored with David Krueger titled AI Research Considerations for Human Existential Safety. We explore a wide range of issues, from how the mainstream computer science community views AI existential risk, to the need for more accurate terminology in the field of AI existential safety and the risks of what Andrew calls prepotent AI systems. Crucially, we also discuss what Andrew sees as the most likely source of existential risk: the possibility of externalities from multiple AIs and AI stakeholders competing in a context where alignment and AI existential safety issues are not naturally covered by industry incentives.

YOUR UNDIVIDED ATTENTION – Facebook Goes '2Africa', September 2, 2020
This summer, Facebook unveiled “2Africa,” a subsea cable project that will encircle nearly the entire continent of Africa – much to the surprise of Julie Owono. As Executive Director of Internet Without Borders, she has seen how quickly projects like this can become enmeshed in local politics, as private companies dig through territorial waters, negotiate with local officials, and gradually assume responsibility over vital pieces of national infrastructure. “It’s critical, now, that communities have a seat at the table,” Julie says. We ask her about the risks of tech companies leading us into an age of “digital colonialism,” and what she hopes to achieve as a newly appointed member of Facebook’s Oversight Board.
THIS MONTH’S BOOK REVIEW

An Artificial Revolution: On Power, Politics and AI, Ivana Bartoletti, May 2020
AI has unparalleled transformative potential to reshape society, but without legal scrutiny, international oversight and public debate, we are sleepwalking into a future written by algorithms which encode regressive biases into our daily lives. As governments and corporations worldwide embrace AI technologies in pursuit of efficiency and profit, we are at risk of losing our common humanity: an attack that is as insidious as it is pervasive. Leading privacy expert Ivana Bartoletti exposes the reality behind the AI revolution, from the low-paid workers who train algorithms to recognise cancerous polyps, to the rise of data violence and the symbiotic relationship between AI and right-wing populism. Impassioned and timely, An Artificial Revolution is an essential primer for understanding the intersection of technology and the geopolitical forces shaping the future of civilisation, and the political response that will be required to ensure the protection of democracy and human rights.
THIS MONTH’S PUBLICATIONS

The Black Box, Unlocked: UNIDIR Publishes New Study on Autonomous Weapons and Military AI, UNIDIR, September 22, 2020
The United Nations Institute for Disarmament Research (UNIDIR) today published "The Black Box, Unlocked," a comprehensive study of an issue at the heart of ongoing discussions on the use of advanced artificial intelligence and autonomous weapons in warfare: predictability and understandability.

Artificial Intelligence for Social Good Report, September 2020
This report is based on realities and experiences from Asia and the Pacific, and provides various perspectives on what AI for social good may look like in this region. More importantly, the report offers suggestions from the research community on how policymakers can encourage, use, and regulate AI for social good. You can download the report here.

ICYMI New UNESCO report on Artificial Intelligence and Gender Equality, August 29, 2020
While UNESCO is drafting its Recommendation on the Ethics of Artificial Intelligence, it is important to reflect on how best to integrate gender equality considerations into such global normative frameworks. It is also crucial to examine closely how AI codes of ethics can and should be implemented in practical terms. To explore these questions, UNESCO's Gender Equality Division initiated a Global Dialogue on Gender Equality and AI with leaders in AI, digital technology and gender equality from academia, civil society and the private sector.
EVENTS TO FOLLOW

Keeping our Children Safe with AI, ITU, October 6, 15:00-16:30 CEST
Join us to explore how law enforcement and concerned authorities can use these and other applications of AI to safeguard our children, and help us identify the red line between the need to ensure the safety of our children and the use of potentially invasive technologies by law enforcement. The webinar will also launch a new UNICRI project, supported by the Ministry of Interior of the United Arab Emirates, to further explore these issues.

M&E THURSDAY TALK – Mapping & Measuring COVID-19 Violence Around the World, October 8, 10am EDT
This summer, PeaceTech Lab deployed its COVID-19 Violence Tracker with the help of hundreds of volunteers, who are assisting with tracking, categorizing, and visualizing COVID-19 related violence. The data set includes media reports on xenophobia/racism and misinformation leading to violence, domestic violence, gender-based violence, and conflict over resources, among others. The COVID-19 Violence Tracker has helped PeaceTech Lab analyze what type of violence has been happening and where it is occurring, but this is just a first step. Tim Receveur will discuss how PeaceTech Lab is now working to understand the causes and correlations of the violence, and to connect data, technology tools, and strategies with networks on the ground who can mitigate it.

ICYMI PEACE DIRECT – Digital Pathways for Peace Webinar, September 29, 2020
As peacebuilders place increasing importance on using digital technologies to sustain peacebuilding work in the midst of the COVID-19 pandemic, we convened an online consultation for people across the globe to share insights and knowledge on how to capitalise on the opportunities for peace that digital technologies provide. This webinar presents findings from our latest report, ‘Digital Pathways for Peace: Insights and lessons from a global online consultation’, as well as the perspectives of local peacebuilders operating in this space. You can read the full report here. You can find the Executive Summary here.
On our website, AI for Peace, you can find even more awesome content: podcasts, articles, white papers, and book suggestions that can help you navigate the AI and peace fields. Check out our online library!