AI FOR PEACE NEWSLETTER
Your monthly dose of news and the latest developments in AI for Peace
AUGUST 2020
Spotlight on AI crime threats, facial recognition, AI for the social sector, AI for detecting hate speech, AI for All in India, and more.
For more resources on Democracy, Mis/Disinformation & AI, see our Special Edition Newsletter curated by Rachel Brooks, AI for Peace Research Fellow.
If someone has forwarded this to you and you would like our Newsletter delivered to you every month, you can subscribe here:
BY AI FOR PEACE

AI FOR PEACE IN FORBES - Is AI a Force for Good? Interview with Branka Panic, Founder and Executive Director at AI for Peace, 23 August 2020, Forbes
Participating in an upcoming panel at the Data for AI Conference, Branka Panic, Founder and Executive Director at AI for Peace, shares why she started an organization focused on making sure that AI provides lasting positive benefits. She provides details about the AI for Good movement, the challenges companies may need to overcome as they approach AI for good, and what organizations can do to minimize the risks of creating AI with unintended consequences.

AI Today Podcast: Interview with Branka Panic, Founder and Executive Director at AI for Peace
AI can be an immense tool for augmenting human capabilities to tackle some of the world’s greatest challenges. However, as organizations and governments increasingly use and adopt AI for a variety of use cases, it is important to have discussions around the ethics of AI and to make sure these systems are being used for good. In this episode of the AI Today podcast we interview Branka Panic, Founder and Executive Director at AI for Peace.

AI for Peace at the WOMEN IN AI ETHICS ANNUAL EVENT – “AI for Refugee Communities”
Pervasive bias in algorithms and flawed Artificial Intelligence (AI) systems pose significant risks to humanity. These problems require urgent discussion and concrete action to reduce harm, increase accountability, and collectively shape the outcomes we want to see from AI. Women in AI Ethics (WAIE) is a global initiative by Lighthouse3, a strategic research and advisory firm based in Oakland, California, with a mission to increase the recognition, representation, and empowerment of the brilliant women in this space who are working hard to save humanity from the dark side of AI. This was an entire day of inspiring talks and discussions with women and allies around the world, examining the current state of diversity and ethics in AI and building meaningful action plans for progress.
THIS MONTH’S BEST READS

Algorithmic Colonisation of Africa, 21 August 2020
Common to both traditional and algorithmic colonialism is the desire to dominate, monitor, and influence social, political, and cultural discourse through the control of core communication and infrastructure mediums. While traditional colonialism is often spearheaded by political and government forces, digital colonialism is driven by corporate tech monopolies—both of which are in search of wealth accumulation.

Police use of facial recognition violates human rights, UK court rules, Ars Technica, 11 August 2020
Privacy advocates in the UK are claiming victory as an appeals court ruled today that police use of facial recognition technology in that country has "fundamental deficiencies" and violates several laws. South Wales Police began using automated facial recognition technology on a trial basis in 2017, deploying a system called AFR Locate overtly at several dozen major events such as soccer matches. Police matched the scans against watchlists of known individuals to identify persons who were wanted by the police, had open warrants against them, or were in some other way persons of interest.

'Deepfakes' ranked as most serious AI crime threat, 3 August 2020, EurekAlert
Fake audio or video content has been ranked by experts as the most worrying use of artificial intelligence in terms of its potential applications for crime or terrorism, according to a new UCL report. The study, published in Crime Science and funded by the Dawes Centre for Future Crime at UCL (and available as a policy briefing), identified 20 ways AI could be used to facilitate crime over the next 15 years. These were ranked in order of concern, based on the harm they could cause, the potential for criminal profit or gain, how easy they would be to carry out, and how difficult they would be to stop.

The role played by Artificial Intelligence in the social sector, 4 August 2020
AI for social good can assist in solving some of the world’s most pressing problems. In fact, it can contribute in one way or another to tackling all of the United Nations’ Sustainable Development Goals, supporting large sections of the population in both developing and developed countries. AI is already helping in several real-life situations, from assisting blind people with navigation and diagnosing cancer to identifying victims of sexual harassment and aiding disaster relief. Let us look briefly at key social domains where AI can be applied effectively.

Facebook’s AI for detecting hate speech is facing its biggest challenge yet, 14 August 2020, Fast Company
The single most amazing thing about Facebook is how vast it is. But while more than two and a half billion people find value in the service, this scale is also Facebook’s biggest downfall. Controlling what happens in that vast digital space is nearly impossible, especially for a company that historically hasn’t been very responsible about managing the possible harms implicit in its technology. Only in 2017—13 years into its history—did Facebook seriously begin facing up to the fact that its platform could be used to deliver toxic speech, propaganda, and misinformation directly to the brains of millions of people.

Your startup needs an AI ethicist. How do you find one?, 18 August 2020
Today, companies are wrestling with products and platforms that have a profound impact on our civil liberties, how we communicate, and even democracy at large.
The growing public recognition of the downstream effects of technology, along with increasing demand that tech companies take greater responsibility for “unintended consequences” and negative externalities, has led to a host of new job titles aimed at responsible innovation. Tech companies are beginning to hire for new roles with non-traditional skills, and higher education is starting to adapt to better meet the qualifications these emerging careers require.

Police built an AI to predict violent crime. It was seriously flawed, Wired, 6 August 2020
A flagship artificial intelligence system designed to predict gun and knife violence before it happens had serious flaws that made it unusable, police have admitted. The error led to large drops in accuracy, and the system was ultimately rejected by all of the experts reviewing it for ethical problems.

ICYMI: Environmental Intelligence: Applications of AI to Climate Change, Sustainability, and Environmental Health, Stanford HAI, July 16
Over 900 Earth-observing satellites currently peer down at us from space. Simultaneously, an emerging network of ground-based sensor technologies tracks the movement of water, the sounds of ecosystems, and the chemicals that permeate Earth’s soils and the atmosphere above them. This new generation of sensing technologies is accompanied by sophisticated physical models, from climate simulators to continental-scale hydrologic models. These expansive new data streams and physical models present uncharted opportunities to use AI to address the needs of a planet on life support. Potential areas of inquiry include the development of new climate solutions, land management practices, water security, environmental justice, prediction of air and groundwater pollution, preventing extinction, and optimizing nature for human health and well-being.

Technology Theatre, July 13, CIGI
Whether it’s the national release of contact-tracing apps meant to battle a pandemic, or Sidewalk Labs’ (now defunct) bid to create a “city built from the internet up,” public conversations about major policy initiatives tend to focus on technological components and evade significantly harder questions about power and equity. Our focus on the details of individual technologies — how the app will work, whether the data architecture is centralized, or the relative effectiveness of Bluetooth — and on individual experts during the rollout of major policies is not only politically problematic, but can also weaken support for, and adherence to, institutions when their legitimacy is most critical.
THIS MONTH’S PODCAST EPISODE CHOICE

THE RADICAL AI - Finding Joy in Meaningful Work: AI for Social Good in Social Work & Social Justice with Eric Rice
Where is the limit in the use of technology to solve societal problems? How can social work utilize AI to address social injustice? To answer these questions and more, we welcome Dr. Eric Rice to the show. Eric is an associate professor and the founding co-director of the USC Center for Artificial Intelligence in Society, a joint venture of the USC Suzanne Dworak-Peck School of Social Work and the USC Viterbi School of Engineering. Rice received a BA from the University of Chicago, and an MA and PhD in Sociology from Stanford University. Eric’s research focuses on community outreach, network science, and the use of social networking technology by high-risk youth.

ICYMI: Why the world is at a turning point with artificial intelligence and what to do about it, August 10, Brookings
Artificial intelligence (AI) is one of the transformative technologies of our time. It is reshaping entire sectors, including healthcare, education, e-commerce, transportation, and defense—and in many ways, is the defining force of the coming years. Ultimately, though, it is the policies and principles established by people—lawmakers, regulators, software developers, and ethicists—that will determine the trajectory of this emerging technology. That is the message that Brookings scholars Darrell West and John Allen convey in their new book, “Turning Point: Policymaking in the Era of Artificial Intelligence.” On August 10, West and Allen joined Nicol Turner Lee for a webinar discussion to explain what AI is, discuss its use in leading sectors, outline the ethical and societal ramifications of AI deployment, and recommend a policy and governance blueprint to maximize the advantages of AI.
THIS MONTH’S BOOK REVIEW

Book review: ‘The Drone Age: How Drone Technology Will Change War and Peace’, 13 August 2020
The rapid development of drone technology over the years has prompted a debate as to whether it does more harm than good, raising concerns about how its use could shape our future. Could personal or political motives be the root of the problem? In ‘The Drone Age: How Drone Technology Will Change War and Peace’, Michael J. Boyle explores six ways in which drones affect the decision-making and risk calculations of their users, both on and off the battlefield. The book also seeks to show that the introduction of drones has changed how we understand the strategic choices we face in war and peace, and the consequences these decisions may have in the future.
THIS MONTH’S PUBLICATIONS

Towards Responsible #AIforAll in India, 21 August 2020, World Economic Forum
NITI Aayog, the think tank of the Government of India, is developing its approach to “Responsible #AIforAll” based on a large-scale stakeholder consultation facilitated by the Centre for the Fourth Industrial Revolution (C4IR) India. A Responsible AI working document (developed on the basis of a consultation workshop held in December 2019, organised by C4IR India, World Economic Forum) was presented during a global consultation with AI ethics experts around the world on 21 July 2020 and subsequently released by NITI Aayog for wider public consultations.

Special Collection on Artificial Intelligence, 2020, UNICRI
The potential of Artificial Intelligence to augment human capabilities in law enforcement, the legal professions, the court system, and even the penal system is enormous. However, we need to truly test the limits of our creativity and innovation to overcome the challenges that come with these technologies, as well as to develop the entirely new approaches, standards, and metrics they will necessitate. In this regard, this UNICRI Special Collection on Artificial Intelligence contains a selection of articles from innovative minds in academia, intended to stimulate discussion, promote solutions to the challenges we face in this emerging domain, shape the design of the policies and legal frameworks of the future, and provide guidance to those who will build AI-based tools and techniques.
EVENTS TO FOLLOW

AI for Peace at the Data for AI Week – “The Ethical Side of Data Usage”, 14 September, 3pm ET
Join us at the Data for AI conference, to be held 14-18 September 2020. The AI for Peace founder will be a panelist at “The Ethical Side of Data Usage” on Monday, 14 September, 3pm ET. Machine learning requires data, and organizations of all types and across industries have lots of data that is useful for many very important tasks. However, enterprises are finding that there are concerns and restrictions on how that data can be used, shared, and applied. This panel will explore the ethical side of data usage from an industry perspective.

“Staying ahead of misinformation, globally” – 29 September 2020
Fake news has entered the global lexicon in the last four years. Online platforms unite communities across continents; however, greater interconnectivity has also broadened the scope of mis/disinformation. Following sessions on misinformation in India and the US, this session by Logically looks at global trends and seeks to identify how the confluence of misinformation, journalism, and social media may converge or diverge in the coming years.

Book Talk on “Democratizing Our Data: A Manifesto” with Julia Lane, 2 October, 12 PM EDT
Join the Center for Data Innovation on Friday, October 2, 2020 at 12:00 PM EDT for a conversation with Julia Lane on her new book, Democratizing Our Data: A Manifesto. The book argues that public data is crucial to a well-functioning democracy, but America’s public data infrastructure is crumbling. To address this, a new framework focused on automation, transparency, and accountability is needed to produce high-quality public data that can serve the public good.

How Will Quantum Computing Shape the Future of AI? 14 October 2020, 9am ET
Quantum computing holds the potential to revolutionize AI by harnessing the powers of quantum mechanics to solve problems that exceed the capabilities of traditional supercomputers. By creating new quantum algorithms, it may be possible to substantially reduce the computing time needed to use machine learning to solve complex problems, such as formulating safe nanomaterials, enhancing climate forecasts, and discovering novel drug compounds. Taking these ideas from theory to practice will require significant resources, and the countries and companies that achieve quantum supremacy are likely to gain a competitive edge in the global AI race.
On our website, AI for Peace, you can find even more content: podcasts, articles, white papers, and book suggestions that can help you navigate the AI and peace fields. Check out our online library!