AI FOR PEACE NEWSLETTER
Your monthly dose of news and the latest developments in AI for Peace

FEBRUARY 2021 SPECIAL EDITION
Spotlight on Technology, Racial Bias, and Racial Justice

Curated by Amanda Luz, Jeremy Pineda, Loren Crone, Stephanie Hilton

If someone has forwarded this to you and you would like our newsletter delivered to you every month, you can subscribe here:
WHY A SPECIAL EDITION ON TECH, RACIAL BIAS, AND RACIAL JUSTICE?

Oprah Winfrey is one of the most recognizable faces in the United States. As an American talk show host, television producer, actress, author, and philanthropist, she is considered an iconic Black woman, alongside other successful figures such as Michelle Obama and Serena Williams. Yet faces that humans recognize instantly are not recognized as reliably by algorithms. Over the past few years, we have seen facial analysis services from technology companies such as Microsoft, Amazon, and Google fail to recognize their faces as women's faces. According to MIT researcher and digital activist Joy Buolamwini, these are not isolated incidents: there is gender and skin-type bias in the facial analysis technology of leading tech companies, of the kind the sketch below illustrates. As artificial intelligence and other new technologies for analyzing humans are adopted more widely, with serious implications for racial justice, we would like to offer some resources that introduce the topic, help explain the concerns about bias in new technologies, and propose creative approaches that civil society and governments can take to press for more accountability and transparency. In this special edition of the newsletter, we have gathered articles, publications, books, webinars, and podcasts that can help start an informed discussion around emerging technologies, racial bias, and racial justice.
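To make the point concrete, here is a minimal sketch of the kind of disaggregated audit behind findings like Buolamwini's Gender Shades study: rather than reporting one overall accuracy number, evaluators break a classifier's results down by intersectional subgroup. The records, labels, and numbers below are invented purely for illustration; they do not come from any real benchmark or vendor.

```python
# A minimal sketch of a disaggregated accuracy audit, in the spirit of the
# Gender Shades methodology. All data here is made up for illustration.
from collections import defaultdict

# Each record: (true_gender, predicted_gender, skin_type_group).
# The "darker"/"lighter" grouping mirrors the binary used in Gender Shades
# (based on the Fitzpatrick skin-type scale).
records = [
    ("female", "female", "lighter"),
    ("female", "male",   "darker"),   # misclassification
    ("male",   "male",   "lighter"),
    ("male",   "male",   "darker"),
    ("female", "female", "darker"),
    ("female", "male",   "darker"),   # misclassification
]

totals = defaultdict(int)
correct = defaultdict(int)
for true_g, pred_g, skin in records:
    group = (true_g, skin)            # intersectional subgroup
    totals[group] += 1
    correct[group] += int(true_g == pred_g)

# Report accuracy per subgroup instead of one aggregate figure.
for group in sorted(totals):
    acc = correct[group] / totals[group]
    print(f"{group[0]:6s} / {group[1]:7s}: {acc:.0%} ({totals[group]} faces)")
```

On real benchmarks, this is exactly where the headline disparities appear: overall accuracy can look high while accuracy for darker-skinned women lags far behind.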
READINGS WE RECOMMEND

Wrongfully Accused by an Algorithm, New York Times, 24 June 2020
A nationwide debate is raging about racism in law enforcement. Across the country, millions are protesting not just the actions of individual officers, but bias in the systems used to surveil communities and identify people for prosecution. Facial recognition systems have been used by police forces for more than two decades. Recent studies by M.I.T. and the National Institute of Standards and Technology (NIST) have found that while the technology works relatively well on white men, the results are less accurate for other demographics, in part because of a lack of diversity in the images used to develop the underlying databases.

Access Now’s response to a call for submissions: Thematic report on new information technologies, racial equality and non-discrimination, Access Now, 13 December 2019
As artificial intelligence (AI) continues to find its way into our daily lives, its propensity to interfere with human rights only gets more severe. In this submission, Access Now seeks to provide relevant information on the topic. We particularly offer insight on three main areas: (1) international human rights law approaches to regulating new technologies and AI; (2) how new information technologies may entrench bias, including racial bias; and (3) how information technologies have affected the enjoyment of human rights, drawing on the example of racial bias in machine learning used in the United States (US) criminal justice system.

When AI Fails on Oprah, Serena Williams and Michelle Obama, 4 July 2018
Respect isn’t just about being recognized or not recognized. It is also about having agency over the processes that govern our lives. As companies, governments, and law enforcement agencies use AI to make decisions about our opportunities and freedoms, we must demand that we are respected as people. Sometimes respecting people means making sure your systems are inclusive, as in the case of using AI for precision medicine; at times it means respecting people’s privacy by not collecting any data; and it always means respecting the dignity of an individual.

Is Digital Technology Making Health Inequality Worse?, IAPHS (Interdisciplinary Association for Population Health Science), 2020
We can’t escape technology in healthcare. Between health apps, physician portals, text reminders, and much more, information and communication technology (ICT) has become essential for healthcare delivery. But what happens to people who are left behind, or worse, left entirely out of this technology revolution?

AI researchers propose ‘bias bounties’ to put ethics principles into practice, VentureBeat, 17 April 2020
“Bias and safety bounties would extend the bug bounty concept to AI and could complement existing efforts to better document data sets and models for their performance limitations and other properties,” the paper reads. “We focus here on bounties for discovering bias and safety issues in AI systems as a starting point for analysis and experimentation but note that bounties for other properties (such as security, privacy protection, or interpretability) could also be explored.”

The Tech ‘Solutions’ for Coronavirus Take the Surveillance State to the Next Level, The Guardian, 15 April 2020
We are all solutionists now. When our lives are at stake, abstract promises of political emancipation are less reassuring than the promise of an app that tells you when it’s safe to leave your house. The real question is whether we will still be solutionists tomorrow.

Emerging digital technologies entrench racial inequality, UN expert warns, United Nations Human Rights, 15 July 2020
Emerging digital technologies driven by big data and artificial intelligence are entrenching racial inequality, discrimination and intolerance, a UN human rights expert said today, calling for justice and reparations for affected individuals and communities.
ORGANIZATIONS TO FOLLOW

Algorithmic Justice League - The Algorithmic Justice League’s mission is to raise awareness about the impacts of AI, equip advocates with empirical research, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to mitigate AI harms and biases. We’re building a movement to shift the AI ecosystem towards equitable and accountable AI. We mitigate the harms and biases of AI by promoting four core principles: affirmative consent, meaningful transparency, continuous oversight and accountability, and actionable critique.

Color Coded - We are a POC-only transformative space that centers historically excluded people in the co-teaching, co-creation, and co-ownership of new technologies. Our work supports and amplifies groups and individuals who are uplifting and sustaining communities of color, in Los Angeles and beyond. Together, we advance sustainable, community-centric projects to stay lifelong learners, protect our families, defend our hoods, decolonize and indigenize, liberate ourselves, grow collective wealth, and thrive!

Media Justice - For communities of color, lower-income families, and the social movements they fuel, transforming systemic barriers and “beating the odds” in the 21st century demands a fair economy, connected communities, and a political landscape of visibility, voice, and power. To achieve this we need a media and technology environment that fuels real justice. We call that “Media Justice” and recognize this as a crucial moment in our struggle for freedom: freedom from oppression, freedom to communicate.

Responsible AI - The organization is committed to the advancement of AI driven by ethical principles that put people first. The principles include fairness, reliability and safety, inclusiveness, privacy and security, accountability, and transparency, and the organization hopes to operationalize these principles across the company. We work to detect biases in technology and identify where it can infringe on human rights and privacy.
PODCASTS, WEBINARS, AND VIDEOS WE RECOMMEND

How well do IBM, Microsoft and Face++ AI services guess the gender of a face?, MIT Media Lab
Inclusive product testing and reporting are necessary if the industry is to create systems that work well for all of humanity. However, accuracy is not the only issue. Flawless facial analysis technology can be abused in the hands of authoritarian governments, personal adversaries, and predatory companies. Ongoing oversight and context limitations are needed.

We need to bring the human back into the digital conversation, Nanjala Nyabola, 17 April 2019
Nanjala Nyabola urges the global community to address the challenges arising through social media platforms from a human perspective, and not simply from a technological perspective.

Where We Lose Our Way, TEDx Talks, 18 February 2018
Take a look at the shift of migration and the ongoing debate on how to determine the worth of human life. In this necessary and pertinent talk, she makes a case for a renewed sense of humanity. It’s not so much that we build a perfect system; it’s about believing in the perfect system and that we can come together and work towards it.

AI & Racial Bias with Renée Cummings, The Radical AI Podcast, 28 June 2020
This episode features a presentation Renée Cummings delivered as a workshop on 24 June 2020, and also welcomes Ethical Intelligence CEO Olivia Gambelin to the show as a guest host. Renée Cummings is a criminologist and international criminal justice consultant who specializes in artificial intelligence (AI): ethical AI, bias in AI, diversity and inclusion in AI, algorithmic authenticity and accountability, data integrity and equity, AI for social good, and social justice in AI policy and governance. She is the CEO of Urban AI.

Data as Protest: Data for Black Lives with Yeshi Milner, The Radical AI Podcast, 10 June 2020
How can we claim agency over data systems to fight for racial justice? What is Data for Black Lives? How can you join the movement? To answer these questions and more, we welcome Yeshi Milner to the show. Yeshi Milner is the co-founder and executive director of Data for Black Lives. Raised in Miami, FL, Yeshi began organizing against the school-to-prison pipeline at Power U Center for Social Change as a high school senior. There she developed a lifelong commitment to movement building as a vehicle for creating and sustaining large-scale social change. More recently, Yeshi was a campaign manager at Color of Change, where she spearheaded several major national initiatives, including OrganizeFor, the only online petition platform dedicated to building the political voice of Black people.

On June 19, the Center for Technology Innovation at Brookings hosted a webinar of distinguished computer scientists, social scientists, and legal experts to talk about the intersection of race, AI, and systemic inequalities. The discussion shared existing research in this area and explored how fairness, equity, and ethics can be better addressed in the development of AI systems.
PUBLICATIONS WE RECOMMEND

Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks, ProPublica, 23 May 2016
Criminologists have long tried to predict which criminals are more dangerous before deciding whether they should be released. Race, nationality and skin color were often used in making such predictions until about the 1970s, when it became politically unacceptable, according to a survey of risk assessment tools by Columbia University law professor Bernard Harcourt. But as states struggle to pay for swelling prison and jail populations, forecasting criminal risk has made a comeback. A sketch of the error-rate comparison at the heart of this investigation follows this list.

Racial discrimination and emerging digital technologies: a human rights analysis, 18 June 2020
In the present report, the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, E. Tendayi Achiume, analyses different forms of racial discrimination in the design and use of emerging digital technologies, including the structural and institutional dimensions of this discrimination. She also outlines the human rights obligations of States and the responsibility of corporations to combat this discrimination.

Inequality in Asia and the Pacific, Chapter 4: Technology and Inequalities, United Nations ESCAP
The relationship between technology and inequality is multifaceted. Technology has enhanced productivity, accelerated economic growth, enabled knowledge and information sharing, and increased access to basic services. However, it has also been the cause of inequalities. This chapter examines the role of technology across the three facets of inequality discussed in the previous chapters: inequality of outcome; inequality of opportunities; and inequality of impact, which is concerned with the impact of environmental hazards on the most vulnerable.

The global landscape of AI ethics guidelines, Nature Machine Intelligence, 2 September 2019
In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.

Human Rights, Racial Equality and New Information Technologies, UCLA School of Law, June 2020
With the rise of networked digital technologies, scholars and global internet rights advocates across different fields have increasingly documented the ways in which these technologies reproduce, facilitate or exacerbate structural racial inequality. However, global human rights discussion and mobilization against the harms of new information technologies mostly revolve around a specific set of issues: hate crimes and hate speech, the role of new information technologies in facilitating hate incidents online, the use of online networks and fora to coordinate racial hatred, and threats to individuals’ right to privacy.
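The ProPublica finding mentioned above turns on a distinction worth spelling out: a risk tool can look similarly "accurate" for two groups while making different kinds of mistakes for each. ProPublica reported that Black defendants who did not reoffend were almost twice as likely as white defendants to be wrongly labeled high risk. Here is a minimal sketch of that false positive rate comparison, using invented numbers rather than the actual COMPAS data:

```python
# A minimal sketch of the error-rate comparison behind ProPublica's
# "Machine Bias" analysis. The records below are invented for illustration;
# they are not ProPublica's actual COMPAS figures.

def false_positive_rate(records):
    """Share of people who did NOT reoffend but were labeled high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["labeled_high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Each record: did the tool label the person high risk, and did they reoffend?
group_a = [
    {"labeled_high_risk": True,  "reoffended": False},
    {"labeled_high_risk": False, "reoffended": False},
    {"labeled_high_risk": False, "reoffended": True},
    {"labeled_high_risk": True,  "reoffended": True},
]
group_b = [
    {"labeled_high_risk": True,  "reoffended": False},
    {"labeled_high_risk": True,  "reoffended": False},
    {"labeled_high_risk": False, "reoffended": False},
    {"labeled_high_risk": True,  "reoffended": True},
]

# Even with similar overall hit rates, the burden of false alarms can fall
# much more heavily on one group than the other.
for name, group in [("group A", group_a), ("group B", group_b)]:
    print(f"{name}: false positive rate = {false_positive_rate(group):.0%}")
```

Which error rate a tool should equalize is itself contested: the debate that followed ProPublica's report showed that when groups have different base rates of reoffending, a risk score generally cannot be well calibrated for both groups and have equal false positive rates at the same time.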
BOOKS WE RECOMMEND

Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks, 17 January 2018
Eubanks systematically shows the impacts of data mining, policy algorithms, and predictive risk models on poor and working-class people in America. The book is full of heart-wrenching and eye-opening stories, from a woman in Indiana whose benefits are literally cut off as she lies dying to a family in Pennsylvania in daily fear of losing their daughter because they fit a certain statistical profile.

Twitter and Tear Gas: The Power and Fragility of Networked Protest, Zeynep Tufekci, Yale University Press, 2017
Could the ability to organize massive protests quickly on Facebook and Twitter be making those protests vulnerable in the long term? If new technologies are so empowering, why are so many movements failing to curb authoritarianism’s rise? Is a glut of misinformation more effective censorship than directly forbidding speech? Why are so many of today’s movements leaderless?

Race After Technology, Ruha Benjamin, 17 June 2019
“Race After Technology is essential reading, decoding as it does the ever-expanding and morphing technologies that have infiltrated our everyday lives and our most powerful institutions. These digital tools predictably replicate and deepen racial hierarchies, all too often strengthening rather than undermining pervasive systems of racial and social control.”

Design Justice: Community-Led Practices to Build the Worlds We Need, Sasha Costanza-Chock, 2020
What is the relationship between design, power, and social justice? “Design justice” is an approach to design that is led by marginalized communities and that aims explicitly to challenge, rather than reproduce, structural inequalities. It has emerged from a growing community of designers in various fields who work closely with social movements and community-based organizations around the world.
On our website, AI for Peace, you can find even more awesome content: podcasts, articles, white papers, and book suggestions that can help you navigate the AI and peace fields. Check out our online library!