AI FOR PEACE NEWSLETTER

Your monthly dose of news and the latest developments in AI for Peace

JULY 2020

Spotlight on digital technologies and racial inequity, techno-racism and human rights, ethical AI, and RightsCon - the world’s leading event on human rights in the digital age.

For more news and resources on AI, privacy, surveillance, and contact tracing, see our Special Edition Newsletter on COVID-19 and Privacy

If someone has forwarded this to you and you want to get our Newsletter delivered to you every month, you can subscribe here:
Subscribe

SPOTLIGHT STORY – RightsCon 2020  

This month we are sharing a special spotlight on RightsCon, the world’s leading event on human rights in the digital age. Every year, RightsCon brings together business leaders, policy makers, general counsels, technologists, advocates, academics, government representatives, and journalists from around the world to tackle the most pressing issues at the intersection of human rights and technology. This year, 7,828 participants from 158 countries gathered online for this historic RightsCon.

 

If you missed the conference or would like to return to some of the recorded sessions, RightsCon will be continuously updating the video playlist with highlights of the RightsCon Online program. To protect participants, sessions were held online using a custom platform available to registered participants, but many of the most impactful and engaging fireside conversations and press briefings are open to the public on YouTube — covering breaking developments in Brazil, Ethiopia, Egypt, and beyond, with dialogue and analysis by E. Tendayi Achiume, Maria Ressa, Shoshana Zuboff, Rashad Robinson, Audrey Tang, and many others. Check it out here!

THIS MONTH’S BEST READS 

All drone strikes ‘in self-defence’ should go before Security Council, argues independent rights expert, July 9, UN News

The growing use of weaponised drones risks destabilising global peace and security and creating a “drone power club” among nations that face no effective accountability for deploying them as part of their “war on terror”, a senior UN-appointed independent rights expert said. At the UN Human Rights Council in Geneva, Agnes Callamard, Special Rapporteur on extrajudicial, summary or arbitrary executions, said that more than 100 countries have military drones and more than a third are thought to possess the largest and deadliest autonomous weapons. “As more Government and non-State actors acquire armed drones and use them for targeted killing, there is a clear danger that war will come to be seen as normal rather than the opposite of peace,” Ms. Callamard said. Appealing for greater regulation of the weapons, and lending her support to calls for a UN-led forum to discuss the deployment of drones specifically, the Special Rapporteur insisted that their growing use increased the danger of a “global conflagration”.

 

Not Just The Sprinkles On Top: Baking Ethics Into AI Design, July 13, Forbes

The first step toward ethical design is assembling the right team. Every AI team obviously needs technical talent. But to cover all the bases, it’s critical to bring in other perspectives, too. In my experience, AI experts don’t always have a knack for anticipating what features will seem “creepy,” or determining whether the application’s voice should be male or female. This is where interdisciplinary teams come in.

 

Emerging digital technologies entrench racial inequality, UN expert warns, July 15, OHCHR

Emerging digital technologies driven by big data and artificial intelligence are entrenching racial inequality, discrimination and intolerance, a UN human rights expert said today, calling for justice and reparations for affected individuals and communities. Even when tech developers and users do not intend for tech to discriminate, it often does so anyway, Tendayi Achiume, UN Special Rapporteur on racism, said in presenting a report on emerging digital technologies and racial discrimination to the UN Human Rights Council.

 

Pakistan Is Using a Terrorism Surveillance System to Monitor the Pandemic, July 15, Slate

“The ISI has given us a great system for track and trace,” Khan said, referring to the country’s military-run spy agency, the Inter-Services Intelligence. “It was originally meant for terrorism, but now it has come in useful against the coronavirus,” he added, chuckling. Now, with the countrywide coronavirus lockdown lifted and cases surging past 252,982 as of Tuesday, the government’s reliance on the ISI’s track and trace technology is beginning to worry digital rights activists. They fear that this surveillance may extend beyond the pandemic. A spokesperson from the Digital Rights Foundation—a Pakistani digital rights group—noted that the government’s decision to enlist the support of its security agency to trace coronavirus patients is a “worrying development that impedes the right to privacy of its citizens.”

 

Deepfake used to attack activist couple shows new disinformation frontier, July 15, Reuters

A couple campaigning for Palestinian rights were targeted by a deepfake: a persona named Oliver Taylor wrote an article attacking them. The catch? He doesn’t seem to exist. The Taylor persona is a rare in-the-wild example of a phenomenon that has emerged as a key anxiety of the digital age: the marriage of deepfakes and disinformation. The threat is drawing increasing concern in Washington and Silicon Valley. Last year House Intelligence Committee chairman Adam Schiff warned that computer-generated video could “turn a world leader into a ventriloquist’s dummy.” Last month Facebook announced the conclusion of its Deepfake Detection Challenge - a competition intended to help researchers automatically identify falsified footage. Last week online publication The Daily Beast revealed a network of deepfakes.

 

Big Tech In Washington's Hot Seat: What You Need To Know, July 28, NPR

Some of the world's most powerful CEOs are coming to Capitol Hill — virtually, of course — to answer one overarching question: Do the biggest technology companies use their reach and power to hurt competitors and help themselves? Here's what you need to know: Who: Facebook CEO Mark Zuckerberg, Amazon CEO Jeff Bezos, Apple CEO Tim Cook and Google CEO Sundar Pichai. What: The tech execs will answer lawmakers' questions in the culmination of a year-long investigation by the House Judiciary Committee's antitrust panel into the tech giants' power that spanned 1.3 million documents and hundreds of hours of hearings and closed-door briefings. Why: The four companies shape how billions of people communicate, learn, work, shop and have fun. Americans' reliance on these platforms has only intensified as the coronavirus pandemic has kept us in our homes. Plus, the hearing comes as more Democrats and Republicans openly challenge the immense power of Silicon Valley.

 

The Panopticon Is Already Here, July 30, The Atlantic

Xi Jinping is using artificial intelligence to enhance his government’s totalitarian control—and he’s exporting this technology to regimes around the globe. China’s government has a history of using major historical events to introduce and embed surveillance measures. In the run-up to the 2008 Olympics in Beijing, Chinese security services achieved a new level of control over the country’s internet. During China’s coronavirus outbreak, Xi’s government leaned hard on private companies in possession of sensitive personal data. Any emergency data-sharing arrangements made behind closed doors during the pandemic could become permanent.

 

Soft Law as a complement to AI regulation, July 31, Brookings

While the dialogue on how to responsibly foster a healthy AI ecosystem should certainly include regulation, that shouldn’t be the only tool in the toolbox. There should also be room for dialogue regarding the role of “soft law.” As Arizona State University law professor Gary Marchant has explained, soft law refers to frameworks that “set forth substantive expectations but are not directly enforceable by government, and include approaches such as professional guidelines, private standards, codes of conduct, and best practices.”

THIS MONTH’S PODCAST EPISODE CHOICE   

HUMANITARIAN AI TODAY – Valentina Pavel, Ada Lovelace Institute

Humanitarian AI Today's host Mia Kossiavelou speaks with Valentina Pavel, Legal Researcher at the Ada Lovelace Institute, about the Institute, a UK-based research and deliberative body working to ensure data and AI work for people and society. Valentina leads the Changing Regulations workstream of the Rethinking Data program, a project designed to change the data governance ecosystem by transforming how we talk about data, developing people-centered data practice, and articulating a positive vision for the future of data regulation. Valentina is a former Mozilla Fellow at Privacy International, where she developed the Our Data Future project, and she previously worked as a digital rights policy advisor with ApTI Romania, a member of the European Digital Rights (EDRi) network.

THIS MONTH’S WEBINAR CHOICE   

SDG 16: Peace, Justice, and Strong Institutions, 9 July, MIT Media Lab

Sustainable Development Goal 16: Peace, Justice and Strong Institutions seeks to promote just, peaceful and inclusive societies. This webinar will highlight projects that have incorporated Earth observation data to better understand and to take action to end violent conflict and human rights abuses around the globe. It will also seek to explore how space technology can be used for projects designed around promoting strong governmental institutions that are responsive to their citizens.

 

Techno-Racism and Human Rights: A Conversation with the UN Special Rapporteur on Racism, July 23

The Digital Welfare State and Human Rights Project, based at the Center for Human Rights and Global Justice at NYU Law, presents an event together with the Special Rapporteur on Racism about her recent report to the UN Human Rights Council on racial discrimination and emerging digital technologies. This event and report come at a moment of international crises, including a global wave of protests and human rights activism against police brutality and systemic racism after the killing of George Floyd and a pandemic which, among many other tragic impacts, has laid bare how deeply embedded inequality, racism, xenophobia and intolerance are in our societies.

 

All Tech is Human - Data Discrimination & Algorithmic Bias w/ Safiya Umoja Noble & Meredith Broussard

How can we reduce data discrimination & algorithmic bias that perpetuate gender and racial inequalities? Join Safiya Umoja Noble (author of Algorithms of Oppression, Associate Professor at UCLA) and Meredith Broussard (author of Artificial Unintelligence, Associate Professor at NYU) for a timely livestream discussion on how data discrimination and algorithmic bias can perpetuate gender and racial stereotypes and inequalities.

 

ICYMI IN JUNE

Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI, June 15, Future of Life Institute

Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfalls along the way? In the longer term, will superintelligent AI systems pose an existential risk to humanity? Steven Pinker, bestselling author and Professor of Psychology at Harvard, and Stuart Russell, UC Berkeley Professor of Computer Science, join us on this episode of the AI Alignment Podcast to discuss these questions and more.

THIS MONTH’S PUBLICATIONS

From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance, Carr Center Discussion Paper Series, by Sabelo Mhlambi

What is the measure of personhood and what does it mean for machines to exhibit human-like qualities and abilities? Furthermore, what are the human rights, economic, social, and political implications of using machines that are designed to reproduce human behavior and decision making? The question of personhood is one of the most fundamental questions in philosophy and it is at the core of the questions, and the quest, for an artificial or mechanical personhood.

 

The 2019 AI Index Report, HAI

Because AI touches so many aspects of society, the Index takes an interdisciplinary approach by designing, analyzing and distilling patterns about AI’s broad global impact on everything from national economies to job growth, research and public perception. The purpose of the project is to ensure that the discussion on AI takes into account data, serving practitioners, industry leaders, policymakers and funders, the general public and the media that inform it.

 

Deepfakes: A Grounded Threat Assessment, by Tim Hwang

Researchers have used machine learning (ML) in recent years to generate highly realistic fake images and videos known as “deepfakes.” Artists, pranksters, and many others have subsequently used these techniques to create a growing collection of audio and video depicting high-profile leaders, such as Donald Trump, Barack Obama, and Vladimir Putin, saying things they never did. This trend has driven fears within the national security community that recent advances in ML will enhance the effectiveness of malicious media manipulation efforts like those Russia launched during the 2016 U.S. presidential election. These concerns have drawn attention to the disinformation risks ML poses, but key questions remain unanswered. How rapidly is the technology for synthetic media advancing, and what are reasonable expectations around the commoditization of these tools? Why would a disinformation campaign choose deepfakes over more crudely made fake content that is sometimes equally as effective? What kinds of actors are likely to adopt these advances for malicious ends? How will they use them? Policymakers and analysts often lack concrete guidance in developing policies to address these risks.

ONLINE COURSE WE RECOMMEND

Digital Peacebuilding 101: introducing technology for peacebuilding, Build Up

What is possible to do in peacebuilding with technology? Who else is doing it and where? Is technology also a space for conflict prevention and transformation? An introductory course aimed at inspiring you to think about what digital peacebuilding can look like for you.

AI FOR PEACE EVENTS IN JULY 

RightsCon 2020 - AI for Peace Founding Director Branka Panic participated in RightsCon 2020, the world's leading event on human rights in the digital age, on Tuesday, 28 July, in a panel discussion on "Data for Peacebuilding and Prevention" hosted by the New York University Center on International Cooperation (NYU CIC). If you are working at the intersection of data and peacebuilding, join our efforts in creating a community of practice applying data-driven methods to peace and violence prevention, and learn about cutting-edge approaches to sustaining peace through machine learning, natural language processing, satellite imagery, and digital evidence. How can we strengthen this ecosystem, build trust, and respond to ethical challenges? These are some of the questions our panelists discussed. Registered participants can see the session here.

 

M&E Thursday Talk – Using Artificial Intelligence to Create Lasting Peace, July 30

This is the M&E Thursday Talk from July 30th, 2020, when Branka Panic of AI for Peace and Laura Clark Murray of Omdena led a discussion on “Using AI to Create Lasting Peace – Building Bridges between AI Experts & Peacebuilders.” Big data, machine learning, natural language processing – buzzwords or real tools that can be applied in peacebuilding? As the world faces new forms of violence, repression, and human rights violations, peacebuilders are exploring new tools to respond to peace and security challenges. Sometimes the gap between tools and practitioners looks too big to cross, and yet there is an urgent need to overcome this obstacle. AI for Peace aims to solve this challenge by connecting data scientists and AI experts with peacebuilders to create a more peaceful and just future. Alongside Omdena, this discussion addressed the urgent need to take advantage of new tools, while applying ethical standards and caution about unintended consequences.

Follow Us
Follow on LinkedIn
Follow on X (Twitter)

Online Library 

On our website, AI for Peace, you can find even more great content, podcasts, articles, white papers and book suggestions that can help you navigate the AI and peace fields. Check our online library!

LIBRARY

Share on social

Share on Facebook | Share on X (Twitter) | Share on Pinterest
