THIS MONTH’S BEST READS
Tech's role in the Ukraine war, Protocol, 25 February 2022
Nothing feels very far away anymore. CNN brought once-distant wars into our living rooms, but TikTok and YouTube and Twitter have put them in our pockets. Following along with what’s happening is now easier than ever, though that’s often fraught with misinformation and lack of context, and social networks are quickly having to figure out what to take down and what to leave up. The war is affecting all of us, whether we know it or not.
For more information, see our special edition newsletter on the war in Ukraine, covering AI, cybersecurity, disinformation on social media, and more.
Artificial intelligence technologies have a climate cost, 4 February 2022
The climate impact of AI comes in a few forms: the energy use of training and operating large AI models is one. In 2020, digital technologies accounted for between 1.8 per cent and 6.3 per cent of global emissions. At the same time, AI development and adoption across sectors have skyrocketed, as has the demand for processing power associated with larger and larger AI models. Paired with the fact that governments of developing countries see AI as a silver bullet for solving complex socio-economic problems, we could see a growing share of AI in technology-linked emissions in the coming decades.
What Should Be Considered a Crime in the Metaverse? Wired, 28 January 2022
All this raises crucial issues about the ethics of near-term virtual worlds. How should users act in a virtual world? What’s the difference between right and wrong in such a space? And what does justice look like in these societies? Let’s start with virtual worlds that exist already. Perhaps the simplest case is that of single-player video games. You might think that with nobody else involved, these games are free of ethical concerns, but ethical issues still sometimes arise.
Meta Wouldn’t Tell Us How It Enforces Its Rules In VR, So We Ran A Test To Find Out, BuzzFeed, 11 February 2022
Meta has said it recognizes this trade-off and has pledged to be transparent about its decision-making. So, to better understand how it is approaching VR moderation, BuzzFeed News sent Meta a list of 19 detailed questions about how it protects people from child abuse, harassment, misinformation, and other harms in virtual reality. The company declined to answer any of them. Instead, Meta spokesperson Johanna Peace provided BuzzFeed News a short statement: “We’re focused on giving people more control over their VR experiences through safety tools like the ability to report and block others. We’re also providing developers with further tools to moderate the experiences they create, and we’re still exploring the best use of AI for moderation in VR. We remain guided by our Responsible Innovation Principles to ensure privacy, security and safety are built into these experiences from the start.”
Stop normalizing mass surveillance in Latin America, Access Now, 4 February 2022
In many cities around the world, when you go out in public, you are unknowingly exposing yourself to surveillance, including the use of mass surveillance tools that record, analyze, and store your personal biometric data — your face, your voice, the way you walk, and more. Even those who know they may be under surveillance mostly have no idea how their personal data is being used or who has access to it. And in countries across Latin America, both governments and the companies that develop this type of technology refuse to be transparent, leaving citizens in the dark about the privacy violations and threats they face.
How AI is shaping the cybersecurity arms race, The Conversation, 23 February 2022
There are two main ways AI is bolstering cybersecurity. First, AI can help automate many tasks that a human analyst would often handle manually. These include automatically detecting unknown workstations, servers, code repositories and other hardware and software on a network. It can also determine how best to allocate security defenses. These are data-intensive tasks, and AI has the potential to sift through terabytes of data much more efficiently and effectively than a human could ever do.
In this age of climate crisis, humanitarians need to learn to love tech, TNH, 23 February 2022
In some places, local actors are already taking advantage of 21st century advances that have yielded new ways to help predict climate events before they become crises, or that have transformed when, how, and where humanitarian aid can be delivered. As a sector, we have long been suspicious of tech, wary of the risks it poses and sceptical of the opportunities it brings. There can be good reasons for this – the recent Red Cross hack brought to light the dangers of storing vulnerable people’s identities online. Limited ethical frameworks and metrics for success have also fostered an understandable reticence.
Technology can speed up humanitarian action, 23 February 2022
Technology today is evolving at an extraordinary and accelerating pace and is changing the very way we live and work. Its ability to assist humanitarian action in low-income countries has alerted donors, practitioners and governments to its potential. Southern Africa is the current focus of humanitarian concern and a number of Anticipation and Disaster Risk Financing systems, for instance, are being deployed to avert potential crises.
Cyberattacks: a real threat to NGOs and nonprofits, ReliefWeb, 22 February 2022
The recent cyberattack affecting the International Committee of the Red Cross (ICRC) has put a media spotlight on the threat to the humanitarian sector. Sadly, our experience shows that cyberattacks in this sector are not rare. We look at the risk to NGOs and how they can prepare for and defend against the growing proliferation of cyberthreats.