T3THICS WEEK 8: The Battle for Twitter
Twitter fights off a hostile takeover, & Clearview AI is trying to launder its reputation
T3THICS is Monika & Marta’s weekly roundup of tech ethics news (and olds) - and our quick thoughts on them.
New to T3THICS? Sign up here: t3thics.substack.com
If you can’t get enough of us here, follow Monika on Twitter and Marta on LinkedIn
The Battle for Twitter
Just when Elon Musk seemed to have lost interest in Twitter, declining to join its board, he announced a bid to buy the company outright in cash at $54.20 per share. Since that price was well above what the stock was trading at when he made the bid, this amounted to announcing a hostile takeover. Outcry was swift, with many pointing out that Musk doesn’t have the knowledge or experience to run the company. Others suggested the bid could be a red herring from Musk, who is facing legal action on multiple fronts - most loudly over the toxic and racist work environment at Tesla. Even Shoshana Zuboff, author of The Age of Surveillance Capitalism (tech ethics’ answer to Thomas Piketty), weighed in with a zoomed-out view of how controlling a social media platform really means controlling information and knowledge at scale.
The question on everyone’s mind is: what happens to Twitter if it does fight off Musk? Does it remain the same? Does it become a co-op owned by all of its members? Does it become a decentralized protocol - essentially a set of client nodes and host nodes that interact through shared protocol software to create a network not governed by any single entity (like Ethereum’s blockchain, for example)? Do we envision the bird site as a public utility or a digital commons? One thing is clear: Twitter means something to a lot of people, and there’s a desperate sense that its falling into the wrong hands would be a loss for our ability to communicate with one another.
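To make the decentralized-protocol idea concrete, here is a minimal toy sketch in Python. All names here are hypothetical, invented for illustration - this is not any real protocol (not Ethereum's, not any Twitter proposal). The point is only the shape: independently operated host nodes each store posts, client nodes can publish to and read from any host, and no single entity owns the resulting network - only the shared protocol governs it.

```python
class HostNode:
    """An independently operated server that speaks the shared protocol."""
    def __init__(self, name):
        self.name = name
        self.posts = []

    def accept(self, post):
        """Protocol method: accept a published post."""
        self.posts.append(post)

    def feed(self):
        """Protocol method: return this host's posts."""
        return list(self.posts)


class ClientNode:
    """A user-facing client; it can talk to any host, and switch hosts freely."""
    def __init__(self, handle, host):
        self.handle = handle
        self.host = host

    def post(self, text):
        self.host.accept((self.handle, text))

    def timeline(self, hosts):
        # Aggregate feeds across many hosts - no central timeline exists.
        return [p for h in hosts for p in h.feed()]


hosts = [HostNode("host-a"), HostNode("host-b")]
alice = ClientNode("alice", hosts[0])
bob = ClientNode("bob", hosts[1])
alice.post("hello from host-a")
bob.post("hello from host-b")
print(alice.timeline(hosts))
```

Because every host speaks the same protocol, a user can move to (or run) a different host without leaving the network - which is exactly what makes such a network hard for any one owner, hostile or otherwise, to capture.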
Facial Recognition in the Time of War
Clearview AI, the company famous for unethically (and, in some jurisdictions, illegally) scraping the internet for photos to build the biggest facial recognition database, is trying to make a comeback. This time, having struck out with governments in North America and Europe, it’s preying on Ukraine. CEO Hoan Ton-That offered the company’s services to the Ukrainian government for free, claiming they would help identify spies (not something AI can do on its own without a great deal of supporting intel) and deceased soldiers (also not something AI can do reliably). More than 340 government accounts have been created and more than 8,000 searches run on the bodies of Russian soldiers, leading to the Ukrainian military contacting the families of the deceased amid growing concern about the tactic. Facial recognition is famously problematic because of its inaccuracy, and its use in conflict zones is now being battle-tested by a company desperately trying to launder its reputation and get back in the good graces (and lucrative contracts) of governments. Open questions remain about the normalization of this technology. What happens when the software gets it wrong and the wrong Russian mother is called about her son? What happens if it misidentifies someone as an enemy - would that get them killed? These aren’t “what if” questions so much as “when” questions. We are witnessing the deployment of technology in war before the ethical question of whether we should be using the tech this way at all has been resolved. We’ll probably get the answers - but for the people it harms, it’ll be too late.
Monika’s Things
For the academic set, a thoughtful piece from artist Geraldine Juárez on the assetization of art through crypto & NFTs - here’s a sneak peek, the whole piece is really worth it:
Notwithstanding art’s longtime development of its status as an asset class, in the new ownership economy, smart-contracts turbocharge its speculative character, successfully disconnecting the generation of revenue from artistic production, or in other words, incorporating cultural production as one of the main activities of retail-trading.
An interesting thread & visual map of the Jan 6 insurrection defendants, their connections to each other and to far-right extremist groups in the US
A meditation on using computer metaphors in writing by David R. MacIver
Another week, another crypto scam: while the online casino industry is tightly regulated and mostly illegal in the US, crypto has blown the doors open to people losing their life savings to addiction by skirting that regulation. Full piece by Kevin Collier at NBC News here.
In other crypto news: the NFT bubble’s burst
John Oliver’s Last Week Tonight exposes the seedy underbelly of data brokers by threatening to blackmail congresspeople with data about them
Meta has announced it will be charging creators 47.5% of profits on products they sell in the Metaverse, even though Mark Zuckerberg promised in 2020 that they’d charge less than the 30% Apple and Google take from apps on their app stores
Just in time for Easter, a robotic mouth perfectly capable of chanting prayers:
A lawsuit against Facebook and “ethical AI” subcontractor firm Sama has been launched following TIME’s reporting on the horrifying working conditions Kenyan moderators were subjected to
As the humanitarian disaster in Shanghai continues, content moderators on Chinese social media seem to be staging a form of silent protest by delaying when videos of the horrific conditions for people under lockdown are removed
Cool virtual Salon alert! Writer Moira Weigel hosts a discussion of the Frankfurt School, a school of philosophy that seems to influence some big names in tech. For a deep dive on the topic, see this piece by Weigel
Thread + paper looking at whether people with certain political orientations are more likely to share fake news:
An excellent thread about what the options are when a tech worker is faced with an ethical dilemma at their job
For folks who love podcasts, Kara Swisher chats with Cathy O’Neil about her new book:
What is the Public Interest Technology University Network? A run-through of the objectives and milestones of this two-year-old consortium of US universities
It got lost a bit in this week’s Twitter news, but the US State Department launched a Bureau of Cyberspace and Digital Policy, inching the government forward in the race to catch up with tech harms
How should computer scientists and social scientists learn from each other?
This week’s Signal in Noise tweet:
Marta’s Things
This is the year in which three religions’ holy celebrations fall on the same weekend. To mark the occasion, three articles on the intersection of AI and religion:
Robo Rabbi uses AI to “counteract algorithmic evils,” tailoring daily advice to the Torah portion corresponding to a person’s birthday.
Father Paolo Benanti, a professor at the Pontifical Gregorian University and an engineer and ethicist, is the pope’s chief advisor on AI. There’s been a series of meetings between industry AI leaders such as Meta, Microsoft, IBM and DeepMind and the pope, an interesting intersection between religion/spirituality and technology.
Two Pakistani academics, Dr Junaid Qadir and Amana Raquib, have been studying how Islamic ethical and legal principles can be used to regulate AI in Muslim countries.