AI Doesn’t Kill People, People Kill People (if they've got an AI)
When global tech giants advocate for reduced regulation, it’s time for African policymakers to put on their re-scepticals*
#Africa #policy #regulation - Comedian Suzy Eddie Izzard took on gun control in a stand-up routine twenty years ago:
“(The NRA) have a saying ‘guns don’t kill people, people kill people’… But monkeys do too, if they’ve got a gun. Without a gun, they’re pretty friendly. But with a gun, they’re pretty dangerous.”
In November, we carried the story of Google's regional director for sub-Saharan Africa government affairs and public policy, Charles Murito, telling delegates at Africa Tech Festival 2024 that African governments needed to regulate the use cases of AI instead of regulating the technology.
"We need to think about AI not as a technology that needs to be regulated. AI is a general-purpose technology, akin to what electricity was a 100-plus years ago," he said. "What we need to regulate is not the technology itself, but the use cases of that technology," he explained.
Sound familiar?
Izzard rounded off her routine: “Guns don’t kill people, people kill people, and monkeys do (if they’ve got a gun).”
SO WHAT? - For a notoriously aggressive global tech multinational to want what would be essentially unrestricted, unregulated development of a technology that could easily end up in the hands of careless, ill-prepared or even malevolent actors should set off red warning lights. Especially from tech giants that actively seek military and national security business for their AI Cloud services, and are not too picky about which regimes they work for.
For national policymakers in Africa, the messaging of Google, Microsoft, Amazon and other tech multinationals must be examined very, very critically.
Their lobbying machines have rumbled into action. Apple, Meta and other key players refused to endorse an EU Pact on use of AI. It’s the old bogeyman – if you don’t allow us to have unfettered power to push our AI business, you won’t be competitive and China will win.
It’s the same FUD playbook used by petrochemical and tobacco companies since the 1950s: “We all want what’s best, but let’s not do anything hasty because we don’t REALLY know what the dangers are, and if bad stuff happens it’s the users’ fault, they should really take more care.”
ZOOM OUT: What is revealing is that in the last three or four years we have seen a complete volte-face from the Cloud giants on AI regulation. A few years ago they were all for regulation: Sundar Pichai called for it in an op-ed in the Financial Times, and Google published this (undated) recommendations doc. Clear calls for regulatory intervention have slowly morphed into a “why can’t we all just…get along”.
What’s changed? Well, hyperscalers’ Cloud services revenues exploded.
The message has shifted to the position that developers and more importantly vendors of AI tech services themselves should not be regulated… users must just be careful what they do with it.
Give a bunch of monkeys an awesomely powerful and potentially dangerous tool, then tell them not to do anything bad. And should they do something reckless, blame the monkey and not the person who gave them the gun.
Governments and policymakers across Africa have for decades faced global corporate interests aggressively driving their agendas across our continent, often through a captured political class. We see the results of that in the environmental and social devastation caused by the fossil fuel, mining and arms industries.
Are we going to see it again with high tech?
* Sorry, that was appalling
PS: Just from the last couple of issues of AfricaAINews.com weekly news digest:
LinkedIn training AI on personal information
Judicial bodies calling for oversight of AI
Ugandan government raising red flags
Image source: Craiyon