Gain insights into navigating the complexities of AI for a more responsible and balanced AI-driven future.
Artificial intelligence (AI) has long been positioned (by its creators) as a force for good: a labour-saving, cost-reducing tool with the potential to improve accuracy and efficiency in all manner of fields. It’s already being used across sectors – from finance to healthcare – and it’s changing the way we work. But the time has come to ask, ‘at what cost?’ Because the more AI is developed and deployed, the more examples are emerging of a darker side to the technology. And the recent ChatGPT data-labelling scandal showed that our understanding to date is just the tip of a very large and problematic iceberg.
The more sinister side of AI
There is a range of issues with AI that have been either unacknowledged or brushed under the industry carpet. They are each cause for varying degrees of concern, but together present a bleak picture of the future of the technology.
Bias
Bias is the area that has been most talked about, and it is the one issue that is actively being addressed, largely due to public pressure. With the likes of Amazon left blushing after the uncovering of its sexist recruitment AI, and the American healthcare algorithm found to discriminate against black people, AI bias was becoming too dangerous to ignore, both ethically and reputationally. The cause of the bias is easy to identify: because AI ‘learns’ from human-produced data, the biases of the people who create that data can inadvertently affect the training process. The industry has already admitted that all AI systems are at risk of becoming biased (with disclaimers splashed across every model now being produced). And notwithstanding efforts like NVIDIA’s NeMo Guardrails initiative, there is no instant fix to the problem. One possible route out, made more feasible by the emergent reasoning capabilities of LLMs, is explainable AI (XAI), which allows a user to question the logic behind an AI’s decision-making and get a sensible answer. But with this approach still at a very nascent stage, the problem remains rife.
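To make the point less abstract, here is a minimal, hypothetical bias audit – not any vendor’s actual tooling, just a sketch of how a skewed outcome can at least be surfaced. The data, the ‘gender’ column and the hire/no-hire predictions are all invented for illustration; a real audit would use an organisation’s own model outputs and a properly defined protected attribute.

```python
import pandas as pd

# Hypothetical screening results: one row per applicant, with the model's
# hire/no-hire prediction and a protected attribute recorded for auditing.
results = pd.DataFrame({
    "gender":    ["F", "F", "F", "F", "M", "M", "M", "M"],
    "predicted": [1,    0,   0,   0,   1,   1,   0,   1],  # 1 = recommended for hire
})

# Selection rate per group: the share of each group the model recommends.
selection_rates = results.groupby("gender")["predicted"].mean()

# Demographic parity gap: the difference between the best- and worst-treated
# group. A value near 0 suggests parity; a large gap is a red flag to investigate.
parity_gap = selection_rates.max() - selection_rates.min()

print(selection_rates)
print(f"Demographic parity gap: {parity_gap:.2f}")
```

A gap like this only flags a symptom, of course – it says nothing about where in the training data the skew came from, which is precisely why there is no instant fix.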
Unethical data collection practices
This is where ChatGPT joins the conversation. Capable of generating text on almost any topic or theme, it is widely viewed as one of the most remarkable tech innovations of the last 10 years. It is a truly outstanding development. But that development was only possible thanks to extensive human data labelling and the hoovering up of vast swathes of human-generated data. For ChatGPT to become as uniquely complex and versatile as it is, millions – billions – of pieces of data needed to be sourced and, in many cases, labelled. Because of the immense toxicity of its earlier models, OpenAI, the creator of ChatGPT, needed to introduce a significant amount of human-labelled data to show the models what toxicity looked like. And quickly.
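To see why that human labour was unavoidable, consider a deliberately toy sketch – this is not OpenAI’s actual pipeline (which combined far larger datasets, dedicated moderation models and reinforcement learning from human feedback), just a scikit-learn illustration of the underlying principle: a model only knows what ‘toxic’ means because a person has already read each example and said so. All of the texts and labels below are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hypothetical human-labelled dataset: every "toxic"/"ok" tag here
# represents a judgement a human annotator had to make by reading the text.
texts = [
    "You are a wonderful person",
    "I hope you have a great day",
    "You are worthless and everyone hates you",
    "Go away, nobody wants you here",
    "Thanks so much for your help",
    "You disgust me",
]
labels = ["ok", "ok", "toxic", "toxic", "ok", "toxic"]

# A crude bag-of-words classifier: nothing like a modern moderation model,
# but it only learns what toxicity is from the examples humans have labelled.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["Have a lovely evening", "Nobody wants you here"]))
```

Scale that judgement up to millions of examples and you get a sense of the volume of distressing material someone, somewhere, had to read.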
Was this done by the same cappuccino-drinking, highly paid Silicon Valley hipsters who thought the models up? No, it was “outsourced” to a workforce coerced into viewing some of the most disturbing material on the planet, all for the price of the foam on a California coffee. In January 2023, a Time investigation revealed that the job had been done by Kenyan workers earning less than $2 an hour, often handling extremely graphic and highly disturbing data without training, support, or any consideration for their well-being.
It was a shocking discovery, made even more so by the knowledge that ChatGPT is not the only guilty party in this area. Many AI companies start out by outsourcing their data labelling – some ignorant, some wilfully so, of the labelling processes and the conditions experienced by the workers who provide them with the data they need. This is something that simply can’t be left unaddressed.
The ethics of applied AI
Without human-generated data, there can be no AI. Yet with AI-generated content increasingly replacing human content on the internet, some predict we will run out of new training data by 2026. The recent gnashing of teeth by leading luminaries in the industry shows there is legitimate concern about the pace of change; some have said we are seeing AI’s “iPhone moment”. But how dangerous is it?
Well, for one thing, AI is capable of determining a person’s race from an x-ray alone. The recent focus has been on Large Language Models, but this is a reminder that there are many other powerful AI applications out there. Will AI take your job, or render your business obsolete? Possibly not – but a person using it might. The entire content creation industry looks pretty shaky right now, and almost any “mundane” human touchpoint involving interpersonal interaction could easily become a target for AI-powered applications.
Criminals will use it. In fact, they already are. Generative AI solutions are already producing audio and video of high enough quality to fool a person into handing over money, believing they are talking to a loved one. It will produce better malware, better phishing emails, and better suggestions as to how to manipulate individuals in real time.
But like the internet, which has created and destroyed so much, we can’t uninvent it, and we’re doing a poor job of stopping people using it. So can we curb the negatives while reinforcing the positives? There are arguments that a libertarian free-for-all is best, because otherwise criminal behaviour goes further underground and becomes harder to track. On the other hand, we see the EU trying to legislate AI out of existence – but only where it is controlled by large companies. Which is the bigger evil: criminals sucking up and using our data to try and take our money without us noticing, or big business sucking up and using our data to try and take our money with us noticing? This debate will run and run as different countries take different stances. We have already seen Japan announce that it is not a breach of copyright to use copyrighted material in AI training (even if illegally obtained!) – something at stark odds with the current headwinds of opinion, if not actual law.
What is the future of AI?
Although the ultimate goal of Artificial General Intelligence (AGI) is still a long way from becoming a reality, AI is already too useful a tool to abandon. In terms of productivity, it is unsurpassed, changing the way we work and live, and changing our capabilities. However, unless we take precautions now, we may be heading towards a science-fiction future that few of us will enjoy living in.
While anyone can access – can build and deploy – highly sophisticated AI (with the right research and tools), the potential for it to be used unethically is enormous. And while we’re still a long way from rogue machines bent on the defeat of mankind, there is enough scope for individual people to do plenty of damage of their own. Regulation may be the way to deal with that risk. There have already been some attempts – GDPR’s requirement that automated decisions be explainable, and the new EU AI Act (alluded to above) aimed at regulating ‘high-risk’ AI. But as with all digital issues, regulation is difficult to implement, and it can be intrusive. With AI in the hands of the layperson, targeting the problem through the active monitoring of data centres and the forced compliance of tech producers is not going to be easy.
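What ‘explainable’ might look like in practice is easiest to see with a simple model. The sketch below uses a hypothetical loan-approval classifier whose per-feature contributions can be read off directly – every feature name and figure is invented, and production credit systems (and the legal tests applied to them) are far more involved – but it shows the kind of answer a regulator might expect an automated decision to be able to give.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical automated decision: approve/decline a credit application
# from three features. Purely illustrative data, not a real scoring model.
feature_names = ["income_thousands", "existing_debt_thousands", "missed_payments"]
X = np.array([
    [55, 5, 0],
    [30, 20, 3],
    [70, 2, 0],
    [25, 15, 4],
    [60, 10, 1],
    [20, 25, 5],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

# For a linear model, coefficient * feature value gives each feature's
# contribution to this applicant's score, a crude human-readable "reason code".
applicant = np.array([28, 18, 2])
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(feature_names, contributions), key=lambda p: p[1]):
    print(f"{name}: {value:+.2f}")
print("Decision:", "approved" if model.predict([applicant])[0] == 1 else "declined")
```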
AI is by no means all bad. It brings enormous benefits. It saves money – not just through enhanced productivity, but through fraud prevention. It protects the vulnerable – with natural language processing (NLP) helping companies to identify customers who need more support (or who shouldn’t be sold to). And it removes the burden of some of the most time-consuming and tedious tasks from bottom-tier workers. But we can’t let its benefits shade out the technology’s darker side, because if we do – and we fail to act – the repercussions could be disastrous.