The Human Cost in the Race for AI's Holy Grail
It's only human to be enamored of the shiny new thing. Only if we dare to look away will we begin to see the human cost of advancing humanity.
The race to solve humanity’s biggest problems with AI is getting crowded, with Big Tech and Small Tech wanting a piece of the pie. But ahead of the pack is attention-grabbing OpenAI and its biggest investor, Microsoft.
OpenAI has been making the most noise too. The Elon Musk vs. Sam Altman saga, fueled by Sam's double standards and Elon's jealousy, surfaces the fundamental flaws of human thinking in their quest for the artificial.
Google has also been making a lot of noise lately in its attempt to reassure us that its various AI projects are not inherently biased, that the failures are due to poor execution. The recent Gemini AI disaster has brought to light the real-life consequences of letting AI run free, and underscores the risks of unchecked control.
OpenAI’s ChatGPT and Google’s Gemini are impressive AI tools but of limited intelligence.
They are trained to complete tasks or actions only as accurately as the quality of their human teachers allows. Futurists call this Artificial Narrow Intelligence (ANI), while others call it Weak AI or simply AI.
Their ultimate goal is to be the first to claim the Holy Grail of AI, Artificial General Intelligence (AGI), when their narrow-brained generative-AI models become human-like, independently thinking machines.
AGI is a subset of AI and is theoretically much more advanced than traditional AI. While AI relies on algorithms or pre-programmed rules to perform limited tasks within a specific context, AGI can solve problems on its own and learn to adapt to a range of contexts, similar to humans.1
But the path to the finish line in this race for AGI is one paved with many lawsuits.
Elon Musk recently sued OpenAI for violating its founding agreement. Contrary to the original open-source, non-profit business model, OpenAI's flagship product, ChatGPT version 4, with its "advanced AI," costs $20 per month in usage fees. OpenAI has also raised $13B to date from partner Microsoft, in turn powering Microsoft Office's generative-AI capabilities through the Copilot product. OpenAI is currently valued at $86B.
OpenAI "scraped" the authors' works along with reams of other copyrighted material from the internet without permission to teach its GPT models how to respond to human text prompts.2
ChatGPT's intelligence is a result of the training data it is fed, and therein lies the issue for many content creators, from literary publications to newspapers to authors.
The New York Times has filed a copyright infringement lawsuit against OpenAI for using millions of articles as training data without permission. The Authors Guild is also battling OpenAI in the courts for the use of their published work as training material without their consent. Actors, authors, artists, programmers, et al. are filing lawsuits against these generative-AI tools.
In many ways, what OpenAI and the likes are doing amounts to an open hacking of the internet: scraping publicly available content with advanced screen-scraping technology to train their algorithms. Copyright law is falling short in addressing this new way of stealing.
As a NYT subscriber, I use the wisdom gained from reading content I have legally purchased in my own writing. OpenAI's counterargument rests on this loophole, but it's debatable.
Humans can argue that we use our wisdom to reproduce copyrighted work in our own unique creative way. How can these AI tools, with their narrow and weak brains, claim to be wise enough to use that information to make accurate judgments?
OpenAI does not dispute the validity of these claims. Instead, it is banking on the courts to dismiss copyright claims under the guise of "fair use," as in the case of the Authors Guild, where the federal judge dismissed 4 of the 6 claims.
The court allowed a fourth claim of “unfairness” under the unfair competition law to proceed, however, holding that, if true, the authors’ claims that Open AI used their copyrighted works “to train their language models for commercial profit may constitute an unfair practice.”3
OpenAI will also use its investment dollars to strike deals with many content owners. Who's to say that the NYT won't cave in and sell its rights for a price, and with them its human soul?
The Associated Press struck a licensing deal in July with OpenAI, and Axel Springer, the German publisher that owns Politico and Business Insider, did likewise this month. Terms for those agreements were not disclosed.4
I hope that the courts will have the legal precedent and the inclination to use the powers of copyright law to conduct an audit, to lift the hood and take a peek at the practices of these companies. If OpenAI wants to use copyrighted information to train its AI babies, then its practices for obtaining, storing, and reproducing that information must be made public too.
Unchecked power is dangerous. The very human rights that got us here should not become the cost of advancing humanity. Transparency must be a bare minimum requirement. Where do we draw the line?

