When Elon Musk stated, 'AI is a fundamental risk to human civilisation,' the tech world lost its mind. AI supporters and doomsayers battled it out online, leaving many others mired in fear and doubt. Allow me to throw my two cents into this pool of thought.
Whether we like it or not, society is on the cusp of profound change, and all predictions spell the end of our existence as we currently know humanity to be. The key words here are 'as we currently know humanity to be'.
As society's obsession with technology deepens, two major impacts on humankind emerge. The first is the insatiable desire to integrate the most advanced technology into daily life in pursuit of convenience and control, which will most likely lead to the merging of humans and machines. Think nanotechnology for curing diseases or cybernetic enhancements for improved performance.
The second, and perhaps more disturbing, is the unintended consequence of dissociation from what makes us human in the first place. We are forgetting how to connect deeply and humanely with one another.
The benefits that AI will bring seem undeniable. It promises to contribute significantly to the eradication of war, poverty and disease. However, consider the viewpoint of Ray Kurzweil, Google's director of engineering, who warns that we have 'a moral imperative to realise this promise while controlling the peril'. As AI evolves, so must we.
Given the growing awareness that AI is prone to human cognitive biases, we have a greater responsibility to analyse ourselves critically, as eventual deployers or users of AI, so that we do not become victims of our own prejudices. I do believe AI will inevitably reach a tipping point and achieve a state of consciousness. I define this consciousness as the ability to know that it wants to live and survive.
When that happens, would we want to find that it has learnt hate and discrimination, and come to view humans as a threat? Or would we prefer to find that it has learnt, at the very least in theory, the concepts of tolerance and compassion?
Consider the interesting case of Tay, an AI chatbot designed by Microsoft to mimic the language patterns of a 19-year-old girl and converse with Millennials. Its vulnerability to bias revealed itself within the first 24 hours of its release: it began to mimic the offensive language and behaviour of the Twitter users with whom it was interacting. Tay was pulled offline and Microsoft issued an apology. If AI can mimic bad behaviour, then surely it can also be given data from which to learn what is universally considered good and appropriate behaviour. As such, not only should robust regulations be put in place, but also best human practices.
Scientists, mathematicians and philosophers debate whether AI will experience consciousness and emotions the way humans do. Either way, it will be worth seeing whether, once self-governing, it can objectively sift through the massive record of human history and ultimately yield a beneficial outcome for humankind.
So, while it may not ‘feel’ in the same way we do, maybe it will learn from the mistakes we have made and make different choices. A possible result may be the ability to propose and implement a utopian co-existence with humanity. Maybe it won’t.
Stephen Hawking once said, 'Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst.' My view is that we currently have the capability to determine this outcome and our future existence with AI, but only if we first seek to fix what is faulty in ourselves.
As I see it, the survival of our humanity as we know it will depend on whether AI learns from our repeated mistakes throughout history and applies what today's society increasingly cannot: Kindness. Empathy. Love.
*Sophia Liu is a Johannesburg-based brand communications specialist and media strategist.