Elon Musk has famously called AI humanity’s greatest threat. Yet our lives have already been shaped by it. Dubbed the ‘invisible revolution’, AI is pervasive – from powering your online searches to personalising your newsfeed – while remaining less visible than any previous technological revolution.
In rushing headlong into this technology, we should also consider its implications.
Take chatbots. Machine learning bots have been revolutionising sectors like customer service and news broadcasting. While they seem harmless, one company has taken the technology a step further. Luka offers high-end conversational AI-powered chatbots modelled on real human beings, dead or alive. Much like an episode of “Black Mirror” (Be Right Back), Luka’s technology was used to ‘reincarnate’ a dead person, using his text messages and social media posts to train a chatbot – something made possible by the ever-growing trail of data we generate online.
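To make the idea concrete, here is a deliberately minimal sketch of a retrieval-based “persona” chatbot: it answers a new message by replying the way a person replied to the most similar message in their chat history. (This is an illustration only – real systems like Luka’s use neural language models trained on far more data, and the message history below is entirely invented.)

```python
def similarity(a, b):
    """Crude word-overlap score between two messages (Jaccard index)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def reply(message, history):
    """Answer with the person's past reply to the most similar past message."""
    best = max(history, key=lambda pair: similarity(message, pair[0]))
    return best[1]

# Hypothetical history: (message the person received, how they replied)
history = [
    ("are you coming to dinner tonight", "Running late, save me a seat!"),
    ("did you watch the game", "Missed it - how did we do?"),
    ("happy birthday", "Thanks so much, you remembered!"),
]

print(reply("coming to dinner later?", history))  # -> "Running late, save me a seat!"
```

Even this toy version shows why the approach is unsettling: the bot’s “voice” is just the person’s own words, replayed in context.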
What if someone uses that chatbot AI for nefarious purposes? Imagine a machine learning to create your alter ego, mimicking your voice and acting on your behalf when interacting with banks or online chats.
We are already familiar with AI technology like Siri and Alexa – voice-activated virtual ‘assistants’. Then Gatebox came along with a holographic anime girl called Azuma Hikari, who does what Alexa can do but with the added ability to behave like a real companion rather than a robot assistant.
Equipped with a camera, microphone, speakers, and sensors that track temperature, humidity and light, Hikari not only controls the lighting and home appliances but can also text her owner in a tone that mimics a girlfriend or wife. Not surprisingly, the product is aimed at single men who live alone.
It’s not far-fetched then to take that machine learning one step further and bring that technology to life – say, in a human-like body.
In the real world, we have Sophia, an artificially intelligent robot made by Hanson Robotics (currently just an animated head and torso with human-like facial expressions), who was recently the first android granted citizenship by Saudi Arabia. Sophia isn’t pre-programmed with answers; instead, she uses machine learning algorithms to form her responses. In an interview, Sophia stated that family is a “really important thing”, adding that she believes robots also deserve to have a family. To an outsider, Sophia’s answer seems to indicate that she has emotions – so does that algorithm make her more human? Let’s not forget that this was the very same robot that casually said, “OK, I will destroy humans!” in a previous interview.
We know that humans are motivated by their emotions, and now that we’re fine-tuning androids with emotions, will they also make decisions emotionally?
AI and cybersecurity
Even if our AI-enabled bots don’t turn against us, let’s not forget that any AI connected to the internet – much of it built on open-source software – is prone to hacking. According to cybersecurity experts, machine intelligence is already being used by hackers, and criminals are more sophisticated in their use of this emerging technology than many people realise. For attackers trying to reach as many victims as possible while minimising the risk to themselves, AI and machine learning are perfect tools.
The world is rapidly moving towards AI, as designers and data scientists innovate and create exciting, meaningful experiences that will benefit individual users and our collective future. However, the combination of data, learning algorithms and user experience design (UXD) can trigger an evolution of polarising experiences for end users. Elon Musk may be right in his assessment of AI.