In early July, news broke that Blake Lemoine, a Google engineer, claimed to have found evidence of AI sentience. This would have been huge news and a quantum leap in human technology if it were true. Regretfully, Lemoine was mistaken.
To this day, Lemoine claims the chatbot he worked with is self-aware. He also says it can communicate on the level of a very knowledgeable eight-year-old. However, further investigation found that confirmation bias had more to do with the episode than any quantum leap.
Regardless, this ordeal sparked a discussion about AI. Many are wondering how AI can help humans with their tasks. More importantly, many people argue that we should apply AI to cybersecurity issues such as data leaks. Nobody wants to make Skynet a reality.
At the moment, we know three things for certain:
- Modern AI can process millions of pieces of data each second
- AI can use algorithms to draw conclusions from data
- Intelligence and reason are hard to distinguish
This also sparked a philosophical and moral debate about self-aware machines. Should we build self-aware machines? What does it mean to be self-aware? What does it mean to be a human? Certainly, these discussions are important for society. However, they are also significant for technology; self-aware AI would be a game changer.
Walks Like a Human, Talks Like a Human
Back in the days of Isaac Asimov, the idea was that intelligence would recognize intelligence. However, that does not answer what intelligence is. This statement also presumes that all humans experience intelligence in the same way or have the same amount.
As it turns out, intelligence is just one of the components of sentience. In fact, experiences and wisdom play a huge role in humans’ mental capacity. The same concepts also govern how AI would act.
According to Enzo Pasquale Scilingo, a bioengineer at the University of Pisa, it is not uncommon to communicate with users who are not entirely human. The chatbots we have today have amazing processing and searching power. They might even sound alive.
However, AI takes in very little data compared to a human child, especially when it comes to context. In fact, babies take in colors, sounds, and shapes. They also learn names for things and how those things relate to each other.
On the other hand, AI only gets an echo from time to time. This is not enough for even the most intelligent AI to form a personality.
Still, according to some researchers, the technology for that is here. It is only a matter of time before we create a fully human-level artificial mind, a singularity.
AI Sentience Would Need A Lot More Data
The term AI sentience is becoming more relevant, because artificial intelligence alone is no longer a sufficient description. We have observed humans, animals, and even machines. As a result, we now understand that the threshold for intelligence is much lower than we thought.
Virtually every animal has some degree of intelligence. However, they do not have enough processing power. As a result, animals cannot acquire traits such as speech and abstract reasoning. Conversely, AI has a different problem. It does not have enough data to work with.
When fed specific data, AI can solve many tasks. Its processor is not encumbered with everything humans have to deal with on a daily basis. Within these narrow bounds, it becomes brilliant and highly useful.
AI would need much more data to run and solve complex things such as ad hoc analytics or data visualization. And to make matters worse, we do not know which data we need to give the AI.
At the moment, AI is ideal for solving tasks inside known parameters. However, the results would fall apart if unpredicted factors affected those parameters. In these cases, it is apparent that AI simply does not have enough data to work with.
AI Is Very Intelligent, But Not Very Smart
In the most basic terms, intelligence is recognizing a relation between one piece of data and another. This is an easy task for computers; in effect, they are built to recognize relations and report on them. This is why nearly every large business now uses AI.
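To make that notion concrete, here is a minimal sketch of "recognizing a relation between one piece of data and another": computing the Pearson correlation between two series. The function, variable names, and figures are all invented for illustration, not taken from the article.

```python
# Hypothetical sketch: treat "intelligence" as spotting a relation
# between two columns of data, the pattern-matching computers excel at.

def pearson_correlation(xs, ys):
    """Return the Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Two made-up business series: ad spend and sales, perfectly linearly related.
ad_spend = [10, 20, 30, 40, 50]
sales = [15, 25, 35, 45, 55]
print(pearson_correlation(ad_spend, sales))  # → 1.0 (perfect linear relation)
```

The machine reports the relation flawlessly; it has no idea what "ad spend" or "sales" mean, which is exactly the distinction the article is drawing.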
Additionally, chatbots primarily aim to answer questions and chat with people. As a result, they must have a lot of information. This is how they can answer virtually any question you ask them.
However, that does not mean chatbots know what you are talking about. Rather, they only see how a term relates to other terms in the text they were trained on.
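A toy sketch of that idea: a chatbot "knows" a word only through the words that appear alongside it. The tiny corpus and the helper below are hypothetical, chosen purely to illustrate co-occurrence.

```python
# Hypothetical sketch: a chatbot's "knowledge" of a term is just
# statistics about which words co-occur with it in its training text.

from collections import Counter

corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog chased the cat",
]

def co_occurring(term, sentences):
    """Count words appearing in the same sentence as `term`."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        if term in words:
            counts.update(w for w in words if w != term)
    return counts

# The model can report that "cat" relates to "chased" and "mat",
# but it has no concept of what a cat actually is.
print(co_occurring("cat", corpus).most_common(3))
```

Real chatbots use vastly larger corpora and learned embeddings rather than raw counts, but the underlying point stands: relations between terms, not understanding of them.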
For some people, this can make it seem like the AI is communicating like a fairly knowledgeable child. This might be what happened with Lemoine. In fact, Lemoine was personifying the chatbot. He formed a social connection with the AI. However, the AI is not aware of Lemoine's existence outside that little conversation.
Uses for AI in Cybersecurity
The existence of general AI with human-level intelligence is still quite far away. But that doesn’t mean the AI we have right now is useless. One application is for the cybersecurity industry. AI will specifically come in handy with the current cybersecurity talent shortage.
Firstly, AI can run constant checks, surveillance, and reporting. AI never sleeps and never blinks, which means it will not miss anything within its programmed parameters.
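The "within its programmed parameters" point can be sketched as a simple rule-based monitor that flags any log event outside known-good bounds. The rules, field names, and events below are invented for illustration only.

```python
# Hypothetical sketch: tireless rule-based monitoring. Every event is
# checked against known-good parameters; anything outside them is flagged.

ALLOWED_PORTS = {22, 80, 443}   # assumed known-good ports
MAX_FAILED_LOGINS = 3           # assumed threshold

def check_event(event):
    """Return a list of alert strings for a single log-event dict."""
    alerts = []
    if event["port"] not in ALLOWED_PORTS:
        alerts.append(f"unexpected port {event['port']}")
    if event.get("failed_logins", 0) > MAX_FAILED_LOGINS:
        alerts.append(f"{event['failed_logins']} failed logins")
    return alerts

events = [
    {"src": "10.0.0.5", "port": 443, "failed_logins": 0},
    {"src": "10.0.0.9", "port": 8081, "failed_logins": 5},
]
for e in events:
    for alert in check_event(e):
        print(f"{e['src']}: {alert}")
```

Note the flip side the article mentions: an attack that stays within these parameters passes silently, which is exactly why unpredicted factors remain a problem.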
Additionally, AI is great at combating other AI, such as bots. No bot can trick an AI into handing out personal information; the AI would let the conversation time out without ever faltering.
Finally, AI is great at building and testing models. Especially with machine learning involved, AI can monitor assets and predict where an attack might happen. In turn, human experts can patch the vulnerability.
Because of these features, more research into AI will certainly follow. However, we are several decades away from full AI sentience.