Artificial intelligence is a constellation of concepts and technologies meshed together to enable machines to sense, think, comprehend, and act independently or with minimal human supervision. From consumer products and services like Apple's Siri to Tesla's autonomous cars, AI is progressing rapidly. While AI already plays a crucial role in almost everyone's life, either directly or indirectly, many industry experts believe we are still a long way from realizing its full potential. In other words, AI as we know it today is still in its infancy, and there is much more to come.
Based on its capabilities, AI can be classified into three types:
Artificial narrow intelligence or weak AI
We are currently in this phase of AI, using it to master a narrow range of abilities and automate simple tasks, or small sets of closely related tasks, in our everyday lives. One example is software capable of analyzing data without human intervention to support business functions.
Artificial general intelligence or strong AI
This is the phase where AI can independently mimic and match the level of human intelligence. To date, Fujitsu's K supercomputer has managed to simulate just 1 second of neural (brain) activity. However, because that simulation took almost 40 minutes, it is difficult to say whether we can achieve strong AI in the foreseeable future.
Artificial superintelligence (ASI)
This is the state where AI is self-aware and surpasses the capacity of human intelligence and ability.
For many years, people have assumed that technology exists for the betterment of humanity and is always useful. But artificial intelligence's applications can stretch well beyond the trivial use-cases we know today, to tasks performed independently of humans such as fighting wars, driving us around, or raising our kids. Because of this, we need to shift our focus from the functionality of AI to its ethics.
Ethical AI is a subset of machine morality, a concept that has been explored since the 1970s. Ethical AI aims to address the ethical concerns surrounding AI technology and its practical implications. It also stresses constantly questioning, investigating, and monitoring the AI-powered technologies imposed upon human lives, and it highlights the possible consequences of AI-powered tech in life-threatening situations. How can someone code or install morality into machines or computers to allow them to make better judgments?
We have already witnessed several large-scale events in which technology was used for unintended purposes. In the recent Cambridge Analytica scandal, for example, millions of Facebook users' data was gathered and used for political advertising, manipulating users by exploiting their emotional biases.
The rapid scaling-up of AI technologies is increasing the focus on enforcing ethical standards for this burgeoning technology. These ethical concerns range from trivial questions, such as who should be credited for an AI-created artwork, to complex and disturbing ones, such as the use of AI for surveillance or national security. These concerns resonate most when AI's use and application expand into areas far removed from the originally intended use-case of developing algorithms for academic and business purposes.
In general, many principles of technology ethics, such as access rights, health and safety, digital rights, existential risk, freedom, human enhancement, judgment, precautionary principles, privacy, and security, also apply to AI. Enabling transparency and keeping a check on self-replicating or recursive technology are other crucial aspects that need to be considered.
Ethical AI issues
Apart from the devastating human extinction-level scenarios we see in sci-fi movies, there are several issues caused by unethical AI that are far more likely to affect us in socioeconomic ways. One of the major issues is unemployment. Robots in the industrial sector now perform a large amount of labor previously done by humans. As technology advances, several other sectors, such as transport, automotive, logistics, construction, assembly lines, and supply chains, will see a potential shift of the workforce from humans to robots and machines. We may also witness inequalities in the distribution of the assets or wealth created by machines. Eliminating AI bias, dealing with potential security issues, singularity, and the loss of human morals and humanity in robots are other major potential issues of AI.
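One of these issues, AI bias, can be made concrete with a simple fairness metric. The sketch below (illustrative Python with made-up data and hypothetical function names, not a production fairness toolkit) measures the gap in a model's positive-prediction rates between two groups, one common proxy for bias known as demographic parity:

```python
# Illustrative sketch: measuring one simple notion of AI bias,
# demographic parity, i.e. whether a model's positive-prediction
# rate differs across demographic groups. All data is made up.

def positive_rate(predictions, groups, group):
    """Fraction of positive (1) predictions within one group."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between groups."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy example: model decisions (1 = approved) for two groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # prints 0.50 (0.75 - 0.25)
```

A gap near zero suggests the model treats the groups similarly on this one metric; a large gap flags a potential bias worth investigating, though no single metric captures fairness completely.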
AI safety and future
With increasing research, growing applications, and AI's game-changing promise to make things more efficient, concerns have mounted worldwide that AI may do more societal harm than economic good. This is partly because no governing body, such as a national government, oversees what happens inside private companies that use AI to build applications and machines.
Some industry experts believe that building super-intelligent or strong AI could help humanity eradicate diseases, wars, theft, and other criminal activities. Many others believe that, on the contrary, it could pose a huge danger to humanity. Considering the possible implications of a technology as powerful as AI falling into the hands of cybercriminals or other malicious actors, it is high time for every organization and individual to consider the ethical aspects of artificial intelligence while developing AI models or applications.