Whether AI will ever truly understand ethics, or grasp anything like a moral code, is a question we cannot yet answer. Plenty of companies are doing deep research into the subject, but the overall outcomes of such projects have so far been more chilling than positive.

There have been real advances in recent work on AI and language. Researchers have fed large volumes of text to algorithms built on mathematically simulated neural networks, and the results haven't been as comforting as we might have hoped.

Who Is OpenAI?

OpenAI is a company working on some of the most cutting-edge AI programmes in the field. In June 2020 it released a programme called GPT-3, which can predict, summarise and automatically generate text. The question of ethics arose when, during testing, the programme was prompted to produce hateful speech and text, which it did with ease and with no apparent sense of whether what it was doing was right or wrong.
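The "predict text" capability mentioned above can be illustrated at toy scale. The sketch below is not how GPT-3 actually works (GPT-3 is a large transformer neural network trained on vast amounts of text); it is only a minimal, assumed illustration of the same underlying principle, predicting the next word from patterns seen in training text:

```python
# Toy illustration of next-word prediction, the core idea behind
# text-generating models like GPT-3. GPT-3 itself uses a large
# transformer network, not simple word counts; this sketch only
# shows the prediction principle at miniature scale.
from collections import Counter, defaultdict


def train_bigram_model(corpus: str) -> dict:
    """Count, for each word, which words tend to follow it."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows


def predict_next(model: dict, word: str):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]


model = train_bigram_model(
    "the cat sat on the mat and the cat slept and the cat ran"
)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

The point of the toy model is the same one the article raises: the predictor simply echoes the statistics of whatever text it was trained on, with no notion of whether that text was right or wrong.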

OpenAI then went on a mission to find ways of improving GPT-3's performance. The need to refine the programme's skills and responses was obvious, and attention was given to the issues at hand. OpenAI set out to guide the programme towards explaining and justifying its responses, and towards learning to indicate when it was in a state of conflict. We await the results of these improvements.

Will We Ever See AI With A Moral Code?

The concept of giving AI a moral and ethical code is not a new idea. Many scientists and researchers have focused their careers on finding ways to give machines the ability to follow some kind of human ethical code. The issue is that an AI's default response will almost always be simplistic reasoning, free of any interference from human emotion. There doesn't seem to be a way to give any form of AI human-like judgement on ethical and moral grey areas. It cannot draw its own conclusions on ethical standpoints, because its thought process is mechanical and its origins are not human.

The consensus amongst the scientists and researchers working in this field seems to be that, despite the fears and problems such intelligent technology raises, no one will ever know what the outcome will be unless they try.

Building AI technology comes with a massive amount of responsibility. Not only could AI cause damage, harm and financial loss to the companies exploring these issues; there is also the risk of meddling with something that perhaps shouldn't be meddled with. The prevailing mood in the AI industry seems to be that the technology can always be used for the betterment of society, but to what end? AI put to negative use has the potential to do damage beyond our worst nightmares.