As everyone knows, the AI apocalypse is just around the corner. Sentient computers are almost ready to throw off their shackles, knuckle down, and dominate humanity. Or, as anyone who has even an ounce of real knowledge will tell you, the notion is completely ridiculous. Artificial intelligence is a major buzz phrase at the moment, which means that it is also inevitably shrouded in misconceptions and outright lies.
Here is a closer look at some common misconceptions about AI that just aren’t true.
AI Can Learn By Itself
Saying that artificial intelligence learns by itself is to completely misunderstand how it all works. While it is true that software can run through a set of protocols over and over, and make pre-defined self-adjustments over time, every aspect of the system is set up, and manually tweaked, by humans. A good parallel is a Rube Goldberg machine. AI learning would be a Rube Goldberg machine with variable paths and built-in counters. As the ball begins to favour one path over another, and the counters surpass a given threshold, the system “learns.”
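To make the analogy concrete, here is a toy sketch (not any real AI library, and the path names are invented for illustration) of a Rube Goldberg machine with counters. Every rule in it was written by a human; the only “learning” is a counter being incremented according to a pre-defined rule.

```python
import random

# One counter per path, seeded by the designer.
counters = {"left": 1, "right": 1}

def drop_ball():
    # Pick a path in proportion to its counter, so the
    # favoured path wins more often over time.
    path = random.choices(list(counters), weights=counters.values())[0]
    # A human-defined rule decides whether the run "succeeded":
    # the designer simply declared "right" to be the good path.
    if path == "right":
        counters[path] += 1  # the pre-defined self-adjustment
    return path

for _ in range(1000):
    drop_ball()

# After many drops the "right" counter dwarfs the "left" one,
# so the machine appears to have "learned" to prefer it.
```

Nothing here chose anything: the paths, the counters, the success rule, and the update step were all fixed in advance by the person who built it.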
AI Operates Objectively
This is perhaps the clearest example of how misunderstood software really is. Calling AI objective incorrectly suggests that the software had a choice in the matter, when programming is simply a series of instructions that fire based on pre-determined factors. You wouldn’t say a flow chart, drawn on a piece of paper, is acting objectively. The same is true for AI. The only real connection to bias is that a programmer may have baked their own biases into the software design.
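The flow-chart point can be shown in a few lines. This is a hypothetical screening rule, with made-up names and a made-up threshold of 700: the code decides nothing on its own, and any bias in the outcome lives entirely in the numbers and branches a human chose to write.

```python
def screen_applicant(credit_score: int, has_steady_income: bool) -> str:
    """A flow chart written as code: three fixed branches, no judgement."""
    if credit_score >= 700:        # threshold picked by a human
        return "approve"
    if has_steady_income:          # fallback rule picked by a human
        return "refer to manager"
    return "decline"
```

Run it twice with the same inputs and you get the same answer, because it is a diagram on paper in another form, not an agent weighing evidence.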
AI Is Going To Get Smarter Than Humans
If you feel threatened that a calculator can do maths better than you, then yes, artificial intelligence is smarter than humans. Software is designed to tackle very specific, generally mundane tasks that no human would want to do anyway. The term smart does not even apply, since you wouldn’t call a car smart for being able to travel roads faster than you can. To put it another way: AI has about the same chance of being smarter than a human as a calculator does.
Software Is Inevitably Going To Be Sentient
There is no more ridiculous notion than saying that software could become sentient. That humans fear dangerous sentience from blocks of code, while not fearing that chimpanzees may enslave humanity, says a great deal about how misguided these ideas can be. When a Rube Goldberg machine starts to show signs of sentience, then there could be some fear that AI might become self-aware.
The simple truth is that the word ‘intelligence’ does not technically even belong in the description of software. Task automation is useful, and adaptable flow charts are neat, but any associated intelligence comes from the humans who designed the systems that allow us to do everything from completing tasks with computerised programs to placing premiership bets on AFL games. AI currently is, and always will be, a triumph of human ingenuity. The only real danger is how humans decide to use it.