Stangl

In the “Terminator” movies of the past 30 years, sometime in the fictional future, an artificial intelligence network called Skynet becomes self-aware and turns on humanity.

For those of you who haven’t seen the movies, there’s time travel involved as the remnants of rebel humanity seek to protect the younger version of their leader, John Connor. Skynet sends a Terminator, played by a young Arnold Schwarzenegger, back in time to kill Connor.

Skynet, the artificial intelligence (AI) network, had been entrusted with all aspects of societal control, everything from traffic lights to nuclear weapons. In these movies, humanity gave its fate to the machines willingly, believing the future would be safe in the hands of unemotional machines.

It turns out that when the computer becomes “self-aware,” it decides the greatest threat to the planet is humans, so it sets out to destroy as many people as possible.

While the movies are a lot of fun (the first two, at least), scientists and ethicists have long debated the merits of machine learning and whether we will be able to “pull the plug” if and when the machines become smarter than their creators.

This is no longer the science fiction of the last century. AI is now touted by mainstream vendors like Microsoft as a tool for handling mundane, repetitive tasks without errors. Billions are being invested in self-driving cars and in wireless networks fast enough to make communication between the machines possible.

AI is coming, ready or not.

I have read numerous articles on this topic. I find it fascinating. As a kid who grew up in the 1970s, one who was told I could live on the moon, I knew that I would have a robot servant. Maybe one like Rosie the Robot maid from “The Jetsons.” At the very least, I would have a clunky model like Robby the Robot from “Forbidden Planet.” Love those actuated gears that click and clack when he’s thinking.

I think there is something innately human about worrying whether the machines will become our overlords, but we may be worrying for nothing.

It turns out that hackers are already setting back the progress of machine learning. Dawn Song, a professor at UC Berkeley who specializes in studying the security risks involved with AI and machine learning, warns that we may never get to the promised land of AI if hackers don’t stop messing around.

An article in MIT Technology Review says that Song warned that new techniques for probing and manipulating machine-learning systems — known in the field as “adversarial machine learning” methods — could cause big problems for anyone looking to harness the power of AI in business.

Adversarial machine learning involves experimentally feeding input into an algorithm to reveal the information it has been trained on, or distorting input in a way that causes the system to misbehave. By inputting lots of images into a computer vision algorithm, for example, it is possible to reverse-engineer its functioning and ensure certain kinds of outputs, including incorrect ones.
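For readers who like to see the idea in action, here is a minimal sketch of the “distorting input” half of the trick. It is not taken from Song’s work or the Technology Review article; it shows one standard adversarial-ML technique, the fast gradient sign method, run against a hypothetical toy classifier (the model, weights and image below are all made up for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained image classifier: plain logistic
# regression over 784 flattened pixel values (think 28x28 grayscale).
w = rng.normal(size=784)   # "trained" weights (random here, for illustration)
b = 0.1

def predict(x):
    # Probability the model assigns to class 1.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(x, true_label, eps=0.05):
    # Fast gradient sign method: nudge every pixel by +/- eps in whichever
    # direction most increases the model's loss on the true label.
    p = predict(x)
    grad = (p - true_label) * w   # gradient of cross-entropy loss w.r.t. x
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x = rng.uniform(size=784)          # hypothetical input image, pixels in [0, 1]
x_adv = fgsm_perturb(x, true_label=1)

print(f"confidence on the clean image:     {predict(x):.3f}")
print(f"confidence on the distorted image: {predict(x_adv):.3f}")
```

The distortion is tiny for any one pixel, but it is pointed exactly where the model is most sensitive, which is the whole trick: the picture looks the same to us, and completely different to the machine.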

It’s kind of like prank-calling a really naïve person who never gets the joke and who stops, time and again, to check whether the refrigerator is running.

Since the fictional Skynet was supposed to become self-aware on Aug. 29, 1997, and we’re all still here, I think we are good, at least for a while.

It is somehow fitting that imperfect humans will ruin the future — or save it, depending on your point of view.

As always, I welcome your comments. You can reach me by email at tstangl@theameryfreepress.com, telephone 715-268-8101 or write me at P.O. Box 424, Amery, WI 54001.