Stephen Hawking made headlines recently when he expounded on his fear of runaway artificial intelligence. His concerns aren’t new to those familiar with the subject, though the technology for such an intelligence doesn’t exist yet and remains entirely theoretical.
The other big fear about A.I. is that an intelligence, given enough power, could decide humanity is a threat and destroy it, a la Skynet from the “Terminator” movies.

Of the two, the latter threat is the more plausible right now, as researchers push the technology forward. Though there are real benefits to be had from more intelligent and capable computers, there is no guarantee they will be applied responsibly.

Research today is split between academia and Silicon Valley. In the business-focused setting of the latter, progress has come mixed with controversy.

Uber and Lyft, ride-sharing services that threaten the existence of established taxi companies, have been expanding around the United States. But not every locality has welcomed them. The latest controversy comes from Virginia, where the state’s Department of Motor Vehicles told Lyft to cease and desist operations until it meets certain requirements. Lyft is flouting that order.

Lyft has nothing to do with A.I., but the episode speaks to the prevailing attitude in Silicon Valley, which will probably drive most of the implementation of A.I. in products. The Valley seems to operate under a “Why not?” mentality when it comes to creating and marketing products and services. Most of what it produces is frivolous, but some of it, like Google’s self-driving cars, could revolutionize many aspects of our day-to-day lives, possibly for the better.

The “Why not?” mentality is more interested in whether companies can do something than in whether they should. And the public is not given the option to hold back some of these technologies, only to deal with the fallout once they’ve been unilaterally introduced.

In “Terminator 2,” the Arnold Schwarzenegger cyborg explains that Skynet was given control of strategic command because of its flawless performance record. That is a decision made without consideration of the consequences. In the fast-moving world of Silicon Valley, how likely is it that similar decisions will be made, injecting A.I. built on rushed or poorly thought-out code into critical systems? And who would be able to stop it?

Before we worry about computers consciously deciding to muck things up for humanity, we should focus on not mucking things up ourselves, as we have done plenty of times throughout history.