The Internet is Under Fire – What does that Mean for AI?
I stumbled upon some new statistics this week from Pew Research asking the question, “Is the Internet good or bad for society as a whole?” The survey was conducted in January of this year (so, well before the Facebook fiasco), and a sizable majority of respondents (70%) continue to believe the Internet is a good thing overall. However, that represents a significant drop of 6 percentage points from early 2014.
Snappy headlines aside, it’s fairly obvious this isn’t a question that can be answered with a simple “good” or “bad” response. The Internet (and the modern technologies that it powers) provides many incredible benefits to society (access to information, the convenience of online shopping and entertainment, social connectedness), as well as some very negative consequences (data privacy concerns, the fake news situation, and “screen addiction”). Whether the good outweighs the bad is a matter of perspective. How we continue and enhance the benefits while limiting the negative outcomes is a more interesting discussion.
Since Al Gore and a whole bunch of smart people invented the Internet, originally so researchers could share data sets with each other, it has morphed and changed dramatically. Largely unchecked and unburdened by government regulation, the pace of innovation has been amazing, and the services, products, and applications of the digital era have taken on a life of their own.
This brings me to AI, another cutting-edge technology that is about to go mainstream. In fact, today, the White House is pulling together executives from nearly 40 U.S. companies to hold an AI summit. The goal is to call in the experts to figure out how to strike a balance between the benefits of machines that can do things like spot diseases or drive cars and the reality that jobs – or lives – are at stake in the age of AI.
I’m anxious to see what comes out of this meeting. Again, it’s not a matter of whether AI will be good or bad for society. It will undoubtedly yield some of both. The bigger question is how we can maximize the good and minimize the bad. Are some government regulations a necessity or an inevitability? Or would they hamper the pace of innovation too much and stifle amazing solutions that are on the cusp of helping society? Can we trust the technology industry to manage its intelligent software and machines responsibly, following the principles of beneficence and non-maleficence (do good and do no harm)? Does the government have the speed, agility, and technical acumen to enact effective legislation?
These are all important questions, but I don’t claim to know the answers. If you have thoughts, please chime in with a comment below.