
Managing the Future – Part One: Artificial Intelligence

March 17, 2019 – Happy St. Patrick’s Day to all who celebrate the “greening” of the beer, the Chicago River, and in some cases many a gill. A number of my friends, whom I meet monthly in a local pub to discuss how we can make the world better, are gathering around the corner from our apartment this afternoon to celebrate the rites and rituals that accompany St. Paddy’s Day. I won’t be joining them this year because my knee has been acting up after being crammed into an airline seat that would have better suited a sardine than a human.

In today’s posting, I want to talk about “managing the future.” When you type that phrase into an online search engine, what usually comes up is “forecasting.” But forecasting is merely a small part of managing our future on this planet. It is a financial and budgeting term. Where does disruptive change, a fixture of the 21st century, fit when you only look at financial forecasts quarter-to-quarter and year-to-year? Where do disruptions from unforeseen technological innovation show up in quarterly forecasts? And where does climate change enter the picture in most corporate annual reports to shareholders?

We live in an age of disruption. From the smartphones in our pockets to artificial intelligence (AI), the Internet of Things (IoT), cryptocurrencies and the blockchain, fintech, and robotics, change is happening at lightning speed, making it far tougher for governments and policymakers to anticipate its positive and negative impacts.

AI, Disruption and the Future

Normally, when you use the word “disruption,” it doesn’t have a positive connotation. But, in fact, the disruption we are seeing in the 21st century impacts us both positively and negatively.

Take AI, for example. It involves self-learning software running on electronic devices. It can be found in our smartphones, in our cars, in the airplanes we fly, in the medical tools that lead to a diagnosis, and in countless other applications. Today banks use AI to detect fraud. Physicians use it to diagnose and treat disease. Biomed companies use it to help develop new drugs. Manufacturers use it for quality control analysis. Police, emergency services, and stock traders are toying with the idea of using it to predict the next crime, fire, flood, or rise or fall in the stock market.

AI, the autopilot, and other flight control functions on airplanes have been the subject of headlines in the last few days because of two Boeing 737 MAX 8 crashes. So too has the AI in autonomous vehicles, a fast-approaching technology widely seen as the direction the auto industry is heading at a galloping pace.

AI predictive and data-mining tools are also being seen as dark forces on social media and the Internet, shaping the messaging we receive; they have now been identified as influencers in a recent election in the United States and a referendum in the United Kingdom.

Do Our Governments Know How to Manage AI Present and Future?

Asking the above question might suggest that we have not done a very good job of managing AI to date, and that we have yet to come up with policies that maximize the positive gains while protecting all of us from AI’s potential dark side.

When you allow a machine to become a principal means of solving problems, you should do so knowing what you are sacrificing. One reason for doing it may be that the problem you are attempting to address is too complex to tackle without a machine’s capacity to analyze massive amounts of data and work out the permutations and combinations needed to find a solution.
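To put a rough number on that complexity, here is a minimal back-of-the-envelope sketch in Python (my illustration, not the author’s): simply counting the orderings of a modest set of items shows why such searches get handed to machines.

```python
# A minimal sketch of why "permutations and combinations" quickly exceed
# human capacity: counting the orderings of n items (e.g., delivery stops).
# Purely illustrative arithmetic, not tied to any specific AI system.
from math import factorial

for n in (5, 10, 15, 20):
    print(f"{n} stops -> {factorial(n):,} possible orderings")

# 20 stops already allow roughly 2.4 quintillion orderings -- far too many
# to enumerate by hand, which is one reason we hand such searches to machines.
```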

A Brookings Institution study entitled “A Blueprint for the Future of AI” notes that AI could create an even greater imbalance between the haves and have-nots. Why? Because AI can be skewed by the quality of the “data in” used by the technology to do pattern recognition. This is the way machines learn, and if the data source is flawed, any proposed solution will reflect those flaws. This is just one of the many ways AI could prove damaging to society. For example, consider an AI banking system used to help make loan decisions. If the data in includes a rules bias, it may exclude a family from being able to get a mortgage, or an eminently qualified individual from obtaining a small business loan.
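To make the “flawed data in, flawed solution out” point concrete, here is a minimal sketch assuming Python with numpy and scikit-learn available; the features, figures, and lending scenario are hypothetical inventions for illustration, not drawn from the Brookings study.

```python
# A minimal sketch of how a flawed "data in" source produces a flawed
# "solution out". All names and numbers here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two hypothetical applicant features: an income score and a neighborhood flag.
income = rng.normal(50, 15, n)        # genuine creditworthiness signal
neighborhood = rng.integers(0, 2, n)  # proxy attribute, no causal link to risk

# Historical approvals: past human decisions penalized neighborhood == 1,
# so the "data in" already embeds a rules bias.
approved = (income + rng.normal(0, 5, n) - 10 * neighborhood) > 45

features = np.column_stack([income, neighborhood])
model = LogisticRegression(max_iter=1000).fit(features, approved)

# The model learns and reproduces the historical bias: two applicants with
# identical incomes get different predicted approval odds.
for flag in (0, 1):
    p = model.predict_proba([[50, flag]])[0, 1]
    print(f"income=50, neighborhood={flag}: approval probability {p:.2f}")
```

Because the historical approvals already penalized one group, the model reproduces that penalty even though the flag says nothing about creditworthiness.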

And then there is the issue of an autonomous AI learning from data sets in ways that prompt it to reach unforeseen conclusions, leading to actions that have negative impacts on individuals. Think of AI used for facial recognition, or the AI of autonomous weapon systems.

Managing Facial Recognition

Today China is tracking the movements of many of its citizens through a proliferation of technologies, including CCTV cameras and AI facial recognition software. The threat such usage poses to individual freedoms cannot be overstated. Privacy and anonymity become things of the past. If we are to manage a future with AI tools of this type, then we need laws to regulate their use and to ensure that those receiving the image matches are restricted in how they use the results. The Brookings report suggests that “the equivalent of a search warrant” should be a requirement for identifying individuals using the technology, authorized only in the event of “probable cause” and not merely because of suspicion. For police, the technology applied in the aftermath of a crime or act of terror should be used to identify individuals at the site of the activity, but should meet a similar standard based on “probable cause.” To date, facial recognition AI remains an imperfect art, better at identifying men than women, and fair-skinned people rather than dark-skinned ones. False positives and systematic biases cannot be ignored.
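To see why a single match threshold can punish one group more than another, here is a minimal sketch assuming Python with numpy; the score distributions and the 0.80 cutoff are invented for illustration and describe no real system.

```python
# A minimal sketch (my illustration, not from the article) of why one fixed
# match threshold can yield unequal false-positive rates across groups.
import numpy as np

rng = np.random.default_rng(1)
THRESHOLD = 0.80  # hypothetical "this is a match" cutoff

def false_positive_rate(nonmatch_scores, threshold):
    """Fraction of true non-matches the system would wrongly flag."""
    return float(np.mean(nonmatch_scores >= threshold))

# Suppose the matcher produces slightly higher similarity scores for
# NON-matching faces in group B (e.g., a group underrepresented in training).
group_a = rng.normal(0.60, 0.10, 10_000)  # non-match scores, group A
group_b = rng.normal(0.70, 0.10, 10_000)  # non-match scores, group B

print("FPR group A:", false_positive_rate(group_a, THRESHOLD))  # about 2%
print("FPR group B:", false_positive_rate(group_b, THRESHOLD))  # about 16%
```

A small shift in score distributions translates into an order-of-magnitude gap in wrongful flags, which is exactly the kind of systematic bias a “probable cause” standard would need to account for.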

Managing Autonomous Weapons

And then there is the rise of AI weapons. The United States, Russia, and other countries are hell-bent on developing autonomous weapons technology for surveillance and combat. War is disruptive enough without giving a machine the authority to shoot to kill. How should this burgeoning field be managed? Should “shoot-to-kill” autonomous weapons be universally banned, while the surveillance, intelligence, and targeting AI used by human-fired weapon systems is allowed?

Educating Government and Citizens About the Future of AI is a Must

Can we regulate and manage a future where AI applications, both anticipated and unforeseen, continue to be invented? Governments and their citizens need to talk about the technology openly, and they need the inventors, software engineers, and scientists to educate them so they can make informed regulatory decisions. Managing the future may not produce one-size-fits-all legislation. It may have to be nuanced and flexible if it attempts to broadly define an emerging field such as AI. But one thing is for sure: the status quo just will not do if we are to manage the future well.

In Part Two, we will discuss managing the future of IoT.

It is hard to deal with the future if you don’t establish some game rules to help you play, even in a thick fog.

Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology.
