The greatest risk from artificial intelligence

Among the most important lessons in human history is that those who adopt innovation most advantageously often triumph over competitors. This has never been truer than in the rapidly evolving artificial intelligence revolution now underway, where we face great risk from a triad of totalitarian nations, corporate oligopolies and complacent democracies.

Properly designed AI systems with embedded security and rules-based governance offer one of the few options for solving the many problems facing the U.S.: an extended period of low productivity, escalating public debt, unsustainable health care costs, outdated infrastructure and rising inflation. Unfortunately, the sectors that need advanced AI systems most often protect entrenched interests rather than transforming into a competitive economy.

China is exploiting this historic opportunity in part through the world’s largest surveillance network of human behavior, which is tapped for strategic industrial and military purposes in partnership with leading tech companies. China’s well-funded strategic plan aims to dominate artificial intelligence by 2030. Dr. Eric Schmidt captured part of the problem facing much of the U.S. economy at a recent hearing of the House Armed Services Committee: “The DOD does not have an innovation problem, it has an innovation adoption problem.” While the overdue warnings are welcome and should be extended to the entire U.S. economy, Dr. Schmidt’s role as chairman of the Department of Defense Innovation Advisory Board represents another risk in the form of corporate oligopolies.

Dr. Schmidt recently stepped down as executive chairman of Alphabet Inc. (Google) but remains on its board. Google is a contractor for the DOD’s Project Maven and one of only three cloud vendors in the running for a $10 billion DOD contract called JEDI; Amazon and Microsoft are the other contenders. Regardless of which vendor wins the contract, the arrangement exacerbates two of the highest-risk areas facing the U.S. — catastrophic cyberattacks and further consolidation of wealth in a few ZIP codes.

The type of behavior we are observing in corporate oligopolies as strategic partners with government is a form of parallel exclusion, which is described in a Yale Law Journal paper by C. Scott Hemphill and Tim Wu as “conduct, engaged in by multiple firms, that blocks or slows would-be market entrants.” The purpose of the behavior is Orwellian: “Maintaining an exclusion scheme is a dominant strategy for each of the excluders. In such cases, the likelihood of collapse is even lower, yielding a potentially indefinite system of parallel exclusion.”

That the U.S. federal government would seem to favor oligopolies is troubling, as it tends to force the next generation of leaders to emerge outside the U.S., directly conflicting with the economic and national security interests of the country and its citizens. If the current trajectory continues unabated, with the U.S. protecting incumbent oligopolies over emerging leaders while China continues to exploit our weaknesses, totalitarianism could dominate Earth and space for generations.

If, however, we employ AI systems to prevent crises, diversify the economy and empower individuals toward achieving sustainability in economic, social and environmental ecosystems, then the future could be brighter than at any previous point in history. The choices made today in adoption of AI systems may well determine the outcome for our planet and species. Which will it be — an unsustainable dystopian oligopoly serving a few, or a strong diversified economy serving all citizens?

Mark Montgomery is an independent scientist working in AI for over two decades. He has been living in Santa Fe for 10 years with his wife, Betsy.
