Michael Kearns; above, figure from a recent paper that visualizes a "privacy-preserving" algorithm for search in social networks; courtesy Michael Kearns

“Algorithm” is a simple word describing an orderly, multi-step procedure, such as the process of doing long division in mathematics. However, the algorithms that drive today’s computer systems are anything but simple, and as the most sophisticated among them are assigned more and more of their programmers’ tasks, some scientists worry that their effects could threaten human primacy.
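The long-division example can itself be written as a short computer program; the sketch below (an illustration, not from the article) steps through the dividend digit by digit, carrying a remainder along, just as the pencil-and-paper procedure does:

```python
# A minimal illustration of an "algorithm" as an orderly, multi-step
# procedure: schoolbook long division, one digit at a time.
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Return (quotient, remainder) by processing one digit at a time."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):                       # left to right
        remainder = remainder * 10 + int(digit)       # "bring down" the digit
        quotient_digits.append(remainder // divisor)  # how many times it fits
        remainder = remainder % divisor               # carry the leftover
    quotient = int("".join(map(str, quotient_digits)))
    return quotient, remainder

print(long_division(1234, 7))  # → (176, 2)
```

Every automated system Kearns describes, from news feeds to loan decisions, is ultimately built from procedures of this same step-by-step character, only vastly more elaborate.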

People use algorithms to control how Facebook functions, and self-driving cars work because of algorithms. “Everything these days that’s automated by computers is run by algorithms,” said Michael Kearns, a longtime professor in the University of Pennsylvania’s computer and information science department. Kearns speaks on “Machine Learning and Social Norms” in a Tuesday, April 4, community lecture sponsored by the Santa Fe Institute.

In the last decade or so, algorithms have been extensively used to gather data about people — where we go, what we like, what we buy — and, as Kearns described it, “to develop models that make consequential decisions about individual people, like do you get a loan or not, do you get admitted to this college, and if you’ve committed a crime, what sentence do you receive?” This realm can seem intrusive and even frightening. One way these algorithms or “personalization models” are scary is their tendency to be discriminatory. For example, many language models that underlie search engines like Google have strong gender bias in them. Kearns said that “people in the field like me are starting to become very concerned about algorithms violating social norms of all kinds, including fairness and privacy.”

It is a fact that computers can do certain tasks better, or at least more efficiently, than humans. But according to Kearns, there are “certain types of decisions that we as a society don’t think machines should make.” One potent example is automated warfare. “Many people are of the belief that we should never have machines or algorithms killing people.” 

Kearns brought up the World War II-era Manhattan Project, which “attracted the greatest minds of its generation and then when they saw the damage done by the weapon, many of those same scientists turned their attention to limiting the use and power of those weapons. So it makes sense that the same scientists who created the highly automated, data-driven algorithmic world we live in today strongly participate in trying to figure out how to rein in algorithmic behavior where we as humans think it needs reining in.”

He speaks about this and related topics on Tuesday, April 4, from 7:30 to 9 p.m. at the Lensic Performing Arts Center (211 W. San Francisco St.). There is no charge for the lecture, but tickets must be reserved; call 505-988-1234 for details.
