It’s ironic, but artificial intelligence is on the cusp of harnessing the collective wisdom locked inside the human hive mind.
Showcasing a piece of software at a debate at the 200-year-old Cambridge Union debating society in Cambridge, England, in late November, IBM wanted to demonstrate that the system could help two teams of human debaters sift relevant arguments from a field of hundreds of pieces of information, near instantly.
Of course, the general concept that there’s wisdom in crowds is nothing new. In the past, IBM has showcased a similar system that could pull arguments from millions of news articles and distill them.
But how to extract that wisdom, particularly from individual opinions that aren’t organized in a structured way, as news stories are, has always been a tricky problem. Surveys and opinion polls are one method. Betting and financial markets are another. Yet each has its limitations. Surveys are notoriously difficult to design: the answers pollsters offer as possible responses may not match the full range of views people hold, and the wording of questions can bias how people answer. Betting and financial markets, meanwhile, can’t tell you much about exactly why people hold a certain view.
But IBM’s new approach is something more. Called “speech by crowd,” the system can take thousands of arguments, divvy them up into categories and then summarize the main points of each position.
In the week leading up to the Cambridge Union debate, IBM used a website to solicit more than 1,100 arguments on the proposition “A.I. will cause more harm than good.” The system crunched all the comments in about two minutes, characterizing 570 as being in favor of the proposition and 511 as being against it. (It also managed to handle efforts by some users to troll the system by submitting arguments laced with expletives and other inappropriate language, in the hopes of tricking the software into repeating them in the live debate demonstration.)
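IBM hasn’t published the internals of “speech by crowd,” but the grouping step it describes, sorting free-text arguments into pro and con camps and tallying each side, can be illustrated with a deliberately crude sketch. Everything below is hypothetical: the function names, the cue-word lists, and the keyword-matching approach are stand-ins for whatever natural-language models the real system uses.

```python
from collections import Counter

# Hypothetical cue words for each stance on the proposition
# "A.I. will cause more harm than good" (not from IBM's system).
PRO_CUES = {"harm", "bias", "jobs", "surveillance"}
CON_CUES = {"automate", "productivity", "medicine", "drudgery"}

def classify_stance(argument: str) -> str:
    """Crudely label an argument 'pro' (A.I. causes more harm),
    'con' (more good), or 'unclear' by counting cue words."""
    words = set(argument.lower().split())
    pro_hits = len(words & PRO_CUES)
    con_hits = len(words & CON_CUES)
    if pro_hits > con_hits:
        return "pro"
    if con_hits > pro_hits:
        return "con"
    return "unclear"

arguments = [
    "a.i. will entrench bias and destroy jobs",
    "a.i. will automate drudgery and boost productivity",
    "it is hard to say either way",
]

# Tally the crowd's positions, as the IBM demo did at larger scale.
tally = Counter(classify_stance(a) for a in arguments)
print(tally)  # Counter({'pro': 1, 'con': 1, 'unclear': 1})
```

A production system would replace the keyword matching with learned stance classifiers and add the summarization step, but the shape of the pipeline — classify each free-text argument, then aggregate — is the same.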
In the demonstration (part of ongoing IBM research into software capable of processing arguments, which the company calls “Project Debater”), the system used the themes distilled from the crowd’s collective wisdom to help two teams of human debaters pick their arguments. For instance, as an argument against the idea that A.I. will cause more harm than good, the IBM software picked out the theme that A.I. would help automate many routine, monotonous tasks, saving humans from drudgery. For the position that A.I. would cause more harm than good, it picked up the theme that A.I. can entrench the bias found in human decision-making.
According to Noam Slonim, the IBM researcher who heads the project, companies can use the system to help understand what customers think about new products, or to solicit employee opinions on a new company policy. He thinks even governments could use it to solicit opinions from citizens. Big Blue will shortly start offering exactly these kinds of services, made possible by the A.I. software, to a select group of its cloud computing customers.
Slonim says this “speech by crowd” software is better than surveys or polls because people can provide whatever feedback they wish, using free-form, natural language. This allows for a much broader range of possible views and more nuanced arguments than simply answering multiple choice survey questions.
John Bohannon, chief scientist at San Francisco-based A.I. company Primer, says there is tremendous demand from companies hoping to find better—and cheaper—ways to do market research. Summarizing free-text opinions is one way to do this, he says.
But this “speech by crowd” system is designed to work alongside human decision-makers, providing them with insights. And IBM isn’t alone in augmenting human analysis in this way. For instance, Primer also markets software that extracts insights from documents as a way to help human analysts in fields ranging from market research to finance to national security. But these tools are not designed to replace humans. “We want to turn that human into a computer-human pair,” Bohannon says.
Richard Socher, the chief scientist at Salesforce, which is also working on summarization as a major focus of research, makes an analogy to what has happened in the field of translation since the advent of machine-learning-based translation software. Today, most human translators use such software to take a first pass through a document. Only then does the human take over, cleaning up language and capturing subtler nuances, for instance translating colloquialisms and metaphors that machines still struggle with.
By contrast, most analysts still pore over reams of documents and summarize them by hand, Socher says. With renewed attention focused on A.I.-enabled summarization tools from tech giants like IBM and Salesforce and smaller, specialized firms like Primer, that may be starting to change.