WDYT 9/19: Can machines help us to achieve better compromises (and to fight autocrats)?
Every day we learn that machines have surpassed humans in one more field of endeavor. Artificial intelligence is better than us at winning games, more efficient at object recognition, and able to analyze incredible amounts of data. Yet all of this is, in a way, still the easy part.
While scientists and developers continue to push the frontier of AI capability, we are also giving neural networks and the algorithms they generate more control, responsibility, and power over real-life negotiations and decisions.
One question technology ethicists ask is whether we can teach machines to compromise, arguably one of the most complex human operations. And could they even surpass us, producing better compromises or better bargaining results?
Why Compromise Matters
Political science teaches us that in any system of modern society, compromise is essential for the successful long-term perpetuation of the system. When one person or party maximizes their institutional power to the fullest extent possible, systems collapse. In other words, compromise is what keeps everything running — from democracies to companies.
Humans are not perfect at compromise, but we have become remarkably good at recognizing when a system is near a breaking point and forming a consensus around compromise.
Autocrats, on the other hand, are so destructive because they eliminate compromise. “Don’t make compromises” has become a rule for autocrats. They keep repeating that making any compromise would benefit “the enemy” and halt the progressive change that could supposedly save the system. This unwillingness to set differences aside destroys the very processes that make compromise and long-term stability possible.
Democratic leaders must be able to compromise. And crucially, they must even be able to lose. In the end, the hope is that all autocracies will implode under their own weight. But the human and economic costs are immense.
Challenges for Machines (So Far)
So far, machines are not really better than humans at this. Even the most sophisticated neural networks are still built on series of binary decisions: something is either a one or a zero, true or false.
With binary thinking, there is no compromise. As we give machines more responsibility and power in fields as diverse as law enforcement, zoning, and pollution control, the ability to compromise becomes more critical.
In real life, most things are not binary. The beauty of compromise is that it exists between true and false, yes and no.
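There are, in fact, formalisms that live between true and false. Fuzzy logic, for instance, lets a statement be 0.7 true instead of simply 1 or 0, and a compromise can then be located where both sides’ partial acceptance overlaps. A minimal sketch; the membership functions and numbers are invented purely for illustration:

```python
# Fuzzy logic assigns truth values anywhere in [0, 1] instead of {0, 1}.
# Invented example: how "acceptable" is a proposed price to each party?

def buyer_acceptance(price, ideal=50, limit=100):
    """1.0 at the buyer's ideal price, falling linearly to 0.0 at the walk-away limit."""
    if price <= ideal:
        return 1.0
    if price >= limit:
        return 0.0
    return (limit - price) / (limit - ideal)

def seller_acceptance(price, limit=40, ideal=90):
    """0.0 below the seller's walk-away limit, rising linearly to 1.0 at the ideal."""
    if price >= ideal:
        return 1.0
    if price <= limit:
        return 0.0
    return (price - limit) / (ideal - limit)

def joint(price):
    # Fuzzy "AND" (the minimum): how acceptable a price is to BOTH at once.
    return min(buyer_acceptance(price), seller_acceptance(price))

# The compromise is the price that both sides find most acceptable together.
best = max(range(0, 101), key=joint)
```

Neither side gets its ideal, but the joint acceptance peaks in between, which is exactly the shape of a compromise.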
Another issue with the power we are giving machines is that we often hardwire our prejudices and biases into them. The decision trees used by coders often reflect their unconscious biases. When all coders come from the same background, there is little chance that the data entering the machine-learning pipeline will be corrected for those biases.
But compromise requires accepting a diversity of opinions, needs, and experiences. We are building machines and putting them in control of vast swaths of our civilization without giving them the benefit of diverse viewpoints.
Compromise isn’t always the most efficient solution, but it is often the only (morally) workable one whenever cooperation is needed.
The Future: Better Compromises by Better Negotiations
We build machines to be efficient and to optimize their tasks. But humanity is messy, and the efficient solution is often morally abhorrent. Compromise is not always efficient. It puts the brakes on optimization and efficiency.
Unless we can teach machines to compromise, we may be creating an algorithmic autocracy. If we can’t teach them, we need to insert humans at key points in decision and negotiation processes to force compromise.
But even humans have their limits: many negotiators in a win-win negotiation (collaborative approach) drift toward win-lose (competitive approach), misinterpret information, or lie at some point. Negotiations backed, or even fully run, by machines could help find the best compromise.
However, if we are able to overcome those limits, A.I.-powered discussions and negotiations might produce better compromises than humans alone. For example: could two A.I.-powered bots negotiating with each other automatically optimize toward the best possible value for both sides, eliminating any misinterpretation of the available information?
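Game theory offers one classic formalization of “the best possible value for both sides”: the Nash bargaining solution, which picks the outcome maximizing the product of both parties’ gains over their walk-away positions. A minimal sketch, assuming each side’s preferences can be written as a utility function; the utilities below are invented for illustration:

```python
import math

def nash_bargain(utility_a, utility_b, disagreement=(0.0, 0.0), total=100, steps=1000):
    """Grid-search the split of `total` that maximizes the Nash product
    (gain_a * gain_b), where gains are measured over each side's
    disagreement point (what they get by walking away)."""
    d_a, d_b = disagreement
    best_share, best_product = None, float("-inf")
    for i in range(steps + 1):
        share_a = total * i / steps
        gain_a = utility_a(share_a) - d_a
        gain_b = utility_b(total - share_a) - d_b
        if gain_a < 0 or gain_b < 0:  # worse than walking away: not a deal
            continue
        product = gain_a * gain_b
        if product > best_product:
            best_product, best_share = product, share_a
    return best_share, total - best_share

# Invented example: agent A values the resource linearly,
# agent B has diminishing returns (square-root utility).
split = nash_bargain(lambda x: x, lambda x: math.sqrt(x))
```

Note that the Nash product is only one possible notion of a fair bargain; which definition of “best for both sides” a machine should optimize is itself a value judgment that humans have to make.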
AI could help here to (to name just a few):
- Prepare verified information and knowledge for negotiations and discussions
- Lay out all potential and feasible options
- Structure the negotiation (1. draft an opening, 2. clarify relationship and communication, 3. focus on value creation)
- Document and analyze negotiations
- Discount the visible and obvious to highlight important hidden information, knowledge, intentions, or characteristics
- Evaluate moves in negotiations (no move is risk-free)
- Separate assumptions from knowledge and make sure assumptions do not mislead negotiators
- Identify “surprises” that help to challenge assumptions
- Identify the real needs of both sides and the real reasons for conflict
- Debrief negotiations and make them transparent
Very early examples: an A.I.-enabled alarm that warns a negotiator when a counterparty’s facial expressions indicate a bargaining session is about to go south, or an A.I. that analyzes a proposition and automatically highlights the best arguments for and against it.
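The alarm idea can be sketched very crudely as a rolling “tone” score over the transcript that triggers when the conversation trends negative. A real system would use trained sentiment or facial-expression models; the keyword lists and thresholds below are invented placeholders:

```python
# Toy negotiation alarm: score each utterance by invented keyword lists,
# then warn when the rolling tone over recent utterances drops too low.

NEGATIVE = {"never", "unacceptable", "refuse", "impossible"}
POSITIVE = {"agree", "fair", "flexible", "open"}

def tone_score(utterance):
    """Crude per-utterance tone: +1 per positive word, -1 per negative word."""
    words = utterance.lower().replace(",", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def alarm(transcript, window=3, threshold=-2):
    """Return the index where the session 'goes south', or None if it never does."""
    scores = [tone_score(u) for u in transcript]
    for i in range(window - 1, len(scores)):
        if sum(scores[i - window + 1 : i + 1]) <= threshold:
            return i
    return None
```

For example, `alarm(["We are open and flexible", "This is unacceptable", "We refuse, impossible", "Never"])` flags the transcript once the negative utterances pile up, while a consistently cooperative exchange returns `None`.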
Compromise is essential to the success of any deal and, more broadly, of any civilization. So, what do you think: can we teach machines to compromise for a better society, without building an algorithmic autocracy?