Dr. Stefan Fourier

Let AI decide

Imagine sitting in a meeting. It has already lasted two hours; all the facts are on the table, and the various alternatives and their consequences have been discussed and are crystal clear. A single course of action has emerged as the solution—reasonable, logical, fact-based, and entirely unambiguous. You could now go and take action. But everyone thinks, "The boss has to decide first." So the boss announces the obvious. In reality, however, he hasn't decided anything—it was completely clear what needed to be done. This "decision," which is not actually a decision, could have been left to an algorithm: no decision is needed here, only action that follows from the facts and logical considerations.

Now imagine that at the end of this meeting, the boss suddenly makes a decision that has nothing to do with the outcome of the discussion. He does not choose the reasonable, clear, fact-based solution, but something entirely different. Almost everyone has experienced something like this at some point. In the political arena, we unfortunately encounter such decisions more frequently these days—decisions against reason, against logic, against facts. Then one wonders, why does the boss (or politician) do this? There can be various reasons:

  1. The boss knows more. He is aware of facts that did not make it into the discussion round. One might wonder why he did not disclose them, but that is at least a possibility.

  2. The boss is under external pressure. He is forced by external powers—whatever they may be—to enforce a specific, factually unsupported solution.

  3. The boss has personal interests. The solution not determined by the facts may bring him personal advantages that he does not want or cannot reveal.

  4. The boss does not want to lose face. Perhaps he had propagated a different solution to stakeholders or outsiders before the meeting and does not want to look embarrassed now.

  5. The boss is simply incompetent. He did not understand the entire situation and discussion and thus makes a completely misguided decision.

Such things happen. However, and here I venture a bold thesis: an algorithm would never do this. It cannot act against logic. Only humans can do that. Wouldn't it be wiser, therefore, to forgo human influence and let algorithms take the lead when the facts are clear? They would do it better because there is nothing to decide here, only to follow the facts.

It is different when there is no clear set of facts, when different approaches exist, and a decision has to be made between these perhaps equally valid and comparably risky options. A real choice, in other words, where it only becomes clear later which path was the right one. Then real decisions are needed. When the facts are unclear and thus the chosen path can be the wrong one, humans must step in. These real decisions cannot be left to algorithms. They must be made by decision-makers who can understand and bear the risk.

To sum it up succinctly:

When we do not know what is right, a decision must be made—and only then. When we know what is right, there is no need to decide, only to act.

By nature, real decisions can be wrong. The decision-maker bears this risk and must bear it. He can seek confirmation from experts or try to spread the risk across several shoulders. In the end, however, none of this changes the fact that he alone bears responsibility for his decision.

Looking at the practical side, most "decisions" are actually not decisions at all because the facts are largely clear. If it is raining outside, I do not need to decide to take an umbrella—it goes without saying, unless I want to get wet and risk catching a cold. If company costs are too high, there is no need to decide on a savings program—it is an immediate conclusion from the analysis and just needs to be done.

For example, one could let algorithms handle all "decisions" where the risk of error is under ten percent. This would certainly cover most everyday cases. If the risk exceeds that threshold, real decisions are needed. And these—this is extremely important—must be made by those who bear the risk.
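The threshold rule above can be sketched as a simple routing function. This is a minimal illustration, not a proposal from the text: the ten-percent boundary comes from the paragraph, but the names (`Assessment`, `route`) and the idea of an estimated error probability are my own assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    action: str          # the course of action suggested by the facts
    error_risk: float    # estimated probability that acting on it would be wrong

# The "under ten percent" boundary from the text; where this estimate
# comes from in practice is exactly the hard part and is assumed here.
RISK_THRESHOLD = 0.10

def route(assessment: Assessment) -> str:
    """Route low-risk cases to the algorithm, high-risk cases to a human."""
    if assessment.error_risk < RISK_THRESHOLD:
        return f"algorithm acts: {assessment.action}"
    return f"human must decide (risk {assessment.error_risk:.0%}): {assessment.action}"

print(route(Assessment("take an umbrella", 0.02)))   # → algorithm acts: take an umbrella
print(route(Assessment("enter new market", 0.40)))   # → human must decide (risk 40%): enter new market
```

The design choice matches the essay's distinction: below the threshold there is nothing to decide, so the system simply acts; above it, the function deliberately refuses to act and hands the case to whoever bears the risk.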

Politicians and leaders often make real decisions where the facts are not clear, and risks are present. They make these decisions for others, including me. I am therefore confronted with the consequences of these decisions, without being able to influence or avoid them. I understand that a state or any other form of human coexistence needs such decision-makers. I am also modest enough from experience to know that I could hardly make better decisions than the people in government and corporate leadership who do this. The dilemma is that they decide, and I have to deal with the consequences. They cannot take responsibility for their decisions, even if they wanted to.

This dilemma can only be lived with if one can be sure that the decision-makers always keep in mind what their decisions mean for everyone else. These considerations must influence the decision and be disclosed and communicated. Politicians as decision-makers must put themselves in the shoes of those for whom they are deciding. They must maintain a close connection to the people. And they must communicate openly and clearly.

I suspect that a large part of the dissatisfaction we are experiencing in the country these days is because decisions are not handled openly and reasonably. Maybe it would indeed be better to use algorithms. They could evaluate decisions based on the facts, highlight risks, set things in motion in cases of low risk, and issue appropriate warnings in cases of high risk. An algorithm would never implement half-baked laws, engage in one-sided dependencies in supply chains, or tackle projects whose goals are unclear, funding is unsecured, or success prospects are low. If all decisions, in companies and politics, were adequately prepared, evaluated, and published by algorithms, we would probably have a better world.

