
Human vs. Machine: Why the Machine (Almost) Always Wins

For thousands of years, human intelligence ruled the globe. We elevated ourselves above the food chain and slowly but surely built the complex civilization we know today. With that constant, rapid development, we now seem to have run into our own Achilles heel. Artificial intelligence nowadays beats the very best human players at chess (Deep Blue), the TV quiz Jeopardy! (Watson) and Go (AlphaGo), and statistical models make better choices than human experts. How can a machine beat nature's greatest miracle, the human brain?

It is a sad picture: you spend roughly a quarter of your life on school, study and specialization. You may aspire to a responsible position in which you have to make many important choices. And rightly so: after all those years at university or college you have knowledge and a good gut feeling for the tasks that await you. And then research by, among others, professor Chris Snijders (TU Eindhoven) suddenly shows that your supposed expertise is a myth. At least, a computer model with the right data at its disposal makes a better choice than you in practically all cases.

Were all those years of blood, sweat and tears for nothing? That conclusion is perhaps a bit too hasty. It is actually logical that we are the losing party here: there are dozens of psychological reasons why human thinking loses out to a machine. We take a closer look at seven of these mental blemishes.

1. Our thinking patterns are stuck



See the image with the nine dots above. Try to connect all the points without lifting your finger off the screen and using no more than four straight lines. If you did not already know the solution, you might be starting to sweat a little by now. Give up? You will find the solution here.
The fact that in this case we literally do not think out of the box makes it difficult for us to find the relatively simple solution. It shows that human thinking is often characterized by fixed patterns and views. A computer looks at every new problem with a fresh perspective and has the capacity to search for multiple alternatives quickly.

2. Our memory is unreliable



Do you remember where you were when you heard that flight MH17 had crashed? Or when the Twin Towers collapsed on September 11, 2001? There is a good chance that this evokes seemingly precise memories, but it remains to be seen whether those memories actually describe what happened. Shortly after the Space Shuttle Challenger broke apart during launch in 1986, researchers from Cambridge University asked hundreds of Americans where they were and what they were doing that day. Three years later, few of their memories turned out to be accurate.

Our memories are constantly changing. Every time we retrieve something from memory, it changes a little. The truth slowly blends with fantasies and opinions, something that obviously does not trouble a computer.

3. We fall for the availability heuristic


If you rely on your memory when making a choice, you mainly use memories that are easily available. Perhaps something right in front of you reminds you of a certain event, or that memory is simply the first thing that comes to mind. Sometimes that leads to skewed comparisons. A heuristic is, as it were, a shortcut for the brain.

With the availability heuristic, your thinking process takes a shortcut by relying mainly on the memories that are closest to hand.

If you have recently been involved in a car accident, you will estimate the chance of a subsequent accident as much higher than it really is. The recent confrontation makes it seem as if car accidents are common in general, while the actual number is quite low. A machine does not take shortcuts; it works only with hard figures.

4. We have difficulty dealing with probability calculations


Take a moment for the following thought experiment: imagine that you live in New York and regularly have to take a taxi. It is an outright disaster: only 5 percent of all taxis driving around are empty.
The problem is that all taxis have tinted windows, so you cannot see whether anyone is in the back. Over the years you have developed a gut feeling for empty taxis: when an approaching taxi is empty, your navel starts to itch in 90 percent of the cases. Yet your navel also itches for 10 percent of all occupied taxis. Right now your navel is itching and you hail the next taxi. What is the probability that it is actually empty?

You may now be inclined to think that the answer is 90 percent, but unfortunately that is not correct. Suppose 1,000 taxis drive past: only 50 of them are empty. In 90 percent of those 50 cases you get the gut feeling, so 45 times. Your navel also itches in 10 percent of the 950 other cases, so 95 times. In total your navel itches 140 times, but in only 45/140 ≈ 32 percent of those cases is the taxi actually empty. A computer simply applies the correct formula (Bayes' theorem), so these errors are entirely human.
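For those who want to check the arithmetic, here is a minimal sketch of the same calculation in Python. The percentages are taken straight from the thought experiment; the variable names are purely illustrative.

```python
# Bayes' theorem applied to the taxi thought experiment.
# The numbers come from the example above; nothing here is measured data.

p_empty = 0.05                 # 5 percent of all taxis are empty (base rate)
p_itch_given_empty = 0.90      # the gut feeling fires for 90% of empty taxis
p_itch_given_occupied = 0.10   # ...but also for 10% of occupied taxis

# Total probability that the navel itches at all
p_itch = (p_itch_given_empty * p_empty
          + p_itch_given_occupied * (1 - p_empty))

# P(empty | itch) via Bayes' theorem
p_empty_given_itch = p_itch_given_empty * p_empty / p_itch

print(f"P(empty | itch) = {p_empty_given_itch:.2f}")  # ~0.32, not 0.90
```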

5. We overestimate our own capabilities


If you ask people to estimate how certain they are of their answers, that certainty is almost always overestimated. A study by the American psychologists Libby and Rennekamp shows that test subjects systematically overestimate the number of trivia questions they answered correctly. The effect is even stronger for participants who believe they have some degree of expertise.

A person estimates the reliability of an answer based on intuition and feeling. A statement like "I'm 80 percent sure" is rarely grounded in anything concrete. A machine calculates that confidence interval mathematically and precisely, and that certainty can then be fed into further calculations.
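As an illustrative sketch (not taken from the study itself), this is roughly how a machine could quantify such certainty: a 95 percent confidence interval for a proportion of correctly answered questions, with made-up numbers.

```python
import math

# Hypothetical numbers: 30 trivia questions, 21 answered correctly.
n_questions = 30
n_correct = 21

p_hat = n_correct / n_questions  # observed proportion of correct answers

# Normal-approximation 95% confidence interval for a proportion
z = 1.96
margin = z * math.sqrt(p_hat * (1 - p_hat) / n_questions)

print(f"Estimated accuracy: {p_hat:.2f} "
      f"(95% CI: {p_hat - margin:.2f} to {p_hat + margin:.2f})")
```

Instead of a vague "I'm 80 percent sure", the model reports an estimate with an explicit margin of error.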

6. We find patterns that do not exist



This is perhaps the reason why casinos still turn over so much money. We think we see a pattern in the numbers that come out of the roulette wheel, while in fact they are purely random. The urge to see patterns in everything comes from our evolution. If we see the tall grass moving on the savannah, that can roughly mean two things.

It can be a meaningless signal (it is nothing more than the wind moving the dry stalks), or a life-threatening alarm (it is a tiger preparing to attack). From an evolutionary point of view it is of course safer to link the movement of the grass to lurking danger, even if the chance of that is actually quite small.

In daily life, however, it is simply impractical to factor non-existent patterns into your decisions. A statistical computer model can determine fairly easily from the data whether a relationship between two (or more) variables is likely. Is there no pattern? Then that relationship is simply left out of the rest of the calculations.
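A minimal sketch of what such a check could look like, using a standard correlation test on purely random "roulette" numbers (the use of NumPy and SciPy here is an assumption for illustration, not something the article prescribes):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two series of purely random roulette-style numbers: no real relationship.
x = rng.integers(0, 37, size=500)
y = rng.integers(0, 37, size=500)

r, p_value = stats.pearsonr(x, y)

# A small |r| and a large p-value mean there is no evidence of a relationship,
# so a model would simply leave this "pattern" out of its calculations.
print(f"correlation r = {r:.3f}, p-value = {p_value:.3f}")
```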

7. We cannot always learn from the feedback we receive



Normally we learn from the direct consequences of our actions. Think of driving, for example: if we turn the steering wheel to the right, the car inevitably follows. If we do not go where we want, we correct. The action has a direct consequence, and based on that consequence you determine whether you need to adjust.

In a more complex situation, it is often unclear what is cause and what is effect. A decision may not have an immediate result, yet quietly have a significant impact on the future. A machine is much better at separating these different factors and can also learn from complex patterns and relationships.

It seems as if humans are being outdone on every front by their own technology. Yet that does not mean we are hopelessly lost. After all, we never said it had to be a competition. It is precisely the combination of man and machine that is invincible.

Humans identify the problems; the machine finds the solutions. An AI system is (to date) extremely good at one or a few specific tasks. If you have several of those separate AI programs, you are in effect building an AI toolbox. Humans then decide in which situation to use which tool. More advanced tools ultimately lead to more advanced solutions. In short, we are as powerful as the tools we work with. So leave the dirty work to those algorithms.
