Attention creepy trench: Uncanny Valley in e-Learning
Robots and animated characters are part of our everyday lives. But if such artificial figures look too human, they meet with rejection: they risk falling into the Uncanny Valley. This effect also plays an important role in e-learning.
Whether it’s WALL-E, the Tin Man from “The Wizard of Oz,” or the NS-5 robots that give Will Smith a hard time in “I, Robot”: it’s hard to imagine film and television without robots. However, not all robots evoke the same emotions in viewers. While the first two from our small selection are perceived as friendly, clumsy, and lovable creatures, the robots from “I, Robot” tend to trigger unease. The reason is that their appearance and behavior are too similar to those of humans. This effect, the rejection of artificial figures that look too human-like, is called the Uncanny Valley, or acceptance gap. We will go into it in more detail in a moment, but first we would like to illustrate this seemingly paradoxical effect with an example from the world of animated film.

Some of you probably know the movie “The Polar Express.” The characters in this Christmas film were meant to delight moviegoers but were widely rejected. The reason: they looked too human-like. Animated films such as “Frozen” or “Shrek,” on the other hand, which also feature animated people, pose no risk of sending viewers into the Uncanny Valley. Their figures are deliberately exaggerated and have comic-like proportions. Unlike near-photorealistic human figures, people accept them and in many cases even love them. It is similar with talking animals or animated objects. But why is that? Why are talking fish not a problem while human-like robots are?
Beyond a near-realistic appearance, movement also matters: how a figure moves affects its acceptance. Hollywood blockbusters illustrate this, with actors covered in motion-capture sensors performing in green-screen studios so that movements can be reproduced as realistically as possible in the later 3D rendering.
The Uncanny Valley in detail
The “Uncanny Valley” (or “creepy trench”) was described by the Japanese roboticist Masahiro Mori in 1970. The effect states that the acceptance of artificial figures such as robots or animated characters depends on their degree of human likeness. The following graphic should serve for a better understanding:
As the graphic shows, human likeness and the trust that people place in artificial figures are related. Industrial robots, for example, which consist largely of gripper arms and conveyor belts, inspire little confidence. A certain human resemblance is therefore necessary if artificial figures are to be accepted. If robots, stuffed animals, or animated characters are given human-like (physiognomic or psychological) traits, people’s trust increases; think of care robots with googly eyes, such as those already in use in retirement homes. However, this increase in trust through human likeness only works up to a point. If the figure becomes too human-like, trust and acceptance drop sharply, which is why the Uncanny Valley is also called the acceptance gap. Such figures cause discomfort and meet with rejection.

That the Uncanny Valley exists is relatively undisputed, but researchers disagree about why it arises. One possible explanation comes from psychology: it is assumed that we either perceive artificial figures as following their own rules or file them in the mental category “human.” The former happens with animated animals that have human traits or with cute, cartoonish robots: we accept that these figures obey their own laws and do not question everything about them. If an artificial figure resembles a human too closely, however, we measure it by human standards. Since such figures are never 100% human-like and therefore always show deficits in non-verbal behavior (unnatural facial expressions, movements, etc.), we instinctively reject them.
The Uncanny Valley in the digital space
We all know and use chatbots, chat programs that simulate communication through algorithms. Almost all major companies offer such chatbots as an addition to their customer service, and these virtual helpers have established themselves quite well in recent years. The reason: they don’t even pretend to be human. If these chat programs instead tried to make us believe that a human being was sitting on the other side chatting with us, there would be a relatively high chance of an acceptance gap opening up after a response perceived as inappropriate or less than human. A similar problem arises with voice assistants such as Siri, Alexa & Co.: a voice that sounds human-like but is still “tinny” enough to be recognized as artificial tends to be accepted more readily than one that cannot be distinguished from a real human voice.
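The honesty principle described above can be sketched in a few lines of code. The following is a purely illustrative, minimal rule-based bot (the keywords, replies, and function names are invented for this example, not taken from any real product): it answers known keywords and, crucially, falls back to openly identifying itself as a bot instead of faking human small talk.

```python
# Minimal illustrative sketch of a rule-based chatbot.
# All keywords and replies are hypothetical examples.
RULES = {
    "opening hours": "We are open Mon-Fri, 9:00-17:00.",
    "price": "Our basic plan starts at EUR 19 per month.",
}

def reply(message: str) -> str:
    """Return a canned answer for a known keyword, or an honest fallback."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    # Honest fallback: the bot admits what it is rather than
    # pretending to be a human who "didn't quite catch that".
    return "I'm a bot and didn't understand that. A human colleague can help you."

print(reply("What are your opening hours?"))
```

Because the bot never claims to be human, a clumsy answer reads as a tool's limitation rather than as uncanny behavior, which is exactly why such assistants avoid the acceptance gap.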
An Uncanny Valley also looms in e-learning
e-learning that works enables learners to have realistic learning experiences, a discipline known as learning experience design. These experiences should be based on the actual needs of the learners and immerse them in a gripping story. From years of experience in the e-learning field, we know that e-learning is better received when these stories are built with photos or videos of real people. An implementation with human-like animated or comic characters is also possible, but it always carries a certain risk of an Uncanny Valley.
To avoid an Uncanny Valley in e-learning, we pay special attention to script planning. Once the e-learning script is ready, we run through it together with our customers. If we discover potential acceptance gaps during these role plays, we close them before the e-learning goes live. This ensures a coherent learning experience that offers learners added value and is guaranteed not to send them into the creepy trench.