January 3, 2021

War in the Uncanny Valley

Don’t be scared of the wrong robots.

By now most people will have seen the latest Boston Dynamics video. The reactions have been pretty much the same as with every other Boston Dynamics video, all the way back to the original BigDog: these horrible machines are going to kill us all.

But despite Boston Dynamics' best efforts to sell these robots to the military (and more recently, to various police forces), the reality remains that they are not suitable as weapons platforms, and probably won't be for the foreseeable future. The autonomy simply isn't there, remote operation is much harder than it seems, and that's to say nothing of the operational capability and logistics chains required to keep an Atlas (or even a Spot Mini) running in the field.

Nevertheless, the fear and disgust many people feel when they see these robots move fluidly*, as if alive (even though they move through fixed, programmed loops in a completely controlled environment), is very real. It's the uncanny valley effect: the instinctive revulsion we feel toward something that mimics the natural world (especially human features) without quite succeeding.

This points to a different military application for these and other robots: tools of psychological warfare.


In the imagined parallel future of Ashley Wood's World War Robot, autonomous (if temperamental) robots are an organic part of (inter-planetary) warfare. They fill every niche alongside human troops, but one type in particular has always fascinated me. Meet the square:

A small robot designed for intelligence gathering and reconnaissance, it has a special set of features designed to improve its survivability:

The Square attempts to disarm all who come across it with sickly, childish features combined with specially designed audio emissions of phrases such as: "Why can't we just get along?" and "Come out and play", all with the tone of a small child.

The square ratchets the uncanny valley effect all the way up, seeking to scramble the limbic system and delay any hostile response. It uses the uncanny valley as a form of emotional camouflage, to achieve a defensive edge.

The fictional square is very sophisticated, and far beyond anyone's current technical ability to fully realize. However, there are other militarily useful ways to apply the uncanny valley principle in the context of existing weapons systems.


It's not often recognized that autonomous weapons systems are already a standard part of most modern militaries' arsenals. Land mines require no human operator after deployment (and can often be deployed rapidly by remote means), and they are very effective both for passive area denial and, importantly, as tools of psychological warfare.

While the basic concept is crude almost to the point of anachronism, research into more technically sophisticated concepts for land mine use and deployment is ongoing. Notably, concepts such as the self-healing minefield aim to make these weapons more effective by increasing their autonomy still further.
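The core idea behind the self-healing minefield is simple: when a mine is cleared, its neighbors reposition themselves to close the resulting gap. A toy sketch of that behavior, in which surviving mines hop a bounded distance toward evenly spaced positions along the original perimeter (the function name, hop limit, and one-dimensional layout are all illustrative assumptions, not a real system):

```python
def heal(positions, cleared_index, max_hop=2.0):
    """Toy self-healing step: drop the cleared mine, then let survivors
    hop (at most max_hop each) toward evenly spaced target slots so no
    single gap dominates. Positions are sorted points on a 1-D perimeter."""
    survivors = [p for i, p in enumerate(positions) if i != cleared_index]
    if len(survivors) < 2:
        return survivors
    lo, hi = positions[0], positions[-1]  # hold the original span
    n = len(survivors)
    targets = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    # each survivor moves a clipped step toward its target slot
    return [p + max(-max_hop, min(max_hop, t - p))
            for p, t in zip(survivors, targets)]

field = [0.0, 2.0, 4.0, 6.0, 8.0]
healed = heal(field, cleared_index=2)   # the mine at 4.0 is cleared
gaps = [b - a for a, b in zip(healed, healed[1:])]
```

Clearing the middle mine initially opens a gap of 4.0 in the line; after one healing step the survivors have redistributed so that the largest gap shrinks to roughly 2.7, which is the defensive point of the concept: a cleared lane does not stay cleared.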

One can imagine a square designed not for reconnaissance, but as a bomb: a minefield from the uncanny valley. Such weapons wouldn't need anything like general autonomy to be effective; like mines, they would only need to lie dormant until it was time to activate.

What would be the effect on troop psychology when faced with mines that look and move just like animals? Like children? Mines that call out to them in their own language, with unique, deepfake-synthesized voices? What associations would form in soldiers' minds once any animal or child might be a bomb? And what would footage of soldiers shooting at child-mines look like to the folks at home?

What kind of filters will be built into soldiers' AR smartgoggles to screen it all out? And how will the uncanny valley bots counter those? This version of the future gets weird fast, folks.


I don't believe the uncanny valley minefield is an especially practical weapon. For one thing, mines are effective because they are very cheap to produce and deploy; uncanny valley mines, even if technically feasible, would be far more expensive. And the nightmare of fully autonomous, Gatling-gun-wielding humanoid killbots isn't feasible at any price.

But autonomy will certainly have a role to play on future battlefields and in future wars. Just as with nuclear weapons, it will be national and international norms that govern how such weaponry is deployed. It's important that we do not get lost in science fiction visions of that future as we figure out what those norms might be.

If all you're looking for is the obvious killbots, someone is going to sneak a square right past you, and you'll be too scrambled to notice. Please be more imaginative with your nightmare visions.


* Boston Dynamics robots move that way because they use a control technique called underactuated locomotion; you can learn all about it for yourself for free.