
Ultra smart Nazi killbot slaughters mankind, boils kitten in acid

It will not have escaped anyone interested in military technologies or, more broadly, in technological dynamics, that the 2010s were the decade that put questions related to Artificial Intelligence (AI) on the agenda. They coincided with a series of debates linked to the use of force (particularly in counterterrorism and, more broadly, in counter-irregular warfare) as well as to drones. The “alignment of the planets” is almost perfect: behind the convergence between drones/robotics and AI lies the question of the “killer robot”. But what is hidden behind a designation that is more techno-folkloric (1) than academically relevant?

There is no question here of returning to the debates around the legal, ethical or even strategic implications of the question. It has generated a large volume of literature, books, monographs and articles, whether academic, popular or journalistic, and there is obviously no question of summarizing them here. Rather, the point is to ask about the very emergence of the term “killer robot”, hence the title of this post, which refers to the slightly mocking hashtag that both I and Carl von C. use on Twitter.

Obviously, the terminology itself says a lot. The robot is by definition not human, and “killer” has a connotation closer to the assassin than to the trained soldier controlling his use of force. The term is militant, and its trajectory is itself interesting. It is historically found in science fiction literature. In 1991 it seems to appear for the first time in academic and essay literature (Manuel DeLanda, War in the Age of Intelligent Machines, New York, Zone Books, 1991), before Richard G. Epstein’s 1997 book, which presented a series of short stories intended to feed ethical reflection on autonomy.

At the turn of the 2000s, the expression was only very marginally used in the field of strategic studies. It became more common during the second half of the decade, which corresponds to the growing use of armed drones (2). The term was institutionalized with the establishment of the Campaign to Stop Killer Robots in 2013. It then became widespread among a good number of NGOs (HRW has a “killer robots” page, for example) but also in academic literature, in particular in so-called critical studies, in resonance with the appearance of a veritable academic market around drones. The term spread all the more rapidly as it could be backed up by negative representations of out-of-control robotics, in particular through works of fiction – starting obviously with the eternal Terminator illustrating many press articles on the subject.

De facto, the drone becomes the “aerial figure of evil”, playing on understandable fears, on a blurring of technical and strategic categorizations, and on a process of disqualifying the consideration of technical factors – a process at work in particular in so-called critical studies: the famous critique of “expertise”, to which some oppose “academic research” (even though both should share the same methods). I came back to this in particular here, more specifically on the case of the drone and in the wake of the work of G. Chamayou. We may also note that the web page of the UN Office for Disarmament Affairs devoted to LAWS (Lethal Autonomous Weapon Systems) only refers to literature which, failing to include work in strategic studies, remains speculative.

So finally, why would this term be inappropriate, given that the term LAWS (Lethal Autonomous Weapon System) is politically recognized, at the United Nations as in several states? While this second term has more neutral connotations, it is nevertheless surprising. Autonomy refers first of all to a maximalist capacity for normative production: to be autonomous is to define one’s own rules. The robot, of course, cannot do this: on the one hand because it is the result of human-made programming, and on the other hand because, even in more advanced systems involving various statistical learning processes, the underlying algorithm is human-designed. The robot is itself designed for a task or a set of tasks; it does not have, at least so far, the potential versatility that a human has at birth. The risk is obviously that by talking about the “killer robot” carelessly and on a phantasmal basis, we miss much more concrete issues related to AI. It is a conceptual journey we began with this issue of DSI.

In particular, we saw that what is often qualified as “autonomy” is more often than not a complex series of automatisms. The growing complexity of weapon systems through the introduction of ever more automatic functions is indeed a historical trend, but one which, from the point of view of indiscriminate civilian losses, has without doubt already reached its peak with the nuclear-armed ballistic or cruise missile, particularly when equipped with decoys. More broadly, the automation of advanced functions was at the heart of the second offset strategy (3). It found applications as complex as the Aegis combat system and its various iterations. Designed in this case to cope with a saturation attack by Soviet anti-ship missiles, the system was not intended for an operational environment as complex as that of the Persian Gulf in 1988; combined with a Vincennes crew not trained to compensate, this led to the destruction of an Iran Air Airbus.
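
To make the distinction between automatism and autonomy concrete, here is a minimal, purely illustrative sketch in Python. Every name, threshold and category in it is hypothetical and bears no relation to any real system, Aegis included; the point is simply that each rule is fixed in advance by a human designer, and the machine can never rewrite them.

```python
from dataclasses import dataclass

# Purely illustrative: a chain of fixed automatisms, not "autonomy".
# Every threshold, rule and category below is hard-coded by a human
# designer; the system never produces rules of its own.

@dataclass
class Track:
    speed_mps: float     # measured speed, metres per second
    altitude_m: float    # measured altitude, metres
    iff_response: bool   # answered the Identification Friend-or-Foe query

def classify(track: Track) -> str:
    """Automatism 1: classification by fixed, human-chosen thresholds."""
    if track.iff_response:
        return "friendly"
    if track.speed_mps > 300 and track.altitude_m < 1000:
        return "hostile_missile_profile"
    return "unknown"

def engagement_decision(track: Track) -> str:
    """Automatism 2: a fixed decision table, applied mechanically."""
    if classify(track) == "hostile_missile_profile":
        return "recommend_engagement"   # a human rule, not the machine's
    return "continue_monitoring"

# The chain can be made arbitrarily long (detection, tracking,
# classification, prioritization, fire control) without the system ever
# becoming "autonomous" in the normative sense: it cannot rewrite
# classify() or its thresholds.
print(engagement_decision(Track(speed_mps=450.0, altitude_m=600.0, iff_response=False)))
```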

On what objective basis, then, do the authors using the concept of “killer robot” position themselves? The South Korean SGR-A1 is often mentioned, but as it stands it is a static, remotely operated turret positioned facing the demilitarized zone. It is not certain that it will ever be switched to “automatic fire” mode: that would imply a war between the two countries, in a context where the DMZ would undoubtedly be crushed under a large volume of firepower… The Uran-9 tracked armored vehicle, sometimes presented as a “killer robot”, is for its part only remote-controlled, like a good number of terrestrial robots also improperly presented as “killer robots”, such as the Dogo. The novelty in terms of automation lies above all, in this area, in the “follow me” function that can be used by logistics mules.

Loitering munitions or air-launched missiles may have a capability known as ATR (Automatic Target Recognition) or ATA (Automatic Target Acquisition), also referred to as “pixel targeting”, but the process is, in fact, one of comparison with previously encoded imagery. It is very far from being sophisticated enough to attack a particular individual (Edit of 01.21.2021: or to determine by itself a target that had not been previously encoded and therefore considered legitimate). Beyond the fact that systems with such a capability are rare, their commercial success is far from significant. The Brimstone air-to-surface missile has an ATA mode that can be used against armored vehicles – which British pilots were reluctant to use even in a Libyan desert empty of civilians – as does the AGM-84K SLAM-ER… so far very little used in combat and only sold to South Korea, in addition to the US Navy.
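
As a purely illustrative sketch of that logic, the comparison with previously encoded imagery can be pictured as off-the-shelf template matching, here in Python with OpenCV; the file names and the threshold are assumptions for the example, not a description of any actual seeker.

```python
import cv2

# Illustrative sketch: "recognition" as correlation of sensor imagery
# against a previously encoded reference, not an open-ended reading of
# the scene. File names and threshold are hypothetical.

scene = cv2.imread("sensor_frame.png", cv2.IMREAD_GRAYSCALE)       # current sensor image
template = cv2.imread("encoded_target.png", cv2.IMREAD_GRAYSCALE)  # pre-mission reference
if scene is None or template is None:
    raise FileNotFoundError("example images not found")

# Normalized cross-correlation: slide the reference over the scene and
# score the resemblance at every position.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_location = cv2.minMaxLoc(scores)

MATCH_THRESHOLD = 0.8  # arbitrary for the example; real systems tune this carefully
if best_score >= MATCH_THRESHOLD:
    print(f"Encoded pattern found at {best_location} (score {best_score:.2f})")
else:
    # Anything not encoded beforehand simply does not exist for the seeker.
    print("No match against the encoded reference")
```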

In fine, why the “killer robot”? The finest semantic-academic hold-up of the century is first of all the fruit of a militant operation that rests on categorical confusion and a series of psychological levers, and that plays on deficits in scientific culture. This is obviously not open to criticism in itself, but it ignores real questions – starting with those, admittedly less “glamorous”, linked to targeting processes and the creation of rules of engagement – and runs the risk of a wholesale rejection of robotics, while the complexity of techno-capability debates undoubtedly calls for a little more nuance.

(1) Joseph Henrotin, “Terminator, Uzi Makers Shootin’ up Hollywood. Techno-folklore and technological aberrations”, DSI HS n° 75, December 2020-January 2021.

(2) Grégory Boutherin, “Anti-drone movements. Birth of a campaign against remotely operated systems”, DSI n° 81, May 2012.

(3) Joseph Henrotin, “The third offset, networks and war in the future perfect”, DSI n° 123, May-June 2016.

(Edit of 13.01.2021: typo corrected concerning the function of the Brimstone)

