Abstract
In recent years, the development of autonomous weapon systems and so-called ‘killer robots’ has raised serious legal and ethical concerns in the international community, including questions of compliance with International Humanitarian Law and the Laws of Armed Conflict. At the same time, governments and military services hope to develop game-changing technologies that are ‘better, faster and cheaper’. In this paper, I show how different and competing regimes of justification shape the technopolitical controversy over, and the risk management of, autonomous weapon systems. The central point of contention is the transfer of decision authority and the attribution of responsibility in cooperative networks of humans and machines with autonomous functions. In response to the ‘legal irritation’ posed by such hybrid networks, a new type of ‘hybrid law’ has emerged, mediating between different regimes of justification and risk management in contemporary conflicts.