New weapons are constantly being invented. The basic principles of international humanitarian law (IHL) regulating them are not controversial. All states agree that weapons that are inherently indiscriminate, or that cause superfluous injury or unnecessary suffering, may not be used. The difficulty arises in applying these principles to actual weapons systems. With the exception of the prohibition of exploding bullets in 1868, when the government whose scientists had invented the new bullet called a conference to ban it, (1) all governments defend their new developments almost as a matter of principle. Unless a new system clearly falls within an existing treaty ban (e.g., an obviously lethal chemical weapon), enormous efforts are required to get governments seriously to consider whether a weapon should be banned. This author has had personal experience of this in the effort to ban blinding laser weapons, (2) and everyone is aware of the major public media campaigns that preceded the adoption of the treaties banning anti-personnel landmines (3) and cluster munitions. (4) So far, however, the weapons under consideration could be evaluated in light of IHL principles, even if for the most part these principles were not overtly the basis of the treaty negotiations.
The emergence of cyber warfare and robots (including drones), however, poses a different type of challenge. This presentation will argue that such systems might be capable of respecting the basic principles of IHL but nevertheless seriously undermine international law. In this regard the basic principles of international law, in particular the post-Charter grundnorm of international peace and security, need to be revisited in the analysis of the future of these new technologies.
IHL, CYBER WARFARE, AND ROBOTICS
There is no obvious reason why cyber warfare should of itself violate the basic principles of IHL. All will depend on whether attacks are directed at military objectives, whether precautions have been taken to avoid collateral effects that are disproportionate or whose extent is unpredictable, and whether such attacks are perfidious. A particular program that can only attack civilian or protected sites, or that is by nature indiscriminate, would fall foul of these rules; but it is difficult to imagine a conference convened to ban specific computer programs, although this might not be completely impossible. A more complex question is the status of those undertaking the attacks if they are not members of the military. However, this question is not qualitatively different from the basic issue of how to interpret "taking a direct part in hostilities." The most difficult issue may well be identifying the source of an attack. This could easily lead to flawed counter-attacks, but it would not violate IHL if the analysis of the source of the attack were undertaken in good faith. Rather, the more serious problem is the effect of this on the prohibition of inter-state conflict (which will be analyzed further in sections 3-5 below).
For the purpose of my remarks, robots are systems that are programmed to undertake missions with the user at a distance. To some degree, earlier technologies such as cruise missiles and even mines fall into this category. However, robots are usually associated with a greater degree of autonomy and sophistication. Drones are the robots already in most extensive use, and dozens of states now have armed drones in their inventory. The argument is commonly made that the operator of a drone can take more care in accurately choosing military objectives and avoiding collateral casualties because he or she is not subject to the stress experienced by fighter pilots. (5) However, such attacks are not immune from the usual problems associated with conflicts conducted entirely or primarily by air warfare: faulty intelligence leading to mistakes, a problem exacerbated by the covert nature of many drone strikes; and the inability to accept surrender and to search for and care for casualties. …