
The most important question about the Iraq War, and therefore about the British decision to join the American invasion, is whether the disorder and violence that followed could and should have been foreseen.

As early as June 2002, the Ministry of Defense’s Strategic Planning Group described Iraq as “potentially fundamentally unstable.” By December 2002, a Ministry of Defense paper described the post-conflict phase of operations as “strategically decisive.” On January 15, 2003, two months before the invasion, Blair told his defense chiefs of staff “the ‘Issue’ was aftermath—the Coalition must prevent anarchy and internecine fighting breaking out.” Blair himself was most blunt with Bush, in a note on January 24, 2003: “They are perfectly capable, on previous form, of killing each other in large numbers.”

However, a British inquiry into the Iraq War led by John Chilcot found that “when the invasion began, the U.K. government was not in a position to conclude that satisfactory plans had been drawn up and preparations made to meet known post-conflict challenges and risks in Iraq and to mitigate the risk of strategic failure.”

Blair, in his response, said that “failures in American planning are well documented and accepted,” but continued: “I note nonetheless that the Inquiry fairly and honestly admit that they have not even after this passage of time been able to identify alternative approaches which would have guaranteed greater success.”

This is where his defense is weakest. The reason the Chilcot inquiry was unable to identify better approaches to planning for the aftermath of an invasion could be that no such approaches existed: the consequences of invading were likely to be bad, and foreseeably so.

One could contend that the danger of disorder and sectarian violence was not the main argument against military action at the time. Indeed, it was hardly part of the public debate. Robin Cook, the only cabinet minister to resign from the government before the invasion, opposed military action for other reasons: he was worried about the damage to international alliances and, in particular, about the weakness of support from the British public.

Not even cabinet ministers could have been expected to know enough about Iraq to be able to judge the likely effects of the invasion. And if they could not, other members of Parliament and members of the public certainly could not have been expected to do so. On the intelligence, Blair is entitled to say that the prewar debate had been open and public. He had sought to present what he knew, even if it later turned out to be wrong. But on planning for the aftermath, he failed to consider how badly it could turn out and, although some academic specialists publicly expressed their forebodings, most MPs and citizens were not in a position to assess the likely consequences. If a fraction of the intelligence effort devoted to weapons of mass destruction had been devoted to war-gaming the results of toppling Saddam Hussein, Iraq’s dictator, a better decision might have been reached.

If the lesson of the Chilcot inquiry is that leaders contemplating military action should imagine the worst-case scenario, Blair’s error was that he imagined the wrong one. For him, the worst case was that biological, chemical, and even nuclear weapons produced by rogue states such as Iraq, Libya, and North Korea would fall into the hands of al-Qaeda-inspired terrorists, who had killed nearly 3,000 people on September 11, 2001, but who would have killed 30,000 or 300,000 if they could—and that he had a chance to help stop them.

Contrary to conventional wisdom, political leaders often come to grief because of their principles rather than the lack of them.

Blair supported U.S. military action because he believed that Saddam was a latent threat to the British people; that getting rid of him would be good for the Iraqi people and Iraq’s neighbors; and that Britain should support the U.S., which stands, however imperfectly, for liberal democracy, and try to influence U.S. policy. He should have realized that these objectives were outweighed by the risk that the invasion would go badly.

Take David Cameron: had he not believed in Britain’s membership in the EU, he could have used the ambiguous conclusion of his renegotiation of its terms to argue for a Leave vote in the referendum, which he would have won comfortably. He might still be prime minister now. That this is even more unthinkable than Blair backing out of a joint U.S.-U.K. military operation only reveals how committed Cameron, who had once called himself a Eurosceptic, really was. Similarly, John Major was too attached to the European Exchange Rate Mechanism as the guarantee of the credibility of his counterinflation policy. He could have suspended the pound’s membership in the ERM before it was forced out in 1992. It would have been an embarrassment rather than a humiliation, and might have saved something of his and his party’s reputation. And Margaret Thatcher’s determination to protect home-owning taxpayers from the arbitrary demands of Labour councils was also too strong: she could have been prime minister for even longer if she had abandoned the poll tax.

In every case, as a result of their deep convictions, these prime ministers were boxed into a course of action from which escape had become inconceivable to them.
