Software, more than most other things that are designed, tends to be designed by trial and error. That's because it's so easy to build a design and test it. Other engineers have to actually construct a prototype to test, so their time is better spent working out in advance whether the design is good enough.
This principle is responsible for the relative shoddiness of software.
It has been observed that this approach doesn't work for security purposes: there you're not concerned with how your design responds to specific, or even random, stimuli, but with whether some stimulus can be constructed that will cause a misbehaviour. This is the concept of Programming Satan's Computer, coined by Ross Anderson and Roger Needham.
But software isn't the only thing designed by trial and error. Any system that can evolve over time will basically be constrained by the requirement that it must appear to work. That constraint will keep most errors out, but not security flaws, just as conventional software testing keeps out most errors, but not security flaws.
There's an unrelated concept in security of the "speedbump". A speedbump is something that discourages people from doing something the designer doesn't want them to do, by forcing them to undertake some procedure which shows unambiguously that they are doing what they're not supposed to - breaking an easily-breakable lock, for instance. It doesn't actually stop them from being able to do it, but it stops them pretending - even to themselves - that they're not really doing anything they're not supposed to.
Putting these two concepts together: a real-world security process that prevents something virtually nobody really wants to do, and that evolves over time, will tend to end up as a speedbump. If it becomes less than a speedbump, it will no longer appear to work, so that won't happen. But because the speedbump deters casual attackers, and virtually all attackers are casual, it will appear to work.
The one kind of person who shows up this kind of security speedbump is the person who, usually under the influence of alcohol, is too oblivious to be deterred by it. Back in the 1991 Gulf War, a man I knew slightly walked into the Ministry of Defence in London, wandered round some corridors, went into a random office and asked in alcohol-slurred cockney, "What is this Gulf War all about then?". Similarly, via Schneier, there is the story of a drunk man climbing over the perimeter fence and boarding a plane at Raleigh-Durham International Airport.
The fence is supposed to stop people from being able to board aircraft without passing through the proper security channels. It appeared to work, but only because nobody wanted to do it badly enough to actually climb the fence. The fence is a speedbump: entirely effective, except against terrorists and eccentric drunks.
This speedbump phenomenon is not the same as "security theatre". Security theatre is generally a new measure introduced for show, which, while possibly effective against a narrowly-defined threat, is easily bypassed and not effective against a broader, more realistic range of threats. Speedbumps are more likely to be long-standing security measures, which are assumed, because of their long standing, to be working effectively.
The complaint is that if a decision is made that security must be improved, searching out and rectifying security speedbumps is likely to be less visible and obvious than installing new, showy security theatre, even though it could be much more productive.
Therefore we are dependent on the eccentric drunks finding our speedbumps.