Sunday, March 20, 2005


Machination, n. The method employed by one's opponents in baffling one's open and honorable efforts to do the right thing.

My last post on the subject of "How To Save The Internet", from CIO Magazine. Cue the sighs of relief from the readership.

A blinding glimpse of the obvious here -

Catch Some Bad Guys
Time and again, security types bemoan the light sentences hackers get. If the penalties were harsher, perhaps people wouldn't be so fast to spread their malicious code.


But penalty is not a deterrent; arrest is. Right now, the bad guys know the risk equation is favorable—that it's extremely unlikely they will be caught. A higher capture rate would dissuade them.


Creating higher capture rates has a lot to do with anonymity on the network—or, more specifically, removing it. Many of the Big Ideas in this space propose less anonymity—licensure, for example. Microsoft's Charney wonders what effect automatic traceback packets—knowing quickly and reliably where data came from—would have. "It's an astounding thought," he says.


And then, he immediately comes up with the problems it presents. Traceback tells you where, not who. And privacy issues get thorny quickly. "Can you use the highway anonymously?" Charney asks. "No. But you also can't be stopped for no reason. More complicated than that, the Supreme Court has already ruled that you can't force someone to attach their name to political speech if they don't want to. So do you create an anonymous part of the Internet to ensure free speech? And if so, what stops bad guys from just using that?"


Still, if privacy issues could be worked out, and capture rates went up, attempted attacks would go down.
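
That risk equation is worth spelling out, because it turns the conclusion into arithmetic rather than opinion. A back-of-the-envelope sketch in Python (the probabilities and sentences are invented for illustration, not drawn from any real enforcement data):

# The "risk equation" as the attacker sees it:
# expected cost = P(capture) x penalty.

def expected_cost(p_capture: float, penalty_years: float) -> float:
    """Expected sentence, in years, facing a would-be attacker."""
    return p_capture * penalty_years

# Doubling the penalty while capture stays rare barely moves the needle...
print(expected_cost(0.01, 5))    # 0.05 expected years
print(expected_cost(0.01, 10))   # 0.10 expected years

# ...while a plausible jump in the capture rate swamps both.
print(expected_cost(0.25, 5))    # 1.25 expected years

Penalties can only be ratcheted up so far before they stop being credible; the capture rate has far more room to move, which is exactly the article's point that arrest, not penalty, is the deterrent.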


Pas de merde, Sherlock. Although we're dealing with a higher order of intelligence here than your average criminal, these people, like most criminals, believe they are immune to any legal retribution, and that if they are caught they'll get a slap on the wrist. Mitnick was of course the exception to the rule, but even Mitnick's sentence wasn't terribly odious in the grand scheme of things (unless you're in a blue state, where you usually get a slap on the wrist for homicide, but kill a snail darter or some other minor component of the food chain and the liberals will be googling for the proper way to tie a hangman's knot).

The problem, of course, is that a successful prosecution entails presenting evidence to a jury, and the vast majority of jury pools are too stupid (let's be blunt about it) to deal with a case involving technology. (Rather interestingly, the last time I was summoned for jury duty I found myself in a small kaffeeklatsch with a couple of other tired-looking types who turned out to be IT consultants as well; needless to say, we would probably be proffered to jury pools involving auto accidents.) I'm not arguing that there should be professional juries (as apparently there are in some places in Europe), far from it. But any prosecution of this type should require a highly literate presentation of the facts, so that the jury can make a fair assessment of the events in question, and if a juror happens to be somewhat literate in the discipline, that person should not be excludable through a challenge for cause (nothing we can do about peremptory challenges, though...). As I noted before, the only thing that's going to stop spammers is the threat of sure, severe punishment, and the same goes for cybercriminals.
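
Back to Charney's traceback musings for a moment: the reason traceback is a Big Idea at all is that the network hands you nothing trustworthy to trace. The source address in an IPv4 header is just a field the sender fills in, and nothing in the protocol verifies it. A minimal sketch in Python (purely illustrative) of how cheap it is to claim to be someone else:

import socket
import struct

def build_ipv4_header(src_ip: str, dst_ip: str) -> bytes:
    """Build a minimal 20-byte IPv4 header. The source address is
    whatever the sender chooses to write into bytes 12-15."""
    version_ihl = (4 << 4) | 5            # IPv4, 5 x 32-bit words = 20 bytes
    tos = 0
    total_length = 20                     # header only, no payload here
    identification = 0
    flags_fragment = 0
    ttl = 64
    protocol = socket.IPPROTO_TCP
    checksum = 0                          # recomputed in transit anyway
    src = socket.inet_aton(src_ip)        # the sender picks this freely
    dst = socket.inet_aton(dst_ip)
    return struct.pack('!BBHHHBBH4s4s', version_ihl, tos, total_length,
                       identification, flags_fragment, ttl, protocol,
                       checksum, src, dst)

# A receiver parsing the header sees whatever source the sender claimed.
header = build_ipv4_header('198.51.100.7', '192.0.2.1')   # forged source
print(socket.inet_ntoa(header[12:16]))   # 198.51.100.7, "where" only if you trust it

Any real traceback scheme has to reconstruct the path hop by hop from the routers themselves, precisely because this self-reported field can't be trusted; that's where both the engineering and the privacy thorns come from.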

Another glimpse of the obvious here -
Dictate What Software Shouldn't Do
Specs rule the development process. They dictate what a new software application should do, yet they rarely include what an application shouldn't do—like run code by itself or allow anonymous access or allow the destruction of data because of bugs. What if, from now on, all specs documents were required to include antirequirements, such as a laundry list of common features, potential unintended consequences and bugs that the application must actively eliminate from occurring before the product ships?

It's absolutely true that specs rule the development process. However, detailing every single "do no harm" scenario in an app dev context is thoroughly impractical unless the application is sandboxed, and most real-world stuff isn't going to be sandboxed. It's the job of the specification writers and reviewers to make sure the spec is detailed down to the nth level, and woe betide them if they miss something. It's an organizational thing as well: the people who will test and verify the application have to be involved from the get-go in order to determine which behaviors are quantifiable. And the business funding development and project management will often impose unreasonable timelines on these projects, leading to incomplete specs (or burnt-out teams drawing them up) and the ensuing project trainwrecks.
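
For what it's worth, the most practical rendering of an antirequirement I can think of is an automated test that must pass before the product ships: the "shall not" clause becomes an enforceable artifact instead of wishful prose in the spec. A minimal sketch (the upload_handler application here is hypothetical, a stand-in to show the shape of the check):

import unittest
from dataclasses import dataclass

@dataclass
class Response:
    status: int
    executed: bool = False

def upload_handler(user, payload: bytes) -> Response:
    """Toy stand-in for the application under test: refuse anonymous
    callers, store uploads as inert bytes, never run them."""
    if user is None:
        return Response(status=403)
    return Response(status=200, executed=False)

class AntiRequirements(unittest.TestCase):
    """Each test encodes something the app must NOT do."""

    def test_rejects_anonymous_access(self):
        # Antirequirement: no anonymous access, ever.
        self.assertEqual(upload_handler(None, b'data').status, 403)

    def test_never_executes_uploaded_content(self):
        # Antirequirement: uploaded bytes must never be executed.
        resp = upload_handler('alice', b'#!/bin/sh\nrm -rf /')
        self.assertFalse(resp.executed)

if __name__ == '__main__':
    unittest.main()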

There are a couple of calls in the article for initiatives along the lines of the "Big Dig" and the Manhattan Project. The Big Dig is a pretty poor example, in that there are still lingering project issues that need to be resolved, and the internal controls established by the project were notoriously faulty. The Manhattan Project and, for that matter, the Apollo program were vast efforts designed to get to a well-defined deliverable (a big boom, or landing a man on the moon and getting him home in one piece). Internet security is not a well-defined deliverable; it is a concept with many components, some of which might (and I emphasize might) be achievable through projects of this sort. As an overall goal, though, it's a touch amorphous, and unless there are quantifiable business benefits that ultimately result in a positive cash flow, no one in the private sector is going to fund it.

(Yup, all excerpts from the article are quoted verbatim only for fair use purposes, and are the property of the copyright holders. I gratefully acknowledge their courtesy in providing the material for public examination and comment).
