Wednesday, May 25, 2005
Lies, damned lies, and benchmarks
I happened to get myself sucked into a meeting yesterday that had to do with a new paradigm for delivering information to our customers. The paradigm was interesting (and I must admit the data model was a hell of a lot better thought out than the existing ones I've seen out there), but a couple of things really caused me to raise my eyebrows. The first was a totally outrageous performance claim: the rocket scientists presenting were claiming a sustained throughput of over 1 gigabit per second over a single NIC on a Wintel server. Correct me if I'm wrong, but won't bus limitations throttle that performance? I don't care how good a Wintel server you've got; I really don't think any PCI-based box is going to give you sustained 1 Gbps throughput. I can easily see 700 Mbps, maybe 800 if you push it, but filling the pipe? I don't think so.
The second claim was rather interesting. Since the performance discussion started out talking about Wintel boxes, I asked how the code performed under Linux (supposedly the code is pure ANSI C or C++, depending on who you talk to; performance is critical, so the only Java going anywhere near this thing is in the management interface). I was very surprised to hear that the developers were getting better results compiling to a straight EXE with Microsoft's Visual Studio compiler than with any compiler under Linux (I would've expected some variance in performance between gcc and the commercial compilers under Linux, but then again, gcc is the de facto standard). It's always been my experience that Visual Studio will link everything in creation into the final EXE (I mean, why do you need to link in half of MFC for a character-based program?). I also got the rather surprising viewpoint that Red Hat was the preferred Linux environment for running this product (the wind has been blowing towards SuSE from a professional and management standpoint in my neck of the woods).
The third claim was one where I have to call BS. This particular solution involves delivery of external information to our clients, who are generally very risk averse and have tons of internal auditors and risk managers looking at ways to prevent any sort of nasties from entering through supposedly trusted channels. The usual method here is of course a couple of layers of SPI firewalls plus intrusion detection; perhaps not as heavily fortified as a straight Internet connection, but fortified nonetheless. The presenter yesterday claimed that my esteemed client from last year, Colditz, felt that this was a trusted solution and did not need any firewalling (needless to say, firewalling would take their vaunted performance numbers to the proverbial toidy). Now, it just so happens that I've got tons of docs and standards from Colditz and our work products from there, and I've got a very nice PDF file that specifies exactly how that external connectivity should be provisioned, and of course the word firewall is mentioned about 853 times in the document. The only difference is that Colditz has migrated to Nokia firewall appliances in the interim instead of vanilla PIXes - the standards and rulesets are still the same. The presenter called BS on that. I called a risk manager friend over at Colditz. He said, "ain't gonna happen without a firewall, tell him he's full of it". It was an interesting conversation. When I asked who they were talking to in the engineering group over at Colditz, they refused to give me a name. I dropped a couple of names I'd dealt with over there (the CTO and CIO's office, plus the head of engineering) and offered to call them. Storm clouds quickly descended.....
Only one conference call today, so I should count myself lucky. Heading out to pick up the Brian Wilson Smile DVD this AM, review tomorrow.