Milking Tucows: The Udder Truth About Cryptographic Software Reviews


Contrary to what your perception might coax you into accepting, security product reviews often have little, and in many cases nothing, to do with security at all. General beta-testing criteria work nicely for various types of software, but cryptographic software isn’t one of those types; it’s this article’s duty to inform you of what you can conclude from software reviews, and what you can’t.


“Caveat emptor” – let the buyer beware. But what if the buyer isn’t aware?


I’ll admit it – evaluating cryptographic software isn’t part of my repertoire, but because I’m an academic practitioner of cryptography, acquaintances of mine often ask, “Hey Justin,” or JT, depending on how well they know me, “is this encryption software any good?” They have a retort poised and rearing, just in case my opinion doesn’t reinforce their hope that it is good; that retort goes something like, “Well, it had <insert_one_through_five_cows>!” After hearing this enough, I switched into Sherlock Holmes mode and investigated what was behind this seemingly holy bovine mantra of a security rating.


Because of Tucows’ prominence on the web as a software-reviewing outlet, it was only natural that they assume the role of the figurative guinea pig, for the sake of a catchy title; this opinionated analysis of mine extends beyond their rating system to reviews found in rather popular publications, such as SC Magazine. Nor is this just a software issue – I’ve seen it occur with hardware reviews as well. In general, the problem with these criteria-based evaluation systems is their ostensible nature; that is, when you see a high rating, you might be inclined to conclude that the product is good at what it does, when that isn’t necessarily true. As such, these ratings are quite limited and unreliable as a substitute for security analysis. Ironically, you’ll find that information security-centric publications often provide ratings based on no criterion related to security. It’s not that they always fail to provide useful results; it’s that a host of folks misread these reviews as vouchers for security. To clarify, let’s picture software as the stage for some act; in other words, there is some act we wish to perform, and the application is our stage. For example, the keys I’m currently typing on aren’t, of themselves, the act of typing; it is through them, however, that I engage in the act of typing.


The problem with these criteria-based evaluation systems is that they usually tell us something about the application as an interface, but nothing about the internal act it is built to perform; this is what happens when cryptographic software is evaluated and handed a numerical rating. It’s one thing for this keyboard to look stylish and cater to the comfort of my fingertips, but that doesn’t amount to much if the keys don’t actually register. A cryptographic implementation is among the most analytically demanding of such acts. By the criteria these media use, cryptographic software is judged on everything but its cryptographic security. This shouldn’t be as shocking as it may sound at first. After all, if you have no criterion for evaluating security, how can you expect to conclude anything about the security of the software being evaluated? With that in mind, what does that rating actually correspond to? The answer: veneer.


How usable is this veneer? What features does this veneer offer? How thoroughly documented are those features? How prompt and informative are those who support the veneer? Overall, what is the trade-off between what I’m getting and what I’m paying, veneer-wise? These are the types of questions the criteria ask, and on a good day, an evaluation will provide consumers with a useful rating with which to weigh, with a grain of salt, the accessibility of the product’s metaphorical “clothes.” In some scenarios unrelated to security, this rating may suffice, but when applied to security, it fails miserably, because you can’t beta-test for security with these criteria. Don’t get me wrong; there’s nothing wrong with a friendly interface. I’m of the opinion that useful cryptography is transparent cryptography, which is yet another Good Thing(TM). On the other hand, as a security consumer myself, I’m not paying for a facade – I’m paying for security. First and foremost, I’m concerned with what lies beneath the veneer, and whether they realize it or not, so is the layman consumer, which is why my word of advice to them is, “If you’re seeking a security solution, then security must be what you’re after. Having said that, do not mistake five cows for indicators of security.”


Proper functionality does not equal proper security


Here’s an important rule to keep in mind: proper functionality does not equal proper security. Cryptographic software may very well work, but that doesn’t mean it works securely. No matter how thoroughly you evaluate its functionality, that tells you nothing about the underlying security. When you hand over your hard-earned banknotes, you’ve just become part of the inevitable security trade-off. You want security, and the security vendor wants your business. This mutual exchange consists of your cash for their creation, and there’s no doubt that you want to feel the trade-off was equitable. Don’t make the mistake of basing that worth on the false assumption that security is a commodity that can be measured by a rating; it is an ongoing process. Keep in mind that not only do these criteria lack any cryptographic attributes, but the reviewers are usually not cryptographers. At best, then, they are rating an interface. Given certain criteria, they rate the consumer-to-interface experience. Nothing more. Potentially nothing less. I’ve seen decent products with low ratings, and deplorable products with high ratings. It’s a veneer thing – not a cryptographic thing. That’s right, folks: you just might be placing your bets on a show horse with a lot of hype but little heart.
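To make the rule concrete, here’s a minimal sketch, in Python with the third-party “cryptography” package, of the gap between a functionality check and a security check. The ECB-mode cipher, the function names, and the test message below are hypothetical stand-ins of my own, not any vendor’s product; the point is that the round trip a reviewer might test passes with flying colors, while the construction leaks the very structure it’s supposed to hide.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)  # 128-bit AES key

    def encrypt_ecb(plaintext: bytes) -> bytes:
        # ECB mode encrypts each 16-byte block independently -- the classic
        # "works, but isn't secure" construction.
        encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return encryptor.update(plaintext) + encryptor.finalize()

    def decrypt_ecb(ciphertext: bytes) -> bytes:
        decryptor = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
        return decryptor.update(ciphertext) + decryptor.finalize()

    # The functionality test a review might run: does a round trip succeed?
    message = b"ATTACK AT DAWN!!" * 4  # four identical 16-byte blocks
    assert decrypt_ecb(encrypt_ecb(message)) == message  # passes; it "works"

    # The security property the review never checks: identical plaintext
    # blocks yield identical ciphertext blocks, so message structure leaks.
    ciphertext = encrypt_ecb(message)
    blocks = {ciphertext[i:i + 16] for i in range(0, len(ciphertext), 16)}
    print(len(blocks))  # prints 1 -- all four ciphertext blocks match

A five-cow interface could sit comfortably on top of exactly this sort of construction, and nothing in the consumer-to-interface experience would ever reveal it.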


When it comes to security, I’m not one to shy away from taut, iconoclastic analyses. The implications here are hazardous, because we’re dealing with a classic case of “e-judging” a book by its cover. Furthermore, when security-oriented publications feature such ratings, it throws another supportive log on the flaming alibi of, “Well, such-and-such magazine gave it a great rating, so it must be good!” As consumers, we delegate. Perhaps somewhere there’s someone who knows enough about everything to make decisions without consulting, or relying on, any other source, but until I meet that person, I’m content with the opinion that we all entrust some sort of security to some other source. Imagine if automobile safety were evaluated based solely on the exterior aesthetics of the vehicle, the interior features and decor, or general functionality (i.e., the door opens and the car starts, so it’s safe). It’s as farcical as it sounds. Most of us rely on automobile experts to determine the safety of vehicles, and this translates to other industries as well: pharmaceutical, architectural, et cetera.


It’s not as if most of us, hopefully, would rely on a general marketing review in Car and Driver to deduce such safety results. Given that most folks would readily agree with that philosophy, why would we deduce security results from a general marketing review in SC Magazine or on Tucows? Yet security dilettantes turn to exactly such resources, which is why it’s crucial to draw a distinction between a security analysis and a general marketing review. The sloppy part is that the two may render entirely different results. Remember my comment about useful cryptography being transparent cryptography? Consumers don’t know cryptography, but they do know usability, and their subjective decisions will largely be made out of personal convenience. Why? They’ve already delegated the security aspect of their decision to an expert security resource; they can take care of the usability aspect on their own. This is an open invitation for insecure products dipped in a layer of point-and-click chocolate. It’s akin to a Styrofoam castle, thinly plated in aluminum: from a distance it may appear intimidating, but kiss your castle goodbye if Hurricane Adversary is in the forecast, and in security, the forecast can be just as uncertain as in meteorology.


Cryptographic education, the uselessness of security-less reviews, and closing remarks


Throughout the history of computer security, evidence has shown that it is an esoteric subject, and a process that many get wrong and only a minuscule handful get right, relatively speaking. There’s a monumental misunderstanding of it, and without any doubt in my mind, education is the long-term, yet appropriate, way to tackle the problem. It’s out of responsibility that the security community calls attention to these misinterpreted resources. This leads me to my closing remark, which should be a lucid axiom: general reviews of security products – specifically cryptographic products, in the case of this article – are not useful to consumers for drawing conclusions about security; they are, at best, evaluations of the product’s interface and the consumer experience to be expected. The results of these reviews are independent of the actual security of the cryptographic product; in other words, the product’s security or insecurity has no bearing on the rating it’s given. Wait a minute – isn’t that what we look for first? I know it’s a rhetorical question, but it’s an important one.


It’s a sloppy predicament for consumers, because these misleading reviews often appear in the very security publications in which their faith is placed. The consumer’s assumption that “surely only good security products would be worthy of the ink lying upon the pages of a respected security magazine” can lessen the effectiveness of useful advice, such as that found in Matt Curtin’s Snake Oil FAQ or the whistle-blowing demeanor of Bruce Schneier’s “The Doghouse” reports. You have product designers with good intentions but without a lick of cryptographic sense; then you have the self-apple-polishing “consumivores” (pardon the portmanteau) who slap on the veneer so thickly that you’d think they were giving you the NSA, compressed and packaged in cellophane. Worse still, it can be quite difficult for the average Joe McParanoid to differentiate between the good and the bad when it all floats in tandem in the Everglades of commercial cryptographic products. On a bad day, both often look the same.


There’s another article’s worth of commentary I’d like to present on “useful cryptography.” I’m eager to further discuss the importance of education in cryptography, and to take an analytical look at how a great deal of commercial cryptographic products and systems are designed without a cryptographer anywhere to be found; it can be expensive to hire one, so in cases where that isn’t feasible, educating those responsible could mean the difference between secure and insecure. I’ll also introduce some advisory steps for engineers on how to approach cryptographic infrastructure design, from conceptualization to going live. For now, these are just my two cows – I mean . . . two cents. Moore on this, next time. Sorry, but it would be a shame to pass up a good, albeit cheap, pun.
