The last place I expected to see a security issue was IHOP. Sure enough, though, this very thing happened in Quincy, Massachusetts. Here's the skinny: you walk into IHOP and ask to be seated, only to have a security guard ask to hold your I.D. What? This reminds me of the problem of drivers pulling out of gas stations without paying, which prompted the prepay idea; logically, this would have seemed like an obvious solution to try. Fortunately, it was an unauthorized move by a boneheaded employee, not a corporate policy change. Customer John Russo, after being confronted by the security guard slash I.D. valet, replied, “You want my license? I'm going for pancakes. I'm not buying the Hope diamond.” I heard a joke about it, along the lines of needing a passport to order a Belgian waffle. Jokes aside, the security implications of this situation are numerous.
Justin Troutman Blog
Immediately following this year's CRYPTO conference in Santa Barbara, NIST held The Second Cryptographic Hash Workshop. From that link, you can access the papers and presentations from the workshop. This, as some of you might know, is the think-tank for ideas that may eventually result in a new hash function standard. For now, SHA-256 is a good interim standard, but we're going to need something new. I, like many others, would love to see an AES-style competition for a new hash function standard. If it comes anywhere close to the success of the AES selection process, it would be yet another job well done by the cryptographic community.
I'm all for it. Until then, if you need a standard, for whatever reason, go with SHA-256 for a 128-bit level of security. If you don't need a standard, and are flexible to choose any primitive, I'd suggest looking at Whirlpool; it's based on the wide trail design strategy found in Rijndael's (AES's) design, and was co-designed by Vincent Rijmen (the "Rij" in "Rijndael"). Until next time – cheers!
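To make the SHA-256 recommendation concrete, here's a minimal sketch using Python's standard `hashlib` module (the message is made up for illustration; Whirlpool isn't guaranteed to be available in `hashlib`, so only SHA-256 is shown):

```python
import hashlib

# Hash a message with SHA-256. The 256-bit digest gives roughly a
# 128-bit security level against collisions, since a birthday attack
# needs on the order of 2^(256/2) = 2^128 work.
message = b"attack at dawn"  # hypothetical message
digest = hashlib.sha256(message).hexdigest()

print(digest)       # 64 hex characters = 256 bits
print(len(digest))  # 64
```

Any change to the input, even a single bit, yields a completely different digest – which is exactly the property that makes these functions useful for integrity checking.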
Minneapolis has this so-called "bait-car" program. Simply put, the police use nice rides as decoys to lure in car thieves. Concepts like this are certainly not new, but hey, it's something I just read in the "news," and my first thought was that it's analogous to honeypots, in some regards. The article opens with the mention of a Toyota Camry being used as a decoy.
Those of you who are aware of my habit of concocting portmanteaux shouldn't be surprised at the title of this blog entry, which I've so guiltlessly dubbed, "Honeyota." Hehe. I'm anxious to hear about any other similar concepts you folks have used to lure in attackers, even if just for the purpose of siphoning useful information that may aid in thwarting potential future attacks on crucial systems.
Okay, so this isn't new, but I've been wanting to point out what I feel is an obvious security failure. Essentially, these "DataDots" are laser-etched dots, granular in size, much like sand. They're prepared in a UV-based adhesive for application. The dots contain a unique identifier code that, upon application, allows your property to be identified as belonging to you. This is beneficial for stolen items that are recovered – at least, that's what they're aiming for. Here's their home page, and here's a video that introduces the concept.
My questions are: "What happens if I put my DataDots on someone else's property, and report it stolen?" "What happens when property is sold that contains DataDots?" "What happens if there are multiple DataDots, with different identification numbers, on the same item?" There are various ways to look at this problem, but these are the first few questions that pop into mind. This reminds me of a cryptographic problem involving MACs. (Y'all know how much I love MACs.)
The problem I'm referring to is as follows. A MAC, or Message Authentication Code, is a keyed function that serves the purpose of preserving the integrity of a message. Let's say Alice and Bob are communicating, and want to ensure that an adversary doesn't tamper with the message. Alice and Bob share an authentication key. Alice computes a MAC on the message, using this shared key. When Bob receives the message, he authenticates it using the same key. If the MAC he computes matches the MAC sent along with the message, then the message hasn't been tampered with.
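The Alice-and-Bob exchange above can be sketched with HMAC-SHA-256 as the MAC, using Python's standard `hmac` module (the key and message are made up for illustration):

```python
import hmac
import hashlib

# Alice and Bob share an authentication key (hypothetical values).
key = b"shared-authentication-key"
message = b"meet me at noon"

# Alice computes a MAC (tag) on the message with the shared key.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Bob recomputes the MAC on what he received and compares it to the
# tag that came with the message, using a constant-time comparison.
expected = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)  # authentic: message untampered

# If an adversary flips even one bit of the message in transit,
# Bob's recomputed MAC no longer matches the tag.
forged = bytearray(message)
forged[0] ^= 1
assert not hmac.compare_digest(
    hmac.new(key, bytes(forged), hashlib.sha256).digest(), tag)
```

Without the key, the adversary can't compute a valid tag for a modified message – that's the whole point of keying the function.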
In this article at The Register, a point is raised: is it even worth disclosing vulnerabilities, considering the ramifications?
In the cryptographic community, disclosure is mortar; it is responsible for the stability of research in the field. Good, secure cryptographic design is a product of the cryptanalytical aptitude that was built before it. Simply put, cryptography, in general, has been successful because cryptanalysts are able to publish results. These results may include practical attacks on applied cryptographic systems, which leads me to wonder: could a cryptanalyst face legal woes for disclosing a cryptographic weakness, given the issue at hand? Probably; it's contextual.
Over the decades, the reason we've built progressively better cryptography is because of cryptanalytical results published by fellow cryptographers; younger, maturing cryptographers, such as myself, look to these past results as our foundation. Recognizing and disclosing insecurity is essential for the rethinking and designing of security.
There are nooks and crannies to the disclosure debate that have been investigated over the years, but when all is said and done, disclosure, in general, is a necessity. Errors resulting in insecurity are inevitable, but they cannot be shrugged off as a matter of course, then pushed to the side. Correctness and security are both crucial.
As humans, we have an unspoken duty of defending our basic human rights. If there were a bill of security rights, disclosure would be on there, without a doubt.
I happened to be browsing CNN and noticed a story entitled, "Cameras that scold." The short description read:
"Residents and police say talking surveillance cameras reduce crime. CNN's Gary Nurenberg reports ( April 8 )"
Basically, the city of Baltimore has, at residents' requests, installed surveillance cameras that are activated by motion detection sensors. Upon activation, a recorded message announces:
"Your photograph was just taken. We will use it to prosecute you."
You can check out the video here. (Just to warn you in advance, it's a pop-up window, so you may have to adjust your pop-up blocker.)
The assumption, by the community – both residential and law enforcement – is that crime has been reduced since the implementation of these cameras. However, that's not what one can really conclude. The cameras are isolated security measures; that is, while they may deter criminals from the target they monitor, this says nothing about reducing the amount of crime that will actually take place.
What you have here isn't a way to solve the problem; it just moves the problem somewhere else. You see this a lot – protecting targets (especially those already hit). This isn't practical, nor does it make sense. Have you tried counting all the possible targets? Me neither. Suppose we have a front door and back door. A criminal comes in the front door, so afterwards, we install surveillance cameras above the front door. Does this reduce any crime? No, it just lets the criminal know that he'll have to use the back door next time.
There have been numerous reports on the ineffectiveness of surveillance cameras.
This question is aimed at both developers and consumers. The role I fulfil is strictly cryptanalytical; that is, when I work on a project, I conceptualize what the security infrastructure should look like, from a cryptographic standpoint, but the developers ultimately implement this conceptualization of mine. Oftentimes, when I'm brought onto the project, there is already an infrastructure in place, and nine times out of ten, it's insecure, because it's either missing something or doing something wrong. I'm in the process of writing a rather large series on this, but that's all the details I'm relinquishing for now. 😉
Anyhow, my question is this. As a developer, what types of goals do you try to achieve, cryptographically? I know this is context-dependent, but at the bare minimum, what do you feel is sufficient, for preserving confidentiality and integrity? As a consumer, what do you look for in a cryptographic solution? What characteristics are deciding factors?
Okay, so one question turned into four. Oh well. Hehe. I ask because I've noticed a lot of false stigmas and misconceptions that lead to developers falling short and consumers looking for the wrong things. An ongoing interest of mine is learning more about why cryptography fails so often at the implementation level, and why some bad cryptographic products are able to gather a large fan base. More importantly, I'm learning for the sake of suggesting ways to mitigate the effects of these issues, and in some cases, avoid them altogether.
Thanks in advance, and a great Thursday to y'all from the Carolinas!
Well folks, I have finally hopped on the blog bandwagon, which I am excited about. I have a personal weblog at http://www.justintroutman.org/blog/, but it's reserved for intense cryptanalytical miscellany only, such as the latest cryptanalysis from around the community, and my own research. Here, I'll discuss a variety of issues – some more cryptographic than others. Oh, and I cordially invite you – no, wait, I not only cordially invite you, but encourage you to pass along any questions or topics you may have, that you'd like to see elaborated on. Who knows; it may be the type of question or topic to devote an article to. I'll be on the look-out for some interesting security issues, of which I'll be posting soon. So, until then – bon voyadios!
The Weekly Permutation's focus shifts from happenings in the cryptographic community, to general computer security, to the politics that affect it all.
Cryptographic coverage includes everything from the latest cryptanalysis of block ciphers and hash functions to the use of cryptography in a malicious context, such as cryptoviral information extortion.
In regards to general computer security and the politics surrounding it, coverage spans varied topics, ranging from the legal ramifications of full disclosure to just plain rotten security decisions and products.