Interview with a Security Professional – M3DU54

 

Well, in an effort to continue the popular “Interview with a Security Professional” series, I was lucky enough to have our very own M3DU54 answer some questions for us. Word has it he dictated the responses to several fine-looking specimens whilst swilling Piña Coladas on the beach on the Spanish Riviera. The life of a retired hacker is indeed a truly stressful one! On with the interview, then!

Question

 

I postulated a little while ago that a good way to learn reverse engineering is by starting to analyze your own “toy” programs – one such as “Hello world”, for example – as you know what the source code looks like and how the program functions. Do you think this method of self-learning is a good one?

 

Absolutely. The major stumbling block for newcomers to reverse engineering is the sheer complexity. This way you can slowly build up the complexity at a rate you feel comfortable with and thus get a feel not only for how compiled code looks and relates to source, but also become more familiar with your debugging environment.

There will come a point when the pre-knowledge of the source will get in the way of any further learning. This should be a clear signal to fly the nest and start looking at other people’s code. Again, don’t try to find flaws in protection methods immediately; just try simple targets such as nag screens and work your way up. After a while you’ll be able to chart entire applications, given time, and read ASM almost as well as most people read source.
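
By way of illustration – and this is only a minimal sketch, assuming a GCC/Linux toolchain purely for the example – a “toy” target of the kind being described might be no more than the following. Compile it without optimisation (gcc -O0 -g -o hello hello.c), then run objdump -d -S hello and find the string literal and the call that prints it; you may find the compiler has substituted puts for printf, which is itself a useful first lesson in how compiled code relates to source.

/* hello.c – a deliberately trivial reverse engineering target */
#include <stdio.h>

int main(void)
{
    /* Locate this literal in the binary, then find the code that references it */
    printf("Hello, world\n");
    return 0;
}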

The days when ASM was pretty much a requirement for writing code are gone, and now many ‘programmers’ don’t have a clue when it comes to debugging their own code at the assembly level, which is a real shame. Personally, as an ’80s enthusiast, I think ASM is the most exciting form of programming and anything that brings people into contact with it, such as reverse engineering, should be encouraged – but I think it is right to advise people to start small, otherwise the sheer scale of the task can sour them to what can be a very rewarding experience.

Question

 

Almost every attack and post-exploitation technique can be discovered via data mining. The one exception would be inserting an LSP into the victim computer’s stack, and its stealthy communications model. Do you know of any other stealthy ways to hide the two-way communications model on a compromised computer?

 

Well, it depends very much on what you mean by stealth. You can hide in the protocol stack on a compromised host with little difficulty and source packets which appear to come from a legitimate application. An LSP is a simple way to fool a software firewall into believing an authorized application sourced the data.

For example, on a Windows platform you can inject/receive packets from the winsock API, the winsock lower functions, the protocol drivers, the intermediate drivers, the miniport (vendor supplied) driver, or NDIS, which glues all of the driver components together. In fact, as I discovered recently, you can even source and receive packets from the HAL itself, by profiling the characteristics of the network card without any need for the vendor-supplied driver. The latter technique will get under any current software firewall, even if the NIC vendor builds a firewall hook into the driver itself.

But then, there are also hardware firewalls, and the trick in these cases often boils down to a tradeoff between transfer speed and stealth. Your traffic must not only be sourced below the software firewall hook (or above the hook and masquerading as an authorized app) to fool the local host security; it must also look like legitimate traffic when out there on the wire.

If you mimic properties of legitimate traffic (generally by observing existing TCP, which you can see is two-way established) you can negotiate the local hardware and software firewalls with little difficulty. That’s the look’n’learn approach…

That’s rather like watching vehicles on a road. If you see that a particular turnoff has traffic in both directions then it’s fairly certain that the route ain’t blocked. If all the cars are red, then it would be fair to say that red cars are not blocked in either direction. A similar system applies in profiling the local firewall rules – if the traffic is bound for a global rather than local IP range then it is fair to say you’ve found a possible way through the perimeter. Copy everything but the remote IP and content. If your own packets are blocked, then the firewall is probably blocking them on the remote IP and you try another visible route.

Obviously the above approach requires that the destination is prepared to accept this traffic on any port, which is pretty easy if you base your receiver around a packet driver rather than a socket object.

Another way to achieve stealth is by manipulating unused option space in the headers of authorized outgoing packets. Of course, this relies on having a collection point upstream of the compromised host – but it can be used to hide data across the enterprise for collection at the perimeter, where you can strip off the options and compile them into packets to a specific destination. That takes more time to set up, and an infected gateway, but it does mean that there is nothing on the local topology to alert the admin who may be monitoring for telnet and other strange connections. If the device on the gateway stores and bursts then this can be quite effective.

Another strong method in larger organizations is to hide data in fake DNS lookups to a remote fake DNS server you control. As long as you keep the traffic level down this is rarely noticed, but it is an excellent way to pass captured user passwords out of a secure network, and the resulting replies (if you send them) can be used for a reverse stream. The key is to make the requests look legitimate, but even if each lookup only carries one or two characters of real information and you only send one every 2 minutes, the buffer will clear quickly as you’re unlikely to see a machine getting a unique login every 30 minutes.

 

Question

 

While compromising embedded systems like a Cisco router can have its uses, in reality it is not as useful as some might think. Would you agree, at least in a normal hacking scenario?

 

I’m not sure how useful people think it is. In fact, I’m not entirely sure what a ‘normal’ hacking scenario would be.

Owning a device considered as ‘hardware’, whether it be a router or a door entry system, is powerful. It is powerful because most people would exclude it as a possibility when evaluating a system for security, or even when analysing an attack after the event. Those investigating will often assume that the thief had an access token or somehow managed to get by the firewall rules. Rarely will a detailed inspection of the device be carried out. This is because most security experts will tend to analyse all attacks in a familiar context.

A Cisco router is a node on the network. It is a node like no other because, unlike the workstations, it is not sitting underneath a switched environment but on top of it. It therefore has a much wider view of the network topology than one can easily gain sitting at a workstation. Additionally, it has multiple interfaces, each of which can have multiple virtual interfaces, and it can spoof and tamper with packets on the wire far more effectively than a switched host.

I think the problem many people have is that they view a Cisco router as a device which is ‘configurable’ rather than ‘programmable’ – whilst it is possible to cause a fair amount of mayhem within the confines of the IOS environment, it is considerably more useful to run code alongside the IOS environment. I’d say simple reconfiguration of a router to which you have access is of limited use in a typical attack – but owning the router outright by uploading a modified IOS image is a considerable advantage which will persist much longer than any temporary flaws in the perimeter security or hidden tools on a compromised host.

Furthermore, whilst a host is often re-provisioned at the first sign of a problem, a router generally just has its startup-config file checked or replaced, which would have no effect if the IOS were compromised, as the config file would appear untouched.

Even if one were to replace the IOS, this is generally done via an IOS command such as ‘copy tftp flash’. The problem with this is that if the IOS itself is suspect, then relying on the IOS to remove the firmware image and install the new one cleanly is also rather suspect. When we consider that the bootblock is also writeable, we have a compromise situation that is terribly difficult to detect or resolve.

So, to answer your question, if I had to choose one method to ensure later re-entry into a compromised network I’d go for a Cisco router over a host or server rootkit every time. Whilst it won’t get you the immediate level of access you’d like (Such as the ability to copy a database) it’s as useful as a key to the networking room.

 

Question

How important is “social engineering” to industrial espionage, seeing as many corporations do have hardened networks?

 

An interesting question. I think Social Engineering has always been fundamental to Industrial Espionage and, although the growth in data warehousing and networking has shifted the focus of industrial espionage greatly, social factors are still key. When a particular target has to be compromised it is often very much faster to employ social methods than it is to try system cracking.

 

Social Engineering is a much misunderstood and berated concept. It is often seen by system crackers as the shallow end of the pool. Personally I think exploiting people is every bit as fascinating as exploiting overflows. Both software and people are complex and it is in these complexities that weaknesses exist. Identifying and exploiting weak links in a personnel list is every bit as challenging as finding loopholes in system configurations, perhaps more so.

I think the problem occurs when people equate Social Engineering with asking for a password or talking someone into telling you their key. As an example, at a dinner party try mentioning some difficulty you have had remembering a password and the difficulty it subsequently caused – most people at the table will happily begin running through their favourite password strategies. And yes, it really can be that simple. But there’s a whole other class of social engineering which is as evolved as language itself.

Social engineering, bribery, blackmail, employee positioning and many other methods have been used to gain the advantage in business for a very long time. Networked computing is a relatively recent development which can assist the thief in locating or transferring data, or provide him with the tools to establish trust or credibility – but for most modern corporate spies technology is still little more than a supplementary tool.

 

Question

 

The statement that IPv4 is inherently insecure appears to me to be slightly disingenuous, as we have IPSec to use when required. Would you agree?

 

I would say that ensuring security and privacy is a layer 6/7 concern.

The layered design philosophy is widely regarded in the computing industry as the foundation stone of interoperability and adaptability. Certainly, if it wasn’t for the layered approach to networking we would not be seeing the rich variety of inter-networked computing we see today.

I think it is easy to lose sight of this and blame some lower-level protocol for the security issues of the day; personally, I think that IPv4 performs its job admirably in this regard. The IP layer corresponds to Layer 3 in the OSI model and, as such, is akin to the ‘postal service’ of popular analogy. Now, you can blame the postal service for losing letters, you can blame the postal service for incorrect delivery – but one thing you cannot blame the postal service for is your own failure to establish the identity and credibility of yourself and your correspondents, or to take reasonable safeguards to protect secrets in transit. That’s simply not their job. If it were, getting a letter posted at all would be an ordeal.

Layer 3 in the OSI model is our ‘postal service’ – it defines a mechanism for getting data from A to B over multiple links. That’s all it does and it does so as quickly and efficiently as possible. If we want reliable transmission we use higher level protocols such as TCP – if we want low overhead best-effort delivery we use UDP. That’s the flexibility of the layered approach.
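
To make that concrete – and this is just a minimal sketch, assuming a POSIX sockets environment – the choice of service lives entirely above the IP layer; swapping one transport for the other is a one-line change while the ‘postal service’ underneath stays the same:

#include <sys/socket.h>

int main(void)
{
    /* Reliable, ordered byte stream: TCP */
    int tcp_fd = socket(AF_INET, SOCK_STREAM, 0);

    /* Low-overhead, best-effort datagrams: UDP */
    int udp_fd = socket(AF_INET, SOCK_DGRAM, 0);

    /* The IP layer beneath both is identical – it just moves packets from A to B */
    return (tcp_fd < 0 || udp_fd < 0);
}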

If we wanted to design a postal service that encrypts everything in transit, guarantees delivery, authenticates endpoints, et al then we could do so. But that would mark the end of the humble postcard which is cheap, efficient and frees one from spending their time licking envelopes.

 

Question

 

When it comes to reverse engineering a program with the goal of finding overflows or format string issues, do you use a standard methodology? Much like RF signals analysis: find the modulation method, the modulation rate, the symbol states, and so on.

 

Realistically it’s just something you develop a taste for. First I note the language, the compiler and any static linkages, then, if applicable, any ActiveX/COM objects. Very often at this point you will notice a few familiar faces – common objects with exploitable conditions, or third-party libraries you’ve come across before and know to have identifiable weaknesses.

Also, knowing the compiler and language allows you to search for the most common culprits – sprintf calls being an obvious example – which will need greater scrutiny. This part takes forever. Each one needs to be examined in context to make sure its parameters are bounds-checked… you also have to do a lot of back tracing and examine the entire subroutine in order to determine if the condition can be triggered by its entry conditions (not just its formal parameters).
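
As a minimal illustration of the kind of call site that gets flagged (the function names and buffer size here are invented purely for the example):

#include <stdio.h>

void greet_unsafe(const char *name)
{
    char buf[32];
    /* Flagged on review: nothing limits how much of 'name' is copied into buf */
    sprintf(buf, "Hello, %s!", name);
    puts(buf);
}

void greet_safer(const char *name)
{
    char buf[32];
    /* Length-limited alternative; truncation still needs handling if it matters */
    snprintf(buf, sizeof(buf), "Hello, %s!", name);
    puts(buf);
}

Whether the unsafe form is actually reachable with attacker-controlled data is the part that takes the back tracing described above.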

I find a good timesaver is to profile memory accesses at runtime and build up a map of buffers and the code that references them. This is where you’re most likely to find application-specific vulnerabilities, and it will help concentrate your initial efforts on those areas of code most likely to contain potential for overruns. Once done, try to focus first on those buffers most obviously linked with parsing user input or socket data, as these are going to be the most easily accessible.

The most difficult aspect is identifying potential race conditions and problems arising from multithreaded applications and their contentions with shared resources. Of course, these are also the hardest to replicate outside of a test environment and so I’d tend to leave these unless the application is high profile and more direct methods draw a blank.

The problem is that most useful exploitable conditions are not easy to find. Those that ARE easy to find are often short-lived. There is no one-shot tool or perfect methodology to find them all quickly, if there was then generally code would be much more secure before shipping.

Once found, there’s no guarantee you can force the condition from the outside, and I personally find this part to be very tedious. You may have identified a number of potential conditions, but you now have to work backwards from each in the hope that there is some entry condition which can force them – then work backwards again, until you reach some external interface you can manipulate to seed those conditions. This is why I tend to begin by working forwards from buffers related to IO and parsing.

Finally, on larger projects the best thing you can do is get a team together. Seriously, the more eyes you have looking over the code the better your chances are; otherwise it is so easy to burn yourself out and start missing things. I find that after a couple of hours my brain tends to shift into automatic and I’m useless for anything else, such as answering the phone or responding to verbal questions – that’s where I am most productive. It can only be described as a kind of mental myopia.

My personal preparation involves:

– Plenty of perforated continuous fanfold listing paper (Never underestimate the power of hardcopy when analysing code)

– Disconnect the telephone and cut power to the doorbell

– Move coffee and antacid tablets close to the workstation

– A multi-monitor / remote debugging environment is a must

– Have a bunch of pillows and cushions for napping/relaxing

– Stock up on quick food, pot noodles are great

– Keep ambient lighting constant, comfortable and glare free

I find it best to immerse myself in the code and take regular 1-2 hour relaxations rather than actual sleep if possible as this really keeps you in the zone. I find I work best with a proper sleep around once every 48 hours.

I know many people would disagree and say that such extreme immersion is unhealthy and unnecessary, they are right of course – the thing is, the white hat community may be able to get by with juggling work commitments and code evaluation but the ability to single-mindedly dedicate yourself to the task is what gives a small team of blackhatters the edge over both developers and the professional security community. If you’re prepared to break your head over the code you WILL find exploits ahead of the crowd. It just takes dedication, focus and a lack of external commitments.

After all, for the serious blackhatter, 0-day is 0ld news.

Question

 

Realistically just how important is having good math skills to a good programmer?

 

I don’t think it is essential for being a good programmer. People have said to me many times in my life, ‘Oh, you can program computers – you must be really good at math’, and I generally laugh and say ‘no, that’s why I need the computer’ – I even once had someone start throwing complex multiplications at me, believing that if I was a programmer I could rapidly solve them in my head.

I think it’s a misconception many people have about programming. Whilst it is true that a good background in mathematics will serve you well it is by no means a prerequisite.

Obviously, an appreciation of simple things such as binary and hexadecimal numbering is essential, as is a good grasp of Boolean logic and an understanding of many mathematical operations and, more importantly, how they can be applied. But the extent to which mathematics will be important to an individual programmer really depends on what area they want to get into.

Obviously, if you want to be a games programmer or write engineering simulations you’re going to need a much richer set of mathematical tools at your disposal. But again, things like integration networks, matrix math, and the complexities of collision detection and finding collision points can be mastered very quickly.
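
As a small illustration of the sort of ‘games math’ in question – the struct and names below are invented purely for the example – an axis-aligned bounding-box collision test is little more than a handful of comparisons:

#include <stdbool.h>

typedef struct { float x, y, w, h; } Box;   /* position and size of a rectangle */

/* Two rectangles collide if they overlap on both the x and y axes */
bool boxes_collide(Box a, Box b)
{
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}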

You don’t need a degree in math to be a good programmer. Although I think you will struggle to be a well-rounded programmer if you don’t have a strong appreciation of mathematical concepts.

You will certainly need a strong stomach for the subject if you wish to turn your hand to more complex tasks. Complex statistical techniques also crop up a lot in even the most bland business applications – and, even in many simple applications, your code will benefit greatly from a finer appreciation of the subject.

 

Question

 

To be an IT Professional is to have a life of constant learning and of keeping one’s skill set up to date. Was there a routine that you used personally to accommodate this constant-learning reality for today’s IT Pro?

 

I’m probably not the best person to address that question.

For me the problem was never keeping on top of an increasingly restless industry; it was keeping on top of my domestic life. That has been a problem for me since adolescence.

I took my CCNA and CCNP within 6 months of self-study whilst working with Nortel and keeping up with their training programme, although it took 18 months before I took the IE written.

Personally, I’ve always found studying for certification far easier than, say, keeping my social life current. I think that is probably fine in the short term but in the long term it can cause significant health problems. I did go through some serious bouts of depression and anxiety. I was finally diagnosed with clinical technomania and had strong tendencies to become socially reclusive.

But then, my case is more extreme than most. Back in the dark days before broadband I used to think nothing of 4-figure monthly telephone bills – which is difficult to justify when you’re unemployed. Having my line cut for a week would make me restless and often get me cross-wiring the telco or abusing prepays.

I think the best advice I can give to those truly struggling to stay on top of the certification game is don’t let it interfere with your domestic life – nothing is worth either your health or your friends. Take your time, pace yourself, and keep some outside interests.

I know this probably doesn’t apply to the majority of the readership, but the few who do consider technology to have been their entire life from an early age should be especially careful to maintain a conscious balance.

 

Question

 

For someone who simply can’t grasp programming at a good level, is there something that you would still advise them to understand, and if so, what is it? By this I mean at least understanding certain C functions and such.

 

Take up gardening. Seriously though, this very much depends on what they want to be. If their interests are in network security then they don’t really need much programming skill at all. If they want to be a coder then they should re-evaluate their achievable aims or find some book, person or course which explains it better.

There are many fields in IT and very few of them actually require you to be able to program a machine.

 

Question

 

Were you to start learning all over again, what would you learn, and in what order, with the goal of becoming an IT Pro?

 

I don’t think I would have done much differently, apart from taking time out to look around once in a while. I’ve always just learned whatever most interested me at the time without any real employment goal. I think that has given me a very good broad base to work from. Of course, I started before I was even ten years old.

I’ll tell you how it was for me, and you can contrast with the industry as it stands today:

By the time I became head of our after-school computer club at age 13 I was writing ASM routines to perform code injection over BBC Econet networks (not an exploit, a feature) thanks to a stack of Econet manuals I’d shamelessly stolen from the network room and studied at home. I had a solid understanding of 6502 and Z80 assembly due to the BBC machines at school and two Sinclairs and a Commodore 64 at home – I also had a Gavilan and an Osborne 1 (which I later sold for a Kaypro II).

I also stole a number of Borland C manuals from the local college (yes, I made a habit of stealing manuals), where I could often be found playing on a Prime 550 which the admins had given me an account on after I was caught in various students’ logins while skipping school. And so, by age 15 I was coding in C, Cobol, Smalltalk, Pascal, Fortran and ASM with mounting experience on PDPs, Primes and 8088/86 + 68000 desktops.

My big break came when I got into considerable legal trouble on multiple fronts and ended up cutting an informal deal to help patch up a weak security system for a large Anglo-American financial services company. That simply doesn’t happen today. The remaining legal issues were suspended as a result. This led to permanent employment, where the only redeeming feature was that I could now afford the toys I so desperately wanted. Later I became security cleared and went to work with AS/400s, and ended up in the US minding a high-security VAX cluster, later overseeing its migration to an AXP cluster of DEC Alphas.

Basically I shied away from work for a long time because it would interfere with my enjoyment of coding, digital electronics and telephony. When I finally started working I found that I was technically minded enough to breeze through certifications and generally found myself taking certs that had little or nothing to do with my work function or promotion path. Work was just a convenient way to come into contact with better hardware and pay for my home setup. I’ve never really had a ‘strong’ work ethic and always been entirely self-motivated. My real love has always been an unhealthy obsession with playing dirty. I returned to this in 1999 performing various blackhat services with some close friends.

But today?

Now things are more competitive and I don’t think such a carefree attitude ‘cuts it’ anymore. In a way I consider myself lucky to have gotten into the industry in easier times. If I was starting again now I probably wouldn’t do so well, to be absolutely honest. One has to be motivated and focused simply to compete for stepping-stone jobs and build experience, and that’s just not as much fun.

What was good in the ’80s and early ’90s is just not good now. The pioneering spirit is lost and now personal development is a matter of survival. It used to be that anyone with a good grasp of the system was pretty much uniquely talented. Now you either fight to stay on top or someone takes your job. Combine this with salaries crashing as more compete for the same roles… it is an absolute mess.

Personally, I consider myself better off out of it, which is why I invested heavily and retired early. To those just starting I’d say cert in every emerging technology. Five years is a lifetime, so be prepared to redefine yourself at a year’s notice. Be flexible enough to step sideways when the salaries crash or when focus shifts. Oh, and watch your back.

I would like to take this chance to express my gratitude to M3DU54 for taking the time to answer the questions for us, and also for giving them some serious thought, as indicated by his very interesting answers. Stay tuned for further interviews, and please feel free to kick me some names you would like to see interviewed.

Saludos!

alt.don
