
Activism leads to... what?

I am sure I am missing out on some important issues again. I've been thinking about these activists and their actions. I understand that these are very passionate people, very concerned about certain issues, whatever they may be: animals, traffic, torture, food, openness on the net and so on. That's all fine, but what I don't get are the actions. I see crowds outside fur shops manifesting their opinion against fur dealers, and how cruel it is to farm animals for the sole purpose of becoming coats for rich ladies in rich areas of town. On the other hand they have leather shoes on... OK, I admit it might be that these shoes are made of leather that happens to be a byproduct of food manufacturing and thereby quite acceptable. Then I think, most of these young people (they mostly are young) have decided to take their thoughts a bit further and have become vegetarians/vegans to really make their point. And that takes away the leather shoe theory too. This is still unclear to me... Someti...

RDP: MS12-020 - stupid Secunia comment

I just read this article at Computerworld and was somewhat stunned by the comment from a "security expert" from Secunia: However, the fact that RDP is disabled by default on Windows workstations limits the number of potential targets, so we shouldn't worry about the next Conficker, said Carsten Eiram, chief security specialist at Danish vulnerability research firm Secunia. Ehmmm... OMG! I certainly hope this poor guy was not quoted correctly, because what it would mean is that a vulnerability, even a critical one, is not critical if it is not widely deployed. So, if a company has a telnetd running on the DMZ with no password restrictions it is fine, because it is just one server among maybe hundreds. I have no words for this stupidity and ignorance. This sounds a lot like a guy I used to work with (he called himself a security analyst) who said we shouldn't worry about traffic in clear text between two certain servers, since the CAT-5 between those servers was so shor...

Penthouse level

Somewhere along the way to this level you realize that all of this data has to reside somewhere. The raw data can be zipped and stored "somewhere else"; you rarely need those logs, but you probably want to save them for 90 days or so, unless regulatory requirements say differently. For instance, banking information might have to be kept for up to 10 years in some countries. So you need to start thinking about your retention policies for the different types of data. And you don't have to keep everything on-line either. Store it on backups of some kind; it is cheaper than the terabytes of disk you would need to keep everything on-line. You still have the normalized data to work with, and that is in most cases all you need, if done correctly. The products you will be working with at this level are very, very powerful and can help you with advanced reporting tools and alerting schemes. They connect to ticketing systems and monitoring suites. They also are able to correlate d...
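
A minimal sketch of such a retention policy, assuming a hypothetical /var/log/archive directory and a 7-day/90-day scheme; adjust the windows to your own regulatory reality, and ship anything with longer retention to backup media instead of deleting it:

```sh
# compress raw logs older than 7 days to save on-line disk
find /var/log/archive -name '*.log' -mtime +7 -exec gzip -9 {} \;
# purge compressed archives past the 90-day on-line retention window
# (data with longer regulatory retention should already be on backups)
find /var/log/archive -name '*.log.gz' -mtime +90 -delete
```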

Purification

As discussed above at the reporting level your reports are most likely in a basic format, with a lot of information to parse through manually. Maybe `grep` is your most important tool ("find" for you Windows techs, and "inc" for IOS wizards) in combination with magic `awk` one-liners. This means you will spend a lot of time analyzing your report (maybe) once a month, and that's all fine if you don't have hundreds of devices logging hundreds of thousands of lines per day. But at some stage you will hit a critical level where it just becomes too hard to do this manually. This is where you step into level five, reviewing. This is the level you need to be at if you're trying to comply with regulatory requirements. If not, you will spend a lot of time explaining to the auditors why you aren't compliant and what you have done in terms of compensating controls, which usually is harder than biting the bullet and doing this correctly from the beginning. As mentione...
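
To give a feel for what those one-liners look like, here is a hedged example, assuming a hypothetical /var/log/collected/2012-03/ directory of standard syslog-formatted files (hostname in the fourth field):

```sh
# which hosts produced the most ERROR lines last month?
grep -h 'ERROR' /var/log/collected/2012-03/*.log | awk '{print $4}' | sort | uniq -c | sort -rn | head
```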

What if?

Then one morning shit hits the fan, and it's all over the place. Something goes terribly wrong and production comes to a distinct halt. It takes ages to file a new order. The Intranet page just won't load. Local shares won't mount. Remote Desktop connections time out. SSH does not work… OK, this is not good, by any measure. And yes, this is an exaggeration; you probably won't see all of the above happening at once unless someone blew up your data centers or you've had a very, very malicious attack where someone erased the lot. In one way or another you have to access your devices, be it through ILO, KVM or good old RS-232, but some serious digging into your logs (that are hopefully collected) has to be done. Following your incident response plan (you have one, right?) you call in experts in several fields to help you chase traces of faults and translate various logs from all your systems. These experts might be in-house specialists, or you could buy this expertise ...

Why do you need logs?

Logs are needed for different reasons. Your company may be a shop of developers, so you need a repository for your code that tracks who checks out and checks in which parts of the code; whichever version control system you use, it needs to keep track of the different versions of code the developers produce. Look at it as a log system within the application. A competent system like that most definitely has logging capabilities (if not, throw it out now), so that is easy. Who did what at which point in time? That application needs to log who checked out which portions of code at which point in time, since if something out of the ordinary were to happen, like a buffer overflow in the Linux kernel, it would be easy to see exactly when the faulty code was introduced and by whom. The "who" part is (hopefully) not there for blame reasons, but for educational ones, as in "we all learn from errors". Still, the main reason for logging, and preserving the logs, is traceability. The examp...
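
As a hedged illustration, with git (just one of many version control systems, not necessarily the one you run), that traceability question is a one-liner; the file path and search string below are made up for the example:

```sh
# which commits introduced or removed this call, and when?
git log -S 'memcpy' --oneline -- drivers/net/foo.c
# who last touched these particular lines?
git blame -L 120,140 drivers/net/foo.c
```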

Choosing Mr. Right

As you can imagine from the above, whatever products you choose, do not trust a vendor that promises "Plug it in and you will be up and running in ten minutes!". This will not happen. You will need to tweak and configure until your fingers bleed, your eyes are blood-shot from crying, and your vocabulary has run out of new and innovative cursing as well as the old and proven four-letter words before you feel things are under control. Sometimes that feeling is good, as in: you're on the right track. The rest is just a false prediction of what lies ahead in your adventure with log collection and reporting. Now, don't let these words put you off. Once you've overcome the first hurdles and caveats you might experience the beauty and necessity of logs. Sounds crazy, but when you understand the heartbeats and almost organic life that is going on in your data centers you will see the benefits of your correlated and filtered data logs, and the advantage it gives your organ...

Summary

See the above descriptions as examples; there are as many sub-levels as there are implementations and interpretations of needs and requirements. The examples are nonetheless taken from real life and compiled into short descriptions of what it can look like, and they give you pointers to what you need to achieve depending on which confidence level you are aiming for. The examples do not give any hints or recommendations for products you may or may not need, since it is a very fast-moving market and acquisitions, mergers and trends move too rapidly for this document. You'll need to do some research to find out what is suitable for you and what the different vendors offer today. There are several different aspects you need to keep track of when comparing products. You need to choose something that is right for your organization, and hopefully you'll find something that suits your needs and has room and capabilities to scale and expand with your company's strategies and goals....

6. Log monitoring

Punch that stick into gear number six! Now we're talking business. This is the level where you know what kind of environment you're living in, you know which logs are being collected, you know they are the right ones, and you know your filters work. You probably have alerts and alarms in place. Correlation of logs from different devices is in place, and dashboards blink and beep at your 24/7 SOC. Your data centers are equipped with log collectors which are filtered into easily translated dashboards or lists with relevant information displayed on screens in the SOC. Events that are important, but not important enough to be displayed on those screens, result in alerts that show up in your monitoring software, and maybe even your ticketing system. In some cases everything is connected to your central configuration management database. Backups are stored off site in a tamper-proof environment. Reports are easily retrievable in case of need. Some of them are automatically sent to your managemen...
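
A toy sketch of what "correlation" means at its simplest, assuming OpenSSH-style auth log lines (where the source IP sits three fields from the end); real SIEM correlation rules span multiple devices, but the idea is the same:

```sh
# flag a successful login from any source IP that has already racked up 5+ failures
awk '/Failed password/   { fail[$(NF-3)]++ }
     /Accepted password/ && fail[$(NF-3)] >= 5 {
         print "ALERT: login from", $(NF-3), "after", fail[$(NF-3)], "failed attempts"
     }' /var/log/auth.log
```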

5. Log reviewing

At level 4 you were looking at the reports every now and then, but now someone has decided you need to be on top of things and events pretty much on a daily basis. The reports/alerts created daily need to be quite granular and really tell you the state of your environment. At this stage you don't want to browse through hundreds of pages of false positives, so some refining has to be done. Reading that many pages not only takes way too much time, it also makes it impossible to focus on the really important things. If it takes too long and the real problems can't be distinguished easily, these reports will not be read after a while and thus not be acted upon. Reaching this level takes a lot of groundwork. You will have to have the noise reduction in place, and it has to be refined every now and then too. The reports, or dashboards, need to be designed so only the most important events are displayed and therefore easy to attend to. There should be simple means to dig in to the data...
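
A bare-bones sketch of that kind of daily job, with hypothetical paths and recipient; the noise-patterns file is simply a list of regular expressions for events you have decided you can safely ignore:

```sh
# strip known noise from yesterday's syslog and mail whatever remains for review
grep -v -f /etc/log-review/noise-patterns.txt /var/log/syslog.1 \
    | mail -s "Daily log review $(date +%F)" soc@example.com
```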

4. Log reporting

At some point you realized that ad hoc digging in logs was a bit of a hassle, so you took time to understand your logs and saw that a lot of it really is noise that you could live without. You are actually only interested in what has been outside of normal behavior, and you also want some figures to present to your management. So you installed (or wrote and installed) some kind of beast that filters out that noise, gives you hard facts in a rudimentary report (albeit with pretty colors) and paints a picture of what has been going on in your environment, or the parts that you decided are important. Once a month (give or take) you have a look at this and pass on important figures to your bosses, and the technical parts you take care of so you don't have to see them next time.
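
A deliberately crude sketch of such a "beast", assuming a hypothetical archive of gzipped syslog files for the month; it only counts severity keywords, which is often enough for the management figures mentioned above:

```sh
# monthly figures: how many critical/error/warning lines did March produce?
zgrep -h -i -E -o 'critical|error|warning' /var/log/archive/2012-03/*.gz \
    | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn
```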

3. Log investigation

If you have had problems of some infrastructural kind, be it networking problems, application mishaps or (let's hope not) an intrusion of some kind, you would be at this level. This implies that you know where your logs are being created, at which level they are logging, how to tweak log configuration settings, and how to interpret what's in the logs into human-understandable technical speech. And since there have been problems, someone probably also knows how to translate that techno-lingo into upper-level language, since management most likely has been reported to at some stage. What it boils down to is that you don't really care about your logs unless there is something worth investigating, and only then do you dive into your sea of logs and dredge for the line that eventually will lead you in the direction of finding out what really went wrong in the first place.
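
That ad hoc dredging often looks something like this hedged example, assuming hypothetical collected syslog files and the standard timestamp in the third field:

```sh
# everything appserver01 logged between 02:00 and 03:00 on the day things broke
zgrep -h 'appserver01' /var/log/collected/2012-03-16/*.log* \
    | awk '$3 >= "02:00:00" && $3 < "03:00:00"'
```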

2. Log collection

Somewhere in the back of your head you seem to remember that some wise guy at some point mentioned that logs are an important part of your infrastructure, so you actually took some time to configure logging from what you consider the important parts of your installation, and took time to look at the logs to verify that logs actually were produced. In other words: you decided which parts of your datacenter were important enough to extract logs from and you know logs are created, and you might even be collecting logs to a central repository, such as a syslog server for the Unix boxes and switches/routers, and/or an event collector of some brand for the Windows event logs. You might even be on the advanced side and have installed triggers or other mechanisms to trap events from your databases. This is actually the level most companies are at, to some degree.
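
A minimal sketch of the Unix/syslog side of that central collection, assuming a classic syslog.conf-style daemon and a hypothetical loghost; Windows event collection and database triggers are their own exercises:

```sh
# forward everything to the central syslog server ('@' = UDP; rsyslog also accepts '@@' for TCP)
echo '*.*    @loghost.example.com' >> /etc/syslog.conf
/etc/init.d/syslog restart    # or however your platform restarts its syslog daemon
# send a test event, then check that it shows up on the loghost
logger -t logtest -p auth.notice "central collection test from $(hostname)"
```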

1. Log ignorance

So you have set up shop, installed your servers and clients, deployed applications, databases and what not, but you didn't give any thought to logging events and happenings in your environment. Then this is the level you're at. Ignoring the logs, couldn't care less. That is: no logging whatsoever, unless logging is configured out of the box, and even then you wouldn't have a clue where these logs reside, let alone what's in them. Ignorance is bliss, right?

Scheduled SIEM thoughts...

To be published every morning at 4:30 AM... Follow, comment and rock on!!!

Log management maturity

A sane approach to log collection is essential to a successful deployment. In practice this means: don't go in believing you can install a box and be all set and compliant with whatever your initial goal was. This is very far from reality and even further from what could be true. In real life you will want to proceed with utmost care and tread with baby steps in mind. If you know what you have in terms of hardware, operating systems and applications, and most importantly have a deep understanding of what you are logging and know your various I/O peaks, you might be able to take more of a wholesale approach to implementing log collection. Another approach (a more common scenario) for a shop that does not quite know what they have, what they need and how I/O will interfere (or not) with their existing environment would be baby steps: introducing one log source at a time, analyzing the results, and when you're happy, going on to the next log source. Let's start w...