
Lack of understanding about TCP/IP, protocols, and services

Lack of trust in the IDS itself

Limitations Caused by the CIRTs

Could part of the problem of missed detects be the CIRTs? If your CIRT gets a report for an IMAP, portmap, ICMP ping sweep, Smurf, mscan, portscan, DNS zone Xfer, WinNuke, or whatever, no problem. They have a database pigeonhole to put it in, and everyone is happy. If the CIRT gets a report saying, "Unknown probe type, here is the trace, whatever it is it turns my screens blue," what do they do with that? The person getting the report is probably entry level, so there is a hassle because no database pigeonhole exists for it. The advanced analysts have a lot of work to do, and the seasoned CIRT workers have been burned by a false positive or two and aren't that likely to take action unless they get a similar report from a second source. In intrusion time, this can be a serious problem. From the moment I first heard about the klogin vulnerability in May 2000, it was less than eight hours before we were dealing with our first compromised system.

This is a serious issue because the CIRT is almost certainly understaffed. Real people are on the phone begging for help because their systems are compromised and their organization never had the funding to take security seriously. Real people screaming for help with compromised systems have to take priority over unknown probe types that turn screens blue. At the end of the month or quarter or whatever, the CIRT puts out their report: We logged this many portmaps, ICMP ping sweeps, Smurfs, mscans, portscans, DNS zone Xfers, WinNukes, and so forth. The new analyst who reported the unknown probe type sees that the report makes no mention of the unknown probe, shakes her head, and silently decides, never again. The analyst doesn't know whether the CIRT thinks she is nuts or whether the CIRT just doesn't care. This is why we made a conscious choice with Incidents.org, an all-volunteer CIRT and analysis organization, to be willing to post a new pattern before the whole world and write as our commentary, "We have no idea what this is; can anybody help us?" More than once I have been embarrassed by the answer, because it was a pattern I should have known. Over time, that has led us to act more like a conventional CIRT, to be cautious about what we post, to wait until we know more. This keeps the word from getting out and may allow an attack more time before we understand it well enough to detect it and defend against it. We know we shouldn't clam up, but it is hard to fight human nature.

A brief recap of EOI is now in order. We cannot observe every event. Of the things that we can observe, some are dismissed as unimportant when in fact they are attacks—these are the false negatives. Others are flagged as attacks when they aren't—these are the false positives. The goal of the system designer and intrusion-detection analyst should be to maximize the events that can be observed while minimizing the false positives and negatives. A number of systems and program design issues arise here, but there are also human issues to consider. Although complete efficiency might never be achieved, you should accept nothing less as your goal.
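To make the trade-off concrete, each type of error can be expressed as a simple rate over the events the system flagged or dismissed. The short Python sketch below uses made-up counts purely to illustrate the bookkeeping; none of the numbers come from the text.

# Hypothetical counts for one review period; the numbers are purely illustrative.
attacks_detected = 40      # true positives: real attacks that were flagged
attacks_missed = 10        # false negatives: real attacks dismissed as unimportant
benign_flagged = 25        # false positives: harmless events flagged as attacks
benign_dismissed = 925     # true negatives: harmless events correctly ignored

false_negative_rate = attacks_missed / (attacks_missed + attacks_detected)
false_positive_rate = benign_flagged / (benign_flagged + benign_dismissed)

print(f"false negative rate: {false_negative_rate:.1%}")  # 20.0%
print(f"false positive rate: {false_positive_rate:.1%}")  # 2.6%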

Severity

Several schools of thought propose ways to reduce severity to a metric, a number we can evaluate. This section discusses some of the primary factors that should be used to develop such a number. Let's start, however, with a basic philosophical principle: Severity is best viewed from the point of view of the system (and its owners) under attack. This is an important principle because the further removed the evaluator is from a given attack, the less severe it is (at least to the evaluator).

It Happens All the Time

The intrusion-detection team that I worked with for several years was once invited to spend the day with a very large CIRT. The CIRT had an analysis team that had just accepted delivery of a spiffy new intrusion-detection capability, an analyst interface that could watch a large number of sensors. We all thought it might be interesting to sit with the Shadow team analysts at this CIRT's workstations and see how effective they could be with the new spiffy interface. Within four minutes, one of the Shadow analysts had found a signature indicating a root-level break-in to one of our sister sites. She wanted to call the site and tell them, but the CIRT workers laughed and said, "It happens all the time." No doubt that was true from their perspective. These folks operate well over a hundred sensors of their own in addition to all the reports they receive. They probably deal with more compromises in a year than I will experience in my entire working career. The trip still seems odd to me, however, because I know how much trouble and pain a compromised system can be to the system owners and those who have to assist them. Severity is best viewed from the point of view of the system under attack and its owner(s).

Although we do want to keep the human element in mind as we discuss the severity of attacks, we need to be able to sort between them so that we can react appropriately. At every emergency room, there is an individual in charge of triage, making sure that care is given to those who need it the most. This way, a patient with an immediate life-threatening injury doesn't have to wait while the medical personnel attend to a patient with a stubbed toe. In a large-scale attack response, resources become scarce very quickly, so an approach to triage for computer assets is required. Figure 16.3 introduces this concept at a high level.

Figure 16.3. Severity at a glance.

Are nontargeted exploits for vulnerabilities that do not exist within your computer systems actually no-risk? When you study risk more formally, you will learn that part of the equation is your level of certainty: how sure are you that none of your systems have the vulnerability? I tend to be on the conservative side. In the examples that follow, I consider nontargeted, nonvulnerable exploits to be of no risk only if they are also blocked by the firewall or filtering router. In fact, there is a sense in which this is negative risk. The attacker using a nontargeted script exploit against a well-secured site is at a higher risk than the site because the attack will be reported. If the attacker succeeds in breaking in and doing damage somewhere else, the odds are at least fair that he can be tracked down.

What might be a reasonable method to derive a metric for severity? What are the primary factors? How can we establish an equation? How likely is the attack to do damage? And, if we sustain damage, how bad will it hurt? Clearly, these are all factors.

Criticality

"How bad will it hurt?" is one of the most important questions to consider in risk management. I was giving a talk in Washington, DC, and wanted to make a point about antivirus software and personal firewalls, so I asked, "How many of you travel multiple times a year?" Most of the hands went up, which makes sense for a government headquarters crowd. Then I asked, "How many of you carry Cipro?" Cipro is the antibiotic that was prescribed during the anthrax attacks. Because there had not been an anthrax attack since October 2001, nobody was thinking about that. However, I can just imagine what would happen if I were in a strange city and started feeling the worst case of the flu in my entire life. How would I get access to top-quality medical care? At home, I have my doctor, who knows me, a medical record, and friends who are doctors. In Houston, or Seattle, or New York City, the answer is to go to the emergency room. Do you know that it is not impossible to wait 12 hours just to be seen in an emergency room? How bad will it hurt? This is the question that should drive us. Now, just so you don't think I am totally off my rocker for carrying Cipro, I also travel internationally a lot, and though I try not to drink the water and to cook it, peel it, or forget it, having something like Cipro is an important tool if things go wrong. What does any of this have to do with antivirus software or a personal firewall? If you don't have these things and you are exposed, you are in a heap of trouble, just like anthrax and no Cipro.

It would be bad for me to be poisoned with anthrax, but it would be so much worse for the President of the United States to be poisoned with it. The major determinant for "how bad will it hurt?" is how critical the target is. If a desktop system is compromised, it is bad in the sense that time and work might be lost. Also, that system could be used as a springboard to attack other systems. If an organization's domain name system (DNS) server or email relay is compromised, however, a much more serious problem exists. In fact, if an attacker can take over a site's DNS server, the attacker might be able to manipulate trust relationships and thereby compromise most or all of a site's systems. When developing a metric, we need a way to quantify criticality. We can use a simple five-point scale, as follows:

5 points: Firewall, DNS server, core router

4 points: Email relay/exchanger

2 points: User UNIX desktop system

1 point: MS-DOS 3.11
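For an analyst scripting this up, the criticality scale reduces to a simple lookup keyed on the role of the asset. The Python sketch below mirrors the examples above; the role names and the default score for unlisted assets are assumptions for illustration only.

# Criticality scores keyed on asset role, mirroring the five-point scale above.
CRITICALITY = {
    "firewall": 5,
    "dns server": 5,
    "core router": 5,
    "email relay": 4,
    "unix desktop": 2,
    "msdos 3.11 box": 1,
}

def criticality(asset_role, default=3):
    # The default of 3 is an assumption: a middle score for assets not yet classified.
    return CRITICALITY.get(asset_role, default)

print(criticality("dns server"))    # 5
print(criticality("unix desktop"))  # 2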

Lethality

The lethality of the exploit refers to how likely the attack is to do damage. Attack software is generally either application or operating system specific. A Macintosh desktop system isn't vulnerable to a UNIX tooltalk buffer overflow or an rpc.statd attack. A Sun Microsystems box running unpatched Solaris might quickly become the wholly owned property of Hacker Incorporated if hit with the same attacks. As an intrusion-detection analyst, I get nervous when an attacker can go after a specific target with an appropriate exploit. This is an indicator that the attacker has done his homework with recon probes and that we are going to have to take additional countermeasures to protect the target. Again, a five-point scale applies:

5 points: Attacker can gain root across the network.

4 points: Total lockout by denial of service.

4 points: User access (via a sniffed password, for example).

1 point: Attack very unlikely to succeed (Wiz in 2002, for example).
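As a first cut at the equation the earlier questions asked for, the two scores can simply be added together; severity formulas built on scales like these commonly go on to subtract points for system and network countermeasures, but that refinement is outside this excerpt, so it is left as an optional, assumed term in the sketch below.

def severity(criticality, lethality, system_countermeasures=0, network_countermeasures=0):
    # The countermeasure terms are an assumption here; they default to 0 so the result
    # reduces to criticality + lethality when only those two factors have been scored.
    return (criticality + lethality) - (system_countermeasures + network_countermeasures)

# Targeted root exploit against a DNS server: criticality 5, lethality 5.
print(severity(5, 5))  # 10, the worst case on these scales
# An old Wiz attempt against a user desktop: criticality 2, lethality 1.
print(severity(2, 1))  # 3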

The last example, 1 point for Wiz, introduces a really important point when calculating severity,
