- •Network Intrusion Detection, Third Edition
- •Table of Contents
- •Copyright
- •About the Authors
- •About the Technical Reviewers
- •Acknowledgments
- •Tell Us What You Think
- •Introduction
- •Chapter 1. IP Concepts
- •Layers
- •Data Flow
- •Packaging (Beyond Paper or Plastic)
- •Bits, Bytes, and Packets
- •Encapsulation Revisited
- •Interpretation of the Layers
- •Addresses
- •Physical Addresses, Media Access Controller Addresses
- •Logical Addresses, IP Addresses
- •Subnet Masks
- •Service Ports
- •IP Protocols
- •Domain Name System
- •Routing: How You Get There from Here
- •Summary
- •Chapter 2. Introduction to TCPdump and TCP
- •TCPdump
- •TCPdump Behavior
- •Filters
- •Binary Collection
- •TCPdump Output
- •Absolute and Relative Sequence Numbers
- •Dumping in Hexadecimal
- •Introduction to TCP
- •Establishing a TCP Connection
- •Server and Client Ports
- •Connection Termination
- •The Graceful Method
- •The Abrupt Method
- •Data Transfer
- •What's the Bottom Line?
- •TCP Gone Awry
- •An ACK Scan
- •A Telnet Scan?
- •TCP Session Hijacking
- •Summary
- •Chapter 3. Fragmentation
- •Theory of Fragmentation
- •All Aboard the Fragment Train
- •The Fragment Dining Car
- •The Fragment Caboose
- •Viewing Fragmentation Using TCPdump
- •Fragmentation and Packet-Filtering Devices
- •The Don't Fragment Flag
- •Malicious Fragmentation
- •TCP Header Fragments
- •Teardrop
- •Summary
- •Chapter 4. ICMP
- •ICMP Theory
- •Why Do You Need ICMP?
- •Where Does ICMP Fit In?
- •Understanding ICMP
- •Summary of ICMP Theory
- •Mapping Techniques
- •Tireless Mapper
- •Efficient Mapper
- •Clever Mapper
- •Cerebral Mapper
- •Summary of Mapping
- •Normal ICMP Activity
- •Host Unreachable
- •Port Unreachable
- •Admin Prohibited
- •Need to Frag
- •Time Exceeded In-Transit
- •Embedded Information in ICMP Error Messages
- •Summary of Normal ICMP
- •Malicious ICMP Activity
- •Smurf Attack
- •Tribe Flood Network
- •WinFreeze
- •Loki
- •Unsolicited ICMP Echo Replies
- •Theory 1: Spoofing
- •Theory 2: TFN
- •Theory 3: Loki
- •Summary of Malicious ICMP Traffic
- •To Block or Not to Block
- •Unrequited ICMP Echo Requests
- •Kiss traceroute Goodbye
- •Silence of the LANs
- •Broken Path MTU Discovery
- •Summary
- •Chapter 5. Stimulus and Response
- •The Expected
- •Request for Comments
- •TCP Stimulus-Response
- •Destination Host Listens on Requested Port
- •Destination Host Not Listening on Requested Port
- •Destination Host Doesn't Exist
- •Destination Port Blocked
- •Destination Port Blocked, Router Doesn't Respond
- •UDP Stimulus-Response
- •Destination Host Listening on Requested Port
- •Destination Host Not Listening on Requested Port
- •Windows tracert
- •TCPdump of tracert
- •Protocol Benders
- •Active FTP
- •Passive FTP
- •UNIX Traceroute
- •Summary of Expected Behavior and Protocol Benders
- •Abnormal Stimuli
- •Evasion Stimulus, Lack of Response
- •Evil Stimulus, Fatal Response
- •No Stimulus, All Response
- •Unconventional Stimulus, Operating System Identifying Response
- •Bogus "Reserved" TCP Flags
- •Anomalous TCP Flag Combinations
- •No TCP Flags
- •Summary of Abnormal Stimuli
- •Summary
- •Chapter 6. DNS
- •Back to Basics: DNS Theory
- •The Structure of DNS
- •Steppin' Out on the Internet
- •DNS Resolution Process
- •TCPdump Output of Resolution
- •Strange TCPdump Notation
- •Caching: Been There, Done That
- •Reverse Lookups
- •Master and Slave Name Servers
- •Zone Transfers
- •Summary of DNS Theory
- •Using DNS for Reconnaissance
- •The nslookup Command
- •Name That Name Server
- •HINFO: Snooping for Details
- •List Zone Map Information
- •Tainting DNS Responses
- •A Weak Link
- •Cache Poisoning
- •Summary
- •Part II: Traffic Analysis
- •Chapter 7. Packet Dissection Using TCPdump
- •Why Learn to Do Packet Dissection?
- •Sidestep DNS Queries
- •Normal Query
- •Evasive Query
- •Introduction to Packet Dissection Using TCPdump
- •Where Does the IP Stop and the Embedded Protocol Begin?
- •Other Length Fields
- •The IP Datagram Length
- •Increasing the Snaplen
- •Dissecting the Whole Packet
- •Freeware Tools for Packet Dissection
- •Ethereal
- •tcpshow
- •Summary
- •Chapter 8. Examining IP Header Fields
- •Insertion and Evasion Attacks
- •Insertion Attacks
- •Evasion Attacks
- •IP Header Fields
- •IP Version Number
- •Protocol Number
- •The Don't Fragment (DF) Flag
- •The More Fragments (MF) Flag
- •Mapping Using Incomplete Fragments
- •IP Numbers
- •IP Identification Number
- •Time to Live (TTL)
- •Looking at the IP ID and TTL Values Together to Discover Spoofing
- •IP Checksums
- •Summary
- •Chapter 9. Examining Embedded Protocol Header Fields
- •Ports
- •TCP Checksums
- •TCP Sequence Numbers
- •Acknowledgement Numbers
- •TCP Flags
- •TCP Corruption
- •ECN Flag Bits
- •Operating System Fingerprinting
- •Retransmissions
- •Using Retransmissions Against a Hostile Host—LaBrea Tarpit Version 1
- •TCP Window Size
- •LaBrea Version 2
- •Ports
- •UDP Port Scanning
- •UDP Length Field
- •ICMP
- •Type and Code
- •Identification and Sequence Numbers
- •Misuse of ICMP Identification and Sequence Numbers
- •Summary
- •Chapter 10. Real-World Analysis
- •You've Been Hacked!
- •Netbus Scan
- •How Slow Can You Go?
- •RingZero Worm
- •Summary
- •Chapter 11. Mystery Traffic
- •The Event in a Nutshell
- •The Traffic
- •DDoS or Scan
- •Source Hosts
- •Destination Hosts
- •Scanning Rates
- •Fingerprinting Participant Hosts
- •Arriving TTL Values
- •TCP Window Size
- •TCP Options
- •TCP Retries
- •Summary
- •Part III: Filters/Rules for Network Monitoring
- •Chapter 12. Writing TCPdump Filters
- •The Mechanics of Writing TCPdump Filters
- •Bit Masking
- •Preserving and Discarding Individual Bits
- •Creating the Mask
- •Putting It All Together
- •TCPdump IP Filters
- •Detecting Traffic to the Broadcast Addresses
- •Detecting Fragmentation
- •TCPdump UDP Filters
- •TCPdump TCP Filters
- •Filters for Examining TCP Flags
- •Detecting Data on SYN Connections
- •Summary
- •Chapter 13. Introduction to Snort and Snort Rules
- •An Overview of Running Snort
- •Snort Rules
- •Snort Rule Anatomy
- •Rule Header Fields
- •The Action Field
- •The Protocol Field
- •The Source and Destination IP Address Fields
- •The Source and Destination Port Field
- •Direction Indicator
- •Summary
- •Chapter 14. Snort Rules - Part II
- •Format of Snort Options
- •Rule Options
- •Msg Option
- •Logto Option
- •Ttl Option
- •Id Option
- •Dsize Option
- •Sequence Option
- •Acknowledgement Option
- •Itype and Icode Options
- •Flags Option
- •Content Option
- •Offset Option
- •Depth Option
- •Nocase Option
- •Regex Option
- •Session Option
- •Resp Option
- •Tag Option
- •Putting It All Together
- •Summary
- •Part IV: Intrusion Infrastructure
- •Chapter 15. Mitnick Attack
- •Exploiting TCP
- •IP Weaknesses
- •SYN Flooding
- •Covering His Tracks
- •Identifying Trust Relationships
- •Examining Network Traces
- •Setting Up the System Compromise?
- •Detecting the Mitnick Attack
- •Trust Relationship
- •Port Scan
- •Host Scan
- •Connections to Dangerous Ports
- •TCP Wrappers
- •Tripwire
- •Preventing the Mitnick Attack
- •Summary
- •Chapter 16. Architectural Issues
- •Events of Interest
- •Limits to Observation
- •Human Factors Limit Detects
- •Limitations Caused by the Analyst
- •Limitations Caused by the CIRTs
- •Severity
- •Criticality
- •Lethality
- •Countermeasures
- •Calculating Severity
- •Scanning for Trojans
- •Analysis
- •Severity
- •Host Scan Against FTP
- •Analysis
- •Severity
- •Sensor Placement
- •Outside Firewall
- •Sensors Inside Firewall
- •Both Inside and Outside Firewall
- •Analyst Console
- •Faster Console
- •False Positive Management
- •Display Filters
- •Mark as Analyzed
- •Drill Down
- •Correlation
- •Better Reporting
- •Event-Detection Reports
- •Weekly/Monthly Summary Reports
- •Summary
- •Chapter 17. Organizational Issues
- •Organizational Security Model
- •Security Policy
- •Industry Practice for Due Care
- •Security Infrastructure
- •Implementing Priority Countermeasures
- •Periodic Reviews
- •Implementing Incident Handling
- •Defining Risk
- •Risk
- •Accepting the Risk
- •Trojan Version
- •Malicious Connections
- •Mitigating or Reducing the Risk
- •Network Attack
- •Snatch and Run
- •Transferring the Risk
- •Defining the Threat
- •Recognition of Uncertainty
- •Risk Management Is Dollar Driven
- •How Risky Is a Risk?
- •Quantitative Risk Assessment
- •Qualitative Risk Assessments
- •Why They Don't Work
- •Summary
- •Chapter 18. Automated and Manual Response
- •Automated Response
- •Architectural Issues
- •Response at the Internet Connection
- •Internal Firewalls
- •Host-Based Defenses
- •Throttling
- •Drop Connection
- •Shun
- •Proactive Shunning
- •Islanding
- •Reset
- •Honeypot
- •Proxy System
- •Empty System
- •Honeypot Summary
- •Manual Response
- •Containment
- •Freeze the Scene
- •Sample Fax Form
- •On-Site Containment
- •Site Survey
- •System Containment
- •Hot Search
- •Eradication
- •Recovery
- •Lessons Learned
- •Summary
- •Chapter 19. Business Case for Intrusion Detection
- •Part One: Management Issues
- •Bang for the Buck
- •The Expenditure Is Finite
- •Technology Used to Destabilize
- •Network Impacts
- •IDS Behavioral Modification
- •The Policy
- •Part of a Larger Strategy
- •Part Two: Threats and Vulnerabilities
- •Threat Assessment and Analysis
- •Threat Vectors
- •Threat Determination
- •Asset Identification
- •Valuation
- •Vulnerability Analysis
- •Risk Evaluation
- •Part Three: Tradeoffs and Recommended Solution
- •Identify What Is in Place
- •Identify Your Recommendations
- •Identify Options for Countermeasures
- •Cost-Benefit Analysis
- •Follow-On Steps
- •Repeat the Executive Summary
- •Summary
- •Chapter 20. Future Directions
- •Increasing Threat
- •Improved Targeting
- •How the Threat Will Be Manifested
- •Defending Against the Threat
- •Skills Versus Tools
- •Analyst's Skill Set
- •Improved Tools
- •Defense in Depth
- •Emerging Techniques
- •Virus Industry Revisited
- •Smart Auditors
- •Summary
- •Part V: Appendixes
- •Appendix A. Exploits and Scans to Apply Exploits
- •False Positives
- •All Response, No Stimulus
- •Scan or Response?
- •SYN Floods
- •Valid SYN Flood
- •False Positive SYN Flood
- •Back Orifice?
- •IMAP Exploits
- •10143 Signature Source Port IMAP
- •111 Signature IMAP
- •Source Port 0, SYN and FIN Set
- •Source Port 65535 and SYN FIN Set
- •DNS Zone Followed by 0, SYN FIN Targeting NFS
- •Scans to Apply Exploits
- •mscan
- •Son of mscan
- •Access Builder?
- •Single Exploit, Portmap
- •rexec
- •Targeting SGI Systems?
- •Discard
- •Weird Web Scans
- •IP-Proto-191
- •Summary
- •Appendix B. Denial of Service
- •Brute-Force Denial-of-Service Traces
- •Smurf
- •Directed Broadcast
- •Echo-Chargen
- •Elegant Kills
- •Teardrop
- •Land Attack
- •We're Doomed
- •nmap
- •Distributed Denial-of-Service Attacks
- •Intro to DDoS
- •DDoS Software
- •Trinoo
- •Stacheldraht
- •Summary
- •Appendix C. Detection of Intelligence Gathering
- •Network and Host Mapping
- •Host Scan Using UDP Echo Requests
- •Netmask-Based Broadcasts
- •Port Scan
- •Scanning for a Particular Port
- •Complex Script, Possible Compromise
- •"Random" Port Scan
- •Database Correlation Report
- •SNMP/ICMP
- •FTP Bounce
- •NetBIOS-Specific Traces
- •A Visit from a Web Server
- •Null Session
- •Stealth Attacks
- •Explicit Stealth Mapping Techniques
- •FIN Scan
- •Inverse Mapping
- •Answers to Domain Queries
- •Answers to Domain Queries, Part 2
- •Fragments, Just Fragments
- •Measuring Response Time
- •Echo Requests
- •Actual DNS Queries
- •Probe on UDP Port 33434
- •3DNS to TCP Port 53
- •Worms as Information Gatherers
- •Pretty Park Worm
- •RingZero
- •Summary
and then it can be used to capture the communication stream. Most of the sniffers deployed by hackers to collect user IDs and passwords are pull-based systems. They collect data until it is retrieved.
Figure 16.5. Push or pull?
Analyst Console
So, you have determined where to place your sensors and have selected a push, pull, or hybrid paradigm to acquire the EOI information. Now you can finally get to work. The intrusion-detection analyst does her work at the analyst console. If an election was won with the mantra, "It's the economy, stupid," someone had better tell the intrusion-detection vendors that, "It's the console, stupid." An organization typically looks for the following factors when shopping for an IDS:
● Real-time
● Automated response capability
● Detects everything (no false negatives)
● Runs on Windows XP/UNIX/Commodore 64 (whatever the organization uses)
That gets the box in the door, but will it stay turned on? I have visited several sites that deployed commercial intrusion-detection systems very early in the game, and although those systems are still connected to the network, the console has a thin layer of dust on its keyboard. After the organization has been using the system for several months, the feature set it actually values tends to be the following:
● Faster console
● Better false positive management
● Display filters
● Mark events that have already been analyzed
● Drill down
● Correlation
● Better reporting
Most major commercial IDS consoles were so bad that the Department of Defense funded a number of alternate designs. Several of these are now hitting the market as products in the Enterprise Security Console market. Most organizations can't afford to develop alternative interfaces, so if you are in the market for an IDS, this list might help you select one you can
actually use. The following sections explore the console factors in greater detail.
Faster Console
The human mind is a tragic thing to waste, but that is exactly what happens when we put trained intrusion analysts' minds in a wait state. Here is what happens: the analyst has a detect, he starts to gather more information, he waits for the window to come up, he waits some more, and suddenly he can't remember what he was doing.
I was working with the sales engineer of an IDS company recently and tried to point out that the interface was very slow. His answer, of course, was to buy a faster computer. (This was a dual 1.2GHz Pentium 4 with a gigabyte of RAM, which was still fairly current for January 2002.) One simple technique for improving console performance is for the system to always query the information for any high-priority attack and have it canned and ready for the moment the analyst clicks on it. This way, the computer can wait for the analyst, rather than the other way
around.
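The pre-query idea can be sketched as a small cache. This is an illustration, not any vendor's implementation; the `fetch_details` callback is hypothetical, standing in for the expensive database lookup behind the console.

```python
# Sketch: run the expensive lookup for every high-priority detect ahead
# of time, so the details are canned and ready when the analyst clicks.
# fetch_details is a hypothetical callback standing in for a DB query.

class EventCache:
    def __init__(self, fetch_details):
        self._fetch = fetch_details   # the expensive query
        self._cache = {}

    def prefetch(self, events):
        # Pre-run the query for high-priority events only.
        for ev in events:
            if ev["priority"] == "high" and ev["id"] not in self._cache:
                self._cache[ev["id"]] = self._fetch(ev["id"])

    def details(self, event_id):
        # Instant when prefetched; falls back to a live query otherwise.
        if event_id not in self._cache:
            self._cache[event_id] = self._fetch(event_id)
        return self._cache[event_id]
```

With this shape, the computer does its waiting during idle time, and the analyst's click is answered from the cache.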
False Positive Management
False positives happen. Sometimes we can't filter them out without incurring false negatives, so we must ask: what can we do to manage them?
The Code Red web attacks serve as a good example. If we write a filter that dampens probes to port 80 (and most of us did), we run the risk of a massive false negative. If we don't use such a filter, we will generate a large number of false positives (false positive in the sense that if we are not running a vulnerable version of IIS, we don't need to be concerned with Code Red). Because Code Red is a Windows problem, we could get part of the way toward handling this with a better filter. If our filter language supports it, we could put basic passive fingerprinting information for Windows into our filter. For instance, a Windows system defaults to a TTL of 128 and to TCP window sizes between 5,000 and 9,000 for Windows NT and between 17,000 and 19,000 for Windows 2000; so if we see a TTL greater than 128 and a window size that is not within spec, perhaps we can afford not to display the detect. We still collect it, but we do not bother the analyst with it. When the analyst selects any event in the potential false positive class, the console should display the normal information that it always does, plus the additional data needed to make the determination.
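The passive-fingerprinting heuristic just described can be sketched as follows. The TTL and window-size values mirror the Windows defaults quoted above; the function name is illustrative, not part of any particular IDS filter language.

```python
# Sketch of the passive-fingerprinting check for Code Red probes.
# Thresholds come from the Windows defaults quoted in the text;
# a real filter would make them tunable.

WINDOWS_TTL_MAX = 128                 # Windows defaults to a TTL of 128
NT_WINDOWS = range(5000, 9001)        # Windows NT TCP window sizes
W2K_WINDOWS = range(17000, 19001)     # Windows 2000 TCP window sizes

def suppress_display(ttl, window):
    """True if a port-80 probe is inconsistent with a Windows source,
    so the console can collect it without bothering the analyst."""
    in_spec = window in NT_WINDOWS or window in W2K_WINDOWS
    return ttl > WINDOWS_TTL_MAX and not in_spec
```

Note that the event is only hidden from the display, never dropped from collection, so the analyst can still drill into it later.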
Responsibility for False Positives
IDS vendors' feet need to be held to the fire for better false positive management. The Snort ruleset is getting better and better about noting in its documentation whether a rule has known false positives and what they are, but this is not good enough. Vendors must be diligent in reducing false positives, because they are the biggest hurdle to successful incident management. Vendors should fix filters that cause too many false positives, make sure that filters prone to them are tunable, and delete filters that are useless because of the false positives they cause. If nothing else, they must carefully document exactly which traffic patterns trigger each filter, so analysts can recognize the false positives it reports.
Display Filters
The false positive management technique just discussed is used on some commercial IDS systems and should be considered a minimum acceptable capability. To reach a goal of detecting as many events of interest as possible, you have to accept some false positives. Display filters are one way to manage these. This is not a new idea; network analysis tools,
such as NAI's Sniffer, have always had both collection and display filters.
Mark as Analyzed
Unless you are a second-level (supervisor, trainer, or regional) intrusion analyst, life is too short to inspect events that have already been manually analyzed. After an analyst has inspected an
event, it should be marked as done. This is not rocket science. After all, the web browsers we all use mark the URLs we have already visited. Ideally, this would be more like the editing functions on modern word processors such as Microsoft Word—the event gets a tag with the date and time it was analyzed and the username of the analyst, and whether it was rejected as
a false positive or accepted and reported.
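A minimal sketch of the mark-as-analyzed tag described above might look like the following; the field names and the two verdict values are illustrative, and events are plain dictionaries here.

```python
# Sketch of a "mark as analyzed" tag: record who inspected the event,
# when, and the verdict, much like a word processor's change tracking.

from datetime import datetime, timezone

def mark_analyzed(event, analyst, verdict):
    """Tag an event with the analyst, a UTC timestamp, and the verdict
    ("false_positive" or "reported")."""
    event["analyzed"] = {
        "by": analyst,
        "at": datetime.now(timezone.utc).isoformat(),
        "verdict": verdict,
    }
    return event

def needs_review(event):
    # First-level analysts skip anything already marked.
    return "analyzed" not in event
```

A second-level analyst reviewing the work would filter for tagged events instead of untagged ones, using the same field.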
Drill Down
We certainly wouldn't want to provide users an interface that intimidates them! When an organization first starts performing intrusion detection, it might be quite happy with a GUI that displays a picture, the name of the attack, the date and time, and the source and destination IPs. The happiness often ends when the organization finds out that it has reported a false positive. At this point, the analyst wants to see the whole enchilada, and it should be available with one mouse click. Drill down is a very powerful approach: analysts get to work with big-picture data, and as soon as they want more detail, they just click. The analyst should not have to leave the interface he is using; that discourages research. Analysts certainly should not have to enter a separate program to get to the data; that is inexcusable.
Drill down is not possible unless the data is collected (and it certainly ought to include the
packet headers). No analyst should have to report a detect he can't verify!
Correlation
Every analyst has seen a detect and scratched his head saying, "Haven't I seen that IP before?" Intrusion analysts at hot sites (sites attacked fairly often) frequently detect and report between 15 and 60 events per day. After a couple of weeks, that is a lot of IP addresses to keep track of manually. It also is not hard for the analysis console to keep a list of sites that have been
reported and color those IP addresses appropriately.
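The console-side bookkeeping really is simple. Here is a minimal sketch, assuming detects arrive with their source address already extracted; a real console would persist this set across restarts.

```python
# Sketch of console-side correlation: remember which source addresses
# have already been reported and flag repeat offenders so the console
# can color them on the display.

class Correlator:
    def __init__(self):
        self._reported = set()

    def seen_before(self, src_ip):
        """True if this source was already reported; records it either way."""
        seen = src_ip in self._reported
        self._reported.add(src_ip)
        return seen
```

This answers the "haven't I seen that IP before?" question automatically, so the analyst's memory is no longer the correlation database.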
Better Reporting
Two kinds of reports make up the bread and butter of the intrusion analyst: event-detection reports and summary reports. Event reports provide low-level detailed information about detects. Summary reports help the analyst to see the trends of attacks over time and the manager to understand where the money is going.
Event-Detection Reports
Event-detection reports are done either event by event or as a daily summary, and are usually sent by electronic mail. The IDS should support flexible addressing and offer PGP encryption of the report. The reports might be sent to groups that specialize in collecting and analyzing this information, such as Incidents.org or SecurityFocus, to the organization's CIRT or FIRST team, or to the organization's security staff. If you are shunning the attacker or plan to take action, another powerful technique is to file the report as a memo to record. For every detect displayed on the console, the analyst should be able to accept the detect with a single mouse selection. The system should then construct a report, which the analyst reviews and annotates before sending.
If you are shopping for an intrusion-detection system or Enterprise Security Console, sit down at the console and see how long it takes you to collect the information needed to report an event and to send it via email (or other format such as XML) to a CIRT or FIRST team. If you can't access raw or supporting data, take your hands off the keyboard and walk away from the system. If it takes more than five to seven minutes and your organization intends to report events, keep shopping. If you can collect the information including raw or supporting data and send it in within two minutes, please send me email telling me about the product so I can get one too.
Weekly/Monthly Summary Reports
Management often wants to stay abreast of intrusion detects directed against the sites for which they are responsible. Event-by-event or even daily reporting might prove too time consuming, however, and doesn't help them see the big picture. Weekly or monthly reports are a solution to this problem. In general, the higher level the manager, the less frequently she should be sent reports.
Host- or Network-Based Intrusion Detection
The more information we can provide the analyst, the better chance she has of solving the difficult problems in intrusion detection. What is the best source of this information, host based or network based? If you read the literature on host-based intrusion-detection products, you might conclude that host based is a better approach. And, of course, if you read the literature of companies that are primarily network based, theirs is the preferred approach. Obviously, you want both capabilities, preferably integrated, for your organization. Perhaps the best way to consider the strengths of the two approaches is to describe the minimum reasonable intrusion-detection capability for a moderately sized organization connected to the Internet, such as
shown in Figure 16.6.
Figure 16.6. A common architecture for a moderately sized organization.
The sensor outside the firewall is positioned to detect attacks that originate from the Internet. DNS, email, and web servers are the target for about a third of all attacks directed against a site. These systems have to be able to interact with Internet systems and can only be partially screened. Because they face high overall risk, they should have host-based intrusion-detection software that reports to the analyst console as well. This shows the need for both capabilities, host and network based, even for smaller organizations. As the size and value of the organization increases, the importance of additional countermeasures increases as well.
This minimum capability does not address the insider threat. Much of the literature for (primarily) host-based solutions stresses the insider attack problem. I keep seeing studies and statistics stating that the majority of intrusions are caused by insiders. This is beginning to change, and most experts now agree that the majority of attacks come from the Internet. Malicious