- •Network Intrusion Detection, Third Edition
- •Table of Contents
- •Copyright
- •About the Authors
- •About the Technical Reviewers
- •Acknowledgments
- •Tell Us What You Think
- •Introduction
- •Chapter 1. IP Concepts
- •Layers
- •Data Flow
- •Packaging (Beyond Paper or Plastic)
- •Bits, Bytes, and Packets
- •Encapsulation Revisited
- •Interpretation of the Layers
- •Addresses
- •Physical Addresses, Media Access Controller Addresses
- •Logical Addresses, IP Addresses
- •Subnet Masks
- •Service Ports
- •IP Protocols
- •Domain Name System
- •Routing: How You Get There from Here
- •Summary
- •Chapter 2. Introduction to TCPdump and TCP
- •TCPdump
- •TCPdump Behavior
- •Filters
- •Binary Collection
- •TCPdump Output
- •Absolute and Relative Sequence Numbers
- •Dumping in Hexadecimal
- •Introduction to TCP
- •Establishing a TCP Connection
- •Server and Client Ports
- •Connection Termination
- •The Graceful Method
- •The Abrupt Method
- •Data Transfer
- •What's the Bottom Line?
- •TCP Gone Awry
- •An ACK Scan
- •A Telnet Scan?
- •TCP Session Hijacking
- •Summary
- •Chapter 3. Fragmentation
- •Theory of Fragmentation
- •All Aboard the Fragment Train
- •The Fragment Dining Car
- •The Fragment Caboose
- •Viewing Fragmentation Using TCPdump
- •Fragmentation and Packet-Filtering Devices
- •The Don't Fragment Flag
- •Malicious Fragmentation
- •TCP Header Fragments
- •Teardrop
- •Summary
- •Chapter 4. ICMP
- •ICMP Theory
- •Why Do You Need ICMP?
- •Where Does ICMP Fit In?
- •Understanding ICMP
- •Summary of ICMP Theory
- •Mapping Techniques
- •Tireless Mapper
- •Efficient Mapper
- •Clever Mapper
- •Cerebral Mapper
- •Summary of Mapping
- •Normal ICMP Activity
- •Host Unreachable
- •Port Unreachable
- •Admin Prohibited
- •Need to Frag
- •Time Exceeded In-Transit
- •Embedded Information in ICMP Error Messages
- •Summary of Normal ICMP
- •Malicious ICMP Activity
- •Smurf Attack
- •Tribe Flood Network
- •WinFreeze
- •Loki
- •Unsolicited ICMP Echo Replies
- •Theory 1: Spoofing
- •Theory 2: TFN
- •Theory 3: Loki
- •Summary of Malicious ICMP Traffic
- •To Block or Not to Block
- •Unrequited ICMP Echo Requests
- •Kiss traceroute Goodbye
- •Silence of the LANs
- •Broken Path MTU Discovery
- •Summary
- •Chapter 5. Stimulus and Response
- •The Expected
- •Request for Comments
- •TCP Stimulus-Response
- •Destination Host Listens on Requested Port
- •Destination Host Not Listening on Requested Port
- •Destination Host Doesn't Exist
- •Destination Port Blocked
- •Destination Port Blocked, Router Doesn't Respond
- •UDP Stimulus-Response
- •Destination Host Listening on Requested Port
- •Destination Host Not Listening on Requested Port
- •Windows tracert
- •TCPdump of tracert
- •Protocol Benders
- •Active FTP
- •Passive FTP
- •UNIX Traceroute
- •Summary of Expected Behavior and Protocol Benders
- •Abnormal Stimuli
- •Evasion Stimulus, Lack of Response
- •Evil Stimulus, Fatal Response
- •No Stimulus, All Response
- •Unconventional Stimulus, Operating System Identifying Response
- •Bogus "Reserved" TCP Flags
- •Anomalous TCP Flag Combinations
- •No TCP Flags
- •Summary of Abnormal Stimuli
- •Summary
- •Chapter 6. DNS
- •Back to Basics: DNS Theory
- •The Structure of DNS
- •Steppin' Out on the Internet
- •DNS Resolution Process
- •TCPdump Output of Resolution
- •Strange TCPdump Notation
- •Caching: Been There, Done That
- •Reverse Lookups
- •Master and Slave Name Servers
- •Zone Transfers
- •Summary of DNS Theory
- •Using DNS for Reconnaissance
- •The nslookup Command
- •Name That Name Server
- •HINFO: Snooping for Details
- •List Zone Map Information
- •Tainting DNS Responses
- •A Weak Link
- •Cache Poisoning
- •Summary
- •Part II: Traffic Analysis
- •Chapter 7. Packet Dissection Using TCPdump
- •Why Learn to Do Packet Dissection?
- •Sidestep DNS Queries
- •Normal Query
- •Evasive Query
- •Introduction to Packet Dissection Using TCPdump
- •Where Does the IP Stop and the Embedded Protocol Begin?
- •Other Length Fields
- •The IP Datagram Length
- •Increasing the Snaplen
- •Dissecting the Whole Packet
- •Freeware Tools for Packet Dissection
- •Ethereal
- •tcpshow
- •Summary
- •Chapter 8. Examining IP Header Fields
- •Insertion and Evasion Attacks
- •Insertion Attacks
- •Evasion Attacks
- •IP Header Fields
- •IP Version Number
- •Protocol Number
- •The Don't Fragment (DF) Flag
- •The More Fragments (MF) Flag
- •Mapping Using Incomplete Fragments
- •IP Numbers
- •IP Identification Number
- •Time to Live (TTL)
- •Looking at the IP ID and TTL Values Together to Discover Spoofing
- •IP Checksums
- •Summary
- •Chapter 9. Examining Embedded Protocol Header Fields
- •Ports
- •TCP Checksums
- •TCP Sequence Numbers
- •Acknowledgement Numbers
- •TCP Flags
- •TCP Corruption
- •ECN Flag Bits
- •Operating System Fingerprinting
- •Retransmissions
- •Using Retransmissions Against a Hostile Host—LaBrea Tarpit Version 1
- •TCP Window Size
- •LaBrea Version 2
- •Ports
- •UDP Port Scanning
- •UDP Length Field
- •ICMP
- •Type and Code
- •Identification and Sequence Numbers
- •Misuse of ICMP Identification and Sequence Numbers
- •Summary
- •Chapter 10. Real-World Analysis
- •You've Been Hacked!
- •Netbus Scan
- •How Slow Can You Go?
- •RingZero Worm
- •Summary
- •Chapter 11. Mystery Traffic
- •The Event in a Nutshell
- •The Traffic
- •DDoS or Scan
- •Source Hosts
- •Destination Hosts
- •Scanning Rates
- •Fingerprinting Participant Hosts
- •Arriving TTL Values
- •TCP Window Size
- •TCP Options
- •TCP Retries
- •Summary
- •Part III: Filters/Rules for Network Monitoring
- •Chapter 12. Writing TCPdump Filters
- •The Mechanics of Writing TCPdump Filters
- •Bit Masking
- •Preserving and Discarding Individual Bits
- •Creating the Mask
- •Putting It All Together
- •TCPdump IP Filters
- •Detecting Traffic to the Broadcast Addresses
- •Detecting Fragmentation
- •TCPdump UDP Filters
- •TCPdump TCP Filters
- •Filters for Examining TCP Flags
- •Detecting Data on SYN Connections
- •Summary
- •Chapter 13. Introduction to Snort and Snort Rules
- •An Overview of Running Snort
- •Snort Rules
- •Snort Rule Anatomy
- •Rule Header Fields
- •The Action Field
- •The Protocol Field
- •The Source and Destination IP Address Fields
- •The Source and Destination Port Field
- •Direction Indicator
- •Summary
- •Chapter 14. Snort Rules - Part II
- •Format of Snort Options
- •Rule Options
- •Msg Option
- •Logto Option
- •Ttl Option
- •Id Option
- •Dsize Option
- •Sequence Option
- •Acknowledgement Option
- •Itype and Icode Options
- •Flags Option
- •Content Option
- •Offset Option
- •Depth Option
- •Nocase Option
- •Regex Option
- •Session Option
- •Resp Option
- •Tag Option
- •Putting It All Together
- •Summary
- •Part IV: Intrusion Infrastructure
- •Chapter 15. Mitnick Attack
- •Exploiting TCP
- •IP Weaknesses
- •SYN Flooding
- •Covering His Tracks
- •Identifying Trust Relationships
- •Examining Network Traces
- •Setting Up the System Compromise?
- •Detecting the Mitnick Attack
- •Trust Relationship
- •Port Scan
- •Host Scan
- •Connections to Dangerous Ports
- •TCP Wrappers
- •Tripwire
- •Preventing the Mitnick Attack
- •Summary
- •Chapter 16. Architectural Issues
- •Events of Interest
- •Limits to Observation
- •Human Factors Limit Detects
- •Limitations Caused by the Analyst
- •Limitations Caused by the CIRTs
- •Severity
- •Criticality
- •Lethality
- •Countermeasures
- •Calculating Severity
- •Scanning for Trojans
- •Analysis
- •Severity
- •Host Scan Against FTP
- •Analysis
- •Severity
- •Sensor Placement
- •Outside Firewall
- •Sensors Inside Firewall
- •Both Inside and Outside Firewall
- •Analyst Console
- •Faster Console
- •False Positive Management
- •Display Filters
- •Mark as Analyzed
- •Drill Down
- •Correlation
- •Better Reporting
- •Event-Detection Reports
- •Weekly/Monthly Summary Reports
- •Summary
- •Chapter 17. Organizational Issues
- •Organizational Security Model
- •Security Policy
- •Industry Practice for Due Care
- •Security Infrastructure
- •Implementing Priority Countermeasures
- •Periodic Reviews
- •Implementing Incident Handling
- •Defining Risk
- •Risk
- •Accepting the Risk
- •Trojan Version
- •Malicious Connections
- •Mitigating or Reducing the Risk
- •Network Attack
- •Snatch and Run
- •Transferring the Risk
- •Defining the Threat
- •Recognition of Uncertainty
- •Risk Management Is Dollar Driven
- •How Risky Is a Risk?
- •Quantitative Risk Assessment
- •Qualitative Risk Assessments
- •Why They Don't Work
- •Summary
- •Chapter 18. Automated and Manual Response
- •Automated Response
- •Architectural Issues
- •Response at the Internet Connection
- •Internal Firewalls
- •Host-Based Defenses
- •Throttling
- •Drop Connection
- •Shun
- •Proactive Shunning
- •Islanding
- •Reset
- •Honeypot
- •Proxy System
- •Empty System
- •Honeypot Summary
- •Manual Response
- •Containment
- •Freeze the Scene
- •Sample Fax Form
- •On-Site Containment
- •Site Survey
- •System Containment
- •Hot Search
- •Eradication
- •Recovery
- •Lessons Learned
- •Summary
- •Chapter 19. Business Case for Intrusion Detection
- •Part One: Management Issues
- •Bang for the Buck
- •The Expenditure Is Finite
- •Technology Used to Destabilize
- •Network Impacts
- •IDS Behavioral Modification
- •The Policy
- •Part of a Larger Strategy
- •Part Two: Threats and Vulnerabilities
- •Threat Assessment and Analysis
- •Threat Vectors
- •Threat Determination
- •Asset Identification
- •Valuation
- •Vulnerability Analysis
- •Risk Evaluation
- •Part Three: Tradeoffs and Recommended Solution
- •Identify What Is in Place
- •Identify Your Recommendations
- •Identify Options for Countermeasures
- •Cost-Benefit Analysis
- •Follow-On Steps
- •Repeat the Executive Summary
- •Summary
- •Chapter 20. Future Directions
- •Increasing Threat
- •Improved Targeting
- •How the Threat Will Be Manifested
- •Defending Against the Threat
- •Skills Versus Tools
- •Analyst's Skill Set
- •Improved Tools
- •Defense in Depth
- •Emerging Techniques
- •Virus Industry Revisited
- •Smart Auditors
- •Summary
- •Part V: Appendixes
- •Appendix A. Exploits and Scans to Apply Exploits
- •False Positives
- •All Response, No Stimulus
- •Scan or Response?
- •SYN Floods
- •Valid SYN Flood
- •False Positive SYN Flood
- •Back Orifice?
- •IMAP Exploits
- •10143 Signature Source Port IMAP
- •111 Signature IMAP
- •Source Port 0, SYN and FIN Set
- •Source Port 65535 and SYN FIN Set
- •DNS Zone Followed by 0, SYN FIN Targeting NFS
- •Scans to Apply Exploits
- •mscan
- •Son of mscan
- •Access Builder?
- •Single Exploit, Portmap
- •rexec
- •Targeting SGI Systems?
- •Discard
- •Weird Web Scans
- •IP-Proto-191
- •Summary
- •Appendix B. Denial of Service
- •Brute-Force Denial-of-Service Traces
- •Smurf
- •Directed Broadcast
- •Echo-Chargen
- •Elegant Kills
- •Teardrop
- •Land Attack
- •We're Doomed
- •nmap
- •Distributed Denial-of-Service Attacks
- •Intro to DDoS
- •DDoS Software
- •Trinoo
- •Stacheldraht
- •Summary
- •Appendix C. Detection of Intelligence Gathering
- •Network and Host Mapping
- •Host Scan Using UDP Echo Requests
- •Netmask-Based Broadcasts
- •Port Scan
- •Scanning for a Particular Port
- •Complex Script, Possible Compromise
- •"Random" Port Scan
- •Database Correlation Report
- •SNMP/ICMP
- •FTP Bounce
- •NetBIOS-Specific Traces
- •A Visit from a Web Server
- •Null Session
- •Stealth Attacks
- •Explicit Stealth Mapping Techniques
- •FIN Scan
- •Inverse Mapping
- •Answers to Domain Queries
- •Answers to Domain Queries, Part 2
- •Fragments, Just Fragments
- •Measuring Response Time
- •Echo Requests
- •Actual DNS Queries
- •Probe on UDP Port 33434
- •3DNS to TCP Port 53
- •Worms as Information Gatherers
- •Pretty Park Worm
- •RingZero
- •Summary
Ask him to help you take good notes. One of your primary goals is to make a backup of the system if at all possible.
Experienced handlers often carry their own trusted binary applications, and this includes backup programs. If you do not possess your own forensic-type backup and seizure tools, such as SafeBack, it might be wiser to copy all history files and log files to removable media before taking any other action. Incident handlers are also supposed to write the contents of memory to removable media; that is easily said, but it has proven hard to do in practice. The best backups are bit-by-bit backups. If this option is not possible, the next questions to answer are how critical the system is and how time pressing the incident is. If criminal activity is suspected and there is reason to believe that this actually is an incident, it might be best to do as follows:
●Power down the system
●Pop the drive
●Seal it in an envelope with a copy of your notes and the notes from the person who called in the incident
●Store the drive in an evidence safe or locked container with limited access
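The bit-by-bit backup mentioned above can be sketched with standard `dd` and a pair of hashes. This is only an illustration of the idea, run against a file-backed stand-in for the drive; a real seizure would read the actual suspect device (for example, `/dev/sda`) and would normally use a dedicated forensic tool such as SafeBack. The paths below are assumptions for the demonstration.

```shell
# Sketch: bit-for-bit imaging with dd, plus hashes to prove fidelity.
# A real job would read the suspect device; here a file-backed
# stand-in is used so the steps can be run safely.
DISK=$(mktemp)     # stand-in for the suspect drive
IMAGE=$(mktemp)    # destination for the forensic image
head -c 1048576 /dev/urandom > "$DISK"    # fake 1 MB "drive"

# Copy every block; keep going past read errors, padding with zeros
dd if="$DISK" of="$IMAGE" bs=64k conv=noerror,sync status=none

# Hash source and copy so a faithful duplicate can be demonstrated later
SRC_SUM=$(md5sum < "$DISK")
IMG_SUM=$(md5sum < "$IMAGE")
[ "$SRC_SUM" = "$IMG_SUM" ] && echo "image verified"
```

In practice, you would copy the history and log files to the same media and record both hashes in your handwritten notes, so that the copy can later be shown to match the original.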
Hot Search
If it is a critical system and criminal prosecution is not a priority, you might have to search the system hot to find the problem. This is where a tool such as EnCase or The Coroner's Toolkit (TCT) can really come in handy. EnCase covers both old Windows (FAT) and more modern Windows (NTFS) file systems, while TCT is aimed at UNIX file systems. Before running either tool, I like to run Tripwire on both the search drive and my host operating system. That way, if something goes horribly wrong, I have an idea where to look for the problem. There used to be a forensics tool called Expert Witness, but it died in a lawsuit. Once, I was doing a hot search of a drive that was infected with a virus, and the next thing I knew, my own system was infected. Of course, the forensics tool sales representative is going to tell you this could never happen with his tool, and he is probably right, but why take the chance?
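The core trick behind Tripwire, a hash baseline taken before the risky operation and re-checked afterward, can be sketched in a few lines of shell. The file names here are hypothetical stand-ins; a real deployment would use Tripwire's own signed database rather than a plain checksum file.

```shell
# Sketch of the file-integrity idea behind Tripwire: hash everything
# before the hot search, then re-check afterward to spot changes.
WORK=$(mktemp -d)                 # stand-in for the analysis host's tools
echo "clean" > "$WORK/tool.cfg"

BASELINE=$(mktemp)
sha256sum "$WORK"/* > "$BASELINE"     # baseline taken before the search

echo "tampered" > "$WORK/tool.cfg"    # simulate an unwanted change

# Re-check; any modified file is reported as FAILED
sha256sum -c "$BASELINE" 2>/dev/null || echo "integrity check failed"
```

If the re-check comes back clean after the hot search, you have some evidence the tools and host were not tampered with; if a file fails, you know exactly where to start looking.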
In any case, your goal is to determine whether the evidence on the system reasonably supports the reported indications. This is known as validating the incident, and it is not limited to the information on the suspect hard drive. A good team doesn't leave a handler all alone; hopefully, someone is working the intrusion-detection system's records and other sources of data looking for information about the affected system while you are focused on the suspect system's hard drive.
Eradication
Sometimes, it is possible to examine the situation and remove the problem entirely; other times, it is not. With eradication, we need to pause for an upwardly mobile career observation about incident handling. If folks in an organization have suffered one compromised computer or six, they are usually pretty scared. If your team comes in and you are courteous and professional and get the job done, they really appreciate it. When they see you in halls and staff meetings, they nod and kind of say thanks with their eyes; it is a good thing. You are sort of a hero.
I used to have this really cool job in the U.S. Navy where I flew around in helicopters waiting for jets to go smacking into the water. We would hover over the ejected pilots, I would jump out, swim up to them, and hook them to a cable hoist, and we would pull them out of the ocean. You want to know what they always said when I swam up to them? Whenever I ran into them on the ship after the rescue, there was that same nod, that same thanks with their eyes.
However: If you show up and do your work and the problem comes back the next day, you are not a hero; you are an incompetent idiot. It is critical that you succeed in eradication, even if you have to destroy the operating system to do it. Repeat after me: "Nuke 'em from high orbit."
See, that isn't hard to say. Or, "Total eradication is too good for 'em."
I have tried to inject a little humor, because we must deal with a serious issue. As an incident handler, you need to be pre-authorized to contain and destroy to save your organization. Please take the preceding sentence very seriously. The incident-handling team needs to have a very senior executive in the organization as its sponsor or champion. The handler must be able to look that very young, very successful program manager droid, who has axed many a promising technical person on a whim, in the eye and say, "Yes, I know how important this system is. We will save as much of the data as your people have properly backed up, but the operating system is toast." Many times, the only way you can be certain the problem has been eliminated is to scrub that puppy to bare metal.
Oh yeah, when I swam up to those Navy pilots, they always wanted to know, "What happened?" They asked their questions in such a way that it was clear they wanted to know exactly one thing: Was it their fault? Might I suggest that when you handle an incident, the folks you come in contact with will be very concerned that the incident was their fault. Why our culture is so bent on blaming the victim is beyond me! Be gentle and comforting when you speak. Don't come to conclusions early. Many times, running an incident to ground is like peeling an onion a layer at a time. Even if you know in your very bones it is their fault, be kind and supportive during the incident. The time to deal with what happened comes soon enough!
Recovery
The purpose of the incident-handling process is to recover and reconstitute capability. Throughout the process, we try to save as much data as we can, even if the system hadn't been backed up in a long time. Often, we can mount a potentially corrupted disk as a data disk and remove the files we need from it. This is another good application for Tripwire. Before mounting a suspect disk on your field laptop, make sure you have a very current Tripwire running so that you can be certain malicious code doesn't get on your computer.
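When you do mount the suspect disk as a data disk, mount it so that nothing on it can execute or escalate privileges. The following dry-run sketch only prints the commands it would issue; the device path and mount point are hypothetical, and mount option names may vary slightly by platform.

```shell
# Dry-run sketch: print the commands for mounting a suspect disk as
# data only. DEV and MNT are hypothetical; nothing is executed here.
DEV=/dev/sdb1        # suspect drive, attached to the field laptop
MNT=/mnt/suspect

run() { echo "$*"; }     # print instead of execute (dry run)

run mkdir -p "$MNT"
# ro: read-only; noexec: binaries on the disk cannot run;
# nosuid: setuid bits are ignored; nodev: device nodes are inert
run mount -o ro,noexec,nosuid,nodev "$DEV" "$MNT"
```

Mounting read-only also preserves the disk's state, which matters if questions about the incident come up later.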
Emergency medical technician (EMT) trainers use scenarios to drive home the academic points taught. One of the important lessons to teach EMTs is not to become a victim, because this makes the rescue even more problematic. If you see someone prostrate on the ground draped over a cable, for instance, don't run up and touch him. What if the cable is the reason he is lying there dead? What will happen to you when you grab someone connected to a high-voltage cable? The point is to use situational awareness and take a few seconds to think about the circumstances that caused the computer to be compromised. In the exact same way that failing to eradicate the problem makes the incident handler look stupid, we do not want to put the system back in business with the same vulnerability that caused it to be compromised. This is an important point, because we will probably alter the system in some way. In fact, many times, the system owners will want to use this as an unexpected opportunity to upgrade the system or freshen the patches. I find it amusing when the same manager who looked me in the eye during the containment phase and said things like, "Do you know how critical this system is? You can't shut it down," suggests that we upgrade the operating system before returning it to service.
It is all well and good to freshen the operating system. However, what happens when an outsider makes a change to one of our systems? I oversaw the installation of a firewall once at a facility that didn't have one. For the next five years, every time someone couldn't connect to something, or their software didn't work right, I would get phone calls and/or email. "Is it the firewall?" This is a career risk vector to the incident handler. Remember our very young, very successful, hell-bent-on-rising-to-the-top executive? If anything goes wrong, he might use that to deflect attention from the fact that a system in his group was compromised. What countermeasures can we take?
During the incident-handling process, I like to keep the system owners informed. As long as they are in danger, they are very interested. As soon as they can see they are going to make it, they usually turn their attention to something else. It is imperative, early in the cycle while the adrenaline is still flowing, to pull them aside and say something like this:
Sir, our primary objective is to get you back in business with as little downtime and as few problems as possible. I am sure you understand that because the system was compromised, we will have to make at least some minor changes to the architecture, or it is likely to happen again. To ensure that the changes we make do not impact your operations, we need a copy of the system's documentation, especially design documents, program maintenance manuals, and most importantly, your system test plan. We will be glad to work with your folks to execute your system test plan before we close the incident.
Now, you and I both know that maybe five computers on the planet Earth have an up-to-date, comprehensive test plan. There is no way on God's green earth that our slick young manager is going to be able to produce it. Time to invoke the power of the pen. We produce our preprinted incident closure form. It has blocks on it for the system administrator, primary customer, and system owner to state that they have tested the recovered system and that it is fully operational. So you say something like this:
No test plan? Ummm, well, sir, I can't close an incident out unless the system has been certified as fully operational. Tell you what: if you will get your people to run the tests they use to certify your systems, document those tests, and sign the form, tonight, we can get this incident closed. I am willing to stay as long as it takes because, as you know, the CIO's goal for incident handling is for downtime to never exceed one day, and we can't clear this system for operation until it has been tested.
I invested a couple of paragraphs making this public safety announcement. It is really a bummer when a promising young incident handler gets blamed for system problems after pouring her heart out to save a compromised system. Now that you know the risk, practice safe incident-handling procedures.
After a system has been compromised, it might become a hacker trophy. The attacker might post his exploit in some way. I have seen several instances in which, after a system is compromised, recovered, and returned to service, the attackers come out of the woodwork to whack it again. Use your intrusion-detection capability to monitor the system closely. It might be possible to move the system to a new name and address and install a honeypot for a few weeks.
Lessons Learned
At first, the incident was exciting and everybody on the planet wanted to get involved. There was the hunt for the culprit, sifting through clues to find the problem, and reconstructing the chain of events that led to the incident. Then comes the slow process of recovery and testing. This is less fun and folks are leaving, saying things like, "I guess you guys can take it from here." Finally, we are done. The problem is contained, eradicated, and the system is recovered. We are all drained and possibly a bit punchy. The last thing in the world you want to hear is, "the job ain't finished until the paperwork is done."
Two disciplines distinguish the professional from the wannabe: the pro takes complete and accurate notes every step of the way and does a good follow-up. Both of these are disciplines; they do not come naturally. Every time you handle an incident, mistakes will occur. Mistakes also had to occur, or the incident could never have happened. But that is a touchy subject, so tread lightly. Things could always have been done better. It is okay to make mistakes; just make new ones.
"Lessons learned" is the most important part of the process when approached with the correct mindset. It should never be a blame thing, rather an opportunity for process improvement. Here is the approach that has worked for me.
The incident handlers are responsible for documenting the draft of the incident report. As soon as they finish it, typos and all, they send a copy to each person listed as a witness, primary customer, and system owner. Anyone can make any comment he wants, and his comments will be part of the permanent record. The handlers make the call whether to modify the report. Within a week of the incident, a mandatory meeting should be held. Book the room for exactly one hour and start on time. The only order of business at the meeting is to review the final incident report's recommendations for process changes. One-hour meetings are not good places for the consensus approach. Just tally the votes for each item. The final report goes to the