A covert storage channel conveys information by writing data to a common storage area where another process can read it. Be wary of any process that writes to an area of memory that another process can read.

Both types of covert channels rely on the use of communication techniques to exchange information with otherwise unauthorized subjects. Because a covert channel is, by its nature, unusual and outside the normal data transfer environment, detecting it can be difficult. The best defense is to implement auditing and analyze log files for any covert channel activity.

The lowest level of security that addresses covert channels is B2 (F4+E4 for ITSEC, EAL5 for CC). All levels at or above level B2 must contain controls that detect and prohibit covert channels.

Attacks Based on Design or Coding Flaws and Security Issues

Certain attacks may result from poor design techniques, questionable implementation practices and procedures, or poor or inadequate testing. Other attacks result from deliberate design decisions: special points of entry, built into code during development to circumvent access controls, login, or other security checks, are sometimes not removed when that code is put into production. For what we hope are obvious reasons, such points of entry are properly called back doors because they avoid security measures by design (they're covered in a later section of this chapter, entitled "Maintenance Hooks and Privileged Programs"). Extensive testing and code review are required to uncover such covert means of access, which are easy to remove during the final phases of development but can be incredibly difficult to detect during testing or maintenance.

Although functionality testing is commonplace for commercial code and applications, separate testing for security issues has gained attention and credibility only in the past few years, courtesy of widely publicized virus and worm attacks and occasional defacements of or disruptions to widely used public sites online. In the sections that follow, we cover common sources of attack or security vulnerability that can be attributed to failures in design, implementation, pre-release code cleanup, or out-and-out coding mistakes. Although such flaws are avoidable, finding and fixing them requires rigorous, security-conscious design from the beginning of a development project and extra time and effort spent in testing and analysis. While this helps to explain the often lamentable state of software security, it does not excuse it!

Initialization and Failure States

When an unprepared system crashes and subsequently recovers, two opportunities to compromise its security controls may arise during that process. Many systems unload security controls as part of their shutdown procedures. Trusted recovery ensures that all controls remain intact in the event of a crash. During a trusted recovery, the system ensures that there are no opportunities for access to occur when security controls are disabled. Even the recovery phase runs with all controls intact.


For example, suppose a system crashes while a database transaction is being written to disk for a database classified as top secret. An unprotected system might allow an unauthorized user to access that temporary data before it gets written to disk. A system that supports trusted recovery ensures that no data confidentiality violations occur, even during the crash. This process requires careful planning and detailed procedures for handling system failures. Although automated recovery procedures may make up a portion of the entire recovery, manual intervention may still be required. Obviously, if such manual action is needed, appropriate identification and authentication for personnel performing recovery is likewise essential.

Input and Parameter Checking

One of the most notorious security violations is called a buffer overflow. This violation occurs when programmers fail to validate input data sufficiently, particularly when they do not impose a limit on the amount of data their software will accept as input. Because such data is usually stored in an input buffer, when the normal maximum size of the buffer is exceeded, the extra data is called overflow. Thus, the type of attack that results when someone attempts to supply malicious instructions or code as part of program input is called a buffer overflow. Unfortunately, in many systems such overflow data is often executed directly by the system under attack at a high level of privilege or at whatever level of privilege attaches to the process accepting such input. For nearly all types of operating systems, including Windows, Unix, Linux, and others, buffer overflows expose some of the most glaring and profound opportunities for compromise and attack of any kind of known security vulnerability.

The party responsible for a buffer overflow vulnerability is always the programmer who wrote the offending code. Due diligence from programmers can eradicate buffer overflows completely, but only if programmers check all input and parameters before storing them in any data structure (and limit how much data can be proffered as input). Proper data validation is the only way to do away with buffer overflows. Otherwise, discovery of buffer overflows leads to a familiar pattern of critical security updates that must be applied to affected systems to close the point of attack.
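To make the pattern concrete, here is a minimal C sketch (the function and buffer names are ours, purely for illustration) contrasting an unchecked copy with one that validates input length before storing anything:

```c
#include <stdio.h>
#include <string.h>

/* Vulnerable: copies attacker-controlled input into a fixed-size buffer
 * without checking its length. Input longer than 15 characters
 * overwrites adjacent stack memory. */
void unsafe_copy(const char *input) {
    char buf[16];
    strcpy(buf, input);               /* no bounds check: classic overflow */
    printf("%s\n", buf);
}

/* Safer: validate the length first and reject anything that cannot fit. */
int safe_copy(const char *input) {
    char buf[16];
    if (strlen(input) >= sizeof(buf))
        return -1;                    /* refuse oversized input outright */
    strcpy(buf, input);               /* now provably within bounds */
    printf("%s\n", buf);
    return 0;
}

int main(void) {
    safe_copy("short");               /* fits: copied and printed */
    if (safe_copy("far too long for a 16-byte buffer") == -1)
        puts("oversized input rejected");
    return 0;
}
```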

Checking Code for Buffer Overflows

In early 2002, Bill Gates acted in his traditional role as the archetypal Microsoft spokesperson when he announced something he called the “Trustworthy Computing Initiative,” a series of design philosophy changes intended to beef up the often questionable standing of Microsoft’s operating systems and applications when viewed from a security perspective. As discussion on this subject continued through 2002 and 2003, the topic of buffer overflows occurred repeatedly (more often, in fact, than Microsoft Security Bulletins reported security flaws related to this kind of problem, which is among the most serious yet most frequently reported types of programming errors with security implications). As is the case for many other development organizations and also for the builders of software development environments (the software tools that developers use to create other software), increased awareness of buffer overflow exploits has caused changes at many stages during the development process:


Designers must specify bounds for input data or state acceptable input values and set hard limits on how much data will be accepted, parsed, and handled when input is solicited.

Developers must follow such limitations when building code that solicits, accepts, and handles input.

Testers must check to make sure that buffer overflows can’t occur and attempt to circumvent or bypass security settings when testing input handling code.
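As a hedged illustration of the tester's role, the following C sketch probes a hypothetical bounded input handler (modeled on the safe copy shown earlier) with deliberately oversized input to confirm that it refuses anything that cannot fit:

```c
#include <assert.h>
#include <string.h>

/* The input handler under test: rejects anything that won't fit.
 * (Hypothetical, mirroring the safe_copy sketch shown earlier.) */
static int accept_input(char *dst, size_t dstlen, const char *src) {
    if (strlen(src) >= dstlen)
        return -1;
    strcpy(dst, src);
    return 0;
}

int main(void) {
    char buf[16];
    char oversized[1024];
    memset(oversized, 'A', sizeof(oversized) - 1);
    oversized[sizeof(oversized) - 1] = '\0';

    assert(accept_input(buf, sizeof(buf), "fits fine") == 0);  /* in bounds */
    assert(accept_input(buf, sizeof(buf), oversized) == -1);   /* must reject */
    return 0;
}
```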

In his book Secrets & Lies, noted information security expert Bruce Schneier makes a great case that security testing is in fact quite different from standard testing activities, like unit testing, module testing, acceptance testing, and quality assurance checks, that software companies have routinely performed as part of the development process for years. What's not yet clear at Microsoft (and at other development companies as well, to be as fair to the colossus of Redmond as possible) is whether this change in design and test philosophy equates to the kind of rigor necessary to foil all buffer overflows (some of the most serious security holes Microsoft reported in early 2004 clearly involve "buffer overruns").

Maintenance Hooks and Privileged Programs

Maintenance hooks are entry points into a system that are known by only the developer of the system. Such entry points are also called back doors. Although the existence of maintenance hooks is a clear violation of security policy, they still pop up in many systems. The original purpose of back doors was to provide guaranteed access to the system for maintenance reasons or if regular access was inadvertently disabled. The problem is that this type of access bypasses all security controls and provides free access to anyone who knows that the back doors exist. It is imperative that you explicitly prohibit such entry points and monitor your audit logs to uncover any activity that may indicate unauthorized administrator access.
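The following C fragment is a hypothetical illustration (the magic string and function names are invented for this sketch) of how innocuous such a hook can look in source code, and why code review must hunt for it:

```c
#include <stdio.h>
#include <string.h>

/* Stand-in for the legitimate credential check (illustrative only). */
static int check_credentials(const char *user, const char *password) {
    return strcmp(user, "alice") == 0 && strcmp(password, "s3cret") == 0;
}

/* A hypothetical maintenance hook: a hardcoded "magic" password,
 * left in during development, bypasses the real check entirely.
 * Anyone who learns the string gains access as any user. */
int authenticate(const char *user, const char *password) {
    if (strcmp(password, "debug_override") == 0)
        return 1;                                 /* back door */
    return check_credentials(user, password);     /* intended path */
}

int main(void) {
    printf("%d\n", authenticate("mallory", "debug_override"));  /* 1: hook */
    printf("%d\n", authenticate("alice", "s3cret"));            /* 1: real */
    printf("%d\n", authenticate("mallory", "guess"));           /* 0 */
    return 0;
}
```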

Another common system vulnerability is the practice of executing a program whose security level is elevated during execution. Such programs must be carefully written and tested so they do not allow any exit and/or entry points that would leave a subject with a higher security rating. Ensure that all programs that operate at a high security level are accessible only to appropriate users and that they are hardened against misuse.
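On Unix-style systems, one widely used hardening pattern for such programs is to perform the privileged step first and then permanently shed elevated rights; the sketch below assumes a setuid executable and is illustrative rather than a complete hardening recipe:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Hardening pattern for a program that starts with elevated rights
 * (e.g., setuid root): do the one privileged step first, then drop
 * privileges permanently before handling any user-supplied input. */
int main(void) {
    /* ... privileged work here, e.g., binding a port below 1024 ... */

    if (setuid(getuid()) != 0) {   /* drop to the real, unprivileged uid */
        perror("setuid");
        exit(EXIT_FAILURE);        /* fail closed if the drop fails */
    }

    /* From here on, the process no longer carries elevated rights. */
    return 0;
}
```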

Incremental Attacks

Some forms of attack occur in slow, gradual increments rather than through obvious or recognizable attempts to compromise system security or integrity. Two such forms of attack are called data diddling and the salami attack. Data diddling occurs when an attacker gains access to a system and makes small, random, or incremental changes to data rather than obviously altering file contents or damaging or deleting entire files. Such changes can be difficult to detect unless files and data are protected by encryption or some kind of integrity check (such as a checksum or message digest) is routinely performed and applied each time a file is read or written. Encrypted file systems, file-level encryption techniques, or some form of file monitoring (which includes integrity checks like those performed by applications like TripWire) usually offer adequate guarantees that no data diddling is underway.
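To illustrate the integrity-check idea, here is a small C program that fingerprints a file so its value can be compared against a stored baseline; it uses a simple FNV-1a hash for brevity, whereas a real monitor such as Tripwire would use a cryptographic digest:

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative only: a 64-bit FNV-1a hash used as a quick file
 * fingerprint. FNV-1a is NOT collision-resistant against an attacker;
 * it merely demonstrates the check-against-baseline pattern. */
static uint64_t fnv1a_file(FILE *fp) {
    uint64_t hash = 0xcbf29ce484222325ULL;   /* FNV offset basis */
    int c;
    while ((c = fgetc(fp)) != EOF) {
        hash ^= (uint64_t)(unsigned char)c;
        hash *= 0x100000001b3ULL;            /* FNV prime */
    }
    return hash;
}

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    FILE *fp = fopen(argv[1], "rb");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    /* Compare this value against a stored baseline; any change in the
     * file, however small, changes the fingerprint. */
    printf("%016llx\n", (unsigned long long)fnv1a_file(fp));
    fclose(fp);
    return 0;
}
```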


The salami attack is more apocryphal, by all published reports. The name refers to a systematic whittling away at assets in accounts or other records with financial value, where very small amounts are deducted from balances regularly and routinely. Metaphorically, the attack may be explained as stealing a very thin slice from a salami each time it's put on the slicing machine for a paying customer. In reality, though no documented examples of such an attack are available, most security experts concede that salami attacks are possible, especially when organizational insiders could be involved. Only by proper separation of duties and proper control over code can organizations completely prevent or eliminate such an attack. Setting financial transaction monitors to track very small transfers of funds or other items of value should help to detect such activity; regular notification to employees that such monitoring occurs should help to discourage attempts at such attacks.
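A monitor of the sort just described could be as simple as the following C sketch; the threshold, alert level, and sample amounts are invented for illustration and are not drawn from any real policy:

```c
#include <stdio.h>

#define TINY_CENTS  5    /* "tiny" means 5 cents or less           */
#define ALERT_AT    50   /* this many tiny credits triggers review */

int main(void) {
    /* amounts (in cents) credited to one account in a review period */
    int credits[] = {2, 3, 1, 2, 2, 4, 1, 3, 2, 1};
    int n = sizeof(credits) / sizeof(credits[0]);
    int tiny = 0;
    for (int i = 0; i < n; i++)
        if (credits[i] <= TINY_CENTS)
            tiny++;                  /* count sub-threshold transfers */
    if (tiny >= ALERT_AT)
        printf("ALERT: %d tiny credits; possible salami activity\n", tiny);
    else
        printf("%d tiny credits observed; below the alert level\n", tiny);
    return 0;
}
```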

Programming

We have already mentioned the biggest flaw in programming. The buffer overflow comes from the programmer failing to check the format and/or the size of input data. There are other potential flaws with programs. Any program that does not handle any exception gracefully is in danger of exiting in an unstable state. It is possible to cleverly crash a program after it has increased its security level to carry out a normal task. If an attacker is successful in crashing the program at the right time, they can attain the higher security level and cause damage to the confidentiality, integrity, and availability of your system.
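One way to reduce that exposure on Unix-style systems is to fail closed: trap fatal faults and shed elevated rights before terminating. The following sketch is illustrative only and assumes a setuid program:

```c
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

/* If the program faults while running at an elevated level, shed
 * privileges and terminate immediately rather than lingering in an
 * unstable, elevated state. setuid() and _exit() are both
 * async-signal-safe. */
static void fail_closed(int sig) {
    (void)sig;
    (void)setuid(getuid());  /* best effort: drop elevated rights */
    _exit(EXIT_FAILURE);     /* terminate without running further code */
}

int main(void) {
    signal(SIGSEGV, fail_closed);   /* bad memory access */
    signal(SIGFPE,  fail_closed);   /* arithmetic fault  */

    /* ... privileged work here ... */
    return 0;
}
```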

All programs that are executed directly or indirectly must be fully tested to comply with your security model. Make sure you have the latest version of any software installed and be aware of any known security vulnerabilities. Because each security model, and each security policy, is different, you must ensure that the software you execute does not exceed the authority you allow. Writing secure code is difficult, but it’s certainly possible. Make sure all programs you use are designed to address security concerns.

Timing, State Changes, and Communication Disconnects

Computer systems perform tasks with rigid precision and excel at repeatable tasks, so attackers can develop attacks based on the predictability of task execution. The common sequence of events for an algorithm is to check that a resource is available and then access it if permission is granted. The time-of-check (TOC) is the time at which the subject checks on the status of the object. There may be several decisions to make before returning to the object to access it. When the decision is made to access the object, the procedure accesses it at the time-of-use (TOU). The difference between the TOC and the TOU is sometimes large enough for an attacker to replace the original object with another object that suits their own needs. Time-of-check-to-time-of-use (TOCTTOU) attacks are often called race conditions because the attacker is racing with the legitimate process to replace the object before it is used.

A classic example of a TOCTTOU attack is replacing a data file after its identity has been verified but before data is read. By replacing one authentic data file with another file of the attacker’s choosing and design, an attacker can potentially direct the actions of a program in many ways. Of course, the attacker would have to have in-depth knowledge of the program and system under attack.
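The file-replacement scenario maps directly onto a well-known coding pattern on Unix-style systems; the C sketch below shows the racy check-then-open sequence alongside the standard fix of opening first and then examining the descriptor you actually hold:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>

/* Vulnerable pattern: the file that gets checked is not necessarily
 * the file that gets used. An attacker who wins the race can swap in
 * a different file (for instance, via a symlink) between the access()
 * call and the open(). */
int open_checked_vulnerable(const char *path) {
    if (access(path, R_OK) != 0)     /* time-of-check */
        return -1;
    return open(path, O_RDONLY);     /* time-of-use: may be another file */
}

/* Safer pattern: open first, then interrogate the descriptor itself.
 * fstat() reports on the object actually opened, so there is no gap
 * for an attacker to exploit. */
int open_checked_safer(const char *path) {
    int fd = open(path, O_RDONLY | O_NOFOLLOW);   /* refuse symlinks */
    if (fd < 0)
        return -1;
    struct stat st;
    if (fstat(fd, &st) != 0 || !S_ISREG(st.st_mode)) {
        close(fd);                   /* not a regular file: reject it */
        return -1;
    }
    return fd;                       /* verified and already open */
}
```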


Likewise, attackers can attempt to take action between two known states when the state of a resource or the entire system changes. Communication disconnects also provide small windows that an attacker might seek to exploit. Anytime a status check of a resource precedes action on the resource, a window of opportunity exists for a potential attack in the brief interval between check and action. These attacks must be addressed in your security policy and in your security model.

Electromagnetic Radiation

Simply because of the kinds of electronic components from which they’re built, many computer hardware devices emit electromagnetic radiation during normal operation. The process of communicating with other machines or peripheral equipment creates emanations that can be intercepted. It’s even possible to re-create keyboard input or monitor output by intercepting and processing electromagnetic radiation from the keyboard and computer monitor. You can also detect and read network packets passively (that is, without actually tapping into the cable) as they pass along a network segment. These emanation leaks can cause serious security issues but are generally easy to address.

The easiest way to eliminate electromagnetic radiation interception is to reduce emanation through cable shielding or conduit and block unauthorized personnel and devices from getting too close to equipment or cabling by applying physical security controls. By reducing the signal strength and increasing the physical buffer around sensitive equipment, you can dramatically reduce the risk of signal interception.

Summary

Secure systems are not just assembled. They are designed to support security. Systems that must be secure are judged for their ability to support and enforce the security policy. This process of evaluating the effectiveness of a computer system is called certification. The certification process is the technical evaluation of a system’s ability to meet its design goals. Once a system has satisfactorily passed the technical evaluation, the management of an organization begins the formal acceptance of the system. The formal acceptance process is called accreditation.

The entire certification and accreditation process depends on standard evaluation criteria. Several criteria exist for evaluating computer security systems. The earliest criteria, TCSEC, was developed by the U.S. Department of Defense. TCSEC, also called the Orange Book, provides criteria to evaluate the functionality and assurance of a system’s security components. ITSEC is an alternative to the TCSEC guidelines and is used more often in European countries. Regardless of which criteria you use, the evaluation process includes reviewing each security control for compliance with the security policy. The better a system enforces the good behavior of subjects’ access to objects, the higher the security rating.

When security systems are designed, it is often helpful to create a security model to represent the methods the system will use to implement the security policy. We discussed three security models in this chapter. The earliest model, the Bell-LaPadula model, supports data confidentiality only. It was designed for the military and satisfies military concerns. The Biba model and the Clark-Wilson model address the integrity of data and do so in different ways. The latter two security models are appropriate for commercial applications.

No matter how sophisticated a security model is, flaws exist that attackers can exploit. Some flaws, such as buffer overflows and maintenance hooks, are introduced by programmers, whereas others, such as covert channels, are architectural design issues. It is important to understand the impact of such issues and modify the security architecture when appropriate to compensate.

Exam Essentials

Know the definitions of certification and accreditation. Certification is the technical evaluation of each part of a computer system to assess its concordance with security standards. Accreditation is the process of formal acceptance of a certified configuration.

Be able to describe open and closed systems. Open systems are designed using industry standards and are usually easy to integrate with other open systems. Closed systems are generally proprietary hardware and/or software. Their specifications are not normally published and they are usually harder to integrate with other systems.

Know what confinement, bounds, and isolation are. Confinement restricts a process to reading from and writing to certain memory locations. Bounds are the limits of memory a process cannot exceed when reading or writing. Isolation is the mode a process runs in when it is confined through the use of memory bounds.

Be able to define object and subject in terms of access. The subject of an access is the user or process that makes a request to access a resource. The object of an access request is the resource a user or process wishes to access.

Know how security controls work and what they do. Security controls use access rules to limit the access by a subject to an object.

Describe IPSec. IPSec is a security architecture framework that supports secure communication over IP. IPSec establishes a secure channel in either transport mode or tunnel mode. It can be used to establish direct communication between computers or to set up a VPN between networks. IPSec uses two protocols: Authentication Header (AH) and Encapsulating Security Payload (ESP).

Be able to list the classes of TCSEC, ITSEC, and the Common Criteria. The classes of TCSEC include A: Verified protection; B: Mandatory protection; C: Discretionary protection; and D: Minimal protection. Table 12.3 covers and compares equivalent and applicable rankings for TCSEC, ITSEC, and the CC (remember that functionality ratings from F7 to F10 in ITSEC have no corresponding ratings in TCSEC).

Define a trusted computing base (TCB). A TCB is the combination of hardware, software, and controls that form a trusted base that enforces the security policy.


Be able to explain what a security perimeter is. A security perimeter is the imaginary boundary that separates the TCB from the rest of the system. TCB components communicate with non-TCB components using trusted paths.

Know what the reference monitor and the security kernel are. The reference monitor is the logical part of the TCB that confirms whether a subject has the right to use a resource prior to granting access. The security kernel is the collection of the TCB components that implement the functionality of the reference monitor.

Describe the Bell-LaPadula security model. The Bell-LaPadula security model was developed in the 1970s to address military concerns over unauthorized access to secret data. It is built on a state machine model and ensures the confidentiality of protected data.

Describe the Biba integrity model. The Biba integrity model was designed to ensure the integrity of data. It is very similar to the Bell-LaPadula model, but its properties ensure that data is not corrupted by subjects accessing objects at different security levels.

Describe the Clark-Wilson security model. The Clark-Wilson security model ensures data integrity as the Biba model does, but it does so using a different approach. Instead of being built on a state machine, the Clark-Wilson model uses object access restrictions to allow only specific programs to modify objects. Clark-Wilson also enforces the separation of duties, which further protects the data integrity.

Be able to explain what covert channels are. A covert channel is any method that is used to pass information but that is not normally used for communication.

Understand what buffer overflows and input checking are. A buffer overflow occurs when the programmer fails to check the size of input data prior to writing the data into a specific memory location. In fact, any failure to validate input data could result in a security violation.

Describe common flaws to security architectures. In addition to buffer overflows, programmers can leave back doors and privileged programs on a system after it is deployed. Even well-written systems can be susceptible to time-of-check-to-time-of-use (TOCTTOU) attacks. Any state change could be a potential window of opportunity for an attacker to compromise a system.


Review Questions

1. What is system certification?

A. Formal acceptance of a stated system configuration

B. A technical evaluation of each part of a computer system to assess its compliance with security standards

C. A functional evaluation of the manufacturer's goals for each hardware and software component to meet integration standards

D. A manufacturer's certificate stating that all components were installed and configured correctly

2. What is system accreditation?

A. Formal acceptance of a stated system configuration

B. A functional evaluation of the manufacturer's goals for each hardware and software component to meet integration standards

C. Acceptance of test results that prove the computer system enforces the security policy

D. The process to specify secure communication between machines

3. What is a closed system?

A. A system designed around final, or closed, standards

B. A system that includes industry standards

C. A proprietary system that uses unpublished protocols

D. Any machine that does not run Windows

4. Which best describes a confined process?

A. A process that can run only for a limited time

B. A process that can run only during certain times of the day

C. A process that can access only certain memory locations

D. A process that controls access to an object

5. What is an access object?

A. A resource a user or process wishes to access

B. A user or process that wishes to access a resource

C. A list of valid access rules

D. The sequence of valid access types


6. What is a security control?

A. A security component that stores attributes that describe an object

B. A document that lists all data classification types

C. A list of valid access rules

D. A mechanism that limits access to an object

7. What does IPSec define?

A. All possible security classifications for a specific configuration

B. A framework for setting up a secure communication channel

C. The valid transition states in the Biba model

D. TCSEC security categories

8. How many major categories do the TCSEC criteria define?

A. Two

B. Three

C. Four

D. Five

9. What is a trusted computing base (TCB)?

A. Hosts on your network that support secure transmissions

B. The operating system kernel and device drivers

C. The combination of hardware, software, and controls that work together to enforce a security policy

D. The software and controls that certify a security policy

10. What is a security perimeter? (Choose all that apply.)

A. The boundary of the physically secure area surrounding your system

B. The imaginary boundary that separates the TCB from the rest of the system

C. The network where your firewall resides

D. Any connections to your computer system

11. What part of the TCB validates access to every resource prior to granting the requested access?

A. TCB partition

B. Trusted library

C. Reference monitor

D. Security kernel


12. What is the best definition of a security model?

A. A security model states policies an organization must follow.

B. A security model provides a framework to implement a security policy.

C. A security model is a technical evaluation of each part of a computer system to assess its concordance with security standards.

D. A security model is the process of formal acceptance of a certified configuration.

13. Which security models are built on a state machine model?

A. Bell-LaPadula and Take-Grant

B. Biba and Clark-Wilson

C. Clark-Wilson and Bell-LaPadula

D. Bell-LaPadula and Biba

14. Which security model(s) address(es) data confidentiality?

A. Bell-LaPadula

B. Biba

C. Clark-Wilson

D. Both A and B

15. Which Bell-LaPadula property keeps lower-level subjects from accessing objects with a higher security level?

A. * (star) Security Property

B. No write up property

C. No read up property

D. No read down property

16. What is a covert channel?

A. A method that is used to pass information and that is not normally used for communication

B. Any communication used to transmit secret or top secret data

C. A trusted path between the TCB and the rest of the system

D. Any channel that crosses the security perimeter

17. What term describes an entry point into a system that only the developer knows about?

A. Maintenance hook

B. Covert channel

C. Buffer overflow

D. Trusted path
