
Texts for Self-Study

Text No. 1. Software for Detecting and Removing Viruses

Virus protection (or antivirus) software is an application that can determine when a system has been infected with a virus. Typically, such software runs in the background and scans files whenever they are downloaded from the Internet, received as e-mail attachments, or modified by another application running on the system. Most virus protection software employs one of the following methods:

Signature-based detection: This is the traditional approach; it searches for ‘signatures’, known portions of code from viruses that have been detected and cataloged in the wild. Signature-based products are fast and reliable in detecting previously known viruses but generally cannot detect new viruses until the vendor has updated its signature database with information about the new virus and users have downloaded the updated signature files to their systems.
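
At its core, signature-based scanning is just a search for known byte patterns in a file. The rough Python sketch below shows the idea; the signature names and byte patterns are made-up placeholders, not real virus signatures, and a real scanner would use a far larger database and more efficient matching.

# Hypothetical signature database: name -> byte pattern (placeholders only).
SIGNATURES = {
    "Placeholder-A": b"\xde\xad\xbe\xef\x00\x01",
    "Placeholder-B": b"FAKE-SIGNATURE-BYTES",
}

def scan_file(path):
    # Read the whole file and report which known signatures appear in it.
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

# Example (hypothetical path): infected = scan_file("download.exe")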

Behavior-blocking detection: This is a newer approach borrowed from intrusion detection system (IDS) technologies and uses policies to define which kinds of system behaviors might indicate the presence of a virus infection. Should an action occur that violates such a policy, such as code trying to access the address book to mass mail itself through e-mail, the software steps in to prevent it and can also isolate the suspect code in a ‘sandbox’ until the administrator decides what to do with it. The advantage of behavior-blocking detection is that it can detect new viruses for which no signatures are known. The disadvantage is that, like IDSs, such detection systems can generate false positives if the detection threshold is set too low or can miss real infections if it is set too high. A few newer virus protection products include behavior-blocking technology, but most still operate using signature databases.
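
A behavior-blocking engine can be thought of as a set of policy rules matched against the actions a program is observed to perform. The Python sketch below only illustrates that idea with invented action names; real products hook into the operating system rather than receive ready-made action lists.

# Policy rules: if a process is seen performing every action in a rule,
# it is treated as a possible infection. All action names are invented.
POLICY = [
    {"read_address_book", "send_mass_email"},      # mass-mailer pattern
    {"modify_system_file", "disable_antivirus"},
]

def check_process(observed_actions, sandbox):
    # Block and sandbox the process if its actions match any policy rule.
    actions = set(observed_actions)
    for rule in POLICY:
        if rule <= actions:
            sandbox.append(actions)
            return False      # blocked: suspected virus behavior
    return True               # allowed

sandbox = []
print(check_process(["read_address_book", "send_mass_email"], sandbox))  # False
print(check_process(["open_document"], sandbox))                         # True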

Text No. 2. Scalability

Given an application and a parallel computer, how much can we boost the number of processors in order to improve performance? How much can we increase the amount of data and still have the same performance? Scalability is an informal measure of how the number of processors and amount of data can be increased while keeping reasonable speedup and efficiency. Unlimited, absolute scalability is obviously unreasonable: it would be like expecting that the design principles needed to build a car could be extended to build a car that travels as fast as an airplane. Too many parameters change if the size of a system radically changes and the design has to obey different principles.
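
Speedup and efficiency are used here in their standard sense: speedup is the serial run time divided by the parallel run time, and efficiency is speedup divided by the number of processors. A short Python illustration, with made-up timing numbers:

def speedup(t_serial, t_parallel):
    # Standard definition: how many times faster the parallel run is.
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    # Speedup divided by the number of processors used.
    return speedup(t_serial, t_parallel) / p

# Assumed measurements: 100 s serially, 12.5 s on 10 processors.
print(speedup(100.0, 12.5))         # 8.0
print(efficiency(100.0, 12.5, 10))  # 0.8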

Relative scalability, that is, the property of maintaining reasonable efficiency while slightly changing the number of processors, is instead possible and indeed very useful. This scalability allows users to adapt their system to their needs without having to replace it.

Changing the number of processors to execute the same problem faster causes, sooner or later, a decrease in efficiency, because each processor has too little work to do compared to the overhead. If, on the other hand, the size of the problem, i.e. the amount of data processed, also grows, the efficiency can be held constant. If, instead, the problem size grows while the number of processors remains constant, efficiency also grows, unless the increase in the amount of data saturates some system resource, e.g. the memory. This is a very important consideration because it implies that making very efficient use of a parallel processor is possible if we are willing to apply it to a sufficiently large problem.
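
A toy model makes this concrete. Suppose, purely as an assumption, that each processor does work/p units of computation plus a fixed overhead; then efficiency falls as processors are added to a fixed problem but is restored if the amount of work grows with them:

def efficiency(work, p, overhead=10.0):
    # Toy model: parallel time = work/p plus a fixed overhead per processor.
    t_serial = work
    t_parallel = work / p + overhead
    return (t_serial / t_parallel) / p

print(efficiency(1000, 10))     # about 0.91
print(efficiency(1000, 100))    # 0.50 -- same work, ten times the processors
print(efficiency(10000, 100))   # about 0.91 -- work scaled with the processors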

Scalability can be characterized by a function that indicates the relationship between the number of processors and the amount of data at constant efficiency. For example, if, when the number of processors is doubled, the amount of data also needs to be doubled to keep the same efficiency, then scalability is rather good. If, instead, the data need to quadruple to keep efficiency constant, the system is less scalable. Needing too large an increase in problem size to keep efficiency constant is not a good characteristic, because the user might not need to process such a large problem and the system’s resources and design might not be able to cope with one.
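
Under the same toy overhead model as above, efficiency is E = W / (W + p*c), so keeping E constant requires the amount of work W to grow only linearly with the number of processors p, which would count as good scalability in the sense described here:

def work_needed(p, target_eff=0.9, c=10.0):
    # Work required to reach efficiency target_eff with p processors and a
    # per-processor overhead c, solved from E = W / (W + p*c).
    return target_eff * p * c / (1.0 - target_eff)

for p in (10, 20, 40):
    print(p, work_needed(p))    # about 900, 1800, 3600: doubling p doubles the data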

Being able to keep efficiency constant by scaling the problem size is a very good property; unfortunately, not all problems can be scaled in this way. In some cases it might be possible to “batch” a few instances of a problem together to generate a larger problem; in other cases, e.g. weather forecasting, it is genuinely useful to solve a larger problem. In still other cases, e.g. sensory problems such as speech recognition, solving a larger problem does not make sense, and we have to make do with a low efficiency, a low speedup, or both.
