
Effective Java: Programming Language Guide

One way to do this is to provide a wrapper class (Item 14) that implements an interface describing the class and performs appropriate synchronization before forwarding method invocations to the corresponding method of the wrapped object. This is the approach that was taken by the Collections Framework. Arguably, it should have been taken by java.util.Random as well. A second approach, suitable for classes that are not designed to be extended or reimplemented, is to provide an unsynchronized class and a subclass consisting solely of synchronized methods that invoke their counterparts in the superclass.
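For a rough feel of the first (wrapper) approach, here is a minimal sketch; the Counter interface and class names are illustrative, not from the book. The wrapper implements the same interface, synchronizes (on itself, much as the Collections Framework wrappers do), and forwards each call to the wrapped object:

interface Counter {
    void increment();
    int value();
}

// Unsynchronized implementation, written with no locking at all
class SimpleCounter implements Counter {
    private int count = 0;
    public void increment() { count++; }
    public int value()      { return count; }
}

// Wrapper that adds synchronization and forwards each invocation
class SynchronizedCounter implements Counter {
    private final Counter counter;   // the wrapped object

    SynchronizedCounter(Counter counter) { this.counter = counter; }

    public synchronized void increment() { counter.increment(); }
    public synchronized int value()      { return counter.value(); }
}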

One good reason to synchronize a class internally is because it is intended for heavily concurrent use and you can achieve significantly higher concurrency by performing internal fine-grained synchronization. For example, it is possible to implement a nonresizable hash table that independently synchronizes access to each bucket. This affords much greater concurrency than locking the entire table to access a single entry.
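A minimal sketch of such per-bucket locking follows; the class and its details are illustrative assumptions, not the book's hash table. Because the table never resizes, the bucket count is fixed, and each bucket can be guarded by its own lock object:

import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

public class StripedHashTable {
    private static final int BUCKETS = 64;      // fixed; the table never resizes

    private final List[] buckets = new List[BUCKETS];
    private final Object[] locks = new Object[BUCKETS];

    public StripedHashTable() {
        for (int i = 0; i < BUCKETS; i++) {
            buckets[i] = new LinkedList();
            locks[i] = new Object();
        }
    }

    private int indexFor(Object key) {
        return (key.hashCode() & 0x7FFFFFFF) % BUCKETS;
    }

    public void put(Object key, Object value) {
        int i = indexFor(key);
        synchronized (locks[i]) {                // locks one bucket, not the table
            for (Iterator it = buckets[i].iterator(); it.hasNext(); ) {
                Object[] entry = (Object[]) it.next();
                if (entry[0].equals(key)) {
                    entry[1] = value;            // replace an existing mapping
                    return;
                }
            }
            buckets[i].add(new Object[] { key, value });
        }
    }

    public Object get(Object key) {
        int i = indexFor(key);
        synchronized (locks[i]) {
            for (Iterator it = buckets[i].iterator(); it.hasNext(); ) {
                Object[] entry = (Object[]) it.next();
                if (entry[0].equals(key))
                    return entry[1];
            }
            return null;
        }
    }
}

Two threads whose keys hash to different buckets never block each other, whereas a table synchronized as a whole would serialize them.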

If a class or a static method relies on a mutable static field, it must be synchronized internally, even if it is typically used by a single thread. Unlike a shared instance, it is not possible for the client to perform external synchronization because there can be no guarantee that other clients will do likewise. The static method Math.random exemplifies this situation.
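As a minimal sketch (the class is hypothetical, not from the book), consider a generator whose static method depends on a mutable static field. It must synchronize internally, because unrelated callers have no agreed-upon object on which to lock externally:

public class SequenceNumber {
    private static long nextValue = 0;

    // Synchronizes on SequenceNumber.class; callers need not (and cannot
    // reliably) arrange external synchronization among themselves.
    public static synchronized long next() {
        return nextValue++;
    }
}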

In summary, to avoid deadlock and data corruption, never call an alien method from within a synchronized region. More generally, try to limit the amount of work that you do from within synchronized regions. When you are designing a mutable class, think about whether it should do its own synchronization. The cost savings that you can hope to achieve by dispensing with synchronization is no longer huge, but it is measurable. Base your decision on whether the primary use of the abstraction will be multithreaded, and document your decision clearly.

Item 50: Never invoke wait outside a loop

The Object.wait method is used to make a thread wait for some condition. It must be invoked inside a synchronized region that locks the object on which it is invoked. This is the standard idiom for using the wait method:

synchronized (obj) {
    while (<condition does not hold>)
        obj.wait();

    ... // Perform action appropriate to condition
}

Always use the wait loop idiom to invoke the wait method. Never invoke it outside of a loop. The loop serves to test the condition before and after waiting.

Testing the condition before waiting and skipping the wait if the condition already holds are necessary to ensure liveness. If the condition already holds and the notify (or notifyAll) method has already been invoked before a thread waits, there is no guarantee that the thread will ever wake from the wait.

Testing the condition after waiting and waiting again if the condition does not hold are necessary to ensure safety. If the thread proceeds with the action when the condition does not hold, it can destroy the invariants protected by the lock. There are several reasons a thread might wake up when the condition does not hold:

•	Another thread could have obtained the lock and changed the protected state between the time a thread invoked notify and the time the waiting thread woke up.

•	Another thread could have invoked notify accidentally or maliciously when the condition did not hold. Classes expose themselves to this sort of mischief by waiting on publicly accessible objects. Any wait contained in a synchronized method of a publicly accessible object is susceptible to this problem.

•	The notifying thread could be overly “generous” in waking waiting threads. For example, the notifying thread must invoke notifyAll even if only some of the waiting threads have their condition satisfied.

•	The waiting thread could wake up in the absence of a notify. This is known as a spurious wakeup. Although The Java Language Specification [JLS] does not mention this possibility, many JVM implementations use threading facilities in which spurious wakeups are known to occur, albeit rarely [Posix, 11.4.3.6.1].

A related issue is whether you should use notify or notifyAll to wake waiting threads. (Recall that notify wakes a single waiting thread, assuming such a thread exists, and notifyAll wakes all waiting threads.) It is often said that you should always use notifyAll. This is reasonable, conservative advice, assuming that all wait invocations are inside while loops. It will always yield correct results because it guarantees that you'll wake the threads that need to be awakened. You may wake some other threads too, but this won't affect the correctness of your program. These threads will check the condition for which they're waiting and, finding it false, will continue waiting.
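As a minimal sketch of this advice (the class is illustrative, not from the book), consider a simple counting semaphore. Every wait sits inside a while loop, so notifyAll is always safe: awakened threads that find no permit available simply go back to waiting.

public class SimpleSemaphore {
    private int permits;

    public SimpleSemaphore(int permits) { this.permits = permits; }

    public synchronized void acquire() throws InterruptedException {
        while (permits == 0)    // recheck the condition after every wakeup
            wait();
        permits--;
    }

    public synchronized void release() {
        permits++;
        notifyAll();            // conservative: wake everyone; each waiter rechecks
    }
}

In this sketch all waiters wait for the same condition and only one can proceed per release, so the notify optimization described next would apply as well.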

As an optimization, you may choose to invoke notify instead of notifyAll if all threads that could be in the wait-set are waiting for the same condition and only one thread at a time can benefit from the condition becoming true. Both of these conditions are trivially satisfied if only a single thread waits on a particular object (as in the WorkQueue example, Item 49).

Even if these conditions appear true, there may be cause to use notifyAll in place of notify. Just as placing the wait invocation in a loop protects against accidental or malicious notifications on a publicly accessible object, using notifyAll in place of notify protects against accidental or malicious waits by an unrelated thread. Such waits could otherwise “swallow” a critical notification, leaving its intended recipient waiting indefinitely. The reason that notifyAll was not used in the WorkQueue example is that the worker thread waits on a private object (queue) so there is no danger of accidental or malicious waits.

There is one caveat concerning the advice to use notifyAll in preference to notify. While the use of notifyAll cannot harm correctness, it can harm performance. In fact, it systematically degrades the performance of certain data structures from linear in the number of waiting threads to quadratic. The data structures so affected are those for which only a certain number of threads are granted some special status at any given time and other threads must wait. Examples include semaphores, bounded buffers, and read-write locks.

If you are implementing this sort of data structure and you wake up each thread as it becomes eligible for “special status,” you wake each thread once for a total of n wakeups. If you wake all n threads when only one can obtain special status and the remaining n − 1 threads go back to waiting, you will end up with n + (n − 1) + (n − 2) + … + 1 wakeups by the time all waiting threads have been granted special status. The sum of this series is O(n²). If you know that the number of threads will always be small, this may not be a problem in practice, but if you have no such assurances, it is important to use a more selective wakeup strategy.
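For reference, the familiar arithmetic-series identity makes the quadratic growth explicit:

    n + (n-1) + (n-2) + \cdots + 1 = \sum_{k=1}^{n} k = \frac{n(n+1)}{2} = O(n^2)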

If all of the threads vying for special status are logically equivalent, then all you have to do is carefully use notify instead of notifyAll. If, however, only some of the waiting threads are eligible for special status at any given time, then you must use a pattern known as Specific Notification [Cargill96, Lea99]. This pattern is beyond the scope of this book.

In summary, always invoke wait from within a while loop, using the standard idiom. There is simply no reason to do otherwise. Usually, you should use notifyAll in preference to notify. There are, however, situations where doing so will impose a substantial performance penalty. If notify is used, great care must be taken to ensure liveness.

Item 51: Don't depend on the thread scheduler

When multiple threads are runnable, the thread scheduler determines which threads get to run and for how long. Any reasonable JVM implementation will attempt some sort of fairness when making this determination, but the exact policy varies greatly among implementations. Therefore well-written multithreaded programs should not depend on the details of this policy. Any program that relies on the thread scheduler for its correctness or performance is likely to be nonportable.

The best way to write a robust, responsive, portable multithreaded application is to ensure that there are few runnable threads at any given time. This leaves the thread scheduler with very little choice: It simply runs the runnable threads till they're no longer runnable. As a consequence, the program's behavior doesn't vary much even under radically different thread scheduling algorithms.

The main technique for keeping the number of runnable threads down is to have each thread do a small amount of work and then wait for some condition using Object.wait or for some time to elapse using Thread.sleep. Threads should not busy-wait, repeatedly checking a data structure waiting for something to happen. Besides making the program vulnerable to the vagaries of the scheduler, busy-waiting can greatly increase the load on the processor, reducing the amount of useful work that other processes can accomplish on the same machine.

The work queue example in Item 49 follows these recommendations: Assuming the client-provided processItem method is well behaved, the worker thread spends most of its time waiting on a monitor for the queue to become nonempty. As an extreme example of what not to do, consider this perverse reimplementation of WorkQueue, which busy-waits instead of using a monitor:


// HORRIBLE PROGRAM - uses busy-wait instead of Object.wait!
public abstract class WorkQueue {
    private final List queue = new LinkedList();
    private boolean stopped = false;

    protected WorkQueue() { new WorkerThread().start(); }

    public final void enqueue(Object workItem) {
        synchronized (queue) { queue.add(workItem); }
    }

    public final void stop() {
        synchronized (queue) { stopped = true; }
    }

    protected abstract void processItem(Object workItem)
        throws InterruptedException;

    private class WorkerThread extends Thread {
        public void run() {
            final Object QUEUE_IS_EMPTY = new Object();
            while (true) {  // Main loop
                Object workItem = QUEUE_IS_EMPTY;
                synchronized (queue) {
                    if (stopped)
                        return;
                    if (!queue.isEmpty())
                        workItem = queue.remove(0);
                }
                if (workItem != QUEUE_IS_EMPTY) {
                    try {
                        processItem(workItem);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        }
    }
}
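For contrast, here is a rough sketch of the worker loop from the monitor-based WorkQueue that Item 49 describes; it is a reconstruction under the assumption that enqueue and stop call queue.notify(), not the book's exact listing. The thread simply blocks in wait() until there is something to do:

    private class WorkerThread extends Thread {
        public void run() {
            while (true) {
                Object workItem;
                synchronized (queue) {
                    try {
                        while (queue.isEmpty() && !stopped)
                            queue.wait();       // standard wait-loop idiom (Item 50)
                    } catch (InterruptedException e) {
                        return;
                    }
                    if (stopped)
                        return;
                    workItem = queue.remove(0);
                }
                try {
                    processItem(workItem);      // no lock held while working
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    }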

To give you some idea of the price you'd pay for this sort of implementation, consider the following microbenchmark, which creates two work queues and passes a work item back and forth between them. (The work item passed from one queue to the other is a reference to the former queue, which serves as a sort of return address.) The program runs for ten seconds before starting measurement to allow the system to “warm up” and then counts the number of round trips from queue to queue in the next ten seconds. On my machine, the final version of WorkQueue in Item 49 exhibits 23,000 round trips per second, while the perverse implementation above exhibits 17 round trips per second:

class PingPongQueue extends WorkQueue {
    volatile int count = 0;

    protected void processItem(final Object sender) {
        count++;
        WorkQueue recipient = (WorkQueue) sender;
        recipient.enqueue(this);
    }
}


public class WaitQueuePerf {
    public static void main(String[] args) {
        PingPongQueue q1 = new PingPongQueue();
        PingPongQueue q2 = new PingPongQueue();
        q1.enqueue(q2); // Kick-start the system

        // Give the system 10 seconds to warm up
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
        }

        // Measure the number of round trips in 10 seconds
        int count = q1.count;
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
        }
        System.out.println(q1.count - count);

        q1.stop();
        q2.stop();
    }
}

While the WorkQueue implementation above may seem a bit farfetched, it's not uncommon to see multithreaded systems with one or more threads that are unnecessarily runnable. The results may not be as extreme as those demonstrated here, but performance and portability are likely to suffer.

When faced with a program that barely works because some threads aren't getting enough CPU time relative to others, resist the temptation to “fix” the program by putting in calls to Thread.yield. You may succeed in getting the program to work, but the resulting program will be nonportable from a performance standpoint. The same yield invocations that improve performance on one JVM implementation might make it worse on another and have no effect on a third. Thread.yield has no testable semantics. A better course of action is to restructure the application to reduce the number of concurrently runnable threads.

A related technique, to which similar caveats apply, is adjusting thread priorities. Thread priorities are among the least portable features of the Java platform. It is not unreasonable to tune the responsiveness of an application by tweaking a few thread priorities, but it is rarely necessary, and the results will vary from JVM implementation to JVM implementation. It is unreasonable to solve a serious liveness problem by adjusting thread priorities; the problem is likely to return until you find and fix the underlying cause.

The only use that most programmers will ever have for Thread.yield is to artificially increase the concurrency of a program during testing. This shakes out bugs by exploring a larger fraction of the program's state-space, thus increasing confidence in the correctness of the system. This technique has proven highly effective in ferreting out subtle concurrency bugs.
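As a hedged illustration of this testing technique (the class, method, and system property below are hypothetical, not from the book), a test-only hook might look like this; it is a no-op in production and yields only when a stress-test flag is set:

public class Interleaver {
    // Enabled with -Dconcurrency.testing=true during stress tests
    private static final boolean TESTING =
        Boolean.getBoolean("concurrency.testing");

    // Call at interesting points in the code under test to encourage the
    // scheduler to try other interleavings; does nothing in production.
    public static void hint() {
        if (TESTING)
            Thread.yield();
    }
}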

In summary, do not depend on the thread scheduler for the correctness of your application. The resulting application will be neither robust nor portable. As a corollary, do not rely on Thread.yield or thread priorities. These facilities are merely hints to the scheduler. They
