
Chapter 15: Web Services

        lbAuthors->Items->RemoveAt(lbAuthors->SelectedIndex);
    }

    System::Void bnUpdate_Click(System::Object^ sender, System::EventArgs^ e)
    {
        if (CurrentAuthorID < 0)
            return;

        // Locate the row for the currently selected author
        DataTable ^dt = dSet->Tables["Authors"];
        array<DataRow^>^ row =
            dt->Select(String::Format("AuthorID={0}", CurrentAuthorID));

        row[0]["FirstName"] = tbFirstName->Text;
        row[0]["LastName"]  = tbLastName->Text;

        // Replace the ListBox entry with the updated author information
        lbAuthors->Items->Insert(lbAuthors->SelectedIndex, ListBoxItem(row[0]));
        lbAuthors->Items->RemoveAt(lbAuthors->SelectedIndex);
    }

    System::Void bnAdd_Click(System::Object^ sender, System::EventArgs^ e)
    {
        if (tbFirstName->Text->Trim()->Length == 0 ||
            tbLastName->Text->Trim()->Length == 0)
            return;

        // Add a new row to the Authors table and to the ListBox
        DataTable ^dt = dSet->Tables["Authors"];
        DataRow ^row = dt->NewRow();
        row["FirstName"] = tbFirstName->Text;
        row["LastName"]  = tbLastName->Text;
        dt->Rows->Add(row);
        lbAuthors->Items->Add(ListBoxItem(row));

        tbFirstName->Text = "";
        tbLastName->Text = "";
    }

    System::Void lbAuthors_SelectedIndexChanged(System::Object^ sender,
                                                System::EventArgs^ e)
    {
        array<System::Char>^ ASpace = gcnew array<System::Char>{' '};

        if (lbAuthors->SelectedItem == nullptr)
        {
            CurrentAuthorID = -1;
            tbFirstName->Text = "";
            tbLastName->Text = "";
            return;
        }

        // ListBox items are formatted "AuthorID FirstName LastName"
        array<String^>^ split = lbAuthors->SelectedItem->ToString()->Split(ASpace);
        CurrentAuthorID = Convert::ToInt32(split[0]);
        tbFirstName->Text = split[1];
        tbLastName->Text  = split[2];
    }
};
}

As you can see, the code is the same except that the ADO.NET DataAdapter and DataSet logic has been removed. Ideally, this logic would also have been moved into its own class in the Chapter 12 example, but keeping it inline simplified that code listing.

Figure 15-10 shows the Web service version of MaintAuthors.exe in action. Those of you looking for differences between this and the original version in Chapter 12 won’t find any.

Figure 15-10. Web service version of MaintAuthors

Summary

In this chapter you examined the "net" in .NET: Web services. You found that Web services are extremely easy to develop because coding a Web service is no different from coding any other class. In general, any complexities associated with the distributed nature of Web services are hidden from you. The only real difference of note is that Web services are generally coded in a stateless manner.

You started the chapter by covering the basics of Web services. Then you moved on to examine two different examples of Web services and multiple ways to write consumer clients. The second example was relatively complex, but the complex logic actually had very little to do with Web services and more to do with coding ADO.NET in a stateless manner.

In the next chapter, you’ll take a look at a third way of working over a network. This time, you will take complete control and code at the socket level.

Chapter 16

Multithreaded Programming

Normally, multithreaded programming would be one of the more advanced topics, if not the most advanced topic, in a book, but due to the .NET Framework, it is no more advanced than any other topic in this book. Why, you might ask? Well, the answer is that the .NET Framework (as usual) has hidden most of the complexities of this habitually complex area of software development within its classes.

Having the complexities hidden doesn't mean the result is any less powerful or flexible than doing all the complex coding yourself. In fact, true to the nature of the .NET Framework, if you want to get lost in the details, you can still do so. On the other hand, because this chapter is about developing multithreaded programs using C++/CLI and not about multithreaded programming in general, I try to stay away from these details and let the .NET Framework deal with them. However, for those of you who like to delve into the details, I try to point you in the right direction for future exploration.

This chapter starts off by covering multithreaded programming at a high level, so those of you who are new to multithreaded programming can get comfortable with the concept. Next, you’ll explore the more commonly used and, fortunately, easy-to-understand multithreaded programming features provided by the .NET Framework. With the basics covered, you’ll explore some of the more complex areas of multithreaded programming, including thread states, priorities, and the weighty topic of synchronization. Finally, you’ll learn about a second way of handling multithreaded programming: thread pools.

What Is Multithreaded Programming?

Most developers are comfortable with the concept of multitasking, or the capability of computers to execute more than one application or process at the same time. However, multithreading may be a more alien term. Many programmers have not had any reason to program in a multithreaded fashion. In fact, for some programming languages, there is no way to do multithreaded programming without jumping through some very convoluted programming hoops.

So, what is multithreaded programming? You might want to think of it as multitasking at the program level. A program has two options for executing itself. The first option is to run itself in one thread of execution. In this method of execution, the program follows the logic of the program from start to end in a sequential fashion. You might want to think of this method of execution as single threaded. The second option is that the program can break itself into multiple threads of execution or, in other words, split the program into multiple segments (with beginning and end points) and run some of them concurrently (at the same time). This is what is better known as multithreading. It should be noted, though, that the end result of either a single-threaded or a multithreaded program will be the same.


Of course, if you have a single-processor machine, true concurrency is not possible, as only one command at a time can run through the CPU. (With Intel Corporation's Hyper-Threading Technology, you can execute more than one command at the same time on a single CPU, but that is a topic for another book altogether.) This is an important concept to grasp, because many programmers mistakenly think that if they break a computationally bound section of a program into two parts and run them in two threads of execution, the program will take less time to run. The opposite is actually the case: it will take longer. The reason is that the same amount of code is run either way, plus additional time is needed to handle the swapping of each thread's context (the CPU's registers, stack, and so on).

So why would you use multithreading on a single-processor computer if it takes longer than single threading? The reason is that, when used properly, multithreading can provide better I/O-related response time, as well as better use of the CPU.

Wait a second, didn’t I just contradict myself? Well, actually, I didn’t.

The key point about proper use of multithreading is the type of commands the threads are executing. Computationally bound threads (i.e., threads that do a lot of calculations) gain very little from multithreading, as they are already working overtime trying to get themselves executed. Multithreading actually slows this type of thread down. I/O threads, on the other hand, gain a lot. This gain is most apparent in two areas: better response time and better CPU utilization.

I'm sure you've all come across a program that seemed to stop or lock up and then suddenly came back to life. The usual reason for this is that the program is executing a computationally bound area of the code. And, because multithreading wasn't being used, there were no CPU cycles available for user interaction with the computer. By adding multithreading, it's possible to have one thread running the computationally bound area and another handling user interaction. Having an I/O thread allows the user to continue to work while the CPU blasts its way through the computationally bound thread. True, the computationally bound thread itself will take longer to run, but because the user can continue to work, this small amount of extra time usually doesn't matter.

I/O threads are notorious for wasting CPU cycles. Humans, printers, hard drives, monitors, and so forth are very slow when compared to a CPU. I/O threads spend a large portion of their time simply waiting, doing nothing. Thus, multithreading allows the CPU to use this wasted time.

Basic .NET Framework Class Library Threading

There is only one namespace that you need to handle threading: System::Threading. What you plan to do while using the threads will determine which of the classes you will use. Many of the classes provide different ways to do the same thing, usually differing in the degree of control. Here is a list of some of the more common classes within the System::Threading namespace:

AutoResetEvent notifies a waiting thread that an event has occurred. You use this class to allow communication between threads using signaling. Typically, you use this class for threads that need exclusive access.

Interlocked allows for atomic operation on a variable that is shared between threads.


ManualResetEvent notifies one or more threads that an event has occurred. You use this class to allow communication between threads using signaling. Typically, you use this class for scenarios where one thread must complete before other threads can proceed.

Monitor provides a mechanism to synchronize access to objects by locking access to a block of code, commonly called a critical section. While a thread owns the lock for an object, no other thread can acquire that lock.

Mutex provides a synchronization primitive that solves the problem of two or more threads needing access to a shared resource at the same time. It ensures that only one thread at a time uses the resource. This class is similar in functionality to Monitor, except Mutex allows for interprocess synchronization.

ReaderWriterLock allows a single writer and multiple readers access to a resource. At any given time, it allows either concurrent read access for multiple threads or write access to a single thread.

Semaphore limits the number of threads that can access a particular system resource.

Thread is the core class to create a thread to execute a portion of the program code.

ThreadPool provides access to a pool of system-maintained threads.

WaitHandle allows for the taking or releasing of exclusive access to a shared system-specific resource.

From the preceding list of classes, you can see that the .NET Framework class library provides two ways to create threads:

Thread

ThreadPool

The difference between the two primarily depends on whether you want to maintain the Thread object or you want the system to handle it for you. In effect, nearly the same results can be achieved with either method. I cover Thread first, as it provides you with complete control of your threads.

Later in this chapter, I cover ThreadPool, where the system maintains the process threads—though, even with this reduction in control, you will see later in the chapter that ThreadPools can be used just as effectively as Threads. But, before you explore either method, you’ll take a look at thread state and priority.

Thread State

The .NET Framework thread model is designed to model an execution thread. Many of the Threading namespace classes and members map directly to an execution state of a thread. Personally, I found that knowing the execution states of a thread ultimately made it easier for me to understand threading, so using Figure 16-1 and Table 16-1, I'll walk you through the states and the actions required to change states within the .NET Framework thread model.


Figure 16-1. The execution states of a thread

You might want to note that the states in Table 16-1 map directly to the System::Threading::ThreadState enumeration. And, if you need to determine the current state, you would look in the ThreadState property in the Thread class.

Table 16-1. The Execution States of a Thread

Action -> Resulting state
The thread is created within the CLR and has not yet been started. -> Unstarted
The thread executes its start process. -> Running
The thread continues to run until another action occurs. -> Running
The running thread calls Sleep() for a specified length of time. -> WaitSleepJoin
The running thread calls Wait() on a locked resource. -> WaitSleepJoin
The running thread calls Join() on another thread. -> WaitSleepJoin
Another thread calls Interrupt() on the WaitSleepJoin thread. -> Running
Another thread calls Suspend() on the thread. -> SuspendRequested
The SuspendRequested thread processes the suspend call. -> Suspended
Another thread calls Resume() on a Suspended thread. -> Running
Another thread calls Abort() on the thread. -> AbortRequested
The AbortRequested thread processes the abort call. -> Aborted

In addition to these states is a Background state, which means the thread is executing in the background (as opposed to the foreground). The biggest difference between a background thread and a foreground thread is that background threads are terminated automatically once all foreground threads have ended, whereas a foreground thread continues executing until it is aborted or finishes executing. You make a thread a background thread by setting the IsBackground property of the Thread class.

Thread Priorities

Not all threads are created equal. Well, that's not really true: all threads are created equal. You just make them unequal later by updating the Priority property of the Thread class. With the .NET Framework, you have five priority levels available to place on a thread:

Highest

AboveNormal

Normal

BelowNormal

Lowest

You can find each of the preceding priorities in the System::Threading::ThreadPriority enumeration.

The basic idea behind priorities is that all threads are created at a Normal priority. When unaltered, each “running” thread gets an equal share of processor time. If, on the other hand, you change the priority of the thread to a higher level—AboveNormal, for example—then the documentation says it will be scheduled to execute prior to threads at a lower level. Well, this is sort of the case. If that were truly how the Framework did it, then lower-level threads would never run (in other words, they would starve) until the higher-level thread finished. This doesn’t happen, so it appears that the .NET Framework has additional logic in it to allow lower-level priority threads to have at least a little processor time.

Normally you don’t want to mess with priorities, but for those rare occasions, the functionality, as you have come to expect with the .NET Framework, is provided.


Using Threads

Of the two methods available in the .NET Framework for creating threads, Thread and ThreadPool, the System::Threading::Thread class provides you with the most control and versatility. The cost is a minor amount of additional coding complexity.

Like all classes in the .NET Framework, the Thread class is made up of properties and methods. The ones you will most likely use are as follows:

Abort() is a method that raises a ThreadAbortException in the thread on which it is invoked, which starts the process of terminating the thread. Calling this method normally results in the termination of the thread.

CurrentThread is a static Thread property that represents the currently running thread.

Interrupt() is a method that interrupts a thread that is currently in the WaitSleepJoin thread state, thus resulting in the thread returning to the Running thread state.

IsBackground is a Boolean property that represents whether a thread is a background or a foreground thread. The default is false.

Join() is a method that causes the calling thread to block until the called thread terminates.

Name is a String property that represents the name of the thread. You can write the name only once to this property.

Priority is a ThreadPriority enumerator property that represents the current priority of the thread. The default is Normal.

Resume() is a method that resumes a suspended thread and makes its thread state Running.

Sleep() is a method that blocks the current thread for a specified length of time and makes its thread state WaitSleepJoin.

Start() is a method that causes the thread to start executing and changes its thread state to Running.

Suspend() is a method that causes the thread to suspend. The thread state becomes Suspended.

ThreadState is a ThreadState enumerator property that represents the current thread state of the thread.

The idea of running and keeping track of two or more things at the same time can get confusing. Fortunately, in many cases with multithreaded programming, you simply have to start a thread and let it run to completion without interference.

I start off by showing you that exact scenario first. Then I show you some of the other options available to you when it comes to thread control.

Starting Threads

The first thing you need to do to get multithreaded programming running is create an instance of a Thread. In versions of the .NET Framework prior to 2.0, you didn't have much in the way of options, as there was only one constructor:

System::Threading::Thread(System::Threading::ThreadStart ^start);

The parameter ThreadStart is a delegate to the method that is the starting point of the thread. The signature of the delegate is a method with no parameters that returns void:

public delegate void ThreadStart();


Version 2.0 of the .NET Framework adds three more constructors, all of which help overcome shortcomings of thread creation. The first allows the specification of a ParameterizedThreadStart instead of a simple ThreadStart, which lets an Object parameter be passed to the thread.

System::Threading::Thread(System::Threading::ParameterizedThreadStart ^start);

The remaining two additional constructors expand the first two by allowing the maximum stack size to be specified. Such fine-tuning of threads is beyond the scope of this book, but I thought I'd let you know it's available, just in case you need it.

Thread(ThreadStart ^start, Int32 maxStackSize);

Thread(ParameterizedThreadStart ^start, Int32 maxStackSize);

Caution The maxStackSize passed to the Thread constructor must be greater than 128K (131072) bytes or an ArgumentOutOfRangeException will be thrown.

One thing that may not be obvious when you first start working with threads is that creating an instance of the Thread object doesn’t cause the thread to start. The thread state after creating an instance of the thread is, instead, Unstarted. To get the thread to start, you need to call the Thread class’s Start() method. It kind of makes sense, don’t you think?

I think it's about time to look at some code. Take a look at the program in Listing 16-1, which creates two threads. The first thread executes a static method of a class, and the second executes a member method that is passed a parameter.

Listing 16-1. Starting Two Simple Threads

using namespace System;
using namespace System::Threading;

ref class MyThread
{
public:
    static void StaticThread();
    void NonStaticThread(Object ^name);
};

void MyThread::StaticThread()
{
    for (int i = 0; i < 50000001; i++)
    {
        if (i % 10000000 == 0)
            Console::WriteLine("Static Thread {0}", i.ToString());
    }
}

void MyThread::NonStaticThread(Object ^name)
{
    for (int i = 0; i < 50000001; i++)
    {