Temporal Doorway

Thread Basics


What is a thread?

A single CPU (central processing unit) computer can perform only one machine instruction at a time. One of the functions of a multitasking operating system is to create the illusion that the system is doing more than one thing at the same time. It does that by task-switching - that is, running some instructions from one program and then switching to run instructions from another, and eventually picking up with the first one where it left off.

As long as the two programs share no data, this is perfectly safe, and, just as television or an animation provides the illusion of movement by flashing single frames faster than the eye and mind can distinguish, time-slicing makes it look as if two or more programs are running at the same time.

Generally, CPU sharing and time-slicing are controlled by the operating system. Most operating systems, like Windows, distinguish between processes and threads.

  • A process (also referred to as "heavyweight") gets its own independent stack and heap memory areas. It is usually associated with a fairly complex data structure (once referred to as a "task control block") which the operating system uses to perform process-switching. Processes need to make a special effort to communicate or share.

  • A thread (also referred to as a "lightweight" process) uses the heap memory of the process which started it, but gets its own stack. It can directly access memory in the process. A thread cannot exist on its own, unlike a heavyweight process.

Often threads and processes are grouped together and referred to as tasks.

Each thread or process has a priority. When several threads or processes have the same priority, and they all have something to do, they each get a fixed slice of the available processor time. However, they can be displaced by a process or thread with a higher priority at any time. That is what makes it "preemptive multitasking" - a task can be preempted by a higher priority task and won't get to run again until the higher priority task runs out of things to do, terminates, or suspends (a voluntary pause in processing, often to wait for some event to occur).

Why use threads?

From the previous section, you can see that neither processes nor threads actually make your system run faster. Indeed, there is a cost to task switching at a number of levels.

  • Every task switch requires setup and teardown time. In addition, the operating system must execute for a brief period to perform the task-switch.

  • Modern CPUs use caches to improve instruction and data processing performance. In addition, processors often break down the execution of a single machine instruction into multiple steps and use a method called "pipelining" to decode and execute instructions in overlapping stages. And, finally, high speed computation requires the use of "registers" in the CPU. A task switch can force a cache flush, a pipeline flush, and a register spill. It takes time to reload caches, pipelines and registers - and, in many cases, data in caches and registers has to be copied back to RAM before new data is copied from RAM. All of this takes time.

On the other hand, processors and memory get faster every year. What doesn't get cheaper is programmer time.

In other words, processes and threads are a programming technique, intended to simplify the logic of a complex task, just like components and event driven programming. In addition, threads can provide an illusion of better performance, by carrying out tasks in the background while the rest of the program continues to accept commands and allow the user to perform work.

What this implies is that you need specific and well-justified reasons to use threads. Make sure before you start.


Creating threads with TThread

In C++ Builder, threads are created as instances of a class derived from TThread. The following shows the basic elements of a TThread-derived class, and the member functions you need to override. Note that a thread can have any desired internal data or additional functions. Also note that this thread class inlines the member functions, but that is not required.

  class SampleThread : public TThread
  {
  protected:
         void __fastcall Execute(void) // This is what gets run when the thread starts
         {
             while (!Terminated)
             {
                 // Wait for an event or perform processing

                 if (!Terminated) // This is optional but normal
                 {
                     // Carry out the processing the event requested
                 }
             }
         }

  public:
         __fastcall SampleThread(StartupData Whatever /* You can provide any data the thread needs to run properly */)
             : TThread(true) // Start the thread suspended
         {
             // Use the StartupData to initialize class instance data here

             FreeOnTerminate = true;
             Resume(); // Thread now runs
         }

         __fastcall ~SampleThread(void)
         {
             // Cleanup, deallocate, etc.
         }

         void __fastcall Terminate(void) // This is what you call to terminate the thread; use Suspend to pause it
         {
             TThread::Terminate(); // Sets the Terminated flag
             // If the main loop is waiting on an event, set the event here
         }
  };

Generally, you set up the constructor to receive any data the thread needs to start, and to allocate and set any private data. The Execute function body contains the needed processing, often in the form of a loop which waits for an event and performs a process. It also checks the Terminated flag, which, when set, causes it to fall out through the bottom of the Execute function - at which point the thread is actually terminated. Finally, the destructor is executed, and there you release any resources claimed by the thread.

TEvent, TCriticalSection and Synchronize

It is considered very impolite for a thread to be using the CPU when it has nothing to do. Windows Events are often used for the purpose of allowing a thread to wait for something to happen - like a byte to arrive at the serial port, or a request from another thread, or even a signal that a database has been updated. To see raw Windows events in action, see Reading the Serial Port From A Thread. TEvent simply encapsulates these capabilities.

TCriticalSection is used to declare an object which enables threads to control access to shared resources, so that only one thread can access those resources at a time. Once the first thread enters the critical section, subsequent threads attempting to execute code within the critical section (which requires them to execute TCriticalSection::Enter) are queued and do not proceed until the first thread is finished with the critical section (by calling TCriticalSection::Leave). Critical sections are serializers - they force threads to execute the code in the critical section in a predictable order - the order in which they arrive at the critical section. So they can be used to protect shared memory or to simply ensure that certain actions occur sequentially and completely for each thread.

Synchronize is a special function provided by TThread. It acts like a critical section for accessing forms controlled by the main thread (the thread which is represented by your Project Source.cpp file). You pass it a pointer to a function in your thread class and Synchronize executes it as if it were a function in the main thread. Only one thread at a time can execute in the main thread context, so Synchronize lets you serialize access to main thread resources like controls and tables - where parallel (and potentially interrupted) access by multiple threads could leave important data structures in an inconsistent (and thus crash-prone) state.


Threads are not terribly complex to set up, but understanding their implications requires careful thought. Be careful, and do not just use threads for the sake of using them.

Copyright © 2004 by Mark Cashman (unless otherwise indicated), All Rights Reserved