<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <title>C++ Tutorial: Multi-Threaded Programming - Terminology</title>
  <meta
 content="C++ Tutorial: Multi-Threaded Programming"
 name="description" />
  <meta
 content="C++ Tutorial, Multi-Threaded Programming, MultiThreading Programming, Terminology, deadlock, semaphore, mutex, race condition, monitor, mutual exclusion, Lock-Free Code, atomic operation"
 name="keywords" />
  <meta http-equiv="Content-Type"
 content="text/html; charset=ISO-8859-1" />
  <link type="text/css" rel="stylesheet" href="../images/bogostyleWidePre.css" />
</head>
<body>
<div id="page" align="center">
<div id="content" style="width: 800px;">
<div id="logo">
<div class="whitetitle" style="margin-top: 70px;">bogotobogo </div>
</div>
<div id="topheader">
<div class="headerbodytext" align="left"><br />
<strong>Bogotobogo</strong><br />
contact@bogotobogo.com </div>
<div class="smallgraytext" id="toplinks"><a href="../index.html">Home</a>
| <a href="../sitemap.html">Sitemap</a>
| <a href="../blog" target="_blank">Contact Us</a>
</div>
</div>
<div id="menu">
<div class="smallwhitetext" style="padding: 9px;" align="right"><a
 href="../index.html">Home</a>
| <a href="../about_us.html">About Us</a>
| <a href="../products.html">Products</a>
| <a href="../our_services.html">Our Services</a>
| <a href="../blog" target="_blank">Contact Us</a>
</div>
</div>
<div id="submenu">
<div class="smallgraytext" style="padding: 9px;" align="right">
<a href="../gif.html">Gif</a> 
| <a href="../java_applet.html">JavaApplet/Web Start</a>
| <a href="../flash.html">Flash</a>
| <a href="../shockwave.html">ShockWave</a>
| <a href="../svg.html">SVG</a>
| <a href="../iPhone.html">iPhone/iPad</a>
| <a href="../android.html">Android</a>
| <a href="../OnHTML5.html">HTML5</a>
| <a href="../Algorithms/algorithms.html">Algorithms</a>
| <a href="../News/NewsMain.html">News</a>
| <a href="../cplusplus/cpptut.html">C++</a>
| <a href="../Java/tutorial/on_java.html">Java</a>
| <a href="../php/phptut.html">PHP</a>
| <a href="../DesignPatterns/introduction.html">Design Patterns</a>
| <a href="../python/pytut.html">Python</a> 
| <a href="../CSharp/.netframework.html">C#</a>
| <a href="../forums.html">Forums</a> 
| <a href="../VisualBasicSQL/introduction.html">Visual Basic</a>
</div>
</div>

<div id="contenttext">

<!-- Use of this code assumes agreement with the Google Custom Search Terms of Service. -->
<!-- The terms of service are available at http://www.google.com/cse/docs/tos.html -->
<form name="cse" id="searchbox_demo" action="http://www.google.com/cse">
  <input type="hidden" name="cref" value="" />
  <input type="hidden" name="ie" value="utf-8" />
  <input type="hidden" name="hl" value="" />
  <input name="q" type="text" size="40" />
  <input type="submit" name="sa" value="Search" />
</form>
<script type="text/javascript" src="http://www.google.com/cse/tools/onthefly?form=searchbox_demo&lang="></script>

<br />
<br />
<br />
<br />
<div style="padding: 10px;"><span class="titletext">C++ Tutorial<br />
Multi-Threaded Programming <br />
Terminology</span></div>
<img src="../images/cplusplus/cpp_logo.jpg" alt="cplusplus logo"/>

<div class="bodytext" style="padding: 12px;" align="justify"> 
<div class="subtitle_2nd" id="FullList">Full List of C++ Tutorials</div>
<ul>
   <li><a href="cpptut.html">C++ Home</a> </li>
   <li><a href="string.html">String</a> </li>
   <li><a href="constructor.html">Constructor</a> </li>
   <li><a href="operatoroverloading.html">Operator Overloading</a> </li>
   <li><a href="virtualfunctions.html">Virtual Functions</a></li>
   <li><a href="dynamic_cast.html">Dynamic Cast Operator</a></li>
   <li><a href="typecast.html">Type Cast Operators</a></li>
   <li><a href="autoptr.html">Class auto_ptr</a></li>   
   <li><a href="templates.html">Templates</a></li>
   <li><a href="references.html">References for Built-in Types</a></li>
   <li><a href="valuevsreference.html">Pass by Value vs. Pass by Reference</a></li>
   <li><a href="memoryallocation.html">Memory Allocation</a></li>
   <li><a href="friendclass.html">Friend Functions and Friend Classes</a></li>
   <li><a href="functors.html">Functors (Function Objects)</a></li>
   <li><a href="statics.html">Static Variables and Static Class Members</a></li>
   <li><a href="exceptions.html">Exceptions</a></li>
   <li><a href="stackunwinding.html">Stack Unwinding</a></li>
   <li><a href="pointers.html">Pointers</a></li>
   <li><a href="pointers2.html">Pointers II</a></li>
   <li><a href="pointers3.html">Pointers III</a></li>
   <li><a href="assembly.html">Taste of Assembly</a></li>
   <li><a href="smallprograms.html">Small Programs</a></li>
   <li><a href="linkedlist.html">Linked List Examples</a></li>
   <li><a href="binarytree.html">Binary Tree Example Code</a></li>
   <li><a href="stl.html">Standard Template Library (STL) I</a></li>
   <li><a href="stl2.html">Standard Template Library (STL) II - Maps</a></li>
   <li><a href="stl3_iterators.html">Standard Template Library (STL) III - Iterators</a></li>
   <li><a href="slicing.html">Object Slicing and Virtual Table</a></li>
   <li><a href="this_pointer.html">The this Pointer</a></li>
   <li><a href="stackunwinding.html">Stack Unwinding</a></li>
   <li><a href="upcasting_downcasting.html">Upcasting and Downcasting</a></li>
   <li><a href="object_returning.html">Object Returning</a></li>
   <li><a href="private_inheritance.html">Private Inheritance</a></li>
   <li><a href="cplusplus_keywords.html">C++_Keywords</a></li>
   <li><a href="multithreaded.html">Multi-Threaded Programming - Terminology</a></li>
   <li><a href="multithreaded2A.html">Multi-Threaded Programming II - Native Thread for Win32 (A) </a></li>
   <li><a href="multithreaded2B.html">Multi-Threaded Programming II -  Native Thread for Win32 (B) </a></li>
   <li><a href="multithreaded2C.html">Multi-Threaded Programming II -  Native Thread for Win32 (C) </a></li>
   <li><a href="multithreaded2.html">Multi-Threaded Programming II - C++ Thread for Win32</a></li>
   <li><a href="multithreaded3.html">Multi-Threaded Programming III - C++ Class Thread for Pthreads</a></li>
   <li><a href="multithreadedDebugging.html">Multithread Debugging</a></li>
   <li><a href="embeddedSystemsProgramming.html">Embedded Systems Programming</a></li>
   <li><a href="boost.html">Boost</a></li>

   <li>Programming Questions and Solutions
    <ul>
       <li><a href="quiz_strings_arrays.html">Strings and Arrays</a></li>
       <li><a href="quiz_linkedlist.html">Linked List</a></li>
       <li><a href="quiz_recursion.html">Recursion</a></li>
       <li><a href="quiz_bit_manipulation.html">Bit Manipulation</a> </li>
       <li><a href="google_interview_questions.html">140 Google Interview Questions</a> </li>
    </ul>
   </li>
</ul>
</div>
<br />


<div class="bodytext" style="padding: 12px;" align="justify">
<div class="subtitle" id="mth1">Multi-Threaded Programming - Terminology</div>
<br />

<div class="subtitle_2nd" id="thread">Thread</div>
<p>More precisely, it is a <strong>thread of execution</strong>, which is the smallest unit of processing:</p>
<ol>
	<li>It is scheduled by an OS.</li>
	<li>In general, it is contained in a process, so multiple threads can exist within the same process.</li>
	<li>It shares resources with its process: 
the memory, code (instructions), 
and global variables (context - the values that its variables reference at any given moment).</li>
	<li>On a single processor, the threads take turns via time-division multiplexing. On a multiprocessor, threads can run at the same time, with each processor/core running a particular thread.</li>
</ol>
<br />
<br />

<div class="subtitle_2nd" id="threadvsprocesses">Threads vs. Processes</div>
<p>Processes and threads are related to each other but are fundamentally different.</p>
<p>A <strong>process</strong> can be thought of as an instance of a program in execution. Each process is an independent entity to which system resources such as CPU time and memory are allocated, and each process is executed in a separate address space. If one process wants to access another process's resources, inter-process communication mechanisms such as pipes, files, or sockets have to be used.</p>
<p>A <strong>thread</strong> uses the address space of its process, and a process can have multiple threads. A key difference between processes and threads is that multiple threads <strong>share</strong> parts of their state. Typically, multiple threads may read and write the same memory (whereas no process can directly access the memory of another process). However, each thread still has its own stack of activation records and its own copy of the CPU registers, including the stack pointer and the program counter, which together describe the state of the thread's execution. Other threads in the same process can nevertheless read and write that stack memory.</p>
<p>A thread is a particular execution path of a process. When one thread modifies a process resource, the change is immediately visible to sibling threads. </p>
<ol>
	<li>Processes are independent of one another, while threads exist within a process.</li>
<li>Processes have separate address spaces while threads share their address spaces.</li>
<li>Processes communicate with each other through inter-process communication.</li>
<li>Processes carry considerable state (e.g., ready, running, waiting, or stopped)  information, whereas multiple threads within a process share state as well as memory and other resources.</li>
<li><strong>Context switching</strong> between threads in the same process is typically faster than context switching between processes.</li>
<li><strong>Multithreading</strong> has some advantages over <strong>multiple processes</strong>. Threads require less overhead to manage than processes, and intraprocess thread communication is less expensive than interprocess communication.</li>
<li><strong>Multiple-process</strong> concurrent programs do have one advantage: each process can execute on a different machine (a <strong>distributed program</strong>). Examples of distributed programs are file servers (NFS), file transfer clients and servers (FTP), remote log-in clients and servers (Telnet), groupware programs, and Web browsers and servers. </li>
</ol>

<br />
<br />


<div class="subtitle_2nd" id="AdvantageofMultiThreading">Advantage of Multi-Threading</div>
<ol>
	<li>Faster on a multi-CPU system.</li>
	<li>Even on a single-CPU system, an application can remain responsive by using a worker thread that runs concurrently with the main thread.</li>
</ol>

<br />
<br />


<div class="subtitle_2nd" id="Identifying">Identifying Multithread Opportunities</div>
<p>So, multithreading is a good thing. How can we identify multithreading opportunities in code?</p>
<ol>
   <li>We need runtime profile data of our application. Then, we can identify the bottlenecks in the code.</li>
   <li>Examine the region, and check for dependencies. Then, determine whether the dependencies can be broken into either
        <ul>
          <li>multiple parallel tasks, or</li>
          <li>a loop over multiple parallel iterations.</li>
        </ul>
   </li>
   </li>
   <li>At this stage, we may consider a different algorithm.</li>
   <li>We need to estimate the overhead and performance gains. Will it give us linear scaling with the number of threads? 
   </li>
   <li>If the scaling does not look promising, we may have to broaden the scope of our analysis.</li>
 
</ol>
<br />
<br />


<div class="subtitle_2nd" id="ContextSwitch">Context Switch</div>
<p>Switching the CPU from one process or thread to another is called <strong>context switch</strong>. It requires saving the state of the old process or thread and loading the state of the new one. Since there may be several hundred context switches per second, context switches can potentially add significant overhead to an execution.</p>
<br />
<br />


<div class="subtitle_2nd" id="RaceCondition">Race Condition</div>
<p>This happens when a critical section is not executed <strong>atomically</strong>:<br />
the outcome of the threads' execution depends on shared state. For example, suppose two threads share a variable <strong>i</strong> and each tries to increment it by 1.
The final value depends on when each thread reads the variable and when it writes it back.</p>
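<p>The lost-update scenario can be sketched in a few lines of C++11 (the counter names and iteration counts below are our own illustration, not part of any particular library):</p>

```cpp
#include <atomic>
#include <thread>

int unsafe_counter = 0;            // plain int: concurrent increments can be lost
std::atomic<int> safe_counter{0};  // atomic int: each increment is indivisible

void work() {
    for (int i = 0; i < 100000; ++i) {
        ++unsafe_counter;  // load, add, store: three steps another thread can interleave
        ++safe_counter;    // a single atomic read-modify-write
    }
}

int run() {
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    return safe_counter.load();  // always 200000; unsafe_counter may end up smaller
}
```

<p>Here <strong>unsafe_counter</strong> can lose increments when the two threads interleave their load-add-store sequences, while the atomic counter never does.</p>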
<br />
<br />



<div class="subtitle_2nd" id="Deadlock">Deadlock</div>
<p>Two or more competing actions are each waiting for the other to finish.
No thread can change its state.</p>
<p>In other words, deadlock occurs when some threads are blocked to acquire resources held by other blocked threads. A deadlock may arise due to dependence between two or more threads that request resources and two or more threads that hold those resources. </p>
<p>Example: Alphonse and Gaston are friends, and great believers in courtesy. A strict rule of courtesy is that when you bow to a friend, you must remain bowed until your friend has a chance to return the bow. Unfortunately, this rule does not account for the possibility that two friends might bow to each other at the same time.</p>
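<p>In C++11 terms, the bow-to-each-other pattern corresponds to two threads locking two mutexes in opposite orders. One standard remedy, sketched below with our own names, is <strong>std::lock</strong>, which acquires several mutexes together using a deadlock-avoidance algorithm:</p>

```cpp
#include <mutex>

std::mutex m1, m2;

// The hazard: thread A locks m1 then m2 while thread B locks m2 then m1.
// Each ends up holding one mutex and waiting forever for the other.
// std::lock acquires both mutexes deadlock-free, so the order in which
// different callers name them no longer matters.
void update_both(int& a, int& b) {
    std::lock(m1, m2);                                    // lock both at once
    std::lock_guard<std::mutex> g1(m1, std::adopt_lock);  // adopt ownership
    std::lock_guard<std::mutex> g2(m2, std::adopt_lock);
    ++a;
    ++b;
}  // both guards release their mutexes here
```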



<br />
<br />



<div class="subtitle_2nd" id="Livelock">Livelock</div>
<p>A situation in which a process is not making progress. Example: two people meet on a narrow path, and both repeatedly try to yield to the other. Both keep changing their states, but neither makes progress.</p>
<p>A thread often acts in response to the action of another thread. If the other thread's action is also a response to the action of another thread, then livelock may result. As with deadlock, livelocked threads are unable to make further progress. However, the threads are not blocked - they are simply too busy responding to each other to resume work.</p>

<br />
<br />



<div class="subtitle_2nd" id="Starvation">Starvation</div>
<p>A situation in which a process is perpetually denied the resources it needs. Without those resources, the program cannot finish.</p>
<p>Example: an object provides a synchronized method that often takes a long time to return. If one thread invokes this method frequently, other threads that also need frequent synchronized access to the same object will often be blocked. </p>
<br />
<br />


<div class="subtitle_2nd" id="DiningPhilosophersProblem">Dining Philosophers Problem</div>
<p>The dining philosophers problem is summarized as five philosophers sitting at a table doing one of two things: eating or thinking. While eating, they are not thinking, and while thinking, they are not eating. The five philosophers sit at a circular table with a large bowl of spaghetti in the center. A fork is placed in between each pair of adjacent philosophers, and as such, each philosopher has one fork to his left and one fork to his right. As spaghetti is difficult to serve and eat with a single fork, it is assumed that a philosopher must eat with two forks. Each philosopher can only use the forks on his immediate left and immediate right.</p>
<img src="images/multithread/Dining_philosophers.png" alt="Dining_philosophers"/>
<p>source <a href="http://en.wikipedia.org/wiki/Dining_philosophers_problem" target="_blank">wiki</a></p>
<p>
The philosophers never speak to each other, which creates a dangerous possibility of <strong>deadlock</strong> when every philosopher holds a left fork and waits perpetually for a right fork (or vice versa).</p>
<p>
Originally used as a means of illustrating the problem of deadlock, this system reaches deadlock when there is a 'cycle of unwarranted requests'. In this case philosopher P1 waits for the fork grabbed by philosopher P2 who is waiting for the fork of philosopher P3 and so forth, making a circular chain.</p>
<p>
<strong>Starvation</strong> (and the pun was intended in the original problem description) might also occur independently of deadlock if a philosopher is unable to acquire both forks because of a timing problem. For example there might be a rule that the philosophers put down a fork after waiting five minutes for the other fork to become available and wait a further five minutes before making their next attempt. This scheme eliminates the possibility of deadlock (the system can always advance to a different state) but still suffers from the problem of <strong>livelock</strong>. If all five philosophers appear in the dining room at exactly the same time and each picks up the left fork at the same time the philosophers will wait five minutes until they all put their forks down and then wait a further five minutes before they all pick them up again.</p>
<p>
In general the dining philosophers problem is a generic and abstract problem used for explaining various issues which arise in problems which hold <strong>mutual exclusion</strong> as a core idea. The various kinds of failures these philosophers may experience are analogous to the difficulties that arise in real computer programming when multiple programs need exclusive access to shared resources. These issues are studied in the branch of <strong>Concurrent Programming</strong>. The original problems of Dijkstra were related to external devices like tape drives. However, the difficulties studied in the Dining Philosophers problem arise far more often when multiple processes access sets of data that are being updated. Systems that must deal with a large number of parallel processes, such as operating system kernels, use thousands of locks and synchronizations that require strict adherence to methods and protocols if such problems as deadlock, starvation, or data corruption are to be avoided.</p>
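<p>One classic way out, sketched below in C++11, is to impose a global lock order: every philosopher picks up the lower-numbered fork first, which breaks the circular chain of requests. The implementation details (meal counts, fork mutexes) are our own illustration:</p>

```cpp
#include <algorithm>
#include <array>
#include <mutex>
#include <thread>

// Five forks, one mutex each; meals[i] counts how often philosopher i ate.
std::array<std::mutex, 5> forks;
std::array<int, 5> meals{};

// A global lock order (always grab the lower-numbered fork first) breaks
// the circular wait, so no deadlock can form.
void philosopher(int id) {
    int left = id, right = (id + 1) % 5;
    int first = std::min(left, right), second = std::max(left, right);
    for (int i = 0; i < 10; ++i) {
        std::lock_guard<std::mutex> f(forks[first]);   // lower-numbered fork
        std::lock_guard<std::mutex> s(forks[second]);  // then the higher one
        ++meals[id];                                   // eat
    }
}

void dine() {
    std::array<std::thread, 5> diners;
    for (int i = 0; i < 5; ++i) diners[i] = std::thread(philosopher, i);
    for (auto& t : diners) t.join();
}
```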
<br />
<br />


<div class="subtitle_2nd" id="Mutex">Mutex (Mutual Exclusion)</div>
<p>There are <strong>two</strong> types of <strong>synchronization</strong>:</p>
<ol>
	<li><strong>Mutual Exclusion</strong><br />
	Mutual exclusion ensures that a group of atomic actions (a <strong>critical section</strong>) cannot be executed by more than one thread at a time.</li>
	<br />
	<li><strong>Condition Synchronization</strong><br />
	This ensures that the state of a program satisfies a particular condition before some action occurs. For example, in the bank account problem, there is a need for both condition synchronization and mutual exclusion. The balance must not be empty before method <strong>withdraw()</strong> is executed, and mutual exclusion is required to ensure that <strong>withdraw()</strong> is not executed by more than one thread at a time.</li>
</ol>
	
<p>Mutual exclusion is used to avoid simultaneous use of a common resource, such as a global variable, by critical sections. A critical section is a piece of code in which a process or thread accesses a common resource.</p>
<p>Mutex locks ensure that only one thread has access to a resource at a time. If the lock is held by another thread, the thread attempting to acquire the lock will sleep until the lock is released. A timeout can also be specified so that lock acquisition will fail if the lock does not become available within the specified interval. One problem with this approach is that it can <strong>serialize</strong> a program: the multithreaded program ends up with only a single executing thread at a time, which stops the program from taking advantage of multiple cores.</p>
<p>If multiple threads are waiting for the lock, the order in which the waiting threads will acquire the mutex is not guaranteed. <strong>Mutexes can be shared between processes</strong>. In comparison, <strong>critical sections</strong> cannot be shared between processes; consequently, performance overhead of critical sections is lower.</p>
<p><strong>Spin locks</strong> are essentially mutex locks. The difference between a mutex lock and a spin lock is that a thread waiting to acquire a spin lock keeps trying to acquire the lock without sleeping, whereas a <strong>mutex lock</strong> may sleep if it is unable to acquire the lock. The advantage of spin locks is that they acquire the lock as soon as it is released, while a mutex lock needs to be woken by the OS before it can get the lock. The disadvantage is that a spin lock will spin on a virtual CPU, monopolizing that resource, whereas a mutex lock will sleep and free the CPU for another thread to use. So, in practice, mutex locks are often implemented as a hybrid of spin locks and more traditional mutex locks.</p>
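<p>A small C++11 sketch of a mutex lock with a timeout, as described above, using <strong>std::timed_mutex</strong> (the account example and its names are ours):</p>

```cpp
#include <chrono>
#include <mutex>

std::timed_mutex account_lock;
int balance = 100;

// Withdraw under a lock, but give up if the lock cannot be acquired in time.
bool withdraw(int amount) {
    // try_lock_for returns false if the mutex stays busy past the interval
    if (!account_lock.try_lock_for(std::chrono::milliseconds(50)))
        return false;  // lock acquisition timed out
    std::lock_guard<std::timed_mutex> g(account_lock, std::adopt_lock);
    if (balance < amount)
        return false;  // not enough funds; the guard still unlocks on return
    balance -= amount;
    return true;
}
```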
<br />
<br />


<div class="subtitle_2nd" id="criticalsection">Critical Section</div>
<p>A code segment that accesses shared variable (or other shared resources) and that has to be executed as an atomic action is referred to as a <strong>critical section</strong>.</p>
<pre>
while (true) {
	<i>entry-section</i>
	critical section 	//accesses shared variables
	<i>exit-section</i>
	noncritical section
}
</pre>
<p>The entry- and exit-sections that surround a critical section must satisfy the following correctness requirements:</p>
<ul>
	<li><strong>Mutual exclusion</strong> <br />
	When a thread is executing in its critical section, no other threads can be executing in their critical sections.</li>
	<li><strong>Progress</strong> <br />
	If no thread is executing in its critical section and there are threads that wish to enter their critical sections, only the threads that are executing in their entry- or exit-sections can participate in the decision about which thread will enter its critical section next, and this decision cannot be postponed indefinitely.</li>
	<li><strong>Bounded waiting</strong> <br />
	After a thread makes a request to enter its critical section, there is a bound on the number of times that other threads are allowed to enter their critical sections before this thread's request is granted.</li>
</ul>	
<p>Critical sections are similar to mutex locks. The difference is that <strong>critical sections cannot be shared between processes</strong>. Therefore, their performance overhead is lower. Critical sections also have a different interface from that provided by mutex locks. Critical sections do not take a timeout value but do have an interface that allows the calling thread to try to enter the critical section. If this fails, the call immediately returns, enabling the thread to continue execution. They also have the facility of spinning for a number of iterations before the thread goes to sleep in the situation where the thread is unable to enter the critical section.</p>

<br />
<br />


<div class="subtitle_2nd" id="slimreader">Slim Reader/Writer Locks</div>
<p>Slim reader/writer locks provide support for the situation where there are multiple threads that read shared data, but on rare occasions the shared data needs to be written. Data that is being read can be simultaneously accessed by multiple threads without concern for problems with corruption of the data being shared. However, only a single thread can have access to update the data at any one time, and other threads cannot access that data during the write operation. This is to prevent threads from reading incomplete or corrupted data that is in the process of being written. Slim reader/writer locks cannot be shared across processes.</p>
<br />
<br />

<div class="subtitle_2nd" id="Semaphores">Semaphores</div>
<p><strong>Semaphores</strong> are counters that can be either incremented or decremented. They can be used in situations where there is a finite limit to a resource and a mechanism is needed to impose that limit. An example is a buffer that has a fixed size. Whenever an element is added to the buffer, the number of available positions is decreased. Every time an element is removed, the number available is increased.</p>
<p>Semaphores can also be used to mimic mutexes. If there is only one element in the semaphore, then it can be either acquired or available, exactly as a mutex can be either locked or unlocked.</p>
<p>Semaphores will also signal or wake up threads that are waiting on them to use available resources. So, they can be used for signaling between threads.</p>
<p><strong>Semaphores</strong> are used to provide <strong>mutual exclusion</strong> and <strong>condition synchronization</strong>.</p> 
<p>A semaphore is a variable: either a <strong>binary</strong> semaphore (true or false) or a <strong>counting</strong> semaphore. Semaphores are used to prevent race conditions. They provide a means of restricting access to a finite set of resources or of signaling that a resource is available. As is the case with mutex locks, semaphores can be shared across processes.</p>
<ul>
	<li><strong>Counting Semaphore</strong> <br />
	<br />
	A <strong>counting semaphore</strong> is a synchronization object  that is initialized with an integer value and then accessed through two operations, named <strong>P</strong> and <strong>V</strong>, meaning <strong>down, up</strong> or <strong>decrement, increment</strong>, <strong>wait, signal</strong>, respectively (Dijkstra).
<pre>
class countingSemaphore {
public:
	countingSemaphore(int initialPermits) {
		permits = initialPermits;
	}
	void P();
	void V();
private:
	int permits;
};

void countingSemaphore::P() {
	if (permits > 0)
		--permits;	// take a permit from the pool
	else			// the pool is empty, so
		<i>wait</i> until <i>permits</i> becomes positive, then decrement <i>permits</i> by one.
}

void countingSemaphore::V() {
	++permits;		// return a permit to the pool
}

</pre>
It is helpful to interpret a <strong>counting semaphore</strong> as having a <strong>pool of permits</strong>. A thread calls method <strong>P()</strong> to request a permit. If the pool is empty, the thread waits until a permit becomes available. A thread calls method <strong>V()</strong> to return a permit to the pool. A counting semaphore <strong>s</strong> is declared and initialized using
<pre>
countingSemaphore s(1);
</pre>
The initial value, in this case 1, represents the initial number of permits in the pool. For a counting semaphore <strong>s</strong>, at any time, the following relation holds:
<pre>
(the initial number of permits) + (the number of completed <i>s.V()</i> operations)
 >= (the number of completed <i>s.P()</i> operations)
</pre>
This relation is referred to as the <strong>invariant</strong> for semaphore <strong>s</strong>. <strong>A counting semaphore relies on its semaphore invariant to define its behavior</strong>.
	</li>
	<br /> <br />
	<li><strong>Binary Semaphore</strong> <br />
	
	A semaphore named <strong>mutex</strong> is initialized with the value of <strong>1</strong>. The calls to <strong>mutex.P()</strong> and <strong>mutex.V()</strong> create a critical section:
<pre>
	Thread1			Thread2
	----------------------------------
	mutex.P();		mutex.P();
	/*critical section*/ 	/*critical section*/
	mutex.V();		mutex.V();
</pre>
Due to the initial value <strong>1</strong> for <strong>mutex</strong> and the placement of <strong>mutex.P()</strong> and <strong>mutex.V()</strong> around the critical section, a <strong>mutex.P()</strong> operation will be completed first, then <strong>mutex.V()</strong>, and so on. For this pattern, we can let <strong>mutex</strong> be a counting semaphore, or we can use a more restrictive type of semaphore called a <strong>binary semaphore</strong>.
	</li>
</ul>
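<p>The P() and V() pseudocode above can be completed into a working C++11 <strong>countingSemaphore</strong> using a mutex and a condition variable (this particular implementation is our own sketch):</p>

```cpp
#include <condition_variable>
#include <mutex>

class countingSemaphore {
public:
    explicit countingSemaphore(int initialPermits) : permits(initialPermits) {}
    void P() {                                        // wait / down / decrement
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return permits > 0; });  // sleep while pool empty
        --permits;                                    // take a permit
    }
    void V() {                                        // signal / up / increment
        std::lock_guard<std::mutex> lk(m);
        ++permits;                                    // return a permit
        cv.notify_one();                              // wake one waiter, if any
    }
private:
    std::mutex m;
    std::condition_variable cv;
    int permits;
};
```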
<br />
<br />


<div class="subtitle_2nd" id="Monitors">Monitors</div>
<p><strong>Semaphores</strong> were defined before the introduction of programming concepts such as data encapsulation and information hiding. In semaphore-based programs, shared variables and the semaphores that protect them are global variables. This causes shared variable and semaphore operations to be distributed throughout the program.</p>
<p>Since <strong>P</strong> and <strong>V</strong> operations are used for both mutual exclusion and condition synchronization, it is difficult to determine how a semaphore is being used without examining all the code.</p>
<p>Monitors were invented to overcome these problems.</p>
<p>A monitor encapsulates shared data, all the operations on the data, and any synchronization required for accessing the data. A monitor has separate constructs for mutual exclusion and condition synchronization. In fact, mutual exclusion is provided automatically by the monitor's implementation, freeing the programmer from the burden of implementing critical sections.</p>
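<p>A monitor can be approximated in C++ by a class that hides its lock and condition variable behind its public methods. C++ has no built-in monitor construct, so the bounded counter below is a hand-rolled sketch of the idea:</p>

```cpp
#include <condition_variable>
#include <mutex>

// Monitor style: the data, its operations, and all synchronization live
// inside one class, so callers never touch a lock directly.
class BoundedCounter {
public:
    explicit BoundedCounter(int limit) : limit(limit) {}
    void increment() {                       // blocks while count == limit
        std::unique_lock<std::mutex> lk(m);
        notFull.wait(lk, [this] { return count < limit; });
        ++count;
    }
    int get() {
        std::lock_guard<std::mutex> lk(m);
        return count;
    }
private:
    std::mutex m;                      // mutual exclusion for every method
    std::condition_variable notFull;   // condition synchronization
    int count = 0;
    const int limit;
};
```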
<br />
<br />
<div class="subtitle_2nd" id="semaphorevsmonitor">Synchronization - Semaphore vs. Monitor</div>
<p>In order to avoid data corruption and other problems, applications must control how threads access shared resources. This is referred to as thread synchronization. The fundamental thread synchronization constructs are monitors and semaphores. Which one should we use? It depends on what the system or language supports.</p>
<ul>
	<li>A <strong>monitor</strong> is a set of routines that are protected by a mutual exclusion lock. A thread cannot execute any of the routines in the monitor until it acquires the lock, which means that only one thread at a time can execute within the monitor. All other threads must wait for the currently executing thread to release the lock. A thread can suspend itself in the monitor and wait for an event to occur, in which case another thread is given the chance to enter the monitor. At some point the suspended thread is notified that the event has occurred, allowing it to awake and reacquire the lock as soon as possible.</li>
	<br />
	<li>A <strong>semaphore</strong> is a simpler construct, just a lock that protects a shared resource. Before using a shared resource, the application must acquire the lock. Any other thread that tries to use the resource is blocked until the owning thread releases the lock, at which point one of the waiting threads acquires the lock and is unblocked. This is the most basic kind of semaphore, a mutual exclusion, or mutex, semaphore. There are other semaphore types, such as counting semaphores (which let a maximum of n threads access a resource at any given time) and event semaphores (which notify one or all waiting threads that an event has occurred), but they all work in much the same way.<br />
Monitors and semaphores are equivalent, but monitors are simpler to use because they handle all details of lock acquisition and release. When using semaphores, an application must be very careful to release any locks a thread has acquired when it terminates. Otherwise, no other thread that needs the shared resource can proceed. In addition, every routine that accesses the shared resource must explicitly acquire a lock before using the resource, something that is easily forgotten when coding. Monitors always and automatically acquire the necessary locks.</li>
</ul>
<br />
<br />

<div class="subtitle_2nd" id="ThreadSafe">Thread Safe</div>
<p>Code is thread-safe if it functions correctly during concurrent execution by multiple threads.</p>
<p>To <strong>check</strong> whether a piece of code is thread-safe, look for:</p>
<ol>
	<li>Access to global variables.</li>
	<li>Allocating/reallocating/freeing resources that have global scope.</li>
	<li>Indirect access through handles or pointers.</li>
</ol>
<p>To <strong>achieve</strong> a thread safety. </p>
<ol>
	<li>Atomic operations - available runtime library (machine language instructions).</li>
	<li>Mutex </li>
	<li>Using Re-entrancy.</li>
</ol>
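<p>As a minimal sketch of the mutex approach (C++11; the counter and function names are our own), two threads increment a shared global variable without losing updates:</p>
<pre>
#include &lt;iostream&gt;
#include &lt;mutex&gt;
#include &lt;thread&gt;

int counter = 0;                 // shared global: needs protection
std::mutex counter_mutex;

void safe_increment() {
    std::lock_guard&lt;std::mutex&gt; lock(counter_mutex); // released when lock goes out of scope
    ++counter;
}

int main() {
    std::thread t1([] { for (int i = 0; i &lt; 10000; ++i) safe_increment(); });
    std::thread t2([] { for (int i = 0; i &lt; 10000; ++i) safe_increment(); });
    t1.join();
    t2.join();
    std::cout &lt;&lt; counter &lt;&lt; std::endl;   // always 20000
    return 0;
}
</pre>
<p>Without the mutex, the two read-modify-write sequences could interleave and some increments would be lost.</p>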
<br />
<br />


<div class="subtitle_2nd" id="Reentrancy">Re-entrancy</div>
<p>Code is re-entrant if it can be safely called again before a previous invocation has completed, for example concurrently from another thread.</p>
<ul>
<li><strong>Non-reentrant code:</strong><br />
<pre>
int g_var = 1;

int f(){
  g_var = g_var + 2;
  return g_var;
}
int g(){
  return f() + 2;
}
</pre>
<p>
If two concurrent threads call f(), both read and write the shared g_var, so the result depends on the timing of each thread's execution (a race condition).</p></li>
<li>
<strong>Re-entrant code:</strong><br />
<pre>
int f(int i) {
  return i + 2;
}
int g(int i) {
  return f(i) + 2;
}
</pre>
</li>
</ul>
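<p>Because the re-entrant version touches only its own stack frame and arguments, concurrent calls cannot interfere with each other. A quick check with two threads (our own test harness around the functions above):</p>
<pre>
#include &lt;cassert&gt;
#include &lt;thread&gt;

int f(int i) { return i + 2; }
int g(int i) { return f(i) + 2; }

int main() {
    int r1 = 0, r2 = 0;
    // Each call works entirely on its own stack frame: no shared state.
    std::thread t1([&amp;r1] { r1 = g(1); });
    std::thread t2([&amp;r2] { r2 = g(10); });
    t1.join();
    t2.join();
    assert(r1 == 5 &amp;&amp; r2 == 14);  // deterministic regardless of scheduling
    return 0;
}
</pre>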
<br />
<br />


<div class="subtitle_2nd" id="LockFreeCode">Lock-Free Code</div>
<p>An <strong>atomic operation</strong> is one that either completes entirely or has no effect at all. It cannot produce a <strong>bad</strong> value or let other threads on the system observe a transient value. An example is an atomic increment, in which the calling thread replaces <strong>n</strong> with <strong>n+1</strong> in a single indivisible step. This may look trivial, but the operation can involve several steps:</p>
<pre>
Load initial value to register
Increment the value
Store the new value back to memory
</pre>
<p>So, during these three steps another thread could come in, interfere, and replace the value with a new one, creating a data race.</p>
<p>Typically, hardware provides support for a range of atomic operations. Atomic operations are often used to enable the writing of <strong>lock-free</strong> code. A lock-free implementation would not rely on a mutex lock to protect access. Instead, it would use a sequence of operations that would perform the operation without having to acquire an explicit lock. This can be higher performance than controlling access with a lock. </p>
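<p>With C++11's <strong>std::atomic</strong>, the increment above can be written lock-free: compare_exchange_weak attempts to swap <strong>n</strong> for <strong>n+1</strong> and simply retries if another thread got there first. An illustrative sketch (names are our own):</p>
<pre>
#include &lt;atomic&gt;
#include &lt;iostream&gt;
#include &lt;thread&gt;

std::atomic&lt;int&gt; counter(0);

// Lock-free increment: load, compute, and try to publish with a
// compare-and-swap. If another thread changed the value in between,
// compare_exchange_weak reloads 'expected' and we retry.
void lock_free_increment() {
    int expected = counter.load();
    while (!counter.compare_exchange_weak(expected, expected + 1)) {
        // 'expected' now holds the current value; loop and try again
    }
}

int main() {
    std::thread t1([] { for (int i = 0; i &lt; 10000; ++i) lock_free_increment(); });
    std::thread t2([] { for (int i = 0; i &lt; 10000; ++i) lock_free_increment(); });
    t1.join();
    t2.join();
    std::cout &lt;&lt; counter.load() &lt;&lt; std::endl;  // always 20000, no mutex involved
    return 0;
}
</pre>
<p>(In practice a plain counter.fetch_add(1) does the same job; the explicit loop is shown to make the compare-and-swap retry visible.)</p>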
<br />
<br />


<div class="subtitle_2nd" id="ContextSwitch">Context Switch</div>
<p>A context switch is the work (and time) spent switching the processor between two processes, e.g., bringing a waiting process into execution and moving the running process into a waiting or terminated state. This happens constantly in multitasking: the OS must load the saved state information of the incoming process into memory and save the state information of the outgoing process.</p>
<br />
<br />



<div class="subtitle_2nd" id="Socket">Socket</div>
<p>To connect to another machine, we need a <strong>socket</strong> connection. By the way, what's a connection? A <strong>relationship</strong> between two machines, where <strong>two pieces of software know about each other</strong>. Those two pieces of software know how to communicate with each other; in other words, they know how to send <strong>bits</strong> to each other.<br />
A socket connection means the two machines have information about each other, including <strong>network location (IP address)</strong> and <strong>TCP port</strong>.</p>
<p>There are several different types of socket that determine the structure of the transport layer. The most common types are <strong>stream</strong> sockets and <strong>datagram</strong> sockets.</p>
<ul>
	<li><strong>Stream Sockets</strong><br />
	Stream sockets provide <strong>reliable two-way</strong> communication similar to when we call someone on the phone. One side initiates the connection to the other, and after the connection is established, either side can communicate to the other.<br />
In addition, there is immediate confirmation that what we said actually reached its destination. <br />
Stream sockets use the <strong>Transmission Control Protocol (TCP)</strong>, which sits at the transport layer of the Open Systems Interconnection (OSI) model. The data is transmitted in packets, and TCP ensures that the packets arrive without errors and in sequence. <br />
Web servers, mail servers, and their respective client applications all use TCP and stream sockets to communicate.</li><br />
	<li><strong>Datagram Sockets</strong><br />
	Communicating with a datagram socket is more like mailing a letter than making a phone call: each message travels <strong>one-way</strong> on its own, and delivery is <strong>unreliable</strong>. <br />
If we mail several letters, we can't be sure they arrive in the same order, or even that they reach their destination at all. Datagram sockets use the <strong>User Datagram Protocol (UDP)</strong>. There is no real connection, just a basic method for sending data from one point to another.<br />
Datagram sockets and UDP are commonly used in networked games and streaming media.</li>
</ul>	
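<p>A minimal stream-socket round trip can be sketched with the POSIX sockets API (plus a C++11 thread for the client side). The program below talks to itself over the loopback interface so it is self-contained; all names are our own, and error checking is omitted for brevity:</p>
<pre>
#include &lt;arpa/inet.h&gt;
#include &lt;iostream&gt;
#include &lt;netinet/in.h&gt;
#include &lt;string&gt;
#include &lt;sys/socket.h&gt;
#include &lt;thread&gt;
#include &lt;unistd.h&gt;

// Create a listening TCP socket on loopback, connect to it from a second
// thread, and return the text the server side received.
std::string tcp_loopback_demo() {
    int srv = socket(AF_INET, SOCK_STREAM, 0);        // stream socket =&gt; TCP
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);    // 127.0.0.1
    addr.sin_port = 0;                                // let the OS pick a free port
    bind(srv, (sockaddr*)&amp;addr, sizeof(addr));
    listen(srv, 1);
    socklen_t len = sizeof(addr);
    getsockname(srv, (sockaddr*)&amp;addr, &amp;len);         // learn the chosen port

    // The "client machine": it knows the server's IP address and TCP port.
    std::thread client([addr] {
        int c = socket(AF_INET, SOCK_STREAM, 0);
        connect(c, (const sockaddr*)&amp;addr, sizeof(addr));
        const char msg[] = "hello";
        send(c, msg, sizeof(msg) - 1, 0);             // 5 bytes down the stream
        close(c);
    });

    int conn = accept(srv, nullptr, nullptr);         // connection established
    char buf[16] = {};
    recv(conn, buf, sizeof(buf) - 1, 0);
    client.join();
    close(conn);
    close(srv);
    return std::string(buf);
}

int main() {
    std::cout &lt;&lt; tcp_loopback_demo() &lt;&lt; std::endl;
    return 0;
}
</pre>
<p>Note the two pieces of information a socket connection needs, exactly as described above: the IP address (here INADDR_LOOPBACK) and the TCP port (here assigned by the OS and discovered with getsockname).</p>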
<br />
<br />


<div class="subtitle_2nd" id="TCPport">TCP port</div>
<p>A <strong>TCP port</strong> is just a number: a 16-bit number that identifies a specific program on the server.<br />
A web (HTTP) server runs on port 80. If we've got a Telnet server, it's running on port 23; an FTP server takes control connections on 21 (and sends data on 20); SMTP runs on 25.<br />
Port numbers represent a logical connection to a particular piece of software running on the server. Without a port number, the server would have no way of knowing which application a client wanted to connect to. When we write a server program, we include code that tells the program which port number we want it to run on. <br />
The TCP port numbers from 0 to 1023 are reserved for well-known services:</p>
<table border="2" WIDTH="400" cellpadding="3">
<tr>
	<th>Port Number</th>
	<th>Description</th>
</tr>
<tr>
<td>1</td><td>TCP Port Service Multiplexer (TCPMUX)</td></tr>
<tr><td>5</td><td>Remote Job Entry (RJE)</td></tr>
<tr><td>7</td><td>ECHO</td></tr>
<tr><td>18</td><td>Message Send Protocol (MSP)</td></tr>
<tr><td>20</td><td>FTP -- Data</td></tr>
<tr><td>21</td><td>FTP -- Control</td></tr>
<tr><td>22</td><td>SSH Remote Login Protocol</td></tr>
<tr><td>23</td><td>Telnet</td></tr>
<tr><td>25</td><td>Simple Mail Transfer Protocol (SMTP)</td></tr>
<tr><td>29</td><td>MSG ICP</td></tr>
<tr><td>37</td><td>Time</td></tr>
<tr><td>42</td><td>Host Name Server (Nameserv)</td></tr>
<tr><td>43</td><td>WhoIs</td></tr>
<tr><td>49</td><td>Login Host Protocol (Login)</td></tr>
<tr><td>53</td><td>Domain Name System (DNS)</td></tr>
<tr><td>69</td><td>Trivial File Transfer Protocol (TFTP)</td></tr>
<tr><td>70</td><td>Gopher Services</td></tr>
<tr><td>79</td><td>Finger</td></tr>
<tr><td>80</td><td>HTTP</td></tr>
<tr><td>103</td><td>X.400 Standard</td></tr>
<tr><td>108</td><td>SNA Gateway Access Server</td></tr>
<tr><td>109</td><td>POP2</td></tr>
<tr><td>110</td><td>POP3</td></tr>
<tr><td>115</td><td>Simple File Transfer Protocol (SFTP)</td></tr>
<tr><td>118</td><td>SQL Services</td></tr>
<tr><td>119</td><td>Newsgroup (NNTP)</td></tr>
<tr><td>137</td><td>NetBIOS Name Service</td></tr>
<tr><td>139</td><td>NetBIOS Session Service</td></tr>
<tr><td>143</td><td>Internet Message Access Protocol (IMAP)</td></tr>
<tr><td>150</td><td>SQL-NET</td></tr>
<tr><td>156</td><td>SQL Server</td></tr>
<tr><td>161</td><td>SNMP</td></tr>
<tr><td>179</td><td>Border Gateway Protocol (BGP)</td></tr>
<tr><td>190</td><td>Gateway Access Control Protocol (GACP)</td></tr>
<tr><td>194</td><td>Internet Relay Chat (IRC)</td></tr>
<tr><td>197</td><td>Directory Location Service (DLS)</td></tr>
<tr><td>389</td><td>Lightweight Directory Access Protocol (LDAP)</td></tr>
<tr><td>396</td><td>Novell Netware over IP</td></tr>
<tr><td>443</td><td>HTTPS</td></tr>
<tr><td>444</td><td>Simple Network Paging Protocol (SNPP)</td></tr>
<tr><td>445</td><td>Microsoft-DS</td></tr>
<tr><td>458</td><td>Apple QuickTime</td></tr>
<tr><td>546</td><td>DHCPv6 Client</td></tr>
<tr><td>547</td><td>DHCPv6 Server</td></tr>
<tr><td>563</td><td>SNEWS</td></tr>
<tr><td>569</td><td>MSN</td></tr>
<tr><td>1080</td><td>SOCKS</td></tr>
</table>  
<br />
<br />
<br />


<div class="subtitle_2nd" id="TCPIP">TCP/IP</div>
<img src="images/multithread/tcpip_stack_connections.png" alt="tcpip_stack_connections"/>
<p>TCP/IP stack operating on two hosts connected via two routers and the corresponding layers used at each hop</p>
<br />
<br />
<img src="images/multithread/data_encap.png" alt="Encapsulation of application data" />
<p>Encapsulation of application data descending through the protocol stack.<br />
image source <a href="http://en.wikipedia.org/wiki/Internet_Protocol_Suite" target="_blank">wiki</a></p>
<br />
<br />


<div class="subtitle_2nd" id="tcpvsudp">TCP vs. UDP</div>
<p>What's the difference between <strong>TCP</strong> and <strong>UDP</strong>?</p>
<ul>
	<li><strong>TCP (Transmission Control Protocol) </strong> <br />
	TCP is a <strong>connection-oriented</strong> protocol. A connection can be made from client to server, and from then on any data can be sent along that connection.
		<ul>
		<li><strong>Reliable</strong> <br />
		When we send a message along a TCP socket, we know it will get there unless the connection fails completely. If part of it gets lost along the way, TCP retransmits the lost part until it arrives. This means complete integrity: the data will not get corrupted or go missing.</li>
		<li><strong>Ordered</strong> <br />
		If we send two messages along a connection, one after the other, we know the first message will get there first. We don't have to worry about data arriving in the wrong order.</li>
		<li><strong>Heavyweight</strong> <br />
		When the low-level parts of the TCP stream arrive out of order, resend requests have to be sent, and all the out-of-sequence parts must be put back together, which requires a bit of work.</li>
		</ul>
	</li>
	<li><strong>UDP (User Datagram Protocol)</strong> <br />
	UDP is a <strong>connectionless</strong> protocol. With UDP we send messages (packets) across the network in individual chunks.
		<ul>
		<li><strong>Unreliable</strong> <br />
		When we send a message, we don't know if it'll get there. It could get lost on the way.</li>
		<li><strong>Not ordered</strong> <br />
		If we send two messages out, we don't know what order they'll arrive in.</li>
		<li><strong>Lightweight</strong> <br />
		No ordering of messages, no tracking of connections, etc. It's just fire and forget! This means it's a lot quicker, and the network card and OS have to do very little work to translate the data back from the packets.</li>
		</ul>
	</li>
</ul>
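<p>For comparison with the stream-socket example, here is a datagram-socket sketch using the same POSIX API: one UDP socket fires a datagram at itself over the loopback interface, with no connection and no handshake. Names are our own and error checking is omitted; on a real network, the recvfrom could simply never return.</p>
<pre>
#include &lt;arpa/inet.h&gt;
#include &lt;iostream&gt;
#include &lt;netinet/in.h&gt;
#include &lt;string&gt;
#include &lt;sys/socket.h&gt;
#include &lt;unistd.h&gt;

// Fire one datagram at ourselves over loopback and read it back:
// no connect(), no handshake, no ordering or delivery guarantee in general.
std::string udp_loopback_demo() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);        // datagram socket =&gt; UDP
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                                // OS-assigned port
    bind(sock, (sockaddr*)&amp;addr, sizeof(addr));
    socklen_t len = sizeof(addr);
    getsockname(sock, (sockaddr*)&amp;addr, &amp;len);        // learn our own address

    const char msg[] = "ping";
    sendto(sock, msg, sizeof(msg) - 1, 0, (sockaddr*)&amp;addr, sizeof(addr));

    char buf[16] = {};
    recvfrom(sock, buf, sizeof(buf) - 1, 0, nullptr, nullptr);
    close(sock);
    return std::string(buf);
}

int main() {
    std::cout &lt;&lt; udp_loopback_demo() &lt;&lt; std::endl;
    return 0;
}
</pre>
<p>Notice how much shorter this is than the TCP version: no listen(), no accept(), no connection state to manage. That is the "lightweight" trade-off described above.</p>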


<div id="rightpanel">
<div align="center">
</div></div>

<br /><br />
<div class="subtitle_2nd" id="FullList">Full List of C++ Tutorials</div>
<ul>
   <li><a href="cpptut.html">C++ Home</a> </li>
   <li><a href="string.html">String</a> </li>
   <li><a href="constructor.html">Constructor</a> </li>
   <li><a href="operatoroverloading.html">Operator Overloading</a> </li>
   <li><a href="virtualfunctions.html">Virtual Functions</a></li>
   <li><a href="dynamic_cast.html">Dynamic Cast Operator</a></li>
   <li><a href="typecast.html">Type Cast Operators</a></li>
   <li><a href="autoptr.html">Class auto_ptr</a></li>
   <li><a href="templates.html">Templates</a></li>
   <li><a href="references.html">References for Built-in Types</a></li>
   <li><a href="valuevsreference.html">Pass by Value vs. Pass by Reference</a></li>
   <li><a href="memoryallocation.html">Memory Allocation</a></li>
   <li><a href="friendclass.html">Friend Functions and Friend Classes</a></li>
   <li><a href="functors.html">Functors</a></li>
   <li><a href="statics.html">Static Variables and Static Class Members</a></li>
   <li><a href="exceptions.html">Exceptions</a></li>
   <li><a href="pointers.html">Pointers</a></li>
   <li><a href="pointers2.html">Pointers II</a></li>
   <li><a href="pointers3.html">Pointers III</a></li>
   <li><a href="assembly.html">Taste of Assembly</a></li>
   <li><a href="smallprograms.html">Small Programs</a></li>
   <li><a href="linkedlist.html">Linked List Examples</a></li>
   <li><a href="binarytree.html">Binary Tree Example Code</a></li>
   <li><a href="stl.html">Standard Template Library (STL) I</a></li>
   <li><a href="stl2.html">Standard Template Library (STL) II - Maps</a></li>
   <li><a href="slicing.html">Object Slicing and Virtual Table</a></li>
   <li><a href="this_pointer.html">The this Pointer</a></li>
   <li><a href="stackunwinding.html">Stack Unwinding</a></li>
   <li><a href="upcasting_downcasting.html">Upcasting and Downcasting</a></li>
   <li><a href="object_returning.html">Object Returning</a></li>
   <li><a href="private_inheritance.html">Private Inheritance</a></li>
   <li><a href="cplusplus_keywords.html">C++_Keywords</a></li>
   <li><a href="multithreaded.html">Multi-Threaded Programming - Terminology</a></li>
   <li><a href="multithreaded2A.html">Multi-Threaded Programming II - Native Thread for Win32 (A) </a></li>
   <li><a href="multithreaded2B.html">Multi-Threaded Programming II -  Native Thread for Win32 (B) </a></li>
   <li><a href="multithreaded2C.html">Multi-Threaded Programming II -  Native Thread for Win32 (C) </a></li>
   <li><a href="multithreaded2.html">Multi-Threaded Programming II - C++ Thread for Win32</a></li>
   <li><a href="multithreaded3.html">Multi-Threaded Programming III - C++ Class Thread for Pthreads</a></li>  
   <li><a href="multithreadedDebugging.html">Multithread Debugging</a></li>
   <li><a href="embeddedSystemsProgramming.html">Embedded Systems Programming</a></li>
   <li><a href="boost.html">Boost</a></li>
   <li>Programming Questions and Solutions
    <ul>
       <li><a href="quiz_strings_arrays.html">Strings and Arrays</a></li>
       <li><a href="quiz_linkedlist.html">Linked List</a></li>
       <li><a href="quiz_recursion.html">Recursion</a></li>
       <li><a href="quiz_bit_manipulation.html">Bit Manipulation</a> </li>
       <li><a href="google_interview_questions.html">140 Google Interview Questions</a> </li>
    </ul>
   </li>
</ul>
<br /><br />



<br />


<br />
<br />
<br />


</div>
</div>
<div class="smallgraytext" id="footer"><a href="../index.html">Home</a>
| <a href="../about_us.html">About Us</a>
| <a href="../products.html">products</a>
| <a href="../our_services.html">Our Services</a>
| <a href="#">Contact Us</a>
| Bogotobogo &copy; 2011 | <a target="_blank" href="http://www.bogotobogo.com">Bogotobogo </a>
</div>
</div>
</div>
</body>
</html>
