Wednesday, December 31, 2008

Microsoft Virtualization - Summary

Nice images that I copied from one of the Webcasts





1. How does licensing work for virtualization?

To make it easy: with the Datacenter edition you get unlimited virtualization licenses for guest operating systems, and it includes the license for the host OS.

Friday, November 7, 2008

Serialization - Advanced Topics

1. IDeserializationCallback

2. FormatterServices

3. Serializing a type that was not designed to be serializable.

4. Overriding the Assembly or Type When Deserializing an Object

http://msdn.microsoft.com/en-us/magazine/cc188950.aspx

Serialization - ISerializable

Occasionally, you will design a type that requires complete control over how it is serialized and deserialized. The type must

1. Implement


public interface ISerializable {
void GetObjectData(SerializationInfo info, StreamingContext context);
}


The GetObjectData method is responsible for determining what information is necessary to serialize the object and adds this information to the SerializationInfo object. The formatter now takes all of the values added to the SerializationInfo object and serializes each of them to the byte stream.

There are many destinations for a serialized set of objects: same process, different process on the same machine, different process on a different machine, and so on. In some rare situations, an object might want to know where it is going to be deserialized so that it can emit its state differently.

A method that receives a StreamingContext structure can examine the State property's bit flags to determine the source or destination of the objects being serialized/deserialized.

Note: Always call one of the overloaded AddValue methods to add serialization information for your type. If a field's type implements the ISerializable interface, don't call GetObjectData on the field yourself; just add the field with AddValue and let the formatter handle it.

2. A special constructor that takes SerializationInfo and StreamingContext parameters; the formatter calls it during deserialization.

If your class is sealed, I highly recommend that you declare this special constructor to be private.


protected Hashtable(
SerializationInfo info, StreamingContext context) {
}


Note: Instead of calling the various Get methods, the special constructor could instead call GetEnumerator, which returns a SerializationInfoEnumerator object that iterates through all the values contained within the SerializationInfo object. Each value enumerated is a System.Runtime.Serialization.SerializationEntry object.

A type may include fields that refer to other objects. When the special constructor is called, any fields that refer to other objects are guaranteed to be set correctly. That is, the fields' values will contain references to allocated objects. As these referenced objects may not have had their fields initialized yet, you should not execute any code in the special constructor that accesses any members on a referenced object.

If your type must access members (such as call methods) on a referenced type, then it is recommended that your type also implement the IDeserializationCallback interface's OnDeserialization method. When this method is called, all objects have had their fields set. But there's no way to tell what order multiple objects have their OnDeserialization method called. So, while the fields may be initialized, you still don't know if a referenced object is completely deserialized if that referenced object also implements the IDeserializationCallback interface.
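A minimal sketch pulling these pieces together (GetObjectData, the special constructor, and OnDeserialization); the type and field names (Person, _name, _manager) are made up for illustration:

[Serializable]
public class Person : ISerializable, IDeserializationCallback {
    private String _name;
    private Person _manager;   // reference to another object in the graph

    public Person(String name, Person manager) {
        _name = name;
        _manager = manager;
    }

    // Called by the formatter during serialization
    void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context) {
        info.AddValue("name", _name);
        info.AddValue("manager", _manager);
    }

    // The special constructor, called by the formatter during deserialization
    protected Person(SerializationInfo info, StreamingContext context) {
        _name = info.GetString("name");
        // _manager refers to an allocated object, but its own fields may not
        // be initialized yet, so don't touch its members in this constructor
        _manager = (Person) info.GetValue("manager", typeof(Person));
    }

    // Called after the entire graph has been deserialized
    void IDeserializationCallback.OnDeserialization(Object sender) {
        // Safe to access _manager's members here
    }
}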


Case 1 : Base class implements ISerializable

If your type also implements ISerializable, then your implementation of GetObjectData and your implementation of the special constructor must call the same functions in the base class in order for the object to be serialized and deserialized properly.

If your derived type doesn't have any additional fields and therefore has no special serialization/deserialization needs, then you do not have to implement ISerializable at all. Like all interface members, GetObjectData is virtual and will be called to properly serialize the object. In addition, the formatter treats the special constructor as "virtualized." That is, during deserialization, the formatter will check the type that it is trying to instantiate. If that type doesn't offer the special constructor, then the formatter will scan all of the base classes until it finds one that implements the special constructor.

Case 2 : Base class does not implement ISerializable

In this case, your class must manually serialize the base type's fields.



void ISerializable.GetObjectData(
    SerializationInfo info, StreamingContext context) {

    // Serialize the desired values for this class
    info.AddValue("title", title);

    // Get the set of serializable members for our class and base classes
    Type thisType = this.GetType();
    MemberInfo[] mi =
        FormatterServices.GetSerializableMembers(thisType, context);

    // Serialize the base class's fields to the info object
    for (Int32 i = 0; i < mi.Length; i++) {
        // Don't serialize fields for this class
        if (mi[i].DeclaringType == thisType) continue;
        info.AddValue(mi[i].Name, ((FieldInfo) mi[i]).GetValue(this));
    }
}


Summary of http://msdn.microsoft.com/en-us/magazine/cc301767.aspx

Serialization - Basics

Serialization is the process of converting an object or a connected graph of objects into a contiguous stream of bytes. Deserialization is the process of converting a contiguous stream of bytes back into its graph of connected objects. The ability to convert objects to and from a byte stream is an incredibly useful mechanism.

Formatters

Formatters know how to serialize the complete object graph by referring to the metadata that describes each object's type. The Serialize method uses reflection to see what instance fields are in each object's type as it is serialized. If any of these fields refer to other objects, then the formatter's Serialize method knows to serialize these objects, too.

Formatters have very intelligent algorithms. They know to serialize each object in the graph out to the stream no more than once. That is, if two objects in the graph refer to each other, then the formatter detects this, serializes each object just once, and avoids entering into an infinite loop.

1. Binary formatter
2. Soap formatter

Serialization Steps

1. The developer must apply the System.SerializableAttribute custom attribute to the type that is to be serialized.

2. When serializing an object, the full name of the type and the name of the type's defining assembly are written to the byte stream. By default, the BinaryFormatter and SoapFormatter types output the assembly's full identity. However, you can make these formatters write the simple assembly name (just file name; no version, culture, or public key information) for each serialized type by setting the formatter's AssemblyFormat property to FormatterAssemblyStyle.Simple.

3. When serializing a graph of objects, some of the object's types may be serializable while some of the objects may not be serializable. For performance reasons, formatters do not verify that all of the objects in the graph are serializable before serializing the graph. So, when serializing an object graph, it is entirely possible that some objects may be serialized to the byte stream before the SerializationException is thrown. If this happens, the byte stream is corrupt.

Your application code should try to recover gracefully from this situation. If you think you may be serializing an object graph where some objects may not be serializable, I recommend that you serialize the objects into a MemoryStream first. Then, if all objects are successfully serialized, you can copy the bytes in the MemoryStream to whatever stream (file or network, for example) you really want the bytes written to. A sketch of this appears after this list.

4. When you apply the SerializableAttribute custom attribute to a type, all instance fields (public, private, protected, and so on) are serialized.
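A minimal sketch of points 2 and 3 above, assuming a BinaryFormatter; the SaveCustomer method name and the file name are made up:

using System.IO;
using System.Runtime.Serialization.Formatters;
using System.Runtime.Serialization.Formatters.Binary;

static void SaveCustomer(Object customer) {
    BinaryFormatter formatter = new BinaryFormatter();

    // Point 2: record only the simple assembly name for each serialized type
    formatter.AssemblyFormat = FormatterAssemblyStyle.Simple;

    // Point 3: serialize into a MemoryStream first so a SerializationException
    // can't leave a half-written (corrupt) file behind
    MemoryStream ms = new MemoryStream();
    formatter.Serialize(ms, customer);

    // Only now copy the fully serialized bytes to the real destination
    using (FileStream fs = new FileStream("customer.bin", FileMode.Create)) {
        byte[] bytes = ms.ToArray();
        fs.Write(bytes, 0, bytes.Length);
    }
}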



Deserialization Steps

1. When deserializing an object, the formatter first grabs the assembly identity and ensures that the assembly is loaded into the executing AppDomain by calling the Assembly class's Load or LoadWithPartialName method.

2. After an assembly has been loaded, the formatter looks in the assembly for a type matching that of the object being deserialized. If the assembly doesn't contain a matching type, an exception is thrown and no more objects can be deserialized.

3. If a matching type is found, an instance of the type is created and its fields are initialized from the values contained in the byte stream.

4. If you use Assembly.LoadFrom to load an assembly and then construct objects from types defined in the loaded assembly, these objects can be serialized to a stream without any trouble. However, when deserializing this stream, the formatter attempts to load the assembly by calling Assembly's Load or LoadWithPartialName method instead of calling the LoadFrom method.

You implement a method whose signature matches the System.ResolveEventHandler delegate and register this method with System.AppDomain's AssemblyResolve event just before calling a formatter's Deserialize method. (Unregister this method with the event after Deserialize returns.) Now, whenever the formatter fails to load an assembly, the CLR calls your ResolveEventHandler method. The identity of the assembly that failed to load is passed to this method. The method can extract the assembly file name from the assembly's identity and use this name to construct the path where the application knows the assembly file can be found. Then, the method can call Assembly.LoadFrom to load the assembly and return the resulting Assembly reference back from the ResolveEventHandler method.
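A hedged sketch of such a handler; the plug-in directory used here is only an assumption, and the handler simply loads the requested assembly from that known location:

using System;
using System.IO;
using System.Reflection;
using System.Runtime.Serialization.Formatters.Binary;

static Assembly OnAssemblyResolve(Object sender, ResolveEventArgs args) {
    // args.Name is the full identity of the assembly that failed to load;
    // extract the simple file name and load the file from a known location
    String simpleName = new AssemblyName(args.Name).Name;
    String path = Path.Combine(@"C:\MyApp\Plugins", simpleName + ".dll");
    return Assembly.LoadFrom(path);
}

static Object DeserializeGraph(Stream stream) {
    AppDomain.CurrentDomain.AssemblyResolve += OnAssemblyResolve;
    try {
        return new BinaryFormatter().Deserialize(stream);
    }
    finally {
        // Unregister the handler once Deserialize returns
        AppDomain.CurrentDomain.AssemblyResolve -= OnAssemblyResolve;
    }
}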

Summary of http://msdn.microsoft.com/en-us/magazine/cc301761.aspx

Saturday, November 1, 2008

C# Tips

1. as and is operator

if (o is Employee)
{
Employee e = (Employee) o;
}

In this code, the CLR is actually checking the object's type twice: The is operator first checks to see if o is compatible with the Employee type. If it is, inside the if statement, the CLR again verifies that o refers to an Employee when performing the cast.

C# offers a way to simplify this code and improve its performance by providing an as operator:

Employee e = o as Employee;
if (e != null) {
// Use e within the 'if' statement.
}

2. What should one use: string or String?

Because in C# the string (a keyword) maps exactly to System.String (an FCL type), there is no difference and either can be used.

object and string are also primitive types.

3. Type casting

C# allows implicit casts if the conversion is "safe," that is, no loss of data is possible, such as converting an Int32 to an Int64. But C# requires explicit casts if the conversion is potentially unsafe.
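To make that concrete, a small example of both kinds of casts:

Int32 i = 123;
Int64 l = i;                     // Implicit cast: Int32 to Int64 can never lose data

Int64 big = 5000000000;
Int32 truncated = (Int32) big;   // Explicit cast required: data may be lost here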

4. Checked and Unchecked Primitive Type Operations

a. The CLR offers IL instructions that allow the compiler to choose the desired behavior. The CLR has an instruction called add that adds two values together; the add instruction performs no overflow checking. The CLR also has an instruction called add.ovf that also adds two values together; however, add.ovf throws a System.OverflowException if an overflow occurs.

b. C# allows the programmer to decide how overflows should be handled. By default, overflow checking is turned off. As a result, the code runs faster—but developers must be assured that overflows won't occur or that their code is designed to anticipate these overflows.

One way to get the C# compiler to control overflows is to use the /checked+ compiler switch. The code executes more slowly because the CLR is checking these operations to determine whether an overflow will occur.

c. There are also checked/unchecked operators/statements.

Byte b = 100;
b = checked((Byte) (b + 200)); // OverflowException is thrown

d. Here's the best way to go about using checked and unchecked:

i. As you write your code, explicitly use checked around blocks where an unwanted overflow might occur due to invalid input data, such as processing a request with data supplied from an end user or a client machine.

ii. As you write your code, explicitly use unchecked around blocks where an overflow is OK, such as calculating a checksum.

iii. For any code that doesn't use checked or unchecked, the assumption is that you do want an exception to occur on overflow.

Now, as you develop your application, turn on the compiler's /checked+ switch for debug builds. Your application will run more slowly because the system will be checking for overflows on any code that you didn't explicitly mark as checked or unchecked. If an exception occurs, you'll easily detect it and be able to fix the bug in your code. For the release build of your application, use the compiler's /checked- switch so that the code runs faster and exceptions won't be generated.

5. Value types and reference types

a. Value type instances are usually allocated on a thread's stack (although they can also be embedded in a reference type object).

b. Reference types are always allocated from the managed heap.

c. Value types are sealed.

d. Value types can implement interfaces.

e. Value types have two representations - boxed and unboxed.

f. Value types can't be assigned null.

g. When you assign a value type variable to another value type variable, a field-by-field copy is made.

h. C# compiler selects LayoutKind.Auto for reference types (classes) and LayoutKind.Sequential for value types (structures). However, if you're creating a value type that has nothing to do with interoperability with unmanaged code, you probably want to override the C# compiler's default.

i. The StructLayoutAttribute also allows you to explicitly indicate the offset of each field by passing LayoutKind.Explicit to its constructor. Then you apply an instance of the System.Runtime.InteropServices.FieldOffsetAttribute. This allows you to create unions in C#.
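A small sketch of such a union; the struct and field names are invented for illustration, and both fields share the same four bytes:

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
internal struct IntFloatUnion {
    [FieldOffset(0)] public Int32 AsInt32;
    [FieldOffset(0)] public Single AsSingle;
}

// Writing AsSingle and reading AsInt32 exposes the raw bit pattern of the float.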

Saturday, October 11, 2008

Memory Allocation - Generation GC

Since the garbage collector is generational, the marking phase does not have to go through the entire heap.

It makes the following assumptions

1. The newer an object is, the shorter its lifetime will be.
2. The older an object is, the longer its lifetime will be.
3. Collecting a portion of the heap is faster than collecting the whole heap.

The picture shows how it works. A few things to remember

1. Generation 0 is empty immediately after a garbage collection and is where new objects will be allocated.

2. Any objects that were in generation 0 that survived the garbage collection would be now in generation 1 and so on.

3. The objects in a generation are examined only when the generation reaches its budget, which usually requires several garbage collections of generation 0. When the CLR initializes, it selects budgets for all three generations. The budget for generation 0 is about 256 KB, the budget for generation 1 is about 2 MB, and the budget for generation 2 is around 10 MB.

4. The larger the budget, the less frequently a garbage collection will occur.

5. The managed heap supports only three generations: generation 0, generation 1, and generation 2; there is no generation 3.

6. The garbage collector dynamically modifies generation 0's budget after every collection.

Memory Allocation - Details of Garbage Collection

Every type identifies some resource available for the program to use. The steps to allocate and use a resource are:

1. Allocate memory for that type by calling the new operator.

2. Initialize the memory to set the initial state of the resource and to make the resource usable. The type's instance constructor is responsible for setting this initial state.

3. Use the resource by accessing the type's members (repeating as necessary).

4. Tear down the state of a resource to clean up.

5. Free the memory. The garbage collector is solely responsible for this step.

But the garbage collector knows nothing about the resource represented by the type in memory, which means that a garbage collector can't know how to perform Step 4 in the preceding list. So how is Step 4 performed? The developer writes this cleanup code in the Finalize, Dispose, and Close methods.

Step 4 can be skipped for managed resources, for example a String. But a type that wraps a native resource such as a file, a database connection, a socket, a mutex, a bitmap, an icon, and so on always requires the execution of some cleanup code when the object is about to have its memory reclaimed.

Garbage collection algorithm

1. Marking phase

The garbage collector marks all of the reachable objects.

2. Compaction phase

The garbage collector compacts the memory, squeezing out the holes left by the unreachable objects.

But the GC is a generational collector, so in the marking phase, instead of marking the whole heap, it just focuses on a generation (a portion of the heap).

Finalization - Releasing Native Resources

Any type that wraps a native resource, such as a file, network connection, socket, mutex, or other type, must support finalization. Basically, the type implements a method named Finalize. When the garbage collector determines that an object is garbage, it calls the object's Finalize method (if it exists).

When an application creates a new object, the new operator allocates the memory from the heap. If the object's type defines a Finalize method, a pointer to the object is placed on the finalization list just before the type's instance constructor is called. The finalization list is an internal data structure controlled by the garbage collector. Each entry in the list points to an object that should be finalized.



A. First Pass

When a garbage collection occurs, objects B, E, G, H, I, and J are determined to be garbage. The garbage collector scans the finalization list looking for pointers to these objects. When a pointer is found, the pointer is removed from the finalization list and appended to the freachable queue, which is another of the garbage collector's internal data structures. Each pointer in the freachable queue identifies an object that is ready to have its Finalize method called. After the collection, the managed heap looks like this:



A special high-priority CLR thread is dedicated to calling Finalize methods. Because of the way this thread works, you shouldn't execute any code in a Finalize method that makes any assumptions about the thread that's executing the code. If an object is in the freachable queue, the object is reachable and is not garbage. In other words, when the garbage collector moves an object's reference from the finalization list to the freachable queue, the object is no longer considered garbage and its memory can't be reclaimed.

B. Second Pass

The next time the garbage collector is invoked, it will see that the finalized objects are truly garbage because the application's roots don't point to them and the freachable queue no longer points to them either. The memory for these objects is simply reclaimed.



Questions

1. Why can't C# provide deterministic destructors?

The CLR doesn't support deterministic destruction, which makes it difficult for C# to provide this mechanism.

Saturday, October 4, 2008

Threads - Thread Pool

When a thread is created:

1. A kernel object is allocated and initialized
2. The thread's stack memory is allocated and initialized,
3. Windows sends every DLL in the process a DLL_THREAD_ATTACH notification, causing pages from disk to be faulted into memory so that code can execute.

When a thread dies:

1. Every DLL is sent a DLL_THREAD_DETACH notification
2. The thread's stack memory is freed
3. The kernel object is freed (if its usage count goes to 0).

So, there is a lot of overhead associated with creating and destroying a thread that has nothing to do with the work that the thread was created to perform in the first place.

The thread pool can come to the rescue here. There is one thread pool per process; this thread pool is shared by all AppDomains in the process.

The ThreadPool class is a static class. The CLR's thread pool will automatically create a thread, if necessary, and reuse an existing thread if possible. Also, this thread is not immediately destroyed; it goes back into the thread pool so that it is ready to handle any other work items in the queue.

1. All the threads in ThreadPool are background threads.
2. All the threads run at normal priority, and this should not be changed.
3. The threads from ThreadPool should not be aborted.
4. Internally, the thread pool categorizes its threads as either worker threads or I/O threads.
a. Worker threads are used when your application asks the thread pool to perform an asynchronous compute-bound operation.
b. I/O threads are used to notify your code when an asynchronous I/O-bound operation has completed.
5. When using ThreadPool's QueueUserWorkItem method to queue an asynchronous operation, the CLR offers no built-in way for you to determine when the operation has completed.
6. If a thread pool thread has been idle for approximately 2 minutes, the thread wakes itself up and kills itself in order to free up resources.

Number of threads in thread pool

A thread pool should never place an upper limit on the number of threads in the pool because starvation or deadlock might occur.

With version 2.0 of the CLR, the maximum number of worker threads defaults to 25 per CPU in the machine, and the maximum number of I/O threads defaults to 1000.

If you think that your application needs more than 25 threads per CPU, there is something seriously wrong with the architecture of your application and the way that it's using threads.

Uses of Thread Pool

A thread pool can offer a performance advantage. It offers the following capabilities:

1. Calling a method asynchronously
2. Calling a method at a timed interval
3. Calling a method when a single kernel object is signaled
4. Calling a method when an asynchronous I/O request completes

Calling a Method Asynchronously

To queue a task for the thread pool, use the following methods. Using QueueUserWorkItem might make your application more efficient because you won't be creating and destroying threads for every single client request.

public static Boolean QueueUserWorkItem(WaitCallback wc, Object state);
public static Boolean QueueUserWorkItem(WaitCallback wc);

public delegate void WaitCallback(Object state);
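A quick sketch of queuing a work item; the DoWork method name and the state string are made up:

using System;
using System.Threading;

class Program {
    static void Main() {
        // The CLR runs DoWork on a thread pool (worker) thread
        ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork), "some state");

        Console.WriteLine("Main thread keeps going...");
        Console.ReadLine();   // keep the process alive; pool threads are background threads
    }

    static void DoWork(Object state) {
        Console.WriteLine("Working on: " + state);
    }
}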

Calling a Method at Timed Intervals

The System.Threading namespace defines the Timer class. When you construct an instance of the Timer class, you are telling the thread pool that you want a method of yours called back at a particular time in the future.

Internally, the CLR has just one thread that it uses for all Timer objects.

If your callback method takes a long time to execute, the timer could go off again. This would cause multiple thread pool threads to be executing your callback method simultaneously. Watch out for this; if your method accesses any shared data, you will probably need to add some thread synchronization locks to prevent the data from becoming corrupted.
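A small sketch of a System.Threading.Timer; the 1-second due time and 2-second period are arbitrary:

using System;
using System.Threading;

class Program {
    static void Main() {
        // Call Tick after 1 second, then every 2 seconds, on a thread pool thread
        Timer timer = new Timer(new TimerCallback(Tick), null, 1000, 2000);

        Console.ReadLine();   // keep the process (and the timer) alive
        timer.Dispose();
    }

    static void Tick(Object state) {
        // If this takes longer than the period, another pool thread may run
        // Tick concurrently, so protect any shared data with a lock
        Console.WriteLine("Tick at {0:T}", DateTime.Now);
    }
}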

Calling a Method When a Single Kernel Object Becomes Signaled

Use ThreadPool.RegisterWaitForSingleObject to register the callback, and the returned RegisteredWaitHandle's Unregister method to stop waiting.
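A hedged sketch of that registration, using an AutoResetEvent as the kernel object:

using System;
using System.Threading;

class Program {
    static void Main() {
        AutoResetEvent signal = new AutoResetEvent(false);

        // Ask a thread pool thread to call OnSignaled whenever the event is set
        RegisteredWaitHandle rwh = ThreadPool.RegisterWaitForSingleObject(
            signal, new WaitOrTimerCallback(OnSignaled),
            null, Timeout.Infinite, false /* keep waiting after each callback */);

        signal.Set();          // triggers OnSignaled on a pool thread
        Console.ReadLine();
        rwh.Unregister(null);  // stop waiting when done
    }

    static void OnSignaled(Object state, Boolean timedOut) {
        Console.WriteLine("Kernel object signaled (timed out: {0})", timedOut);
    }
}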

When not to use Thread Pool thread

1. You want the thread to run at a special priority (all thread pool threads run at normal priority, and you should not alter a thread pool thread's priority).

2. You want the thread to be a foreground thread (all thread pool threads are background threads), thereby preventing the application from dying until the thread has completed its task.

3. You'd also use a dedicated thread if the compute-bound task were extremely long running; this way, you would not be taxing the thread pool's logic as it tries to figure out whether to create an additional thread.

4. Finally, you'd use a dedicated thread if you wanted to start a thread and possibly abort it prematurely by calling Thread's Abort method.

Sunday, September 28, 2008

C# 2.0 - Anonymous Methods

In the program below, we have used "named methods".


class Program {

static void Main(string[] args) {
Fn f = Add; // Fn f = new Fn(Add);
Console.WriteLine(f(20, 30));
}

static int Add(int a, int b) {
return a + b;
}

delegate int Fn(int a, int b);
}



The same can be written using anonymous methods (introduced in c# 2.0) as


class Program {
static void Main(string[] args) {

Fn f1 = delegate(int a, int b) {
return a + b;
};

Console.WriteLine(f1(50, 60));
}

static int Add(int a, int b) {
return a + b;
}

delegate int Fn(int a, int b);
}



The advantages of using anonymous methods are:

1. Elegant and clean code.

2. Where anonymous methods really come in handy is their ability to capture local state. The local variables and parameters whose scope contains an anonymous method declaration are called outer or captured variables of the anonymous method. For example, in the following code segment, n is an outer variable:

int n = 0;
Del d = delegate() { System.Console.WriteLine("Copy #:{0}", ++n); };

3. Unlike local variables, the lifetime of the outer variable extends until the delegates that reference the anonymous methods are eligible for garbage collection. A reference to n is captured at the time the delegate is created.

4. An anonymous method cannot access the ref or out parameters of an outer scope.

Another example

List<int> list = new List<int>() { 50, 20, 30 };
list.Sort(delegate(int a, int b) { return a - b; });

Friday, September 26, 2008

Threads - Thread Synchronization

1. Exclusive locking using Monitors

lock(obj){

}

is similar to

Monitor.Enter(obj);
try {
    // ...
}
finally {
    Monitor.Exit(obj);
}



It is considered a better practice to lock on an object internal to your class rather than on the class itself, which is externally exposed.

Limitation of Monitors

1. They only provide synchronization based on exclusive locking. In other words, a monitor cannot permit access to more than one thread at a time. This is true even when a set of threads are only attempting to perform read operations. This limitation can lead to inefficiencies in a scenario in which there are many read operations for each write operation.

2. Shared locking using ReaderWriterLock

The ReaderWriterLock class allows you to design a synchronization scheme which employs shared locks together with exclusive locks. This makes it possible to provide access to multiple reader threads at the same time, which effectively reduces the level of blocking. The ReaderWriterLock class also provides exclusive locking for write operations so you can eliminate inconsistent reads.

a. The reading thread can acquire a shared lock using method AcquireReaderLock

b. The writing thread can acquire an exclusive lock using the method AcquireWriterLock. Once an exclusive lock is acquired, all other reader threads and writer threads will be blocked until this thread can complete its work and call ReleaseWriterLock.

c. What would happen if a single thread calls AcquireWriterLock more than once before calling ReleaseWriterLock? Your intuition might tell you that this thread will acquire an exclusive lock with the first call to AcquireWriterLock and then block on the second call. Fortunately, this is not the case.
The ReaderWriterLock class is smart enough to associate exclusive locks with threads and track an internal lock count. Therefore, multiple calls to AcquireWriterLock will not result in a deadlock. However, this issue still requires your attention because you must ensure that two calls to the AcquireWriterLock method from a single thread are offset by two calls to the ReleaseWriterLock method. If you call AcquireWriterLock twice and only call ReleaseWriterLock once, you haven't released the lock yet.

d. Note that it is not possible for any single thread to hold both a shared lock and an exclusive lock at the same time. However, it is not uncommon that a thread will need to obtain a shared lock at first and then later escalate to an exclusive lock to perform write operations. The key point about using the ReaderWriterLock class is that a thread should never call AcquireReaderLock and then follow that with a call to AcquireWriterLock. If you do this, your call to AcquireWriterLock will block indefinitely. Instead, after calling AcquireReaderLock you should call UpgradeToWriterLock to escalate from a shared lock to an exclusive lock, as shown here:

// Acquire shared lock
rwLock.AcquireReaderLock(Timeout.Infinite);

// Escalate shared lock to exclusive lock
LockCookie cookie = rwLock.UpgradeToWriterLock(Timeout.Infinite);

The key point is that a call to UpgradeToWriterLock doesn't keep your data locked during the transition: the shared lock is released before the exclusive lock is acquired, so another writer may change the data in between, and you should re-validate your state after the upgrade.
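Putting the pieces together in a small sketch; the price-cache class and its dictionary are just an illustration:

using System.Collections.Generic;
using System.Threading;

class PriceCache {
    private readonly ReaderWriterLock _rwLock = new ReaderWriterLock();
    private readonly Dictionary<string, decimal> _prices = new Dictionary<string, decimal>();

    public decimal GetPrice(string symbol) {
        _rwLock.AcquireReaderLock(Timeout.Infinite);   // shared lock: many readers at once
        try {
            return _prices[symbol];
        }
        finally {
            _rwLock.ReleaseReaderLock();
        }
    }

    public void SetPrice(string symbol, decimal price) {
        _rwLock.AcquireWriterLock(Timeout.Infinite);   // exclusive lock: blocks readers and writers
        try {
            _prices[symbol] = price;
        }
        finally {
            _rwLock.ReleaseWriterLock();
        }
    }
}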

Summary of

http://msdn.microsoft.com/en-us/magazine/cc188722.aspx
http://msdn.microsoft.com/en-us/magazine/cc163846.aspx

Threads - Secondary Threads

1. A native thread is a Win32 thread created by the Windows OS, whereas a managed thread is a thread created by the .NET Framework.

2. But is there a difference? After all, isn't a managed thread just a simple wrapper around a Win32 thread?

a. Yes, there is a difference. With version 1.0 and version 1.1 of the CLR, each Thread object is associated with its own physical Win32® thread. Note that future versions of the CLR are likely to provide an optimization whereby it will not be necessary to create a separate physical thread for each Thread object.

b. And creating an object from the Thread class doesn't actually create a physical thread. Instead, you must call the Thread object's Start method for the CLR to call into the Windows OS and create a physical thread.

3. There is an overhead involved in creating and destroying physical threads.

4. The lifetime of this physical thread is controlled by the target method's execution. When execution of the method completes, the CLR gives control of the physical thread back to Windows. At this point, the OS destroys the physical thread.

Creating secondary thread

1. The Thread class is used to create secondary threads.

2. Its constructor takes a ThreadStart delegate:

public delegate void ThreadStart();

3. To pass parameters to thread you have two options

a. Use ParameterizedThreadStart (a sketch follows the example below)

b. Create a custom thread class with a parameterized constructor. When you create an object from a custom thread class, you can initialize it with whatever parameter values are required in your particular situation.



using System;
using System.Threading;

class Program {
    static void Main(string[] args) {

        // Bind the delegate to the method the secondary thread should run
        ThreadStart st = new ThreadStart(new A().ThreadMethod);
        Thread th = new Thread(st);
        th.IsBackground = true;   // a background thread won't keep the process alive

        th.Start();   // the physical thread is created only here
    }
}

class A {

    public void ThreadMethod() {
        Thread.Sleep(Timeout.Infinite);   // block forever, just for demonstration
    }
}
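For option (a) above, a minimal sketch using ParameterizedThreadStart (available from .NET 2.0); the message string is arbitrary:

using System;
using System.Threading;

class Program {
    static void Main() {
        Thread worker = new Thread(new ParameterizedThreadStart(PrintMessage));
        worker.Start("hello from the secondary thread");   // the argument arrives as Object
        worker.Join();
    }

    // ParameterizedThreadStart requires this exact shape: void (Object)
    static void PrintMessage(Object state) {
        Console.WriteLine((string) state);
    }
}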


Lifecycle of a Thread



1. When you create a thread and have not called the Start method, the state of the thread is Unstarted.

2. After you call the Start method,

a. If the thread is a background thread the state is Background
b. If the thread is a foreground thread the state is Running
c. If the thread is sleeping, the state is WaitSleepJoin

3. A thread in WaitSleepJoin is also called a blocked thread, i.e., it waits or pauses for a result. Once blocked, a thread immediately relinquishes its allocation of CPU time and doesn't get re-scheduled until unblocked. A thread, while blocked, doesn't consume CPU resources. A thread can enter the blocked state by calling
a. Thread.Sleep
b. Thread.Join
c. Thread Synchronization Blocking (Lock, Mutex, Semaphore) or Signaling Constructs (WaitHandles)

Thread Methods

1. public void Join();
2. public bool Join(int millisecondsTimeout);

Blocks the calling thread until a thread terminates or the specified time elapses, while continuing to perform standard COM and SendMessage pumping.

3. public void Abort();

4. public void Interrupt();

Interrupts a thread that is in the WaitSleepJoin thread state.

5. The Resume and Suspend methods have been deprecated.

Handling Exceptions

1. Exceptions from secondary threads are never passed to other threads i.e. each thread should handle its own exceptions.

2. From .NET 2.0 onwards, an unhandled exception on any thread shuts down the whole application, meaning ignoring the exception is generally not an option. Hence a try/catch block is required in every thread entry method – at least in production applications – in order to avoid unwanted application shutdown in case of an unhandled exception.

3. Application.ThreadException, which is the global exception handler in Windows Forms applications, does not get exceptions raised by worker threads. The .NET Framework provides a lower-level event for global exception handling: AppDomain.UnhandledException. This event fires when there's an unhandled exception in any thread, and in any type of application (with or without a user interface).
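A minimal sketch of wiring up that event; the Crash method simply simulates an unhandled exception on a secondary thread:

using System;
using System.Threading;

class Program {
    static void Main() {
        AppDomain.CurrentDomain.UnhandledException +=
            new UnhandledExceptionEventHandler(OnUnhandledException);

        Thread t = new Thread(new ThreadStart(Crash));
        t.Start();
        t.Join();
    }

    static void Crash() {
        throw new InvalidOperationException("boom");   // unhandled on a secondary thread
    }

    // Last-chance logging; from .NET 2.0 onwards the process still shuts down afterwards
    static void OnUnhandledException(object sender, UnhandledExceptionEventArgs e) {
        Console.WriteLine("Unhandled: " + ((Exception) e.ExceptionObject).Message);
    }
}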

When to create a secondary thread

You should be creating and managing secondary threads to execute methods asynchronously only when you cannot use delegates.

1. You need to execute a long-running task

When you need to dedicate a thread to a task that's going to take a long time, you should create a new thread. It would be considered bad style to use delegates because you would effectively be taking a thread out of the CLR thread pool and never returning it. Asynchronous method execution using delegates should only be used for relatively short-running tasks.

2. You need to adjust a thread's priority

3. You need a foreground thread that will keep a managed desktop application alive

Each Thread object has an IsBackground property that indicates whether it is a background thread. All the threads in the CLR thread pool are background threads and should not be modified in this respect. That means a task executing asynchronously as a result of a call to BeginInvoke on a delegate is never important enough by itself to keep an application alive.

4. You need a single-threaded apartment (STA) thread to work with apartment-threaded COM objects.

5. Use a dedicated thread if you want to abort it prematurely by calling
Thread's Abort method.

Saturday, September 20, 2008

Messages and Message Queues

Summary of link http://msdn.microsoft.com/en-us/library/ms644927(VS.85).aspx

There are two kinds of messages

1. Queued Messages

a. A message queue is a first-in, first-out, system-defined memory object that temporarily stores messages.

b. Messages posted to a message queue are called queued messages. They are primarily the result of user input entered through the mouse or keyboard. Other queued messages include the timer, paint, and quit messages.

c. The system maintains a single system message queue and one thread-specific message queue for each graphical user interface (GUI) thread. To avoid the overhead of creating a message queue for non–GUI threads, all threads are created initially without a message queue. The system creates a thread-specific message queue only when the thread makes its first call to one of the User or Windows Graphics Device Interface (GDI) functions.

d. Whenever the user moves the mouse, clicks the mouse buttons, or types on the keyboard, the device driver for the mouse or keyboard converts the input into messages and places them in the system message queue. The system removes the messages, one at a time, from the system message queue, examines them to determine the destination window, and then posts them to the message queue of the thread that created the destination window.

e. A thread's message queue receives all mouse and keyboard messages for the windows created by the thread. The thread removes messages from its queue and directs the system to send them to the appropriate window procedure for processing.

2. Nonqueued Messages

Nonqueued messages are sent immediately to the destination window procedure, bypassing the system message queue and thread message queue.

Handling of messages

1. A single-threaded application usually uses a message loop in its WinMain function to remove and send messages to the appropriate window procedures for processing.

2. Applications with multiple threads can include a message loop in each thread that creates a window.

3. Only one message loop is needed for a message queue, even if an application contains many windows.

Window Procedure

1. A window procedure is a function that receives and processes all messages sent to the window. Every window class has a window procedure, and every window created with that class uses that same window procedure to respond to messages.

2. Each window has a function, called a window procedure, that the system calls whenever it has input for the window. The window procedure processes the input and returns control to the system.

The sample "Hello World" program would help understand a few concepts



HINSTANCE hinst;
HWND hwndMain;

int PASCAL WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
LPSTR lpszCmdLine, int nCmdShow)
{
MSG msg;
BOOL bRet;
WNDCLASS wc;
UNREFERENCED_PARAMETER(lpszCmdLine);

// Register the window class for the main window.

if (!hPrevInstance)
{
wc.style = 0;
wc.lpfnWndProc = (WNDPROC) MainWndProc;
wc.cbClsExtra = 0;
wc.cbWndExtra = 0;
wc.hInstance = hInstance;
wc.hIcon = LoadIcon((HINSTANCE) NULL,
IDI_APPLICATION);
wc.hCursor = LoadCursor((HINSTANCE) NULL,
IDC_ARROW);
wc.hbrBackground = GetStockObject(WHITE_BRUSH);
wc.lpszMenuName = "MainMenu";
wc.lpszClassName = "MainWndClass";

if (!RegisterClass(&wc))
return FALSE;
}

hinst = hInstance; // save instance handle

// Create the main window.

hwndMain = CreateWindow("MainWndClass", "Sample",
WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
CW_USEDEFAULT, CW_USEDEFAULT, (HWND) NULL,
(HMENU) NULL, hinst, (LPVOID) NULL);

// If the main window cannot be created, terminate
// the application.

if (!hwndMain)
return FALSE;

// Show the window and paint its contents.

ShowWindow(hwndMain, nCmdShow);
UpdateWindow(hwndMain);

// Start the message loop.

while( (bRet = GetMessage( &msg, NULL, 0, 0 )) != 0)
{
if (bRet == -1)
{
// handle the error and possibly exit
}
else
{
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}

// Return the exit code to the system.

return msg.wParam;
}

LRESULT CALLBACK MainWndProc(
HWND hwnd, // handle to window
UINT uMsg, // message identifier
WPARAM wParam, // first message parameter
LPARAM lParam) // second message parameter
{

switch (uMsg)
{
case WM_CREATE:
// Initialize the window.
return 0;

case WM_PAINT:
// Paint the window's client area.
return 0;

case WM_SIZE:
// Set the size and position of the window.
return 0;

case WM_DESTROY:
// Clean up window-specific data objects.
return 0;

//
// Process other messages.
//

default:
return DefWindowProc(hwnd, uMsg, wParam, lParam);
}
return 0;
}

Sunday, September 14, 2008

Threads - UI and Worker threads

Let's start with the basics. We will first try to gain an understanding of native threads and then gradually move to managed threads.

A thread is the basic unit to which the operating system allocates processor time. Each process is started with a single thread, often called the primary thread, but can create additional threads from any of its threads.

The figure below shows a variety of models for threads and processes.

1. MS-DOS supports a single process and single thread.



Now, there can be a slight difference between GUI applications and console applications. There are two kinds of threads:

1. User-Interface Threads (GUI Threads) : Commonly used to handle user input and respond to user events.

2. Worker Threads : Commonly used to handle background tasks that the user shouldn't have to wait for to continue using your application.

But why is there a distinction between user-interface threads and worker threads?

1. A user-interface thread has its own message pump/loop to process the messages in its message queue. It can implement message maps and message handlers. A worker thread does not have its own message pump. Any thread can create a window; the thread that creates the window becomes a GUI thread and owns the window and its associated message queue.

2. A worker thread often terminates once its work is done. On the other hand, a UI thread often remains standing by in memory (inside the Run() method, which does the message pumping) to do work on receiving any message.

Multiple Threads and GDI Objects

To enhance performance, access to graphics device interface (GDI) objects (such as palettes, device contexts, regions, and the like) is not serialized. This creates a potential danger for processes that have multiple threads sharing these objects. For example, if one thread deletes a GDI object while another thread is using it, the results are unpredictable. This danger can be avoided simply by not sharing GDI objects. If sharing is unavoidable (or desirable), the application must provide its own mechanisms for synchronizing access.

In Win32 programming, the CreateThread function is used to create a thread. It takes the starting address of the code the new thread is to execute. If the thread is a GUI thread, it is its responsibility to provide a message loop.

In MFC, you were supposed to derive from CWinThread to create a user-interface thread.

Friday, September 12, 2008

Exceptions and Multiple Event Handlers

If you have multiple handlers for an event, when the event is raised, each of the handlers gets called in turn by the .NET Framework. So far, so good. What happens if one of the event handlers raises an exception? Then things don't go so well.

If any event listener raises an exception, the whole event-handling chain stops. If you pause to consider what's going on, this behavior makes sense. If an exception occurs in any listener, the exception bubbles back to the event-raising code, the .NET Framework calls no more of the event listeners, and event handling grinds to a halt.

There is an alternative. Rather than simply raising the event, it is possible for you to call each individual listener explicitly. You can take advantage of the members of the Delegate class to solve the problem of unhandled exceptions in multiple listeners.



class Program {
static void Main(string[] args) {

BankAccount b = new BankAccount();
b.LargeAmountEvent += new LargeAmountEventHandler(b_LargeAmountEvent1);
b.LargeAmountEvent += new LargeAmountEventHandler(b_LargeAmountEvent2);
b.Withdraw(100);
}

// Throws an exception
static private void b_LargeAmountEvent1(Object sender, BankEventArgs e) {

if (true) {
throw new Exception("exception");
}
Console.WriteLine("Withdrawn" + e.Amount);
}


// Handler which does not throw an exception
static private void b_LargeAmountEvent2(Object sender, BankEventArgs e) {

Console.WriteLine("Withdrawn" + e.Amount);

}
}/*End Program*/


public class BankEventArgs : EventArgs {
private int amount;

public BankEventArgs(int amount) {
this.amount = amount;
}

public int Amount {
get { return amount; }
}
}



public delegate void LargeAmountEventHandler(Object sender, BankEventArgs e);



public class BankAccount {

public event LargeAmountEventHandler LargeAmountEvent;

public void Withdraw(int amount){
OnLargeWithdraw(new BankEventArgs(amount));
}

protected void OnLargeWithdraw(BankEventArgs e){

if (LargeAmountEvent != null) {
// Instead of raising event, call GetInvocationList
Delegate[] d = LargeAmountEvent.GetInvocationList();
foreach (LargeAmountEventHandler evnt in d) {
try {
evnt.Invoke(this, e);
} catch (Exception ex) {
Console.WriteLine(ex.Message);
}
}
//LargeAmountEvent(this, e);
}
}
}

Sunday, September 7, 2008

Updating the UI from a Secondary Thread

A very nice article describing how to switch from secondary thread to primary thread.

i. Control.Invoke
ii. Control.BeginInvoke
iii. Control.InvokeRequired
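A common sketch of that pattern inside a Windows Forms form class; the statusLabel control is assumed to have been placed on the form in the designer:

using System.Windows.Forms;

public partial class MainForm : Form {
    private delegate void SetStatusDelegate(string text);

    // Safe to call from any thread
    private void SetStatus(string text) {
        if (statusLabel.InvokeRequired) {
            // We're on a secondary thread: marshal the call to the UI thread
            statusLabel.BeginInvoke(new SetStatusDelegate(SetStatus), text);
            return;
        }
        // We're on the UI thread: touching the control is allowed
        statusLabel.Text = text;
    }
}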

Asynchronous Method Execution Using Delegates

There are a number of scenarios in which asynchronous execution can be a valuable design technique. Delegates provide an easy, powerful abstraction for asynchronous method execution.

Let's assume that we have the method GetCustomerList which is a long running method and we want to execute it asynchronously.



class DataAccessCode {
    public static string[] GetCustomerList(string state) {
        // call across network to DBMS or Web Services to retrieve data
        // pass data back to caller using string array return value
        return new string[0];   // placeholder so the sketch compiles
    }
}



This method can be executed asynchronously using delegates.

1. Define a delegate type matching the signature of the method you want to invoke asynchronously.

public delegate string[] LongRunning(string state);

2. Create a delegate object and bind it to the GetCustomerList method.

LongRunning method = new LongRunning(DataAccessCode.GetCustomerList);

3. Call BeginInvoke.

a. When you make a call to BeginInvoke on a delegate object, you are essentially asking the common language runtime (CLR) to begin executing the handler method asynchronously on a secondary thread. While executing a method asynchronously with a call to BeginInvoke is powerful, it's also fairly easy because you don't have to be concerned with creating and managing a secondary thread. The CLR does this for you automatically.

b. When you call the BeginInvoke method, the delegate object places a request in a special internal queue. The CLR maintains a pool of worker threads that are responsible for servicing the request in this queue. Asynchronous execution is achieved because the thread that calls BeginInvoke is not the same as the thread that executes the handler method.

c. The call to BeginInvoke returns right away and the calling thread can then move on to other business without having to wait for the CLR to execute the target method.

IAsyncResult result = method.BeginInvoke("Delhi",null, null);

4. Call EndInvoke

string[] customers = method.EndInvoke(result);

a. EndInvoke allows you to retrieve the return value from an asynchronous method call.

b. A call to EndInvoke requires you to pass the IAsyncResult object associated with a particular asynchronous call.

c. Parameter list for EndInvoke will include any ref parameters defined within the delegate type so you can also retrieve any output parameters.

d. If the asynchronous method experiences an unhandled exception, that exception object is tracked by the CLR and then thrown when you call EndInvoke.

e. Forgetting to call EndInvoke will prevent the CLR from cleaning up some of the resources required in dispatching asynchronous calls. Therefore, you should assume your application will leak if you make calls to BeginInvoke without also making associated calls to EndInvoke.

f. A call to EndInvoke returns right away if the worker thread has already completed the execution of the handler method. However, a call to EndInvoke will block if the asynchronous call hasn't yet started or is still in progress.

g. When to call EndInvoke

i. Immediately after calling BeginInvoke

ii. Wait on WaitHandle and then call EndInvoke

iii. Poll for IsCompleted to be true and then call EndInvoke when it is true.

iv. From callback

In a GUI application, all the code usually runs on a single thread known as the primary UI thread. The primary UI thread is important in a Windows Forms UI application because it is in charge of maintaining the responsiveness of the user interface. If you freeze the primary user interface thread with a long-running call across the network, the hosting application will be unresponsive to the user until the call returns. So, you should take advantage of a handy feature of delegates that lets you set up a callback method that's automatically executed by the CLR when an asynchronous method call is completed.

When the application calls BeginInvoke, the CLR executes the handler method using a worker thread from the CLR thread pool. Next, that same worker thread executes the callback method. When this work is complete, the CLR returns that worker thread to the pool so it is available to service another asynchronous request.

It's very important to understand that the callback method does not run on the primary UI thread that made the call to BeginInvoke. Once again, the callback method is executed on the same secondary thread that executed the asynchronous method.

There is an important threading rule to follow when programming with Windows Forms. The primary UI thread is the only thread that's allowed to touch a form object and all of its child controls. That means it's illegal for code running on a secondary thread to access the methods and properties of a form or a control. However, you must remember that a callback method executes on a secondary thread and not the primary UI thread. That means you should never attempt to update the user interface directly from this kind of callback method. To be more concise, the callback method that's running on the secondary thread must force the primary UI thread to execute the code that updates the user interface.
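Pulling steps 1 through 4 together with a callback; this sketch assumes it lives inside a Windows Forms form, and GetCustomerList, LongRunning, the button, and the listBox are the hypothetical pieces from above:

// Kick off the call from the UI thread (e.g. a button click handler)
private void searchButton_Click(object sender, EventArgs e) {
    LongRunning method = new LongRunning(DataAccessCode.GetCustomerList);
    // Pass the delegate itself as the async state so the callback can call EndInvoke
    method.BeginInvoke("Delhi", new AsyncCallback(OnGetCustomerListComplete), method);
}

// Runs on the same worker thread that executed GetCustomerList - NOT the UI thread
private void OnGetCustomerListComplete(IAsyncResult result) {
    LongRunning method = (LongRunning) result.AsyncState;
    string[] customers = method.EndInvoke(result);   // any exception is rethrown here

    // Force the primary UI thread to do the actual UI update
    this.BeginInvoke(new MethodInvoker(delegate {
        listBox.DataSource = customers;
    }));
}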






IAsyncResult

public interface IAsyncResult {
object AsyncState { get; }
WaitHandle AsyncWaitHandle { get; }
bool CompletedSynchronously { get; }
bool IsCompleted { get; }
}

a. The IAsyncResult object allows you to monitor an asynchronous call in progress.
b. The IsCompleted property allows you to monitor the status of an asynchronous call and determine whether it has completed.

AsyncCallback

public delegate void AsyncCallback(IAsyncResult ar);

Saturday, September 6, 2008

Demystifying Delegates

1. public delegate Int32 Add(Int32 a, Int32 b);

When compiler sees this line, it actually defines a class


public class Add : System.MulticastDelegate {

    // Constructor
    public Add(Object target, Int32 methodPtr);

    // Method with the same prototype as specified by the source code
    public virtual Int32 Invoke(Int32 a, Int32 b) {
        // If there are any delegates in the chain that
        // should be called first, call them
        if (_prev != null) _prev.Invoke(a, b);

        // Call our callback method on the specified target object
        return _target.methodPtr(a, b);
    }

    // Methods allowing the callback to be called asynchronously
    public virtual IAsyncResult BeginInvoke(Int32 a, Int32 b,
        AsyncCallback callback, Object state);
    public virtual Int32 EndInvoke(IAsyncResult result);
}


2. All delegates are derived from MulticastDelegate.

3. The signature of the Invoke method matches the signature of delegate exactly.

4. In the code below, the constructor of Add takes one parameter, but the generated class's constructor has two parameters. Well, compiler magic again:

a. The compiler knows that a delegate is being constructed, and the compiler parses the source code to determine which object and method are being referred to. A reference to the object is passed for the target parameter, and a special Int32 value (obtained from a MethodDef or MethodRef metadata token) that identifies the method is passed for the methodPtr parameter.
b. For static methods, null is passed for the target parameter. Inside the constructor, these two parameters are saved in their corresponding private fields.




class Test {
    public Int32 MyAdd(Int32 a, Int32 b) { return a + b; }
    public static Int32 StMyAdd(Int32 a, Int32 b) { return a + b; }
}

Test obj = new Test();
Add a = new Add(obj.MyAdd);



5. There are three private fields that one should be aware of
private IntPtr _methodPtr;
private object _target;
private MulticastDelegate _prev;

6. Adding more than one callback methods
Add obj = null;
obj += new Add(Test.StMyAdd);
obj += new Add(Test.StMyAdd);

You can also use the static Combine methods of the Delegate class.

7. The field _prev allows delegate objects to be part of a linked-list.

8. http://msdn.microsoft.com/en-us/magazine/cc301810.aspx
http://msdn.microsoft.com/en-us/magazine/cc301816.aspx

Uses of delegates

1. Used for implementing callbacks.

2. Used for implementing multicasting.

3. Delegates also provide the primary means for executing a method on a secondary thread in an asynchronous fashion.

Things you should know about events

The recommended design pattern that should be used to expose events.

1. Define a type which will hold any additional information that should be sent to receivers of the event notification. By convention, types that hold event information are derived from System.EventArgs and the name of the type should end with "EventArgs".

The EventArgs type is inherited from Object and looks like this:

[Serializable]
public class EventArgs {
public static readonly EventArgs Empty = new EventArgs();
public EventArgs() { }
}

It simply serves as a base type from which other types may derive. Many events don't have any additional information to pass on. When you are defining an event that doesn't have any additional data to pass on, just use EventArgs.Empty.

2. Define a delegate type specifying the prototype of the method that will be called when the event fires. By convention, the name of the delegate should end with "EventHandler". It is also by convention that the prototype have a void return value and take two parameters. The first parameter is an Object that refers to the object sending the notification, and the second parameter is an EventArgs-derived type containing any additional information that receivers of the notification require.

If you're defining an event that has no additional information that you want to pass to receivers of the event, then you do not have to define a new delegate.

You can use the System.EventHandler delegate and pass EventArgs.Empty for the second parameter. The prototype of EventHandler is as follows:

public delegate void EventHandler(Object sender, EventArgs e);

3. Define an event.

4. Define a protected, virtual method responsible for notifying registered objects of the event. The OnXXX method is called when an event occurs. This method receives an initialized XXXEventArgs object containing additional information about the event. This method should first check to see whether any objects have registered interest in the event and, if so, fire the event.

This gives the derived type control over the firing of the event. The derived type can handle the event in any way it sees fit. Usually, a derived type will call the base type's OnXXX method so that the registered object will receive the notification. However, the derived type may decide not to have the event forwarded on.

5. Finally, define a method that translates the input into the desired event.
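A compact sketch of the five steps; MailManager, NewMailEventArgs, and the member names are illustrative:

using System;

// Step 1: the type that carries the event's extra data
public class NewMailEventArgs : EventArgs {
    private readonly string _from, _subject;
    public NewMailEventArgs(string from, string subject) {
        _from = from; _subject = subject;
    }
    public string From    { get { return _from; } }
    public string Subject { get { return _subject; } }
}

// Step 2: the delegate describing the handler's prototype
public delegate void NewMailEventHandler(Object sender, NewMailEventArgs e);

public class MailManager {
    // Step 3: the event itself
    public event NewMailEventHandler NewMail;

    // Step 4: the protected virtual OnXXX method that fires the event
    protected virtual void OnNewMail(NewMailEventArgs e) {
        if (NewMail != null) NewMail(this, e);
    }

    // Step 5: a method that translates input into the event
    public void SimulateNewMail(string from, string subject) {
        OnNewMail(new NewMailEventArgs(from, subject));
    }
}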

It is also useful to know what the compiler does when it sees the event keyword. This also helps us understand why the event keyword was needed in the first place. For example, say we have the following line:

public event EventHandler MailMsg;

The C# compiler translates this single line of source code into three constructs

1. The first construct is simply a field that is defined in the type. This field is a reference to the head of a linked list of delegates that want to be notified of this event. This field is initialized to null, meaning that no listeners have registered interest in the event.

You'll notice that event fields (MailMsg in this example) are always private even though the original line of source code defines the event as public. The reason for making the field private is to prevent code outside the defining type from manipulating the field improperly.

// A PRIVATE delegate field that is initialized to null
private EventHandler MailMsg = null;

2. The second construct generated by the C# compiler is a method that allows other objects to register their interest in the event. The C# compiler automatically names this function by prepending "add_" to the field's name (MailMsg).

// A PUBLIC add_* method.
// Allows objects to register interest in the event


[MethodImplAttribute(MethodImplOptions.Synchronized)]
public void add_MailMsg(EventHandler handler) {
    MailMsg = (EventHandler)
        Delegate.Combine(MailMsg, handler);
}


3. The third construct generated by the C# compiler is a method that allows an object to unregister its interest in the event. Again, the C# compiler automatically names this function by prepending "remove_" to the field's name (MailMsg).

// A PUBLIC remove_* method.
// Allows objects to unregister interest in the event
[MethodImplAttribute(MethodImplOptions.Synchronized)]
public void remove_MailMsg(EventHandler handler) {
    MailMsg = (EventHandler)
        Delegate.Remove(MailMsg, handler);
}


Other facts that you should know about the events.

1. The thread used when firing the event, is the thread used to handle the event.

2. When an event has multiple subscribers, the event handlers are invoked synchronously when an event is raised.

3. It is necessary to unsubscribe from events to prevent resource leaks. The publisher holds a reference to the subscriber, and the garbage collector will not collect the subscriber object until you unsubscribe from the event. So, you should unsubscribe from events before you dispose of a subscriber object.

Saturday, August 30, 2008

Deconstructing the System.Type class

I always knew that the Type class was special, so let's spend some time on understanding it.

1. On looking at its documentation, what confused me a bit was that the class is declared as an abstract class.

a. So how does the typeof operator create an instance of the class? Well, it actually creates an instance of System.RuntimeType, which itself is derived from System.Type.

2. Another interesting fact to note is that Type object that represents a type is unique; that is, two Type object references refer to the same object if and only if they represent the same type. This allows for comparison of Type objects using reference equality. In other words, for any type, there is only one instance of Type per application domain.

a. Not to worry, the Type class is thread safe.



class A { }
class B { }

Type type1 = typeof(A);
Type type2 = typeof(B);

Type type3 = type1.GetType(); // System.RuntimeType
Type type4 = type2.GetType(); // System.RuntimeType

bool b = type3 == type4; // true: both calls return the single Type object that represents System.RuntimeType

bool c = Object.ReferenceEquals(typeof(A), type1); // true: one Type instance per type per AppDomain, so reference comparison works





3. In multithreading scenarios, do not lock Type objects in order to synchronize access to static data. Other code, over which you have no control, might also lock your class type, and this could result in a deadlock. Instead, synchronize access to static data by locking a private static object. This means you should never write the commonly seen

lock (typeof(A)) {
}
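Instead, lock a private static object that no outside code can reach. A minimal sketch of the preferred pattern (syncRoot, counter, and Increment are illustrative names):

class A {
    // Private lock object: no code outside A can take this lock, so external code cannot deadlock us
    private static readonly object syncRoot = new object();
    private static int counter;

    public static void Increment() {
        lock (syncRoot) {
            counter++;
        }
    }
}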

Saturday, August 9, 2008

How .NET supports the Singleton

The following example shows an easy way to implement Singleton class using the .NET Framework.


sealed class Singleton {
    private Singleton() { }
    public static readonly Singleton Instance = new Singleton();
}


1. The Framework guarantees thread safety on static type initialization.

2. The Singleton is still created using lazy initialization: the instance is not constructed until the type is first used, so no memory is consumed if the object is never needed.
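A minimal usage sketch (assume a using System; directive):

class Program {
    static void Main() {
        // The instance is constructed the first time the Singleton type is used
        Singleton s1 = Singleton.Instance;
        Singleton s2 = Singleton.Instance;
        Console.WriteLine(Object.ReferenceEquals(s1, s2)); // True: both references point to the same instance
    }
}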

Friday, June 27, 2008

Dependency Inversion Principle

Before delving deeper into the principle we should understand

a. What is a dependency ?
b. What is it trying to invert ? To understand this, we should first understand what happens in the normal case, which the principle says should be inverted.

What is a dependency ?

When a class refers to another class, it is said to be dependent on that class. Consider the following example,


class A {
}

class B {
    A obj;
}


Class B has a member variable of type A and so it is dependent on A. The dependency can be set using either a constructor or a setter method, as shown below.
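A minimal sketch of both options (the parameter and method names are illustrative):

class B {
    private A obj;

    // Constructor injection: the dependency is supplied when B is created
    public B(A dependency) {
        obj = dependency;
    }

    // Setter injection: the dependency is supplied (or replaced) after construction
    public void SetA(A dependency) {
        obj = dependency;
    }
}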

What is the 'un-inverted' way ?

An application has high level classes and low level classes. The low level classes implement the basic functionality, and the high level classes implement the complex logic and in turn use the low level classes.
A natural way of implementing such structures would be to write the low level classes first and, once we have them, to write the complex high level classes. Since the high level classes are defined in terms of the others, this seems the logical way to do it.

High Level Module --> Low Level Module

Besides, as Robert Martin mentions in his article, the word inversion is used because...

more traditional software development methods, such as Structured Analysis and Design, tend to create software structures in which high level modules depend upon low level modules, and in which abstractions depend upon details. Indeed one of the goals of these methods is to define the subprogram hierarchy that describes how the high level modules make calls to the low level modules. Thus, the dependency structure of a well designed object oriented program is “inverted” with respect to the dependency structure that normally results from traditional procedural methods.

What is DIP ?
  • High-level modules should not depend on low-level modules. Both should depend on abstractions.
  • Abstractions should not depend on details. Details should depend on abstractions.

According to this principle, the way to design a class structure is to start from the high level modules and work down to the low level modules:

High Level Classes --> Abstraction Layer --> Low Level

But why ?

1. If the high level modules depend on low level modules, it is difficult to reuse the high level modules in different contexts. And it is the high level modules we generally want to reuse.

2. It's the high level modules which contain the business logic. Yet, when these modules depend upon the lower level modules, changes to the lower level modules can have direct effects upon them and can force them to change. But it is the high level modules that ought to force the low level modules to change.

Advantages of DIP

1. The use of DIP makes the high level modules reusable as they are not directly dependent on low level details/modules. It helps in creation of reusable frameworks.

2. It helps in creation of code that is resilient to change. And, since the abstractions and details are all isolated from each other, the code is much easier to maintain.

Layering and DIP

According to Booch, “...all well structured object-oriented architectures have clearly-defined layers, with each layer providing some coherent set of services through a well-defined and controlled interface.”

For example, let's consider the three layers A->B->C. In this case layer A is sensitive to all the changes down in layer C. Dependency is transitive.

Using DIP, A->IB->BImpl->IC->CImpl

Here, each of the lower layers is represented by an abstract class. Each of the higher level classes uses the next lower layer through the abstract interface. Thus, none of the layers depends directly upon any of the other layers; instead, the layers depend upon abstract classes. Not only is the transitive dependency of layer A upon layer C broken, but even the direct dependency of layer A upon layer B is broken.
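A minimal sketch of this structure, using interfaces as the abstractions; the names mirror the A->IB->BImpl->IC->CImpl notation above and the method names are illustrative:

interface IC {
    void DoLowLevelWork();
}

class CImpl : IC {
    public void DoLowLevelWork() { /* low level details */ }
}

interface IB {
    void DoMidLevelWork();
}

class BImpl : IB {
    private readonly IC c;
    public BImpl(IC c) { this.c = c; }            // B depends only on the IC abstraction
    public void DoMidLevelWork() { c.DoLowLevelWork(); }
}

class A {
    private readonly IB b;
    public A(IB b) { this.b = b; }                // A depends only on the IB abstraction
    public void DoHighLevelWork() { b.DoMidLevelWork(); }
}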

Program to interfaces, not implementation

The point is to exploit polymorphism by programming to a supertype so that the actual runtime object is not locked into the code. And the supertype can be an interface or an abstract class.

Sunday, June 15, 2008

Best practices for statics

1. .NET also supports type constructors, also known as static constructors, class constructors, or type initializers. A type's constructor is guaranteed to run before any instance of the type is created and before any static field or method of the type is referenced.

a. Implicit static constructor

class AType {
static int x = 5;
}


When this code is built, the compiler automatically generates a type constructor for AType. This constructor is responsible for initializing the static field x to the value 5.

b. Explicit static constructor



class AType {
static int x;

static AType() {
x = 5;
}
}


When you run FxCop, (b) gives the warning "Do not declare explicit static constructors." The rule description tells you that an explicit static constructor results in code that performs worse; the recommendation from FxCop is to initialize static fields where they are declared. The reason is that an implicit static constructor can be run by the runtime at any convenient time, whereas if an explicit static constructor is defined, the runtime must run the type constructor at a precise time: just before the first access to a static or instance field or method of the type. The checks that the runtime performs in order to run the type initializer at this precise time add overhead, which leads to the performance drop hinted at by FxCop.

2. Exceptions in static constructors

a. The runtime will stop any exception trying to leave a type constructor and wrap it inside a new TypeInitializationException object. The original exception thrown inside the type constructor is then available from the InnerException property of the TypeInitializationException object. Note also that the second time you try to access a static member of the type, the runtime does not try to invoke the type constructor again; it throws the same exception observed the first time. The runtime does not give a type constructor a second chance. The same rules hold true if the exception is thrown from a static field initializer, since, as you saw earlier, the code for a static field initializer executes inside an implicit type constructor.

b. A TypeInitializationException can be a fatal error in an application since it renders a type useless. You should plan to catch any exceptions inside of a type constructor if there is any possibility of recovering from the error, and you should allow an application to terminate if the error cannot be reconciled.

c. When an exception is thrown, the runtime will begin to look for the nearest catch clause whose filters specify that it can handle the exception.
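A minimal sketch of this behavior; the Config class and ReadSetting method are hypothetical:

using System;

class Config {
    public static readonly string Setting = ReadSetting();

    private static string ReadSetting() {
        // Imagine this fails, e.g. because a configuration file is missing
        throw new InvalidOperationException("config file not found");
    }
}

class Program {
    static void Main() {
        try {
            Console.WriteLine(Config.Setting);
        } catch (TypeInitializationException ex) {
            // The original exception is wrapped
            Console.WriteLine(ex.InnerException.Message); // "config file not found"
        }

        // A second access does not run the type constructor again;
        // the same TypeInitializationException is thrown.
        try {
            Console.WriteLine(Config.Setting);
        } catch (TypeInitializationException) {
            Console.WriteLine("The type is still unusable");
        }
    }
}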

3. The rule of thumb here is to avoid touching the static members of another type from within a type constructor. Although the chances of seeing the previous scenario are slim, the effects would be difficult to debug and track down since there are few diagnostics available.

4. Static members are not inherited. Though static members are not inherited, they can still be accessed through the derived class type, as shown below.
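A minimal sketch with illustrative names:

class BaseCounter {
    public static int Count = 0;
}

class DerivedCounter : BaseCounter {
}

class Demo {
    static void Show() {
        // DerivedCounter declares no Count of its own, yet this compiles:
        // the compiler resolves it to BaseCounter.Count.
        System.Console.WriteLine(DerivedCounter.Count);
    }
}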

5. If you have a class with only static members, you would not want to
a. Allow it to be derived from (use the sealed keyword)
b. Allow it to be instantiated (declare a private default constructor)

A solution to both of these is the static class.

The new syntax allows the compiler to enforce a number of new rules.

a. You cannot declare a variable of type Static1 because it is marked as a static class.
b. You also cannot derive from Static1, or add any non-static members to the class.
c. Note also that it's an error to derive a static class from any class other than object.
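A minimal sketch (Static1 is simply the example class name used above; Add and Demo are illustrative):

static class Static1 {
    public static int Add(int a, int b) {
        return a + b;
    }
}

// class Derived : Static1 { }        // compile error: cannot derive from a static class

class Demo {
    static void Use() {
        int sum = Static1.Add(2, 3);   // static members are used directly on the type
        // Static1 s = new Static1();  // compile error: cannot create an instance of a static class
    }
}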

6. You should also remember that static classes that preserve state between method calls should be made thread safe by default. Making a class thread safe requires additional care during the implementation and testing of the class. Before going down this road, ask yourself if the extra overhead is absolutely necessary.



7. The .NET Framework guarantees thread safety on static type initialization.



class Singleton{
public static readonly Singleton Instance = new Singleton();
}