October 12, 2007

My Articles and Tutorials For IEEE-NUCES Karachi E-Newsletter (On different topics)

(These are old articles from my university days, so some information may be obsolete, especially in the technology-related articles.)


If I say, “I know something that has already happened in the future,” you may think I am wrong, but many philosophers and physicists believe such a statement could be true.

Present, past and future are relative terms used to describe events occurring at particular times. Then the question arises: what is time? This is one of the burning questions, with many contradictory answers.

“A successful unification of quantum theory and relativity would necessarily be a theory of the universe as a whole. It would tell us, as Aristotle and Newton did before, what space and time are, what the cosmos is, what things are made of, and what kind of laws those things obey. Such a theory will bring about a radical shift – a revolution – in our understanding of what nature is. It must also have wide repercussions, and will likely bring about, or contribute to, a shift in our understanding of ourselves and our relationship to the rest of the universe” (Lee Smolin)

Physicists define time as what the clock reads. But this is not as simple as it looks; the statement itself opens another series of confusing questions: what is the starting point of time? Does time have a start? Is time finite? Is time always real?

Many scientists consider time to have roots leading back to the Big Bang. But on the other hand another argument awaits: what was the Big Bang, and was there a Big Bang at all?

Time is also considered relative with respect to space, which is why we use the term space-time coordinates. This can be understood with a simple example:

If I say a point was at position x=0, y=0, z=0 at time t=0 and then moved to x=1, y=2, z=2 at time t=1, that means in 1 unit of time the point moved 3 position units (using the distance formula).
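That 3-unit figure follows directly from the Euclidean distance formula; a quick check (illustrative Python, using just the coordinates from the example):

```python
import math

# The point moves from (0, 0, 0) at t=0 to (1, 2, 2) at t=1.
start = (0, 0, 0)
end = (1, 2, 2)

# Euclidean distance: sqrt((1-0)^2 + (2-0)^2 + (2-0)^2) = sqrt(9)
distance = math.dist(start, end)
print(distance)  # 3.0
```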

Another question arises: is time the same, or is the clock the same, for every observer? Einstein’s special theory of relativity did away with the idea that events in different locations can be absolutely simultaneous. The difference in time between two events depends on the distance between them and on how fast the observer is moving.

Time is considered by some, such as Newton and Bacon, to be linear: one way, with no coming back. But in the twentieth century, Gödel and others discovered solutions to the equations of Einstein’s general theory of relativity that allow closed loops of proper time. These causal loops, or closed curves in space-time, let you go forward continuously in time until you arrive back in your past, which would mean we could go to our past and meet our childhood selves. On the other hand, the concept of time dilation allows you to see the future.

There are other terminologies we use which are related to time, such as duration, occurrence of events, and instants. Events occur over a particular duration of time. Time is also defined as a collection of instants, and instants are said to be the boundaries of durations. Durations are considered ordered sets of instants, meaning an instant is not a part of a duration but a member of it.

Now, at the end of my article, I quote Albert Einstein, who once said: “The development during the present century is characterized by two theoretical systems essentially independent of each other: the theory of relativity and the quantum theory. The two systems do not directly contradict each other; but they seem little adapted to fusion into one unified theory. For the time being we have to admit that we do not possess any general theoretical basis for physics which can be regarded as its logical foundation. If it is true that the axiomatic basis of theoretical physics cannot be extracted from experience but must be freely invented, can we ever hope to find the right way? I answer without hesitation that there is, in my opinion, a right way, and that we are capable of finding it. I hold it true that pure thought can grasp reality, as the ancients dreamed.”

I hope we will be able to find the right way.

Windows Presentation Foundation

Windows Presentation Foundation (WPF), also known as Avalon, is Microsoft’s new presentation framework for Windows Vista (previously Longhorn) and the latest service pack of Windows XP.

In WPF, controls and graphics are merged into one programming model, giving programmers the low-level content support they had been looking for for many years. WPF’s content model allows a programmer to put any image, 3D model, etc. into a control by allowing any control to host any other control.

WPF also has the capability to keep a web application responsive through its support for asynchronous and multi-threaded programming techniques.

WPF integrates many of the previous presentation technologies such as GDI, GDI+ and HTML, and it also includes the features of DirectX, providing a great graphical experience for developers.

WPF’s other major feature is ClickOnce technology, which gives programmers the benefits of HTTP-based deployment. Previously, with .NET 1.x, Microsoft offered No Touch Deployment (NTD), which was a great success but had a few problems, such as no offline support and no distribution of non-.NET assembly files. These and many other issues are solved by the new .NET 2.0 ClickOnce technology.

There is another useful feature of Microsoft’s latest .NET Framework: XAML, the eXtensible Application Markup Language. It provides an easy way of creating a WPF UI (although it is not necessary to use XAML to make WPF applications). Usually, in WPF programming, XAML’s .xaml and C#’s .xaml.cs files are combined to make an application.


Many developers face problems with type safety when working in the .NET environment. In C++, class templates allow a programmer to write a class generically, increasing reusability, and then instantiate it with a specific type so as to have type safety.

In .NET 1.x, since every type derived from the class Object, there was no type checking at all whenever you used collections, and the problem usually created headaches at runtime.

Generics allow a programmer to solve the type-safety problem at compile time. For example, in C# 1.1, when using the push and pop operations of a stack, say for integers, we used to write:

Stack intstack = new Stack();
intstack.Push(5);
int ipop = (int)intstack.Pop();

In the above example, when the integer value is pushed it is automatically boxed, but when it is popped we have to unbox it explicitly. Now consider another problem. When we write:

Stack intstack = new Stack();
intstack.Push(5);
string spop = (string)intstack.Pop();

the code above will not give a compile-time error, but we know it will cause problems for us (an InvalidCastException at runtime). Microsoft has solved this problem in the .NET 2.0 environment: now we can use generics, and if an incorrect retrieve operation (such as Pop()) is performed with respect to type safety, the compiler will give a type-mismatch error. For example, look at the following code:

Intstack&lt;int&gt; istack = new Intstack&lt;int&gt;();

istack.Insert(new int());

int i = istack.Retrieve();

istack.Insert("faisal"); // Now the compiler will give a type-mismatch error

The Intstack&lt;T&gt; class can be declared as follows:

public class Intstack&lt;T&gt;
{
    T[] elements = new T[100];
    int numelements;

    public void Insert(T element)
    {
        // store the element (no bounds checking, for brevity)
        elements[numelements++] = element;
    }

    public T Retrieve()
    {
        // return the most recently inserted element
        return elements[--numelements];
    }
}

I hope this article has given you some idea of one of the many new features in the .NET environment that will help you improve your product quality.


During the 50s or 60s, nobody would have imagined an alarm clock with its own microprocessor, but now such devices, known as embedded devices, are common: TVs, air conditioners and microwave ovens have their own processing devices and operating systems. Another breakthrough in this regard is the emergence of embedded internet technology, which has given a new dimension to this field. A person in the 80s or early 90s would never have thought of a TV connected to the internet, but with the emergence of embedded internet devices all this has become possible.

Java, from Sun Microsystems, has been one of the leading technologies for developing the tools that will support this great advancement. J2ME consists of a Java virtual machine specification and API specifications designed for the special requirements of each class of product. Many APIs, mostly based on J2ME, have been developed to cater to the needs of different embedded devices: for example, for mobile devices such as cell phones and two-way pagers there is the J2ME Mobile Information Device Profile (MIDP); for interactive television applications there is the Java TV API; there is also an API for card technology, and many more are available to developers. All of these depend on a JVM, like Java PC applications, except that instead of using operating systems like Windows or Linux, the devices on which the JVM runs have their own real-time operating systems (RTOS). The RTOS, from which the Java VM derives some of its functionality, allows the Java VM to abstract itself from device-specific architectures. In exchange, the Java VM provides a secure wrapper around the RTOS.

Today, with the development of such technologies, it is possible to find your location and the shortest route to your destination while sitting in an internet-connected car, and in the future you will see moms cooking food while out shopping, accessing their kitchen servers using their handheld devices.

I hope reading this article has at least given you an introduction to embedded internet devices and some ideas for your future projects.


In my previous article I gave an overview of embedded internet devices and the set of Java APIs available for embedded systems application development. Now let’s look at how fuzzy logic can be used to improve the performance of modern embedded devices, as this is one of the hottest areas of research and development.

When embedded systems emerged on the scene three or four decades ago, I think they were meant for simple tasks and were based on traditional binary logic, i.e. ‘1’ or ‘0’, the complete-truth approach: either it is yes or it is no. But with the passage of time, the requirements of industry, the military and home appliances demanded more intelligent embedded systems, and for that purpose fuzzy logic and artificial intelligence were merged with embedded system technology.

Fuzzy logic is basically used to extract meaning from partial or relative truth. For example, suppose we design an air-conditioning system that starts working when the temperature gets hot. The issue then arises: what does “hot” mean to an electronic device? If it uses traditional binary logic, its program may state that anything above 40 degrees is hot. But what if we want to make it smarter, so that it works on ranges of values: say from 0 to 10 it is cold, from 11 to 39 it is normal, and from 40 to 60 it is hot, with the air conditioning adjusted accordingly? Then we have to consider using fuzzy logic, because such partial truths cannot be processed using the traditional binary logic of 0 and 1.
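The crisp ranges above can be softened into fuzzy membership functions, so that a temperature near a boundary belongs partly to two categories at once. A minimal sketch (the ramp shapes and cut-off values below are my own illustrative assumptions, not taken from any real controller):

```python
def membership(temp):
    """Degrees of truth (0.0 to 1.0) for cold, normal and hot."""
    # Linear ramps (assumed values): fully cold below 5, fading out by 15;
    # hot starts at 35 and is fully hot above 45.
    cold = max(0.0, min(1.0, (15 - temp) / 10))
    hot = max(0.0, min(1.0, (temp - 35) / 10))
    normal = max(0.0, 1.0 - cold - hot)  # whatever degree of truth remains
    return {"cold": cold, "normal": normal, "hot": hot}

# At 40 degrees the reading is partly "normal" and partly "hot" at once,
# which binary 0/1 logic cannot express.
print(membership(40))
```

A real fuzzy controller would then combine such memberships with rules (“if hot, cool strongly”) and defuzzify the result into a concrete fan setting.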

As time passes, new ideas keep emerging on the technology scene, bringing new areas of research: the emergence of embedded internet devices, location-aware multi-application devices and so on has brought a revolution to this area. Development in these areas demands even greater involvement of fuzzy logic and artificial intelligence, so that we can have more sensible and smart embedded devices with stronger decision-making power.

Pluto’s New Status as “Dwarf Planet”

When I was in the 1st grade of school I learned that there are nine planets in the solar system, and it took me almost a whole day to learn the names of those planets. But kids of the future will have less tension, because they only have to learn the names of eight planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune.

Yes, it is true: in the last week of August, at a conference, Pluto’s status as a planet was voted out by the astronomers of the IAU (International Astronomical Union). Pluto is now categorized as a dwarf planet and has been assigned the number 134340 by the Minor Planet Center. Other notable objects to receive asteroid numbers included 2003 UB313, also known as “Xena,” and the recently discovered Kuiper Belt objects 2003 EL61 and 2005 FY9. Their asteroid numbers are 136199, 136108 and 136472, respectively.

Now there are three categories of objects in our solar system. First, planets, which include Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune; second, dwarf planets, which include Pluto, Xena and others; and third, small solar system bodies, which include all other solar system objects.

Pluto, discovered in 1930, was at first thought to be larger than it is. It has an eccentric orbit that crosses the path of Neptune and also takes it well above and below the main plane of the solar system.

For many, this decision is a victory of science over emotion, while some simply say it shouldn’t have happened. The decision has started a new debate over the definition of a planet, and according to recent news more than 300 scientists have signed a petition of protest against the decision.

Let’s see what the future brings for us, as science is not only a subject but also a world in which every day is full of changes: new theories replace old theories, new phenomena to wonder at are discovered every day, and the amazing becomes ordinary every time you think of something new.

Quantum Computers

Imagine a computer much superior to modern computers: a computer that can process information with a speed equivalent to 10^150 Pentium 4s, that is based on logic representing 2^1000 or maybe more states, and that can crack the “unbreakable” codes. Then you just need to wait maybe 10 or 15 years, as quantum computers are expected to hit the commercial markets in the next decade or so.

What are Quantum Computers?

Quantum computers are computers that use the quantum mechanical properties of atoms for logical operations. The computers we use today have the bit as the fundamental unit of information, with two logic states, 1 or 0. But in the case of quantum computers the fundamental unit of information is the qubit. Like a bit, a qubit can have the logic states 0 and 1, but it can also be in states in between 0 and 1, and can even be in states 0 and 1 simultaneously, i.e. a superposition of the 0 and 1 logic states.

Obviously, many probabilistic approaches will be involved in determining and reading out the required logic states. Each qubit can replace an entire processor, i.e. 500 qubits (made from 500 ions of a certain element) can replace 500 processors!!
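The power of qubits can be made concrete: describing the state of an n-qubit register classically takes 2^n amplitudes, one per basis state. A quick back-of-the-envelope check (illustrative Python, not a quantum simulation):

```python
def classical_amplitudes(n_qubits):
    # An n-qubit superposition is described by 2**n complex amplitudes,
    # which is what a classical machine would have to store.
    return 2 ** n_qubits

print(classical_amplitudes(10))             # 1024
print(len(str(classical_amplitudes(500))))  # digit count: roughly 10^150 amplitudes
```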

The idea behind Quantum Computing

In 1965, Gordon Moore stated his famous law about microprocessor technology: the number of transistors on a microprocessor doubles roughly every 18 months. During the last two decades, scientists realized that if microprocessor technology continues to abide by Moore’s law, then by the year 2020 microprocessor circuits will shrink to a size comparable to the atom, and at that stage quantum mechanical behavior will take over from the classical behavior based on the existence or non-existence of 0 and 1. In the 1970s and 80s, scientists such as Charles H. Bennett of the IBM Thomas J. Watson Research Center, Paul A. Benioff of Argonne National Laboratory in Illinois, David Deutsch of the University of Oxford and Richard P. Feynman of the California Institute of Technology were the pioneering brains behind the idea of developing quantum logic for future computations.
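The 18-month doubling is simple exponential arithmetic, easy to project forward (a sketch; the starting transistor count below is an illustrative assumption, not a figure from the article):

```python
def projected_transistors(start_count, years, doubling_months=18):
    # Moore's law as stated above: one doubling every 18 months.
    doublings = years * 12 / doubling_months
    return start_count * 2 ** doublings

# From an assumed 42 million transistors, 15 years gives 10 doublings:
print(f"{projected_transistors(42e6, 15):.2e}")  # about 4.3e+10
```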

Research in Quantum Computing

Quantum computers, promising a new dimension of computing, are still at an evolutionary stage. Current research focuses on error correction in quantum computers and on the design of their hardware architecture.

In 1995 the theory of Quantum Error Correction was proposed and since then it has resulted in the development of many small scale Quantum Computers. In 1998, researchers at Los Alamos National Laboratory and MIT led by Raymond Laflamme managed to spread a single bit of quantum information (qubit) across three nuclear spins in each molecule of a liquid solution of alanine or trichloroethylene molecules. They accomplished this using the techniques of nuclear magnetic resonance or NMR.

Organizations such as IBM, Los Alamos National Laboratory, Microsoft, MIT and Caltech are spending hundreds of millions of dollars on quantum error correction research.

Another major challenge is the design of optimal hardware for quantum computation (obviously, that is what we actually want to have). Most research in this area nowadays involves ion traps, cavity quantum electrodynamics (QED) and NMR.

Future and Application of Quantum Computers

In the future, quantum computers will replace the present computers based on Charles Babbage’s concept. The advent of commercial quantum computers will revolutionize the areas of cryptography, communication (when combined, probably, with future photon-based devices), natural-process simulation, numerical analysis and so on.


In .NET Framework 2.0, Microsoft has introduced a new concept: partial classes. By using partial classes, a programmer can separate the designer code from the implementation code.

The keyword introduced for this purpose is “partial”, and its use is quite simple. For example, in C# 2.0:

public partial class NameClass

In VB 2005:

Partial Public Class NameClass

As we know, it is good programming practice to keep all the source code for a type in a single file, but sometimes, when the code is very large, it becomes a headache to review, so this problem is solved by dividing the class code into segments.

Partial classes separate different code segments by placing them in different files. When the class is compiled, the resulting code looks as if the class had been written in the old-fashioned way. A C# 2.0 example of a partial class:

public partial class NameClass
{
    private int isomeint;
    private string somestr;

    public NameClass()
    {
    }
}

public partial class NameClass
{
    public void DoSomething1(…)
    {
        …
    }

    public void DoSomething2(…)
    {
        …
    }
}

When the above code is compiled, it generates code as if the class had been written like this:

public class NameClass
{
    private int isomeint;
    private string somestr;

    public NameClass()
    {
    }

    public void DoSomething1(…)
    {
        …
    }

    public void DoSomething2(…)
    {
        …
    }
}


Last month Pakistan faced one of the worst natural disasters in recent history. The earthquake in the northern parts of Pakistan resulted in the deaths of thousands of people and caused billions of dollars of infrastructure damage.

Our earth is made up of three main layers which are:

1. Crust (the uppermost layer, subdivided into upper, middle and lower crust).

2. Mantle (the middle layer).

3. Core (the innermost layer).

All the land on which we live (which is part of the crust) floats on the mantle and is constantly in motion. The recent quake is the result of the Indian plate moving towards the Eurasian plate. This process has been going on for millions of years (the Himalayan range also came into existence because of it). This movement of the earth’s plates results in collisions, which cause the most dangerous form of earthquake, known as a tectonic earthquake. There are three types of earthquakes:

1. Tectonic (the most dangerous; their main cause is the motion and collision of the earth’s plates).

2. Volcanic (caused by volcanic activity).

3. Artificial (caused by nuclear explosions, shockwave bomb explosions, etc.).

Earthquake-sensitive areas are identified by the fault lines that pass through them.

Scientists identify four types of faults, characterized by the position of the fault plane (the break in the rock) and the movement of the two rock blocks:

1. In a normal fault the fault plane is nearly vertical. The hanging wall, the block of rock positioned above the plane, pushes down across the footwall, which is the block of rock below the plane. The footwall, in turn, pushes up against the hanging wall. These faults occur where the crust is being pulled apart, due to the pull of a divergent plate boundary.

2. The fault plane in a reverse fault is also nearly vertical, but the hanging wall pushes up and the footwall pushes down. This sort of fault forms where a plate is being compressed.

3. A thrust fault moves the same way as a reverse fault, but the fault plane is nearly horizontal. In these faults, which are also caused by compression, the rock of the hanging wall is actually pushed up on top of the footwall. This is the sort of fault that occurs at a converging plate boundary.

4. In a strike-slip fault, the blocks of rock move in opposite horizontal directions. These faults form when pieces of crust slide against each other, as at a transform plate boundary.


The radiation during an earthquake spreads out in the form of P and S waves. P waves, or primary waves, are faster than S waves and move straight through the earth because they can travel through solids, liquids and gases. Secondary waves, also called S waves or shear waves, lag a little behind the P waves. As these waves move, they displace rock particles outward, pushing them perpendicular to the path of the waves. This results in the first period of rolling associated with earthquakes. Unlike P waves, S waves do not move straight through the earth; they travel only through solid material, and so are stopped at the liquid layer in the earth’s core.

Both sorts of body waves do travel around the earth, however, and can be detected on the opposite side of the planet from the point where the earthquake began. At any given moment, there are a number of very faint seismic waves moving all around the planet.

Surface waves are something like the waves in a body of water: they move the surface of the earth up and down. They generally cause the worst damage, because the wave motion rocks the foundations of man-made structures. L waves, a type of surface wave, are the slowest moving of all, so the most intense shaking usually comes at the end of an earthquake.

The Richter scale, which is used to measure earthquakes, is logarithmic, meaning that with an increase of one whole number there is a tenfold increase in the measured amplitude of the earthquake, so an earthquake of magnitude 7 is 100 times stronger than one of magnitude 5. The energy released increases almost 32 times with every whole number (the earthquake in northern Pakistan measured 7.6 on the Richter scale).
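The scale arithmetic above can be checked directly (a sketch; the 10**1.5 energy exponent is the standard rule of thumb behind the "almost 32 times" figure):

```python
def amplitude_ratio(m1, m2):
    # Each whole-number step on the Richter scale is a 10x amplitude increase.
    return 10 ** (m1 - m2)

def energy_ratio(m1, m2):
    # Released energy grows by about 10**1.5 (~31.6x) per whole-number step.
    return 10 ** (1.5 * (m1 - m2))

print(amplitude_ratio(7, 5))         # 100
print(round(energy_ratio(7, 5)))     # 1000
print(round(energy_ratio(6, 5), 1))  # 31.6
```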

The other measuring scale is the Mercalli scale, which gives information about the actual infrastructure damage. It uses Roman numerals for its measurements; XII on the Mercalli scale means 100 percent destruction of the area (Mercalli estimates for the recent quake in Pakistan range from VII to X).

In the end, I would just like to appeal to everyone who reads this article to do whatever he or she can to help the victims of what is probably the worst form of natural disaster.


