February 23, 2011

How Do Scrollable ResultSets Work?

JDBC result sets are created with three properties: type, concurrency and holdability.
The type can be one of
- TYPE_FORWARD_ONLY
- TYPE_SCROLL_INSENSITIVE
- TYPE_SCROLL_SENSITIVE

The concurrency can be one of
- CONCUR_READ_ONLY
- CONCUR_UPDATABLE

The holdability can be one of
- HOLD_CURSORS_OVER_COMMIT
- CLOSE_CURSORS_AT_COMMIT

JDBC allows the full cross product of these. The SQL:2003 standard prohibits the combination {TYPE_SCROLL_INSENSITIVE, CONCUR_UPDATABLE}, but some vendors, notably Oracle, do support it.
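
For example, all three properties can be requested at once; the three-argument overload of createStatement was added in JDBC 3.0. A minimal sketch, assuming an open java.sql.Connection named con and a driver that supports the requested combination:

Statement stmt = con.createStatement(
        ResultSet.TYPE_SCROLL_INSENSITIVE,   // type
        ResultSet.CONCUR_UPDATABLE,          // concurrency
        ResultSet.HOLD_CURSORS_OVER_COMMIT); // holdability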

Movable cursors, which can be moved forward and backward through a result set, are one of the new features in the JDBC 2.0 API. There are also methods that let you move the cursor to a particular row and check the position of the cursor.
Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
                                     ResultSet.CONCUR_READ_ONLY);

ResultSet resultSet = stmt.executeQuery("SELECT FNAME, LNAME FROM EMPLOYEE");

while (resultSet.next()) {
    . . . // iterates forward through resultSet
}

. . .

resultSet.absolute(5);   // moves cursor to the fifth row

. . .

resultSet.relative(-2);  // moves cursor to the third row

. . .

resultSet.relative(4);   // moves cursor to the seventh row

. . .

resultSet.previous();    // moves cursor to the sixth row

. . .

int rowNumber = resultSet.getRow(); // rowNumber should be 6

resultSet.afterLast();   // moves cursor to the position after the last row

while (resultSet.previous()) {
    . . . // iterates backward through resultSet
}

When you define a result set's type, you must also define whether it is read-only or updatable, and the type and concurrency arguments must be passed in the order shown in the code above. Because both parameters are plain int constants, the compiler cannot catch a swapped order, so take care. If you specify the constant TYPE_FORWARD_ONLY, it creates a nonscrollable result set, in which the cursor moves forward only. By default, a ResultSet object is TYPE_FORWARD_ONLY and CONCUR_READ_ONLY.
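
Note also that a driver may quietly downgrade a type or concurrency it does not support (typically reporting this through an SQLWarning). A quick sketch of how to check what you actually received, reusing the resultSet from the code above:

if (resultSet.getType() == ResultSet.TYPE_FORWARD_ONLY) {
    System.out.println("Driver downgraded to a forward-only result set");
}
if (resultSet.getConcurrency() == ResultSet.CONCUR_READ_ONLY) {
    System.out.println("Result set is read-only");
}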

February 22, 2011

How Does the JVM Handle Thread Synchronization?

The JVM associates a lock with an object or a class to coordinate multithreaded access. A lock is like a token or privilege that only one thread can "possess" at any one time. When a thread wants to lock a particular object or class, it asks the JVM; the JVM may grant it the lock very soon, later, or never. When the thread no longer needs the lock, it returns it to the JVM. If another thread has requested the same lock, the JVM passes the lock to that thread. While a thread holds a lock, no other thread can access the locked data until the owning thread releases it.

The JVM uses locks in conjunction with monitors. A monitor is basically a guardian: it watches over a sequence of code, making sure only one thread at a time executes it. Each monitor is associated with an object reference, and it is the monitor's responsibility to ensure that an arriving thread obtains the lock on the referenced object before executing the guarded code.

When the thread leaves the block, it releases the lock on the associated object. A single thread is allowed to lock the same object multiple times; the JVM maintains a count of the number of times the object has been locked. An unlocked object has a count of zero. When a thread acquires the lock for the first time, the count is incremented to one, and each time the same thread re-acquires the lock on that object, the count is incremented again. Each time the thread releases the lock, the count is decremented. When the count reaches zero, the lock is released and made available to other threads.
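
A minimal sketch of this reentrancy: push() below already holds the object's lock when it calls isFull(), which asks for the same lock again. The count goes 1, 2, 1, 0; a non-reentrant lock would deadlock here. The Stack class itself is made up for illustration.

public class Stack {
    private final int[] data = new int[10];
    private int top = 0;

    public synchronized boolean isFull() {     // re-acquires the lock: count 1 -> 2
        return top == data.length;
    }

    public synchronized void push(int value) { // acquires the lock: count 0 -> 1
        if (!isFull()) {
            data[top++] = value;
        }
    } // count back to 0 here; the lock is released
}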

In Java language terminology, the coordination of multiple threads that must access shared data is called synchronization. The language provides two built-in ways to synchronize access to data: with synchronized statements or synchronized methods.

The JVM does not use any special opcodes to invoke or return from synchronized methods. When the JVM resolves the symbolic reference to a method, it determines whether the method is synchronized. If it is, the JVM acquires a lock before invoking the method. For an instance method, the JVM acquires the lock associated with the object upon which the method is being invoked. For a class method, it acquires the lock associated with the class to which the method belongs. After a synchronized method completes, whether it completes by returning or by throwing an exception, the lock is released.
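
A sketch of both cases, using a made-up Counter class: for the instance method the JVM locks the receiver object, and for the static method it locks the Counter.class object.

public class Counter {
    private static int total;
    private int count;

    // Instance method: the JVM acquires the lock on "this" before
    // the body runs and releases it on return or on an exception.
    public synchronized void increment() {
        count++;
    }

    // Class method: the JVM acquires the lock on Counter.class instead.
    public static synchronized void incrementTotal() {
        total++;
    }
}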

For synchronized statements, by contrast, the JVM uses two opcodes, monitorenter and monitorexit, to accomplish this task.

When the Java virtual machine encounters monitorenter, it acquires the lock for the object referred to by objectref on the stack. If the thread already owns the lock for that object, the count is incremented. Each time the thread executes monitorexit on the same object, the count is decremented. When the count reaches zero, the monitor is released.
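
These opcodes are what a synchronized statement compiles to. In the sketch below, the comments mark roughly where monitorenter and monitorexit appear in the bytecode; you can inspect the real output with javap -c. The SharedBuffer class is made up for illustration.

public class SharedBuffer {
    private final Object lock = new Object();
    private int value;

    public void set(int v) {
        synchronized (lock) { // bytecode: push lock, then monitorenter
            value = v;
        }                     // bytecode: monitorexit, plus a hidden
                              // exception handler that also runs
                              // monitorexit so the lock is always freed
    }
}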

The Story of a Java Program from Start to End

When a Java application is launched, a runtime instance of the JVM is born. This runtime instance invokes the main() method of the application. The main() method serves as the starting point for the application's initial thread, which can in turn fire off other threads.

This thread has a program counter (PC) and a Java stack. When main() is invoked, a stack frame is pushed onto the stack and becomes the active stack frame. The program counter in the new frame points to the beginning of the method.

If there are further method invocations within main(), this process of pushing a new stack frame onto the stack is repeated for each call as it is made. When a method returns, the active frame is popped from the stack and the one below it becomes the active stack frame again. The PC is set to the instruction after the method call, and the calling method continues.
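
You can observe these frames directly: each element of a stack trace corresponds to a frame on the current thread's Java stack, with the most recently pushed frame first. A small sketch:

public class FrameDemo {
    static void a() { b(); }   // pushes a frame for b() above a()'s
    static void b() { c(); }   // pushes a frame for c() above b()'s

    static void c() {
        // Prints the frames top-down: getStackTrace, c, b, a, main,
        // mirroring the order in which they were pushed.
        for (StackTraceElement frame : Thread.currentThread().getStackTrace()) {
            System.out.println(frame);
        }
    }

    public static void main(String[] args) {
        a(); // when a() returns, its frame is popped and main()'s
             // frame becomes the active frame again
    }
}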

There is only one heap per JVM instance, and all objects the application creates are stored there. This heap is shared by all of the application's threads.

Inside the Java virtual machine, threads come in two flavors: daemon and non-daemon. A daemon thread is ordinarily a thread used by the virtual machine itself, such as a thread that performs garbage collection. The application, however, can mark any threads it creates as daemon threads. The initial thread of an application, the one that begins at main(), is a non-daemon thread.

A Java application continues to execute (the virtual machine instance continues to live) as long as any non-daemon threads are still running. When all non-daemon threads of a Java application terminate, the virtual machine instance will exit. If permitted by the security manager, the application can also cause its own demise by invoking the exit() method of class Runtime or System.

When main() returns, the application's initial thread terminates; if that was the application's only non-daemon thread, the virtual machine instance exits.
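
A small sketch of this rule: the daemon thread below loops forever, yet the virtual machine exits as soon as main(), the only non-daemon thread, returns.

public class DaemonDemo {
    public static void main(String[] args) {
        Thread ticker = new Thread(new Runnable() {
            public void run() {
                while (true) { // never finishes on its own
                    System.out.println("tick");
                    try {
                        Thread.sleep(500);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        });
        ticker.setDaemon(true); // must be called before start()
        ticker.start();
        // main() returns here; no non-daemon threads remain,
        // so the virtual machine instance exits.
    }
}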

February 4, 2011

Time Complexity of Algorithms

From my college days I always found myself struggling with algorithms, their complexities, asymptotic notations and so on, and I still do. But after reading a lot of articles on the subject, I can say that I have finally learnt something. So I am going to post what I have learnt here, for my future reference and for other geeks who are at the same place I was a few years back.

(Specifically for beginners.) You may find this post slow, but once you finish it, I am sure you will have learnt something new.



If you feel like reading this post slowly and carefully, it will describe what this notation _really_ means.

All functions have some kind of behavior as n grows towards infinity. For example, if f(n) = 1/n, then as n grows towards infinity, f(n) gets closer and closer to zero. Whereas if f(n) = n*n, then as n grows towards infinity, f(n) grows too.

Functions can grow at different speeds. If two functions are equal, then they obviously grow at the same speed. But wait, there's more! Two functions are deemed to grow at the same speed if they're separated by a constant multiple! For example, if f(n) = n*n and g(n) = 3*n*n, then f and g are deemed to grow at the same pace, because g(n) = 3*f(n), so they are only a constant multiple apart. That is, g(n) / f(n) = 3, as n grows arbitrarily large.

But consider the scenario where f(n) = n * n, and g(n) = n. Then what is the behavior of f(n) and g(n) as n grows arbitrarily large? Well, f(n) / g(n) = (n * n) / (n). Which simplifies to f(n) / g(n) = n. Which means that as n grows large, f(n) = n * g(n). What does this mean? f and g are in this case not separated by a constant multiple. The multiple between f and g grows larger and larger as time progresses, without stopping. We say that "f grows faster than g" when this happens. Or "g grows slower than f". Just try some sample values of n: Try 10, 100, and 1000. First, f(10) / g(10) = 100 / 10 = 10. f(100) / g(100) = 10000 / 100 = 100. f(1000) / g(1000) = 1000000 / 1000 = 1000. We can easily see, hopefully, that the ratio between f and g is not constant.

Now consider the scenario where f(n) = 2 * n * n + 1, and g(n) = n * n. In this case, our functions have the ratio f(n) / g(n) = (2 * n * n + 1) / (n * n). What is the value of this ratio as n grows towards infinity? The answer is 2. Let's simplify the expression:

(2 * n * n + 1) / (n * n) =
(2 * n * n) / (n * n) + 1 / (n * n) =
2 + 1 / (n * n).

So f(n) / g(n) = 2 + 1 / (n * n).

So as n grows large, the term 1 / (n * n) gets arbitrarily (or ridiculously) small. As n grows large, then, the value of (2 + 1 / (n * n)) gets closer and closer to 2.

We could plug in some values -- let's try 10, 100, and 1000 again.

f(10) / g(10) = (2 * 10 * 10 + 1) / (10 * 10) = 201 / 100 = 2.01
f(100) / g(100) = (2 * 100 * 100 + 1) / (100 * 100) = 20001 / 10000 = 2.0001
f(1000) / g(1000) = (2 * 1000 * 1000 + 1) / (1000 * 1000) = 2000001 / 1000000 = 2.0000001.

So the ratio between these two functions approaches a constant value as n grows large. Hence, f and g are said to grow at the same pace.

In comparing the growth rates of functions, we have these rules:

1. If f(n) / g(n) grows out of control, getting larger and larger as n gets larger, then f is said to grow faster than g.
2. If f(n) / g(n) settles towards some constant positive value, then f is said to grow at the same pace as g.
3. If f(n) / g(n) gets closer and closer to zero, then this means that its reciprocal, g(n) / f(n), is growing out of control, so g is said to grow faster than f. (Or f is said to grow slower than g.)
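
As a rough numerical check of these rules (a sketch, not a proof), we can evaluate f(n) / g(n) at increasingly large n and watch what the ratio does, just as we did by hand above:

public class GrowthCheck {
    static double f(double n) { return 2 * n * n + 1; }
    static double g(double n) { return n * n; }

    public static void main(String[] args) {
        // The ratio settles towards 2, so rule 2 applies: f grows at
        // the same pace as g. Swap in f(n) = n * n and g(n) = n to
        // watch the ratio grow without bound instead (rule 1).
        for (long n = 10; n <= 1000000; n *= 10) {
            System.out.println("n = " + n + "   f/g = " + f(n) / g(n));
        }
    }
}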


Now on to big O notation!

Big O notation actually refers to an entire _set_ of functions. The notation O(expression) represents the entire set of functions that grow slower than or at the same pace as expression. For example, O(n^2) represents the entire set of functions that grow slower than or at the same pace as n^2.

In other words, if g(n) = n, then since n grows slower than n^2, it follows that g lies in the set O(n^2). Likewise, if h(n) = 2 * n * n + 1, then since h grows at the same pace as n^2, it follows that h lies in the set O(n^2).

It's also true that h lies in the set O(n^3), because h grows slower than n^3. (Assuming the leading term is positive, quadratic functions always grow slower than cubic polynomials, which always grow slower than fourth-degree polynomials, which always grow slower than fifth-degree polynomials, and so on.)

Because O(expression) represents all the functions that grow slower than or at the same pace as expression, it is used to represent upper bounds.

The other notations refer to different sets of functions. For example, o(expression) represents all the functions that grow slower than expression (and not at the same speed.) f(n) = n^2 + n - 1 lies in the set O(n^2), but it doesn't lie in o(n^2). g(n) = n lies in both.

Theta(expression) represents all the functions that grow at the same rate as expression.

Omega(expression) represents all the functions that grow faster than or at the same pace as expression.

In computer programs, we are of course interested in how much time it takes programs to run, and these forms of notation are useful for representing that, so that's why we use them. When you say an algorithm runs in O(expression) time, you're just saying that the algorithm's runtime is no worse than some multiple of expression. When you use Theta, you're saying that the algorithm's runtime is some multiple of expression. And Omega is used to say that the amount of time an algorithm takes to run grows at a rate that is larger than or equal to some expression's rate of growth.
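
For example, here are two ways to check an array for duplicates, a problem chosen purely for illustration; the comments give the bounds:

import java.util.Arrays;

public class DuplicateCheck {
    // Nested loops make roughly n * (n - 1) / 2 comparisons, so the
    // runtime lies in O(n^2): no worse than some multiple of n^2.
    static boolean hasDuplicateQuadratic(int[] a) {
        for (int i = 0; i < a.length; i++) {
            for (int j = i + 1; j < a.length; j++) {
                if (a[i] == a[j]) return true;
            }
        }
        return false;
    }

    // Sort a copy (O(n log n)), then scan adjacent pairs (O(n));
    // the sort dominates, so the whole method runs in O(n log n).
    static boolean hasDuplicateSorted(int[] a) {
        int[] copy = a.clone();
        Arrays.sort(copy);
        for (int i = 1; i < copy.length; i++) {
            if (copy[i - 1] == copy[i]) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        int[] sample = {4, 2, 7, 2, 9};
        System.out.println(hasDuplicateQuadratic(sample)); // true
        System.out.println(hasDuplicateSorted(sample));    // true
    }
}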

Please feel free to post any comments or suggestions ...

Cheers!!!