May 18, 2011

MVC1 Vs MVC2

An MVC Model 1 architecture consists of a Web browser directly accessing Web-tier JSP pages. The JSP pages access Web-tier JavaBeans that represent the application model, and the next view to display (JSP page, servlet, HTML page, and so on) is determined either by hyperlinks selected in the source document or by request parameters. In a Model 1 application, control is decentralized, because the current page being displayed determines the next page to display. In addition, each JSP page or servlet processes its own inputs (parameters from GET or POST). In some Model 1 architectures, choosing the next page to display occurs in scriptlet code, but this usage is considered poor form.

An MVC Model 2 architecture introduces a controller servlet between the browser and the JSP pages or servlet content being delivered. The controller centralizes the logic for dispatching requests to the next view based on the request URL, input parameters, and application state. The controller also handles view selection, which decouples JSP pages and servlets from one another. Model 2 applications are easier to maintain and extend, because views do not refer to each other directly. The Model 2 controller servlet provides a single point of control for security and logging, and often encapsulates incoming data into a form usable by the back-end MVC model. For these reasons, the Model 2 architecture is recommended for most interactive applications.
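To make the idea concrete, here is a minimal, hypothetical sketch of a Model 2 controller servlet. The action parameter and the JSP names (login.jsp, home.jsp) are made up for illustration; a real application would have its own views and dispatch rules.

import java.io.IOException;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class FrontController extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Single point of control: inspect the request and decide the next view.
        String action = request.getParameter("action");
        String view;
        if ("login".equals(action)) {
            view = "/login.jsp";
        } else {
            view = "/home.jsp";
        }
        // Forward to the selected JSP; the views never reference each other directly.
        RequestDispatcher dispatcher = request.getRequestDispatcher(view);
        dispatcher.forward(request, response);
    }
}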

March 9, 2011

XML Parsers

Evolution of the XML Parsing/Manipulation using Java

The combination of Java and XML has been one of the most exciting developments in the field of software development in the 21st century, mainly for two reasons: Java is arguably the most widely used programming language, and XML is almost unarguably the best mechanism for describing and transferring data.

Since these are two different technologies, it initially required a developer to have a sound understanding of both before making the best use of the combination. Since then there has been a gradual shift towards Java, and we have seen a few interesting technologies evolve to make this happen. Some of them are:

SAX - Simple API for XML

It was the first to come on the scene, and interestingly it was developed on the XML-Dev mailing list. Evidently the people who developed it were XML gurus, and that is quite visible in the design of the API. You need a fair understanding of XML to use it, but at least Java developers got something to combine the two worlds - Java and XML - in a structured way. It instantly became a hit for obvious reasons.

Being the first rung on the evolution ladder, it obviously had only basic support for XML processing. It is an event-based technology that uses callbacks to deliver parts of the XML document sequentially. This effectively means you can't go back to a part that was read/processed previously - if you do have such a requirement, you need to store and manage the relevant data yourself.
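As a rough sketch of what those callbacks look like with the SAX2 API (the file name employees.xml here is just a placeholder):

import java.io.FileReader;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.DefaultHandler;
import org.xml.sax.helpers.XMLReaderFactory;

public class SaxDemo {
    public static void main(String[] args) throws Exception {
        XMLReader reader = XMLReaderFactory.createXMLReader();
        reader.setContentHandler(new DefaultHandler() {
            public void startElement(String uri, String localName,
                                     String qName, Attributes attributes) {
                // Called once per opening tag, in document order; once this
                // callback returns there is no way to revisit the element.
                System.out.println("start: " + qName);
            }
            public void characters(char[] ch, int start, int length) {
                System.out.println("text: " + new String(ch, start, length).trim());
            }
        });
        reader.parse(new InputSource(new FileReader("employees.xml")));
    }
}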

Since this API does not require loading the entire XML document into memory and offers only sequential processing of the document, it is quite fast. Another reason it is faster is that it does not allow modification of the underlying XML data.

Interested in going through a step-by-step implementation (with explanation of the complete source code) of a simple SAX Parser in Java using SAX2 APIs? Here it is for you - SAX Parser Implementation in Java >>

DOM - Document Object Model

The Java binding for DOM provides a tree-based representation of XML documents, allowing random access and modification of the underlying XML data. It is not very difficult to deduce that it would be slower compared to SAX.

The event-based callback methodology was replaced by an object-oriented in-memory representation of the XML document. Whether the entire document or only part of it is kept in memory at a given instant differs from one implementation to another, but Java developers are kept out of that hassle and get the entire tree readily available whenever they wish.
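A minimal sketch of the same kind of work with DOM, assuming a hypothetical employees.xml containing <salary> elements (obtaining the builder goes through a JAXP factory, covered in the next section). Note the random access and the in-place modification, neither of which SAX allows:

import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class DomDemo {
    public static void main(String[] args) throws Exception {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new File("employees.xml"));

        // Random access: jump straight to every <salary> element in the tree.
        NodeList salaries = doc.getElementsByTagName("salary");
        for (int i = 0; i < salaries.getLength(); i++) {
            Element salary = (Element) salaries.item(i);
            System.out.println("before: " + salary.getTextContent());
            // Modification: the in-memory tree can be changed, unlike with SAX.
            salary.setTextContent("0");
        }
    }
}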

JAXP - Java API for XML Parsing

The creators and designers of Java realized that Java developers should not need to be XML gurus to use XML in Java applications. The first step towards making this possible was JAXP, which made it easier to obtain either a DOM document builder or a SAX-compliant parser via a factory class. This reduced the dependence of Java developers on the numerous vendors supplying parsers of either type. Additionally, JAXP made sure that switching between parser implementations required minimal code changes.
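A small sketch of what that looks like - note that neither line names a concrete parser implementation, which is exactly the point:

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

public class JaxpDemo {
    public static void main(String[] args) throws Exception {
        // A SAX-style parser obtained through JAXP.
        SAXParser saxParser = SAXParserFactory.newInstance().newSAXParser();

        // A DOM document builder obtained through JAXP.
        DocumentBuilder domBuilder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();

        // The concrete classes behind the factories can be swapped (for example
        // via system properties) without changing this code.
        System.out.println(saxParser.getClass().getName());
        System.out.println(domBuilder.getClass().getName());
    }
}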


February 23, 2011

How Do Scrollable ResultSets Work?

JDBC result sets are created with three properties: type, concurrency and holdability.
The type can be one of
-TYPE_FORWARD_ONLY
-TYPE_SCROLL_INSENSITIVE
-TYPE_SCROLL_SENSITIVE.

The concurrency can be one of
-CONCUR_READ_ONLY
-CONCUR_UPDATABLE.

The holdability can be one of
-HOLD_CURSORS_OVER_COMMIT
-CLOSE_CURSORS_AT_COMMIT.

JDBC allows the full cross product of these. The SQL:2003 standard prohibits the combination {TYPE_SCROLL_INSENSITIVE, CONCUR_UPDATABLE}, but it is supported by some vendors, notably Oracle.
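For reference, here is a sketch of requesting all three properties at once using the three-argument createStatement() from JDBC 3.0. The connection URL, credentials and table are hypothetical, and whether a driver honors the exact combination is vendor-dependent:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HoldabilityDemo {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection("jdbc:somedb://localhost/test", "user", "pass");
        Statement stmt = con.createStatement(
                ResultSet.TYPE_SCROLL_INSENSITIVE,    // type
                ResultSet.CONCUR_READ_ONLY,           // concurrency
                ResultSet.HOLD_CURSORS_OVER_COMMIT);  // holdability
        ResultSet rs = stmt.executeQuery("SELECT FNAME, LNAME FROM EMPLOYEE");
        // rs stays open across con.commit() if the driver honors the requested holdability.
    }
}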

A movable cursor, which can move forward and backward over a result set, is one of the new features in the JDBC 2.0 API. There are also methods that let you move the cursor to a particular row and check the position of the cursor.
Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
                                     ResultSet.CONCUR_READ_ONLY);

ResultSet resultSet = stmt.executeQuery("SELECT FNAME, LNAME FROM EMPLOYEE");

while (resultSet.next()) {
    . . . // iterates forward through resultSet
}

. . .

resultSet.absolute(5); // moves cursor to the fifth row

. . .

resultSet.relative(-2); // moves cursor to the third row

. . .

resultSet.relative(4); // moves cursor to the seventh row

. . .

resultSet.previous(); // moves cursor to the sixth row

. . .

int rowNumber = resultSet.getRow(); // rowNumber should be 6

resultSet.afterLast(); // moves cursor to the position after the last row

while (resultSet.previous()) {
    . . . // iterates backward through resultSet
}

When you define the result set type, you must also define whether it is read-only or updatable, and the type and concurrency arguments must be passed in the order shown in the code above. Since both are plain int constants, the compiler cannot detect it if you swap them. If you specify the constant TYPE_FORWARD_ONLY, it creates a nonscrollable result set, in which the cursor moves forward only. The defaults for a ResultSet object are TYPE_FORWARD_ONLY and CONCUR_READ_ONLY.
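To round this off, here is a small sketch of the updatable case (CONCUR_UPDATABLE), assuming the same hypothetical EMPLOYEE table, an already open connection, and a driver that supports the combination:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UpdatableResultSetDemo {
    // con is assumed to be an open connection to a database with the EMPLOYEE table.
    static void renameThirdEmployee(Connection con) throws SQLException {
        Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
                                             ResultSet.CONCUR_UPDATABLE);
        ResultSet rs = stmt.executeQuery("SELECT FNAME, LNAME FROM EMPLOYEE");
        rs.absolute(3);                      // jump straight to the third row
        rs.updateString("LNAME", "Smith");   // change a column of the current row
        rs.updateRow();                      // write the change back to the database
        rs.close();
        stmt.close();
    }
}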

February 22, 2011

How Does the JVM Handle Thread Synchronization?

The JVM associates a lock with an object or a class to coordinate access by multiple threads. A lock is like a token or privilege that only one thread can "possess" at any one time. When a thread wants to lock a particular object or class, it asks the JVM. The JVM may give the thread the lock very soon, later, or never. When the thread no longer needs the lock, it returns it to the JVM. If another thread has requested the same lock, the JVM passes the lock to that thread. If a thread holds a lock, no other thread can access the locked data until the thread that owns the lock releases it.

The JVM uses locks in conjunction with monitors. A monitor is basically a guardian: it watches over a sequence of code, making sure only one thread at a time executes the code. Each monitor is associated with an object reference, and it is the monitor's responsibility to ensure that an arriving thread obtains a lock on the referenced object before executing the guarded code.

When the thread leaves the block, it releases the lock on the associated object. A single thread is allowed to lock the same object multiple times. The JVM maintains a count of the number of times the object has been locked. An unlocked object has a count of zero. When a thread acquires the lock for the first time, the count is incremented to one. Each time the thread acquires a lock on the same object, the count is incremented. Each time the thread releases the lock, the count is decremented. When the count reaches zero, the lock is released and made available to other threads.

In Java language terminology, the coordination of multiple threads that must access shared data is called synchronization. The language provides two built-in ways to synchronize access to data: with synchronized statements or synchronized methods.

The JVM does not use any special opcodes to invoke or return from synchronized methods. When the JVM resolves the symbolic reference to a method, it determines whether the method is synchronized. If it is, the JVM acquires a lock before invoking the method. For an instance method, the JVM acquires the lock associated with the object upon which the method is being invoked. For a class method, it acquires the lock associated with the class to which the method belongs. After a synchronized method completes, whether it completes by returning or by throwing an exception, the lock is released.
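A minimal sketch of the two cases (the class and field names here are made up):

public class Counter {
    private static int classCount;
    private int instanceCount;

    // Instance method: the JVM acquires the lock of the object the method
    // is invoked on ("this") before executing the body, and releases it after.
    public synchronized void incrementInstance() {
        instanceCount++;
    }

    // Class (static) method: the JVM acquires the lock associated with the
    // Counter class itself instead of with any particular instance.
    public static synchronized void incrementClass() {
        classCount++;
    }
}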

Two opcodes, monitorenter and monitorexit, are used by the JVM to implement synchronized statements.

When monitorenter is encountered by the Java virtual machine, it acquires the lock for the object referred to by objectref on the stack. If the thread already owns the lock for that object, a count is incremented. Each time monitorexit is executed for the thread on the object, the count is decremented. When the count reaches zero, the monitor is released.
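In source code this corresponds to a synchronized statement; here is a small sketch (the list is just an example of shared data):

import java.util.ArrayList;
import java.util.List;

public class SharedList {
    private final List<String> items = new ArrayList<String>();

    public void add(String item) {
        synchronized (items) {   // compiled to monitorenter on the "items" object
            items.add(item);
        }                        // compiled to monitorexit, even if an exception is thrown
    }
}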

Story of a Java Program from start to end

When the JVM executes a Java application, a runtime instance of the JVM is born. This runtime instance invokes the main() method of the Java application. The main() method of an application serves as the starting point for that application's initial thread. The initial thread can in turn fire off other threads.

This thread has a program counter (PC) and a Java stack. When the main() method is invoked, a stack frame is pushed onto the stack and becomes the active stack frame. The program counter in the new Java stack frame points to the beginning of the method.

If there are further method invocations within main(), this process of pushing a new stack frame onto the stack is repeated for each method call as it is invoked. When a method returns, the active frame is popped from the stack and the one below becomes the active stack frame. The PC is set to the instruction after the method call and the calling method continues.

There is only one heap per JVM instance, and all objects created are stored there. This heap is shared by all threads created in the application.

Inside the Java virtual machine, threads come in two flavors: daemon and non-daemon. A daemon thread is ordinarily a thread used by the virtual machine itself, such as a thread that performs garbage collection. The application, however, can mark any threads it creates as daemon threads. The initial thread of an application - the one that begins at main() - is a non-daemon thread.
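A small sketch of marking a thread as a daemon (the background work is hypothetical):

public class DaemonDemo {
    public static void main(String[] args) {   // main() runs in the initial, non-daemon thread
        Thread housekeeping = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    // background work, e.g. clearing caches
                }
            }
        });
        housekeeping.setDaemon(true);   // must be called before start()
        housekeeping.start();
        // When main() returns, only the daemon thread is left running,
        // so the virtual machine instance exits.
    }
}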

A Java application continues to execute (the virtual machine instance continues to live) as long as any non-daemon threads are still running. When all non-daemon threads of a Java application terminate, the virtual machine instance will exit. If permitted by the security manager, the application can also cause its own demise by invoking the exit() method of class Runtime or System.

When main() returns, the initial thread terminates; if it was the application's only non-daemon thread, that causes the virtual machine instance to exit.

February 4, 2011

Time Complexity of Algorithms

From my college days, I always found myself struggling with algorithms, their complexities, asymptotic notations and blah blah blah - and I still do. But now, after reading a lot of articles on this, I can say that I have learnt something. So whatever I have learnt till now I am going to post here, for my future reference and for other geeks who are at the same place where I was a few years back.

(Specifically for beginners) Maybe you will find this post very slow, but once you finish it I am sure you will have learnt something new today.



If you read this post slowly and carefully, it will describe what big O notation _really_ means.

All functions have some kind of behavior as n grows towards infinity. For example, if f(n) = 1/n, then as n grows towards infinity, f(n) gets closer and closer to zero. Whereas if f(n) = n*n, then as n grows towards infinity, f(n) grows too.

Functions can grow at different speeds. If two functions are equal, then they obviously grow at the same speed. But wait, there's more! Two functions are deemed to grow at the same speed if they're separated by a constant multiple! For example, if f(n) = n*n and g(n) = 3*n*n, then f and g are deemed to grow at the same pace, because g(n) = 3*f(n), so they are only a constant multiple apart. That is, g(n) / f(n) = 3, as n grows arbitrarily large.

But consider the scenario where f(n) = n * n, and g(n) = n. Then what is the behavior of f(n) and g(n) as n grows arbitrarily large? Well, f(n) / g(n) = (n * n) / (n). Which simplifies to f(n) / g(n) = n. Which means that as n grows large, f(n) = n * g(n). What does this mean? f and g are in this case not separated by a constant multiple. The multiple between f and g grows larger and larger as time progresses, without stopping. We say that "f grows faster than g" when this happens. Or "g grows slower than f". Just try some sample values of n: Try 10, 100, and 1000. First, f(10) / g(10) = 100 / 10 = 10. f(100) / g(100) = 10000 / 100 = 100. f(1000) / g(1000) = 1000000 / 1000 = 1000. We can easily see, hopefully, that the ratio between f and g is not constant.

Now consider the scenario where f(n) = 2 * n * n + 1, and g(n) = n * n. In this case, our functions have the ratio f(n) / g(n) = (2 * n * n + 1) / (n * n). What is the value of this ratio as n grows towards infinity? The answer is 2. Let's simplify the expression:

(2 * n * n + 1) / (n * n) =
(2 * n * n) / (n * n) + 1 / (n * n) =
2 + 1 / (n * n).

So f(n) / g(n) = 2 + 1 / (n * n).

So as n grows large, the term 1 / (n * n) gets arbitrarily (or ridiculously) small. As n grows large, then, the value of 2 + 1 / (n * n) gets closer and closer to 2.

We could plug in some values -- let's try 10, 100, and 1000 again.

f(10) / g(10) = (2 * 10 * 10 + 1) / (10 * 10) = 201 / 100 = 2.01
f(100) / g(100) = (2 * 100 * 100 + 1) / (100 * 100) = 20001 / 10000 = 2.0001
f(1000) / g(1000) = (2 * 1000 * 1000 + 1) / (1000 * 1000) = 2000001 / 1000000 = 2.0000001.

So the ratio between these two functions approaches a constant value as n grows large. Hence, f and g are said to grow at the same pace.
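If you want to check this yourself, a tiny program along these lines prints ratios that creep towards 2:

public class GrowthRatio {
    public static void main(String[] args) {
        long[] samples = {10, 100, 1000, 1000000};
        for (long n : samples) {
            double f = 2.0 * n * n + 1;   // f(n) = 2*n*n + 1
            double g = (double) n * n;    // g(n) = n*n
            System.out.println("n = " + n + "  f(n)/g(n) = " + (f / g));
        }
    }
}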

In comparing the growth rates of functions, we have these rules:

1. If f(n) / g(n) grows out of control, getting larger and larger as n gets larger, then f is said to grow faster than g.
2. If f(n) / g(n) settles towards some constant positive value, then f is said to grow at the same pace as g.
3. If f(n) / g(n) gets closer and closer to zero, then this means that its reciprocal, g(n) / f(n), is growing out of control, so g is said to grow faster than f. (Or f is said to grow slower than g.)


Now on to big O notation!

Big O notation actually refers to an entire _set_ of functions. The notation O(expression) represents the entire set of functions that grow slower than or at the same pace as expression. For example, O(n^2) represents the entire set of functions that grow slower than or at the same pace as n^2.

In other words, if g(n) = n, then since n grows slower than n^2, it follows that g lies in the set O(n^2). Likewise, if h(n) = 2 * n * n + 1, then since h grows at the same pace as n^2, it follows that h lies in the set O(n^2).

It's also true that h lies in the set O(n^3), because h grows slower than n^3. (Assuming the leading term is positive, quadratic functions always grow slower than cubic polynomials, which always grow slower than fourth-degree polynomials, which always grow slower than fifth-degree polynomials, and so on.)

Because O(expression) represents all the functions that grow _slower_ than or equal to expression, it is used to represent upper bounds.

The other notations refer to different sets of functions. For example, o(expression) represents all the functions that grow slower than expression (and not at the same speed.) f(n) = n^2 + n - 1 lies in the set O(n^2), but it doesn't lie in o(n^2). g(n) = n lies in both.

Theta(expression) represents all the functions that grow at the same rate as expression.

Omega(expression) represents all the functions that grow faster than or equal to expression.

In computer programs, we are of course interested in how much time it takes programs to run, and these forms of notation are useful for representing that, so that's why we use them. When you say an algorithm runs in O(expression) time, you're just saying that the algorithm's runtime is no worse than some multiple of expression. When you use Theta, you're saying that the algorithm's runtime is some multiple of expression. And Omega is used to say that the amount of time an algorithm takes to run grows at a rate that is larger than or equal to some expression's rate of growth.
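To tie the notation back to code, here are two made-up routines: the first touches each element once, so its running time lies in O(n); the second compares every pair of elements, so it lies in O(n^2):

public class ComplexityDemo {

    // O(n): a single pass over the array.
    static int max(int[] a) {
        int best = a[0];
        for (int x : a) {
            if (x > best) {
                best = x;
            }
        }
        return best;
    }

    // O(n^2): nested loops look at every pair of elements.
    static boolean hasDuplicate(int[] a) {
        for (int i = 0; i < a.length; i++) {
            for (int j = i + 1; j < a.length; j++) {
                if (a[i] == a[j]) {
                    return true;
                }
            }
        }
        return false;
    }
}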

Please feel free to post any comment or suggestions ...

Cheers!!!

November 21, 2010

Review on credit score

A free credit score online provides many great opportunities for both consumers and lenders. Lenders have the opportunity to receive more customers, as the consumers are able to review this information and attempt to repair any problems. A free credit score on line has the ability to offer valuable information to both consumers and lenders. As consumers have the opportunity to research and view this information, they will understand how they rate and what may need to be improved, as far as financial scores are concerned. Lenders can utilize this information to appeal to consumers by offering great things to people with high scores. Financial problems are very difficult to overcome and this information will assist anyone in understanding the financial situation that they may be facing. Trusting in God may be the best answer to any financial issues in life. "Commit thy way unto the LORD; trust also in him; and he shall bring it to pass." (Psalm 37:5) Discipline and trust in God are two very important keys to financial success.

Consumers can receive this information from many lenders, financial institutes, and other entities that deal with similar topics. As the free American credit score online is reviewed, the consumer has the ability to determine the best ways to remedy any problems or maintain a good status. The most important information that is available through a free credit score on line is the actual rating or score. Consumers can also view debts, loans, and other charges that have been assessed over a period of time. This information shows the consumer this financial information and allows them the opportunity to know how credit decisions in the past have affected credit ratings.

Lenders will have the opportunity to increase business as consumers review their information. As consumers learn of where they stand as far as their rating is concerned, many will want to find a way to increase the number. One of the most effective methods of improving credit is to receive a loan to repay debts, loans, or other charges that have been assessed. Lenders that offer the consumer a chance to review a free credit score online and then offer services to help improve that score can gain customers and increase profits. The ability for consumers to receive a free credit score on line will make lenders much more attractive to consumers in search of a way to improve scores or maintain a high rating.
 
