Wednesday, July 09, 2008

using Spring Web Flow 2

I recently got the opportunity to work with Spring Web Flow 2 on a project; here I share my personal views on it with you.


Let me first tell you all the nice things about the recent Spring stack (Spring 2.5 and above). Two things that improved a lot with the recent release are annotation support and dedicated configuration namespaces.


Annotations let you spend more of your time writing code than wiring components through XML. Of course Spring fails fast if you have messed up the configuration, but annotations make it a lot easier to avoid that in the first place. With @Repository, @Service and @Component it's easy to configure beans with the required responsibilities by default.


The namespace improvements help keep the XML configuration minimal and free of typos. Schema definitions validate your configuration as you type, and with a convention-over-configuration approach they have reduced the lines of XML needed to wire up objects. If you want to replace a component with your own custom implementation, sometimes it's easy using the autowire option; sometimes you have to configure it the old way (using the beans namespace and manually declaring most of the configuration), which feels painful once you are used to the new way.


With the Spring test framework it's fairly easy to write integration test cases. With a simple annotation Spring automatically loads the application context on test start-up. With @Timed you can even clock a test method and fail it if it exceeds a specified time. It also supports transactional tests with automatic rollback by default, so you can write tests that don't dirty up the database.


Let's come back to the original topic: Spring Web Flow. It works as advertised, which is to say it is meant for applications that have a natural business flow behind them, where the UI is a way to capture input for the flow and to display something back. It is not for applications with requirements different from that.


Everything is a flow: each flow has a start point and an end point, and can have any number of transitions in between. As part of a transition you can go to a sub-flow and come back to the original flow later, but these transitions can only happen at pre-defined places in the flow. It would be tough to implement a free-flow (random browsing) kind of application with it.


It serializes all the information you add to the flow context and restores it when you resume a flow after UI interaction, so every object (entities, repositories and so on) must implement Serializable. This restricts what you can share in the flow context.


Most transition decisions can be handled directly in the flow definition, which avoids creating Action classes that return just an outcome.


in JSF UI:


<h:commandButton action="save" />



in Flow definition:


<view-state ...>
    <transition on="save">
        <evaluate expression="validator.validate(model)" />
    </transition>
</view-state>



As you can see, you don't need an Action class that returns the outcome 'save'; you can specify the transition directly on the command button. Now you could ask: what if 'save' should be honoured only under a certain condition (say, only after validation passes on the entity)? For that you can have an expression evaluated on the transition. The transition executes only if the validator returns true; if it returns false, the flow comes back to the same view. The expression accepts any EL method expression, not just a validator, so you can run any action before the transition. In effect, the method executions of the Action class move into the flow definition. This looks elegant only if the number of calls made at a transition is small, or your application is well thought out and designed to share little state and keep the method calls down. (Basically this is a nice feature, but it could go awry for huge apps, and for apps with no real business flow behind them.)


Spring Web Flow also supports inheritance of flows, so you can inherit common transition rules from a parent flow. That is a nice feature for keeping the definitions as DRY as possible.


What makes a flow definition look ugly? Having a large number of mere actions called in transitions just to set a variable, or to retrieve a variable from flowScope and set it back into viewScope, and so on. One thing I had to do multiple times in flow definitions was to transform a List into a DataModel for the UI, so I could use listName.selectedRow to identify the item selected by the user.
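With the JSF integration that transformation can at least be declared inline in the flow definition; a sketch of the idea (the state id, service and variable names here are made up):

```xml
<view-state id="listOrders">
    <on-render>
        <!-- wrap the plain List in a DataModel so selectedRow works in the view -->
        <evaluate expression="orderService.getOrders()"
                  result="viewScope.orders" result-type="dataModel" />
    </on-render>
    <transition on="select" to="orderDetail">
        <set name="flowScope.order" value="orders.selectedRow" />
    </transition>
</view-state>
```

It works, but as noted above, every such plumbing line dilutes the business meaning of the definition.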


Adding these kinds of non-business method executions and transformations to the flow definition makes it bulky, and also alienates the flow from resembling the business definition. This defeats the very cause of having a flow definition.


Web Flow provides convenient default variables like resourceBundle, currentUser and messageContext in the flow context, which you can refer to directly in the flow definition, pass as arguments to bean action methods, or call actions on.


When a root flow ends, all its information is discarded. This is nice for cleaning unwanted data out of memory, but it also means you cannot share anything with the user after the flow has ended. Suppose I want to tell the user at the end of the flow that they have successfully placed an order: I cannot do that! You could ask why not keep the confirmation as part of the flow; well, that depends on when you commit the changes to the database, how you share a persistence context, or simply on the fact that it's just an end message and there should be no further interaction from the view to end the flow.


It's like redirecting to the home page after successfully placing the order and showing a banner "Thank you for shopping with us!", which is just not possible.


One last point: with a URL mapping definition in the configuration you can make a simple URL the starting point of a flow, but otherwise you generally can't use a RESTful GET URL to reach a page inside the flow.


What's your experience with Spring Web Flow?

Monday, June 09, 2008

Quick Groovy Scripting

Recently I had to port some data from a mainframe database to a SQL-based db for testing purposes. I started with some text report files generated from the mainframe. I am fond of using unix awk and grep for this kind of data munging, and have also used perl and ruby for scripting in the past. But given that I had to do this on Windows, and with my fading knowledge of perl, I thought of getting it done with groovy. Since eclipse also supports groovy it was easy to start.

I got something running which spits out SQL statements (using println) for every line of the input. Soon my eclipse console started eating the output because of the console buffer size in my settings! Though the huge monolithic script worked fine, I could not get the output in a single shot; I had to rerun it in parts to get the final collective output, which slowed down tweaking the final script. Given the limited re-factoring support for groovy in eclipse, I couldn't easily extract functions as I could in Java. But I was able to use a more powerful tool: define a closure on the spot and redirect the input of the println statements to a File without much change to the original script.
def file = new File("C:\\output.txt")

def println = { line -> file.append(line + "\n") }

println "insert into table_name (col1, col2, col3) values (${col1}, '${col2}', ${col3})"

Just adding those two lines saved me a lot of time; now I can switch between seeing the output on the command line and capturing it in a file very easily.

Another thing that helped me get things done quickly is the ability to refer to variables directly inside a string, as in "'${col2}'". This is especially useful where I have to qualify columns of string type with quotes, which would otherwise need endless escaping and + concatenations!

For the next script I did, I started writing small classes rather than a single file, which made last-minute changes easier. Another gotcha for groovy beginners is the use of '=='. Remember that in groovy '==' is translated to this.equals(that) before execution. I ran into endless self-recursive calls when I used '==' for reference comparison (inside an equals implementation) as we do in Java.

Once the script was complete, there were a lot of duplicate SQL statements in the output, which caused integrity-constraint errors in the database, so I had to find a way to remove the duplicates. In unix I would normally use `uniq` for this. Since I had to get it done quickly, I just looped through the output file, added each line to a Set, and dumped it back out to remove the duplicates.
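The same loop-into-a-Set trick sketched in Java (the statements are made up; a LinkedHashSet keeps the first-seen order, like `uniq` on pre-sorted input):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class DedupeLines {

    // Drop duplicate lines while preserving first-seen order:
    // LinkedHashSet rejects repeats but remembers insertion order.
    static List<String> dedupe(List<String> lines) {
        return new ArrayList<>(new LinkedHashSet<>(lines));
    }

    public static void main(String[] args) {
        List<String> sql = List.of(
                "insert into t (c) values (1);",
                "insert into t (c) values (2);",
                "insert into t (c) values (1);");
        System.out.println(dedupe(sql));
    }
}
```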

Having used Perl and Ruby in the past, I know their library support is far larger than groovy's. But given that I have used Java for years and had to work on Windows, Groovy was a life saver!

N.B. No data conversion is possible without effective use of regular expressions. I did use regular expressions to format the input files before running the groovy scripts against them, using Textpad's find/replace. The regular-expression support in the eclipse editor's find/replace tool still needs improvement before it could be really useful.

Thursday, March 20, 2008

# tricks in url

We may all know that the # symbol in html is used with anchors: it marks a particular anchor within a single html page.
For example in the seam doc reference (single html page) http://docs.jboss.com/seam/2.1.0.A1/reference/en/html_single/#d0e336

In the url, #d0e336 marks the section 'Understanding the code' within the whole html page. If you do a view source, you can see that section of the page is marked with an anchor for d0e336.

The URI combined with this # mark points to a particular section of the page; this lets people bookmark the page and return exactly there when they come back.

Let's get into some more interesting stuff with the # sign. Whenever you request a page with a # mark at the end, the browser sends the GET request only with the url up to the # mark. The part that comes after the # sign is never sent to the server.
If you request http://mypage.com/page#profile, the browser sends the request as 'http://mypage.com/page', stripping off the # sign and the text after it. Once the browser loads the page, it tries to locate the anchor matching '#profile' and positions the page there. If the browser cannot find the specified anchor, it just ignores it and shows the page as it is.
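You can see this split in code as well; java.net.URI, for instance, parses the fragment out separately from the parts that actually go to the server (a small sketch, with the same made-up url as above):

```java
import java.net.URI;

public class FragmentDemo {
    public static void main(String[] args) throws Exception {
        URI uri = new URI("http://mypage.com/page#profile");

        // Everything after '#' is the fragment; it stays on the client.
        System.out.println(uri.getFragment());

        // What the browser actually sends is the URI minus the fragment:
        URI sent = new URI(uri.getScheme(), uri.getAuthority(),
                           uri.getPath(), uri.getQuery(), null);
        System.out.println(sent);
    }
}
```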

Given that the text after the # mark concerns only the client, and that the browser simply ignores it when the anchor is missing from the markup, there are some potential uses for the # sign:

  • fancy url

  • could be potentially used to maintain client-side state!

  • generate unique bookmark-able url


fancy url:
http://mail.google.com/mail/#inbox
http://mail.google.com/mail/#sent

As you can see, the server url is just http://mail.google.com/mail/, but the #inbox the browser displays denotes that you are in the inbox view.

maintain client-state:

Say there are 2 tabs on a page, and the user wants to bookmark the page along with the tab he is currently working with, so that whenever he loads the page from the saved bookmark, it comes up with that same tab of the group highlighted.

You can add an identifier after the # sign in the url, and use client-side javascript to parse the location and pick out the identifier to determine which tab should be highlighted.

Some javascript libraries use this trick to generate part of the page in the browser. The iUI library, which generates iphone-style webpages, uses exactly this: it maintains client state in the identifier and uses javascript to re-render part of the page as an iphone-style mock-up.

http://m.linkedin.com/#_home

unique bookmark-able url:

Say you use greasemonkey to customize a webpage, and you have set up that custom script to run for a particular url/site. Now you want to test a new script with the same url: you can add an identifier after the pound sign to create a unique url, and map the new script to be triggered for it. The same site is then handled by different greasemonkey scripts based on the url you load.

reference:
http://gbiv.com/protocols/uri/rfc/rfc3986.html#fragment

Thursday, February 14, 2008

Understanding JBoss Seam

We are currently working on a project using JBoss Seam extensively. The key and most interesting feature of JBoss Seam is conversations. Conversations combined with Seam's bijection feature make state management in web applications slicker and cleaner.


 


At first look you may think that seam just provides one more scope (like REQUEST, SESSION, etc.) for state management, but it provides a lot more. If you really want to see how conversations can fix some common issues with web applications (like back-buttoning), I would highly recommend this blog of Jacob Orshalick.


 


Jacob is also co-authoring the second edition of JBoss Seam: Simplicity and Power Beyond Java EE with Michael Yuan. The second edition of their book will be released this year.


 


A preview of some chapters of this upcoming book was recently released. Even if you are already using Seam in your projects, you will definitely find this book insightful.


 


So better understand your conversations, before you are timed-out!


 


 

Tuesday, January 29, 2008

segregating environment variables

Whenever you work with different versions of a product at the same time, you run into overriding environment variables.

Let's take the Java SDK as an example: you may have JAVA_HOME pointed to JDK 1.5 for the project you work on for production deployment, while at the same time needing JAVA_HOME pointed to JDK 6 for your fun projects.

On windows you can have a batch file that sets the correct environment variables, and execute it each time before you run any scripts that refer to the JAVA_HOME variable. This is nice.
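Such a setenv.bat could look like this (a sketch; the JDK install path is made up):

```bat
rem setenv.bat - point this console at JDK 1.5
set JAVA_HOME=C:\jdk1.5.0
set PATH=%JAVA_HOME%\bin;%PATH%
```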

To add to it, you can put the .bat file on the system path (e.g. c:\windows\system32) so you can call it from any directory.

However, when you double-click this bat file, it executes the batch commands and exits, and the variables it sets live exactly that long. This isn't of much use, since you can't start working in an environment with the desired settings at the click of a mouse.

To accomplish that trick, just right-click to create a shortcut on the desktop and put the following in the target location of the shortcut:
c:/windows/system32/cmd.exe /k c:/setenv.bat

Here the '/k' argument tells cmd.exe to execute the script and then wait for further commands.

So by this you can click the shortcut to get a console opened and ready to accept commands.

You could also change buffer settings and screen settings for the console, so you get desired command console as you want.

N.B. Even though the windows shell is not that useful by itself, you can make it so by adding something like UNIX Utilities.

But still, UNIX rocks!

Sunday, January 20, 2008

listening on 0.0.0.0

After you start your Tomcat/Apache HTTPD server:

Just go to the command line and use the netstat -an command to check the network statistics. You might have noticed
foobar:~ nrs$ netstat -an | grep LISTEN
tcp46 0 0 *.8080 *.* LISTEN
tcp4 0 0 192.168.2.101.3873 *.* LISTEN

that the listening port is listed as either *.8080 or 0.0.0.0:8080.

Basically this means that your server is listening for connections on all the network interfaces of your machine: if you have Wi-Fi, ethernet, or a couple of VirtualMachine ethernet ports configured, you can reach the server through any of those interfaces (IP addresses).

You can reach the server using 127.0.0.1 (localhost) or the IP address of any of the network interfaces you have. So when you write socket-programming code, use 0.0.0.0 as the server host address if you want your server to be reachable through all interfaces.
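A quick sketch of the difference in plain Java sockets (port 0 asks the OS for any free port, so it runs anywhere):

```java
import java.net.InetAddress;
import java.net.ServerSocket;

public class BindDemo {
    public static void main(String[] args) throws Exception {
        // Bound to the wildcard address: reachable via every interface.
        try (ServerSocket all = new ServerSocket(0, 50,
                InetAddress.getByName("0.0.0.0"))) {
            System.out.println("any-local: " + all.getInetAddress().isAnyLocalAddress());
        }

        // Bound to loopback only: reachable from this machine alone.
        try (ServerSocket local = new ServerSocket(0, 50,
                InetAddress.getByName("127.0.0.1"))) {
            System.out.println("loopback: " + local.getInetAddress().isLoopbackAddress());
        }
    }
}
```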

You can also use the same feature to gain precise control over how your application can be reached. When you start a server in production or other critical environments, it is better for it to listen only on the single IP address of the interface through which the service is expected to be reached.

In the JBoss application server you can control this either in the configuration file or through the system property jboss.bind.address. This property can also take multiple values separated by commas (e.g. jboss.bind.address=127.0.0.1,192.168.2.101). This helps control precisely through which interfaces your service is accessible.
C:\jboss-home\bin>.\run.bat -Djboss.bind.address=0.0.0.0 -c default

Sunday, January 13, 2008

Continuous testing w/Ant

UPDATED - It Works

As we write code, continuous feedback helps us know how we are progressing, and what code we are breaking as we add functionality. A way to run unit tests as we code and save java files would be great!

I knew there was an Eclipse plugin, Continuous Testing from MIT, so I immediately downloaded it and tried to integrate it with my IDE. Unluckily the plugin didn't work with the version of eclipse I use, and there seems to be no activity in its development. So I thought about simple ways to get this going myself.

After checking other options with Eclipse, I learned that you can create a task in the ant build.xml and assign it to execute as part of the build process (clean/rebuild) from within the IDE. (My bad: I didn't realize at first that you can't trigger the ant task on every save; you can trigger it only by a manual build. I went ahead and tried it out before checking this, so I will explain below how far I got.) It works!


Ok, so I made a simple ant target with a junit task in it that executes all the unit tests in the project. As this would be time consuming, you would never use it, so it is not worth much. One could instead create test suites, each representing a single unit to execute on every save. That would be a good approach, as test suites can represent a behavior/specification and so give a larger perspective on what failed, with each package having a test suite that could be triggered. But I wanted something that works with my current setup.

So I thought: how about an ant task that figures out by itself which test cases are affected by the file I am currently working on? All I need is to find the right test cases to execute and pass them on to the junit task. For this we don't even need to write an ant task; we just need a custom file selector component that can be used inside any ant fileset.

I first checked for an existing tool/task that could list all source files that use a given class, but my impatient quick search found nothing. So I looked into reflection libraries with which I could trace whether the current file depends on the given file. I tried Apache BCEL, as some of the core ant tasks use the same library for bytecode engineering.

Here is the code for the custom selector.
import java.io.File;

import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.types.selectors.BaseExtendSelector;

import com.sun.org.apache.bcel.internal.Repository;
import com.sun.org.apache.bcel.internal.classfile.Constant;
import com.sun.org.apache.bcel.internal.classfile.JavaClass;

public class DependentClassSelector extends BaseExtendSelector {

    private String changedClassName;

    public void setChangedClassName(String changedClassName) {
        this.changedClassName = changedClassName.replace('.', '/');
    }

    @Override
    public boolean isSelected(File basedir, String filename, File file)
            throws BuildException {
        boolean testable = filename.endsWith("Test.java");

        if (testable && changedClassName != null) {
            testable = false;
            // check if this unit test depends on the changed class
            filename = filename.replace(".java", "");

            JavaClass javaFile = Repository.lookupClass(filename);
            Constant[] constants = javaFile.getConstantPool().getConstantPool();

            for (Constant constant : constants) {
                // the constant pool has an entry for every class the test references
                if (constant != null && constant.toString().contains("L" + changedClassName + ";")) {
                    testable = true;
                    break;
                }
            }
        }
        return testable;
    }
}

After you have coded the custom selector's isSelected method, just drop it on the class path and add the typedef lines to the build.xml:
<property environment="env"/>

<typedef name="selected-tests"
         classname="org.countme.ant.tasks.DependentClassSelector" />

<target name="continuous_testing">
    <junit printsummary="yes" haltonfailure="yes">
        <classpath>
            <pathelement path="${classpath}"/>
            <fileset dir=".">
                <include name="**/*.jar"/>
            </fileset>
            <pathelement location="bin"/>
            <dirset dir="bin">
                <include name="**/*.class"/>
            </dirset>
        </classpath>

        <batchtest fork="yes" todir="reports/">
            <fileset dir="src">
                <selected-tests changedClassName="${env.java_type_name}"/>
            </fileset>
        </batchtest>
    </junit>
</target>

The ant script gets the file currently open in eclipse through the environment variable java_type_name. For this to work, you must launch the ant script from within eclipse. The custom selector uses this information to decide whether a test should be passed to the junit task. This works amazingly well, but still needs improvements: the code handles only one level of dependency, not the whole dependency chain. The script needs a lot of work, but it looked like a good way to start.

Test result: when I ran my script after coding and saving changes in just a single file, BusinessDomain.java, it picked up the related test cases automatically.

Eclipse continuous testing



Since I can't get this ant script triggered on every file save within eclipse, I will lose the value of the script if I forget to run it on every save. Unfortunately the eclipse ant builder can't be triggered on automatic builds (i.e. when eclipse compiles your files). If you know a way to fix this, let me know; I will be happy to use it.

Otherwise, instead of starting from the files dependent on the current file, we could run all tests dependent on the files that changed since the last run. That way I still gain something over running the whole test suite.

Please comment on any continuous testing approach that worked for you!

UPDATE



After spending some more time today, I got this working. You should be able to set the Ant Builder to run any task on Auto-Build (i.e. on every save). If you get a NullPointerException, you are missing some library in the class path. Also configure the builder to export ${java_type_name} to the environment, by adding it in the Environment tab of the Ant Builder configuration. I will probably post screenshots in my next post.

The other feature I thought about is increasing the chaining depth of the dependency check, but it would cost too much to execute test cases that are more than one block away from your modified class file.

Of the choices between:

  • don't execute test case as you modify a class

  • execute all of them

  • execute all the dependent test cases


the most pragmatic choice for me is this: on every change to a class file, execute the test cases that are one block away from it.

Monday, January 07, 2008

Testing Naturally, and Agile

Behaviour-Driven Development is what you are already doing, if you are doing Test-Driven Development properly...

Test-Driven Development is commonly used everywhere; however, the term 'test' makes people think of it as something extra they do on top of their coding activity. The same makes it hard to convince managers of the value of unit testing. Also, the term unit testing means different things to different people.

How BDD helps: in BDD you capture every test you write under a specification of the requirement. User stories can be directly translated into test specifications. The greatest value here is this grouping of unit-test cases under a behavior that matches a requirement from a user story, making test coverage more traceable from the business perspective. That business perspective helps the management team understand the value of testing.

The BDD concept has been talked about for years, but people have misunderstood it as something where they don't have to do testing. It is actually the same test-driven development, which again makes one think that BDD has nothing new.

BDD could be considered a DSL for testing. It uses concepts such as Story and Behavior (terms more common in agile practice and object-oriented design*) to directly describe the test cases that we write. These test cases are described using common terms, called a Specification.

Dan North first described this term, and he took the effort to come up with a specification framework in java, called JBehave. If you check the API, you will see it is much like JUnit, just with different test-method naming conventions: in JUnit every test method starts with the 'test' keyword; here it starts with 'should'. As I said, the API is not much different; it's the concepts and the domain terms you use to describe the test cases that make them better. Note that from JUnit 4 onwards you don't need method names starting with 'test', and later versions of JUnit also introduce higher-level domain terms such as Theory.
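To see how far naming alone gets you, here is a plain-Java sketch in that 'should' style (the InstantMessenger class is a made-up stand-in, mirroring the easyb story shown later in this post):

```java
public class InstantMessengerSpec {

    // A tiny made-up class under test.
    static class InstantMessenger {
        private String statusMessage = "Offline";
        void login() { statusMessage = "Available"; }
        String getStatusMessage() { return statusMessage; }
    }

    // In JUnit 4 this would carry @Test; the 'should' prefix replaces
    // the old 'test' naming convention and reads as a specification.
    public void shouldShowAvailableWhenSomebodyLogsIn() {
        InstantMessenger im = new InstantMessenger();
        im.login();
        if (!"Available".equals(im.getStatusMessage())) {
            throw new AssertionError("expected status Available");
        }
    }

    public static void main(String[] args) {
        new InstantMessengerSpec().shouldShowAvailableWhenSomebodyLogsIn();
        System.out.println("1 specification passed");
    }
}
```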

rSpec is a ruby testing framework which uses behavioral driven development concepts.

Lately many such common abstractions have been emerging in development testing and practices, and eventually one of these abstractions will become the common way to organize your test cases.

Ok, coming back to the point: what makes me write about this now? What I like most about ruby, and dynamic languages in general, is the human-readable form they give to code. But Java still gives the more robust JVM platform, tested and compatible with the middleware required to support production-level high-volume transactions. The dynamic languages are what is most available for writing tools and scripts that speed up development. I have used perl to write monitoring scripts that automate a bot logging into the production system as a test user, executing standard test cases, and checking the responses to confirm the functionality of the code in production. We have even used them to load test applications on a small scale. With the rich libraries perl/ruby have, you can automate any task in simple steps; using java for these is overkill.

Unit testing can also be one such activity, handled well by these languages provided there is good interoperability between them. That said, Groovy and JRuby are the natural choices for testing our Java code.

And very recently, actually just weeks back, some projects have started with this perspective.

easyb, a project from Andrew Glover, would be a good choice given that it's built on Groovy. Go check it out yourself. I have written below some of my first-hand experiences with easyb.

JtestR, released by Ola Bini, a ThoughtWorks employee in the UK. This framework is about coding in Java and testing in Ruby => JtestR. I haven't got my hands dirty with it yet; I will try it soon and write about it.

easyb: easyb has very good documentation, so I am not going to repeat here how to use it. I'll just show you a sample test case Story.
package com.bdd.test;

given "new IM instance", {
im = new com.bdd.InstantMessenger()
}

when "somebody logs in", {
im.login()
}

then "status message should show Available", {
ensureThat(im.getStatusMessage().equals("Available"),eq(true))
}

Test Result:
.
Time: 0.568s


Total: 1. Success!

Instead of this simple output, we could strip the closure definitions and generate something like:
Login Story: Success

given new IM instance
when somebody logs in

then status message should show Available


Since we use groovy closures to wrap a given definition, we can pass it around the test story, thereby reusing blocks of code.

As you can see, this way the test report becomes more natural, and next time a test fails, even your business team can see what's failing without digging into the method for the details. Of course you can cheat, but we won't, with all the intentions of a good developer!

LINKS:
Google Tech Talk from rSpec developer:
http://video.google.com/videoplay?docid=8135690990081075324
*Object Design: Roles, Responsibilities and Collaborations:
http://www.amazon.com/Object-Design-Responsibilities-Collaborations-Addison-Wesley/dp/0201379430

Saturday, January 05, 2008

Mac OS X automation in ruby

Mac OS X ships with a tool called Automator, with which you can create scripts to perform a task; it's kind of a macro recording tool. Those who are used to mac os x can also write scripts to perform any task they want. These scripts are called applescript. Applescript has a friendly syntax, so you don't have to be a programmer to start with it. More information @ Apple Scripting

But why learn one more syntax just for the sake of using it? Yes, you can control Mac applications from within ruby: you need to install RubyOSA, the Ruby bridge to the apple event manager. This lets us call any application on the mac and control it. For example, you can open iTunes, get the currently playing song, and set it as your status message in the iChat application.

RubyOSA also comes with a documentation tool that creates ruby doc with the list of available methods of a mac application, so you can right away learn the interfaces and methods to control any application:
rdoc-osa --name <Application-Name>

ex:
rdoc-osa --name Adium

A Sample Script:

require 'rubygems'
require 'rbosa'
require 'httpclient'
require 'hpricot'


client = HTTPClient.new
uri = 'http://www.google.com/search?q=ind+vs+aus'
resp = client.get uri
doc = Hpricot.parse(resp.body.content)
msg = doc.search("//div[@id='3po']//td[2]/[1]/div")
app = OSA.app('Adium')

app.adium_controller.my_status_message = msg

The above ruby code retrieves the latest cricket score of the india vs australia match and sets it as the status message in Adium (ver 1.1.4), a chat application. In case you are trying a different chat application or a different version of Adium, you can always generate the ruby docs (rdoc-osa command) and dig further.

Some tips in getting here:

1) You need to install XCode from the optional installs of the Mac OS X dvd; this gets rid of the ruby gems update error in Leopard.

2) Get the required ruby libraries using ruby gems.
