Wednesday, August 09, 2006

Customizing the web - Chickenfoot

User customization of today’s websites is mostly restricted to the theme and color of the site. Do these customizations solve the specific needs of the user? The customizations provided are just visual; they don’t address the content needs of different users.

Though we have the concepts of services and portals, the flexibility is still restricted by the service provider. The end user doesn’t have real control to customize a site to his needs, in either look or content.
Nowadays the web is just a display of HTML tags interpreted by browsers. Nobody can make sense of the data from the markup tags in the HTML: the markup gives visual cues, but says nothing about the meaning of the data. RDF is a proposed technology to describe your web documents and resources much better. If the markup gave the meaning and context of the data out there on the web, then search could be as accurate as possible. A search query as specific as the one below could even lead you to the exact result right away:
“Show the theaters running ‘MI-3’ where tickets are available for tonight”

But there is a long way to go to achieve this, as it will take time for most sites to implement it.


So what are the options available to users of today’s web? The final outcome any website shows to the user is just plain HTML. HTML can be represented as a DOM tree, i.e. a tree-like representation where the root element is the HTML tag, and the branches under it are the forms, buttons, check boxes, labels and other items present in the web page. This DOM tree can be manipulated with JavaScript: the whole tree is available to JavaScript as objects with attributes. So suppose you need to change the color of a web page; then just change the document’s background-color attribute to your favourite color. You can group a set of changes you wish to make into a JavaScript file and trigger the script after the website’s load event in the browser. You can also associate a script so that it is triggered only for a group of websites that match a particular rule, as in the sketch below.
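
A minimal sketch of what such a user script looks like, in the Greasemonkey style; the @include pattern and the color are placeholders I picked for illustration:

  // ==UserScript==
  // @name     Recolor pages
  // @include  http://example.com/*
  // ==/UserScript==

  // Greasemonkey runs this after a matching page loads; here we
  // simply repaint the page background through the DOM.
  document.body.style.backgroundColor = 'lightyellow';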


When the Gmail beta was released, it had no DELETE button to remove unwanted mails; it just had an archive button to archive old mails and remove them from your inbox. Though many people wanted a DELETE button, the Gmail team took more than a few months to deliver that feature. In the meantime, users wrote a JavaScript to add a DELETE button, triggered with Greasemonkey in the Firefox browser, so any Firefox user could install the script and get a delete button on the Gmail page. This saves you from waiting on simple feature requests, where the provider doesn’t give you the feature or it sits low on their priority list.
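
The real Gmail script was more involved, but the core trick is plain DOM injection. A hypothetical sketch, assuming a toolbar element id ('toolbar') that I made up for illustration:

  // Hypothetical sketch: plant a new button into an existing toolbar.
  var toolbar = document.getElementById('toolbar');  // assumed id
  if (toolbar) {
    var button = document.createElement('button');
    button.appendChild(document.createTextNode('Delete'));
    button.onclick = function () {
      // the real script invoked Gmail's own internal actions here
      alert('delete the selected mails');
    };
    toolbar.appendChild(button);
  }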


There are lots of scripts customized for various sites, written by users themselves. The user-scripting environment has to be supported by the browser: Firefox and Opera support Greasemonkey-style scripts. Also check UserScripts.org for a repository of available scripts. But if you are stuck with Internet Explorer, your wait will be longer. Though there is a project running to bring Greasemonkey scripts to Internet Explorer (http://www.gm4ie.com/), it is still in its infancy, I think.


These user-customization features even help web accessibility: if you have trouble reading small fonts, or a specific font, you can just enlarge the font size, or even change the font yourself, with no support from the website. In another scenario, say you are reading online articles where you are shown only part of the article at a time; you have to click ‘next’ manually to read each following page, or click on the print view to get the article on a single page, and even then you have to repeat this for every article you read. Instead, you could write a script which clicks the print-view link if one is present under the article, and configure it to be triggered when you are viewing any page belonging to a particular domain. You can automate simple click tasks and save your time, as in the sketch below.
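
A hypothetical sketch of that print-view script; the link text 'print view' is an assumption, and the per-domain trigger would live in the script's @include rule rather than in the code:

  // Hypothetical sketch: if the article page offers a print-view link,
  // follow it automatically so the whole article loads on one page.
  var links = document.getElementsByTagName('a');
  for (var i = 0; i < links.length; i++) {
    var text = (links[i].textContent || '').toLowerCase();
    if (text.indexOf('print view') !== -1) {
      window.location.href = links[i].href;
      break;
    }
  }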


But the drawback with this is that the user still needs knowledge of JavaScript and DOM attributes to write a script, or else he has to wait for some developer to write it and post it on the web. To solve this problem, a similar environment which supports user scripts for Firefox recently appeared: ‘Chickenfoot’. Writing a script for this environment doesn’t require knowledge of a programming language. If you have decent web access, watch the Google video in which the inventors give a demo of the usefulness of the tool.


How Chickenfoot differs from Greasemonkey

Chickenfoot operates on the rendered model of a web page.
The goal of Chickenfoot is to enable users to automate and customize web pages without viewing their HTML source. A key step toward this goal is enabling users to identify page elements with words they see in the page rather than the page author's name for the element. For example, the name of the HTML element for the search box on yahoo.com is p, so in Greasemonkey, the code to set the value of the search box would be:

  document.getElementById('p').value = 'my search query';

However, in Chickenfoot, the equivalent line of code would be:

  enter('search the web', 'my search query');

The difference between these two lines of code is significant. The former requires the user to inspect the HTML of the page to discover the name of the element, either by reading the HTML directly or through a more structured representation, such as Firefox's DOM Inspector. The latter simply requires the user to load the page and choose some keywords that appear to identify the box. This makes the code easier for the author to compose and for others to understand.



Chickenfoot’s future version (as shown in the video) even promises to act on plain entered words. For example, to search for ‘latest novels’ in Google, the sequence of commands would be like this:

  go google.com
  enter 'latest novels'
  click 'i am feeling lucky'

This script just looks like natural language, or hints; it could even be fitted to voice-browsing capabilities. The commands are interpreted by a heuristic matching algorithm. See the video for more details.


The heuristic search algorithm matches page components against the text provided: if it is a click command (click ‘google search’), then links and buttons are given higher ranks, and a button labelled ‘google search’ gets a higher match score than the text ‘google search’ merely appearing somewhere in the web page. My thought is that if there is still ambiguity in the matching text, the prime areas of the web page could be given higher precedence: elements in the visible area of the browser, elements appearing in the center of the browser window, or elements appearing on the left side of the window.
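
Here is a toy sketch of the kind of ranking described above; it is my own illustration under those assumptions, not Chickenfoot's actual algorithm:

  // Toy sketch of heuristic ranking (not Chickenfoot's real algorithm):
  // clickable elements score higher for a 'click' command, and an exact
  // label match beats keywords merely appearing inside an element.
  function rankCandidates(verb, keywords, elements) {
    var target = keywords.toLowerCase();
    var ranked = [];
    for (var i = 0; i < elements.length; i++) {
      var el = elements[i];
      var label = (el.textContent || el.value || '').toLowerCase();
      var score = 0;
      if (verb === 'click' &&
          (el.tagName === 'A' || el.tagName === 'BUTTON' ||
           el.type === 'submit' || el.type === 'button'))
        score += 2;                      // likely click target
      if (label === target)
        score += 3;                      // exact label match
      else if (label.indexOf(target) !== -1)
        score += 1;                      // keywords appear somewhere inside
      ranked.push({ element: el, score: score });
    }
    ranked.sort(function (a, b) { return b.score - a.score; });
    return ranked;
  }

Calling rankCandidates('click', 'google search', candidates) would then put the best guess at the front of the returned list.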


A macro recorder for a web browser would also be helpful in capturing actions for these scripts; with the captured template, you should be able to customise more easily. We could also use such a tool to capture monitoring scripts for websites (i.e. record steps like opening the page in the browser and checking all the important links, and raise an alert if a link doesn’t work; then schedule the script to run in the background at particular intervals, so whenever your website goes down you receive an alert). A rough sketch of the checking step follows.
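
A rough, hypothetical sketch of the link-checking step; the URLs are placeholders, scheduling is left out, and a real user-script version would need a cross-domain mechanism such as Greasemonkey's GM_xmlhttpRequest:

  // Hypothetical monitoring sketch: request each important link and
  // alert if any of them comes back with an error status.
  var importantLinks = ['http://example.com/', 'http://example.com/about'];
  for (var i = 0; i < importantLinks.length; i++) {
    (function (url) {
      var req = new XMLHttpRequest();
      req.open('HEAD', url, true);
      req.onreadystatechange = function () {
        if (req.readyState === 4 && req.status >= 400) {
          alert('Link check failed: ' + url + ' (status ' + req.status + ')');
        }
      };
      req.send(null);
    })(importantLinks[i]);
  }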


I hope these tools, as they improve and become successful, will change a lot for normal web users...


Ref: other tools for the semantic web -> http://simile.mit.edu/
