
published by noreply@blogger.com (Selvakumar Arumugam) on 2017-10-04 16:57:00 in the "Java" category
I recently ran into a PKIX path validation error while working on an application, even though the site had a valid certificate. I was able to solve the issue by debugging with OpenSSL.

Typically, the PKIX path validation error arises due to SSL certificate expiry, but I ran into the same error even when the system was configured with a valid certificate. There are two web applications in our scenario, AppX and AppY. AppX uses AppY's authentication mechanism to let users log in with the same user account. AppX sends a POST request with the necessary arguments, using HttpClient, to the SSL-enabled AppY, and logs the user in based on the response.
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.DefaultHttpClient;

HttpClient httpclient = new DefaultHttpClient();
// ...
HttpPost httppost = new HttpPost("https://app2domain.com/sessions");
HttpResponse resp;

try {
    resp = httpclient.execute(httppost);
}
catch (Exception e) {
    throw new Exception("Exception: ", e);
}

Error

AppX was moved to a new server, and it started throwing a "PKIX path validation failed" error while sending requests to AppY.
Exception: javax.net.ssl.SSLHandshakeException: 
sun.security.validator.ValidatorException: PKIX path validation failed: 
java.security.cert.CertPathValidatorException: timestamp check failed
PKIX (Public-Key Infrastructure X.509) is a standard for key-based encryption. PKIX path errors come up when a connection to an SSL-enabled application cannot be established.

Debug

It is good to identify the root cause of the problem, since there are a few possible reasons for the same error. Let's start debugging...

  • Check the certificate status and expiration date in your browser. The browser reports that the certificate for AppY's domain name is valid and expires at a future date. So now on to detailed debugging using OpenSSL.
  • Validate the certificate with OpenSSL. The openssl tool is a handy utility for validating the SSL certificate of any domain. Here it reports an error with return code 20 (unable to get local issuer certificate) when checking the status of the certificate, which contradicts the browser's report on the same certificate.
$ echo -n | openssl s_client -CApath /etc/ssl/certs/ -connect app2domain.com:443 
...
Start Time: 1482921042
Timeout   : 300 (sec)
Verify return code: 20 (unable to get local issuer certificate)


$ echo -n | openssl s_client -CApath /etc/ssl/certs/ -connect app2domain.com:443 </dev/null | openssl x509 -noout -dates

verify error:num=20:unable to get local issuer certificate
DONE
notBefore=May  4 00:00:00 2013 GMT
notAfter=May 14 23:59:59 2015 GMT

Root Cause

The openssl tool reported certificate details for a different domain, one unused and expired. Since that certificate is configured on the same server, it caused the error in our case: the same thing happened to AppX when sending requests to AppY, which may have tried to establish the connection through the expired certificate. So, the lesson here is that it is necessary to clean up expired certificates when connections are established through HttpClient utilities. Also, a specific domain name can be validated by passing the -servername option (for SNI) to openssl, which in this case reports that AppY's domain (app2domain.com) has a valid certificate.
$ echo -n | openssl s_client -CApath /etc/ssl/certs/ -connect app2domain.com:443 -servername app2domain.com
...
Start Time: 1482920942
Timeout   : 300 (sec)
Verify return code: 0 (ok)

$ echo -n | openssl s_client -CApath /etc/ssl/certs/ -connect app2domain.com:443 -servername app2domain.com </dev/null | openssl x509 -noout -dates
...
verify return:0
DONE
notBefore=Sep 26 11:52:51 2015 GMT
notAfter=Apr  1 12:35:52 2018 GMT

Conclusion

In most cases, the PKIX path validation error comes up when the SSL certificate for the domain name has expired, but there can be other reasons, such as the wrong certificate being picked up, as happened here. It is always helpful to debug with the openssl tool to identify the root cause. This specific issue was fixed by removing the unused expired certificate.


Comments

published by noreply@blogger.com (Peter Hankiewicz) on 2017-07-31 23:00:00 in the "Java" category

Introduction

Recently, we have taken over a big Java project that ran on the old JBoss 4 stack. Since we know how dangerous outdated software is for a business, we and our client agreed that the most important task was to upgrade the server stack to the latest WildFly version.

It's definitely not an easy job, but it's an investment worth making so you can sleep well and not worry about software problems.

This time it was even more work because the application was complicated and undocumented, which is why I wanted to share some tips and resolutions for issues I encountered.

Server configuration

You can set the server up using multiple configuration files in the standalone/configuration directory.

I recommend using the standalone-full.xml file for most setups; unlike standalone.xml, it contains the default full stack.
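
For example, the server can be started with that configuration like this:

$ bin/standalone.sh --server-config=standalone-full.xml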

You can also set up application-specific configuration using various configuration XML files (https://docs.jboss.org/author/display/WFLY10/Deployment+Descriptors+used+In+WildFly). Remember to keep the application-specific configuration on the classpath.

Quartz as a message queue

The Quartz library was used as a native message queue in previous JBoss versions. If you struggle trying to use its resource adapter with WildFly, just skip it. It's definitely too much work, even if it's possible.

In the latest WildFly version (10.1 as of today) the default message queue library is ActiveMQ. It has almost the same API as the old Quartz, so it's easy to use.

Quartz as a job scheduler

We had multiple cron-like jobs to migrate as well. All the jobs used Quartz to schedule runs.

The best solution here is to update Quartz to the latest version (yes!) and use the new API (http://www.quartz-scheduler.org/documentation/quartz-2.2.x/tutorials/tutorial-lesson-06.html) to create CronTriggers for the jobs.

import static org.quartz.TriggerBuilder.newTrigger;
import static org.quartz.CronScheduleBuilder.cronSchedule;
import org.quartz.Trigger;

// myJobKey is the JobKey of a JobDetail already registered with the scheduler
Trigger trigger = newTrigger()
    .withIdentity("trigger3", "group1")
    .withSchedule(cronSchedule("0 42 10 * * ?"))
    .forJob(myJobKey)
    .build();

You can use the same cron syntax (e.g. 0 42 10 * * ?) as in the 12-year-old Quartz version. Yes!

JAR dependencies

In WildFly you can set up an internal module for each JAR dependency. It can be pretty time-consuming to create declarations for more than 100 libraries (exactly 104 in our case). We decided to use Maven to handle our application's dependencies and to skip declaring modules in WildFly. Why? In our opinion it's better to encapsulate everything in an EAR file and keep the WildFly configuration minimal, as we won't use this server for any other application in the future.

Just keep your dependencies on the classpath and you will be fine.

JBoss CLI

I really prefer the bin/jboss-cli.sh interface to the web interface for handling deployments. It's a powerful tool and much faster to work with than clicking through the UI.
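
For instance, replacing a deployment takes just a couple of commands (a sketch; the path and EAR name are placeholders):

$ bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] deploy /path/to/myapp.ear --force
[standalone@localhost:9990 /] undeploy myapp.ear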

JNDI path

If you can't access your JNDI definition, try using the global namespace. Up to Java EE 6, developers defined their own JNDI names, and these names had a global scope. This doesn't work anymore. To access a previously globally scoped name, use this pattern: java:global/OLD_JNDI_NAME.

The java:global namespace was introduced in Java EE 6.
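
For example, a legacy lookup could be rewritten like this (a sketch; MyService and OLD_JNDI_NAME are placeholders):

import javax.naming.Context;
import javax.naming.InitialContext;

Context ctx = new InitialContext();
// previously: ctx.lookup("OLD_JNDI_NAME")
MyService service = (MyService) ctx.lookup("java:global/OLD_JNDI_NAME");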

Reverse proxy

To configure a WildFly application with a reverse proxy you need to, of course, set up a virtual host with a reverse proxy declaration.

In addition, you must add an attribute to the server's http-listener in the standalone-full.xml file: the attribute is proxy-address-forwarding and it must be set to true.

Here is an example declaration (a minimal sketch using WildFly's default undertow listener and socket-binding names):

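<subsystem xmlns="urn:jboss:domain:undertow:3.1">
    <server name="default-server">
        <!-- the subsystem version and listener name may differ in your installation -->
        <http-listener name="default" socket-binding="http" proxy-address-forwarding="true"/>
        ...
    </server>
</subsystem>
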
Summary

If you're considering an upgrade to WildFly, I can recommend it: it's much faster than JBoss 4/5/6, scalable, and fully prepared for modern applications.


Comments

published by noreply@blogger.com (Szymon Lipiński) on 2016-10-18 12:00:00 in the "Java" category

While working with one of our clients, I was tasked with integrating a Java project with a Perl project. The Perl project is a web application which has a specific URL for the Java application to use. To ensure that the URL is called only from the Java application, I wanted to send a special hash value calculated using the request parameters, a timestamp, and a secret value.

The Perl code calculating the hash value looks like this:

use strict;
use warnings;

use LWP::UserAgent;
use Digest::HMAC_SHA1;
use Data::Dumper;

my $uri = 'param1/param2/params3';

my $ua = LWP::UserAgent->new;
my $hmac = Digest::HMAC_SHA1->new('secret_something');

my $ts = time;

$hmac->add($_) for (split (m{/}, $uri));
$hmac->add($ts);

my $calculated_hash = $hmac->hexdigest;

My first try for calculating the same hash in the Java code looked something like this (without class/package overhead):

import java.io.UnsupportedEncodingException;
import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import org.apache.commons.codec.binary.Hex;

public String calculateHash(String[] values) throws NoSuchAlgorithmException, UnsupportedEncodingException, InvalidKeyException {

    java.util.Date date = new java.util.Date();
    Integer timestamp = (int) (date.getTime() / 1000);

    Mac mac = Mac.getInstance("HmacSHA1");

    SecretKeySpec signingKey = new SecretKeySpec("secret_something".getBytes(), "HmacSHA1");
    mac.init(signingKey);

    for (String value : values) {
        mac.update(value.getBytes());
    }

    // the problematic line: the timestamp is not added as a string here,
    // while the Perl code adds it as one
    mac.update(timestamp.getBytes());
    byte[] rawHmac = mac.doFinal();
    byte[] hexBytes = new Hex().encode(rawHmac);
    return new String(hexBytes, "UTF-8");
}

The code looked good and calculated a hash. However, using the same parameters for the Perl and Java code, they returned different results. After some debugging, I found that the only parameter causing problems was the timestamp. My first guess was that the problem was caused by the use of Integer as the timestamp type instead of some other numeric type. I tried a few things to get around that, but none of them worked.

Another idea was to check why it works for the String params, but not for Integer. I found that Perl treats the timestamp as a string and passes a string to the hash calculating method, so I tried emulating this by converting the timestamp into a String before using the getBytes() method:

import java.io.UnsupportedEncodingException;
import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import org.apache.commons.codec.binary.Hex;

public String calculateHash(String[] values) throws NoSuchAlgorithmException, UnsupportedEncodingException, InvalidKeyException {

    java.util.Date date = new java.util.Date();
    Integer timestamp = (int) (date.getTime() / 1000);

    Mac mac = Mac.getInstance("HmacSHA1");

    SecretKeySpec signingKey = new SecretKeySpec("secret_something".getBytes(), "HmacSHA1");
    mac.init(signingKey);

    for (String value : values) {
        mac.update(value.getBytes());
    }

    // convert the timestamp to a String first, mirroring Perl's behavior
    mac.update(timestamp.toString().getBytes());
    byte[] rawHmac = mac.doFinal();
    byte[] hexBytes = new Hex().encode(rawHmac);
    return new String(hexBytes, "UTF-8");
}

This worked perfectly, and there were no other problems with calculating the hash in Perl and Java.


Comments

published by noreply@blogger.com (Matt Vollrath) on 2015-03-24 21:16:00 in the "javascript" category

ROS and RobotWebTools have been extremely useful in building our latest crop of distributed interactive experiences. We're continuing to develop browser-fronted ROS experiences very quickly based on their huge catalog of existing device drivers. Whether a customer wants their interaction to use a touchscreen, joystick, lights, sound, or just about anything you can plug into the wall, we now say with confidence: "Yeah, we can do that."

A typical ROS system is made up of a group ("graph") of nodes that communicate over (usually TCP) messaging. Communication happens over either publish/subscribe topic namespaces or request/response services. ROS bindings exist for several languages, but C++ and Python are the only supported direct programming interfaces. ROS nodes can be custom logic processors, aggregators, arbitrators, command-line tools for debugging, native Arduino sketches, or just about any other imaginable consumer of the data streams from other nodes.

The rosbridge server, implemented with rospy in Python, is a ROS node that provides a web socket interface to the ROS graph with a simple JSON protocol, making it easy to communicate with ROS from any language that can connect to a web socket and parse JSON. Data is published to a messaging topic (or topics) from any node in the graph and the rosbridge server is just another subscriber to those topics. This is the critical piece that brings all the magic of the ROS graph into a browser.

A handy feature of the rosbridge JSON protocol is the ability to create topics on the fly. For interactive exhibits that require multiple screens displaying synchronous content, topics that are only published and subscribed between web socket clients are a quick and dirty way to share data without writing a "third leg" ROS node to handle input arbitration and/or logic. In this case, rosbridge will act as both a publisher and a subscriber of the topic.

To develop a ROS-enabled browser app, all you need is an Ubuntu box with ROS, the rosbridge server and a web socket-capable browser installed. Much has been written about installing ROS (indigo), and once you've installed ros-indigo-ros-base, set up your shell environment, and started the ROS core/master, a rosbridge server is two commands away:

$ sudo apt-get install ros-indigo-rosbridge-suite
$ rosrun rosbridge_server rosbridge_websocket

While rosbridge is running, you can connect to it via ws://hostname:9090 and access the ROS graph using the rosbridge protocol. Interacting with rosbridge from a browser is best done via roslibjs, the JavaScript companion library to rosbridge. All of the JavaScript files are available from the RobotWebTools CDN for your convenience.

<script type="text/javascript"
  src="http://cdn.robotwebtools.org/EventEmitter2/current/eventemitter2.min.js">
</script>
<script type="text/javascript"
  src="http://cdn.robotwebtools.org/roslibjs/current/roslib.min.js">
</script>

From here, you will probably want some shared code to declare the Ros object and any Topic objects.

//* The Ros object, wrapping a web socket connection to rosbridge.
var ros = new ROSLIB.Ros({
  url: 'ws://localhost:9090' // url to your rosbridge server
});

//* A topic for messaging.
var exampleTopic = new ROSLIB.Topic({
  ros: ros,
  name: '/com/endpoint/example', // use a sensible namespace
  messageType: 'std_msgs/String'
});

The messageType of std_msgs/String means that we are using a message definition from the std_msgs package (which ships with ROS) containing a single string field. Each topic can have only one messageType that must be used by all publishers and subscribers of that topic.

A "proper" ROS communication scheme will use predefined message types to serialize messages for maximum efficiency over the wire. When using the std_msgs package, this means each message will contain a value (or an array of values) of a single, very specific type. See the std_msgs documentation for a complete list. Other message types may be available, depending on which ROS packages are installed on the system.

For cross-browser application development, a bit more flexibility is usually desired. You can roll your own data-to-string encoding and pack everything into a single string topic or use multiple topics of appropriate messageType if you like, but unless you have severe performance needs, a JSON stringify and parse will pack arbitrary JavaScript objects as messages just fine. It will only take a little bit of boilerplate to accomplish this.

/**
 * Serializes an object and publishes it to a std_msgs/String topic.
 * @param {ROSLIB.Topic} topic
 *       A topic to publish to. Must use messageType: std_msgs/String
 * @param {Object} obj
 *       Any object that can be serialized with JSON.stringify
 */
function publishEncoded(topic, obj) {
  var msg = new ROSLIB.Message({
    data: JSON.stringify(obj)
  });
  topic.publish(msg);
}

/**
 * Decodes an object from a std_msgs/String message.
 * @param {Object} msg
 *       Message from a std_msgs/String topic.
 * @return {Object}
 *       Decoded object from the message.
 */
function decodeMessage(msg) {
  return JSON.parse(msg.data);
}

All of the above code can be shared by all pages and views, unless you want some to use different throttle or queue settings on a per-topic basis.

On the receiving side, any old anonymous function can handle the receipt and unpacking of messages.

// Example of subscribing to a topic with decodeMessage().
exampleTopic.subscribe(function(msg) {
  var decoded = decodeMessage(msg);
  // do something with the decoded message object
  console.log(decoded);
});

The sender can publish updates at will, and all messages will be felt by the receivers.

// Example of publishing to a topic with publishEncoded().
// Explicitly declare that we intend to publish on this Topic.
exampleTopic.advertise();

setInterval(function() {
  var mySyncObject = {
    time: Date.now(),
    myFavoriteColor: 'red'
  };
  publishEncoded(exampleTopic, mySyncObject);
}, 1000);

From here, you can add another layer of data shuffling by writing message handlers for your communication channel. Re-using the EventEmitter2 class upon which roslibjs depends is not a bad way to go. If it feels like you're implementing ROS messaging on top of ROS messaging... well, that's what you're doing! This approach will generally break down when communicating with other non-browser nodes, so use it sparingly and only for application layer messaging that needs to be flexible.

/**
 * Typed messaging wrapper for a std_msgs/String ROS Topic.
 * Requires decodeMessage() and publishEncoded().
 * @param {ROSLIB.Topic} topic
 *       A std_msgs/String ROS Topic for cross-browser messaging.
 * @constructor
 */
function RosTypedMessaging(topic) {
  this.topic = topic;
  this.topic.subscribe(this.handleMessage_.bind(this));
}
RosTypedMessaging.prototype.__proto__ = EventEmitter2.prototype;

/**
 * Handles an incoming message from the topic by firing an event.
 * @param {Object} msg
 * @private
 */
RosTypedMessaging.prototype.handleMessage_ = function(msg) {
  var decoded = decodeMessage(msg);
  var type = decoded.type;
  var data = decoded.data;
  this.emit(type, data);
};

/**
 * Sends a typed message to the topic.
 * @param {String} type
 * @param {Object} data
 */
RosTypedMessaging.prototype.sendMessage = function(type, data) {
  var msg = {type: type, data: data};
  publishEncoded(this.topic, msg);
};

Here's an example using RosTypedMessaging.

//* Example implementation of RosTypedMessaging.
var myMessageChannel = new RosTypedMessaging(exampleTopic);

myMessageChannel.on('fooo', function(data) {
  console.log('fooo!', data);
});

setInterval(function() {
  var mySyncObject = {
    time: Date.now(),
    myFavoriteColor: 'red'
  };
  myMessageChannel.sendMessage('fooo', mySyncObject);
}, 1000);

If you need to troubleshoot communications or are just interested in seeing how it works, ROS comes with some neat command line tools for publishing and subscribing to topics.

### show messages on /example/topicname
$ rostopic echo /example/topicname

### publish a single std_msgs/String message to /example/topicname
### the quotes are tricky, since rostopic pub parses YAML or JSON
$ export MY_MSG="data: '{\"type\":\"fooo\",\"data\":{\"asdf\":\"hjkl\"}}'"
$ rostopic pub -1 /example/topicname std_msgs/String "$MY_MSG"

To factor input, arbitration or logic out of the browser, you could write a roscpp or rospy node acting as a server. Also worth a look are ROS services, which can abstract asynchronous data requests through the same messaging system.
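
For instance, a minimal rospy sketch of such a server node might look like this (the topic names and arbitration logic are hypothetical):

#!/usr/bin/env python
import rospy
from std_msgs.msg import String

def handle_message(msg):
    # arbitration/logic happens here instead of in the browser
    rospy.loginfo('received: %s', msg.data)
    pub.publish(String(data=msg.data))

rospy.init_node('example_arbitrator')
pub = rospy.Publisher('/com/endpoint/example/output', String, queue_size=10)
rospy.Subscriber('/com/endpoint/example', String, handle_message)
rospy.spin()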

A gist of this example JavaScript is available, much thanks to Jacob Minshall.


Comments

published by noreply@blogger.com (Steph Skardal) on 2014-07-29 20:06:00 in the "javascript" category

A while back, we started using the Nestable jQuery Plugin for H2O. It provides interactive hierarchical list functionality – or the ability to sort and nest items.


Diagram from Nestable jQuery Plugin representing interactive hierarchical sort and list functionality.

I touched on H2O's data model in this post, but it essentially mimics the diagram above: a user can build sortable and nestable lists. Nesting is visible at up to 4 levels. Each list is accessible and editable as its own resource, owned by a single user. The plugin is ideal for working with the data model; however, I needed a bit of customization that I'll describe in this post.

Limiting Nestability to Specific List Items

The feature I was asked to develop was to limit nesting to items owned by the current authorized (logged-in) user. Users can add items to their list that are owned by other users, but they cannot modify the list elements for that list item. In visual form, it might look something like the diagram below, where green represents the items owned by the user, which allow modified nesting, and red represents items not owned by the user, which cannot be modified. In the diagram below, I would not be able to add to or reorder the contents of Item 5 (including Items 6 - 8), and I would not be able to add any nested elements to Item 10. I can, however, reorder elements inside Item 2, which means e.g. I can move Item 5 above Item 3.


Example of nesting where nesting is prohibited among some items (red), but allowed under others (green).

There are a couple of tricky bits to note in developing this functionality:

  • The plugin doesn't support this type of functionality, nor is it currently maintained, so there are absolutely no expectations of this being an included feature.
  • These pages are fully cached for performance optimization, so there is no per-user logic that can be run to generate modified HTML. The solution here was implemented using JavaScript and CSS.

Background Notes on the Plugin

There are a couple of background notes on the plugin before I go into the implemented solution:

  • The plugin uses <ol> tags to represent lists. Only items in <ol> elements are nestable and sortable.
  • The plugin recognizes .dd-handle as the draggable handle on a list item. If an item doesn't have a .dd-handle element, no part of it can be dragged.
  • The plugin creates a <div> with a class of dd-placeholder to represent the placeholder where an item is about to be dropped. The default appearance for this is a white box with dashed outline.
  • The plugin has an on change event which is triggered whenever any item is dropped in the list or any part of the list is reordered.

Step 1: JavaScript Update for Limiting Nestability

After the content loads, as well as after additional list items are added, a method called set_nestability is run to modify the HTML of the content, represented by the pseudocode below:

set_nestability: function() {
  // if user is not logged in
    // nestability is never enabled
  // if user is logged in user is not a superadmin (superadmins can edit all) 
    // loop through each list item 
      // if the list item data('user_id') != $.cookie('user_id')
        // remove dd-handle class from all list item children .dd-handle elements
        // replace all <ol> tags with <ul> tags inside that list item

The simple bit of pseudocode does two things: It removes the .dd-handle class for elements that can't be reordered, and it replaces <ol> tags with <ul> tags to enable CSS. The only thing to take note of here is that in theory, a "hacker" can change their cookie to enable nestability of certain items, but there is additional server-side logic that would prohibit an update.
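
Here's roughly how that pseudocode might be realized (a sketch; it assumes the plugin's .dd-item/.dd-handle markup, a data-user_id attribute on each item, and the jquery.cookie plugin):

set_nestability: function() {
  // superadmins skip this method entirely; everyone else is checked per item
  var user_id = $.cookie('user_id');
  $('li.dd-item').each(function() {
    var $item = $(this);
    if (String($item.data('user_id')) !== String(user_id)) {
      // remove the drag handle so the plugin won't let this item be reordered
      $item.children('.dd-handle').removeClass('dd-handle');
      // swap <ol> for <ul> so nested lists stop being editable drop targets
      $item.children('ol').each(function() {
        $(this).replaceWith($('<ul/>').append($(this).children()));
      });
    }
  });
}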

Step 2: CSS Updates

ul .dd-placeholder {
  display: none;
}

Next up, the simple CSS change above was made. This results in the placeholder div being hidden in any non-editable list. I made several small CSS modifications so that the ul and ol elements would look the same otherwise.

Step 3: JavaScript On Change Updates

Finally, I modified the on change event:

$('div#list').on('change', function(el) {
  // if item is dropped
    // if dropped item is not inside <ol> tag, return (do nothing)
    // else continue with dropped logic
  // else if position is changed
    // trigger position change logic  
});

The on change event does nothing when the dropped item is not inside an editable list. Otherwise, the dropped logic continues as the user has permissions to edit the list.
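
A fleshed-out version might look something like this (a sketch; how the changed item is exposed depends on the plugin version, and the AJAX endpoint is hypothetical):

$('div#list').on('change', function(e, item) {
  // `item` stands in for the dropped element; bail out unless it
  // landed inside an editable (<ol>) list
  if (item && $(item).closest('ol').length === 0) {
    return;
  }
  // the user has permission to edit this list, so persist the new structure
  $.post('/lists/reorder', {
    data: JSON.stringify($(this).nestable('serialize'))
  });
});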

Conclusion

The functionality described here has worked well. What may become a bit trickier is when advanced rights and roles will allow non-owners to edit specific content, which I haven't gotten to yet. I haven't found additional resources that offer sortable and nestable functionality in jQuery, but it'd be great to see a new well-supported plugin in the future.


Comments

published by noreply@blogger.com (Steph Skardal) on 2014-07-15 19:36:00 in the "javascript" category

Over a year ago, I wrote about JavaScript-driven interactive highlighting that emulates the behavior of physically highlighting and annotating text. It's interesting how much technology can change in a short time. This spring, I've been busy at work on a major upgrade of both Rails (2.3 to 4.1) and the annotation capabilities for H2O. As I explained in the original post, this highlighting functionality is one of the most interesting and core features of the platform. Here I'll go through a brief history and the latest round of changes.


Example digital highlighting of sample content.

History

My original post explains the challenges associated with highlighting content on a per-word basis, as well as determining color combinations for those highlights. In past implementations, each word was wrapped in a single DOM element and that element would have its own background color based on the highlighting (or a white background for no highlighting). In the first iteration of the project, we didn't allow for color combinations at all – instead we tracked the history of highlights per word and always highlighted with the most recent applicable color. In the second iteration of the annotation work, opaque layers of color were added under the words to simulate color combinations using absolute positioning. Cross-browser support for absolute positioning is not always consistent, so this iteration had challenges.

In the third iteration that lasted for over a year, I found a great plugin (xColor) to calculate color combinations, eliminating the need for complex layering of highlights. The most recent iteration was acceptable in terms of functionality, but the major limitation we found was in performance. When every word of a piece of content has a DOM element, there are significant performance issues when content has more than 20,000 words, especially noticeable in the slower browsers.

I've had my eye out for a better way to accomplish this desired functionality, but without per-word DOM markup, I didn't know how to implement annotations while avoiding the performance challenges.

Annotator Tool

Along came Annotator, an open source JavaScript plugin offering annotation functionality. A coworker first brought this plugin to my attention and I spent time evaluating it. At the time, I concluded that while the plugin looked promising, too much customization was required to support H2O's existing features. And of course, the tool did not support IE8, which was a huge limitation at the time, although it becomes less of one as time passes and users move away from IE8.

Time passed, and the H2O project manager also came across the same tool and brought it to my attention. I spent a bit of time developing a proof of concept to see how I might accomplish some of the desired behavior in a custom encapsulated plugin. With the success of the proof of concept, I also spent time working through the IE8 issues. Although I was able to work through many of them, I was not able to find a solution that fully supported the tool in IE8. At that point, we decided to use Annotator and disable annotation capabilities for IE8, and I moved forward on development.

How does it work?

Rather than highlighting content on a word level, Annotator determines the XPath of a section of highlighted characters. The XPath for the annotation's starting and ending points is retrieved, and one or more DOM elements wrap this content. If the annotated characters span multiple DOM elements (e.g. the annotation spans multiple paragraphs), a wrapping DOM element is created within each parent element involved. Annotator handles all the management of the wrapped DOM elements for an annotation, and it provides triggers or hooks tied to specific annotation events (e.g. after an annotation is created, before an annotation is deleted).
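
For illustration, an annotation record in Annotator looks roughly like this (the values are made up; the paths are XPath-style, relative to the annotated container):

{
  "quote": "the highlighted text",
  "text": "a note the user typed",
  "ranges": [{
    "start": "/div[1]/p[2]",   // where the highlight starts
    "startOffset": 33,
    "end": "/div[1]/p[3]",     // where it ends (it can span elements)
    "endOffset": 12
  }]
}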

This solution has much better performance than the aforementioned techniques, and there's a growing community of open source developers involved in it, who have helped improve functionality and contribute additional features.

Annotator Customizations

Annotator includes a nice plugin architecture designed to allow custom functionality to be built on top of it. Below are customized features I've added to the application:

Colored Highlighting

In my custom plugin, I've added tagged colored highlighting. An editor can select a specific color for a tag assigned to an annotation from preselected colors. All users can highlight and unhighlight annotations with that specific tag. The plugin uses jQuery XColor, a JavaScript plugin that handles color calculation of overlapping highlights. Users can also turn on and off highlighting on a per tag basis.


Tagged colored highlighting (referred to as layers here) is selected from a predefined set of colors.

Linked Resources

Another customization I created was the ability to link annotations to other resources in the application, which allows for users to build relationships between multiple pieces of content. This is merely an extra data point saved on the annotation itself.


A linked collage from this annotation.

Toggle Display of layered and unlayered content

One of the most difficult customization points was building out the functionality that allows users to toggle display of unlayered and layered content, meaning that after a user annotates a certain amount of text, they can hide all the unannotated text (replaced with an ellipsis). The state of the content (e.g. with unlayered text hidden) is saved and presented to other users this way, which essentially allows the author to control what text is visible to users.

Learn More

Make sure to check out the Annotator website if you are interested in learning more about this plugin. The active community has interesting support for annotating images, video, and audio, and is always focused on improving plugin capabilities. One group is currently focused on cross browser support, including support of IE8.


Comments

published by noreply@blogger.com (Steph Skardal) on 2014-04-04 14:13:00 in the "javascript" category

I'm here in San Francisco for the second annual I Annotate conference. Today I'm presenting my work on the H2O project, but in this post I'll share a couple of focus points for the conference thus far, described below.

What do we mean by Annotations?

Annotation work is off the path of End Point's ecommerce focus, and annotations mean different things to different users, so to give a bit of context: to me, an annotation is markup tied to a single piece of target content (image, text, video). There are other interpretations of annotations, such as highlighted text with no markup (i.e. flagging some target content), and cases where annotations are tied to multiple pieces of target content.

Annotations in General

One of the focuses of yesterday's talks was how to allow the powerful concept of annotations to succeed on the web. Ivan Herman of the W3C touched on why the web has succeeded, and what we can learn from that success to help web annotations: the web has been a great idea, interoperable, decentralized, and open source, and we hope those qualities can translate to web annotations to help them be successful. Another interesting point, from Tom Lehman of RapGenius, was that the actual implementation of annotation doesn't matter; rather, it's the community in place that encourages many high-quality annotations. For RapGenius, that means offering a user hierarchy that awards users roles such as moderator, editor, and contributor, layering on a point-based ranking system, and encouraging posting of RapGenius-annotated content on other sites. This talk struck a chord with me, because I know how hard it is to get high-quality content for a website.

Specific Use Cases of Annotator

Yesterday's talks also covered several interesting use cases of Annotator, an open source JavaScript-based tool that aims to be the reference platform for web annotations and has been commonly adopted in this space; it is what we are using in H2O. Many of the people attending the conference are using Annotator and are interested in its future and capabilities. Some implementation highlights were:

  • RapGenius: Aforementioned, contains active community of annotating lyrics.
  • SocialBook: A classroom and/or "book club" application for interactive discussions and annotations of books.
  • FinancialTimes: From my understanding, annotations are the guide to how content is aggregated and presented in various facets of the BBC website.
  • annotationstudio.org: collaborative web-based annotation tools under development at MIT which has similarities to H2O.
  • AustESE: Work being done in Australia for scholarly editing, includes a Drupal plugin implemented using Annotator with several plugins layered on top, including image annotation, categorization, threaded discussions.
  • hypothes.is: Hypothes.is uses a tool built on top of Annotator, featuring several advanced features such as image annotation, a bookmarklet-based annotation implementation, and real-time stream updates with search.

After the morning talks, we broke into two longer small-group sessions, and I joined the sessions that delved into deeper issues and implementation details of Annotator, as well as the challenges and needs associated with annotating the law. I'll share my presentation and more notes from today's talks. Stay tuned!


Comments

published by noreply@blogger.com (Steph Skardal) on 2014-03-03 16:05:00 in the "javascript" category

Since the release of Rails 3 a while back, I've gotten a lot of use out of the asset pipeline. There can be minor headaches associated with it, but ultimately, the process of combining, minifying, and serving a single gzipped JavaScript and CSS file is a great gain in terms of reducing requests to speed up your web application. It's behavior that I've wanted to emulate in other platforms that I've used (including Interchange).

One headache that might come up from the asset pipeline is that JavaScript functions from various parts of the application might have the same name, and may override existing functions of the same name. The last defined function will be the one that executes. I've come up with a common pattern to avoid this headache, described below:

Set the Body ID

First, I set the body tag in my application layout file to be related to the controller and action parameters. Here's what it looks like:

<body id="<%= "#{params[:controller].gsub('/', '_')}_#{params[:action]}" %>">
...
</body>

If I was less lazy, I could create a helper method to spit out the id.
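
Such a helper might look like this (a sketch; the helper name is my own):

# app/helpers/application_helper.rb
def body_id
  "#{params[:controller].gsub('/', '_')}_#{params[:action]}"
end

That reduces the body tag to <body id="<%= body_id %>">.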

Shared JS

I create a shared.js file, which contains JavaScript shared across the application. This has the namespace "shared", or "app_name_global", or something that indicates it's global and shared:

var shared = {
};

Namespace JavaScript

Next, I namespace my JavaScript for that particular controller and action which contains JavaScript applicable only to that controller action page. I namespace it to match the body ID, such as:

// for the users edit page
var users_edit = {
...
};
// for the products show page
var products_show = {
...
};

Add initialize method:

Next, I add an initialize method to each namespace, which contains the various listeners applicable to that page only:

// for the users edit page
var users_edit = {
    initialize: function() {
        //listeners, onclicks, etc.
    },
    ...
};
// for the products show page
var products_show = {
    initialize: function() {
        //listeners, onclicks, etc.
    },
    ...
};

Shared initialize method

Finally, I add a method to check for the initialize method applicable to the current page and execute that, in the shared namespace:

var shared = {
    run_page_initialize: function() {
        var body_id = $('body').attr('id');
        // look the namespace up on window rather than eval-ing it, so a
        // page without its own namespace doesn't raise a ReferenceError
        if (window[body_id] && window[body_id].initialize !== undefined) {
            window[body_id].initialize();
        }
    }
};
$(function() {
    shared.run_page_initialize();
});

Dealing with shared code across multiple actions

In some cases, code might apply to multiple parts of the application, and no one wants to repeat that code! In this case, I've set up a single namespace for one of the controller actions, and then defined another namespace (or variable) pointing to the first one, as shown below.

// for the users edit page
var users_edit = {
    initialize: function() {
        //listeners, onclicks, etc.
    },
    ...
};

// reuse the users_edit code for the users show page
var users_show = users_edit;

Conclusion

It's pretty simple, but it's a nice little pattern that has helped me be consistent in my organization and namespacing and makes for less code repetition in executing the initialized methods per individual page types. Perhaps there are a few more techniques in the Rails space intended to accomplish a similar goal – I'd like to hear about them in the comments!


Comments

published by noreply@blogger.com (Jeff Boes) on 2014-01-20 14:00:00 in the "javascript" category

This isn't earth-shattering news, but it's one of those things that someone new to jQuery might trip over so I thought I'd share.

I had a bad experience recently adding jQuery to an existing page that had less than stellar HTML construction, and I didn't have time nor budget to clean up the HTML before starting work. Thus, I was working with something much more complex than, but equally broken as what follows:

<table><tr><td>
<form>
</td></tr>
<tr><td>
<input type="text">
</form>
</td></tr></table>

The jQuery I added did something like this:

$('form input').css('background-color', 'red');

Of course, I was quite puzzled when it didn't work. The pitfall here is that jQuery may or may not be able to handle misconstructed or unbalanced HTML, at least not as well as your average browser, which will shift things around internally until something makes sense to it. The minimal solution is to move the opening and closing "form" tags outside the table.
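
Applied to the markup above, the fixed structure looks like this:

<form>
<table><tr><td>
</td></tr>
<tr><td>
<input type="text">
</td></tr></table>
</form>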
Comments

published by noreply@blogger.com (Greg Davidson) on 2013-11-07 14:11:00 in the "javascript" category

I ran into this issue the other day while testing a new feature for a client site. The code worked well in Chrome, Firefox, Safari and IE (8-11) but it blew up in IE7. The page was fairly straightforward — I was using jQuery and the excellent doT.js templating library to build up some HTML and add it to the page after the DOM had loaded. This content included several links like so:

    <a class="my-links" href="#panel1">More Info</a>
    <a class="my-links" href="#panel2">More Info</a>
    <a class="my-links" href="#panel3">More Info</a>

Each of the links pointed to their corresponding counterparts which had also been added to the page. The JavaScript code in question responded to clicks on the "More Info" links and used their href attribute as a jQuery selector:

$('.my-links').on('click', function(e) {
  e.preventDefault();
  var sel = $(this).attr('href');
  ...
});
  

Links "Enhanced" By IE7

As I debugged in IE7, I determined that it was adding the fully qualified domain name to the links. Instead of "#panel2" the href attributes were set to "http://example.com/#panel2" which broke things — especially my jQuery selectors. Fixing the issue was straightforward at this point:

// fix hrefs in IE 7 and 6
if ( /^http/.test(sel) ) {
  sel = sel.substr(sel.indexOf('#'));
}
  

When the href attribute begins with http, discard everything before the hash (#).

Digging Deeper

Although the problem had been solved, I was still curious why this was happening. While the .href property of a link (e.g. myAnchor.href) returns the fully qualified URL in all browsers, getAttribute('href') returns only the text of the attribute. I believe the $.attr() method from jQuery uses getAttribute() behind the scenes. For modern browsers, calling $.attr() or getAttribute() worked as I was expecting.
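
To illustrate (assuming the markup from the example above, served from http://example.com/):

var link = document.getElementById('myAnchor'); // <a id="myAnchor" href="#panel2">
link.href;                  // "http://example.com/#panel2", resolved in every browser
link.getAttribute('href');  // "#panel2" in modern browsers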

However IE 6 and 7 had different ideas about this:

Ie7 demo

Notice that the fully qualified domain name is prepended to the href attribute in the first example.

When links are added to the page via innerHTML, IE includes/prepends the full URL. However, when links are added to the page via the createElement function they remain unscathed.
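
A minimal reproduction of that difference (hypothetical, following the behavior described above):

var div = document.createElement('div');
div.innerHTML = '<a href="#panel2">More Info</a>';
div.firstChild.getAttribute('href'); // IE 6/7: "http://example.com/#panel2"

var a = document.createElement('a');
a.setAttribute('href', '#panel2');
a.getAttribute('href'); // "#panel2", even in IE 6/7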

The following screenshot demonstrates how Chrome and other modern browsers handle the same code:

Chrome demo

These browsers happily leave the href attributes alone :)

You learn something new every day!


Comments

published by noreply@blogger.com (Greg Davidson) on 2013-06-04 19:51:00 in the "javascript" category

Room With A View

I attended JSConf in Amelia Island, FL last week. As you can see, the venue was pretty spectacular, and the somewhat remote location lent itself very well to the vision set by the conference organizers. Many developers, myself included, often find the line between work and play blurring because there is much work to be done: many new open source projects to check out, constant advancements in browser technology, programming languages, you name it. Keeping up with it all is fun but can be challenging at times. While the talks were amazing, the focus and ethos of JSConf as I experienced it was more about people and building on the incredible community we have. I highly recommend attending meetups or conferences in your field if possible.

Without further ado, I've written about some of the talks I attended. Enjoy!

Day One

Experimenting With WebRTC

Remy Sharp presented the first talk of the day, about his experience building a Google Chrome Experiment with WebRTC and the peer-to-peer communication API. The game (headshots) was built to work specifically on Chrome for Android Beta. Because WebRTC is so young, the libraries supporting it (Peer.js, easyRTC, WebRTC.io, SimpleWebRTC) are very young as well and developing quickly. He found that "libraries are good when you're fumbling, bad when stuff doesn't work". The "newness" of WebRTC ate much more time than he had anticipated. Remy demoed the game and it looked like good fun. If you're interested in checking out headshots further, the source code is up on GitHub.

JavaScript Masterclass

Angelina Fabbro gave a talk about levelling up your skills as a developer. She first broke everyone's heart by telling us "we're not special" and that nobody is a natural-born programmer. She presented some data (studies, etc.) to support her point and argued that early exposure to logical thinking and programming practice can make you seem like a natural.

She described several ways to know you're not a beginner:

  • You can use the fundamentals in any language
  • You are comfortable writing code from scratch
  • You peek inside the libraries you use
  • You feel like your code is mediocre and you're unsure what to do about it

...and ways to know you're not an expert:

  • You don't quite grok (understand) all the code you read
  • You can't explain what you know
  • You aren't confident debugging
  • You rely on references/docs too much

“Welcome to the ambiguous zone of intermediate-ness”.

Angelina suggested several ways to improve your skills. Ask "why?" obsessively, teach or speak at an event, work through a suggested curriculum, have opinions, seek mentorship, write in another language for a while etc. One book she specifically recommended was Secrets of the JavaScript Ninja by John Resig.

JavaScript is Literature is JavaScript

Angus Croll from Twitter presented a hilariously entertaining talk in which he refactored some JavaScript functions in the literary styles of Hemingway, Shakespeare, Bolaño, Kerouac, James Joyce and other literary figures. The talk was inspired by a blog post he'd written and had the entire conference hall erupting with laughter throughout.


Learning New Words

Andrew Dupont continued in the literary, language-oriented vein, giving a talk which drew a parallel between olde english purists who did not want to adopt any "new" words and the differing views surrounding the EcmaScript 6 specification process. Dupont's talk was very thought-provoking especially in light of the resistance to some proposals in the works (e.g. ES6 modules). Check out his slides — the video will also be published in future.

Flight.js


Dan Webb spoke about Flight, a client-side framework developed by the team at Twitter and released this past January. They have been using it to run lots of twitter.com and most of tweetdeck as well. Webb got a laugh out of the room when he recalled the first comment on Hacker News after Flight was announced: "Why not use Angular?". The motivation behind Flight's design was to make a "system that's easy to think about". This aim was achieved by decoupling components (entirely self-contained).

A Flight component is just a JavaScript object with a reference to a DOM node. The component can be manipulated, trigger events and listen to events (from other components). Applications are constructed by attaching Flight components to pieces of the DOM. Keeping the components independent in this way makes testing very easy. Complexity can be layered on but does not require any additional mental overhead. Dan suggested that Flight's design and architecture would work very well with Web Components and Polymer in future.


Comments

published by noreply@blogger.com (Greg Davidson) on 2013-06-03 19:30:00 in the "javascript" category

I attended the inaugural CSS Conf last week at Amelia Island, Florida. The conference was organized by Nicole Sullivan, Brett Stimmerman, Jonathan Snook, and Paul Irish and put on with help from a host of volunteers. The talks were presented in a single track style on a wide range of CSS-related topics; there was something interesting for everyone working in this space. I really enjoyed the conference, learned lots and had great discussions with a variety of people hacking on interesting things with CSS. In the coming days I will be blogging about some of the talks I attended and sharing what I learned, so stay tuned!

When Bootstrap Attacks

Pamela Fox had the opening slot and spoke about the experiences and challenges she faced when upgrading Bootstrap to v2 in a large web app (Coursera). What she initially thought would be a quick project turned into a month-long "BOOTSTRAPV2ATHON". Bootstrap styles were used throughout the project in dozens of PHP, CSS and JavaScript files. The fact that Bootstrap uses generic CSS class names like "alert", "btn", and "error" made it very difficult to grep through the codebase for them. The Bootstrap classes were also used as hooks by the project's JavaScript application code.

Lessons Learned

Fox offered some tips for developers facing a similar situation. The first was to prefix the Bootstrap CSS classes (e.g. .tbs-alert) in order to decouple Bootstrap from the customizations in your project. Some requests have been made to the Bootstrap team on this front, but the issue has not been addressed yet. In the meantime, devs can add a task to their build step (e.g. Grunt, the asset pipeline in Rails, etc.) to automate the addition of prefixes to each of the CSS classes.

Another tip is to avoid using Bootstrap CSS classes directly. Instead, use the "extend" functionality in your preprocessor (Sass, Less, Stylus etc) of choice. For example:

  .ep-btn {
    @extend .btn;

    &:hover {
      @extend .btn:hover;
    }
  }

This way your project can extend the Bootstrap styles but keep your customizations separate and not closely coupled to the framework.

The same logic should also be applied to the JavaScript in your project. Rather than using the Bootstrap class names as hooks in your JavaScript code, use a prefix (e.g. js-btn) or use HTML5 data attributes. Separating the hooks used for CSS styles from those used in JavaScript is very helpful when upgrading or swapping out a client-side framework like Bootstrap.
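
For example (the class names here are my own):

<!-- "btn" carries Bootstrap styling; "js-save" exists only as a JavaScript hook -->
<button class="btn js-save">Save</button>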

Test All Of The Things

Pamela wrapped up the talk by explaining how testing front end code would ease the pain of upgrading a library next time. There are many testing libraries available today which address some of these concerns. She mentioned mocha, Chai, jsdom and Selenium, which all look very helpful. In addition to testing front end code, she offered up the idea of "diffing your front end" in a visual way. This concept was very interesting to someone who ensures designs are consistent across a wide array of browsers and devices on a daily basis.

Needle is a tool which allows you to do this automatically. Once you develop a test case, you can run Needle to view a visual diff of your CSS changes. I think this is an excellent idea. Pamela also noted that the combination of Firefox screenshots and Kaleidoscope could be used manually in much the same way.

Many thanks to Pamela for sharing this! The slides for this talk can be viewed here and the talk was recorded so the video will also be available sometime soon.


Comments

published by noreply@blogger.com (Mike Farmer) on 2013-05-15 17:31:00 in the "javascript" category

The Rails Asset Pipeline to me is kind of like Bundler. At first I was very nervous about it and thought that it would be very troublesome. But after a while of using it, I realized the wisdom behind it and my life got a lot easier. The asset pipeline is fabulous for putting all your assets into a single file, compressing them, and serving them in your site without cluttering everything up. Remember these days?

<%= javascript_include_tag 'jquery.validate.min.js','jquery.watermark.min.js','jquery.address-1.4.min','jquery.ba-resize.min','postmessage','jquery.cookie','jquery.tmpl.min','underscore','rails','knockout-1.3.0beta','knockout.mapping-latest' %>

A basic component of the Asset Pipeline is the manifest file. A manifest looks something like this

// Rails asset pipeline manifest file
// app/assets/javascript/application.js

//= require jquery
//= require jquery_ujs
//= require_tree .

For a quick rundown on how the Asset Pipeline works, I highly recommend taking a few minutes to watch Railscast episode 279. But there are a few things that I'd like to point out here.

First, notice that the name of the file is application.js. This means that all the javascript specified in the manifest file will be compiled together into a single file named application.js. In production, if specified, it will also be compressed and uglified. So rather than specifying a really long and ugly javascript_include_tag, you just need one:

<%= javascript_include_tag "application.js" %>

Second, this means that if you only have one manifest file in your application, then all of your javascript will be loaded on every page, assuming that your javascript_include_tag is loading in the head of your layout. For small applications with very little javascript, this is fine, but for large projects where there are large client-side applications, this could be a problem for performance. For example, do you really need to load all of ember.js or backbone.js or knockout.js in the admin portion of your app when it isn't used at all? Granted, these libraries are pretty small, but the applications that you build that go along with them don't need to be loaded on every page.

So I thought there should be a way to break up the manifest file so that only the JavaScript we need is loaded. The need for this came when I was upgrading the main functionality of a website to use the Backbone.js framework. The app was large and complex, and the "single-page application" was only a part of it. I didn't want my application to load on the portions of the site where it wasn't needed. I looked high and low on the web for a solution but only found obscure references, so I thought I would take some time to put the solution out there cleanly, hoping to save some of you from figuring it out on your own.

The solution lies in the first point raised earlier about the asset pipeline. The fact is, you can create a manifest file anywhere in your assets directory and use the same directives to load the javascript. For example, in the app/assets/javascripts/ directory the application.js file is used by default. But it can be named anything and placed anywhere. For my problem, I only wanted my javascript application loaded when I explicitly called it. My directory structure looked like this:

|~app/
| |~assets/
| | |+fonts/
| | |+images/
| | |~javascripts/
| | | |+admin/
| | | |+lib/
| | | |+my_single_page_app/
| | | |-application.js

My application.js file looks just like the one above but I removed the require_tree directive. This is important because now I don't want all of my javascript to load from the application.js. I just want some of the big stuff that I use all over like jQuery. (Ok, I also added the underscore library in there too because it's just so darn useful!) This file is loaded in the head of my layout just as I described above.
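
After trimming, it looks something like this (a sketch; only the globally shared libraries remain):

// app/assets/javascripts/application.js

//= require jquery
//= require jquery_ujs
//= require underscore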

Then, I created another manifest file named load_my_app.js in the root of my_single_page_app/. It looks like this:

//= require modernizr
//= require backbone-min
//= require Backbone.ModelBinder
//= require my_single_page_app/my_app
//= require_tree ./my_single_page_app/templates
//= require_tree ./my_single_page_app/models
//= require_tree ./my_single_page_app/collections
//= require_tree ./my_single_page_app/views
//= require_tree ./my_single_page_app/utils

Then in my view that displays the single page app, I have these lines:

<% content_for :head do %>
  <%= javascript_include_tag "my_single_page_app/load_my_app.js" %>
<% end %>

Now my single page application is loaded into the page as my_single_page_app/load_my_app.js by the Asset Pipeline and it's only loaded when needed. And a big bonus is that I don't need to worry about any of the code that I wrote or libraries I want to use interfering with the rest of the site.


Comments

published by noreply@blogger.com (Greg Davidson) on 2012-05-17 12:08:00 in the "javascript" category

RequireJS is a very handy tool for loading files and modules in JavaScript. A short time ago I used it to add a feature to Whiskey Militia that promoted a new section of the site. By developing the feature as a RequireJS module, I was able to keep all of its JavaScript, HTML and CSS files neatly organized. Another benefit to this approach was the ability to turn the new feature "on" or "off" on the site by editing a single line of code. In this post I'll run through a similar example to demonstrate how you could use RequireJS to improve your next project.

File Structure

The following is the file structure I used for this project:
├── index.html
└── scripts
    ├── main.js
    ├── my
    │   ├── module.js
    │   ├── styles.css
    │   └── template.html
    ├── require-jquery.js
    ├── requirejs.mustache.js
    └── text.js

The dependencies included RequireJS bundled together with jQuery, mustache.js for templates and the RequireJS text plugin to include my HTML template file.

Configuration

RequireJS is included in the page with a script tag and the data-main attribute is used to specify additional files to load. In this case "scripts/main" tells RequireJS to load the main.js file that resides in the scripts directory. Require will load the specified files asynchronously. This is what index.html looks like:

<!DOCTYPE html>
<html>
<head>
<title>RequireJS Example</title>
</head>
<body>
<h1>RequireJS Example</h1>
<!-- This is a special version of jQuery with RequireJS built-in -->
<script data-main="scripts/main" src="scripts/require-jquery.js"></script>
</body>
</html>

I was a little skeptical of this approach working on older versions of Internet Explorer so I tested it quickly with IE6 and confirmed that it did indeed work just fine.

Creating a Module

With this in place, we can create our module. The module definition begins with an array of dependencies:

define([
  "require",
  "jquery",
  "requirejs.mustache",
  "text!my/template.html"
  ],

This module depends on require, jQuery, mustache, and our mustache template. Next is the function declaration where our module's code will live. The arguments specified allow us to map variable names to the dependencies listed earlier:

  function(require, $, mustache, html) { ... }

In this case we're mapping $ to jQuery, mustache to requirejs.mustache, and html to our template file.
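
Putting those two fragments together (with the module body elided), the overall shape of my/module.js is:

define([
  "require",
  "jquery",
  "requirejs.mustache",
  "text!my/template.html"
  ],
  function(require, $, mustache, html) {
    // The module's code, shown below, lives here.
});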

Inside the module we're using Require's .toUrl() function to grab a URL for our stylesheet. While it is possible to load CSS files asynchronously just like the other dependencies, there are some issues specific to CSS files. For our purposes it is safer to just add a <link> element to the document like so:

  var cssUrl = require.toUrl("./styles.css");
  $('head').append($('<link/>',
    { rel: "stylesheet", media: "all", type: "text/css", href: cssUrl }));

Next, we define a view with some data for our Mustache template and render it.

  var view = {
    products: [
      { name: "Apples", price: 1.29, unit: 'lb' },
      { name: "Oranges", price: 1.49, unit: 'lb'},
      { name: "Kiwis", price: 0.33, unit: 'each' }
    ],
    soldByPound: function() {
      return this.unit === 'lb';
    },
    soldByEach: function() {
      return this.unit === 'each';
    }
  };

  // render the Mustache template
  var output = mustache.render(html, view);

  // append to the HTML document
  $('body').append(output);
});

The Template

I really like this approach because it allows me to keep my HTML, CSS and JavaScript separate and also lets me write my templates in HTML instead of long, messy JavaScript strings. This is what our template looks like:

<ul class="hot-products">
{{#products}}
<li class="product">
{{name}}: ${{price}} {{#soldByEach}}each{{/soldByEach}}{{#soldByPound}}per lb{{/soldByPound}}
</li>
{{/products}}
</ul>

Including the Module

To include our new module in the page, we simply add it to our main.js file:

require(["jquery", "my/module"], function($, module) {
    // jQuery and my/module have been loaded.
    $(function() {

    });
});

When we view our page, we see that the template was rendered and appended to the document:
[Screenshot: the rendered product list appended to the page]

Optimizing Your Code With The r.js Optimizer

One disadvantage of keeping everything separate and using modules in this way is that it adds to the number of HTTP requests on the page. We can combat this by using the RequireJS Optimizer. The r.js script can be used as part of a build process and runs on both Node.js and Rhino. The Optimizer can minify some or all of your dependencies with UglifyJS or Google's Closure Compiler and will concatenate everything into a single JavaScript file to improve performance. By following the documentation I was able to create a simple build script for my project and build it with the following command:

node ../../r.js -o app.build.js

This executes the app.build.js script with Node. We can compare the development and built versions of the project with the Network tab in Chrome's excellent Developer Tools.
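
For reference, a minimal app.build.js for a layout like this one might look something like the following sketch (the paths are assumptions based on the file structure above; the full set of options is covered in the r.js documentation):

({
    appDir: "../",
    baseUrl: "scripts",
    dir: "../../webapp-build",
    modules: [
        { name: "main" }
    ]
})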

Development version:
[Screenshot: the network requests for the development version]

Optimized with the RequireJS r.js optimizer:
[Screenshot: the network requests for the optimized build]

It's great to be able to go from 8 HTTP requests and 360 KB in development mode to 4 HTTP requests and ~118 KB, just by running a simple command with Node! I hope this post has been helpful and that you'll check out RequireJS on your next project.



Comments

published by noreply@blogger.com (Brian Gadoury) on 2012-02-28 12:00:00 in the "javascript" category

I recently encountered a bug in a Rails 3 application that used a remote link_to tag to create a Facebook-style "delete comment" link using unobtrusive javascript. I had never worked with remote delete links like this before, so I figured I'd run through how I debugged the issue.

Here are the relevant parts of the models we're dealing with:

class StoredFile < ActiveRecord::Base
  has_many :comments, :dependent => :destroy
end

class Comment < ActiveRecord::Base
  belongs_to :user
  belongs_to :stored_file
end

Here's the partial that renders a single Comment (from the show.html.erb view for a StoredFile) along with a delete link if the current_user owns that single Comment:

  <%= comment.content %> -<em><%= comment.user.first_name %></em>
  <% if comment.user == current_user %>
    <%= link_to 'X',
          stored_file_comment_path(@stored_file, comment),
          :remote => true,
          :method => :delete,
          :class => 'delete-comment' %>
  <% end -%>

Here's a mockup of the view with 3 comments:



At first, the bug seemed to be that the "X" wasn't actually a link, and therefore didn't do anything. Clicking the "X" with Firebug enabled told a different story. There was a link there (hidden by sneaky CSS), and the Firebug console showed that it was sending the appropriate request to the correct URL: /stored_files/78/comments/25

The development.log file on the server corroborated the story and showed a successful delete:

  Started DELETE "/stored_files/78/comments/25"
   Processing by CommentsController#destroy as JS
   Parameters: {"stored_file_id"=>"78", "id"=>"25"}
   SQL (0.6ms)  DELETE FROM "comments" WHERE "comments"."id" = 25
  Completed 200 OK in 156ms

So far, so good: I knew that our client code was making the correct request and that the Rails app was handling it appropriately. The existing code "worked," but it still didn't provide any UI feedback to inform the user. I needed to write some jQuery to handle the successful (HTTP 200) server response to our unobtrusive javascript call.

Normally, when writing my own handler (e.g. to handle a button click) to initiate an Ajax call with jQuery, I'd use $.ajax or $.post and use its built-in success handler. Something like this:

  $.ajax({
    type: 'POST',
    url: 'my_url',
    data: { param_name1: 'param_value1' },
    success: function(data) {
      alert('Successfully did the needful!');
    }
  });

It turns out that you still define an event handler when handling server responses to unobtrusive javascript; it's just that the syntax is very different. I needed to bind the "ajax:success" event when it's fired by any of my comment delete links (where class="delete-comment", as specified in my link_to call).

  $(document).on('ajax:success', '.delete-comment', function() {
      // $(this).parent() is the div containing this "X" delete link
      $(this).parent().slideUp();
  });

(Note that I happen to be using the newer jQuery 1.7+ .on() method and argument list instead of the older, more common .live() style. In this case, they are functionally equivalent, but the methods that .on() replaces are deprecated in jQuery 1.7.) 
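
For comparison, a sketch of what the equivalent handler would have looked like with the older .live() syntax:

  $('.delete-comment').live('ajax:success', function() {
      // Same behavior: hide the comment's containing div on success.
      $(this).parent().slideUp();
  });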

Now, when the user clicks the "X" to delete one of their own comments, the successful unobtrusive javascript call is detected and the div containing that single comment is neatly hidden with the help of jQuery's slideUp() method. 



Much better! 




Comments