
published by noreply@blogger.com (Jon Jensen) on 2017-05-10 04:57:00 in the "ecommerce" category

We do a lot of ecommerce development at End Point. You know the usual flow as a customer: select products, add to the shopping cart, then check out. Checkout asks questions about the buyer, payment, and delivery, at least. Some online sales are for "soft goods", downloadable items that don't require a delivery address. But many online sales are still for physical goods to be delivered to an address. For that, a postal code or zip code is usually required.

No postal code?

I say usually because there are some countries that do not use postal codes at all. An ecommerce site that expects to ship products to buyers in one of those countries needs to allow for an empty postal code at checkout time. Otherwise, customers may leave thinking they aren't welcome there. The more creative among them will make up something to put in there, such as "00000" or "99999" or "NONE".

Someone has helpfully assembled and maintains a machine-readable (in Ruby, easily convertible to JSON or other formats) list of the countries that don't require a postal code. You may be surprised to see on the list such countries as Hong Kong, Ireland, Panama, Saudi Arabia, and South Africa. Some countries on the list actually do have postal codes but do not require them or commonly use them.
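For illustration, here is a minimal Perl sketch of how such a list might be consulted at checkout. The hash holds just a few ISO country codes for countries named above; a real implementation would convert the full maintained list rather than hard-coding a subset:

use strict;
use warnings;

# A few ISO 3166-1 alpha-2 codes from the list of countries that
# don't require a postal code (abbreviated for illustration)
my %no_postal_code = map { $_ => 1 } qw(HK IE PA SA ZA);

sub postal_code_required {
    my ($country_code) = @_;
    return !$no_postal_code{ uc $country_code };
}

print postal_code_required('HK') ? "required\n" : "optional\n";   # optional
print postal_code_required('US') ? "required\n" : "optional\n";   # required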

Do you really need the customer's address?

When selling both downloadable and shipped products, it would be nice to not bother asking the customer for an address at all. Unfortunately, even when there is no shipping address because there's nothing to ship, the billing address is still needed if payment is made by credit card through a normal credit card payment gateway, as opposed to PayPal, Amazon Pay, Venmo, Bitcoin, or other alternative payment methods.

The credit card Address Verification System (AVS) allows merchants to ask a credit card issuing bank whether the mailing address provided matches the address on file for that credit card. Normally only two parts are checked: (1) the numeric part of the street address, for example, "123" if "123 Main St." was provided; (2) the zip or postal code, normally only the first 5 digits for US zip codes. AVS often doesn't work at all with non-US banks and postal codes.
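Extracting the two pieces AVS actually checks is straightforward; here is a quick sketch with made-up example values:

my $street = '123 Main St.';
my $postal = '10001-6789';

# AVS typically compares only the leading digits of the street address
my ($street_number) = $street =~ /^\s*(\d+)/;

# ...and only the first 5 digits of a US zip code
my ($zip5) = $postal =~ /^(\d{5})/;

print "AVS will check: $street_number and $zip5\n";   # 123 and 10001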

Before sending the address to AVS, validating the format of postal codes is simple for many countries: 5 digits in the US (allowing an optional -nnnn for ZIP+4), and 4 or 5 digits in most other countries; see the Wikipedia List of postal codes in various countries for a high-level view. Canada is slightly more complicated: 6 characters total, alternating letters and digits, formally with a space in the middle, like K1A 0B1, as explained in Wikipedia's components of a Canadian postal code.

So most countries' postal codes can be validated in software with simple regular expressions, to catch typos such as transpositions and missing or extra characters.
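For example, format checks for US and Canadian postal codes might look like the sketch below. The Canadian pattern reflects the letters Canada Post actually uses: D, F, I, O, Q, and U never appear, and W and Z never appear in the first position:

use strict;
use warnings;

sub valid_us_zip {
    my ($zip) = @_;
    return $zip =~ /\A\d{5}(?:-\d{4})?\z/;    # 5 digits, optional ZIP+4
}

sub valid_ca_postal_code {
    my ($pc) = @_;
    # Letter-digit-letter, optional space, digit-letter-digit
    return uc($pc) =~
        /\A[ABCEGHJ-NPRSTVXY]\d[ABCEGHJ-NPRSTV-Z] ?\d[ABCEGHJ-NPRSTV-Z]\d\z/;
}

print valid_us_zip('10001-6789')      ? "ok\n" : "bad\n";   # ok
print valid_ca_postal_code('K1A 0B1') ? "ok\n" : "bad\n";   # ok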

UK postcodes

The most complicated postal codes I have worked with are the United Kingdom's, because they can be from 5 to 7 characters, with an unpredictable mix of letters and numbers, normally formatted with a space in the middle. The benefit they bring is that they encode a lot of detail about the address, and it's possible to catch transposed-character errors that would be missed in a purely numeric postal code. The Wikipedia article Postcodes in the United Kingdom has the gory details.

It is common to use a regular expression to validate UK postcodes in software, and many of these regexes are to some degree wrong. Most let through many invalid postcodes, and some disallow valid codes.

We recently had a client get a customer report of a valid UK postcode being rejected during checkout on their ecommerce site. The validation code was using a regex that is widely copied in software in the wild:

[A-PR-UWYZ0-9][A-HK-Y0-9][AEHMNPRTVXY0-9]?[ABEHMNPRVWXY0-9]?[0-9][ABD-HJLN-UW-Z]{2}

(This example removes support for the odd exception GIR 0AA for simplicity's sake.)

The customer's valid postcode that doesn't pass that test was W1F 0DP, in London, which the Royal Mail website confirms is valid. The problem is that the regex above doesn't allow for F in the third position, as that was not valid at the time the regex was written.

This is one problem with being too strict in validations of this sort: The rules change over time, usually to allow things that once were not allowed. Reusable, maintained software libraries that specialize in UK postal codes can keep up, but there is always lag time between when updates are released and when they're incorporated into production software. And copied or customized regexes will likely stay the way they are until someone runs into a problem.

The ecommerce site in question is running on the Interchange ecommerce platform, which is based on Perl, so the most natural place to look for an updated validation routine is on CPAN, the Perl network of open source library code. There we find the nice module Geo::UK::Postcode which has a more current validation routine and a nice interface. It also has a function to format a UK postcode in the canonical way, capitalized (easy) and with the space in the correct place (less easy).

It also presents us with a new decision: Should we use the basic "valid" test, or the "strict" one? This is where it gets a little trickier. The "valid" check uses a regex approach and will still let through some invalid postcodes, because it doesn't know what all the current valid delivery destinations are. The "strict" check instead uses a comprehensive list of all the "outcode" data, which, as you can see if you look at that source code, is extensive.

The bulkiness of that list, and its short shelf life (the likelihood that it will become outdated and reject a future valid postcode), make strict validation checks like this of questionable value for basic ecommerce needs. Often it is better to let a few invalid postcodes through now so that future valid ones will also be allowed.
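In that lax spirit, a loose shape check plus canonical formatting covers most ecommerce needs. Because the inward code (the part after the space) is always a digit followed by two letters, the space can be normalized without knowing every valid outcode. A rough sketch, not Geo::UK::Postcode's actual implementation:

sub format_uk_postcode {
    my ($pc) = @_;
    $pc = uc $pc;
    $pc =~ s/\s+//g;               # strip all whitespace
    # The inward code is always 3 characters: digit, letter, letter
    $pc =~ s/(\d[A-Z]{2})\z/ $1/;
    return $pc;
}

sub plausible_uk_postcode {
    my ($pc) = @_;
    # Loose shape check only: a 2-4 character outcode starting with a
    # letter, then the digit-letter-letter inward code
    return format_uk_postcode($pc) =~ /\A[A-Z][A-Z0-9]{1,3} \d[A-Z]{2}\z/;
}

print format_uk_postcode('w1f0dp'), "\n";                    # W1F 0DP
print plausible_uk_postcode('W1F 0DP') ? "ok\n" : "bad\n";   # ok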

The ecommerce site I mentioned also does in-browser validation via JavaScript before ever submitting the order to the server. Loading a huge list of valid outcodes would waste a lot of bandwidth and slow down checkout loading, especially on mobile devices. So a more lax regex check there is a good choice.

When Christmas comes

There's no Christmas gift of a single UK postal code validation solution for all needs, but there are some fun trivia notes in the Wikipedia page covering Non-geographic postal codes:

A fictional address is used by UK Royal Mail for letters to Santa Claus:

Santa's Grotto
Reindeerland XM4 5HQ

Previously, the postcode SAN TA1 was used.

In Finland the special postal code 99999 is for Korvatunturi, the place where Santa Claus (Joulupukki in Finnish) is said to live, although mail is delivered to the Santa Claus Village in Rovaniemi.

In Canada the amount of mail sent to Santa Claus increased every Christmas, up to the point that Canada Post decided to start an official Santa Claus letter-response program in 1983. Approximately one million letters come in to Santa Claus each Christmas, including from outside of Canada, and they are answered in the same languages in which they are written. Canada Post introduced a special address for mail to Santa Claus, complete with its own postal code:

SANTA CLAUS
NORTH POLE H0H 0H0

In Belgium bpost sends a small present to children who have written a letter to Sinterklaas. They can use the non-geographic postal code 0612, which refers to the date Sinterklaas is celebrated (6 December), although a fictional town, street and house number are also used. In Dutch, the address is:

Sinterklaas
Spanjestraat 1
0612 Hemel

This translates as "1 Spain Street, 0612 Heaven". In French, the street is called "Paradise Street":

Saint-Nicolas
Rue du Paradis 1
0612 Ciel

That UK postcode for Santa doesn't validate in some of the regexes, but the simpler Finnish, Canadian, and Belgian ones do, so if you want to order something online for Santa, you may want to choose one of those countries for delivery. :)


Comments

published by noreply@blogger.com (Josh Lavin) on 2016-08-22 11:30:00 in the "ecommerce" category

It's a good idea for ecommerce stores to regularly contact their customers. This not only reminds customers that your business exists, but also allows the sharing of new products and resources that can enrich the lives of your customers and clients. One of the easiest ways to stay in touch is by using an email newsletter service, such as MailChimp.

MailChimp offers the regular suite of email newsletter services: lists, campaigns, and reports — but in addition, they allow an ecommerce store to integrate sales data back into MailChimp. When you have detailed shopping statistics for each subscriber, it opens new possibilities for customized marketing campaigns.

Endless possibilities

For example, imagine you have an email mailing list with 1,000 recipients. Instead of mailing the same generic newsletter to each subscriber, what if you could segment the list to identify your 100 best customers, and email them a special campaign?

Additional ideas could include:

  • Reach out to inactive subscribers, offering a coupon
  • Invite your best customers to a secret sale
  • Re-engage customers who placed items in their cart, but left without purchasing
  • Offer complementary products to purchasers of Product X

Automatic marketing

Once your store has sales data for subscribers, and you've decided on the campaigns you want to run with this data, the next step is to automate the process. This is where MailChimp's Automation feature comes in. Spend some time up-front to craft the automated campaigns, then sit back and let MailChimp run them for you; day in, day out.

Steps to implement

There are several off-the-shelf integrations for ecommerce stores, including Magento and BigCommerce.

Users of Perl and Interchange can use our newly released toolset: the Mail::Chimp3 CPAN module and an integration for Interchange 5.

Contact us today for expert help integrating one of these solutions with your ecommerce store.

Go beyond the simple newsletter

Most businesses already have an email newsletter. Hopefully, you are sending regular email campaigns with it. This is a great first step. The next step is to segment your email list and reach out to each segment with information relevant to it. Not only can this increase your sales, but it also respects your clients' and customers' time and preferences. It's a win-win for all.

Additional resource: MailChimp for Online Sellers


Comments

published by noreply@blogger.com (Jon Jensen) on 2015-07-24 05:23:00 in the "ecommerce" category

The big picture

Computer security is a moving target, and during the past few years it's been moving faster than ever.

In the e-commerce world, the PCI Security Standards Council sets the rules for what merchants and vendors must do to have what they consider to be a sufficiently secure environment to handle cardholder data such as credit card numbers, expiration dates, and card security codes.

PCI DSS 3.1, released on 15 April 2015, puts us all on notice that TLS 1.0 is considered unfit to use for e-commerce website encryption (HTTPS), and will be disallowed soon. The new rules specify that new software implementations must not use TLS versions prior to 1.1. Existing implementations must require TLS 1.1 or 1.2 no later than 30 June 2016.

They provide some guidance on Migrating from SSL and early TLS and explain what is expected in more detail.

Long ago we were required to disable SSL 2, and last year we were expected to disable SSL 3, the predecessor to TLS 1.0. That turned out to not be particularly hard or cause too many problems, because almost all systems that supported SSL 3 also supported TLS 1.0.

This time we are not so lucky. Many clients (such as browsers) and servers did not support TLS beyond version 1.0 until fairly recently. That means much more work is involved in meeting these new requirements than just changing some settings and restarting servers.

Almost every client (browser) and server that supports TLS 1.1 also supports TLS 1.2, and almost everything that doesn't support TLS 1.2 doesn't support TLS 1.1 either. So to keep things simpler here I'll just talk about TLS 1.0 vs. TLS 1.2 below, and TLS 1.1 can be assumed to apply as well where TLS 1.2 is mentioned.
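A quick way to check which versions a given server accepts is to attempt handshakes pinned to one protocol version at a time. Here is a sketch using Perl's IO::Socket::SSL; the host name is a placeholder:

use strict;
use warnings;
use IO::Socket::SSL;    # exports $SSL_ERROR

my $host = 'www.example.com';    # placeholder host

for my $version (qw(TLSv1 TLSv1_1 TLSv1_2)) {
    # Pin the handshake to a single protocol version
    my $sock = IO::Socket::SSL->new(
        PeerAddr    => "$host:443",
        SSL_version => $version,
        Timeout     => 10,
    );
    printf "%-8s %s\n", $version, $sock ? 'accepted' : "rejected ($SSL_ERROR)";
}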

At End Point we deploy, support, and host e-commerce sites for many customers, so I'll talk about the server side of this first. Note that servers can act as both server and client in TLS connections, since servers often make outgoing HTTPS connections as well as accepting incoming requests. Let's review the situation with each of the major Linux server operating systems.

Debian

Debian 8 is the current version, and supports TLS 1.2. It is scheduled to be supported until April 2020.

Debian 7 supports TLS 1.2, and has planned support until May 2018.

Debian's support lifetime has historically depended on how quickly future releases come, but recently the project began to offer long-term support (LTS) for Debian 6, which was supposed to be at end of life, so it will be supported until February 2016. However, Debian 6 supports only TLS 1.0.

Ubuntu

Ubuntu's long-term support (LTS) server versions are supported for 5 years. Currently supported versions 12.04 and 14.04 both handle TLS 1.2.

Some sites are still using Ubuntu 10.04, which supports only TLS 1.0, but its support ended in April 2015, so it should not be used any longer in any case.

Red Hat Enterprise Linux (RHEL) and CentOS

Red Hat and CentOS are "enterprise" operating systems with a very long support lifetime of 10 years. Because that is so long, the oldest supported versions may become practically unusable due to changes in the world such as the deprecation of TLS 1.0.

RHEL/CentOS 7 is the current version, supported until June 2024. It supports TLS 1.2.

RHEL/CentOS 6 is supported until November 2020. It is mostly ok for TLS 1.2. One exception is that the bundled version of curl doesn't support TLS > 1.0 for some reason, so if you have applications making curl client calls to other systems, they may break without workarounds.

RHEL/CentOS 5 is the oldest version still supported, until March 2017, and it is very widely used, but it does not support TLS > 1.0.

Old server remediation

If you're on an older server that doesn't support TLS 1.2, the best thing to do is upgrade or migrate to a newer operating system, as soon as possible.

The common versions of OpenSSL that don't support TLS 1.2 also do not support Server Name Indication (SNI), used for hosting multiple HTTPS sites on the same IP address. SNI is now becoming more commonly used, since the thing holding back acceptance was old browsers on Windows XP that didn't support it, and those are now mostly dead. You can control whether you need SNI on the server side, avoiding it by continuing to get a separate IP address for each HTTPS site you host. But when you are a client of someone else's service that requires SNI, you'll wish you had it.

So migrate. That's easier said than done, of course. Moving to a new OS version involves a lot of new system library versions, language versions, web server version, etc. Some things aren't compatible. It takes work and time. That's life, so accept it and move ahead. Advocate it, schedule it, do it. But in the meantime, if you must cope with old servers, there are some not entirely terrible options.

You could use plain HTTP on a local private network to talk to a newer server running stunnel or an nginx proxy to do the TLS layer, or use a VPN if you have no private network.

You can put a CDN in front of your site, which will certainly support TLS 1.2, and covers the path between the end user and the CDN, at least.

You can build your own versions of OpenSSL, any libraries that link to OpenSSL such as curl or wget, Apache or nginx, etc. This is tempting but is a terrible option, because you are almost certain to not update this hand-crafted stack often enough in the future to protect against new vulnerabilities in it. Sidestepping the operating system's native package management for core infrastructure software like this is usually a mistake.

You could avoid that problem by using someone else's backported parallel-install packages of all that, if you can find some, and if you think they're trustworthy, and if they're going to maintain them so you can get later updates. I'm not familiar with anyone doing this, but it may be out there and could be hired for the right ongoing price.

But the best bet is to start planning your upgrade or migration as soon as possible.

Browser support for TLS 1.2

Of course the other half of the connection is the client, primarily end-users' web browsers. On Wikipedia is a very detailed table showing various browsers' support of various features, broken down by version, here: TLS support history of web browsers. My summary follows:

Google Chrome and Mozilla Firefox have been automatically updating themselves for a long time, so unless you or your system administrator have disabled the auto-update, they will work with TLS 1.2.

Internet Explorer 8, 9, and 10 support TLS 1.2, but it is disabled by default. Not until IE 11 can TLS 1.2 be counted on to work.

Apple Safari 7, 8, and 9 for Mac OS X support TLS 1.2.

The built-in browser on Android < 4.4 doesn't support TLS > 1.0, and Android 4.4 has TLS > 1.0 disabled by default, which is the same thing for most users. So anyone with Android < 5.0 will not be able to connect to your site unless they're using a separate and newer mobile browser such as Chrome or Firefox.

Browser support for TLS 1.0

I have not heard of any timelines being announced for browsers to disable TLS 1.0 yet. I suspect that's because there are still so many servers that only support TLS 1.0. But before too long we may start to see servers catch up and then I expect browsers will eventually disable TLS 1.0.

Other clients

There are too many non-browser clients to list here. We've already mentioned curl; there is also wget, and the web client libraries in Perl, Python, Ruby, and Java. PHP uses libcurl. Node.js and Go installations are likely separate from the operating system and newer than it, so they may be more current. At any rate, some of these clients will be old and won't support TLS 1.2, so when other sites stop allowing TLS 1.0 connections, whatever you were talking to them for will stop working.

PCI DSS will require client applications to stop using TLS 1.0 also, which may mean that applications need to be configured to require TLS 1.2 for outgoing HTTPS connections.
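In Perl, for example, you can pin LWP's outgoing HTTPS connections to TLS 1.2 by passing SSL options through to the underlying SSL module (a sketch, assuming LWP 6+ with IO::Socket::SSL):

use strict;
use warnings;
use LWP::UserAgent;

# ssl_opts is passed through to IO::Socket::SSL
my $ua = LWP::UserAgent->new(
    ssl_opts => { SSL_version => 'TLSv1_2' },
);

my $response = $ua->get('https://www.example.com/');
print $response->status_line, "\n";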

Summary

Your systems need to stop supporting TLS 1.0 by June 2016 at the latest. Start planning the migration now! We are available to help our current and new clients test, assess needs, and plan upgrades and migrations.



Comments

published by noreply@blogger.com (Steph Skardal) on 2015-03-18 19:33:00 in the "ecommerce" category

One of my recent projects for Paper Source has been to introduce advanced product filtering (or faceted filtering). Paper Source runs on Interchange, a Perl-based open source ecommerce platform that End Point has been involved with (as core developers & maintainers) for many years.

In the case of Paper Source, personalized products such as wedding invitations and save the dates have advanced filtering to filter by print method, number of photos, style, etc. Advanced product filtering is a very common feature in ecommerce systems with a large number of products that allows a user to narrow down a set of products to meet their needs. Advanced product filtering is not unlike faceted filtering offered by many search engines, which similarly allows a user to narrow down products based on specific tags or facets (e.g. see many Amazon filters on the left column). In the case of Paper Source, I wrote the filtering code layered on top of the current navigation. Below I'll go through some of the details with small code examples.

Data Model

The best place to start is the data model. A simplified existing data model that represents product taxonomy might look like the following:


Basic data model linking categories to products.

The existing data model links products to categories via a many-to-many relationship. This is fairly common in the ecommerce space: when a customer views a specific category, often identified by URL slug or id, the products tied to that category are displayed.

And here's where we go with the filtering:


Data model with filtering layered on top of existing category to product relationship.

Some notes on the above filtering data model:

  • filters contains a list of all the filters. Examples of entries in this table include "Style", "Color", "Size"
  • filters_categories links filters to categories, to allow finite control over which filters show on which category pages, in what order. For example, this table would link category "Shirts" to filters "Style", "Color", "Size" and the preferred sort order of those filters.
  • filter_options includes all the options for a specific filter. Examples here for various options include "Large", "Medium", and "Small", all linked to the "Size" filter.
  • filter_options_products links filter options to products in a many-to-many relationship.

Filter Options Exclusivity

One thing to consider before coding is the business rules pertaining to filter option exclusivity. If a product is assigned to one filter option, can it also have another filter option for that same filter type? That is, if a product is marked as blue, can it also be marked as red? When a user filters by color, can they filter to select products that are both blue and red? Or, if a product is blue, can it not have any other filter options for that filter? In the case of Paper Source product filtering, we went with the former, where filter options are not exclusive of each other.

A real-life example of filter non-exclusivity is how Paper Source filters wedding invitations. Products are filtered by print method and style. Because some products have multiple print methods and styles, non-exclusivity allows a user to narrow down to a specific combination of filter options, e.g. a wedding invitation that is both tagged as "foil & embossed" and "vintage".

URL Structure

Another thing to determine before coding is the URL structure. The URL must communicate the current category of products and current filter options (or what I refer to as active/activated filters).

I designed the code to recognize one component of the URL path as the category slug, and the remaining paths to map to the various filter option url slugs. For example, a URL for the category of shirts is "/shirts", a URL for large shirts "/shirts/large", and the URL for large blue shirts "/shirts/blue/large". The code not only has to accept this format, but it also must create consistently ordered URLs, meaning, we don't want both "/shirts/blue/large" and "/shirts/large/blue" (representing the same content) to be generated by the code. Here's what simplified pseudocode might look like to retrieve the category and set the activated filters:

# $request_url is e.g. /shirts/blue/large; the leading "/" makes
# split return an empty first field, so grep it out
my @url_paths = grep { length } split('/', $request_url);
my $category_slug = shift(@url_paths);
# find Category where slug = $category_slug
# redirect if not found
# @url_paths now holds the active filter option slugs

Applying the Filtering

Next, we need a couple things to happen:

  • If there is an activated filter for any filter option, apply it.
  • Generate URLs to toggle filter options.

First, all products are retrieved in this category with a query like this:

SELECT products.*,
  COALESCE((SELECT GROUP_CONCAT(fo.url_slug) FROM filter_options_products fop
    JOIN filter_options fo ON fo.id = fop.filter_option_id
    WHERE fop.product_id = products.id), '') AS filters
FROM products
JOIN categories_products cp ON cp.product_id = products.id
JOIN categories c ON c.id = cp.category_id
WHERE c.url_slug = ?

Next is where the code gets pretty hairy, so instead I'll try to explain with pseudocode:

#@filters = all applicable filters for current category
# loop through @filters
  # loop through filter options for this filter
  # filter product results to include any selected filter options for this filter
  # if there are no filter options selected for this filter, include all products
  # build the url for each filter option, to toggle the filter option (on or off)
# loop through @filters (yes, a second time)
  # loop through filter options for this filter
  # count remaining products for each filter option, if none, set filter option to inactive
  # build the final url for each filter option, based on all filters turned on and off

My pseudocode shows that I iterate through the filters twice, first to apply the filter and determine the base URL to toggle each filter option, and second to count the remaining filtered products and build the URL to toggle each filter. The output of this code is a) a set of filtered products and b) a set of ordered filters and filter options with corresponding counts and links to toggle on or off.

Here's a more specific example:

  • Let's say we have a set of shirts, with the following filters & options: Style (Long Sleeve, Short Sleeve), Color (Red, Blue), Size (Large, Medium, Small).
  • A URL request comes in for /shirts/blue/large
  • The code recognizes this is the shirts category and retrieves all shirts.
  • First, we look at the style filter. No style filter is active in this request, so to toggle these filters on, the activation URLs must include "longsleeve" and "shortsleeve". No products are filtered out here.
  • Next, we look at the color filter. The blue filter option is active because it is present in the URL. In this first loop, products not tagged as blue are removed from the set of products. To toggle the red option on, the activation URL must include "red", and to toggle the blue filter off, the URL must not include "blue", which is set here.
  • Next, we look at the size filter. Products not tagged as large are removed from the set of products. Again, the large filter has to be toggled off in the URL because it is active, and the medium and small filter need to be toggled on.
  • In the second pass through filters, the remaining items applicable to each filter option are counted, for long sleeve, short sleeve, red, medium, and small options. And the URLs are built to turn on and off all filter options (e.g. applying the longsleeve filter will yield the URL "/shirts/longsleeve/blue/large", applying the red filter will yield the URL "/shirts/blue/red/large", turning off the blue filter will yield the URL "/shirts/large").

The important thing to note here is the double pass through filters is required to build non-duplicate URLs and to determine the product count after all filter options have been applied. This isn't simple logic, and of course changing the business rules like exclusivity will change the loop behavior and URL logic.
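To make the URL-generation half more concrete, here is a small hypothetical sketch (not the actual Paper Source code) of a toggle-URL builder. It guarantees consistently ordered URLs by sorting the active filter option slugs into one canonical sequence before joining them:

use strict;
use warnings;

# Canonical ordering of all known filter option slugs, e.g. built from
# the filter and filter option sort orders stored in the database
my @canonical = qw(longsleeve shortsleeve blue red large medium small);
my %order = map { $canonical[$_] => $_ } 0 .. $#canonical;

sub toggle_url {
    my ($category_slug, $active, $toggled) = @_;
    my %on = map { $_ => 1 } @$active;
    $on{$toggled} = !$on{$toggled};    # flip the one option on or off
    my @slugs = sort { $order{$a} <=> $order{$b} } grep { $on{$_} } keys %on;
    return join '/', '', $category_slug, @slugs;
}

print toggle_url('shirts', ['blue', 'large'], 'red'), "\n";   # /shirts/blue/red/large
print toggle_url('shirts', ['blue', 'large'], 'blue'), "\n";  # /shirts/large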

Alternative Approaches

Finally, a few notes regarding alternative approaches here:

  • Rather than going with the blacklist approach described here, one could go with a whitelist approach where a set of products is built up based on the filter options set.
  • Filtering could be done entirely via AJAX, in which case URL structure may not be a concern.
  • If the data is simple enough, products could potentially be filtered in the database query itself. In our case, this wasn't feasible since we generate product filter option details from a number of product attributes, not just what is shown in the simplified product filter data model above.

Comments

published by noreply@blogger.com (Mark Johnson) on 2015-02-09 16:58:00 in the "ecommerce" category

It's important to understand both how loops work in Interchange and the (very) fundamental differences between interpolating Interchange tag language (ITL) and the special loop tags (typically referred to as [PREFIX-*] in the literature). Absent this sometimes arcane knowledge, it is very easy to get stuck with inefficient loops even with relatively small loop sets. I'll discuss both the function of loops and interpolation differences between the tag types while working through a [query] example. While all loop tags--[item-list], [loop], [search-list], and [query]--process similarly, it is to [query] where most complex loops will gravitate over time (to optimize the initiation phase of entering the loop) and where we have the most flexibility for coming up with alternative strategies to mitigate sluggish loop-processing.

Loop Processing

All loop tags are container tags in Interchange, meaning they have an open and close tag, and in between is the body. Only inside this body is it valid to define [PREFIX-*] tags (notable exception of [PREFIX-quote] for the sql arg of [query]). This is because the [PREFIX-*] tags are not true ITL. They are tightly coupled with the structure of the underlying rows of data and they are processed by distinct, optimized regular expressions serially. Outside the context of the row data from a result set, they are meaningless.

Moreover, the body of a loop tag is slurped into a scalar variable (as all bodies of container tags are handled via the ITL parser) and for each row in the record set of the loop, the contents are acted upon according to the [PREFIX-*] tags defined within the body. The first important distinction to recognize here is, the per-row action on this scalar is limited to only the [PREFIX-*] tags. The action occurring at loop time ignores any embedded ITL.

At the end of each row's processing, the copy of the body tied to that one row is then concatenated to the results of all previous rows thus processed. For a loop with N rows (assuming no suppression by [if-PREFIX-*] conditionals) that means every instance of ITL originally placed into the loop body is now present N times in the fully assembled body string. Simple example:

[loop list='1 2 3']
[tmp junk][loop-code][/tmp]
[if scratch junk == 2]
I declare [loop-code] to be special!
[else]
Meh. [loop-code] is ordinary.
[/else]
[/if]
[/loop]

Once this result set with N=3 is processed, but before Interchange returns the results, the assembled return looks like the following string:

[tmp junk]1[/tmp]
[if scratch junk == 2]
I declare 1 to be special!
[else]
Meh. 1 is ordinary.
[/else]
[/if]

[tmp junk]2[/tmp]
[if scratch junk == 2]
I declare 2 to be special!
[else]
Meh. 2 is ordinary.
[/else]
[/if]

[tmp junk]3[/tmp]
[if scratch junk == 2]
I declare 3 to be special!
[else]
Meh. 3 is ordinary.
[/else]
[/if]

Some important observations:

  • It doesn't take much ITL to turn a loop body into a monster interpolation process. One must consider the complexity of the ITL in the body by a factor of the number of rows (total, or the "ml" matchlimit value).

  • ITL does nothing to short-circuit action of the [PREFIX-*] tags. Having [PREFIX-param], [PREFIX-calc], etc. inside an ITL [if] means all those loop tags parse regardless of the truth of the if condition.

ITL vs. Loop Tags

    ITL maps to routines, both core and user-defined, determined at compile time. They are processed in order of discovery within the string handed to the ::interpolate_html() routine and have varied and complex attributes that must be resolved for each individual tag. Further, for many (if not most) tags, the return value is itself passed through a new call to ::interpolate_html(), acting on all embedded tags, in an action referred to as reparse. There is, relatively speaking, a good deal of overhead in processing through ::interpolate_html(), particularly with reparse potentially spawning off a great many more ::interpolate_html() calls.

    Loop tags, by contrast, map to a pre-compiled set of regular expressions. In contrast to crawling the string and acting upon the tags in the order of discovery, each regex in turn is applied globally to the string. The size of the string is limited to the exact size of the single loop body, and there is no analogue to ITL's reparse. Further, given this processing pattern, a careful observer might have noted that the order of operations can impact the structure. Specifically, tags processed earlier cannot depend on tags processed later. E.g., [PREFIX-param] processes ahead of [PREFIX-pos], and so:

    [if-PREFIX-pos 2 eq [PREFIX-param bar]]
    

    will work, but:

    [if-PREFIX-param bar eq [PREFIX-pos 2]]
    

    will not. While the above is a somewhat contrived example, the impact of loop tag processing can be seen more easily in an example using [PREFIX-next]:

    [PREFIX-next][PREFIX-param baz][/PREFIX-next]
    Code I only want to run when baz is false, like this [PREFIX-exec foo][/PREFIX-exec] call
    

    Because [PREFIX-next] is the absolute last loop tag to run, every other loop tag in the block is run before the next condition is checked. All [PREFIX-next] does is suppress the resulting body from the return, unlike Perl's next, which short-circuits the remaining code in the loop block.

An Optimization Example

    As long as you're familiar with the idiosyncrasies of [PREFIX-*] tags, you should make every effort to use them instead of ITL, because they are substantially lighter weight and faster to process. A classic case that can yield remarkable performance gains is to directly swap an embedded [perl] or [calc] block with an equivalent [PREFIX-calc] block.

    Let's take a typical query with little consideration given to whether we use loop tags or ITL, not unlike many I've seen where the resource has just become unusably slow. This code originally was developed processing 50 records per page view, but the team using it has requested over time to increase that count.

    [query
        list=1
        ml=500 [comment]Ouch! That's a big N[/comment]
        sql="
            SELECT *
            FROM transactions
            WHERE status = 'pending'
            ORDER BY order_date DESC
        "
    ]
    Order [sql-param order_number]
    [if global DEVELOPMENT]
        Show [sql-exec stats_crunch][sql-param stats][/sql-exec], only of interest to developers
    [/if]
    Date: [convert-date format="%b %d, %Y at %T"][sql-param order_date][/convert-date]
    [if cgi show_inventory]
    Inv:
    [if cgi show_inventory eq all]
    * Shipped: [either][sql-param is_shipped][or]pending[/either]
    * Count: [inventory type=shipped sku=[sql-param sku]]
    [/if]
    * On Hand: [inventory type=onhand sku=[sql-param sku]]
    * Sold: [inventory type=sold sku=[sql-param sku]]
    * Shipping: [inventory type=shipping sku=[sql-param sku]]
    [/if]
    Order details:
        <a href="[area
                    href=order_view
                    form="
                        order=[sql-param order_number]
                        show_status=[either][cgi show_status][or]active[/either]
                        show_inventory=[cgi show_inventory]
                    "
                ]">View [sql-param order_number]</a>
    [/query]
    

    Considering this block out of context, it doesn't seem all that unreasonable. However, let's look at some of the pieces individually and see what can be done.

  • We use [if] in 3 different circumstances in the block. However, the values they test are static. They don't change on any iteration. (We are excluding the potential of any other ITL present in the block from changing their values behind the scenes.)

  • [convert-date] may be convenient, but it is only one of a number of ways to address date formatting. Our database itself almost certainly has date-formatting routines, but one of the benefits of [convert-date] is you could have a mixed format underlying the data and it can make sense out of the date to some degree. So perhaps that's why the developer has used [convert-date] here.

  • Good chance that stats_crunch() is pretty complicated and that's why the developer wrote a catalog or global subroutine to handle it. Since we only want to see it in the development environment, it'd be nice if it only ran when it was needed. Right now, because of ITL happening on reparse, stats_crunch() fires for every row even if we have no intention of using its output.

  • We need that link to view our order, but on reparse it means ::interpolate_html() has to parse 500 [area] tags, along with [either] and [cgi] x 500. All of these tags are lightweight, but the sheer number of parses is really going to catch up to us here.

    Our goal here is to replace any ITL we can with an equivalent use of a loop tag or, absent the ability to remove ITL logically, to wrap that ITL into a subroutine that can itself be called in loop context with [PREFIX-exec]. The first thing I want to address is those [if] and [either] tags, the lowest-hanging fruit:

    [query
        list=1
        ml=500
        sql="
            SELECT *,
                '[if global DEVELOPMENT]1[/if]' AS is_development,
                [sql-quote][cgi show_inventory][/sql-quote] AS show_inventory,
                COALESCE(is_shipped,'pending') AS show_inventory_shipped
            FROM transactions
            WHERE status = 'pending'
            ORDER BY order_date DESC
        "
    ]
    Order [sql-param order_number]
    [if-sql-param is_development]
        Show [sql-exec stats_crunch][sql-param stats][/sql-exec], only of interest to developers
    [/if-sql-param]
    Date: [convert-date format="%b %d, %Y at %T"][sql-param order_date][/convert-date]
    [if-sql-param show_inventory]
    Inv:
    [if-sql-param show_inventory eq all]
    * Shipped: [sql-param show_inventory_shipped]
    * Count: [inventory type=shipped sku=[sql-param sku]]
    [/if-sql-param]
    * On Hand: [inventory type=onhand sku=[sql-param sku]]
    * Sold: [inventory type=sold sku=[sql-param sku]]
    * Shipping: [inventory type=shipping sku=[sql-param sku]]
    [/if-sql-param]
    Order details:
        <a href="[area
                    href=order_view
                    form="
                        order=[sql-param order_number]
                        show_status=[either][cgi show_status][or]active[/either]
                        show_inventory=[cgi show_inventory]
                    "
                ]">View [sql-param order_number]</a>
    [/query]
    

    By moving those evaluations into the SELECT list of the query, we've reduced the number of interpolations to arrive at those static values to 1 or, in the case of the [either] tag, 0 as we've offloaded the calculation entirely to the database. If is_shipped could be something perly false but not null, we would have to adjust our field accordingly, but in either case could still be easily managed as a database calculation. Moreover, by swapping in [if-sql-param is_development] for [if global DEVELOPMENT], we have kept stats_crunch() from running at all when in the production environment.

    Next, we'll consider [convert-date]:

    Date: [convert-date format="%b %d, %Y at %T"][sql-param order_date][/convert-date]

    My first attempt would be to address this similarly to the [if] and [either] conditions, and try to render the formatted date from a database function as an aliased field. However, let's assume the underlying structure of the data varies and that's not easily accomplished, and we still want [convert-date]. Luckily, Interchange supports that same tag as a filter, and [PREFIX-filter] is a loop tag:

    Date: [sql-filter convert_date."%b %d, %Y at %T"][sql-param order_date][/sql-filter]

    [PREFIX-filter] is very handy to keep in mind as many transformation tags have a filter wrapper for them. E.g., [currency] -> [PREFIX-filter currency]. And if the one you're looking at doesn't, you can build your own, easily.
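    As a sketch of what building your own could look like: Interchange filters can be defined with CodeDef in catalog.cfg. Treat the exact syntax below as an assumption to verify against the Interchange documentation, and the filter itself is a trivial made-up example:

    CodeDef reverse_it Filter
    CodeDef reverse_it Routine <<EOR
    sub {
        my $val = shift;
        # Trivial example filter: reverse the value
        return scalar reverse $val;
    }
    EOR

    It could then be invoked in loop context as [sql-filter reverse_it][sql-param foo][/sql-filter].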

    Now to look at that [inventory] tag. The most direct approach assumes that the code inside [inventory] can be run in Safe, which often it can even if [inventory] is global. However, if [inventory] does run-time un-Safe things (such as creating an object) then it may not be possible. In such a case, we would want to create a global sub, like our hypothetical stats_crunch(), and invoke it via [PREFIX-exec]. However, let us assume we can safely (as it were) invoke it via the $Tag object to demonstrate another potent loop option: [PREFIX-sub].

    [if-sql-param show_inventory]
    [sql-sub show_inventory]
        my $arg = shift;
        return $Tag->inventory({ type => $arg, sku => $Row->{sku} });
    [/sql-sub]
    Inv:
    [if-sql-param show_inventory eq all]
    * Shipped: [sql-param show_inventory_shipped]
    * Count: [sql-exec show_inventory]shipped[/sql-exec]
    [/if-sql-param]
    * On Hand: [sql-exec show_inventory]on_hand[/sql-exec]
    * Sold: [sql-exec show_inventory]sold[/sql-exec]
    * Shipping: [sql-exec show_inventory]shipping[/sql-exec]
    [/if-sql-param]
    

    Let's go over what this gives us:

  • [PREFIX-sub] creates an in-line catalog sub that is compiled at the start of processing, before looping actually begins. As such, the [PREFIX-sub] definitions can occur anywhere within the loop body and are then removed from the body to be parsed.

  • The body of the [PREFIX-exec] is passed to the sub as the first argument. We use that here for our static values to the "type" arg. If we also wanted to access [sql-param sku] from the call, we would have to include that in the body and set up a parser to extract it out of the one (and only) arg we can pass in. Instead, we can reference the $Row hash within the sub body just as we can do when using a [PREFIX-calc], with one minor adjustment to our [query] tag--we have to indicate to [query] we are operating on a row-hash basis instead of the default row-array basis. We do that by adding the hashref arg to the list:
    [query
        list=1
        ml=500
        hashref=1
    

  • We still have access to the full functionality of [inventory] but we've removed the impact of having to parse that tag 2000 times (in the worst-case scenario) if left as ITL in the query body. If we run into Safe issues, that same sub body can either be created as a pre-compiled global sub or, if available, we can set our catalog AllowGlobal in which case catalog subs will no longer run under Safe.

  • Finally, all we have left to address is [area] and its args which themselves have ITL. I will leverage [PREFIX-sub] again as an easy way to manage the issue:

    [sql-sub area_order_view]
        my $show_status = $CGI->{show_status} || 'active';
        return $Tag->area({
            href => 'order_view',
            form => "order=$Row->{order_number}n"
                  . "show_status=$show_statusn"
                  . "show_inventory=$CGI->{show_inventory}",
        });
    [/sql-sub]
    Order details:
        <a href="[sql-exec area_order_view][/sql-exec]">View [sql-param order_number]</a>
    

    By packaging all of [area]'s requirements into the sub body, I can address all of the ITL at once.

    So now, let's put together the entire [query] rewrite to see the final product:

    [query
        list=1
        ml=500
        hashref=1
        sql="
            SELECT *,
                '[if global DEVELOPMENT]1[/if]' AS is_development,
                [sql-quote][cgi show_inventory][/sql-quote] AS show_inventory,
                COALESCE(is_shipped,'pending') AS show_inventory_shipped
            FROM transactions
            WHERE status = 'pending'
            ORDER BY order_date DESC
        "
    ]
    Order [sql-param order_number]
    [if-sql-param is_development]
        Show [sql-exec stats_crunch][sql-param stats][/sql-exec], only of interest to developers
    [/if-sql-param]
    Date: [sql-filter convert_date."%b %d, %Y at %T"][sql-param order_date][/sql-filter]
    [if-sql-param show_inventory]
    [sql-sub show_inventory]
        my $arg = shift;
        return $Tag->inventory({ type => $arg, sku => $Row->{sku} });
    [/sql-sub]
    Inv:
    [if-sql-param show_inventory eq all]
    * Shipped: [sql-param show_inventory_shipped]
    * Count: [sql-exec show_inventory]shipped[/sql-exec]
    [/if-sql-param]
    * On Hand: [sql-exec show_inventory]on_hand[/sql-exec]
    * Sold: [sql-exec show_inventory]sold[/sql-exec]
    * Shipping: [sql-exec show_inventory]shipping[/sql-exec]
    [/if-sql-param]
    [sql-sub area_order_view]
        my $show_status = $CGI->{show_status} || 'active';
        return $Tag->area({
            href => 'order_view',
            form => "order=$Row->{order_number}n"
                  . "show_status=$show_statusn"
                  . "show_inventory=$CGI->{show_inventory}",
        });
    [/sql-sub]
    Order details:
        <a href="[sql-exec area_order_view][/sql-exec]">View [sql-param order_number]</a>
    [/query]
    

    Voila! Our new query body is functionally identical to the original body, though admittedly a little more complicated to set up. However, the trade-off in efficiency is likely to be substantial.

    I recently worked on a refactor for a client that was overall very similar to the above example, with a desired N value of 250. The code prior to refactoring took ~70s to complete. Once we had completed the refactor using the same tools as I've identified here, we brought down processing time to just under 3s, losing no functionality.

    Time taken optimizing Interchange loops will almost always pay dividends.


    Comments

    published by noreply@blogger.com (Matt Galvin) on 2015-01-14 14:53:00 in the "ecommerce" category

    Hello again all. I like to monitor the orders and exceptions of the Spree sites I work on to ensure everything is working as intended. One morning I noticed an unusual error: "invalid value for Integer(): "09"" in Spree::Checkout/update on a Spree 2.1.x site.

    The Issue

    Given that this is a Spree-powered e-commerce site, a customer's inability to check out is quite alarming. In the backtrace I could see that a string of "09" was causing an invalid value for an integer. Why hadn't I seen this on every order, in that case?

    I went into the browser and completed some test orders. The bug seemed to affect only credit cards with a leading "0" in the expiration month, and then only certain expiration months. I returned to the backtrace and saw this error was occurring with Active Merchant. So, Spree was passing Active Merchant a string while Active Merchant was expecting an integer.

    Armed with a clearer understanding of the problem, I did some Googling and came across this post, which describes the source of the issue as the behavior of sprintf, described below. This topic was also discussed in the Ruby Forum.

    Octal Numbers

    As per Daniel Martin on the aforementioned post:

    • sprintf("%d",'08') ==> ArgumentError
    • sprintf("%d",'8') ==> "8"
    • sprintf("%d",'08'.to_i) ==> "8"
    • sprintf("%f",'08') ==> "8.000000"

    As you can see, sprintf cannot convert '08' or '09' to a decimal. Matthias Reitlinger notes,

    "%d tells sprintf to expect an Integer as the corresponding argument. Being given a String instead it tries to convert it by calling Kernel#Integer"

    In the same post, we can review some documentation of Kernel#Integer



    We can see here that if the argument provided is a string (and it is, since that is what Spree is sending), a leading "0" will be honored as an octal prefix. Again, we know

    sprintf("%d",'01') => "1" | sprintf("%d", 01) => "1"
    sprintf("%d",'02') => "2" | sprintf("%d", 02) => "2"
    sprintf("%d",'03') => "3" | sprintf("%d", 03) => "3"
    sprintf("%d",'04') => "4" | sprintf("%d", 04) => "4"
    sprintf("%d",'05') => "5" | sprintf("%d", 05) => "5"
    sprintf("%d",'06') => "6" | sprintf("%d", 06) => "6"
    sprintf("%d",'07') => "7" | sprintf("%d", 07) => "7"
    sprintf("%d",'08') => error | sprintf("%d", 08) => error
    sprintf("%d",'09') => error | sprintf("%d", 09) => error

    By prepending the "0" to the numbers, they are being interpreted as octal. Wikipedia defines octal numbers as

    "The octal numeral system, or oct for short, is the base-8 number system, and uses the digits 0 to 7. Octal numerals can be made from binary numerals by grouping consecutive binary digits into groups of three (starting from the right)."

    So, 08 and 09 are not valid octal numbers.

    Solution

    This is why this checkout error did not occur on every order whose payment expiration month had a leading "0": only August (08) and September (09) were susceptible, because the leading "0" marks the value as octal, and 08 and 09 are not valid octal numbers. So, I made Spree send integers (sprintf("%d", 8) #=> "8" and sprintf("%d", 9) #=> "9") so that the leading "0" would not get sent, and nothing would be parsed as octal. I created an app/models/spree/credit_card_decorator.rb file with the contents

    Spree::CreditCard.class_eval do
      def expiry=(expiry)
        if expiry.present?
          # Split "MM / YY" (or "MM/YYYY") into month and year
          self[:month], self[:year] = expiry.delete(' ').split('/')
          # Expand a two-digit year to four digits
          self[:year] = "20" + self[:year] if self[:year].length == 2
          # Store integers so "08" and "09" are never passed along as
          # strings for sprintf to misread as octal
          self[:year] = self[:year].to_i
          self[:month] = self[:month].to_i
        end
      end
    end
    

    After adding this, I tested it in the browser and there were no more checkout errors! I hope you've found this interesting and helpful, thanks for reading!


    Comments

    published by noreply@blogger.com (Mike Farmer) on 2012-06-15 15:25:00 in the "ecommerce" category

    World of Powersports is a family of websites that runs on Interchange. Carl Bailey describes how, a few years after working on their initial website, World of Powersports came to End Point to develop a new website called Dealer Orders, which has been very successful. This has allowed End Point the opportunity to work on several other related websites for the client.


    Since then, we have worked on several other sites including:

    All of the websites pull from a single database that is fed by various APIs from parts vendors such as Honda, Suzuki, and Polaris. This updates the inventory counts and other related information for all of the sites. It also interacts with online sites such as eBay, Google Base, and Amazon for checking part availability and pricing.

    Implementing the interactions between these different entities has provided End Point with much of the challenge of these sites but continues to provide the client and customers with great value.


    Comments

    published by noreply@blogger.com (Jon Jensen) on 2011-06-17 18:20:00 in the "ecommerce" category

    Last night concluded Internet Retailer Conference & Exhibition 2011 in San Diego. We had a lot of good conversations with attendees and other exhibitors at the End Point booth, and our Liquid Galaxy with Google Earth was a great draw for visitors:

    The majority of exhibitors at the show were offering software as a service or productized ecommerce services. A couple of our favorite small SaaS companies, both for their knowledgeable and friendly technical staff, and for their challenging some of the less-beloved incumbent giants in the space, were Olark, offering a SaaS live chat service, and SearchSpring, with their SaaS faceted search service. We look forward to trying out their services.

    Some of the more dazzling software demonstrations at the show were:

    • Total Immersion, an "augmented reality" solution. Their TryLive Eyewear demo had us looking into their webcam and trying out different eyeglass frames that were overlaid on our video image in real time.
    • Styku, a company offering 3-D virtual fitting room software. They had an amazing video demo of mannequins modeling different clothes, and it's all customizable per visitor who wants to use his/her measurements to be fitted online. It's easy to see that this kind of thing has a lot of potential for online clothing sales, and could greatly reduce returns and exchanges due to bad sizing.

    E-commerce can seem to make location less relevant, and at End Point our workforce is distributed throughout the U.S., adding to the effect of "dislocation". But at Internet Retailer, people's location was a common topic. I'm located in Teton Valley, Idaho, just a few miles west of Wyoming, and it was nice for me to meet some of my geographic neighbors in Idaho, Utah, and Colorado.

    We had good conversations with several companies in the Salt Lake City area: Logica transportation analytics, Molding Box fulfillment outsourcing, Doba drop ship, and AvantLink affiliate technology, headed up by Scott Kalbach who we worked with at Backcountry.com years ago. I also met attendees and exhibitors with offices and staff closer to home in Idaho Falls and Boise, Idaho.

    The show ended last night at 7:00 pm and we broke down the booth and packed up our Liquid Galaxy for shipping back to New York City, a somewhat labor-intensive task.

    It's been a busy show, staffing our booth from 9:00 am till 7:00 pm each day, so we didn't have much time to enjoy beautiful San Diego or get much sleep or exercise. On the way back to the hotel we made a quick stop at a playground to unwind.


    Comments

    published by steph@endpoint.com (Steph Skardal) on 2011-06-15 19:12:00 in the "ecommerce" category

    We are in full force with a booth at Internet Retailer Conference 2011 in San Diego. The exhibit hall opened yesterday afternoon after the last few stragglers flew in from North Carolina (me) and Idaho (Jon) to join Ben, Rick, Carl, and Ron.

    We've had a steady flow of booth visitors interested in hearing about our core ecommerce services and Liquid Galaxy. We've also heard from a few companies interested in partnering, which is a nice way to learn about the latest popular technologies in ecommerce, such as mobile and tablet opportunities, live chat integration, real-time user interactivity ecommerce features, and shipping integration and analytics.

    Stop by if you're here and interested in hearing more about End Point's open source consulting and development services!

    Here at IRCE 2011!
    Ben navigates our Liquid Galaxy display.
    Rick navigates through San Diego before a team dinner.
    Ben & Carl pose in front of our Liquid Galaxy display.

    Comments

    published by steph@endpoint.com (Steph Skardal) on 2011-05-18 21:05:00 in the "ecommerce" category

    With the inclusion of the Scss gem in Rails 3.1, RailsConf is a nice time to get a refresher on Sass/Scss functionality. Sass defines itself as syntactically awesome stylesheets: a CSS meta-language built to provide more powerful functionality to manipulate website appearances with efficiency and elegance. Note that Sass has two syntaxes; the examples presented in this article use the newer Scss syntax. Around the time of RailsConf two years ago, Sass was included in Spree, an open-source Ruby on Rails ecommerce framework that End Point supports. At the time, I was skeptical about Sass's inclusion in Spree because it wasn't being leveraged to its full potential. I had hopes of taking advantage of Sass, but a few months later it was removed from the core. Since then, I haven't worked with Sass on other projects, but hope to do so moving forward after being reminded of its features and of the fact that it will be included in Rails 3.1 as a default. I attended Chris Eppstein's talk on Sass, and below I explain a few features related to real-life use cases of CSS manipulation.

    Variables

    While working on a new feature, your client says, "I want this to be the same red that we use all over the site." This is exactly what I experienced while working on Paper Source to build out new Quickview/popup functionality shown in the image below.

    Sass variables can be defined and then included in various selectors. Need to change the styling for all those selectors that use this variable? Just change the variable instead of grepping through the code for the color and different variations of the color. Here's an example of what variable definition and use might look like:

    $red_error: #CC0000;
    .error {
        color: $red_error;
    }
    .qty_alert {
        color: $red_error;
    }
    

    Nesting

    Instead of writing code with repeated selectors, take full advantage of nesting in Sass. Below is a good example of a repeated styling selector I use for Paper Source's custom wedding invitation feature:

    .editor_box {}
      .editor_box label { font-size: 11px; display: block; padding: 2px 0px; font-style: italic; }
      .editor_box label a { font-size: 9px; color: #82B3CC; }
      .editor_box .fonts { float: left; width: 176px; margin: 0px 5px 5px 0px; }
        .editor_box .fonts select { width: 176px; margin: 0px; }
    

    In Sass, this would look like:

    .editor_box {
        label {
            font-size: 11px;
            display: block;
            padding: 2px 0px;
            font-style: italic;
            a { 
                font-size: 9px;
                color: #82B3CC;
            }
        }
        .fonts { 
            float: left;
            width: 176px;
            margin: 0px 5px 5px 0px;
            select {
                width: 176px; margin: 0px;
            }
        }
    }
    

    While the nested version doesn't produce fewer lines of code, it does a better job of following the DRY principle and is more readable in Sass form.

    Mixins

    Next, while building out a new feature, a client says, "I want you to build a product thumbnail page that has equivalent styling to our other thumbnail pages." See the thumbnails in the image below that share similar styling on multiple product navigation pages throughout Paper Source's site.

    Mixins are reusable sections of CSS that can be included in other selectors to reduce duplicate property and value definitions. A simple example, shown below, includes the product thumbnail mixin in two distinct classes.

    @mixin product_thumbnail {
        a { 
            color: $blue;
            text-decoration: none;
            &:hover {
                color: $red;
            }
        }
        img { padding-top: 10px; }
    }
    .category_products {
        @include product_thumbnail;
    }
    .special_products {
        @include product_thumbnail;
    }
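
    To illustrate what the mixin buys us, here is roughly what that compiles to, assuming for illustration that $blue is #99CCCC and $red is #CC0000 (neither is defined in the snippet above); the duplication moves into the generated CSS while the Sass source stays DRY:

    /* approximate compiled output */
    .category_products a { color: #99CCCC; text-decoration: none; }
    .category_products a:hover { color: #CC0000; }
    .category_products img { padding-top: 10px; }
    .special_products a { color: #99CCCC; text-decoration: none; }
    .special_products a:hover { color: #CC0000; }
    .special_products img { padding-top: 10px; }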
    

    Transformations

    Next, while building out another(!) new feature, the client comments, "I want this to be a color close to the other one, but a little different. Can you try what you think looks good?" The image below is a screen snippet of a jQuery-based accordion with different colors representing open and closed sections.

    Transformations are functions applied to values, such as the color manipulations saturate, desaturate, lighten, darken, and greyscale. In the jQuery accordion scenario, the Sass might look like this, yielding a lightened version of blue on inactive accordion regions:

    $blue: #99CCCC;
    .active {
        background-color: $blue;
    }
    .inactive {
        background-color: lighten($blue, 25%);
    }
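
    At compile time Sass resolves the lighten() call to a concrete color by adding 25 percentage points to the HSL lightness of $blue (70% becomes 95%), so the generated CSS is approximately:

    .active {
        background-color: #99CCCC;
    }
    .inactive {
        background-color: #eef7f7; /* approximate result of lighten(#99CCCC, 25%) */
    }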
    

    Importing Stylesheets

    From a performance perspective, it's ideal to have a single compiled CSS file included on each page, but one extremely large stylesheet is hard to maintain. The CSS directive @import can pull in additional stylesheets, but each imported file costs an additional HTTP request. Sass's approach lets you include stylesheets containing rules and other Sass functionality, and a single file is created at compile time. In the Paper Source example, we could do the following to include styles for thumbnails on various pages:

    _product_thumbnails.scss
    @mixin product_thumbnail {
        a { 
            color: $blue;
            text-decoration: none;
            &:hover { color: $red; }
        }
        img { padding-top: 10px; }
    }
    
    category.scss
    @import "product_thumbnails"
    .category_products {
        @include product_thumbnail;
    }
    
    landing_page.scss
    @import "product_thumbnails";
    .special_products {
        @include product_thumbnail;
    }
    

    Check out the Sass documentation to read more about Sass and its features, including the abstractions, calculations, and selector inheritance covered by Chris.

    CSS Sprites with Compass

    Compass is an additional gem that can be installed on top of Sass. In my opinion, the best Compass feature mentioned in the talk was automated CSS spriting. CSS spriting is a technique where many images are combined and served as a single image, with CSS used to show only the relevant portion of that image for each DOM element. I've built a few different scripts in Ruby and Perl using ImageMagick that automatically build sprites, and was pleasantly surprised to hear that there is a feature in Compass that handles this. With Compass installed, CSS sprites in Sass might look like the code below, where the wrapper images are automagically compiled into a single sprited image and CSS rules are defined:

    @import "wrapper_elements/*.png";
    $wrapper-sprite-dimensions: true;
    @include all-wrapper-sprites;
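
    The generated stylesheet then looks something like the following sketch; the class names come from the individual PNG file names (header.png and footer.png here are hypothetical), the sprite file name with its hash is generated by Compass, and the width/height rules come from the $wrapper-sprite-dimensions setting:

    /* hypothetical generated output */
    .wrapper-header, .wrapper-footer {
        background: url('/images/wrapper-s1a2b3c4d.png') no-repeat;
    }
    .wrapper-header { background-position: 0 0; width: 960px; height: 40px; }
    .wrapper-footer { background-position: 0 -40px; width: 960px; height: 60px; }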
    

    Conclusion

    Admittedly, the examples shown in this blog article come from a site that runs on Perl-based Interchange, but I used these examples because I can distinctly remember each of these use cases. It might not be quite as easy to include Sass here with Interchange as it will be in Rails 3.1, where Scss/Sass is included as a new default.


    Comments

    published by steph@endpoint.com (Steph Skardal) on 2011-03-25 14:29:00 in the "ecommerce" category

    It's pretty easy to use Google Analytics to examine referral traffic, including using custom referral tracking codes. Here's how:

    Once you have referrers or affiliates that plan to link to your site, you can ask that those affiliates append a unique tracking ID to the end of the URL. For example, I'll use the following referral IDs to track metrics from Milton's and Roger's websites to End Point's site.

    • http://www.endpoint.com/?ref=milton
    • http://www.endpoint.com/?ref=roger

    After you've seen some traffic build up from those affiliates, you can create two custom Advanced Segments in Google Analytics:



    Follow the link to create an Advanced Segment.
    The New Advanced Segment page.


    Once you've landed on the New Advanced Segment page, you create a custom segment by dragging "Landing Page" from the "Content" tab to define the criteria, and set it to "contains" your unique referral identifier.



    Roger's Referral Traffic
    Milton's Referral Traffic


    That's it! You now have custom Advanced Segments defined to track referral or affiliate data. You can select the Advanced Segments from any metrics page:


    All traffic compared to referral traffic from Milton and Roger's sites.


    Traffic from Milton's website only.

    You can also examine conversion driven by an affiliate. For example, how does conversion driven by one affiliate compare to the entire site's conversion? On our site, conversion is measured by contact form submission — but on ecommerce sites, you can measure conversion in the form of purchases attributed to different affiliates.


    Roger's Referral conversion versus conversion of the entire site. Roger's doing pretty well!

    One potential disadvantage of this method of affiliate tracking is that you are creating duplicate content in Google by introducing additional URLs. You may want to use the rel="canonical" tag on the homepage to minimize duplicate content in search engine indexes. A very similar alternative that bypasses adding a referral ID would be to create custom segments defined by Source and Referral Path; however, the method described in this article is valuable for sites that may have a redirect between the referral site and the landing URL (for example, http://www.miltonsblog.com/ links to http://www.endpointcorp.com/?ref=milton, which redirects to http://www.endpoint.com/?ref=milton and retains the referral information).
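
    For example, a canonical link element in the homepage's <head> tells search engines to treat the referral URLs as the same page:

    <link rel="canonical" href="http://www.endpoint.com/" />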

    Google Analytics is a great tool for measuring metrics such as the ones shown in this post, and it's fairly standard for all of our clients to request Google Analytics installation. Google announced last week that a new Google Analytics platform will be rolled out soon, which includes an update to multiple segments that will allow us to examine traffic from multiple affiliates without showing "All Visits".

    Boys
    Note that the data presented in this article is fictitious.
    I don't think Milton and Roger (shown above) will be linking to End Point's site any time soon!


    Comments

    published by steph@endpoint.com (Steph Skardal) on 2011-03-15 20:38:00 in the "ecommerce" category

    One of the more challenging yet rewarding projects Richard and I have worked on over the past year has been an ecommerce product personalization project with Paper Source. I haven't blogged about it much, but wanted to write about the technical challenges of the project, in addition to shamelessly self-promoting (a bit).


    Personalize this and many other products at Paper Source.

    Paper Source runs on Interchange and relies heavily on JavaScript and jQuery on the customer-facing side of the site. The "personalization" project allows you to personalize Paper Source products like wedding invitations, holiday cards, stationery, and business cards, and displays dynamic product images with personalized user data on the fly using Adobe's Scene7. The image requests are made to an external location, so our application does not need to run Java to render these dynamic personalized product images.

    Technical Challenge #1: Complex Data Model

    To say the data model is complex is a bit of an understatement. Here's a "blurry" vision of the data model for the tables driving this project. The number of tables in this project has grown to exceed the number of Interchange core tables.


    A snapshot of the data model driving the personalization project functionality.

    To give you an idea of what business needs the data model attempts to meet, here are just a few snapshots and corresponding explanations:

    At the highest level, there are individual products that can be personalized.
    Each product may or may not have different color options, or what we refer to as colorways. The card shown here has several options: gravel, moss, peacock, night, chocolate, and black. Clicking on each colorway here will update the image on the fly.
    In addition to colorways, each product will have corresponding paper types and print methods. For example, each product may be printed on white or cream paper and each product may have a "digital printing" option or a letterpress option. Colorways shown above apply differently to digital printing and letterpress options. For example, letterpress colorways are typically a subset of digital printing colorways.
    Each card has a set of input fields with corresponding fonts, sizes, and ink colors. The inputs can be single-line fields or multi-line text boxes. Each card has its own specific set of data to control the input fields: one card may have 4 sections with 1 text field in each section, while another card may have 6 sections with 1 text field in some sections and 2 text fields in others. In most cases, inks are limited by card colorway. For example, black ink is only offered on the black colorway card, and blue ink is only offered on the blue colorway card.
    Each card also has a set of related items assigned to it. When users toggle between card colorways, the related item thumbnails update to match the detail option. This allows users to see an entire suite of matching cards: a pink wedding invite, RSVP, thank you, and stationery or a blue business card, matching letterhead stationery, and writing paper shown here.
    In addition to the parent product itself, envelopes are often tied to products, and in most cases there are default envelope colors tied to products. For example, if a user selects a blue colorway product, the blue envelope shows as the default on the envelopes page.
    In addition to managing personalization of the parent products, the functionality also meets the business need of offering customization of return address printing on envelopes tied to products. For example, here is personalized return address printing tied to my wedding invitation.

    Technical Challenge #2: Third Party Integration with Limited Documentation

    There are always complexities that come up when integrating a third-party service into a web application. In this project, the image requests made to Scene7 have a fairly complex structure. For dynamic invitations, cards, and stationery, examples of image requests include:

    https://a248.e.akamai.net/f/248/9086/10h/origin-d7.scene7.com/is/image/?layer=0&anchor=-50,-50&size=2000,2000&layer=1&src=is{PaperSource/A7_env_back_closed_sfwhite}&anchor=2900,-395&rotate=-90&op_usm=1,1,8,0&resMode=sharp&qlt=95,1&pos=100,50&size=1800,1800&layer=3&src=fxg{PaperSource/W136-122208301?&imageres=150}&anchor=0,0&op_usm=1,1,1,0&pos=500,315&size=1732,1732&effect=0&resMode=sharp&fmt=jpg
    https://a248.e.akamai.net/f/248/9086/10h/origin-d7.scene7.com/is/image/?layer=0&anchor=-50,50&size=2000,2000&layer=1&src=is{PaperSource/4bar_env_back_closed_fig}&anchor=0,0&pos=115,375&size=1800,1800&layer=2&src=fxg{PaperSource/4barV_white_background_key}&anchor=0,0&rotate=-90&pos=250,1757&size=1733,1733&layer=3&src=fxg{PaperSource/ST57-2011579203301?&$color_fig=true&$color_black=false&$color_chartreuse=false&$color_espresso=false&$color_moss=false&$color_peacock=false&$ink_0=780032&$ink_2=780032&$ink_1=780032&imageres=150}&anchor=0,0&op_usm=2,1,1,0&pos=255,513&size=1721,1721&resMode=sharp&effect=0&fmt=jpg
    https://a248.e.akamai.net/f/248/9086/10h/origin-d7.scene7.com/is/image/?layer=0&anchor=-50,-50&size=2000,2000&layer=1&src=is{PaperSource/4bar_env_back_closed_night}&anchor=0,0&pos=115,375&&size=1800,1800&layer=2&src=fxg{PaperSource/4bar_white_sm}&anchor=0,0&rotate=-90&pos=250,1757&size=1733,1733&layer=3&src=fxg{PaperSource/W139-201203301?}&anchor=0,0&op_usm=2,1,1,0&pos=255,513&size=1721,1721&resMode=sharp&effect=0&fmt=jpg

    Each argument is significant to the dynamic image: background envelope color, card colorway, ink color, card positioning, envelope positioning, image quality, image format, and paper color are just a few of the factors controlled by the image arguments. Part of the challenge was the lack of documentation available while building the logic to render these dynamic images.

    Conclusion

    As I mentioned above, this has been a challenging and rewarding project. Paper Source has sold personalizable products for a couple of years now, and they continue to move their old personalized products over to this new functionality, including many stationery products moved just yesterday. Below are several examples of Paper Source products that I created with the new personalization functionality.


    Comments

    published by steph@endpoint.com (Steph Skardal) on 2011-01-22 21:50:00 in the "ecommerce" category

    Last week, I wrote about creating a very simple ecommerce application on Ruby with Sinatra. This week, we continue on the yellow brick road of ecommerce development on Ruby with Sinatra.

    yellow brick road
    A yellow brick road.

    Part 2: Basic Admin Authentication

    After you've got a basic application running which accepts payment for a single product, as described in the previous tutorial, the next step is to add admin authorization to allow lookup of completed orders. I found several great resources for this, as well as a few Sinatra extensions that may be useful. For the first increment of implementation, I followed the instructions here, which use HTTP Basic authentication. The resulting code can be viewed here. I also introduce subclassing of Sinatra::Base, which allows us to keep our files a bit more modular and organized.
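
    As a minimal sketch of the idea (the helper name matches the one used in the admin routes later in this post; the hardcoded credentials are for illustration only):

    # HTTP Basic authentication in Sinatra via Rack::Auth::Basic::Request;
    # a real application would not hardcode credentials like this.
    helpers do
      def require_administrative_privileges
        return if authorized?
        response['WWW-Authenticate'] = %(Basic realm="Admin")
        halt 401, "Not authorized\n"
      end

      def authorized?
        auth = Rack::Auth::Basic::Request.new(request.env)
        auth.provided? && auth.basic? && auth.credentials == ['admin', 'secret']
      end
    end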

    And if we add an "/admin" method to display orders, we can see our completed orders:


    Completed orders.

    Part 3: Introducing Products

    Now, let's imagine an ecommerce store with different products! Whoa! For this increment, let's limit each order to one product. A migration and model definition are created to introduce products, each of which has a name, description, and price. For this increment, product images match the product name and live in ~/public/images. The orders table is modified to contain a reference to products.id. The Order model is updated to belong_to :product. Finally, the frontend authorization method is modified to use order.product.price in the transaction.

    # Products ActiveRecord migration
    require 'lib/model/product'
    class CreateProducts < ActiveRecord::Migration
      def self.up
        create_table :products do |t|
          t.string :name,
            :null => false
          t.decimal :price,
            :null => false
          t.string :description,
            :null => false
        end
      end
    
      def self.down
        drop_table :products
      end
    end
    
     
    # Products model class
    class Product < ActiveRecord::Base
      validates_presence_of :name
      validates_presence_of :price
      validates_numericality_of :price
      validates_presence_of :description
    
      has_many :orders
    end
    
    # Order migration update
    class CreateOrders < ActiveRecord::Migration
      def self.up
        create_table :orders do |t|
    +      t.references :product,
    +        :null => false
        end
      end
    
      def self.down
        drop_table :orders
      end
    end
    
    # Order model changes
    class Order < ActiveRecord::Base
    ...
    + validates_presence_of :product_id
    +
    +  belongs_to :product
    end
    
    # in main checkout action
    # Authorization amount update
    - response = gateway.authorize(1000,
    -   credit_card)
    + response = gateway.authorize((order.product.price*100).to_i,
    +   credit_card)
    


    Our new data model.

    And let's use Sinatra's simple and powerful routing to build resource management functionality that allows our admin to list, create, update, and delete items, in this case orders and products. Here's the Sinatra code that accomplishes this basic resource management:

    # List items
    app.get '/admin/:type' do |type|
      require_administrative_privileges
      content_type :json
    
      begin
        klass = type.camelize.constantize
        objects = klass.all
        status 200
        objects.to_json
      rescue Exception => e
        halt 500, [e.message].to_json 
      end
    end
    
    # Delete item
    app.delete '/admin/:type/:id' do |type, id|
      require_administrative_privileges
      content_type :json
    
      begin
        klass = type.camelize.constantize
        instance = klass.find(id)
        if instance.destroy
          status 200
        else
          status 400
          errors = instance.errors.full_messages
          [errors.first].to_json
        end
      rescue Exception => e
        halt 500, [e.message].to_json
      end
    end
    
    # Create new item
    app.post '/admin/:type/new' do |type|
      require_administrative_privileges
      content_type :json
      input = json_to_hash(request.body.read.to_s)
     
      begin
        klass = type.camelize.constantize
        instance = klass.new(input)
        if instance.save
          status 200
          instance.to_json
        else
          status 400
          errors = instance.errors.full_messages
          [errors.first].to_json
        end
      rescue Exception => e
        halt 500, [e.message].to_json
      end
    end
    
    # Edit item
    app.post '/admin/:type/:id' do |type, id|
      require_administrative_privileges
      content_type :json
      input = json_to_hash(request.body.read.to_s)
      
      begin
        klass = type.camelize.constantize
        instance = klass.find(id)
        if instance.update_attributes(input)
          status 200
          instance.to_json
        else
          status 400
          errors = instance.errors.full_messages
          [errors.first].to_json
        end
      rescue Exception => e
        halt 500, [e.message].to_json
      end
    end
    

    Note that in the code shown above, the request includes the resource type (product or order in this application) and, in some cases, the id of the item. The constantize method is used to get the class constant, and ActiveRecord methods are used to retrieve the instance and edit, create, or delete it. This routing now allows us to easily manage additional resources with minimal changes to our server-side code.
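
    As a quick illustration of that lookup, with ActiveSupport loaded (it comes along with ActiveRecord):

    require 'active_support/core_ext/string'

    "product".camelize.constantize  # => Product
    "order".camelize.constantize    # => Order
    # so GET /admin/product lists Product.all, DELETE /admin/order/5
    # destroys Order.find(5), and so on

    One caveat worth noting: in a real application, the :type parameter should be checked against a whitelist before being constantized, so that arbitrary class names can't be looked up from a URL.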

    Next, I use jQuery to call these methods via AJAX, also in such a way that it will be easy to manage new resources with minimal client-side code. That base admin code can be found here. With this jQuery admin base, we define for each resource an empty version, content for displaying that resource, and content for editing that resource. Examples are shown below:

    functions.product = {
      edit: function(product) {
        return '<h4>Editing Product: '
          + product.id
          + '</h4>'
          + '<p><label for="name">Name</label>'
          + '<input type="text" name="name" value="'
          + product.name
          + '" /></p>'
          + '<p><label for="price">Price</label>'
          + '<input type="text" name="price" value="'
          + parseFloat(product.price).toFixed(2)
          + '" /></p>'
          + '<p><label for="description">Description</label>'
          + '<textarea name="description">'
          + product.description
          + '</textarea></p>';
      },
      content: function(product) {
        var inner_html = '<h4>Product: '
          + product.id
          + '</h4>'
          + 'Name: '
          + product.name
          + '<br />Price: $'
          + parseFloat(product.price).toFixed(2)
          + '<br />Description: '
          + product.description
          + '<br />';
        return inner_html;
      },
      empty: function() {
        return { name: '',
          price: 0, 
          description: '' };  
      }
    };
    


    Product listing.

    Creating a new product.

    Editing an existing product.

    functions.order = {
      edit: function(order) {
        return '<b>Order: '
          + order.id
          + '</b><br />'
          + '<input name="email" value="'
          + order.email
          + '" />'
          + ' – '
          ...
          //Order editing is limited
      },
      content: function(order) {
        return '<b>Order: '
          + order.id
          + '</b><br />'
          + order.email
          + ' – '
          + order.phone
          + '<br />'
          ...
      },
      empty: function() {
        return { 
          email: '',
          phone: '',
          ...
        };  
      }
    };
    


    For this example, we limit order editing to email and phone number changes.

    With a final touch of frontend JavaScript and CSS changes, the following screenshots show the two customer-facing pages from our example store. Like the application described in the previous article, this ecommerce application is still fairly lightweight, but it now allows us to sell several products and manage our resources via the admin panel. Stay tuned for the next increment!

    The cupcake images shown in this article are under a Creative Commons license and can be found here, here, and here. The code shown in this article can be found here (branches part2 and part3).


    Comments

    published by noreply@blogger.com (Ben Goldstein) on 2011-01-06 17:49:00 in the "ecommerce" category

    Happy New Year! And what would a new year be without a new year bug bite? This year we had one where figuring out the species wasn't easy.

    On January 2nd one of our ecommerce clients reported that starting with the new year a number of customers weren't able to complete their web orders because of credit card security code failures. Looking in the Interchange server error logs we indeed found a significant spike in the number of CVV2 code verification failures (Payflow Pro gateway error code "114") starting January 1st.

    We hadn't made any programming or configuration changes on the system in recent days. We double-checked to make sure: nope, no code changes. So it had to be a New Year's bug, presumably something with the Payflow Pro gateway or banks further upstream. We checked error logs for other customers to see if they were being similarly impacted, but they weren't. Our client contacted PayPal (the vendor for Payflow Pro), who reported there were no problems with their system; according to them, the failures must be actual card failures or a problem with the website. We further checked our code for anything we could possibly have done to cause this, double-checking our Git repository (which showed no recent changes) and reexamining our checkout code for possible year-based logic flaws.

    Our client's top-notch customer service group got on the phone with a customer who'd gotten a security code failure and got PayPal tech support on another line. The customer service rep tried to place the customer's order on the website using the customer's credit card info and once again got the CVV2 error. She then did the credit card transaction using the swipe machine in the office, and lo and behold the order went through! What was going on??!

    It turned out that despite the Payflow Pro gateway returning CVV2 verification errors, what was really happening was that the credit card expiration year was coming into the Payflow Pro gateway as "2012"—not as "2011" as entered into the checkout form. We knew all along that the 114 error code responses could be misleading, because payment gateway error codes are notorious this way. (Payment gateways blame the banks, saying they can only pass along what the banks give them. Some banks' credit card validations don't actually even care about the years being correct, just that they not be in the past; but I digress...)

    Previously we'd reviewed the checkout pages to verify that the year dropdown menus weren't off, but nevertheless it very much sounded like this rather stupid problem could very well be the culprit. So we checked, and checked again. What we found is that sometimes the year dropdown menu on the checkout form was mangled such that the values associated with the displayed years were YYYY+1.
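
    In other words, affected users effectively saw markup like this hypothetical sketch (the field name is illustrative): the customer saw and selected "2011", but the browser submitted "2012".

    <select name="exp_year">
      <option value="2012">2011</option>
      <option value="2013">2012</option>
      <option value="2014">2013</option>
    </select>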

    The oddly intermittent behavior of the problem, the process of elimination, and the all-around hair pulling this loss of business was causing made somebody in the marketing group at our client realize that an Omniture Test & Target A/B test they thought had been discontinued was in fact still running on the checkout pages. To quote David Christensen (thanks, David!): "The Omniture system works by replacing select content for randomly chosen users in an effort to track user behavior/response to proposed site changes. Alternate site content is created and dynamically replaced for these users as they use the site, such as the specific content on the checkout page in this instance."

    We verified that the Omniture A/B test's JavaScript replacement code was alternately mangling and not mangling the year dropdown on the checkout form as mentioned. Our client took out the A/B test and the "security code errors" dropped back to a normal low level.

    This was a difficult and expensive problem—not only was business lost because of it, but a lot of resources went into troubleshooting it. We've come away from this episode with some lessons learned and plenty of food for thought. I'll leave it to commenters to opine away on this, including the End Point folks who scratched this itch: David Christensen, Jeff Boes, Mark Johnson, and Jon Jensen.


    Comments

    published by noreply@blogger.com (Ron Phipps) on 2010-12-22 19:12:00 in the "ecommerce" category

    A new update to Interchange's robots.cfg can be found here. This update adds "SearchToolbar" to the NotRobotUA directive, which is used to exclude certain user agent strings when determining whether an incoming request is from a search engine robot. The SearchToolbar addon for IE and Firefox is being used more widely, and we have received reports that users of this addon are unable to add items to their cart, check out, etc. You may remember a similar issue with the Ask.com toolbar that we discussed in this post. If you are using Interchange, you should download the latest robots.cfg and restart Interchange.
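
    For reference, the directive takes a comma-separated list of substrings matched against the User-Agent header; an abbreviated, illustrative excerpt of the updated configuration might look like:

    # robots.cfg excerpt (illustrative; see the real file for the full list)
    NotRobotUA  SearchToolbar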


    Comments