
published by Phin Jensen on 2017-09-19 18:10:00 in the "browsers" category

Security is often a very difficult thing to get right, especially when it's not easy to find reliable or up-to-date information, or when the process of testing can be confusing and complicated. We have a lot of history and experience working on the security of websites and servers, and we've found many tools and websites to be very helpful. Here is a collection of them.

Server-side security

There are a number of tools available that can scan your website to check for common vulnerabilities and the quality of SSL/TLS configuration, as well as give great tips on how to improve security for your website.

  • Qualys SSL Labs Server Test takes a simple domain name, performs a series of tests from a variety of clients, and returns a simple letter grade (from A+ down to F) indicating the quality of your SSL/TLS configuration, along with a detailed summary of a host of configuration options. It covers certificate keys and algorithms; TLS and SSL configurations; cipher suites; handshakes on a wide variety of platforms including Android, iOS, Chrome, Firefox, Internet Explorer and Edge, Safari, and others; common protocols and vulnerabilities; and other details.
  • HTTP Security Report does a similar scan, but provides a much more simplified summary of a website, with a numeric score from 0 to 100. It gives a simple, easy to understand list of results, with a green check mark or a red X to indicate whether something is configured for security or not. It also provides short paragraphs explaining settings and recommended configurations.
  • HT-Bridge SSL/TLS Server Test is very similar to Qualys SSL Labs Server Test, but provides some valuable extra information, such as PCI-DSS, HIPAA, and NIST guidelines compliance, as well as industry best practices and basic analysis of third-party content.
  • Another scanner also returns a letter grade, but focuses on server headers only. It provides simple explanations for each recommended server header and links to guides on how to configure them correctly.
  • Observatory by Mozilla scans and gives information on HTTP, TLS, and SSH configuration, as well as simple summaries from other websites, including Qualys and HT-Bridge, as covered above.
  • SSL-Tools is focused on SSL and TLS configuration and certificates, with tools to scan websites and mail servers, check for common vulnerabilities, and decode certificates.
  • Microsoft Site Scan performs a series of simple tests, focused more on general website guidelines and best practices, including tests for outdated libraries and plugins which can be a security issue.
  • The final website scanning tool I'll cover is a more advanced bash script that covers many of the same things these other websites do, but provides lots of options for fine-tuning test methods, returned information, and testing abnormal configurations. It's also open source and doesn't rely on any third parties.
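Many of these hosted checks can also be reproduced locally with plain openssl. Here is a minimal sketch (the helper function name and the example.com host are mine, not from any of the tools above):

```shell
# Decode a PEM certificate's subject, issuer, and validity dates locally.
decode_cert() {
    openssl x509 -in "$1" -noout -subject -issuer -dates
}

# For a live server, fetch the certificate first (host is a placeholder):
#   echo | openssl s_client -connect example.com:443 -servername example.com \
#       2>/dev/null | openssl x509 -outform PEM > example.pem
#   decode_cert example.pem
```

This kind of one-liner is handy when a host isn't reachable from the public scanners, such as an intranet server.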

These websites provide valuable information on SSL/TLS which can be used to create a secure, fast, and functional server configuration:

  • Security/Server Side TLS on the Mozilla wiki is a fantastic page which provides great summaries, recommendations, and reference information on many TLS topics, including handshakes, OCSP Stapling, HSTS, HPKP, certificate ciphers, and common attacks.
  • Mozilla SSL Configuration Generator is a simple tool that generates boilerplate configuration files for common servers, including Apache and Nginx, tailored to specific server and OpenSSL versions. It also allows you to target "modern", "intermediate", or "old" clients and servers, giving the best configuration possible for each level.
  • Is TLS Fast Yet? is a great, simple, and to-the-point informational website which explains why TLS is so important and how to improve its performance so it has the smallest possible impact on your website's speed.
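The generator mentioned above emits ordinary server configuration. As a rough illustration only (the cipher list and values here are examples in the spirit of its "intermediate" profile, not authoritative settings), an nginx fragment might look like:

```nginx
# Illustrative TLS settings only -- generate current values with the
# Mozilla SSL Configuration Generator rather than copying these.
ssl_protocols TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
add_header Strict-Transport-Security "max-age=15768000";
```

The point of the generator is precisely that these values go stale; regenerate them as ciphers and protocols are deprecated.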

Client-side security

These websites provide information and diagnostic tools to ensure that you are using a secure browser.

  • One site gives a list of links to subdomains with various SSL configurations, including badly configured SSL, so you can get a good idea of what a well-configured website looks like versus one with errors in configuration, weak ciphers or key exchange protocols, or insecure HTTP forms.
  • IPv6 Test checks your network and browser for IPv6 support, showing you your ISP, reverse DNS pointers, both your IPv4 and IPv6 addresses, and giving an idea of when your computer or network may have problems with dual-stack IPv4 + IPv6 remote hosts or DNS.
  • How's My SSL? and Qualys Labs SSL Client Test both check your browser for support of SSL/TLS versions, protocols, ciphers, and features, as well as susceptibility to common vulnerabilities.

General Tools

  • NeverSSL is a simple website that promises to never use SSL. Many public wifi networks require you to go through a payment or login page, but that redirect can be blocked when you try to visit a well-secured website such as Google, Facebook, Twitter, or Amazon, leaving you unable to reach the network's login page at all. NeverSSL provides an easy and simple way to trigger that login page.
  • There is also a search engine for public TLS certificate information. It provides a history of certificates for a given domain name, with information including issuer and issue date, as well as an advanced search.
  • Digital Attack Map is an interactive map showing DDoS attacks across the world.
  • The Internet-Wide Scan Data Repository is a public archive of scans across the internet, intended for research and provided by the University of Michigan Censys Team.
  • Last is a simple website that shows how to take a screenshot on a variety of operating systems and desktop environments. It's a fantastic tool to help less technically-minded people share their screens or the issues they're having.


published by Peter Hankiewicz on 2017-05-27 22:00:00 in the "browsers" category

The last week was really interesting for me. I attended infoShare 2017, the biggest tech conference in central-eastern Europe. The agenda was impressive, but that's not everything: there was a startup competition going on, and really, I'm totally impressed.

infoShare in numbers:

  • 5500 attendees
  • 133 speakers
  • 250 startups
  • 122 hours of speeches
  • 12 side events
Let's go through each speech I attended.

Day 1

Why Fast Matters by Harry Roberts

Harry tried to convince us that performance is important.

Great speech, showing that it's an interesting problem, and not only from a financial point of view. His presentation is a must-see.

Dirty Little Tricks From The Dark Corners of Front-End by Vitaly Friedman

It was magic! I work a lot with CSS, but this speech showed me some new ideas and reminded me that the simplest solution isn't always the best one, and that we should reuse CSS between components as much as possible.

Keep it DRY!

One of these tricks is a quantity query CSS selector. It's a pretty complex selector that can apply your styles to elements based on the number of siblings.
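A sketch of the technique (the class names are made up): `:nth-last-child(n+4)` only matches when an item is at least fourth from the end, i.e. when the list has four or more items, and the sibling combinator then catches the remaining items.

```css
/* Shrink menu items, but only when the menu holds 4 or more of them */
.menu li:nth-last-child(n+4),
.menu li:nth-last-child(n+4) ~ li {
  font-size: 80%;
}
```

With three or fewer items, neither selector matches and the default styles apply.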

The Art of Debugging (browsers) by Remy Sharp

It was great to watch another developer's workflow during debugging. I usually work from home, so it's not something I often get to do.

Remy is a very experienced JavaScript developer and showed us his skills and tricks, including some especially interesting Chrome developer console integration.

I always thought that using the developer console for programming was not the best idea; maybe it isn't so bad after all? It looked pretty neat.

Desktop Apps with JavaScript by Felix Rieseberg from Slack

Felix from Slack presented and showed the power of hybrid desktop apps. He used a framework called Electron. Using Electron you can build native, cross-platform desktop apps using HTML, JavaScript, and CSS. I don't think it's the best approach for more complex applications, and it probably takes more system memory than fully native applications, but for simpler apps it can be the way to go!

GitHub uses it to build their desktop app, so maybe it's not so slow? :)

RxJava in existing projects by Tomasz Nurkiewicz from Allegro

Tomasz Nurkiewicz from Allegro showed off his excellent programming skills and provided some practical RxJava examples. RxJava is a library for composing asynchronous and event-based programs on the Java VM using observable sequences.

Definitely something to read about.

Day 2

What does a production ready Kubernetes application look like? by Carter Morgan from Google

Carter Morgan from Google showed us practical uses of Kubernetes.

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google developers, and I think they really want to popularize it. It looked like Kubernetes has a low learning curve, but the devops engineers I spoke with after the presentation were sceptical, saying that if you know how to use Docker Swarm, you don't really need Kubernetes.

Vue.js and Service Workers become Realtime by Blake Newman from Sainsbury's

Blake Newman is a JavaScript developer and a member of the core team of Vue.js (a trending, hot JavaScript framework). He explained how to use Vue.js with service workers.

Service workers are scripts that your browser runs in the background. It was nice to see how it all fits together, even though service workers are not yet supported by every popular browser.
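Registering one takes only a few lines of JavaScript; a minimal sketch (the /sw.js path is hypothetical), with feature detection since support is uneven:

```html
<script>
  // Register a background script if the browser supports service workers.
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/sw.js')
      .then(function (registration) {
        console.log('Service worker registered for', registration.scope);
      })
      .catch(function (error) {
        console.log('Service worker registration failed:', error);
      });
  }
</script>
```

The script at /sw.js then handles events such as fetch and push, independently of any open page.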



Listen to your application and sleep by Gianluca Arbezzano from InfluxData

Gianluca showed us his modern and flexible monitoring stack, offering great tips and mostly discussing and recommending InfluxDB and Telegraf, which we use a lot at End Point.

He was right that it's easy to configure, open source, and really useful. Great speech!


Amazing two days. All the presentations will be available on YouTube soon.

I can fully recommend this conference. See you next time!


published by Dave Jenkins on 2017-04-21 17:21:00 in the "browsers" category

As many of you may have seen, earlier this week Google released a major upgrade to the Google Earth app. Overall, it's much improved, sharper, and a deeper experience for viewers. We will be upgrading/incorporating our managed fleet of Liquid Galaxies over the next two months after we've had a chance to fully test its functionality and polish the integration points, but here are some observations for how we see this updated app impacting the overall Liquid Galaxy experience.

  • Hooray! The new Earth is here! The New Earth is here! Certainly, this is exciting for us. The Google Earth app plays a key central role in the Liquid Galaxy viewing experience, so a major upgrade like this is a most welcome development. So far, the reception has been positive. We anticipate it will continue to get even better as people really sink their hands into the capabilities and mashup opportunities this browser-based Earth presents.

  • We tested some pre-release versions of this application and successfully integrated them with the Liquid Galaxy, and we are very happy with how we are able to view-synchronize unique instances of the new Google Earth across displays with appropriately configured geometric offsets.

  • What to look for in this new application:
    • Stability: The new Google Earth runs as a NaCl application in a Chrome browser. This is an enormous advance for Google Earth. As an application in Chrome, it is instantly accessible to billions of new users with their established expectations. Because the new Google Earth uses Chrome, the Google Earth developers no longer need to engage in the minutiae of supporting multiple desktop operating systems; instead, they can concentrate on the core functionality of Google Earth and leverage the enormous amount of work the Chrome browser developers do to make Chrome a cross-platform application.
    • Smoother 3D: The (older) Google Earth sometimes has a sort of "melted ice cream" look to the 3D buildings in many situations. Often, buildings fail to fully load from certain viewpoints. From what we're seeing so far, the 3D renderings in the New Earth appear to be a lot sharper and cleaner.
    • Browser-based possibilities: As focus turns more and more to browser-based apps, and as JavaScript libraries continue to mature, the opportunities and possibilities for how to display various data types, data visualizations, and interactions really start to multiply. We can already see this with the sort of deeper stories and knowledge cards that Google is including in the Google Earth interface. We hope to take the ball and run with it, as the Liquid Galaxy can already handle a host of different media types. We might exploit layers, smart use controls, realtime content integration from other available databases, and... okay, I'm getting ahead of myself.

  • The new Google Earth makes a major point of featuring stories and deeper contextual information, rather than just ogling the terrain: as pretty as the Grand Canyon is to look at, knowing a little about the explorers, trails, and history makes it a much nicer experience to view. We've gone through the same evolution with the Liquid Galaxy: it used to be just a big Google Earth viewer, but we quickly realized the need for more context and usable information to create a richer interaction for viewers, combining Earth with street view, panoramic video, 3D objects, etc. It's why we built a content management system to create presentations with scenes. We anticipate that the knowledge cards and deeper information Google is integrating here will only strengthen that interaction.
We are looking to roll out the new Google Earth to the fleet in the next couple of months. We need to do a lot of testing and then update the Liquid Galaxies with minimal (or no) disturbance to our clients, many of whom rely on the platform as a daily sales and showcasing tool for their businesses. As always, if you have any questions, please reach us directly via email or call.

published by Greg Davidson on 2014-05-28 04:02:00 in the "browsers" category

Geeks in Paradise


Today I was very lucky to once again attend CSSConf US here in Amelia Island, Florida. Nicole Sullivan and crew did an excellent job of organizing and curating a wide range of talks specifically geared toward CSS developers. Although I work daily on many aspects of the web stack, I feel like I'm one of the (seemingly) rare few who actually enjoy writing CSS so it was a real treat to spend the day with many like-minded folks.

Styleguide Driven Development

Nicole Sullivan started things off with her talk on Style Guide Driven Development (SGDD). She talked about the process and challenges she and the team at Pivotal Labs went through when they redesigned the Cloud Foundry Developer Console and how they overcame many of them with the SGDD approach. The idea behind SGDD is to catalog all of the reusable components used in a web project so developers use what's already there rather than reinventing the wheel for each new feature. The components are displayed in the style guide next to examples of the view code and CSS which makes up each component. Check out the Developer Console Style Guide for an example of this in action. The benefits of this approach include enabling a short feedback loop for project managers and designers and encouraging developers who may not be CSS experts to follow the "blessed" path to build components that are consistent and cohesive with the design. In Nicole's project they were also able to significantly reduce the amount of unused CSS and layouts once they had broken down the app into reusable components.

Hologram is an interesting tool to help with the creation of style guides which Nicole shared and is definitely worth checking out.

Sara Soueidan — Styling and Animating Scalable Vector Graphics with CSS

Sara talked to us about using SVG with CSS and included some really neat demos. Adobe Illustrator, Inkscape, and Sketch 3 are the tools commonly used to create SVG images. Once you have your SVG image, you can use the SVG Editor by Peter Collingridge or SVGO (a node.js-based tool) to clean up and optimize the SVG code. After the cleanup and optimization you can replace the generic CSS class names from your SVG creation app with more semantic CSS class names.

There are a variety of ways to include SVG on a page and Sara went over the pros and cons of each. The method that seemed most interesting to me was to use an <object> tag which allowed for a fallback image for browsers that do not support SVG. Sara mapped out the subset of CSS selectors which can be used to target SVG elements, how to "responsify" SVGs and to animate SVG paths. Be sure to check out her slides from the talk.
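The <object> approach looks roughly like this (filenames are illustrative): browsers that understand SVG render the object's data, and the rest fall back to the nested raster image.

```html
<object type="image/svg+xml" data="logo.svg">
  <!-- Fallback for browsers without SVG support -->
  <img src="logo.png" alt="Company logo">
</object>
```

Unlike a plain <img>, content loaded through <object> can also be styled and animated by its own embedded CSS.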

Lea Verou — The Chroma Zone: Engineering Color on the Web

Lea's talk was about color on the web. She detailed the history of how color has been handled up to this point, how it works today and some of the interesting color-related CSS features which are coming in the future. She demonstrated how each of the color spaces has a geometric representation (e.g. RGB can be represented as a cube and HSL as a double-cone) which I found neat. RGB is very unintuitive when it comes to choosing colors. HSL is much more intuitive but has some challenges of its own. The new and shiny CSS color features Lea talked about included:

  • filters
  • blending modes
  • CSS variables
  • gray()
  • color() including tint, shade and other adjusters
  • the #rgba and #rrggbbaa notation
  • hwb()
  • named hues and <angle> in hsl()

Some of these new features can be used already via libs like Bourbon and Myth. Check out the Chroma Zone: Engineering Color on the Web slides to learn more.
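As a small illustration of why HSL is the more intuitive space (the values here are arbitrary): related colors keep the same hue and saturation and vary only lightness, which is much harder to do by juggling RGB channels.

```css
/* A button and its hover state: same hue (210) and saturation (80%),
   only the lightness changes */
.button       { background-color: hsl(210, 80%, 50%); }
.button:hover { background-color: hsl(210, 80%, 40%); }
```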


I will write up more of the talks soon but wanted to thank Jenn Schiffer for keeping us all laughing throughout the day in her role as MC and topping it off with a hilarious, satirical talk of her own. Thanks also to Alex and Adam for curating the music and looking after the sound.


published by Greg Sabino Mullane on 2014-04-18 20:00:00 in the "browsers" category

Image by Flickr user Dennis Jarvis

tl;dr: avoid using onmousemove events with Google Chrome.

I recently fielded a complaint about not being able to select text with the mouse on a wiki running the MediaWiki software. After some troubleshooting and research, I narrowed the problem down to a bug in the Chrome browser regarding the onmousemove event. The solution in this case was to tweak JavaScript to use onmouseover instead of onmousemove.

The first step in troubleshooting is to duplicate the problem. In this case, the page worked fine for me in Firefox, so I tried using the same browser as the reporter: Chrome. Sure enough, I could no longer hold down the mouse button and select text on the page. Now that the browser was implicated, it was time to see what it was about this page that caused the problem.

It seemed fairly unlikely that something like this would go unfixed if it was happening on the flagship MediaWiki site, Wikipedia. Sure enough, that site worked fine, I could select the text with no problem. Testing some other random sites showed no problems either. Some googling indicated others had similar problems with Chrome, and gave a bunch of workarounds for selecting the text. However, I wanted a fix, not a workaround.

There were hints that JavaScript was involved, so I disabled JavaScript in Chrome, reloaded the page, and suddenly everything started working again. Call that big clue number two. The next step was to see what was different between the local MediaWiki installation and Wikipedia. The local site was a few versions behind, but I was fortuitously testing an upgrade on a test server. This showed the problem still existed on the newer version, which meant that the problem was something specific to the wiki itself.

The most likely culprit was one of the many installed MediaWiki extensions, which are small pieces of code that perform certain actions on a wiki. These often have their own JavaScript that they run, which was still the most likely problem.

Then it was some basic troubleshooting. After turning JavaScript back on, I edited the LocalSettings.php file and commented out all the user-supplied extensions. This made the problem disappear again. Then I commented out half the extensions, then half again, etc., until I was able to narrow the problem down to a single extension.

The extension in question, known simply as "balloons", has actually been removed from the MediaWiki extensions site, for "prolonged security issues with the code." The extension allows creation of very nice looking pop up CSS "balloons" full of text. I'm guessing the concern is because the arguments for the balloons were not sanitized properly. In a public wiki, this would be a problem, but this was for a private intranet, so we were not worried about continuing to use the extension. As a side note, such changes would be caught anyway as this wiki sends an email to a few people on any change, including a full text diff of all the changes.

Looking inside the JavaScript used by the extension, I was able to narrow the problem down to a single line inside balloons/js/balloons.js:

  // Track the cursor every time the mouse moves
  document.onmousemove = this.setActiveCoordinates;

Sure enough, duck-duck-going through the Internet quickly found a fairly incriminating Chromium bug, indicating that onmousemove did not work very well at all. Looking over the balloon extension code, it appeared that onmouseover would probably be good enough to gather the same information and allow the extension to work while not blocking the ability for people to select text. One small replacement of "move" to "over", and everything was back to working as it should!

So in summary, if you cannot select text with the mouse in Chrome (or you see any other odd mouse-related behaviors), suspect an onmousemove issue.


published by Spencer Christensen on 2014-02-04 18:24:00 in the "browsers" category

If you are like me you may not have given much thought to an HTML doctype other than "yes, it needs to be there" and "there are a few different types, but I don't really understand the differences between them." Well if you are like me then you also recently discovered why doctypes matter and how they actually affect your web page.

For those that are not familiar with an HTML doctype, they are the very first line of an HTML document and look like this:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">

As I mentioned before, there are a few different document types. The reason for this is that each one corresponds to a different rule set of HTML syntax for the web browser to follow, such as HTML 4.0, HTML 4.01, XHTML 1.0, or XHTML 1.1.

"The HTML syntax requires a doctype to be specified to ensure that the browser renders the page in standards mode. The doctype has no other purpose." [1]

When you specify an official doctype, your web browser should follow the standard behavior for rendering that specific rule set for the given HTML syntax. Thus you should get expected results when it renders your web page. There are official doctypes defined by the W3C for both "strict" and "transitional" rule sets. A "strict" document type adheres strictly to the specification for that syntax, and any legacy or unsupported tags found in your document will cause rendering problems. A "transitional" document follows the specification for the given syntax, but also allows for legacy tags to exist in the document. They also define "frameset" doctypes, which are transitional documents that also allow for frame-related tags.

The doctype for HTML 5 is a lot simpler and shorter than other doctypes, and may look like this:

<!DOCTYPE html>

When you declare an unofficial doctype, the browser will not know which tag syntax rule set to use and will not render the page in a standard way. This is called quirks mode: the browser basically regresses to an older rules engine that supports all the legacy tags it knows about and attempts to handle them. This also means that your web page may not render or behave as expected, especially if you use newer tags or features in your document.

Besides the HTML tag syntax, JavaScript is also affected by the doctype, since it is tied to the DOM engine the browser uses to render the page. For example, with a strict doctype you will have the native JSON parser object available, but in quirks mode it may not even exist, and calls to JSON.parse() or JSON.stringify() could fail.
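One quick way to check which mode a page landed in is document.compatMode, which browsers report as "CSS1Compat" in standards mode and "BackCompat" in quirks mode. A minimal test page:

```html
<!DOCTYPE html>
<html>
  <head><title>Doctype check</title></head>
  <body>
    <script>
      // Logs "CSS1Compat" with the doctype above;
      // remove the doctype line and it logs "BackCompat".
      console.log(document.compatMode);
    </script>
  </body>
</html>
```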

If you are not sure you are using an official doctype, or if you are using tags that are not supported by the doctype you chose, you can run your page through an HTML validator. The whole point is to get your web page to render and behave as you expect, providing a better experience for you and your users.


published by Jon Jensen on 2014-01-28 23:20:00 in the "browsers" category

WebP is an image format for RGB images on the web that supports both lossless (like PNG) and lossy (like JPEG) compression. It was released by Google in September 2010 with open source reference software available under the BSD license, accompanied by a royalty-free public patent license, making it clear that they want it to be widely adopted by any and all without any encumbrances.

Its main attraction is smaller file size at a similar quality level. It also supports an alpha channel (transparency) and animation for both lossless and lossy images. Thus it is the first image format that offers the transparency of PNG in lossy images at much smaller file size, along with animation, which was previously only available in the archaic limited-color GIF format.

Comparing quality & size

While considering WebP for an experiment on our own website, we were very impressed by its file size to quality ratio. In our tests it was even better than generally claimed. Here are a few side-by-side examples from our site. You'll only see the WebP version if your browser supports it:

12,956 bytes JPEG

2186 bytes WebP

11,149 bytes JPEG

2530 bytes WebP

The original PNG images were converted by ImageMagick to JPEG, and by `cwebp -q 80` to WebP. I think we probably should increase the WebP quality a bit to keep a little of the facial detail that flattens out, but it's amazing how good these images look for file sizes that are only 17% and 23% of the JPEG equivalent.

One of our website's background patterns has transparency, making the PNG format a necessity, but it also has a gradient, which PNG compression is particularly inefficient with. WebP is a major improvement there, at 13% the size of the PNG. The image is large so I won't show it here, but you can follow the links if you'd like to see it:

337,186 bytes  container-pattern.png
 43,270 bytes  container-pattern.webp

Browser support

So, what is the downside? WebP is currently natively supported only in Chrome and Opera among the major browsers, though amazingly, support for other browsers can be added via WebPJS, a JavaScript WebP renderer.

Why don't the other browsers add support given the liberal license? Especially Firefox you'd expect to support it. In fact a patch has been pending for years, and a debate about adding support still smolders. Why?

WebP does not yet support progressive rendering, Exif tagging, or non-RGB color spaces such as CMYK, and it is limited to 16,384 pixels per side. Some Firefox developers feel that it would do the Internet community a disservice to support an image format still under development and cause uncertain levels of support in various clients, so they will not accept WebP in its current state.

Many batch image-processing tools now support WebP, and there is a free Photoshop plug-in for it. Some websites are quietly using it just because of the cost savings due to reduced bandwidth.

For our first experiment serving WebP images from the End Point website, I decided to serve WebP images only to browsers that claim to be able to support it. They advertise that support in this HTTP request header:

Accept: image/webp,*/*;q=0.8

That says explicitly that the browser can render image/webp, so we just need to configure the server to send WebP images. One way to do that is in the application server, by having it send URLs pointing to WebP files.

Let's plan to have both common format (JPEG or PNG) and WebP files side by side, and then try a way that is transparent to the application and can be enabled or disabled very easily.

Web server rewrites

It's possible to set up the web server to transparently serve WebP instead of JPEG or PNG if a matching file exists. Based on some examples other people posted, we used this nginx configuration:

    set $webp "";
    set $img "";
    if ($http_accept ~* "image/webp") { set $webp "can"; }
    if ($request_filename ~* "(.*)\.(jpe?g|png)$") { set $img $1.webp; }
    if (-f $img) { set $webp "$webp-have"; }
    if ($webp = "can-have") {
        add_header Vary Accept;
        rewrite "(.*)\.\w+$" $1.webp break;
    }

It's also good to add to /etc/nginx/mime.types:

image/webp    webp;

so that .webp files are served with the correct MIME type instead of the default application/octet-stream, or worse, text/plain with perhaps a bogus character set encoding.

Then we just make sure identically-named .webp files match .png or .jpg files, such as those for our examples above:

-rw-rw-r-- 337186 Nov  6 14:10 container-pattern.png
-rw-rw-r--  43270 Jan 28 08:14 container-pattern.webp
-rw-rw-r--  14734 Nov  6 14:10 josh_williams.jpg
-rw-rw-r--   3386 Jan 28 08:14 josh_williams.webp
-rw-rw-r--  13420 Nov  6 14:10 marina_lohova.jpg
-rw-rw-r--   2776 Jan 28 08:14 marina_lohova.webp

A request for a given $file.png will work as normal in browsers that don't advertise WebP support, while those that do will instead receive the $file.webp image.

The image is still being requested with a name ending in .jpg or .png, but that's just a name as far as both browser and server are concerned, and the image type is determined by the MIME type in the HTTP response headers (and/or by looking at the file's magic numbers). So the browser will have a file called $something.jpg in the DOM and in its cache, but it will actually be a WebP file. That's ok, but could be confusing to users who save the file for whatever reason and find it isn't actually the JPEG they were expecting.

301/302 redirect option

One remedy for that is to serve the WebP file via a 301 or 302 redirect instead of transparently in the response, so that the browser knows it's dealing with a different file named $something.webp. To do that we changed the nginx configuration like this:

    rewrite "(.*).w+$" $1.webp permanent;

That adds a little bit of overhead, around 100-200 bytes unless large cookies are sent in the request headers, and another network round-trip or two, though it's still a win with the reduced file sizes we saw. However, I found that it isn't even necessary right now due to an interesting behavior in Chrome that may even be intentional to cope with this very situation. (Or it may be a happy accident.)

Chrome image download behavior

The versions of Chrome I tested only send the Accept: image/webp [etc.] request header when fetching images from an HTML page, not when you manually request a single file or ask the browser to save the image from the page by right-clicking or similar. In those cases the Accept header is not sent, so the server doesn't know the browser supports WebP, and you get the JPEG or PNG you asked for. That was actually a little confusing to hunt down by sniffing the HTTP traffic on the wire, but it may be a nice thing for users as long as WebP is still less known.

Batch conversion

It's fun to experiment, but we needed to actually get all the images converted for our website. Surprisingly, even converting from JPEG isn't too bad, though you need a higher quality setting and the file size will be larger. Still, for best image quality at the smallest file size, we wanted to start with original PNG images, not recompress JPEGs.

To make that easy, we wrote two shell scripts for Linux, bash, and cwebp. We found a few exceptional images that were larger in WebP than in PNG or JPEG, so the script deletes any WebP file that is not smaller, and our nginx configuration will in that case not find a .webp file and will serve the original PNG or JPEG.
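
Our scripts aren't reproduced verbatim here, but the core logic can be sketched like this, assuming cwebp from libwebp is installed (the quality settings are illustrative, not the exact values we used):

```shell
#!/bin/sh
# Convert one image to WebP, keeping the .webp only if it is smaller.

convert_to_webp() {
    img=$1
    webp="${img%.*}.webp"
    cwebp -quiet -q 85 "$img" -o "$webp" || return 1
    # Delete the WebP if it isn't actually smaller, so nginx won't
    # find a .webp file and will serve the original PNG/JPEG instead.
    if [ "$(wc -c < "$webp")" -ge "$(wc -c < "$img")" ]; then
        rm -f "$webp"
    fi
}

# Example invocation over a document root (path is hypothetical):
# find /var/www/images -type f \( -name '*.png' -o -name '*.jpg' \) |
#     while read -r f; do convert_to_webp "$f"; done
```

The size check is the important part: it makes the conversion safe to run blindly over a whole tree, since any image that compresses poorly as WebP simply stays in its original format.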

Full-page download sizes compared

Here are performance tests run using Chrome 32 on Windows 7 over a simulated cable Internet connection. The total download size difference is the most impressive part, and on a slower mobile network, or with higher latency from greater distance to the server, the difference in download time would be even larger.

Page URL         With WebP         Without WebP
                 374 KB, 2.9s      850 KB, 3.4s
                 613 KB, 3.6s      1308 KB, 4.1s


This article is not even close to a comprehensive shootout between WebP and other image types. There are other sites that consider the image format technical details more closely and have well-chosen sample images.

My purpose here was to convert a real website to WebP in bulk, without hand-tuning individual images or spending too much time on the project overall. I wanted to see whether the infrastructure is easy enough to set up, whether the download size and speed improve enough to make it worth the trouble, and to get real-world experience so we can judge whether, and in which situations, to recommend it to our clients.

So far it seems worth it, and we plan to continue using WebP on our website. With empty browser caches, try visiting in Chrome and then in one of the browsers that doesn't support WebP, and see if you notice a speed difference on first load, or any visual difference.

I hope to see WebP further developed and more widely supported.


published by (Greg Davidson) on 2014-01-25 00:15:00 in the "browsers" category

I have been doing some mobile development lately and wanted to share the new Mobile Emulation feature in Chrome Canary with y'all. Chrome Canary is a development build of Chrome which gets updated daily and gives you the chance to use the latest and greatest features in Chrome. I've been using it as my primary browser for the past year or so and it's been fairly stable. What's great is that you can run Chrome Canary side-by-side with the stable release version of Chrome. For the odd time I do have issues with stability etc., I can just use the latest stable Chrome and be on my way. If you need more convincing, Paul Irish's Chrome Canary for Developers post might be helpful.

I should mention that Chrome Canary is only available for OS X and Windows at this point. I tested Dev channel Chromium on Ubuntu 13.10 this afternoon and the new mobile emulation stuff is not ready there yet. It should not be long though.

Mobile Emulation in Chrome Dev Tools

[Screenshot: Mobile Emulation in Chrome Canary Dev Tools]

Once enabled, the Emulation panel shows up in the Dev Tools console drawer. It gives you the option of emulating a variety of devices (many are listed in the drop-down) and also the ability to fine-tune the settings à la carte. If you choose to emulate the touchscreen interface, the mouse cursor will change and operate like a touch interface. Shift+drag allows you to pinch and zoom. There are some cool features for debugging and inspecting touch events as well.

Learning More

If you would like to learn more, be sure to check out the Mobile emulation documentation at the Chrome DevTools docs site.


published by (Jon Jensen) on 2011-02-01 17:17:00 in the "browsers" category

It's no secret that Internet Explorer has been steadily losing market share, while Chrome and Safari have been gaining.

But in the last couple of years I've been surprised to see how strong IE has remained among visitors to our website -- it's usually been #2 after Firefox.

Recently this has changed and IE has dropped to 4th place among our visitors, and Chrome now has more than double the users that Safari does, as reported by Google Analytics:

1. Firefox 43.61%
2. Chrome 30.64%
3. Safari 11.49%
4. Internet Explorer 11.02%
5. Opera 2.00%

That's heartening. :)


published by (Jon Jensen) on 2009-09-02 17:14:00 in the "browsers" category

Once upon a time there were still people using browsers that only supported SSLv2. It's been a long time since those browsers were current, but when running an ecommerce site you typically want to support as many users as you possibly can, so you support old stuff much longer than most people still need it.

At least 4 years ago, people began to discuss disabling SSLv2 entirely due to fundamental security flaws. See the Debian and GnuTLS discussions, and this blog post about PCI's stance on SSLv2, for example.

To politely alert people using those older browsers, while still refusing to transport confidential information over insecure SSLv2 or with ciphers weaker than 128 bits, we used an Apache configuration such as this:

# Require SSLv3 or TLSv1 with at least 128-bit cipher
<Directory "/">
    # Make an exception for the error document itself
    SSLRequire (%{SSL_PROTOCOL} != "SSLv2" and %{SSL_CIPHER_USEKEYSIZE} >= 128) or %{REQUEST_URI} =~ m:^/errors/:
    ErrorDocument 403 /errors/403-weak-ssl.html
</Directory>

That accepts their SSLv2 connection, but displays an error page explaining the problem and suggesting some links to free modern browsers they can upgrade to in order to use the secure part of the website in question.

Recently we've decided to drop that extra fuss and block SSLv2 entirely with Apache configuration such as this:

SSLProtocol all -SSLv2

The downside of that is that the SSL connection won't be allowed at all, and the browser doesn't give any indication of why or what the user should do. They would simply stare at a blank screen and presumably go away frustrated. Because of that we long considered the more polite handling shown above to be superior.

But recently, after having completely disabled SSLv2 on several sites we manage, we have gotten zero complaints from customers. Doing this also makes PCI and other security audits much simpler because SSLv2 and weak ciphers are simply not allowed at all and don't raise audit warnings.

So at long last I think we can consider SSLv2 dead, at least in our corner of the Internet!


published by (Jon Jensen) on 2009-07-27 21:30:00 in the "browsers" category

Here's something new in HTTP land to play with: Shared Dictionary Compression over HTTP (SDCH, apparently pronounced "sandwich") is a new HTTP 1.1 extension announced by Wei-Hsin Lee of Google last September. Lee explains that with it "a user agent obtains a site-specific dictionary that then allows pages on the site that have many common elements to be transmitted much more quickly." SDCH is applied before gzip or deflate compression, and Lee notes 40% better compression than gzip alone in their tests. Access to the dictionaries stored in the client is scoped by site and path just as cookies are.
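
The moving parts, roughly, are a pair of new headers: the server advertises a dictionary with Get-Dictionary, and the client announces previously stored dictionaries with Avail-Dictionary. A simplified exchange, with the path and dictionary ID made up for illustration, looks like:

```
GET /index.html HTTP/1.1
Accept-Encoding: gzip, sdch

HTTP/1.1 200 OK
Get-Dictionary: /dictionaries/common_v1

(client fetches and stores the dictionary, then on a later request...)

GET /page2.html HTTP/1.1
Accept-Encoding: gzip, sdch
Avail-Dictionary: BNbK8dJg

HTTP/1.1 200 OK
Content-Encoding: sdch, gzip
```

Responses encoded against the shared dictionary only carry the parts that differ from it, which is where the compression win over plain gzip comes from.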

The first client support was in the Google Toolbar for Internet Explorer, but it is now going to be much more widely used because it is supported in the Google Chrome browser for Windows. (It's still not in the latest Chrome developer build for Linux, or at any rate not enabled by default if the code is there.)

Only Google's web servers support it to date, as far as I know. Someone intended to start a mod_sdch project for Apache, but there's no code at all yet and no activity since September 2008.

It is interesting to consider the challenge this will pose for HTTP proxies that filter content, since the entire content would not be available to the proxy to scan during a single HTTP conversation. Sneakily split malicious payloads would then be reassembled by the browser or other client, without requiring JavaScript or other active reassembly methods. This forum thread discusses the threat and gives an example of stripping the Accept-Encoding: sdch request header to prevent SDCH from being used at all. Though the threat is real, it's hard to escape the obvious analogy with TCP filtering, which had to grow from stateless to more difficult stateful TCP packet inspection. New features mean not just new benefits but also new complexity, but that's no reason to reflexively reject them.


published by (Jon Jensen) on 2009-07-15 14:01:00 in the "browsers" category

This has been frequently mentioned around the web already, but it's important enough that I'll bring it up again anyway. Firefox 3.5 adds the CSS @font-face rule, which makes it possible to reference fonts not installed in the operating system of the browser, just as is done with images or other embedded content.
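
The rule itself is simple; here is a minimal example (the font name and path are illustrative):

```css
/* Load a font file from the server, then use it like any other font. */
@font-face {
  font-family: "My Web Font";
  src: url("/fonts/my-web-font.ttf");
}

h1 {
  font-family: "My Web Font", Georgia, serif;
}
```

Browsers that don't support @font-face simply fall back to the next font in the stack, so the rule degrades gracefully.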

Technically this is not a complicated matter, but font foundries (almost all of whom have a proprietary software business model) have tried to hold it back hoping for magical DRM to keep people from using fonts without paying for them, which of course isn't possible. As one of the original Netscape developers mentioned, if they had waited for such a thing for images, the web would still be plain-text only.

The quickest way to get a feel for the impact this change can have is to look at Ian Lynam & Craig Mod's article demonstrating @font-face in Firefox 3.5 side-by-side with any of the other current browsers. It is exciting to finally see this ability in a mainstream browser after all these years.


published by (Jon Jensen) on 2009-07-11 04:35:00 in the "browsers" category

While traveling and staying at Hostel Tyn in Prague's city center, I ran into a strange problem with my laptop on their wireless network.

When many people were using the network (either on the hostel's public computers or on the wireless network), sometimes things bogged down a bit. That wasn't a big deal and required merely a little patience.

But after a while I noticed that absolutely no "uploads" worked. Not via ssh, not via browser POST, nothing. They always hung. Even when only a file upload of 10 KB or so was involved. So I started to wonder what was going on.

As I considered trying some kind of rate limiting via iptables, I remembered somewhere hearing that occasionally you can run into mismatched MTU settings between the Ethernet LAN you're on and your operating system's network settings.

I checked my setup and saw something like this:

ifconfig wlan0
wlan0     Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
          inet addr:10.x.x.x  Bcast:10.x.x.x  Mask:
          inet6 addr: fe80::xxx:xxxx:xxxx:xxxx/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1239 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:191529 (191.5 KB)  TX bytes:4543 (4.5 KB)

The MTU 1500 stood out as being worthy of tweaking. So I tried a completely unscientific change:

sudo ifconfig wlan0 mtu 1400

Then tried the same HTTP POST that had been consistently failing, and poof! It worked fine. Every time.

I think most likely something more than 1400 bytes would've been possible, perhaps just a few short of 1500. The number 1492 rings familiar. I'll be old-fashioned and not look it up on the web. But this 1400-byte MTU worked fine and solved the problem. To my delight.
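
For the record, one way to probe the usable MTU by hand is with Linux's ping and its "don't fragment" flag; this is a sketch, and example.net is a placeholder host:

```shell
# An ICMP echo of payload size N occupies N + 28 bytes on the wire
# (8 bytes ICMP header + 20 bytes IP header), so:
mtu_payload() { echo $(($1 - 28)); }

# Probe with "don't fragment" set; oversized pings get no reply.
# (Shown as comments since they need a live network.)
#   ping -c 1 -M do -s "$(mtu_payload 1500)" example.net
#   ping -c 1 -M do -s "$(mtu_payload 1400)" example.net
# Lower the size until the ping succeeds, then apply the working MTU:
#   sudo ifconfig wlan0 mtu 1400
```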

As an interesting aside, before making the change, I found one web application where uploads did work fine anyway: Google's Picasa. I'm not sure why, but maybe it sliced & diced the upload stream into smaller chunks on its own? A mystery for another day.


published by Adam S (firsttubedotcom) on 2007-06-23 20:17:38 in the "Web Browsers" category
Safari is not now, nor has it ever been, my browser of choice. Aside from the fact that KHTML is generally the least compatible of the browser engines these days, Safari is pretty barren from a feature standpoint. I rarely use it on my Mac. I also find the lack of the "button" widget in Aqua annoying, because it makes Gmail ugly.

When I started using Safari 3.0.1 beta at work, I was impressed, but not impressed enough to ditch Opera. At home, however, I am using Camino, which I love, and which is based on Gecko, the underlying Mozilla engine that also forms the core of Firefox. The problem is, as much as I love Camino, it's tough to use for development: it doesn't support extensions, it doesn't have a JavaScript debugger that works, it doesn't have draggable tabs or tab restore, and it's not very easy to extend its functionality. There are lots of tricks at PimpMyCamino, but even today the most useful add-on, "CamiScript," is billed as unstable on Camino versions above 1.0. Camino 1.0 was released in the first half of 2006, and we're over a year later now.

This is not a post to bitch about Camino though. I love 1.5 and it's serving me well. The thing is, I downloaded a nightly build of Webkit recently. Webkit is to Safari what Gecko is to Camino, and Webkit comes easily packaged in a disk image that requires no installation.

Webkit nightlies are awesome. First, there's the page inspector. From a development standpoint, this is awesome.


The inspector shows you each detail of the page load. You've got the entire page transfer size as well as the page transfer time. You can break it down by element or by element type. You can view the headers sent and received. This is tremendously useful. It's been very interesting to see which parts of requests are properly cached and to compare the original load to subsequent page loads.

Then we have "Drosera," the Javascript debugger.


I haven't quite figured out how to use this tool, but I'm excited that it exists. It's something I've needed for some time on a Mac. This is all very promising.

Safari may be mostly bare, but by the time 3.0 final is released with Leopard, and given that Safari now exists on Windows, it (or Shiira, its more featureful Webkit-based offshoot) just may become my main Mac browser.

You can get Webkit nightlies at the WebKit project's nightly builds page.

Tags: Web Browsers, Camino, Mac, Safari

published by Adam S (firsttubedotcom) on 2007-06-14 19:19:00 in the "Web Browsers" category
If you browse around the internet, particularly on tech sites, you'll find person after person praising Apple for releasing Safari 3.0.1 a mere 3 days after releasing the first public beta on Monday. At first, I thought - here we go! First off, it's a BETA release, and I *expect* it to be updated. Secondly, people are going crazy about Apple's fast reaction time, but I wondered: if it were Microsoft, would the reaction be the same, or would it be "They release a product and it takes less than 24 hours to find a major vulnerability!?"

But alas, I ran Software Update and updated my Safari/Win install at work to 3.0.1. Whereas 3.0 was a major disappointment at work - fonts were a mess, pages had major problems with rendering, and the browser would crash randomly - a few minutes after installing 3.0.1 I can tell you that, on my computer at least, it is a HUGE leap forward. The browser hasn't crashed on me outside of one bug that existed before (maximizing on the slave screen of a dual-monitor setup), and the thing is SO much better!

Safari is far from usable as my main browser. The thing is feature-barren, is far less customizable than Firefox and Opera and even Camino, and on Windows, it sticks out like a sore thumb. That said, I just love having the rendering engine on my windows machine, I love that it's available for iPhone and Mac-friendly web development.

Kudos to Apple for porting this great app to Windows fairly successfully. Microsoft has been very slow to move to OS X and Intel; they have let RDP stagnate, they have let Office go 5 years with no update, they have no management tools that work on Mac, no IE, no WMP, not even a fully compatible Outlook Web Access (OWA)... yet.

I am usually wary of excessive praise on Apple, but after seeing the Leopard previews pushing the evolution of the desktop and the accessibility of backups, the iPhone pushing the mobile experience, and Safari pushing web standards, I'm really feeling good about what they are doing.

Tags: Web Browsers, Safari, Mac, Apple