Let’s face it: responsive images are a real pain in the neck.
As a web designer, I want complete control over picture dimensions at any size, but I am constantly compromising designs to deal with the constraints of user-provided images. These constraints are well known to any designer:
- If I want an image to display well, I need to render it in its native aspect ratio, or blindly crop it without regard to composition.
- If I am lucky enough to work with content managers who upload images at all, they rarely provide a common set of dimensions, let alone predictable composition or multiple sizes for responsive use.
- Manually editing photos for better composition or ratio costs billable hours that many clients won’t respect on an invoice.
All these problems can be traced to one common issue: Dumb Image Rendering.
Traditional wisdom says: “send only the final image to the client, at as small a file size as possible”. This limits our options for responsive imagery, short of loading different images at different screen sizes (such as with the SmarterImages plugin). However, this traditional wisdom is flawed.
Retina displays are already breaking down our ability to serve optimal image sizes, and display density seems to keep increasing every year. Meanwhile, in-browser image scaling keeps improving, and even on mobile, connection speeds are quickly becoming a non-issue for small sites.
Sending pre-formatted images to the client prevents us from handling image rendering in smarter ways.
jQuery SmartCrop takes an unformatted, high-resolution image, then crops and resizes it to match the dimensions imposed on the element, producing a final image that attempts to conform to basic composition rules and highlight the central subject, or Focal Point, of the image.
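To make the cropping step concrete, here is a rough sketch in plain JavaScript of how a crop rectangle might be chosen once a focal point is known: match the target aspect ratio, center on the focal point, and clamp to the image bounds. The function name and signature are hypothetical illustrations, not the plugin’s actual API.

```javascript
// Sketch: compute a crop rectangle with the target aspect ratio that keeps
// the focal point as close to the crop's center as possible, clamped so
// the crop stays inside the image. Hypothetical helper for illustration.
function cropAroundFocalPoint(imgW, imgH, targetW, targetH, focalX, focalY) {
  var targetRatio = targetW / targetH;
  var cropW, cropH;
  if (imgW / imgH > targetRatio) {
    // Image is wider than the target shape: keep full height, trim the sides.
    cropH = imgH;
    cropW = Math.round(imgH * targetRatio);
  } else {
    // Image is taller: keep full width, trim top and bottom.
    cropW = imgW;
    cropH = Math.round(imgW / targetRatio);
  }
  // Center the crop on the focal point, then clamp to the image bounds.
  var x = Math.min(Math.max(Math.round(focalX - cropW / 2), 0), imgW - cropW);
  var y = Math.min(Math.max(Math.round(focalY - cropH / 2), 0), imgH - cropH);
  return { x: x, y: y, width: cropW, height: cropH };
}
```

The clamping is what distinguishes this from a naive center crop: a focal point near an edge slides the crop window over rather than cutting the subject in half.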
How do we Find a Focal Point?
Focal points can be set manually for each image, but that simply trades one problem for another. The average content manager cannot be expected to manually choose the focus for every image. Instead, we use an algorithm!
First, we split the image into a set of horizontal and vertical strips. Next, we measure the average color of each strip and identify the one that differs most from the average color of the whole image; this metric is the first half of our algorithm. Then, we use a Sobel filter to measure the entropy, or edginess, of each strip; the strip containing the most distinct edges is most likely in sharp focus. This metric is the second half of our algorithm.
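The two metrics can be sketched in plain JavaScript on a grayscale image, stored as a flat array of 0–255 brightness values in row-major order. This is an illustration of the idea, not the plugin’s actual code: the function names are made up, and the layout is simplified to vertical strips and a single channel instead of three.

```javascript
// Metric 1: how far each vertical strip's mean brightness is from the
// mean brightness of the whole image. (A real implementation would
// compare colors across all three channels.)
function colorDifferencePerStrip(pixels, width, height, strips) {
  var total = 0;
  for (var i = 0; i < pixels.length; i++) total += pixels[i];
  var imageMean = total / pixels.length;

  var stripWidth = Math.floor(width / strips); // leftover columns ignored
  var scores = [];
  for (var s = 0; s < strips; s++) {
    var sum = 0, count = 0;
    for (var y = 0; y < height; y++) {
      for (var x = s * stripWidth; x < (s + 1) * stripWidth; x++) {
        sum += pixels[y * width + x];
        count++;
      }
    }
    scores.push(Math.abs(sum / count - imageMean));
  }
  return scores;
}

// Metric 2: total Sobel gradient magnitude per strip -- a rough proxy for
// how many sharp edges (how much "entropy") each strip contains.
function edgeEnergyPerStrip(pixels, width, height, strips) {
  var stripWidth = Math.floor(width / strips);
  var scores = new Array(strips).fill(0);
  for (var y = 1; y < height - 1; y++) {
    for (var x = 1; x < width - 1; x++) {
      var p = function (dx, dy) { return pixels[(y + dy) * width + (x + dx)]; };
      // Horizontal and vertical Sobel kernels.
      var gx = -p(-1, -1) - 2 * p(-1, 0) - p(-1, 1)
               + p(1, -1) + 2 * p(1, 0) + p(1, 1);
      var gy = -p(-1, -1) - 2 * p(0, -1) - p(1, -1)
               + p(-1, 1) + 2 * p(0, 1) + p(1, 1);
      var strip = Math.min(Math.floor(x / stripWidth), strips - 1);
      scores[strip] += Math.sqrt(gx * gx + gy * gy);
    }
  }
  return scores;
}
```

In the browser, the pixel data would come from drawing the image to a canvas and reading it back with `getImageData`.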
We weight these two values together to identify the area of the image with the most entropy and the color furthest from the average. This area is considered the focus of the image.
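The weighting step might look something like the sketch below. The normalization and the equal 0.5/0.5 weights are my assumptions for illustration; smartCrop’s actual weighting may differ.

```javascript
// Sketch: combine the two per-strip metrics into one score and pick the
// winning strip. Each metric is normalized to [0, 1] first, so neither
// dominates simply because its raw values are larger.
function pickFocalStrip(colorScores, edgeScores, colorWeight, edgeWeight) {
  function normalize(scores) {
    var max = Math.max.apply(null, scores);
    return scores.map(function (s) { return max > 0 ? s / max : 0; });
  }
  var color = normalize(colorScores);
  var edge = normalize(edgeScores);
  var best = 0, bestScore = -Infinity;
  for (var i = 0; i < colorScores.length; i++) {
    var score = colorWeight * color[i] + edgeWeight * edge[i];
    if (score > bestScore) {
      bestScore = score;
      best = i;
    }
  }
  return best; // index of the strip considered most "interesting"
}
```

Running this once over the horizontal strips and once over the vertical strips would yield an (x, y) pair: the focal point.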
This sounds Awesome! Can I use it for [project name]?
Unfortunately, the answer is not quite yet.
The current plugin is a very rough proof of concept. Calculating the focal point of an image currently takes a LOT of system resources, and the algorithms are far from exact. The plugin will need serious reworking before it is useful as anything but a curiosity.
However, I firmly believe that smartCrop could be the future.