Despite all its advantages, applying Jamstack to eCommerce websites with large catalogs and frequent updates involves many challenges. If you’re running an eCommerce site on a backend platform such as Salesforce Commerce Cloud, Magento, or SAP Hybris, you’re probably already facing some of them.
This article covers the key challenges in building large-scale eCommerce Jamstack sites and how Layer0 (now Edgio) can help you tackle these problems.
For the full version of Layer0 CTO Ishan Anand’s presentation at Jamstack Conference 2020, go to the official Layer0 YouTube channel.
Layer0 brings the advantages of Jamstack to eCommerce, accelerating site speeds and simplifying development workflows. By streaming cached data from the edge into the browser before it is requested, Layer0 can keep websites 5 seconds ahead of shoppers’ taps. Sharper Image, REVOLVE, and Shoe Carnival are just a few of the sites leveraging the Layer0 Jamstack platform to increase developer productivity and deliver sub-second websites.
Using Jamstack and headless for eCommerce typically involves the following challenges, especially on sites with large catalogs, frequent updates, or monolithic eCommerce platforms:
Long build times
Tricky site migrations
Data pipeline architecture
Customizations lost by APIs
Database connection limits
Styles embedded in CMS content
Backoffice workflow integration
Jamstack has high-traffic scalability built in. But the build step introduces a new scaling dimension, as typical static rendering happens during the build. As you expand your website or make changes more frequently, you exit the sweet spot where Jamstack is fast and agile. The result is build-time friction. It is easy to sweep the problem under the rug if you’re working on a small site, but that is not the case for the typical eCommerce site.
Another important thing to remember is that sites are built as much by non-developers as by developers. Because content, marketing, and merchandising teams constantly change things, build-time friction can quickly become a problem for the entire organization.
All this is to say that “at scale” happens more than you would think, and it’s not limited to eCommerce. Take a look at this comparison between retailers and news websites. For eCommerce sites, the number of SKUs is a proxy for the number of pages.
eCommerce sites with many products (SKUs)
Publishers with many articles
While you might think that only sites like Amazon deal with millions of SKUs, this is not true. Car parts websites are a great example—they host millions of products based on the year/make/model/vehicle search criteria (YMMV). For example, TruPar.com sells forklift parts exclusively, with 8M SKUs.
Thankfully, a few static and dynamic rendering techniques help deal with Jamstack at scale problems.
Optimizing build times
Incremental static (re)generation
Serverless server-side rendering + CDN
Parallel static rendering
Choosing the best rendering technique for each class of pages
Choosing a framework and platform that lets you mix techniques as needed
In the following paragraphs, we will discuss what these techniques mean.
With incremental builds, you can save build artifacts and only regenerate what’s changed. If only a single page is changed, you will regenerate that single page.
With parallel builds, the framework splits the build across multiple processes or threads. This is especially helpful for compute-heavy work like image processing.
The caveat is that framework and cloud provider support for parallel and incremental builds varies. Not all of them support these features, and those that do often offer only limited support.
There is also the issue of potential excess cost. If you have a large site with tens of thousands of SKUs or more, most of your traffic follows a power-law distribution, and you spend extra compute time rebuilding pages that will never be visited. The more you update the site, the larger the cost grows. Keep that in mind when evaluating these techniques.
According to willit.build (a Gatsby build benchmark page, which provides historical build times of sites built on Gatsby Cloud), build times for Contentful and WordPress sites run about 200ms per page, which means that a full build of a site with 10k pages could take over 30 minutes. Incremental builds can get that down to a few minutes, which shows their power—as long as you can avoid frequent full builds.
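As a sanity check on those numbers, here is a back-of-envelope calculation (the ~200ms per-page figure comes from willit.build; the 10k-page count is illustrative):

```javascript
// Back-of-envelope build-time math for a mid-sized catalog.
const perPageSeconds = 0.2;  // ~200 ms build time per page (willit.build)
const pages = 10000;         // illustrative page count

const fullBuildMinutes = (perPageSeconds * pages) / 60;
console.log(fullBuildMinutes.toFixed(1) + " minutes"); // ≈ 33.3 minutes
```

Every additional 10k pages adds roughly another half hour to a full build, which is why incremental and on-demand techniques matter at catalog scale.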
Also known as the app shell or the SPA fallback model, client-side rendering relies on CDN routing. If your site hosts a million products, the CDN routes all of those URLs to a single static index.html file containing an app shell. When the browser loads that page, the client-side router fetches and renders the page content in the browser.
With client-side rendering, you can effectively host an infinite number of pages, but there are some important considerations:
One caveat in implementing CSR is that it requires your CDN provider’s support for rewrite and redirect rules, and some handle this more elegantly than others. On AWS CloudFront, for example, you have to shoehorn it in through their 404-page support or use Lambda@Edge handlers.
Thankfully, the leading Jamstack platforms (Netlify, Vercel, and Layer0) offer a fairly easy way to enable CSR.
In Netlify, you use a redirects file. With the 200 status modifier, the rule becomes a rewrite: a hidden redirect that the user never sees.
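For example, a minimal Netlify `_redirects` file that rewrites every path to the app shell looks like this (the 200 status makes it a rewrite rather than a visible redirect):

```
/*  /index.html  200
```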
Vercel offers rewrites support in vercel.json, and it also integrates very tightly with Next.js.
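The equivalent app-shell rewrite in a vercel.json might look like this (a minimal sketch; adjust the pattern to your routes):

```json
{
  "rewrites": [
    { "source": "/(.*)", "destination": "/index.html" }
  ]
}
```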
This technique was pioneered by Next.js and involves generating new static pages on demand in response to incoming traffic. The browser requests a page that has not yet been visited, and for every such page—regardless of what the page is—the CDN quickly returns a universal fallback page that contains only placeholder data and no content.
While the fallback page is displayed, the page’s static build process runs in the background. When that build completes, the fallback page loads the static JSON data and displays the final page. From then on, future visits will get the statically built HTML.
You can see an example here
When you visit https://static-tweet.now.sh/1346427855052353545 you’ll notice that if the tweet has never been rendered before, you’ll get a skeleton page. This happens only the first time you visit the page. If you refresh, you’ll get the static HTML, no matter what edge in the global network you are visiting. And every future visit will get the statically generated HTML page.
Because it’s static HTML, even if Twitter disappears from the internet, you still have strong guarantees of its high availability, backed by redundant storage.
So, you can imagine a site with no pages built out, and as traffic comes in, it’s gradually building static pages.
There is a version of incremental static generation called incremental static regeneration, which is essentially the same process, but it updates an existing static page in response to traffic. If the underlying data changes, it re-runs the build process, inspired by stale-while-revalidate, a popular yet underappreciated cache protocol. It serves a stale version of the page instead of the fallback while it rebuilds the page, and then swaps in the new version once the build process finishes.
Incremental static regeneration:
Updates existing static pages in response to traffic,
Serves a stale version of the page instead of a fallback.
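A sketch of how both techniques look in Next.js (the route, the `fetchProduct` helper, and the 60-second revalidation window are all illustrative, not taken from a real codebase):

```javascript
// pages/products/[id].js (sketch): a hypothetical Next.js product page
// using incremental static generation/regeneration.

async function fetchProduct(id) {
  // Placeholder for a real catalog API call.
  return { id, name: `Product ${id}` };
}

export async function getStaticPaths() {
  // Build no product pages at deploy time; generate each page on its
  // first request, serving a fallback skeleton in the meantime.
  return { paths: [], fallback: true };
}

export async function getStaticProps({ params }) {
  const product = await fetchProduct(params.id);
  return {
    props: { product },
    // ISR: re-run this page's build at most once per minute as traffic arrives.
    revalidate: 60,
  };
}

// The default-exported React component that renders `product` is omitted here.
```

With `paths: []` and `fallback: true`, deploys stay instant no matter how large the catalog is, while `revalidate` keeps already-built pages fresh without a full rebuild.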
Incremental static generation has a minor impact on SEO and compatibility, especially on a page’s first visit. The fallback page is entirely client-side rendered and contains no data, so it’s unclear how bots will respond to it.
In addition to static techniques, eCommerce websites can also benefit from dynamic techniques like:
Serverless server-side rendering + CDN
Parallel static rendering
Using SSR in conjunction with the CDN allows you to generate pages on demand in response to traffic, which gives you some advantages. This technique is also more compatible with how traditional eCommerce platforms are made. It lets you support many pages—you can dynamically generate them when needed—and ensures high compatibility with legacy platforms.
However, this technique is also a little controversial. The Jamstack community tends to be very dogmatic about what Jamstack is and asserts that Jamstack requires static generation.
Serverless server-side rendering is effectively Jamstack-ish when two conditions are met:
1. Zero DevOps and servers to manage. It’s serverless, so developers don’t have to manage scaling. It’s the same serverless compute that many Jamstack platforms use to power their APIs, which means it can also serve HTML through SSR.
2. HTML is served from the CDN. This is a critical condition. After the first cache miss, the CDN-served site is as fast as a static-generated Jamstack site. Please note that this requires proper cache management and is harder for multi-page sites.
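As a sketch of what condition 2 can look like in practice, here is a hypothetical Layer0 routes.js that serves SSR output through the edge cache (the route path and cache lifetimes are illustrative):

```javascript
// routes.js (sketch): serve server-rendered HTML through the edge cache.
const { Router } = require('@layer0/core/router');

module.exports = new Router().get('/products/:id', ({ cache, renderWithApp }) => {
  cache({
    edge: {
      maxAgeSeconds: 60 * 60,                    // serve from the edge for an hour...
      staleWhileRevalidateSeconds: 60 * 60 * 24, // ...then refresh in the background
    },
  });
  renderWithApp(); // fall through to the framework's server-side rendering
});
```

After the first cache miss, every subsequent request for that product page is served from the edge, just as a statically generated page would be.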
Layer0 allows you to specify the set of URLs that should be pre-rendered and cached at the edge during deployment to ensure that users get a sub-second experience when accessing your site.
Static pre-rendering involves sending requests to your application code and caching the result right after your site is deployed. In this way, you simply build your app to implement server-side rendering and get the speed benefits of a static site for some or all of your pages. This feature is especially useful for large, complex sites with too many URLs to prerender without incurring exceptionally long build times.
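In Layer0, the set of URLs to pre-render can be declared in routes.js (a sketch; the paths are illustrative):

```javascript
// routes.js (sketch): pre-render and cache these URLs at the edge on
// every deploy. The paths below are illustrative.
const { Router } = require('@layer0/core/router');

module.exports = new Router()
  .prerender([
    { path: '/' },
    { path: '/categories/shoes' },
    { path: '/products/best-seller-1' },
  ]);
  // ...followed by your normal routes
```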
SSR preloading is another technique Layer0 uses to accelerate page speeds. It is very similar to the regular SSR pipeline, but it is based on analyzing traffic logs after deployment: the highest-trafficked pages are pre-loaded in parallel with the deploy. The deploy happens instantaneously, and the high-traffic pages are built asynchronously. In this way, you decouple the deploy from the build, so you get immediate deploys while also maximizing cache hits.
Essentially, if there is a request for a high-traffic page, it will most likely be a cache hit. It’s the best way to maximize cache hits in this environment.
Parallel static rendering allows you to:
Analyze logs for high-traffic pages
Fetch and store HTML for high-traffic pages asynchronously after deploy
Immediately deploy while maximizing cache hits
You don’t have to choose between static and dynamic rendering techniques. You can choose what’s right for each class of pages on your site. You might want to declare pages like “About Us,” “Return Policy,” or the blog static, and pages like cart, product, and category as dynamic. We recommend choosing a platform provider that lets you flexibly mix the techniques as needed, especially if you’re doing this at scale.
Choose the best rendering technique for each class of pages, e.g., declare some pages static (e.g., blog, about us, etc.), and other pages dynamic (e.g., cart, products, categories, etc.)
Choose a framework and platform provider that lets you flexibly mix techniques as needed
Jamstack at scale with Layer0
Jamstack takes the server out of the equation and effectively lets the CDN manage the traffic, which it can do with ease regardless of traffic fluctuations. Layer0 does the same but differently: instead of rendering at build time, we render on request and cache each rendered page at the edge, so a page only needs to be built once.
Rendering each page at build time is fine for smaller sites, but build times become almost unbearable once the site grows. The lack of customization and personalization, or the workarounds needed to deliver them, makes Jamstack’s focus on build-time rendering less relevant for large-scale database-driven websites like eCommerce and travel.
Edge rules live in your code, just like in classic Jamstack, giving you complete control over the edge with live logs, versioning, and 1-click rollbacks.
To maximize cache hit rates, it’s important to know what these rates are in the first place, but this information is usually buried deep in your CDN’s access logs.
Layer0 has built-in performance monitoring, making it easier to understand when page cache hits and misses happen and exposing this information to the developer in a very friendly way. The Performance Monitor in Layer0 allows you to:
Understand traffic based on routes, not URLs, because that’s how developers think about their app. It also tracks each deploy, so developers can pinpoint any regression.
Measure performance issues across the stack and loading scenarios (API, SSR, Edge, etc.)
Layer0 has also created a tool to diagnose whether a response comes from the edge or the origin: DevTools. The example below presents how it works on top of an app shell built with React Storefront, showing when a request hits. The response in this example is coming through the Layer0 (now Edgio) edge network.
Layer0 DevTools allow you to diagnose whether responses come from the edge or origin
Understanding if a response comes from the edge or the origin is critical for prefetching at scale, which is another thing Layer0 does for you.
Prefetching is important for performance because it unlocks instant page speeds. Traditional page speed tests, like those you run with Lighthouse, focus on what happens after the customer clicks. But you can do a lot before the customer taps, achieving effectively zero latency and almost infinite bandwidth.
Layer0 offers iterative (gradual, progressive) migration, which lets you iteratively migrate one section of the app at a time, following Martin Fowler’s strangler pattern. This way, you incrementally “strangle” specific functionalities and replace them with new applications and services. It’s like moving a mountain stone by stone.
Incremental, gradual, progressive migration is important for large sites. Personalization is important too, and it’s not limited to user-specific content: it also covers language, geography, etc. That makes sense, because large sites usually operate across geographies and must be able to customize content for users as they visit the site.
The general guideline is: if the personalized content is below the fold, we recommend late-loading and client-side rendering it. If it’s above-the-fold personalized content, you want it in the server-rendered output.
Above the fold personalized = add personalization to the cache key
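In Layer0, for example, this can be done by adding a cookie to the edge cache key, so each user segment gets its own cached variant (a sketch; the `user_segment` cookie name and route are hypothetical):

```javascript
// routes.js (sketch): split the edge cache by a personalization cookie so
// each segment gets its own cached variant of above-the-fold content.
const { Router, CustomCacheKey } = require('@layer0/core/router');

module.exports = new Router().get('/products/:id', ({ cache, renderWithApp }) => {
  cache({
    key: new CustomCacheKey().addCookie('user_segment'), // hypothetical cookie
    edge: { maxAgeSeconds: 60 * 60 },
  });
  renderWithApp();
});
```

The trade-off is cache fragmentation: every value added to the key multiplies the number of cached variants, so only personalize the key on dimensions that actually change the above-the-fold HTML.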
A/B testing and personalization add a new layer of complexity to building Jamstack sites. Testing is very important for large sites and big organizations, where decisions are ROI driven and must be proven to improve conversion rates.
The problems with client-side A/B testing
Usually, the only option for static sites
Poor performance that possibly nullifies the test
Layer0 Edge Experiments remedy these problems by enabling A/B testing at the edge. On the XDN, new experiences are always native, cached, and sub-second. This extends beyond A/B tests to any variant of your website.
Edge Experiments allow you to:
Route live traffic to any deployed branch at the edge of the network
Run A/B tests, canary deploys, or feature flags
Write routing rules based on probabilities, header values, and more
With Edge Experiments, you can easily split tests without affecting your site's performance. Splits are executed at the edge through an easy-to-use yet powerful interface. Edge Experiments can be used for A/B and multivariate tests, canary deploys, blue-green tests, iterative migration off of a legacy website, personalization, and more.
Layer0 provides a frictionless transition to Jamstack and headless and offers a huge advantage for sites with large catalogs, frequent updates, or those running legacy eCommerce platforms. Shoe Carnival and Turnkey Vacation Rentals are two examples of developer teams at large sites using Jamstack and headless for eCommerce on Layer0.
TurnKey Vacation Rentals is a full-service vacation rental property management company for premium and luxury-level rental homes in top travel destinations across the country. Unlike sites like Airbnb, TurnKey offers only pre-vetted listings. It also handles management details centrally, using a standardized set of tech tools.
TurnKey was running an app inside of Docker on AWS Elastic Beanstalk and was looking for a solution to provide them with greater control and insight into performance.
They considered a few Jamstack solutions but wanted a platform that would support Next.js natively, like Layer0. One of the deciding factors was that with Layer0, they could avoid refactoring how their codebase and data pipeline worked.
Layer0 has helped TurnKey increase agility with some of the features listed below.
In the past, TurnKey used a custom pipeline built inside of Jenkins, and the team was deploying from a trunk branch, never having complete confidence in what was about to go out to production.
With Layer0, branches have individual environments, and the team at TurnKey can set up pristine environments—they don’t merge into the staging environment until they know something has passed QA. This removes the mental burden associated with QA.
Digging through server logs on Beanstalk can be a nightmare—you have to figure out exactly which logs you're looking for, which server they’re on, if they’re load-balanced, etc. With Layer0, you can live stream logs directly from your build, which allows you to find the build you want to troubleshoot, press play, and watch the log.
TurnKey had pages that were not on React/Next.js and still ran on the old architecture. With Layer0, they could take what they’d already migrated, put it on the XDN, and continue migrating incrementally.
Layer0 gave the team at TurnKey tools to focus on performance.
Shoe Carnival Inc. is an American retailer of footwear. The company currently operates an online store alongside 419 brick-and-mortar stores throughout the US Midwest, South, and Southeast regions.
Below are some of Layer0's features that the Shoe Carnival team found especially useful.
Shoe Carnival uses Salesforce Commerce Cloud, which was not designed to power a headless frontend like Shoe Carnival’s. So a lot of engineering and understanding was required on the backend side to deliver the data to the frontend. Those challenges were solved thanks to the flexibility of the Layer0 backend sitting between Salesforce and the React frontend. The team at Shoe Carnival could freely build with React and ignore the limitations of Salesforce.
Time to production boost
Shoe Carnival’s time to production dramatically improved. The team can work independently of Salesforce development cycles and make very quick changes in deployment.
Speed to production is a huge benefit, but the site performance in general is hard to ignore, as Shoe Carnival went from 5-6 second average page loads to sub-second. They can cache things at a very granular level and have the tools to ensure that what customers are looking for is always available and up to date.
Incremental deployment lets the team deploy to production much faster than building a complete application to deploy it.
As for the impact of the migration to Layer0: when Shoe Carnival split-tested the origin site against the headless site 50/50 at the CDN level, the headless site always won, outperforming the origin site on conversions and improving speed and visibility.
At Layer0, we believe Jamstack is the future of web development. Layer0 essentially brings the performance and simplicity benefits of Jamstack to front-end developer teams at large, dynamic eCommerce sites where traditional static techniques typically don't apply. We like to call it dynamic Jamstack. It makes SPA websites instant-loading and easier to develop.
Layer0 is an all-in-one development platform that lets you:
Utilize Jamstack for eCommerce via both pre-rendering and just-in-time rendering
Enable zero latency networking via prefetching of data from your product catalog APIs
Run edge rules locally and in pre-prod
Create preview URLs from GitHub, GitLab, or Bitbucket with every new branch and push
Run splits at the edge for performant A/B tests, canary deploys, and personalization