In spite of all its advantages, applying Jamstack to eCommerce websites with large catalogs and frequent updates involves dealing with a wide range of challenges. If you’re running an eCommerce site on a backend platform such as Salesforce Commerce Cloud, Magento, or SAP Hybris, you’re probably already facing some of them.
In this article, we cover the key challenges in building large-scale eCommerce Jamstack sites and how Layer0 can help you tackle these problems.
For the full version of Layer0 CTO Ishan Anand’s presentation at Jamstack Conference 2020, go to the official Layer0 YouTube channel.
Layer0 brings the advantages of Jamstack to eCommerce, accelerating site speeds and simplifying development workflows. By streaming cached data from the edge into the browser, before it is requested, Layer0 is able to keep websites 5 seconds ahead of shoppers’ taps. Sharper Image, REVOLVE, and Shoe Carnival are just a few examples of sites leveraging the Layer0 Jamstack platform to increase developer productivity and deliver their sub-second websites.
Using Jamstack and headless for eCommerce, especially on sites with large catalogs, frequent updates, or those on monolithic eCommerce platforms, is typically associated with dealing with the following challenges:
Long build times
Tricky site migrations
Data pipeline architecture
Customizations lost by APIs
Database connection limits
Styles embedded in CMS content
Backoffice workflow integration
Jamstack has high traffic scalability built in. But the build step introduces a new scaling dimension, because typical static rendering happens at build time. As you expand your website or make more frequent changes, you exit the sweet spot where Jamstack is really fast and agile. The result is build time friction. It is easy to sweep the problem under the rug if you’re working on a small site, but that is not the case for the typical eCommerce site.
Another important thing to remember is that sites are built as much by non-developers as they are by developers. Content, marketing, and merchandising teams constantly change things, so build time friction can quickly become a problem for the entire organization.
All of this is to say that “at scale” happens more often than you would think, and it’s not limited to eCommerce. Take a look at this comparison between retailer and news websites. For eCommerce sites, the number of SKUs is a proxy for the number of pages.
[Table: eCommerce sites with many products (SKUs) vs. publishers with many articles]
While you might think that only sites like Amazon have to deal with millions of SKUs, this is not true. Car parts websites are a great example: they host millions of products based on year/make/model vehicle search criteria. For example, TruPar.com sells forklift parts exclusively and has 8M SKUs.
Thankfully, there are a few static and dynamic rendering techniques that help deal with Jamstack-at-scale problems.
Optimizing build times
Incremental static (re)generation
Serverless server-side rendering + CDN
Parallel static rendering
Choosing the best rendering technique for each class of pages
Choosing a framework and platform that let you mix techniques as needed
In the following paragraphs we will discuss what these techniques really mean.
With incremental builds you can save build artifacts and only regenerate what’s changed. If only a single page changed, you will regenerate that single page.
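To make the idea concrete, here is a minimal sketch of the logic behind incremental builds, assuming a hypothetical buildPage() renderer and a JSON manifest of content hashes left over from the previous build. It is not any particular framework’s implementation:

```js
const crypto = require('crypto')
const fs = require('fs')

// Hash of the source data that a page is rendered from.
const hashOf = (data) =>
  crypto.createHash('sha256').update(JSON.stringify(data)).digest('hex')

// Manifest of content hashes saved by the previous build (empty on a cold build).
const previous = fs.existsSync('.build-manifest.json')
  ? JSON.parse(fs.readFileSync('.build-manifest.json', 'utf8'))
  : {}

async function incrementalBuild(pages, buildPage) {
  const manifest = {}
  for (const page of pages) {
    const hash = hashOf(page.data)
    manifest[page.path] = hash
    // Only re-render pages whose source data actually changed.
    if (previous[page.path] !== hash) {
      await buildPage(page) // hypothetical: renders the HTML for this page
    }
  }
  fs.writeFileSync('.build-manifest.json', JSON.stringify(manifest))
}

module.exports = { incrementalBuild }
```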
With parallel builds, the framework splits the build across multiple processes or threads. This is especially helpful for image processing.
The caveat here is that framework and cloud provider support for parallel and incremental builds varies. Not all of them support these features, and those that do often offer only limited support.
There is also the issue of potential excess cost. If you have a large site with tens of thousands of SKUs or more, most of your traffic follows a power-law distribution, and you spend extra compute time rebuilding pages that will never be visited. The more you update the site, the larger that cost grows. Keep that in mind when evaluating these techniques.
According to willit.build (a Gatsby build benchmark page that publishes historical build times of sites built on Gatsby Cloud), build times for Contentful and WordPress sites are about 200ms per page, which means a full build of a 10k-page site can take upwards of 25 minutes. Incremental builds can get you down to a few minutes, which shows their power. This technique can be really helpful as long as you can avoid frequent full rebuilds.
Also known as the app shell or SPA fallback model, client-side rendering is basically CDN routing. If your site hosts a million products, the CDN layer routes all of those URLs to a single static index.html that contains only an app shell. When that page is loaded by the browser, the client-side router fetches and renders the page content in the browser.
With client-side rendering you can effectively host an infinite number of pages, but there are some important considerations:
A key caveat of implementing CSR is that it requires your CDN provider’s support for rewrite and redirect rules, and some do it more elegantly than others. On AWS CloudFront, for example, you basically have to shoehorn this in through their 404 page support or use Lambda@Edge handlers.
Thankfully, the leading Jamstack platforms (Netlify, Vercel, and Layer0) offer a fairly easy way to enable CSR.
In Netlify, you add a rule to the _redirects file. With the 200 status modifier, the rule acts as a rewrite: a hidden redirect that the user never sees.
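For example, a single rule in the _redirects file sends every path to the app shell while returning a 200 status, so the URL in the browser never changes:

```
# _redirects: send every path to the SPA shell; 200 makes it a rewrite, not a redirect
/*    /index.html    200
```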
Vercel offers rewrite support in vercel.json, and it also integrates very tightly with Next.js.
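The equivalent SPA fallback in vercel.json looks roughly like this:

```json
{
  "rewrites": [
    { "source": "/(.*)", "destination": "/index.html" }
  ]
}
```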
This technique was pioneered by Next.js and involves generating new static pages on demand in response to incoming traffic. When the browser requests a page that has not yet been built, the CDN quickly returns a universal fallback page that contains only placeholder markup and no content, regardless of which page was requested.
While the fallback page is displayed, the page’s static build process runs in the background. When that build completes, the fallback page loads the static JSON data and displays the final page. From then on, future visits get the statically built HTML.
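In Next.js, this corresponds to the fallback mode of getStaticPaths. Here is a minimal sketch of a product page that uses it; the catalog API URL is illustrative, not a real endpoint:

```jsx
// pages/product/[id].js
import { useRouter } from 'next/router'

export async function getStaticPaths() {
  // Build no product pages up front; generate each one on its first request.
  return { paths: [], fallback: true }
}

export async function getStaticProps({ params }) {
  // Illustrative catalog API call; replace with your own data source.
  const res = await fetch(`https://api.example.com/products/${params.id}`)
  const product = await res.json()
  return { props: { product } }
}

export default function ProductPage({ product }) {
  const router = useRouter()
  // Shown while the static version is being generated in the background.
  if (router.isFallback) return <p>Loading…</p>
  return <h1>{product.name}</h1>
}
```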
You can see a live example in the static tweet demo built with Next.js.
When you visit https://static-tweet.now.sh/1346427855052353545 you’ll notice that if the tweet has never been rendered before, you’ll get a skeleton page. This happens only the first time you visit the page. If you refresh, you’ll get the static HTML, no matter what edge in the global network you are visiting. And every future visit will get the statically generated HTML page.
Actually, because it’s static HTML, even if Twitter disappears from the internet, you still have strong guarantees of its high availability, backed by redundant storage.
So, you can imagine a site that has no pages built out, and as traffic comes in, it’s gradually building static pages.
There is a variation of incremental static generation called incremental static regeneration, which is essentially the same process, except that it updates an existing static page in response to traffic. If the underlying data has changed, it re-runs the build process. The approach is inspired by stale-while-revalidate, a popular yet underappreciated caching strategy: while the page is being rebuilt, a stale version is served instead of the fallback, and it is swapped for the new version once the build finishes (see the Next.js sketch after the list below).
Incremental static regeneration:
Updates existing static pages in response to traffic,
Serves a stale version of the page instead of a fallback.
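In Next.js, regeneration is enabled by adding a revalidate interval to getStaticProps. Building on the product page sketch above:

```jsx
export async function getStaticProps({ params }) {
  const res = await fetch(`https://api.example.com/products/${params.id}`)
  const product = await res.json()
  return {
    props: { product },
    // Rebuild this page in the background at most once every 60 seconds,
    // serving the existing (stale) HTML while the rebuild runs.
    revalidate: 60,
  }
}
```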
Incremental static generation has a minor impact on SEO and compatibility, especially on the first render of a page. The fallback page is entirely client-side rendered and contains no data, so it’s not quite clear how bots will respond to it.
In addition to static techniques, eCommerce websites can also benefit from dynamic techniques like:
Serverless server-side rendering + CDN
Parallel static rendering
Using SSR in conjunction with the CDN allows you to generate pages on demand in response to traffic, which gives you a number of advantages. This technique is also more compatible with how traditional eCommerce platforms are made. It lets you support a large number of pages—you can dynamically generate these pages when needed—and ensures high compatibility with legacy platforms.
However, this technique is also a little controversial. The Jamstack community tends to be very dogmatic about what Jamstack is and asserts that Jamstack requires static generation.
Serverless server-side rendering is effectively Jamstack-ish when 2 conditions are met:
1. Zero DevOps and no servers to manage. Basically, it’s serverless: developers don’t have to manage scaling in any way. In fact, it’s the same serverless infrastructure that a lot of Jamstack platforms use to power their APIs, which means you can use it to serve HTML via SSR just as you serve API data.
2. HTML is served from the CDN. This is a really critical condition. After the first cache miss, the CDN-served site is effectively as fast as a statically generated Jamstack site. Please note that this requires proper cache management and is harder to do for multi-page sites.
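One common way to satisfy the second condition is to have the SSR code emit CDN cache headers so the rendered HTML is stored at the edge. Here is a hedged Next.js sketch; the header values and API URL are illustrative:

```jsx
export async function getServerSideProps({ res, params }) {
  // Let the CDN cache the server-rendered HTML for 5 minutes and keep serving
  // a stale copy for up to a day while it revalidates in the background.
  res.setHeader(
    'Cache-Control',
    'public, s-maxage=300, stale-while-revalidate=86400'
  )

  // Illustrative catalog API call.
  const product = await fetch(`https://api.example.com/products/${params.id}`)
    .then((r) => r.json())

  return { props: { product } }
}
```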
Layer0 allows you to specify the set of URLs that should be pre-rendered and cached at the edge during deployment to ensure that users get a sub-second experience when accessing your site.
Static pre-rendering involves sending requests to your application code and caching the result right after your site is deployed. In this way, you simply build your app to implement server-side rendering and get the speed benefits of a static site for some or all of your pages. This feature is especially useful for large, complex sites that have too many URLs to prerender without incurring exceptionally long build times.
SSR preloading is another technique Layer0 uses to accelerate page speeds. It is very similar to the regular SSR pipeline, but it is based on an analysis of traffic logs after deployment. The most highly trafficked pages are pre-rendered in parallel with the deploy: the deploy happens instantaneously, and the high-traffic pages are built asynchronously. In this way, you decouple deploy from build, so you get immediate deploys while also maximizing cache hits.
Essentially, if there is a request for a high-traffic page, it will most likely be a cache hit. It’s the most effective way to maximize cache hits in this environment.
Parallel static rendering allows you to:
Analyze logs for high traffic pages
Fetch and store HTML for high traffic pages asynchronously after deploy
Immediately deploy while maximizing cache hits
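To illustrate the concept (this is a generic sketch, not Layer0’s internal implementation), a post-deploy cache-warming script could read the high-traffic paths extracted from access logs and request them in parallel batches. SITE_URL and top-paths.txt are assumptions:

```js
// warm-cache.js: run asynchronously after a deploy finishes (Node 18+ for global fetch).
const fs = require('fs')

const SITE_URL = process.env.SITE_URL || 'https://www.example.com'
// One path per line, e.g. the top pages pulled from CDN access logs.
const paths = fs.readFileSync('top-paths.txt', 'utf8').split('\n').filter(Boolean)

async function warm() {
  const batchSize = 10 // small parallel batches so the origin is not overwhelmed
  for (let i = 0; i < paths.length; i += batchSize) {
    const batch = paths.slice(i, i + batchSize)
    await Promise.all(
      batch.map((path) =>
        fetch(SITE_URL + path).then((res) => console.log(res.status, path))
      )
    )
  }
}

warm()
```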
You don’t have to choose between static and dynamic rendering techniques. You can choose what’s right for each class of pages on your site. You might want to declare the “About us,” “Return Policy” or blog as static, and other pages like cart, product and categories as dynamic. We recommend that you choose a platform provider that lets you flexibly mix the techniques as needed, especially if you’re doing this at scale.
Choose the best rendering technique for each class of pages, e.g.: declare some pages static (e.g. blog, about us, etc.), and other pages dynamic (e.g. cart, products, categories, etc.)
Choose a framework and platform provider that lets you flexibly mix techniques as needed
Jamstack at scale with Layer0
Jamstack takes the server out of the equation and effectively lets the CDN manage the traffic, which it can do with ease regardless of traffic fluctuations. Layer0 does the same but in a different manner: instead of rendering at build time, we render on request, but cache each rendered page at the edge, so after the first render no further build is required.
Rendering each page at build time is fine for smaller sites, but once your site grows, build times become almost unbearable, and the lack of customization and personalization (or the workarounds needed to deliver them) makes build-time-focused Jamstack less relevant for large-scale, database-driven websites like eCommerce and travel.
Edge rules live in your code, just like in classic Jamstack, giving you complete control over the edge with live logs, versioning and 1-click rollbacks.
To maximize cache hit rates it’s important to know what these rates really are in the first place, but this information is usually buried deep in your CDN’s access logs.
Layer0 comes with performance monitoring built-in, making it easier to understand when page cache hits and misses happen, and exposing this information to the developer in a very friendly way. The Performance Monitor in Layer0 allows you to:
Understand traffic based on routes, not URLs, because that’s how developers think about their app. It also tracks each deploy, so developers can pinpoint any regression.
Measure performance issues across the stack and loading scenarios (API, SSR, Edge, etc.)
Layer0 has also created a tool to diagnose whether a response is coming from the edge or the origin: DevTools. The example below shows how it works on top of an app shell built with React Storefront, showing where each request hits. The response in this example is coming through the Layer0 edge network.
Layer0 DevTools allow you to diagnose whether responses come from the edge or origin
Understanding whether a response comes from the edge or the origin is critical for prefetching at scale, which is another thing Layer0 does for you.
Prefetching is important for performance because it unlocks instant page speeds. Traditional page speed tests, like what you test with Lighthouse, are really focused on what happens after the customer clicks. But you can start to do a lot before the customer taps, and actually get zero latency and almost infinite bandwidth.
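As a generic illustration of the idea (not Layer0’s prefetching SDK), the snippet below starts prefetching a linked page as soon as its link scrolls into view, so the response is already in the browser cache when the shopper taps:

```js
// Prefetch links marked with data-prefetch once they enter the viewport.
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue
    const hint = document.createElement('link')
    hint.rel = 'prefetch'           // low-priority fetch into the HTTP cache
    hint.href = entry.target.href   // the page (or its JSON data) to prefetch
    document.head.appendChild(hint)
    observer.unobserve(entry.target)
  }
})

document.querySelectorAll('a[data-prefetch]').forEach((a) => observer.observe(a))
```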
Layer0 offers iterative (gradual, progressive) migration, which lets you migrate one section of the app at a time, following Martin Fowler’s strangler pattern. This way you incrementally “strangle” specific functionalities and replace them with new applications and services. It’s like moving a mountain stone by stone.
Incremental, gradual, progressive migration is important for large sites. Customizing content is just as important, and it’s not limited to personalization: it also covers language, geography, and so on. It makes sense because large sites usually operate across geographies, and it’s crucial for them to be able to tailor content to users as they visit the site.
The general guideline is: if the personalized content is below the fold, we recommend late-loading it with client-side rendering. If it’s above-the-fold personalized content, then you really want it in the server-rendered output.
Above the fold personalized = add personalization to cache key
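A hedged sketch of what adding personalization to the cache key can look like: the key combines the path with a coarse user segment read from a cookie. The cookie name and segment values are assumptions for illustration, not a specific CDN’s API:

```js
// Build an edge cache key that includes a coarse personalization segment,
// so each segment gets its own cached copy of the server-rendered HTML.
function cacheKeyFor(request) {
  const url = new URL(request.url)
  const cookies = Object.fromEntries(
    (request.headers.get('cookie') || '')
      .split(';')
      .map((c) => c.trim().split('=').map(decodeURIComponent))
      .filter((pair) => pair.length === 2)
  )
  // Keep the number of segments small: every extra value fragments the cache.
  const segment = cookies['user_segment'] || 'anonymous'
  return `${url.pathname}${url.search}|segment=${segment}`
}
```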
A/B testing and personalization add a whole new layer of complexity in building Jamstack sites. Testing is very important for large sites and big organizations, where decisions are ROI driven and must be proven to improve conversion rates.
The problems of client-side A/B testing
Usually the only option for static sites
Poor performance that possibly nullifies the test
Layer0 Edge Experiments remedy these problems by enabling A/B testing at the edge. On the XDN, new experiences are always native, cached and sub-second. This extends beyond A/B tests to any variant of your website.
Edge Experiments allow you to:
Route live traffic to any deployed branch at the edge of the network
Run A/B tests, canary deploys, or feature flags
Write routing rules based on probabilities or header values
With Edge Experiments, you can easily split tests without affecting the performance of your site. Splits are executed at the edge through an easy-to-use yet powerful interface. Edge Experiments can be used for A/B and multivariate tests, canary deploys, blue-green tests, iterative migration off of a legacy website, personalization, and more.
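As a generic sketch of how a probability-based split can run at the edge (edge-worker style, not Layer0’s actual Edge Experiments API), the handler below assigns a bucket on first visit, pins it with a cookie so the experience stays consistent, and proxies to the matching deployment. The origins and cookie name are assumptions:

```js
// Generic edge-worker sketch of a sticky 50/50 split between two deployments.
const ORIGINS = {
  control: 'https://legacy.example.com',
  experiment: 'https://headless.example.com',
}

async function handleRequest(request) {
  const cookie = request.headers.get('cookie') || ''
  // Reuse the visitor's existing bucket if they already have one.
  let bucket = (cookie.match(/ab_bucket=(control|experiment)/) || [])[1]
  const isNewVisitor = !bucket
  if (isNewVisitor) bucket = Math.random() < 0.5 ? 'control' : 'experiment'

  const url = new URL(request.url)
  const upstream = await fetch(ORIGINS[bucket] + url.pathname + url.search, {
    headers: request.headers,
  })

  // Copy the response so headers are mutable, then pin the bucket for a year.
  const response = new Response(upstream.body, {
    status: upstream.status,
    headers: upstream.headers,
  })
  if (isNewVisitor) {
    response.headers.append(
      'set-cookie',
      `ab_bucket=${bucket}; Path=/; Max-Age=31536000`
    )
  }
  return response
}
```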
Layer0 provides a frictionless transition to Jamstack and headless and offers a huge advantage for sites with large catalogs, frequent updates, or those running legacy eCommerce platforms. Shoe Carnival and TurnKey Vacation Rentals are two examples of developer teams at large sites that are using Jamstack and headless for eCommerce on Layer0.
TurnKey Vacation Rentals is a full-service vacation rental property management company for premium and luxury-level rental homes in top travel destinations across the country. Unlike sites like Airbnb, TurnKey offers only pre-vetted listings. It also handles management details centrally, using a standardized set of tech tools.
TurnKey was running an app inside of Docker on AWS Elastic Beanstalk and was looking for a solution that would provide greater control and insight into performance.
They considered a couple of Jamstack solutions, but wanted a platform that would support Next.js natively, like Layer0. The fact that with Layer0 they could avoid refactoring how their codebase and data pipeline worked was one of the deciding factors.
Layer0 has helped TurnKey increase agility with a number of features listed below.
In the past, TurnKey used a custom pipeline built inside of Jenkins, and the team was deploying from a trunk branch, never having complete confidence in what was getting ready to go out into production.
With Layer0 the branches have individual environments, and the team at TurnKey can set up pristine environments: they don’t merge into the staging environment until they know something has passed QA. This removes the mental burden associated with QA.
Digging through server logs on Beanstalk can be a nightmare: you have to figure out exactly which logs you're looking for, which server they’re on, whether they’re load balanced, and so on. With Layer0 you can live stream logs directly from your build, which allows you to find the build you want to troubleshoot, press play, and watch the log.
TurnKey had pages that were not on React/Next.js and were running on the old architecture. With Layer0 they could take what they’d already migrated, put that on the XDN, and continue migrating incrementally.
Layer0 gave the team at TurnKey tools to focus on performance.
Shoe Carnival Inc. is an American retailer of footwear. The company currently operates an online store alongside 419 brick-and-mortar stores throughout the Midwest, South, and Southeast regions of the US.
Below are some of Layer0 features that the Shoe Carnival team found especially useful.
Shoe Carnival uses Salesforce Commerce Cloud, which is not really meant to run headless frontends like Shoe Carnival’s. So there was a lot of engineering and understanding needed on the backend side to get the data to the frontend. Those challenges could be solved thanks to the flexibility offered by the Layer0 backend sitting between Salesforce and the React frontend. The team at Shoe Carnival could freely build with React and ignore the limitations of Salesforce.
Time to production boost
Shoe Carnival’s speed to production dramatically increased. The team can work independently of Salesforce development cycles and make very quick changes in deployment.
Speed to production is a huge benefit, but the site performance in general is hard to ignore, as Shoe Carnival went from 5-6 second average page loads to sub-second loads. They can cache things at a very granular level and have the tools to make sure that what customers are looking for is always available and up to date.
Incremental deployment let the team deploy to production much faster than having to build a complete application to deploy it.
As for the impact of the migration to Layer0, when Shoe Carnival split traffic 50/50 at the CDN level to test the origin site against the headless site for conversions, the headless site always won, outperforming the origin site and improving speed and visibility.
At Layer0, we believe Jamstack is the future of web development. Layer0 essentially brings the performance and simplicity benefits of Jamstack to front-end developer teams at large, dynamic eCommerce sites where traditional static techniques typically don't apply. We like to call it dynamic Jamstack. It makes SPA websites instant-loading and easier to develop for.
Layer0 is an all-in-one development platform that lets you:
Utilize Jamstack for eCommerce via both pre-rendering and just-in-time rendering
Enable zero latency networking via prefetching of data from your product catalog APIs
Run edge rules locally and in pre-prod
Create preview URLs from GitHub, GitLab, or Bitbucket with every new branch and push
Run splits at the edge for performant A/B tests, canary deploys, and personalization
Get the information you need. When you’re ready, chat with us, get an assessment or start your free trial.