Better Transparency and Troubleshooting for Server-Side Ad Insertion (SSAI)

Improving OTT advertising sourcing, playback, and verification

OTT streaming gives broadcasters and content creators an excellent opportunity to go beyond the linear TV experience by personalizing video streams based on each viewer’s interests. This high level of personalization is also a critical factor in attracting advertising revenue to OTT streams because it enables the delivery of highly targeted advertising at premium CPM rates.

However, this opportunity is being held back by ad sourcing, playback, and verification challenges. Many of the standards around OTT advertising are nascent and still evolving. Moreover, in-depth debugging and analysis of quality of service (QoS) are often limited. It’s also important to understand the quality of experience (QoE): for example, whether an ad played at consistent volume levels.

With these challenges in mind, and as part of our ongoing commitment to improve scaling and reduce latency, we developed a dedicated ad proxy service as part of our platform. Originally designed as a back-end enhancement to improve the scalability of our streaming platform, it also offers several management advantages, including far more visibility into, and control over, the ad sourcing and delivery workflow. These tools enable publishers to optimize the delivery of the right ad to the right viewer and to monitor both QoS and many aspects of QoE.

Personalized streams with the manifest server

In a previous blog post, we detailed the role of the manifest server in personalizing streams to incorporate tailored advertising content. As discussed in that post, the manifest server is responsible for making ad requests, parsing the response, and then downloading and processing advertising creatives just like any other content. The manifest server then sends an integrated stream to the player, giving viewers a more consistent experience, maximizing device compatibility, and bypassing ad blockers.
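
To make the stitching step concrete, here is a minimal sketch of how ad segments might be spliced into an HLS media playlist. It assumes HLS delivery; the segment names, helper function, and durations are hypothetical, and a production manifest server would also handle timing, DRM, and per-viewer personalization.

```python
# Minimal sketch of manifest-side ad stitching for an HLS media playlist.
# All segment URIs and the function name are hypothetical.

CONTENT_SEGMENTS = ["content_001.ts", "content_002.ts", "content_003.ts"]
AD_SEGMENTS = ["ad_break_001.ts", "ad_break_002.ts"]

def stitch_manifest(content, ads, ad_after_index=1, duration=6.0):
    """Splice ad segments into the playlist after a given content segment,
    marking each boundary with EXT-X-DISCONTINUITY so players reset decoders."""
    lines = ["#EXTM3U", "#EXT-X-VERSION:3",
             f"#EXT-X-TARGETDURATION:{int(duration)}"]
    for i, seg in enumerate(content):
        lines += [f"#EXTINF:{duration},", seg]
        if i == ad_after_index:
            lines.append("#EXT-X-DISCONTINUITY")
            for ad in ads:
                lines += [f"#EXTINF:{duration},", ad]
            lines.append("#EXT-X-DISCONTINUITY")
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

print(stitch_manifest(CONTENT_SEGMENTS, AD_SEGMENTS))
```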

While the manifest server is well-equipped to handle the playback and personalization portion, the work involved in sourcing and verifying advertising brings an additional level of complexity and new challenges. As we continued to optimize the streaming architectures that power personalized experiences for millions of concurrent viewers, we developed an ad proxy service focused on supporting these activities.

Sourcing and verification challenges

To obtain the ads that will be inserted into a stream, ad content must be fetched from an ad decision server (ADS) such as FreeWheel or Google Ad Manager. This process involves requesting ads and passing along information about the stream and the viewer so the correct ads are placed. The challenge is that many ads on a given server are just wrappers pointing to the actual ads on a different server.

For example, if there are four ad slots to be filled, two of them may be inserted directly, but the other two may not have ad assets and instead are wrappers that say, “Your ad isn’t here, it’s somewhere else and you need to go get it.” We try to unpack and source a playable video asset for every ad response we see, validating responses as we unpack them to ensure a playable ad asset is ready to stitch into the stream. Given that our architecture is designed to deliver a personalized manifest to each viewer, this process is repeated for each session, which can amount to a considerable load.
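
As an illustration, here is a rough sketch of what that unwrapping might look like for VAST-style responses, where a Wrapper element carries a VASTAdTagURI pointing at another server and an InLine ad carries the actual MediaFile. The fetch logic, timeout, and depth limit are assumptions for the example, not our production code.

```python
# Sketch of unwrapping a chain of VAST-style wrapper responses until a
# playable asset (a MediaFile) is found. The timeout and depth limit are
# illustrative values.
import urllib.request
import xml.etree.ElementTree as ET

MAX_WRAPPER_DEPTH = 5  # stop chasing wrappers that never resolve

def resolve_ad(vast_url, depth=0):
    if depth >= MAX_WRAPPER_DEPTH:
        return None  # wrapper chain never resolved to a real asset
    with urllib.request.urlopen(vast_url, timeout=2) as resp:
        root = ET.fromstring(resp.read())
    wrapper_uri = root.findtext(".//VASTAdTagURI")
    if wrapper_uri:
        # "Your ad isn't here, it's somewhere else" -- follow the pointer.
        return resolve_ad(wrapper_uri.strip(), depth + 1)
    media_file = root.findtext(".//MediaFile")
    return media_file.strip() if media_file else None
```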

Ad lookup latency

Tracking down assets through several wrappers can be a major cause of latency if it is not handled in parallel, and some wrappers never resolve into an actual ad asset. To prevent this from degrading the video experience, we limit this “waterfalling” before moving on to fetch the next ad. Exposing data and insights during this workflow helps publishers identify and resolve demand sources that don’t result in ads being served, ensuring viewers have an uninterrupted viewing experience while maximizing ad revenue.
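
Here is a sketch of the concurrency side, under the assumption of an asyncio-based fetcher: each slot in an ad break is resolved in parallel under its own time budget, so one slow wrapper chain cannot delay the rest. The URLs, budget, and placeholder resolve_ad coroutine are hypothetical.

```python
# Sketch of resolving the slots in an ad break in parallel, each under a
# fixed time budget; slots that blow the budget are skipped and logged
# rather than holding up the stream. All names and values are illustrative.
import asyncio

AD_TAG_URLS = ["https://ads.example.com/slot1", "https://ads.example.com/slot2"]
PER_SLOT_BUDGET = 1.5  # seconds before giving up and moving to the next ad

async def resolve_ad(url):
    await asyncio.sleep(0.1)  # stands in for async fetch + wrapper unwrapping
    return f"asset-for-{url}"

async def resolve_break(urls):
    async def bounded(url):
        try:
            return await asyncio.wait_for(resolve_ad(url), PER_SLOT_BUDGET)
        except asyncio.TimeoutError:
            return None  # record the demand source that failed to serve
    return await asyncio.gather(*(bounded(u) for u in urls))

print(asyncio.run(resolve_break(AD_TAG_URLS)))
```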

Ensuring a responsive ad experience also means looking at the impact of the ad lookup on the manifest server, which is busy assembling personalized streams with minimal latency. The manifest server doesn’t have unlimited resources dedicated to generating and storing ad performance data. It only stores the ad information it needs to generate the manifest, which can limit the availability of data to debug problematic ad calls and playback.

Ad Proxy Service takes over

Publishers today need a scalable platform that interacts with and manages the increasingly complex ad insertion process, while providing visibility into the workflow and into their relationships with ad partners.

The Ad Proxy Service flow works as follows. At the front end of the flow, the player makes requests to the manifest server until the manifest server has enough information to request ads from the ADS. Once that happens, instead of reaching out to the ADS itself, the manifest server hands that task off to the Ad Proxy Service. Not only does this offload work from the manifest server, but it also enables several other advantages, such as reduced latency and the capture of far more debug data.

The work of fetching and verifying an ad is handled by the Ad Proxy Service, which frees up resources for the manifest server to stitch the ads into the stream for playback and deliver a seamless viewing experience. The steps below walk through the flow; a code sketch of the full round trip follows the list.

  1. Player requests a manifest.
  2. Content asks Ad Proxy to fetch ads. After receiving a unique identifier for the work, content moves on to other steps in manifest generation.
  3. Ad Proxy begins doing the requested work.
    1. The work is put into a queue to wait its turn to be processed.
    2. The “worker” server pulls a job from the queue and begins requesting ad assets from the ADS and saving both the steps of the work being done and any resulting data to the database.
  4. Content asks Ad Proxy, “Where are my ads for job x,” referencing the unique identifier. Ad Proxy returns the ads to content, and content puts them in the manifest and returns that to the player.
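
Here is a compact sketch of that round trip, assuming an asyncio-style service: the enqueue call returns a job ID immediately, a worker drains the queue and records results, and the manifest server later redeems the ID. The queue, results store, and function names are illustrative, not the product’s API.

```python
# End-to-end sketch of the four steps above. The in-memory queue and dict
# stand in for the real job queue and database; names are hypothetical.
import asyncio
import uuid

async def request_ads(queue, session_info):
    job_id = str(uuid.uuid4())
    await queue.put((job_id, session_info))  # step 2: hand off and move on
    return job_id

async def ads_worker(queue, results):
    while True:
        job_id, session = await queue.get()  # step 3: pull a job
        # Placeholder for the real ADS call, unwrapping, and validation.
        results[job_id] = {"ads": [f"ad-for-{session['viewer']}"],
                           "log": ["requested", "unwrapped", "validated"]}
        queue.task_done()

async def main():
    queue, results = asyncio.Queue(), {}  # results stands in for the database
    worker = asyncio.create_task(ads_worker(queue, results))
    job_id = await request_ads(queue, {"viewer": "viewer-123"})
    # ...manifest generation continues here while the worker runs...
    await queue.join()
    print(results[job_id])  # step 4: "Where are my ads for job x?"
    worker.cancel()

asyncio.run(main())
```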

Scaling ad lookup

As the Ad Proxy Service receives requests, it queues them so it can continue receiving new requests, improving scalability. It also provides the manifest server with a job ID as a placeholder while ads are tracked down, so the manifest server can move on without having to wait for Ad Proxy. The ADS worker then begins to chew through the “ad jobs” in the queue, calling out to the ADS and sending along all the captured player data and other stream information so the ADS can supply the appropriate ads. A key advantage of this process is that the ADS workers fetch ads in parallel, eliminating potential bottlenecks and reducing latency.

Standardizing ADS data

Throughout the process, communication between the Ad Proxy and ADS is recorded along with the ads and stored in a database. The data, which can vary from provider to provider, is parsed and normalized with consistent naming conventions. This makes it much more efficient to use the ADS data during analysis or debugging.
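
As a hypothetical example of that normalization, different providers might label the same fields differently; mapping them onto one schema with consistent names and units makes cross-provider analysis straightforward. Every field name below is invented for illustration.

```python
# Sketch of normalizing provider-specific ad metadata into one schema so
# debugging queries don't depend on which ADS served the ad. All field
# names here are made-up examples of the kind of mapping involved.
FIELD_MAP = {
    "provider_a": {"adId": "ad_id", "dur": "duration_s", "creativeUrl": "asset_url"},
    "provider_b": {"id": "ad_id", "durationMs": "duration_ms", "media": "asset_url"},
}

def normalize(provider, raw):
    mapping = FIELD_MAP[provider]
    out = {mapping[k]: v for k, v in raw.items() if k in mapping}
    if "duration_ms" in out:  # unify units as part of normalization
        out["duration_s"] = out.pop("duration_ms") / 1000
    return out

print(normalize("provider_b",
                {"id": "a1", "durationMs": 15000, "media": "https://cdn.example/a1.mp4"}))
```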

Delivering the ads

The process is completed when the manifest server gets to the point where it needs the ads. It calls Ad Proxy and says, “Here’s the job ID you gave me, give me the ads.” Ad Proxy then fetches them from the database and sends them along.

Indexing and storing ad beacon activity

The Ad Proxy Service is also responsible for capturing and storing beacon information from the player, which is key to ensuring proper monetization. Beacons are stored as individual objects, each with a primary key, so when the manifest server requests ads, the Ad Proxy Service also provides the beacon information. Then, when the player hits a specific checkpoint, it fires a beacon based on what it was instructed to do in the manifest. The beacon worker then fetches the corresponding objects from the database and records the outcome: when the beacon fired, what response came back from the ADS, and whether or not there was an error.
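
A minimal sketch of that bookkeeping, assuming a key-value store: each beacon is created as its own keyed record, referenced from the manifest, and updated when the player fires it. The record shape and function names are assumptions based on the description above.

```python
# Sketch of beacon bookkeeping: one keyed record per beacon, updated when
# the player fires it. The dict stands in for the database; the record
# fields are illustrative.
import time
import uuid

beacon_store = {}  # primary key -> beacon record

def create_beacon(ad_id, event):
    key = str(uuid.uuid4())
    beacon_store[key] = {"ad_id": ad_id, "event": event,
                         "fired_at": None, "ads_response": None, "error": None}
    return key  # referenced from the manifest so the player can fire it

def record_fire(key, ads_response, error=None):
    """Record when the beacon fired, the ADS response, and any error."""
    beacon_store[key].update(fired_at=time.time(),
                             ads_response=ads_response, error=error)

key = create_beacon("ad-42", "midpoint")
record_fire(key, ads_response=200)
print(beacon_store[key])
```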

Troubleshooting ad playback

Tracking and analysis are built into the process. The Ad Proxy architecture provides extensive information on ad performance and viewership through an API, a GUI, and push logs. We know “if” and “why” there’s an ad issue, so there’s no more finger-pointing when an ad doesn’t load: you can point to the data. Every session is included without additional configuration, and data is accessible for up to 14 days.

Through the API, content publishers can analyze information such as the following (a hypothetical record combining these fields appears after the list):

  • Raw request and response data from the external ADS
  • Response time and size
  • Number of ads returned
  • Ad pod location
  • Device type
  • Number of wrappers
  • Errors (e.g., no ads returned, parsing failures, connection errors)
  • Warnings from ad providers (e.g., an optional but recommended parameter is missing)
  • Request failures (e.g., VPAID)
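
To make the list concrete, here is what a single debug record exposing these fields might look like. The actual API schema isn’t documented here, so every field name and value below is illustrative.

```python
# Hypothetical shape of one per-request debug record; all names and values
# are invented to illustrate the fields listed above.
debug_record = {
    "ads_request": {"url": "https://ads.example.com/vast?...", "method": "GET"},
    "ads_response": {"status": 200, "time_ms": 245, "size_bytes": 18432},
    "ads_returned": 3,
    "pod_location": "midroll-2",
    "device_type": "ctv",
    "wrapper_count": 2,
    "errors": [],                          # e.g., "no_ads", "parse_failure"
    "warnings": ["missing recommended parameter"],
    "request_failures": [],                # e.g., unsupported VPAID creatives
}
```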

Conclusion

Publishers looking to engage each viewer with a personalized video experience must architect their streaming workloads to scale. Creating a dedicated service for ad processing not only improves the performance of the manifest server, the engine that powers personalized ads, content, and blackouts for individual viewers, but it also creates a powerful tool for troubleshooting advertising-supported video streams and ensures a high-quality, TV-like viewing experience.

With the Ad Proxy Service providing visibility into the ad operations workflow, content publishers and broadcasters can better understand the root cause of problems, and they can correlate that insight with other data to increase viewer retention and maximize ad revenue.