
Debunking Misconceptions: Amazon Prime Video's Approach to Microservices and Serverless

Take a closer look at Amazon Prime Video's redesigned architecture, analyzing whether it heralds a shift back to monoliths or signifies a harmonious integration of different architectural styles.

This is the second blog in our deep dive series on serverless architectures. In the first installment, we explored the benefits and trade-offs of microservices and serverless architectures, highlighting the case of Amazon Prime Video's architectural redesign for cost optimization.

In this blog, we will take a closer look at Amazon Prime Video's redesigned architecture, analyzing whether it heralds a shift back to monoliths or signifies a harmonious integration of different architectural styles for optimal performance and cost efficiency. Additionally, we'll discuss the concept of a "serverless-first" mindset and the importance of considering alternative architectural approaches based on specific use cases and requirements.

Drawing from our own experience with serverless components in building a tracing collector API, we'll share the challenges we encountered and how they led us to reassess our approach. Now, let's dive into it.  

Is Amazon moving away from microservices and serverless?

Here’s how Prime Video rearchitected their system, as described in their own post, “Scaling up the Prime Video audio/video monitoring service and reducing costs by 90%.”

So clearly, Amazon Prime Video is not actually giving up on microservice-based serverless architectures. As explained in that post, they redesigned one portion of their architecture (stream monitoring) in order to cut operational costs.

Although they removed serverless components from some places where the initial design used them (that version relied on AWS Step Functions to orchestrate the detectors), they added them in other places.

Analyzing the New Design

Prime Video's new architecture wasn't an outright reversal to the monolith but rather a thoughtful integration of different architectural styles tailored to their specific use case. The redesign combined serverless components, a monolith-style structure, and microservices. The monolithic part came into play when all of the components were compiled into a single process, which eliminated the need for intermediary storage, simplified the orchestration logic, and significantly reduced operational costs.
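
To make this hybrid shape concrete, here is a minimal, hypothetical sketch (not Prime Video's actual code; the function names and the hash-based worker assignment are our own assumptions): conversion and detection run in a single process and pass frames in memory, while a thin serverless-style handler in front only distributes incoming work, as discussed below.

```python
# Hypothetical sketch of the hybrid shape described above, not Prime Video's actual code.
import hashlib
from typing import Iterable, List

WORKER_COUNT = 4  # illustrative: number of copies of the monolithic worker


def convert_stream(stream_id: str) -> Iterable[bytes]:
    """Media conversion stage, stubbed for illustration: yields decoded frames."""
    yield b"<decoded-frame>"


def run_detectors(frame: bytes) -> List[str]:
    """Defect detectors run in the same process, so frames stay in memory
    instead of being written to intermediate storage between steps."""
    defects: List[str] = []
    # ... real detectors (block corruption, audio/video sync, ...) would go here
    return defects


def monitor_stream(stream_id: str) -> List[str]:
    """Orchestration is an ordinary call chain rather than a workflow engine."""
    findings: List[str] = []
    for frame in convert_stream(stream_id):
        findings.extend(run_detectors(frame))
    return findings


def dispatch_handler(event, context=None):
    """Thin, Lambda-style entry point: it only decides which copy of the
    monolithic worker handles an incoming stream (hash-based here), so
    scaling out means running more worker copies, not more workflow steps."""
    stream_id = event["stream_id"]
    worker_index = int(hashlib.sha256(stream_id.encode()).hexdigest(), 16) % WORKER_COUNT
    return {"stream_id": stream_id, "assigned_worker": worker_index}
```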

However, certain functionalities still leveraged microservices, and AWS Lambda—a serverless component—was used to distribute incoming requests, allowing for efficient horizontal scaling. This underlines the utility and scalability of serverless components, despite the move towards a more monolithic structure for the particular problem they faced. In light of these observations, it's important to revisit the concept of a serverless-first mindset.  

What about a serverless-first mindset?  

Adopting a serverless-first mindset doesn't necessarily mean using serverless technology at every possible opportunity. Instead, it's about recognizing the potential benefits that serverless architectures can bring and considering them as a primary option when designing and developing new applications or services.  

However, it's also crucial to understand that serverless may not always be the best fit depending on the specific use case, the nature of the workload, or the requirements of the system. Therefore, while a serverless-first approach encourages the exploration and application of serverless solutions, it also requires a balanced viewpoint and the wisdom to discern when a different architectural approach may be more suitable.  

The Prime Video team's post illustrates how they applied a serverless-first mindset to build their audio and video monitoring service. Initially, they started with a serverless architecture, leveraging its benefits of scalability and reduced operational overhead.  

Shipping the product faster brings several advantages:

  • You will encounter real-world usage early, including edge cases that are hard to predict or reproduce in advance.
  • You will get feedback from your users sooner, which helps you build the right product and reach product-market fit earlier.
  • The sooner you ship, the more motivated your team will be and the more ownership it will take of the product.

However, as Prime Video sought further cost optimizations, they strategically transitioned some components of their system to containers, taking advantage of the greater control over resources and potential cost savings that containers can offer. This approach exemplifies a balanced "serverless-first" mindset, where serverless solutions are considered the primary option, but the team remains open to other alternatives, such as containers, when they provide additional advantages for specific use cases or requirements.  
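
As a rough illustration of what such a transition can look like (the queue-based worker loop and names below are our own assumptions, not Prime Video's design), the same kind of processing logic can move from a per-invocation handler into a long-running containerized worker, whose cost is governed by how many copies of it you run rather than by how many requests arrive:

```python
# Illustrative only: a long-running container entry point instead of a
# per-invocation serverless handler. The in-memory queue stands in for
# whatever work source the real service would poll.
import queue
import time


def monitor_stream(stream_id: str) -> list:
    """Placeholder for the in-process conversion + detection pipeline."""
    return []


def main(work_queue: "queue.Queue[str]") -> None:
    """Container entry point: a steady worker loop, so capacity (and cost)
    is set by how many of these tasks you run, not by invocation count."""
    while True:
        try:
            stream_id = work_queue.get(timeout=1.0)
        except queue.Empty:
            time.sleep(0.1)
            continue
        findings = monitor_stream(stream_id)
        print(f"{stream_id}: {len(findings)} potential defects")


if __name__ == "__main__":
    demo_queue: "queue.Queue[str]" = queue.Queue()
    demo_queue.put("example-stream-1")
    main(demo_queue)
```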

Our Experience with Serverless Components  

We had a similar experience implementing our tracing collector API, which was initially built with AWS Lambda behind AWS API Gateway. In that flow, the collector received data from agents and sent it to an AWS Kinesis stream to be processed asynchronously. Building the collector API with serverless components (AWS API Gateway and AWS Lambda) helped us ship the first version faster and let our users try it and provide feedback sooner. As we worked to release this initial version, we ran into some challenges that led us to rethink our approach, which we will share in the next installment of this series.
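
For context, here is a minimal sketch of the kind of collector described above; the environment variable name, payload shape, and response format are illustrative assumptions rather than our production code:

```python
# Minimal sketch: an AWS Lambda handler behind API Gateway that forwards
# incoming trace payloads to a Kinesis stream for asynchronous processing.
import json
import os
import uuid

import boto3

kinesis = boto3.client("kinesis")
STREAM_NAME = os.environ.get("TRACE_STREAM_NAME", "trace-collector-stream")


def handler(event, context):
    """API Gateway proxy integration: the request body carries trace data
    from an agent; the handler only enqueues it and returns immediately."""
    body = event.get("body") or "{}"

    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=body.encode("utf-8"),
        PartitionKey=str(uuid.uuid4()),  # spread records across shards
    )

    # Downstream consumers read the Kinesis stream and process spans asynchronously.
    return {"statusCode": 202, "body": json.dumps({"status": "accepted"})}
```

Returning 202 as soon as the record is enqueued keeps the collector's latency low, while Kinesis absorbs bursts for the asynchronous consumers downstream.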

