DCT

2:17-cv-00547

SRC Labs LLC v. Amazon Web Services Inc

I. Executive Summary and Procedural Information

  • Parties & Counsel:
  • Case Identification: 2:17-cv-00547, E.D. Va., 10/18/2017
  • Venue Allegations: Venue is alleged to be proper in the Eastern District of Virginia because Defendants maintain multiple regular and established places of business in the district, including data centers in Northern Virginia and a corporate office in Herndon with over 500 employees.
  • Core Dispute: Plaintiff alleges that Defendants’ Amazon EC2 F1 cloud computing instances, which utilize Field Programmable Gate Arrays (FPGAs), infringe five patents related to reconfigurable and high-performance computer architectures.
  • Technical Context: The technology relates to hybrid computing systems that combine traditional microprocessors (CPUs) with reconfigurable processors (FPGAs) to accelerate computationally intensive tasks in server environments.
  • Key Procedural History: The complaint alleges that Plaintiff SRC Labs met with Defendant Amazon four times in 2015 to present its technology and patent portfolio, over a year before Amazon announced the accused EC2 F1 product. On August 1, 2017, the patents-in-suit were assigned to the Saint Regis Mohawk Tribe, a sovereign American Indian Tribe, which Plaintiffs assert renders the patents immune from inter partes review proceedings.

Case Timeline

Date Event
1997-12-17 ’687 Patent Priority Date
2002-08-13 ’687 Patent Issue Date
2002-10-31 ’324 and ’800 Patents Priority Date
2003-06-18 ’867 Patent Priority Date
2006-12-12 ’867 Patent Issue Date
2007-05-29 ’324 Patent Issue Date
2009-11-17 ’800 Patent Issue Date
2014-05-27 ’311 Patent Priority Date
2015-05-12 First meeting between SRC and Amazon
2015-06-05 Second meeting between SRC and Amazon
2015-07-01 Third meeting between SRC and Amazon
2015-07-28 Fourth meeting between SRC and Amazon
2015-10-06 ’311 Patent Issue Date
2016-11-30 Amazon announces EC2 F1 Instance launch
2017-04-19 EC2 F1 Instance becomes generally available in N. Virginia
2017-08-01 Patents-in-suit assigned to Saint Regis Mohawk Tribe
2017-10-18 Complaint Filing Date

II. Technology and Patent(s)-in-Suit Analysis

U.S. Patent 6,434,687 - System and method for accelerating web site access and processing utilizing a computer system incorporating reconfigurable processors operating under a single operating system image

The Invention Explained

  • Problem Addressed: The patent’s background describes the performance limitations of conventional microprocessor-based web servers, which struggle to rapidly process user-specific data (e.g., demographics) to customize web page content without causing significant delays for the site visitor (’687 Patent, col. 1:36-62).
  • The Patented Solution: The invention proposes a hybrid server architecture that integrates standard microprocessors with reconfigurable processors (such as FPGAs) that operate under a single operating system image. Computationally intensive algorithms for tasks like demographic data processing, database searches, or encryption can be implemented directly in the hardware gates of the reconfigurable processors, enabling processing speeds up to 1000 times faster than software-based approaches (’687 Patent, Abstract; col. 2:6-25). Figure 14 illustrates a method where N data elements are processed in parallel by instantiating N corresponding processing elements on the reconfigurable hardware (’687 Patent, Fig. 14).
  • Technical Importance: This architecture allows for the hardware acceleration of critical e-commerce and dynamic content functions directly within the server’s memory space, avoiding the latency associated with offloading tasks to a separate, I/O-attached processor (’687 Patent, col. 2:42-48).
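
The claimed N-to-N pairing of data elements and processing elements can be illustrated with a software analogy. The Python sketch below is a functional model only; the patent contemplates hardware instantiation on reconfigurable logic, and `process_element` is a hypothetical placeholder for whatever algorithm (demographic filtering, encryption, etc.) would be implemented in gates:

```python
from concurrent.futures import ThreadPoolExecutor

def process_element(datum):
    """Stand-in for the logic one reconfigurable processing
    element would implement in hardware (placeholder math)."""
    return datum * 2

def process_request(data_elements):
    """Mirror the patent's Fig. 14 flow in software: for N incoming
    data elements, 'instantiate' N workers and process all N
    elements concurrently, one worker per element."""
    n = len(data_elements)
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(process_element, data_elements))
```

In hardware the parallelism is spatial (N copies of the circuit) rather than thread-based, which is the crux of the claim-construction dispute discussed later in this report.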

Key Claims at a Glance

  • The complaint asserts independent claim 1 and method claim 18 (Compl. ¶138).
  • Independent Claim 1 is directed to a method for processing data at an internet site and includes the following essential steps:
    • providing a reconfigurable server with at least one microprocessor and at least one reconfigurable processing element;
    • receiving N data elements from a remote computer;
    • instantiating N of the reconfigurable processing elements at the server;
    • processing the N data elements with the corresponding N reconfigurable processing elements.

U.S. Patent 7,149,867 - System and method of enhancing efficiency and utilization of memory bandwidth in reconfigurable hardware

The Invention Explained

  • Problem Addressed: Conventional memory hierarchies using general-purpose caches are often inefficient for algorithms running on reconfigurable hardware. They can waste significant memory bandwidth by fetching entire cache lines when only a small portion is needed, and they perform poorly with the non-sequential memory access patterns common in high-performance computing (’867 Patent, col. 1:49–col. 2:2).
  • The Patented Solution: The patent describes a reconfigurable processor containing a "data prefetch unit." This unit is configured by the specific program being executed to retrieve only the data required by the algorithm from a main ("second") memory and place it into a local ("first") memory. This creates an explicit, application-specific memory hierarchy that operates in parallel with the computational logic, avoiding the overhead and waste of a generic cache system (’867 Patent, Abstract; col. 4:1–12).
  • Technical Importance: This approach allows reconfigurable hardware to achieve very high memory bandwidth efficiency by tailoring the data-fetching strategy directly to the needs of the algorithm, addressing a critical performance bottleneck (’867 Patent, col. 11:18–24).
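
The distinction the patent draws between a generic cache and an application-specific prefetch can be sketched in software. In the hypothetical Python model below (the function names and the strided-access example are illustrative, not from the patent), `prefetch` copies only the words the algorithm will actually touch from the "second memory" into a local "first memory", rather than pulling entire cache lines:

```python
def prefetch(second_memory, indices):
    """Application-specific prefetch: move only the data the
    algorithm requires into local memory (no full cache lines)."""
    return {i: second_memory[i] for i in indices}

def strided_sum(second_memory, start, stride, count):
    """A non-sequential (strided) access pattern of the kind the
    patent says defeats generic caches; the explicit prefetch
    builds a first memory tailored to exactly this pattern."""
    indices = range(start, start + stride * count, stride)
    first_memory = prefetch(second_memory, indices)
    return sum(first_memory[i] for i in indices)
```

With a stride of 10, a conventional 64-byte-line cache would fetch many unneeded words per touched element; the explicit hierarchy moves only the `count` words used.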

Key Claims at a Glance

  • The complaint asserts independent claim 1 and dependent claims 3 and 4 (Compl. ¶157).
  • Independent Claim 1 includes the following essential elements:
    • A reconfigurable processor that instantiates an algorithm as hardware.
    • A first memory with a first characteristic bandwidth/utilization.
    • A data prefetch unit coupled to the first memory.
    • The data prefetch unit retrieves only computational data required by the algorithm from a second memory and places it in the first memory.
    • The data prefetch unit operates independently of and in parallel with logic blocks using the data.
    • The first memory and data prefetch unit are configured to conform to the algorithm's needs and to match the data's format and location in the second memory.

U.S. Patent 7,225,324 - Multi-adaptive processing systems and techniques for enhancing parallelism and performance of computational functions

Technology Synopsis

The ’324 Patent describes techniques for parallelizing computations by processing multiple "dimensions" of a problem (e.g., different time steps or spatial planes) concurrently. The invention allows subsequent stages of a computational pipeline to begin processing data from a subsequent data dimension while earlier stages are still working on the previous dimension, enabling the logic to be operative on every clock cycle (’324 Patent, Abstract; col. 6:14-20).
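
The cycle-level benefit of this pipelining can be modeled in software. The following Python simulation is illustrative only (the patented technique is hardware pipelining across problem dimensions): data shifts through per-stage registers each "clock," so a new input enters while earlier inputs are still mid-pipeline, and N inputs through D stages complete in roughly N + D cycles rather than N × D:

```python
from collections import deque

def run_pipeline(stages, inputs):
    """Simulate a hardware pipeline: one register per stage; every
    cycle each stage consumes its predecessor's register, so once
    the pipeline fills, all stages are busy on every cycle."""
    n = len(stages)
    regs = [None] * n
    feed = deque(inputs)
    outputs, cycles = [], 0
    while feed or any(r is not None for r in regs):
        if regs[-1] is not None:          # drain the final stage
            outputs.append(regs[-1])
        for s in range(n - 1, 0, -1):     # shift data forward
            regs[s] = stages[s](regs[s - 1]) if regs[s - 1] is not None else None
        regs[0] = stages[0](feed.popleft()) if feed else None
        cycles += 1
    return outputs, cycles
```

Three inputs through a two-stage pipeline finish in five simulated cycles (3 + 2), versus six if each input occupied the whole pipeline alone.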

Asserted Claims

Independent claims 1 and 17 (Compl. ¶170).

Accused Features

The complaint alleges that the EC2 F1 Instance's ability to perform deeply pipelined, hardware-accelerated operations for parallelized computing infringes the ’324 Patent (Compl. ¶¶90, 170).

U.S. Patent 7,620,800 - Multi-adaptive processing systems and techniques for enhancing parallelism and performance of computational functions

Technology Synopsis

The ’800 Patent, a continuation of the ’324 Patent, also discloses techniques for enhancing parallelism in reconfigurable computing systems. It describes methods for implementing systolic wavefront and multi-dimensional pipeline computations that can be employed in applications like seismic analysis, search algorithms, and bioinformatics (’800 Patent, Abstract).
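
A systolic wavefront can be illustrated with a toy grid computation: if cell (i, j) depends on its upper and left neighbors, then all cells on an anti-diagonal are mutually independent and could be computed concurrently in hardware. The Python model below is a minimal sketch; `f` and `init` are hypothetical placeholders for the application's recurrence and boundary value:

```python
def wavefront(n, m, f, init):
    """Fill an n x m grid along anti-diagonals (wavefronts).
    Every cell on diagonal d = i + j depends only on diagonal
    d - 1, so each inner loop models one fully parallel step."""
    grid = [[None] * m for _ in range(n)]
    for d in range(n + m - 1):                    # one wavefront per step
        for i in range(max(0, d - m + 1), min(n, d + 1)):
            j = d - i
            up = grid[i - 1][j] if i else init
            left = grid[i][j - 1] if j else init
            grid[i][j] = f(up, left)
    return grid
```

Recurrences of this shape appear in the seismic, search, and bioinformatics applications the patent names (e.g., dynamic-programming sequence alignment).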

Asserted Claims

Independent claims 1 and 17 (Compl. ¶188).

Accused Features

The infringement allegations against the ’800 Patent target the parallel and pipelined processing capabilities of the Amazon EC2 F1 Instance (Compl. ¶¶90, 188).

U.S. Patent 9,153,311 - System and method for retaining DRAM data when reprogramming reconfigurable devices with DRAM memory controllers

Technology Synopsis

The ’311 Patent addresses the problem of data loss in DRAM when an associated FPGA, which contains the memory controller, is reprogrammed. The invention uses a "data maintenance block" to place the DRAM into a self-refresh mode, which preserves the data while the FPGA is being reconfigured and its I/O pins are tri-stated (’311 Patent, Abstract).
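
The reprogramming sequence described in the abstract can be modeled as a small simulation. In the Python sketch below, `Dram`, `reprogram_fpga`, and the tick-based decay model are illustrative inventions, not the patented circuit; the point shown is that data survives reconfiguration only because the DRAM is parked in self-refresh before the FPGA (and the memory controller inside it) is wiped:

```python
class Dram:
    """Toy DRAM: contents are lost unless refreshed, either by an
    external controller or by the chip's own self-refresh mode."""
    def __init__(self):
        self.data = {}
        self.self_refresh = False
        self.controller_present = True

    def tick(self):
        if not (self.self_refresh or self.controller_present):
            self.data.clear()            # refresh lapsed: data lost

def reprogram_fpga(dram, load_new_image):
    """Order of operations per the synopsis: the data maintenance
    block enters self-refresh first, then the FPGA I/O is
    tri-stated and the device is reloaded."""
    dram.self_refresh = True             # data maintenance block acts
    dram.controller_present = False      # FPGA pins tri-stated
    dram.tick()
    load_new_image()                     # reconfigure the FPGA
    dram.controller_present = True       # new controller comes up
    dram.self_refresh = False
    dram.tick()
```

Reversing the first two steps in this model (tri-stating before entering self-refresh) clears the data, which is the failure mode the patent targets.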

Asserted Claims

Independent claims 1, 3, 9, and 10 (Compl. ¶206).

Accused Features

The complaint alleges infringement based on the functionality of the EC2 F1 Instance that allows customers to reprogram the FPGA "as many times as they like," which implies a need to preserve memory state for running applications during such reconfiguration (Compl. ¶¶86, 206).

III. The Accused Instrumentality

Product Identification

The accused instrumentality is Defendants' Amazon Elastic Compute Cloud (EC2) F1 Instance service (Compl. ¶63).

Functionality and Market Context

The EC2 F1 is a cloud computing service that allows customers to rent virtual servers incorporating both conventional CPUs (Intel Broadwell processors) and Field Programmable Gate Arrays (FPGAs) (Compl. ¶¶66, 68, 80). Customers use an FPGA Developer Amazon Machine Image (AMI) to write, compile, and deploy custom logic onto the FPGAs, creating hardware accelerators for compute-intensive applications such as genomics, financial analysis, and big data search (Compl. ¶¶77, 95, 99). The FPGAs are connected to the CPU via a dedicated PCIe fabric and are equipped with their own local 64 GiB DDR4 memory, allowing for high-bandwidth data transfer (Compl. ¶¶82, 84, 87). The service is marketed as providing up to a 30x speedup for certain applications compared to servers using CPUs alone (Compl. ¶¶77, 89). A presentation slide from a 2015 meeting between SRC and Amazon shows SRC's value proposition for a similar hyper-performance server architecture, which Plaintiffs allege Defendants later adopted (Compl. p. 20).
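
The host-to-FPGA workflow the complaint describes can be sketched abstractly. In the Python model below, every name (`FpgaSlot`, `load_image`, `dma_write`, `run`, `dma_read`) is a hypothetical stand-in, not AWS's actual F1 management API, and the "accelerator" is an ordinary Python callable rather than compiled FPGA logic:

```python
class FpgaSlot:
    """Models one FPGA with its local DDR4, attached to the host
    CPU over PCIe (all behavior here is simulated)."""
    def __init__(self):
        self.kernel = None
        self.ddr4 = {}

    def load_image(self, kernel):
        # On F1 this would load a compiled FPGA image; here the
        # "hardware" is just a callable.
        self.kernel = kernel

    def dma_write(self, addr, buf):      # host -> FPGA DDR4
        self.ddr4[addr] = buf

    def run(self, src, dst):             # accelerator pass over DDR4
        self.ddr4[dst] = [self.kernel(x) for x in self.ddr4[src]]

    def dma_read(self, addr):            # FPGA DDR4 -> host
        return self.ddr4[addr]

# Host-side orchestration: stage data, kick off the kernel, read back.
slot = FpgaSlot()
slot.load_image(lambda x: x * x)
slot.dma_write("in", [1, 2, 3])
slot.run("in", "out")
print(slot.dma_read("out"))              # [1, 4, 9]
```

The CPU orchestrates while the accelerator does the bulk computation against its own local memory, which is the architectural split at issue in the infringement analysis.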

IV. Analysis of Infringement Allegations

The complaint references claim-chart exhibits that are not provided. The following tables are constructed based on the narrative infringement theory presented in the complaint.

U.S. Patent 6,434,687 Infringement Allegations

Claim Element (from Independent Claim 1) | Alleged Infringing Functionality | Complaint Citation | Patent Citation
providing a reconfigurable server at said site incorporating at least one microprocessor and at least one reconfigurable processing element | Defendants provide the EC2 F1 Instance service, a cloud server that incorporates both Intel CPUs and Xilinx FPGAs. | ¶¶63, 68, 80 | col. 2:6-12
receiving N data elements at said site relative to a remote computer coupled to said site | The EC2 F1 service allows customers to run applications that receive and process data from remote users over the internet. | ¶66 | col. 2:36-42
instantiating N of said reconfigurable processing elements at said reconfigurable server | Customers program the FPGAs to create desired logical functions and parallel data paths to process multiple streams of data concurrently. | ¶¶74, 86, 90 | col. 22:15-17
processing said N data elements with corresponding ones of said N reconfigurable processing elements | The programmed FPGAs are used to accelerate applications by handling compute-intensive, parallelized operations on user data. | ¶¶89, 90 | col. 22:17-21

U.S. Patent 7,149,867 Infringement Allegations

Claim Element (from Independent Claim 1) | Alleged Infringing Functionality | Complaint Citation | Patent Citation
a first memory having a first characteristic memory bandwidth and/or memory utilization | Each FPGA in the EC2 F1 instance includes its own local 64 GiB DDR4 memory. | ¶84 | col. 5:27-31
a data prefetch unit...retrieves only computational data required...from a second memory and places the retrieved computational data in the first memory | The customer-programmed logic on the FPGA allegedly functions as a data prefetch unit, retrieving only necessary data from main system memory (the alleged second memory) into the local FPGA memory (the alleged first memory). | ¶¶87, 99 | col. 4:1-12
wherein the data prefetch unit operates independent of and in parallel with logic blocks using the computational data | The custom hardware accelerators created by customers on the FPGAs allegedly perform data retrieval and computation in a parallel, pipelined manner. | ¶90 | col. 4:8-10
wherein at least the first memory and data prefetch unit are configured to conform to needs of the algorithm | Customers write custom VHDL or Verilog code to define the hardware accelerator, which inherently configures the data access patterns and memory usage to the specific needs of their algorithm. | ¶¶86, 99 | col. 4:10-12

Identified Points of Contention

  • Scope Questions: A central question for the ’687 patent will be whether a customer programming an FPGA to have parallel data paths meets the claim limitation of "instantiating N... processing elements." For the ’867 patent, a key dispute may arise over whether customer-programmed logic on a general-purpose FPGA constitutes a "data prefetch unit" as that term is used in the patent.
  • Technical Questions: A factual question for the ’687 patent is what evidence shows that N discrete processing "elements" are created, as opposed to a single, complex circuit. For the ’867 patent, key factual questions are whether the accused system's memory architecture functionally separates a "first memory" and a "second memory" in the manner required by the claim, and whether any logic on the FPGA performs the specific function of retrieving only required data.

V. Key Claim Terms for Construction

U.S. Patent 6,434,687

The Term

"instantiating N of said reconfigurable processing elements"

Context and Importance

This term is critical because infringement depends on whether programming an FPGA to perform parallel operations is equivalent to creating N distinct processing "elements" to process N data elements. The defense may argue that programming an FPGA creates a single, monolithic circuit, not N separate "elements." Practitioners may focus on this term because its construction will determine if the claim reads on parallel processing within a single FPGA or requires a more modular architecture.

Intrinsic Evidence for Interpretation

  • Evidence for a Broader Interpretation: Figure 14 of the patent illustrates a process where "N DEMOGRAPHIC DATA ELEMENTS" are processed in parallel by "N PROCESSING ELEMENTS," suggesting that "instantiating" refers to creating the parallel logical constructs needed to process N data items simultaneously (’687 Patent, Fig. 14; col. 22:15-17).
  • Evidence for a Narrower Interpretation: The claim preamble refers to "at least one reconfigurable processing element" in the singular. The specification also refers to a "MAP element" as a discrete unit, which could support an argument that "instantiating N... elements" requires creating N discrete, separable hardware units rather than just parallel pathways within one unit (’687 Patent, col. 5:2-4).

U.S. Patent 7,149,867

The Term

"data prefetch unit"

Context and Importance

The complaint does not allege that the EC2 F1 Instance contains a pre-built component named a "data prefetch unit." Therefore, Plaintiff’s infringement theory must rely on construing this term to cover a function performed by customer-programmed logic on the FPGA. The definition of this term is central to whether a general-purpose programmable device can infringe a claim that recites a specific functional component.

Intrinsic Evidence for Interpretation

  • Evidence for a Broader Interpretation: The patent provides a functional definition, stating a "Data prefetch Unit is a functional unit that moves data between members of a memory hierarchy" (’867 Patent, col. 5:40-43). The abstract further describes it as retrieving data and supplying it to a computational unit. This functional language may support reading the term onto any block of logic that performs this specified function.
  • Evidence for a Narrower Interpretation: The patent’s abstract and claims distinguish between the "data prefetch unit" and the "computational unit" (or "logic blocks") as separate components that operate in parallel. A defendant may argue that a single block of customer-programmed logic that both fetches and processes data does not contain a distinct "data prefetch unit" as contemplated by the patent.

VI. Other Allegations

Indirect Infringement

The complaint alleges that Defendants induce infringement by providing customers with the FPGA Developer AMI, a Hardware Developer Kit, detailed documentation, and use cases that actively encourage and instruct users on how to program the FPGAs in an infringing manner (Compl. ¶¶142-144, 173-176).

Willful Infringement

The willfulness claim is based on alleged pre-suit, actual knowledge. The complaint details four meetings in 2015 where SRC allegedly disclosed its patent portfolio and reconfigurable computing technology to Amazon/AWS personnel (Compl. ¶¶107-113). The complaint alleges that Defendants "blatantly and intentionally copied the inventions" by launching the accused EC2 F1 Instance service approximately sixteen months after these meetings (Compl. ¶¶148, 151). A slide from one presentation shows SRC's intellectual property portfolio broken down by technology area, including "Reconfigurable Processor," "System Architecture," and "Software/Compiler" (Compl. p. 21).

VII. Analyst’s Conclusion: Key Questions for the Case

  • A core issue will be one of functional equivalence: can claim terms reciting specific structural components, such as a "data prefetch unit," be construed to cover functions performed by customer-programmed logic on a general-purpose FPGA, or does the claim language require a distinct, pre-defined hardware block?
  • A key evidentiary question will be one of intent and copying: what information was exchanged during the 2015 meetings between SRC and Amazon, and what evidence, if any, demonstrates that this information was used to develop the accused EC2 F1 service? The resolution of this question will be central to the claim of willful infringement.
  • A significant procedural question will be the impact of sovereign immunity: can the assignment of the patents to the Saint Regis Mohawk Tribe shield them from validity challenges via inter partes review at the U.S. Patent and Trademark Office, and how will this affect the overall litigation strategy and potential for settlement?