DCT
2:18-cv-00317
SRC Labs LLC v. Amazon Web Services Inc
I. Executive Summary and Procedural Information
- Parties & Counsel:
- Plaintiff: SRC Labs, LLC (Texas) & Saint Regis Mohawk Tribe (New York)
- Defendant: Amazon Web Services, Inc., Amazon.com, Inc., & Vadata, Inc. (all Delaware)
- Plaintiff’s Counsel: Sands Anderson PC; Shore Chan DePumpo LLP
 
- Case Identification: 2:17-cv-00547, E.D. Va., 10/18/2017
- Venue Allegations: Plaintiffs allege venue is proper in the Eastern District of Virginia because Defendants maintain multiple regular and established places of business in the district, including numerous large-scale offices and data centers in Northern Virginia. The complaint notes that Defendants previously represented to the same court their strong connection to the Commonwealth of Virginia.
- Core Dispute: Plaintiffs allege that Defendants’ Elastic Compute Cloud (EC2) F1 instances, which utilize Field-Programmable Gate Arrays (FPGAs) for hardware acceleration, infringe five patents related to reconfigurable computer architectures and processing methods.
- Technical Context: The patents relate to reconfigurable computing, a field where hardware logic can be customized for specific tasks, potentially offering significant performance improvements over general-purpose CPUs for highly parallelizable workloads.
- Key Procedural History: The complaint alleges that Plaintiffs (SRC) and Defendants (Amazon/AWS) held four meetings in 2015, during which SRC presented its reconfigurable computing technology and patent portfolio. Plaintiffs allege that Defendants launched the accused EC2 F1 product approximately one year after these meetings. This history forms the basis for the willful infringement allegations. The complaint also notes the 2017 assignment of the patents-in-suit to the Saint Regis Mohawk Tribe, which asserts it has not waived sovereign immunity for the purpose of inter partes review proceedings. Subsequent to the complaint's filing, inter partes review proceedings resulted in the cancellation of all asserted claims for U.S. Patents 7,149,867; 7,225,324; and 7,620,800, as well as claim 1 of U.S. Patent 6,434,687.
Case Timeline
| Date | Event | 
|---|---|
| 1997-12-17 | ’687 Patent Priority Date | 
| 2002-08-13 | ’687 Patent Issue Date | 
| 2002-10-31 | ’324 & ’800 Patents Priority Date | 
| 2003-06-18 | ’867 Patent Priority Date | 
| 2006-12-12 | ’867 Patent Issue Date | 
| 2007-05-29 | ’324 Patent Issue Date | 
| 2009-11-17 | ’800 Patent Issue Date | 
| 2010-09-30 | Alleged date of constructive notice via product marking | 
| 2014-05-27 | ’311 Patent Priority Date | 
| 2015-05-12 | First alleged meeting between SRC and Amazon | 
| 2015-07-XX | Alleged month of meetings giving rise to actual knowledge | 
| 2015-10-06 | ’311 Patent Issue Date | 
| 2016-11-30 | Amazon announces launch of accused EC2 F1 Instance | 
| 2017-10-18 | Complaint Filing Date | 
II. Technology and Patent(s)-in-Suit Analysis
U.S. Patent 6,434,687 - System and method for accelerating web site access and processing utilizing a computer system incorporating reconfigurable processors operating under a single operating system image (Issued Aug. 13, 2002)
- The Invention Explained:
- Problem Addressed: The patent’s background describes the performance limitations of conventional microprocessor-based web servers, which struggle to rapidly process user demographic data to customize web page content in real time. This processing delay is critical, as the average internet user will wait only a limited time for a page to update (’687 Patent, col. 1:52-58).
- The Patented Solution: The invention proposes a hybrid computer system that combines standard microprocessors with one or more reconfigurable processors (e.g., FPGAs) that share system resources under a single operating system image (’687 Patent, col. 2:6-15). Computationally intensive tasks, such as processing demographic data, can be offloaded to the reconfigurable processors, where algorithms are implemented directly in hardware gates. This approach allows for highly parallel execution; for example, N data elements can be processed in a single iteration by instantiating N corresponding processing elements in the reconfigurable hardware (’687 Patent, Fig. 14). A behavioral sketch of this one-to-one mapping appears after the claim elements below.
- Technical Importance: The described system aimed to overcome the architectural bottlenecks of general-purpose CPUs for highly parallelizable tasks common to e-commerce and web services, such as database searching and data encryption (’687 Patent, col. 2:53-65).
 
- Key Claims at a Glance:
- The complaint asserts independent claims 1 and 18 (Compl. ¶138).
- Independent Claim 1 (as asserted) includes the following elements:
- A method for processing data at an internet site comprising:
- providing a reconfigurable server at said site incorporating at least one microprocessor and at least one reconfigurable processing element;
- receiving N data elements at said site relative to a remote computer coupled to said site;
- instantiating N of said reconfigurable processing elements at said reconfigurable server; and
- processing said N data elements with corresponding ones of said N reconfigurable processing elements.
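The parallelism described above can be illustrated with a minimal behavioral sketch in Python. This is not drawn from the complaint or the patent, and every name in it is hypothetical; the patent describes hardware gate configurations instantiated on an FPGA, which are only loosely modeled here as worker threads. The sketch mirrors the Figure 14 arrangement: one processing element per incoming data element, with all N elements processed in a single parallel pass.

```python
from concurrent.futures import ThreadPoolExecutor

def process_data_element(record):
    # Stand-in for one "process data element" block from Fig. 14 of the
    # '687 patent; here it simply derives a content segment from a
    # demographic field.
    return {"user": record["user"], "segment": hash(record["region"]) % 16}

def process_in_single_iteration(records):
    # Instantiate N processing elements (modeled here as N workers) and
    # process the N data elements with corresponding elements at once.
    n = len(records)
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(process_data_element, records))

if __name__ == "__main__":
    incoming = [{"user": i, "region": f"r{i % 4}"} for i in range(8)]
    print(process_in_single_iteration(incoming))
```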
 
 
U.S. Patent 7,149,867 - System and method of enhancing efficiency and utilization of memory bandwidth in reconfigurable hardware (Issued Dec. 12, 2006)
- The Invention Explained:
- Problem Addressed: Traditional microprocessor memory hierarchies, particularly caches, are described as being inefficient for many high-performance workloads because their fixed structures (e.g., cache line width, replacement policies) do not align with the specific data access patterns of every algorithm, leading to wasted memory bandwidth (’867 Patent, col. 1:15-24; col. 2:30-41).
- The Patented Solution: The patent discloses implementing "explicit memory hierarchies" within a reconfigurable processor. The core component is a "data prefetch unit" that is configured to retrieve only the computational data required by a specific algorithm from a main ("second") memory and place it into a local, fast ("first") memory. This avoids the overhead of fetching entire, fixed-size cache lines that may contain unneeded data, thereby tailoring memory access to the algorithm's specific needs (’867 Patent, col. 4:1-10; Abstract). A software sketch of this arrangement follows the claim elements below.
- Technical Importance: This approach enables memory access patterns to be customized for the algorithm being executed on the reconfigurable hardware, which may maximize memory bandwidth efficiency and overall system performance (’867 Patent, col. 3:19-24).
 
- Key Claims at a Glance:
- The complaint asserts independent claim 1 and dependent claims 3 and 4 (Compl. ¶157).
- Independent Claim 1 includes the following elements:
- A reconfigurable processor that instantiates an algorithm as hardware comprising:
- a first memory having a first characteristic memory bandwidth and/or memory utilization; and
- a data prefetch unit coupled to the memory,
- wherein the data prefetch unit retrieves only computational data required by the algorithm from a second memory and places the retrieved computational data in the first memory,
- wherein the data prefetch unit operates independent of and in parallel with logic blocks using the computational data,
- and wherein the data prefetch unit is configured to match the format and location of data in the second memory.
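As a rough illustration only, the following Python sketch (hypothetical names; the patent describes logic instantiated in reconfigurable hardware, not threads) stages exactly the elements an algorithm will touch from a large "second" memory into a small "first" memory, while the consuming logic runs as a separate unit. The contrast the patent draws is with a fixed-width cache line, which would drag in neighboring data the algorithm never uses.

```python
import threading

SECOND_MEMORY = list(range(1_000_000))   # large main memory (e.g., DRAM)
FIRST_MEMORY = {}                        # small, fast local memory (e.g., FPGA block RAM)

def data_prefetch_unit(indices, done):
    # Copies only the elements the algorithm will use, from the locations
    # where they sit in the second memory, into the first memory.
    for i in indices:
        FIRST_MEMORY[i] = SECOND_MEMORY[i]
    done.set()

def compute_kernel(indices, done):
    # Consumes the staged data; it is a separate unit from the prefetch
    # logic, loosely mirroring the "independent of and in parallel with
    # logic blocks" limitation.
    done.wait()
    return sum(FIRST_MEMORY[i] for i in indices)

needed = range(0, 1_000_000, 4096)       # strided access pattern of a hypothetical algorithm
ready = threading.Event()
threading.Thread(target=data_prefetch_unit, args=(needed, ready)).start()
print(compute_kernel(needed, ready))
```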
 
 
Multi-Patent Capsule: U.S. Patent 7,225,324
- Patent Identification: U.S. Patent 7,225,324, "Multi-adaptive processing systems and techniques for enhancing parallelism and performance of computational functions," issued May 29, 2007.
- Technology Synopsis: The patent addresses the inefficiency of sequential processing in conventional systems where logic blocks are idle while waiting for others to complete. It proposes techniques for multi-dimensional pipelining and systolic wavefront computations in reconfigurable hardware, allowing different stages or data dimensions of a problem to be processed concurrently and data to be passed seamlessly between them (’324 Patent, col. 6:15-21). A pipeline sketch follows this capsule.
- Asserted Claims: 1 and 17 (Compl. ¶170).
- Accused Features: The accused EC2 F1 Instance enables users to implement parallel data processing algorithms on FPGAs (Compl. ¶¶75-76, 90).
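A loose software model of the pipelining idea (Python, hypothetical names, not the patented design) is shown below: once the pipeline is full, every stage does useful work on every "clock," each stage operating on a different element of the input stream rather than idling while upstream stages finish.

```python
def systolic_pipeline(stages, stream):
    # regs[s] holds the value sitting at the output of stage s at the end
    # of each clock, like a pipeline register in hardware.
    n = len(stages)
    regs = [None] * n
    results = []
    # Feed the input stream, then n empty clocks to drain the pipeline.
    for x in list(stream) + [None] * n:
        out = regs[-1]                      # value emerging from the final stage
        for s in range(n - 1, 0, -1):       # shift data forward, deepest stage first
            regs[s] = stages[s](regs[s - 1]) if regs[s - 1] is not None else None
        regs[0] = stages[0](x) if x is not None else None
        if out is not None:
            results.append(out)
    return results

# Three stages; in steady state all three are busy on the same clock,
# each working on a different data element.
stages = [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3]
print(systolic_pipeline(stages, range(5)))  # [-1, 1, 3, 5, 7]
```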
Multi-Patent Capsule: U.S. Patent 7,620,800
- Patent Identification: U.S. Patent 7,620,800, "Multi-adaptive processing systems and techniques for enhancing parallelism and performance of computational functions," issued November 17, 2009.
- Technology Synopsis: As a continuation of the ’324 patent, this patent further describes systems for enhancing parallelism through multi-dimensional pipelining and systolic wavefront processing on reconfigurable hardware. These techniques are designed to maximize the utilization of computational logic by ensuring it is operative on every clock cycle (’800 Patent, col. 6:15-21).
- Asserted Claims: 1 and 17 (Compl. ¶188).
- Accused Features: The accused EC2 F1 Instance provides a platform for users to create and execute highly parallelized hardware accelerators (Compl. ¶¶75-76, 90).
Multi-Patent Capsule: U.S. Patent 9,153,311
- Patent Identification: U.S. Patent 9,153,311, "System and method for retaining DRAM data when reprogramming reconfigurable devices with DRAM memory controllers," issued October 6, 2015.
- Technology Synopsis: The patent addresses the problem of data loss in DRAM when an associated FPGA, which contains the memory controller, is reprogrammed. The solution is a "data maintenance block" that manages the DRAM's self-refresh commands independently, allowing the FPGA to be reconfigured while the data in DRAM is preserved (’311 Patent, col. 2:3-12). A toy model of this sequence follows this capsule.
- Asserted Claims: 1, 3, 9, and 10 (Compl. ¶206).
- Accused Features: The complaint alleges the EC2 F1 Instance infringes by allowing users to reprogram FPGAs while maintaining data in the instance's memory (Compl. ¶¶86, 206).
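As a rough illustration only, the sequence the ’311 patent describes can be modeled in a few lines of Python (hypothetical names, not the patented circuit): refresh responsibility is handed to the DRAM itself, the FPGA whose memory controller is temporarily offline is reconfigured, and the controller then resumes normal operation with the stored data intact.

```python
import time

class DataMaintenanceBlock:
    # Toy stand-in for the patent's "data maintenance block": a small unit,
    # separate from the FPGA being reprogrammed, that manages self-refresh.
    def __init__(self, dram):
        self.dram = dram

    def enter_self_refresh(self):
        self.dram["mode"] = "self-refresh"   # DRAM now refreshes itself internally

    def exit_self_refresh(self):
        self.dram["mode"] = "active"

def reprogram_fpga(maintenance):
    maintenance.enter_self_refresh()          # hand refresh duty to the DRAM
    time.sleep(0.01)                          # stand-in for loading a new bitstream;
                                              # the FPGA's memory controller is offline here
    maintenance.exit_self_refresh()           # controller back online

dram = {"mode": "active", "data": [42] * 1024}
reprogram_fpga(DataMaintenanceBlock(dram))
assert dram["data"][0] == 42                  # contents preserved across reconfiguration
```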
III. The Accused Instrumentality
- Product Identification: The accused instrumentality is the Amazon Elastic Compute Cloud (EC2) F1 Instance service (Compl. ¶63).
- Functionality and Market Context: The EC2 service allows customers to rent virtual computer instances (Compl. ¶66). The accused F1 instance type is distinguished by its inclusion of Field-Programmable Gate Arrays (FPGAs), which customers can program to create custom hardware accelerators for computationally intensive applications (Compl. ¶¶68, 76). Defendants provide an FPGA Developer Amazon Machine Image (AMI) with tools to help customers write, compile, and debug custom FPGA designs (Compl. ¶¶95-96). The complaint alleges that FPGAs in F1 instances are connected via a dedicated PCI Express fabric, allowing them to share the same memory space, and that properly programmed FPGAs can provide significant speedups for tasks such as genomics, financial risk analysis, and big data search (Compl. ¶¶77, 87). The complaint includes a presentation slide from a 2015 meeting between SRC and Amazon, which outlines SRC's own vision for a "hyper-performance server" combining CPUs and reconfigurable processors (Compl. ¶111, p. 20).
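For orientation only, the following is a minimal sketch of launching an F1 instance with boto3, the AWS SDK for Python. The AMI ID is a placeholder, networking and key-pair details are omitted, and nothing here reflects any party's actual configuration; it simply illustrates the general workflow the complaint describes, in which a customer obtains an FPGA-equipped instance and then programs the attached FPGA.

```python
import boto3

# Placeholder value; the actual FPGA Developer AMI ID varies by region and version.
FPGA_DEVELOPER_AMI = "ami-0123456789abcdef0"

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single F1 instance whose attached FPGA can then be programmed
# with a customer-built hardware accelerator design.
response = ec2.run_instances(
    ImageId=FPGA_DEVELOPER_AMI,
    InstanceType="f1.2xlarge",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```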
IV. Analysis of Infringement Allegations
The complaint references claim-chart exhibits that are not provided in the submitted document. The narrative infringement theories are summarized below.
- ’687 Patent Infringement Allegations Summary: The complaint alleges that Defendants' EC2 F1 Instance is a "reconfigurable server" that incorporates both microprocessors (Intel Broadwell processors) and "reconfigurable processing elements" (Xilinx FPGAs) (Compl. ¶¶79-80, 138). The infringement theory is based on the use of this service by Defendants and their customers to receive data ("N data elements"), program the FPGAs to create parallel processing structures ("instantiating N...processing elements"), and process that data concurrently (Compl. ¶¶90, 138).
- Identified Points of Contention:
- Scope Questions: A central question may be whether providing a cloud computing service where customers can program FPGAs meets the claim limitation of "instantiating N of said reconfigurable processing elements," particularly if the patent is construed to require a one-to-one correspondence between data elements and processing elements.
- Technical Questions: The infringement analysis may turn on what evidence exists that the accused F1 instances are used to instantiate processing elements in the specific parallel manner described in the patent, as illustrated in Figure 14 of the ’687 Patent.
 
- ’867 Patent Infringement Allegations Summary: The complaint alleges that the EC2 F1 Instance provides a system that embodies the claimed invention (Compl. ¶157). The theory suggests that customers use Amazon's tools to program the FPGAs (the "reconfigurable processor") to create a "data prefetch unit" that retrieves data from the instance's main DRAM ("second memory") and places it into the FPGA's on-chip block RAM ("first memory") in a manner tailored to a specific algorithm (Compl. ¶¶83-84, 99). This allegedly allows users to create the explicit, efficient memory hierarchies claimed by the patent.
- Identified Points of Contention:
- Scope Questions: A key question for the court may be whether user-programmed logic for data retrieval, created using Defendants' development tools, constitutes a "data prefetch unit" as a distinct element required by the claim.
- Technical Questions: The dispute may focus on whether the accused functionality, as practiced by F1 instance users, retrieves "only computational data required" in a manner functionally distinct from conventional caching mechanisms, a distinction that is central to the patent's described solution.
 
V. Key Claim Terms for Construction
- Term: "instantiating N of said reconfigurable processing elements" (’687 Patent, Claim 1)- Context and Importance: The construction of this term is central to the scope of Claim 1. The dispute may focus on whether "instantiating N" requires creating a number of hardware processing elements precisely equal to the number of data elements (N) being processed, or if it can be read more broadly to mean configuring the hardware to process N data elements in parallel.
- Intrinsic Evidence for a Broader Interpretation: The specification describes the invention in general terms as loading "demographic data processing algorithms... into the reconfigurable processors," which can then be "implemented in hardware gates" to process data faster (’687 Patent, col. 2:19-25).
- Intrinsic Evidence for a Narrower Interpretation: Figure 14 and its accompanying description explicitly show a one-to-one mapping where "N DEMOGRAPHIC DATA ELEMENTS" are processed by N instantiated "PROCESS DATA ELEMENT" blocks, which the specification states "instantiate[s] N processing elements" to achieve a single-iteration process (’687 Patent, col. 21:10-18; Fig. 14).
 
- Term: "data prefetch unit" (’867 Patent, Claim 1)- Context and Importance: Practitioners may focus on this term because the infringement theory depends on customers creating this "unit" themselves using Defendants' tools. The definition will determine whether the claim covers any user-programmed data retrieval logic or requires a more specific, pre-defined hardware structure.
- Intrinsic Evidence for a Broader Interpretation: The patent provides a broad, functional definition: "Data prefetch Unit—is a functional unit that moves data between members of a memory hierarchy" (’867 Patent, col. 5:40-42). This language could support an interpretation covering user-created logic.
- Intrinsic Evidence for a Narrower Interpretation: The specification describes specific embodiments, such as an "XY-slice data prefetch unit" and a "Gather data prefetch unit" (’867 Patent, col. 9:15-30). A party might argue that the term should be construed in light of these specific examples rather than as any generic data-fetching logic.
 
VI. Other Allegations
- Indirect Infringement: The complaint alleges induced infringement for the asserted patents, claiming Defendants encourage infringement by marketing the EC2 F1 Instance, providing detailed documentation and development kits (the FPGA Developer AMI), and promoting use cases that allegedly practice the patented methods. The complaint identifies a list of "F1 Instance Partners" as exemplary direct infringers (Compl. ¶¶141-144, 173-175, 191-193).
- Willful Infringement: The complaint alleges willful infringement based on pre-suit knowledge. It describes a series of four meetings in 2015 where SRC allegedly presented its technology and "emphasized the strength of its patent portfolio" to Amazon and AWS executives and engineers (Compl. ¶¶107, 112, 148, 161). The complaint includes a slide from these meetings titled "Intellectual Property Developed by SRC's Team of Experienced Engineers," which explicitly states that "Patents cover all aspects of reconfigurable computing" and notes "Priority dates as far back as 1997" (Compl. p. 21). Plaintiffs allege Defendants "blatantly and intentionally copied the inventions" after these meetings and launched the accused product "a little over a year" later (Compl. ¶¶116, 151, 164).
VII. Analyst’s Conclusion: Key Questions for the Case
- A central issue will be one of pre-suit conduct and knowledge: The case may turn on the factual record of the 2015 meetings between the parties. The court will likely need to resolve what specific technical information and notice of patents were provided to Amazon, which will be critical to the allegations of willful copying versus independent development.
- A key evidentiary question will be one of functional correspondence: Does the accused EC2 F1 service, as used by customers, perform the specific methods required by the claims? For the '687 patent, this involves whether users "instantiate N" processing elements in the claimed manner. For the '867 patent, this involves whether users program a "data prefetch unit" that operates distinctly from conventional caching by retrieving "only" the data required by the algorithm.
- A threshold legal question will be the viability of the infringement claims: Given that inter partes review proceedings subsequent to the filing of the complaint resulted in the cancellation of every asserted claim from the ’867, ’324, and ’800 patents, as well as asserted claim 1 of the ’687 patent, the court will need to address the continued justiciability of these counts.