1:25-cv-01019

Xockets Inc v. Amazon.com Inc

I. Executive Summary and Procedural Information

  • Parties & Counsel:
  • Case Identification: 1:25-cv-01019, W.D. Tex., 06/30/2025
  • Venue Allegations: Plaintiff alleges venue is proper in the Western District of Texas because Defendant maintains a regular and established place of business in the district, specifically its Annapurna Labs facility in Austin, and has committed alleged acts of infringement there, including the research, design, development, and testing of the accused technologies.
  • Core Dispute: Plaintiff alleges that Defendant’s cloud computing platforms, particularly the AWS Nitro System, infringe three patents related to Data Processing Unit (DPU) architecture for offloading network, security, and storage tasks from main server processors.
  • Technical Context: The technology at issue involves Data Processing Units (DPUs), specialized processors designed to accelerate cloud infrastructure services in data centers, a crucial component for scaling modern artificial intelligence and distributed computing workloads.
  • Key Procedural History: The complaint alleges that Defendant became aware of the patented technology through a public demonstration at a 2015 industry conference and a confidential "Deep Dive" meeting in 2017 conducted under the premise of a potential acquisition. Plaintiff further alleges that it put Defendant on notice of the asserted patents in March 2024 as part of a patent portfolio sales process, in which Defendant ultimately declined to participate.

Case Timeline

Date Event
2012-05-22 Earliest Priority Date for ’209 and ’350 Patents
2013-01-17 Earliest Priority Date for ’924 Patent
2015-09-01 Xockets demonstrates DPU technology at Strata Conference (Fall 2015)
2017-05-01 "Deep Dive" meeting between Xockets and Amazon (May 2017)
2017-12-31 Amazon deploys Nitro v3 system (by end of 2017)
2020-05-12 U.S. Patent No. 10,649,924 issues
2021-08-03 U.S. Patent No. 11,080,209 issues
2021-08-23 U.S. Patent No. 11,082,350 issues
2024-03-01 Xockets initiates sales process and contacts Amazon (March 2024)
2025-06-30 Complaint Filed

II. Technology and Patent(s)-in-Suit Analysis

U.S. Patent No. 11,080,209 - "Server Systems and Methods for Decrypting Data Packets With Computation Modules Insertable Into Servers That Operate Independent of Server Processors"

  • Patent Identification: U.S. Patent No. 11,080,209, issued August 3, 2021.

The Invention Explained

  • Problem Addressed: The patent addresses the inefficiency and high power consumption of conventional server processors (e.g., x86 architecture) when handling high-volume packet processing and security tasks, such as encryption/decryption, which are common in cloud data centers (’209 Patent, col. 1:41-52). The high cost of "context switching" for these tasks further degrades performance (Compl. ¶8).
  • The Patented Solution: The invention proposes a removable "computation module" that can be inserted into a standard server socket, such as a DIMM (memory) slot, to operate as a dedicated, power-efficient co-processor (’209 Patent, Abstract; col. 2:14-24). This module contains specialized circuits to offload functions like packet header detection, virtual switching, and decryption, executing them independently of the main server processor to accelerate performance and free up main CPU resources (’209 Patent, Abstract).
  • Technical Importance: This architecture allows for the acceleration of network and security functions on commodity servers through a modular hardware upgrade, rather than requiring a complete server replacement (Compl. ¶70).
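
The division of labor described here can be illustrated with a brief sketch (all names are hypothetical and this is not the patented implementation): a separate offload object performs header detection, session classification, decryption, and virtual switching on a packet without any further involvement from the host, mirroring the "independent of the server processor" concept.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    raw: bytes  # toy wire format: 4-byte session-id header + XOR-"encrypted" payload

class OffloadModule:
    """Toy stand-in for a 'computation module': every step below runs
    without consulting the host processor."""

    def __init__(self, key: int):
        self.key = key
        self.queues: dict[int, list[bytes]] = {}  # virtual-switch output queues

    def detect_header(self, pkt: Packet) -> tuple[int, bytes]:
        # Header detection: split the fixed-size header from the payload.
        session_id = int.from_bytes(pkt.raw[:4], "big")
        return session_id, pkt.raw[4:]

    def decrypt(self, payload: bytes) -> bytes:
        # Placeholder XOR "decryption"; real modules use dedicated circuits.
        return bytes(b ^ self.key for b in payload)

    def process(self, pkt: Packet) -> None:
        # Classify by session identifier, decrypt, then "virtually switch"
        # the cleartext into the queue for that session.
        session_id, payload = self.detect_header(pkt)
        self.queues.setdefault(session_id, []).append(self.decrypt(payload))

module = OffloadModule(key=0x5A)
wire = (7).to_bytes(4, "big") + bytes(b ^ 0x5A for b in b"hello")
module.process(Packet(raw=wire))
print(module.queues[7])  # [b'hello']
```

In this toy model, the host's only role is handing raw packets to the module; classification, decryption, and switching all happen inside `OffloadModule`, which is the architectural separation the patent describes.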

Key Claims at a Glance

  • The complaint asserts independent claim 18 and dependent claim 20 (Compl. ¶140).
  • The essential elements of independent claim 18 include:
    • A server system comprising a plurality of servers interconnected by a network.
    • Each server includes a server processor that executes an operating system.
    • The server also includes at least one computation module that is separate from the server processor and coupled to it by a bus.
    • This computation module has "first processing circuits" configured to perform header detection and packet classification, and to operate as a virtual switch.
    • The module also has "at least decryption circuits" implemented on programmable logic devices to decrypt packets.
    • A "wherein" clause requires that the computation module executes these functions (header detection, classification, switching, and decryption) "independent of the server processor."

U.S. Patent No. 10,649,924 - "Network Overlay Systems and Methods Using Offload Processors"

  • Patent Identification: U.S. Patent No. 10,649,924, issued May 12, 2020.

The Invention Explained

  • Problem Addressed: The patent’s background section describes the computational burden created by "overlay" networks, which are common in cloud computing (’924 Patent, col. 1:36-44). These virtual networks require packets to be encapsulated with new headers for transport over a physical network and then decapsulated at their destination, tasks that consume significant host processor resources (’924 Patent, col. 1:52-56).
  • The Patented Solution: The invention discloses an "offload processor module" that connects to a host server's system bus, often illustrated as a memory bus socket like a DIMM slot (’924 Patent, col. 2:25-27; Abstract). This module is configured to receive network packets and perform the necessary encapsulation or decapsulation for the overlay network, operating independently of the host processor to offload this specialized workload (’924 Patent, Abstract).
  • Technical Importance: This solution aims to increase the efficiency of virtualized cloud infrastructure by dedicating the task of overlay network management to specialized, off-chip hardware, thereby freeing the main CPU for its primary application-processing functions (Compl. ¶24).

Key Claims at a Glance

  • The complaint asserts at least independent claim 9 (Compl. ¶189).
  • The essential steps of independent method claim 9 include:
    • Receiving network packet data in an "offload processor module" that is mounted to a system bus of a host server.
    • Executing, via processing circuits on the module, the step of either "encapsulating" packet data to create packets for a logical network or "decapsulating" them for delivery.
    • Performing this encapsulation or decapsulation "independent of any host processor."
    • Transporting the resulting packets out of the offload processor module.
    • The method operates where "the logical network is overlaid on a physical network."
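
The encapsulation and decapsulation steps recited above can be sketched in a few lines (a simplified, VXLAN-like framing of my own devising; not the claimed method): an outer header carrying a virtual-network identifier is prepended for transport over the physical network and stripped at the destination to recover the original frame.

```python
import struct

VNI_HEADER = struct.Struct("!I")  # toy 4-byte virtual network identifier

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    # Prepend an outer header naming the logical (overlay) network.
    return VNI_HEADER.pack(vni) + inner_frame

def decapsulate(outer_packet: bytes) -> tuple[int, bytes]:
    # Strip the outer header to recover the tenant's original frame.
    (vni,) = VNI_HEADER.unpack_from(outer_packet)
    return vni, outer_packet[VNI_HEADER.size:]

outer = encapsulate(b"tenant frame", vni=4096)
vni, inner = decapsulate(outer)
print(vni, inner)  # 4096 b'tenant frame'
```

The claimed method locates this header manipulation in the offload module's processing circuits rather than in host software, which is where the "independent of any host processor" limitation does its work.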

U.S. Patent No. 11,082,350 - "Network Server Systems, Architectures, Components and Related Methods"

  • Patent Identification: U.S. Patent No. 11,082,350, issued August 23, 2021.
  • Technology Synopsis: This patent relates to expanding the capability of a DPU by incorporating general-purpose processors (e.g., ARM cores) with a hardware scheduling technology. This combination allows the programmable processors to perform run-to-completion stream processing of network packet flows at line rate, effectively enabling them to function with the speed of dedicated hardware accelerators (Compl. ¶25).
  • Asserted Claims: The complaint asserts at least independent claim 1 (Compl. ¶223).
  • Accused Features: The complaint accuses Amazon's AWS server systems equipped with the AWS Nitro DPU, as well as those utilizing NVIDIA's NVLink Switch DPU, of infringement (Compl. ¶¶225, 236).
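
The "run-to-completion" model referenced in the synopsis can be illustrated with a minimal scheduler sketch (illustrative only, under my own simplifying assumptions): once a packet is dispatched to a worker, it passes through every pipeline stage without preemption, which is what lets general-purpose cores approximate the determinism of dedicated hardware at line rate.

```python
from collections import deque

def run_to_completion(flows: dict[str, deque], stages) -> dict[str, list]:
    """Toy run-to-completion scheduler: a dispatched packet passes through
    every stage before the scheduler moves on (no mid-packet preemption)."""
    results: dict[str, list] = {name: [] for name in flows}
    while any(flows.values()):
        for name, queue in flows.items():  # round-robin across packet flows
            if queue:
                pkt = queue.popleft()
                for stage in stages:       # all stages run, uninterrupted
                    pkt = stage(pkt)
                results[name].append(pkt)
    return results

stages = [bytes.upper, lambda p: p + b"!"]
out = run_to_completion({"flowA": deque([b"a1", b"a2"]), "flowB": deque([b"b1"])}, stages)
print(out)  # {'flowA': [b'A1!', b'A2!'], 'flowB': [b'B1!']}
```

The patent pairs this software model with a hardware scheduler; the sketch only shows the scheduling discipline, not the claimed hardware.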

III. The Accused Instrumentality

Product Identification

The accused instrumentalities are primarily server systems incorporating the AWS Nitro System, which the complaint defines as a Data Processing Unit (DPU). This includes various AWS offerings such as Compute Server Systems (e.g., EC2, Trainium Servers, UltraCluster) and Storage Server Systems (e.g., EBS Servers) that utilize the Nitro DPU (Compl. ¶¶139, 188). For the ’350 Patent, the allegations also extend to systems using NVIDIA NVLink Switch DPUs (Compl. ¶222).

Functionality and Market Context

The complaint alleges the AWS Nitro System comprises a Nitro Controller and Nitro Cards that offload data plane infrastructure functions—including security, networking, storage, and ML/AI services—from the server's main host processor (Compl. ¶143). Functionally, it is alleged to operate as a programmable virtual switch, perform header detection on incoming packets, classify them by session, and execute decryption using hardware offload engines, all independent of the host CPU (Compl. ¶¶149-154). The complaint alleges that the Nitro DPU, developed at Amazon's Annapurna Labs in Austin, Texas, is a key technology for Amazon's cloud business, with over 20 million units deployed (Compl. ¶¶14, 40, 104). A video screenshot included in the complaint shows the exterior of the Annapurna Labs facility in Austin (Compl. p. 10).

IV. Analysis of Infringement Allegations

11,080,209 Patent Infringement Allegations

Claim Element (from Independent Claim 18) | Alleged Infringing Functionality | Complaint Citation | Patent Citation
A server system, comprising: a plurality of servers interconnected by a network... | Amazon EC2 compute instances (a plurality of servers) form a server system interconnected by the AWS Cloud network (e.g., EFA or ENA networking). | ¶¶143-144 | col. 5:26-28
each server including a server processor configured to execute an operating system for the server, | The Amazon EC2 compute instances include host processors on the main board which execute operating systems such as Ubuntu or Windows. | ¶146 | col. 5:29-31
at least one computation module, separate from the server processor and coupled to the server processor by at least one bus... | The AWS Nitro DPU is alleged to be a computation module that is separate from the host processors and is coupled to them via a bus. | ¶148 | col. 5:32-35
...first processing circuits...configured to execute header detection on packets received by the server, classifying received packets by a session identifier, and operate as a virtual switch to provide packets to circuits... | The AWS Nitro DPU allegedly includes software-defined hardware devices (first processing circuits) that execute header detection, classify packets by session identifier, and operate as a programmable virtual switch. | ¶¶149-150 | col. 5:38-47
...at least decryption circuits implemented on programmable logic devices and configured to decrypt received packets; | The AWS Nitro DPU allegedly includes hardware offload engines (decryption circuits) implemented on separate programmable pipelines (programmable logic devices) that decrypt received packets. | ¶¶151-152 | col. 5:48-51
wherein the computation modules execute header detection, classifying of packets, virtual switching of packets, and decryption of packets independent of the server processor of their respective server. | The AWS Nitro DPU allegedly offloads the functions of header detection, packet classification, virtual switching, and decryption from the host processor of its respective server. | ¶154 | col. 5:52-56

10,649,924 Patent Infringement Allegations

Claim Element (from Independent Claim 9) | Alleged Infringing Functionality | Complaint Citation | Patent Citation
receiving network packet data from a data source in an offload processor module that is mounted to a system bus of a host server... | The AWS Nitro DPU (offload processor module) in an Amazon EC2 instance (host server) is connected via a PCIe bus (system bus) and receives network packet data from the network (data source). | ¶¶192, 194 | col. 8:46-49
encapsulating the network packet data to create encapsulated network packets... or decapsulating the network packet data..., the encapsulating and decapsulating being executed by processing circuits... independent of any host processor; | The AWS Nitro DPU contains processing circuits (ASICs) that perform encapsulation and decapsulation functions for overlay networks, and these functions are offloaded from the host processor. | ¶¶196-197 | col. 2:30-41
transporting the encapsulated network packets or the decapsulated network packets out of the offload processor module; | The resulting encapsulated or decapsulated packets are transported out of the AWS Nitro DPU. | ¶196 | col. 8:49-51
wherein the logical network is overlaid on a physical network. | The AWS Nitro DPU provides overlay virtual network services using protocols such as Scalable Reliable Datagram (SRD), where a logical network is overlaid on a physical network. | ¶198 | col. 1:36-44

Identified Points of Contention:

  • Scope Questions: A central issue may be whether the AWS Nitro System, which is described as being built into Amazon’s server systems, meets the claim limitations of a "computation module" (’209 Patent) or "offload processor module" (’924 Patent). The patents’ specifications frequently illustrate these inventions as removable modules inserted into memory sockets (e.g., DIMMs) (’209 Patent, col. 2:14-19; ’924 Patent, col. 2:25-27). This raises the question of whether the scope of these terms can be construed to cover a more integrated hardware component connected via a different type of bus, such as PCIe, as alleged for the Nitro DPU (Compl. ¶194).
  • Technical Questions: The infringement allegations for all three patents rely on the assertion that the Nitro DPU performs its functions "independent of the server processor." The degree of actual operational independence between the Nitro DPU and the host CPU it serves will likely be a point of significant factual dispute, requiring technical evidence regarding the specific architecture and control paths of the accused systems.

V. Key Claim Terms for Construction

U.S. Patent No. 11,080,209

  • The Term: "computation module"
  • Context and Importance: This term defines the infringing article in claim 18. Its construction will determine whether the AWS Nitro System, as an integrated component, falls within the claim's scope. Practitioners may focus on this term because the complaint alleges infringement by an integrated system, while the patent specification repeatedly emphasizes a "removable" and "insertable" module.
  • Intrinsic Evidence for Interpretation:
    • Evidence for a Broader Interpretation: The body of claim 18 itself only requires the module to be "separate from the server processor and coupled to the server processor by at least one bus" (’209 Patent, cl. 18). This language does not explicitly require physical removability or a specific form factor.
    • Evidence for a Narrower Interpretation: The patent's abstract describes a "removable computation module configured for insertion into the socket" and the detailed description consistently refers to modules insertable into DIMM sockets (’209 Patent, Abstract; col. 2:14-24). This consistent framing could support an interpretation limited to physically distinct, socketed hardware.

U.S. Patent No. 10,649,924

  • The Term: "offload processor module that is mounted to a system bus"
  • Context and Importance: This term in method claim 9 defines the hardware environment where the infringing method is performed. The dispute will likely concern whether a PCIe-connected DPU meets this limitation, given the patent's focus on memory bus connections.
  • Intrinsic Evidence for Interpretation:
    • Evidence for a Broader Interpretation: A PCIe bus is a type of system bus, and the plain language of the claim does not specify a memory bus. The specification acknowledges other I/O busses like PCI (’924 Patent, col. 3:60-63).
    • Evidence for a Narrower Interpretation: The summary of the invention and background consistently describe a "memory bus connected module" (’924 Patent, col. 1:19-20) and its mounting in a "memory bus socket such as a dual in line memory module (DIMM) socket" (’924 Patent, col. 2:25-27). This context may suggest the invention is specific to modules that interface directly with the memory bus.

VI. Other Allegations

  • Indirect Infringement: While the complaint focuses on direct infringement by Defendant's operation of its data centers, it also includes language asserting that Defendant induces and contributes to infringement by others, presumably its AWS customers (Compl. ¶¶139, 188). The factual basis for inducement would likely rely on Amazon providing instructions, documentation, and a platform that directs customers to use the accused server systems in an infringing manner.
  • Willful Infringement: The complaint alleges willful infringement based on both pre-suit and post-suit knowledge (Compl. ¶¶108, 181, 215). The allegations for pre-suit knowledge are based on: (1) Defendant's alleged attendance at a 2015 conference where the technology was demonstrated (Compl. ¶112); (2) Defendant's 2017 "Deep Dive" diligence into Plaintiff's technology (Compl. ¶¶74-77); and (3) citations to Plaintiff's patents in Defendant's own patent filings (Compl. ¶¶114-115). The complaint provides a floor plan from the 2015 conference showing the proximity of the parties' booths (Compl. p. 23). Pre-suit notice of the asserted patents themselves is based on communications beginning in March 2024, when Plaintiff allegedly notified Defendant of the patents as part of a sales process (Compl. ¶116).

VII. Analyst’s Conclusion: Key Questions for the Case

This case will likely present the court with several key technical and legal questions whose resolution will be central to the outcome of the litigation.

  • A core issue will be one of definitional scope: can the terms "computation module" and "offload processor module," which are heavily contextualized in the patent specifications as physically removable cards inserted into memory sockets (DIMMs), be construed to cover Defendant's allegedly more integrated Nitro System, which connects via a PCIe bus?
  • A second central question will be one of causation and copying: what evidence exists to demonstrate that the architecture of the AWS Nitro System, which launched within a year of the 2017 "Deep Dive" meeting, was derived from the specific technical information Plaintiff disclosed to Defendant during those confidential discussions? The answer will be critical to the allegations of willful infringement.
  • Finally, a key evidentiary question will be the degree of operational independence of the accused Nitro DPU. The case may turn on technical evidence demonstrating whether the DPU's processing of network, storage, and security functions is truly "independent of the server processor" as required by the claims, or if there is a level of host processor involvement that places the accused system outside the claims' scope.