1:25-cv-01623
ThroughPuter Inc v. Amazon Web Services Inc
I. Executive Summary and Procedural Information
- Parties & Counsel:
  - Plaintiff: ThroughPuter, Inc. (Delaware)
  - Defendant: Amazon Web Services, Inc. (Delaware)
  - Plaintiff’s Counsel: Gardella Alciati P.A.
 
- Case Identification: 1:25-cv-01623, W.D. Tex., 10/07/2025
- Venue Allegations: Plaintiff alleges venue is proper in the Western District of Texas because Defendant has a regular and established place of business in the district and has committed alleged acts of infringement there, specifically citing Defendant's facilities in Austin, including the Annapurna Labs location where the accused AWS Nitro System is allegedly designed, tested, and built.
- Core Dispute: Plaintiff alleges that Defendant’s AWS Nitro System infringes five patents related to multi-core processor management, resource allocation, and data processing architectures.
- Technical Context: The technology concerns methods for efficiently and dynamically managing hardware resources in multi-core and multi-processor computing environments, a field critical to the performance and cost-effectiveness of modern cloud data centers.
- Key Procedural History: The '353 patent was the subject of an inter partes review, which concluded with a certificate issued on February 8, 2024, canceling claims 1 and 2 and confirming the patentability of claims 8-14 and 21-24. The complaint also notes that the inventor and Plaintiff received industry recognition beginning in 2012, including invitations to present the technology at high-performance and cloud computing conferences and to publish articles, indicating that the technology was disclosed publicly in technical forums years before the suit was filed.
Case Timeline
| Date | Event | 
|---|---|
| 2011-09-27 | Earliest Priority Date for U.S. Patent No. 8,561,078 | 
| 2011-11-04 | Earliest Priority Date for U.S. Patent No. 8,789,065 | 
| 2011-11-04 | Earliest Priority Date for U.S. Patent No. 10,133,599 | 
| 2011-11-04 | Earliest Priority Date for U.S. Patent No. 10,310,902 | 
| 2012-10-23 | Earliest Priority Date for U.S. Patent No. 10,318,353 | 
| 2013-10-15 | U.S. Patent No. 8,561,078 Issued | 
| 2014-07-22 | U.S. Patent No. 8,789,065 Issued | 
| 2018-11-20 | U.S. Patent No. 10,133,599 Issued | 
| 2019-06-04 | U.S. Patent No. 10,310,902 Issued | 
| 2019-06-11 | U.S. Patent No. 10,318,353 Issued | 
| 2024-02-08 | Inter Partes Review Certificate Issued for '353 Patent | 
| 2025-10-07 | Complaint Filed | 
II. Technology and Patent(s)-in-Suit Analysis
U.S. Patent No. 8,561,078 - "Task Switching and Inter-Task Communications for Multi-Core Processors" (Issued Oct. 15, 2013)
The Invention Explained
- Problem Addressed: The complaint describes a fundamental tension in computing architecture between maximizing the processing speed of a single, intensive program and efficiently sharing a pool of hardware resources among many different programs (Compl. ¶¶ 20-22). Historically, optimizing for one of these goals often compromised the other, leading to inefficient resource utilization or slower individual application performance (Compl. ¶23).
- The Patented Solution: The invention proposes a system with hardware-based resource management to resolve this tension (Compl. ¶27). As described in the patent, the solution involves a hardware controller that repeatedly assigns tasks to an array of processing cores, and a hardware "cross-connect" that manages memory access between the cores and task-specific memory segments using configurable multiplexers (’078 Patent, Abstract, col. 1:21-38). This hardware-level management allows for dynamic sharing of processor resources among multiple applications while enabling accelerated processing speeds (Compl. ¶27).
- Technical Importance: This approach sought to provide a new parallel computing architecture that could simultaneously increase execution speed and improve resource utilization, making high-performance cloud computing more cost-effective and feasible for a broader range of applications (Compl. ¶¶ 26, 28).
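The controller/cross-connect architecture described above can be illustrated with a minimal software analogy. This is a hypothetical sketch (all names invented), not the patented hardware implementation: a controller function repeatedly assigns cores to tasks, and a "cross-connect" table records which task-specific memory segment each core is connected to.

```python
# Hypothetical sketch of the '078 architecture described in the complaint;
# a software stand-in, not the claimed hardware logic modules.

def assign_cores(tasks, num_cores):
    """Round-robin assignment of processing cores to tasks, standing in for
    the hardware controller's repeated allocation decisions."""
    return {task: i % num_cores for i, task in enumerate(tasks)}

def build_cross_connect(assignment, task_segments):
    """Core-specific 'multiplexer' table: for each core, select the memory
    segment of the task it is currently assigned to."""
    return {core: task_segments[task] for task, core in assignment.items()}

tasks = ["app1.task_a", "app1.task_b", "app2.task_a"]
segments = {t: f"segment_{i}" for i, t in enumerate(tasks)}

assignment = assign_cores(tasks, num_cores=4)
cross_connect = build_cross_connect(assignment, segments)
print(assignment)
print(cross_connect)
```

In the patent's terms, re-running `assign_cores` as tasks arrive and complete corresponds to the controller "repeatedly assigning" cores, while rebuilding the table corresponds to reconfiguring the multiplexers.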
Key Claims at a Glance
- The complaint asserts independent claim 1 (Compl. ¶49).
- The essential elements of claim 1 include:
  - A data processing system comprising an array of processing cores for processing a set of software programs.
  - A hardware logic module, referred to as a controller, for repeatedly assigning individual processing cores to process individual tasks of each software program.
  - A memory providing task-specific memory segments.
  - A hardware logic module, referred to as a cross-connect, for connecting the cores and memory segments, wherein the controller configures either a task-specific multiplexer to connect a core to a memory segment or a core-specific multiplexer to connect a memory segment to a core.
 
- The complaint reserves the right to assert additional claims (Compl. ¶74).
U.S. Patent No. 8,789,065 - "System and Method for Input Data Load Adaptive Parallel Processing" (Issued Jul. 22, 2014)
The Invention Explained
- Problem Addressed: As of 2011, there was a need for a system capable of supporting a large number of concurrent applications on a shared pool of parallel processors, which required a new architecture to balance speed and resource utilization (Compl. ¶26).
- The Patented Solution: The invention is a system for "input data load adaptive processing" (’065 Patent, Title). It employs a series of logic subsystems that work in concert: one subsystem dynamically demultiplexes incoming data packets into program-specific buffers; another monitors the volume of data in those buffers; and based on that monitored volume, another subsystem periodically assigns processor cores to the program instances where the data has accumulated (’065 Patent, Claim 1; Compl. ¶91). This creates a feedback loop where processing power is allocated based on real-time data load.
- Technical Importance: This load-adaptive approach allows a shared computing system to dynamically allocate its resources to where they are most needed, improving overall efficiency and throughput by reacting to unpredictable changes in workload from multiple applications (Compl. ¶27).
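The load-adaptive feedback loop described above can be sketched in a few lines. This is a hypothetical software analogy for the claimed hardware subsystems (the proportional-share policy is an assumption for illustration, not taken from the patent): cores are periodically re-allocated to program instances in proportion to the monitored volume of data waiting in each instance's buffer.

```python
# Hypothetical sketch of '065-style load-adaptive core allocation;
# the proportional policy is illustrative, not the patented method.

def allocate_cores(buffer_volumes, num_cores):
    """Allocate cores to program instances in proportion to each instance's
    monitored buffer volume, distributing leftovers by largest remainder."""
    total = sum(buffer_volumes.values())
    if total == 0:
        return {prog: 0 for prog in buffer_volumes}
    shares = {p: (v * num_cores) // total for p, v in buffer_volumes.items()}
    leftover = num_cores - sum(shares.values())
    by_remainder = sorted(buffer_volumes,
                          key=lambda p: (buffer_volumes[p] * num_cores) % total,
                          reverse=True)
    for p in by_remainder[:leftover]:
        shares[p] += 1
    return shares

volumes = {"inst_A": 600, "inst_B": 300, "inst_C": 100}
print(allocate_cores(volumes, num_cores=8))
```

Re-invoking `allocate_cores` on fresh volume readings each period models the claimed feedback loop: processing power follows wherever data has accumulated.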
Key Claims at a Glance
- The complaint asserts independent claim 1 (Compl. ¶90).
- The essential elements of claim 1 include:
  - A system comprising a collection of hardware input data ports and an array of hardware buffers, where each buffer is specific to an individual destination program instance.
  - A logic subsystem for dynamically demultiplexing input data packets from the ports to the destination-specific buffers based on overhead information in the packet.
  - A logic subsystem for monitoring volumes of data packets at the buffers and periodically assigning processing cores to program instances based at least in part on the monitored volumes.
  - A logic subsystem for multiplexing data packets from the buffers to the assigned processor cores.
  - A "wherein" clause describing a method for inserting a destination program identification when the system is part of a multi-stage group, to provide isolation.
 
- The complaint reserves the right to assert additional claims (Compl. ¶118).
U.S. Patent No. 10,318,353 - "Concurrent Program Execution Optimization" (Issued June 11, 2019)
Technology Synopsis
The patent describes a system for processing a set of computer program instances across a plurality of processing stages. Each task of a program is hosted at a different stage, and the system uses a group of hardware multiplexers to connect inter-task communications (ITC) data to the appropriate stage and processing core. The '353 patent was subject to an Inter Partes Review, with a certificate issued on February 8, 2024, canceling claims 1 and 2 but confirming the patentability of claims 8-14 and 21-24 ('353 Patent, IPR Certificate).
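The staged architecture described above can be illustrated with a minimal pipeline sketch. This is hypothetical code (invented names, not the patented system): each task of a program runs at a different stage, and each stage's output is threaded onward to the next as inter-task communications (ITC) data.

```python
# Hypothetical sketch of a '353-style multi-stage pipeline; the stage
# functions mirror the operations the complaint attributes to Nitro Cards.

def run_pipeline(packet, stages):
    """Pass a packet through an ordered list of per-stage task functions,
    feeding each stage's output into the next as ITC data."""
    data = packet
    for stage in stages:
        data = stage(data)
    return data

stages = [
    lambda p: {**p, "firewall": "pass"},      # stage 1: firewall evaluation
    lambda p: {**p, "vpc": "routed"},         # stage 2: VPC functions
    lambda p: {**p, "rate_limited": False},   # stage 3: rate limiting
]
result = run_pipeline({"flow": "f1"}, stages)
print(result)
```

In the patent's framing, the routing of `data` from one stage to the next is what the claimed group of hardware multiplexers performs for ITC data.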
Asserted Claims
The complaint asserts claim 3 (dependent on canceled claim 1), which may be subject to amendment or clarification (Compl. ¶135).
Accused Features
The complaint alleges that the pipelined packet processing operations of the Nitro Cards, where different tasks (e.g., firewall evaluation, VPC functions, rate limiting) are performed in sequence, constitute the claimed "plurality of processing stages" (Compl. ¶¶ 139-140). The on-chip SoC network is alleged to be the "group of multiplexers" for ITC data (Compl. ¶147).
U.S. Patent No. 10,133,599 - "Application Load Adaptive Multi-stage Parallel Data Processing Architecture" (Issued Nov. 20, 2018)
Technology Synopsis
This patent discloses a system for dynamic resource management of a pool of processing resources. It comprises a first subsystem to allocate processing units based on demand and quotas, a second subsystem to select high-priority program instances, and a third subsystem to assign those instances to the allocated units, including prioritizing placement on specific types of processing units.
Asserted Claims
The complaint asserts independent claim 1 (Compl. ¶184).
Accused Features
The infringement allegation targets the Nitro Card's use of different types of processing units (e.g., Nitro cores, ASICs, encryption engines) and its system for managing resources among multiple packet flows based on allowances, priorities, and a credit-based system (Compl. ¶¶ 187, 189, 192).
U.S. Patent No. 10,310,902 - "System and Method for Input Data Load Adaptive Parallel Processing" (Issued June 4, 2019)
Technology Synopsis
This patent, related to the '065 patent, describes a system that allocates an array of cores among application programs based on the volume of data in input buffers and processing quotas. A second subsystem assigns each allocated core to a different program instance, and a third subsystem establishes direct data access from an input buffer to its assigned core.
Asserted Claims
The complaint asserts independent claim 1 (Compl. ¶225).
Accused Features
The accused features include the Nitro Card's array of Nitro cores, its use of an SoC system-level cache segmented into input buffers for different packet flows, and its controller hardware that allocates and assigns cores to process data from those buffers (Compl. ¶¶ 229, 232, 234).
III. The Accused Instrumentality
Product Identification
The complaint identifies the "AWS Nitro System," and specifically the "Nitro Cards" within that system, as the accused instrumentalities (Compl. ¶37).
Functionality and Market Context
The complaint alleges the AWS Nitro System is a foundational technology for Amazon's EC2 instances, designed to offload virtualization, storage, and networking tasks from a server's main CPU to dedicated hardware (Compl. ¶17, ¶40). The accused Nitro Cards are described as hardware components containing a system-on-a-chip (SoC) with an array of "Nitro cores" (alleged to be Arm cores) and other specialized circuits (ASICs) that operate independently from the host system board (Compl. ¶37, ¶39, ¶42). The complaint alleges these cards process inbound and outbound data packets in a pipeline, performing functions like evaluating firewall rules, applying rate limits, and managing network traffic using multiple queues (Compl. ¶40, ¶41). A diagram cited in the complaint depicts this functionality, showing how "flow hashes" are used to assign data to a specific "queue" and "Nitro processor" (Compl. ¶54, citing Ex. 18 at 54). The complaint positions the Nitro System as "essential to every AWS server," enabling faster innovation and improved performance for customers (Compl. ¶17).
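The flow-hashing pattern the complaint describes can be sketched as follows. This is hypothetical illustrative code, not AWS's implementation: hashing a packet's 5-tuple deterministically pins every packet of a flow to the same queue, and each queue is serviced by a particular processor.

```python
# Hypothetical sketch of 5-tuple flow hashing to a queue index;
# illustrative only, not the accused Nitro hardware.
import hashlib

def flow_queue(src_ip, dst_ip, src_port, dst_port, proto, num_queues):
    """Deterministically map a flow's 5-tuple to a queue index."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % num_queues

# Every packet of the same flow hashes to the same queue.
q1 = flow_queue("10.0.0.1", "10.0.0.2", 443, 51515, "tcp", num_queues=8)
q2 = flow_queue("10.0.0.1", "10.0.0.2", 443, 51515, "tcp", num_queues=8)
assert q1 == q2
```

This determinism is what makes a queue flow-specific, which is the property the infringement allegations map onto the claimed destination-specific buffers.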
IV. Analysis of Infringement Allegations
U.S. Patent No. 8,561,078 Infringement Allegations
| Claim Element (from Independent Claim 1) | Alleged Infringing Functionality | Complaint Citation | Patent Citation | 
|---|---|---|---|
| A data processing system comprising: an array of processing cores for processing a set of software programs... | The accused Nitro Card contains an array of "Nitro cores" that handle data packet processing, executing specific instructions (programs) composed of a series of tasks. | ¶54 | col. 1:21-23 | 
| a hardware logic module, referred to as a controller, for repeatedly assigning individual processing cores of the array of processing cores to process individual tasks of each software program... | Specialized hardware on the Nitro Card processes packet headers to direct data packets to specific flow-specific buffers. Controller hardware polls these buffers and connects the packet to a Nitro core to process the tasks, a process that repeats as new packet flows initiate and old ones terminate. | ¶56 | col. 1:24-27 | 
| a memory providing task-specific memory segments | The SoC system-level cache on the Nitro Card is segmented into input buffers, with each buffer receiving a discrete packet flow that requires a specific type of processing, making each buffer specific to a particular processing task. | ¶58 | col. 1:27-29 | 
| a hardware logic module, referred to as a cross-connect, for connecting the array of processing cores and the task-specific memory segments... wherein the controller configures at least one of the following: (a) at least one given task-specific multiplexer... or (b) at least one given core-specific multiplexer... | The hardware logic in the SoC network connects the input buffers (task-specific memory segments) to the Nitro cores (array of processing cores). The SoC network hardware performs a multiplexing operation, connecting multiple input buffers to a particular processing core. | ¶¶ 60-61 | col. 1:29-38 | 
Identified Points of Contention
- Scope Questions: A central question may be whether a "packet processing program" dealing with data flows, as alleged in the complaint (Compl. ¶54), falls within the scope of a "software program" comprising a "list of tasks" as contemplated by the patent. Further, does the Nitro Card's hardware for routing packets based on 5-tuple headers (Compl. ¶56) perform the function of the claimed "controller" that "repeatedly assign[s] individual processing cores"?
- Technical Questions: The infringement theory relies on the SoC network hardware functioning as both the "cross-connect" and containing the configurable "multiplexers" (Compl. ¶¶ 60-61). A key technical dispute may arise over whether the architecture of the SoC network actually performs the specific selective connections required by sub-elements (a) and (b) of the claim, or if its function is technically distinct.
U.S. Patent No. 8,789,065 Infringement Allegations
| Claim Element (from Independent Claim 1) | Alleged Infringing Functionality | Complaint Citation | Patent Citation | 
|---|---|---|---|
| A system for input data load adaptive processing... comprising: a collection of hardware input data ports of the processor, where each port of the collection is shared dynamically among data packets... | The Nitro Card has multiple hardware inputs (PCIe and Ethernet interfaces) that receive inbound data packets, which are shared dynamically among different packet flows. | ¶98 | col. 4:45-48 | 
| an array of hardware buffers, where each buffer of the array is specific to an individual destination program instance... | The SoC system-level cache is segmented into input buffers (queues), with each queue being specific to a particular packet flow and its corresponding processing program. A diagram cited by the complaint shows "Queue Assignment" leading to processor assignment (Compl. ¶100, citing Ex. 18 at 14). | ¶100 | col. 4:49-51 | 
| a logic subsystem for dynamically, at individual packet granularity, demultiplexing input data packets from said input ports to said destination program instance specific buffers... | Specialized hardware on the Nitro Card executes a "five tuple hash" on packet headers to distribute individual data packets into flow-specific (and thus program instance specific) input buffers. | ¶102 | col. 4:45-53 | 
| a logic subsystem for monitoring volumes of data packets at the program instance specific buffers and for periodically assigning processing cores... based at least in part on the respective monitored volumes... | Controller hardware repeatedly polls the input buffers. This controller applies congestion control and fair queuing algorithms, which are based on the state of the queues (monitored volumes), to assign Nitro cores to handle the packet flows. | ¶104 | col. 4:55-60 | 
| a logic subsystem for multiplexing data packets dynamically from the destination program instance specific buffers to the processor cores... | The SoC network includes multiplexers that route data packets from the various input buffers (queues) to the appropriate Nitro processor cores, creating a many-queues-to-one-core connection managed by SoC control hardware. | ¶106 | col. 4:51-55 | 
| wherein, in case the system is a part of a multi-stage group... inserts an identification of a destination program for a data packet passed from the given processor to other processors... | Multiple Nitro Cards are used in a data center network. Data passed between cards (e.g., metadata) indicates what processing is to be performed by the receiving card (the destination program), which allegedly provides isolation between different programs. | ¶108 | col. 5:18-25 | 
Identified Points of Contention
- Scope Questions: A likely point of contention is whether the process of executing a "five tuple hash" on a packet header, as alleged for the Nitro Card (Compl. ¶102), meets the claim requirement of demultiplexing "based on a destination program instance indication." The defense could argue a hash is a calculated distribution mechanism, not an "indication" read from the packet's overhead.
- Technical Questions: The complaint alleges that the controller hardware "monitors volumes" by polling buffers and applying queuing algorithms (Compl. ¶104). The court may need to determine if this alleged functionality is technically equivalent to the "monitoring" required by the claim, and whether the subsequent assignment of cores is "based on" this monitoring in the specific manner the patent describes.
V. Key Claim Terms for Construction
U.S. Patent No. 8,561,078
- The Term: "task-specific memory segment"
- Context and Importance: This term is critical because the plaintiff's infringement read maps it directly onto the "input buffers" within the Nitro Card's SoC system-level cache (Compl. ¶58). The viability of the infringement case for this patent may depend on whether a hardware queue for packet flows can be considered a "task-specific memory segment."
- Intrinsic Evidence for Interpretation:
  - Evidence for a Broader Interpretation: The claim language itself is general, referring to "a memory providing" such segments. This could support a reading that encompasses any partitioned area of memory, including hardware-managed queues, that is dedicated to a specific function.
  - Evidence for a Narrower Interpretation: The specification states that "the task-specific memory segments are configured to store the processing contexts of their respective tasks" (’078 Patent, col. 8:51-53). A defendant may argue this requires storing more than just incoming data packets, such as CPU register states or program counters, which may not be present in the alleged input buffers.
 
U.S. Patent No. 8,789,065
- The Term: "monitoring volumes of data packets"
- Context and Importance: The infringement allegation for this element relies on the Nitro Card's controller polling buffers and applying "congestion control and fair queuing algorithms" (Compl. ¶104). Practitioners may focus on whether this operational behavior satisfies the claim's requirement of "monitoring volumes."
- Intrinsic Evidence for Interpretation:
  - Evidence for a Broader Interpretation: The claim requires assigning cores "at least in part based on the respective monitored volumes" (’065 Patent, Claim 1). This flexible "at least in part" language may support an argument that any system aware of queue depth or congestion (which is inherently volume-related) and which uses that information in its assignment logic meets the claim.
  - Evidence for a Narrower Interpretation: A defendant may argue that "monitoring volumes" implies a more direct and quantitative measurement (e.g., counting bytes or packets) that is then used as a primary input for the assignment logic, rather than the more indirect inference from the state of a fair queuing algorithm.
 
VI. Other Allegations
- Indirect Infringement: The complaint alleges induced infringement for all five patents, stating that AWS knowingly encourages and instructs its customers, developers, and users to use the AWS Nitro System in an infringing manner. This inducement is alleged to occur through materials such as "manuals, white papers, and trainings" (Compl. ¶¶ 78, 122, 172, 213, 255).
- Willful Infringement: The complaint alleges willful infringement for all patents, based on Defendant having actual notice and knowledge of the patents "by no later than the filing of this Complaint" (Compl. ¶¶ 64, 109, 158, 199, 241). It also includes a pre-suit allegation that Defendant was "willfully blind to the existence of the... patent" (Compl. ¶¶ 66, 111, 160, 201, 243).
VII. Analyst’s Conclusion: Key Questions for the Case
- A core issue will be one of definitional scope: can patent terms rooted in software execution concepts, such as "program instance" and "task", be construed to cover the hardware-centric constructs of "packet flows" and "queues" that operate in the accused AWS Nitro data plane? The outcome of this question may determine whether the patents, which describe managing software tasks, can read on a system designed for accelerating network and storage I/O.
- A key evidentiary question will be one of functional equivalence: does the accused Nitro system’s alleged use of hardware controllers for polling queues and applying fair queuing algorithms perform the specific function of "monitoring volumes" of data and "periodically assigning" processing cores based on that volume, as required by claims in the '065 and '902 patents, or is there a fundamental mismatch in technical operation?
- A third central question relates to the multi-stage architecture claims of the '065 and '353 patents. The court will likely need to determine whether the sequential processing pipeline on a single Nitro Card, or the communication between two separate Nitro Cards across a network, constitutes the "multi-stage group" of processors envisioned by the patents, and whether metadata passed between them serves to provide the claimed "isolation."