ThroughPuter Inc v. Google LLC
1:25-cv-01625

I. Executive Summary and Procedural Information

  • Parties & Counsel: Plaintiff ThroughPuter Inc.; Defendant Google LLC

  • Case Identification: 1:25-cv-01625, W.D. Tex., 10/07/2025

  • Venue Allegations: Plaintiff alleges venue is proper because Defendant has a regular and established place of business in Austin, Texas, and has committed acts of infringement within the district.

  • Core Dispute: Plaintiff alleges that Defendant’s Titanium System, specifically its Infrastructure Processing Units used in Google Cloud Platform data centers, infringes five patents related to multi-core/manycore processor architectures and dynamic resource management.

  • Technical Context: The technology addresses the challenge in cloud and high-performance computing of efficiently allocating processor resources to simultaneously accelerate individual applications and maximize overall system utilization.

  • Key Procedural History: The complaint notes that the inventor presented the patented technology at multiple high-performance and cloud computing conferences between 2012 and 2015, suggesting early public engagement with the technical concepts. Notably, an Inter Partes Review (IPR) was concluded for U.S. Patent No. 10,318,353 (’353 Patent) in February 2024 (IPR2022-00574), resulting in the cancellation of claims 1 and 2. The complaint asserts claim 3 of the '353 patent, which is dependent on the now-cancelled claim 1, raising a significant question regarding the viability of this infringement count.

Case Timeline

Date Event
2010-01-01 Plaintiff's technology development began (approximate)
2011-09-27 Earliest Priority Date (’078 Patent)
2011-11-04 Earliest Priority Date (’065, ’599, ’902 Patents)
2013-08-23 Earliest Priority Date ('353 Patent)
2013-10-15 U.S. Patent No. 8,561,078 Issued
2014-07-22 U.S. Patent No. 8,789,065 Issued
2018-11-20 U.S. Patent No. 10,133,599 Issued
2019-06-04 U.S. Patent No. 10,310,902 Issued
2019-06-11 U.S. Patent No. 10,318,353 Issued
2024-02-08 IPR Certificate Issued for '353 Patent (cancelling claims 1-2)
2025-10-07 Complaint Filing Date

II. Technology and Patent(s)-in-Suit Analysis

U.S. Patent No. 8,561,078 - “Task Switching and Inter-Task Communications for Multi-Core Processors,” Issued October 15, 2013

The Invention Explained

  • Problem Addressed: The complaint describes a fundamental tension in computing between maximizing the processing speed of a single application, which might involve dedicating significant resources, and maximizing the utilization of a shared pool of hardware resources across many applications (Compl. ¶¶ 19-20). Pursuing one objective often compromises the other, leading to either wasted capacity or slower individual programs (Compl. ¶20).
  • The Patented Solution: The '078 Patent discloses a data processing system architecture that uses hardware logic modules to manage tasks in a multi-core environment ('078 Patent, Abstract). A "controller" module repeatedly assigns individual processing cores to specific software tasks, and a "cross-connect" module connects those cores to dedicated "task-specific memory segments" ('078 Patent, Abstract; col. 8:51-67). This hardware-based management allows for dynamic task switching and communication, aiming to improve both speed and resource utilization simultaneously (Compl. ¶23).
  • Technical Importance: This architecture provided a potential solution for cloud computing platforms needing to support numerous concurrent applications on shared hardware, enabling accelerated processing while optimizing resource use (Compl. ¶23).
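The controller/cross-connect arrangement described above can be modeled in a short, purely illustrative sketch. This is not drawn from the patent's disclosure or the accused implementation; the class and task names (e.g., `CrossConnect`, `fw-flow-a`) are hypothetical, and the code only mirrors the claim's functional roles: a controller that repeatedly assigns cores to tasks and configures multiplexer-like links from task-specific memory segments to the assigned cores.

```python
# Illustrative model (hypothetical names, not the patent's implementation)
# of the '078 patent's claimed roles: a "controller" repeatedly assigns
# cores to tasks, and a "cross-connect" links each task's memory segment
# to the core currently assigned to that task.

class CrossConnect:
    """Models task-specific multiplexers: one mux per memory segment,
    each configurable to connect its segment to any core."""
    def __init__(self):
        self.links = {}          # task id -> core id (mux select state)

    def configure_mux(self, task, core):
        self.links[task] = core  # controller writes the mux select

class Controller:
    def __init__(self, num_cores, cross_connect):
        self.cores = list(range(num_cores))
        self.xc = cross_connect
        self.assignments = {}    # core id -> task id

    def assign(self, ready_tasks):
        """Invoked repeatedly: map ready tasks onto cores and set up
        each task's path to its task-specific memory segment."""
        self.assignments = {}
        for core, task in zip(self.cores, ready_tasks):
            self.assignments[core] = task
            self.xc.configure_mux(task, core)
        return self.assignments

xc = CrossConnect()
ctrl = Controller(num_cores=4, cross_connect=xc)
ctrl.assign(["fw-flow-a", "fw-flow-b"])
# Each task's memory segment is now multiplexed to its assigned core.
```

The sketch makes the claim-construction dispute concrete: here the per-task mux state is an explicit, dedicated structure, whereas the accused coherent mesh network routes traffic without (allegedly) maintaining such task-specific selects.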

Key Claims at a Glance

  • The complaint asserts infringement of at least independent claim 1 (Compl. ¶47).
  • The essential elements of Claim 1 are:
    • An array of processing cores for processing a set of software programs.
    • A hardware logic module, referred to as a controller, for repeatedly assigning individual processing cores to process individual tasks.
    • A memory providing task-specific memory segments.
    • A hardware logic module, referred to as a cross-connect, for connecting the cores and memory segments.
    • The controller is further defined as configuring either (a) a task-specific multiplexer within the cross-connect to link a core to a memory segment for the same task, or (b) a core-specific multiplexer to link a memory segment to a specific core.
  • The complaint does not explicitly reserve the right to assert dependent claims for this patent.

U.S. Patent No. 8,789,065 - “System and Method for Input Data Load Adaptive Parallel Processing,” Issued July 22, 2014

The Invention Explained

  • Problem Addressed: In shared, multi-core processing environments, the volume and type of incoming data for different applications can fluctuate unpredictably. A static allocation of processing resources can lead to inefficiencies, where some cores are overwhelmed while others are idle, creating bottlenecks and performance degradation (Compl. ¶¶ 20, 92).
  • The Patented Solution: The '065 patent describes a system that adapts to input data load in real-time ('065 Patent, Abstract). The architecture uses a logic subsystem to dynamically demultiplex incoming data packets from shared input ports into program-specific hardware buffers based on packet metadata ('065 Patent, col. 4:45-54). A separate logic subsystem monitors the volume of data in these buffers and periodically re-assigns processing cores to different program instances based on that monitored load ('065 Patent, col. 5:1-9).
  • Technical Importance: This load-adaptive approach enables a processing system to dynamically allocate its computing power where it is most needed, improving throughput and responsiveness in environments with variable workloads, such as cloud data centers (Compl. ¶25).
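The two core functions of the '065 solution, metadata-driven demultiplexing into per-instance buffers and load-proportional core assignment, can be sketched as follows. This is an illustrative simplification under assumed names (`demultiplex`, `assign_cores`, the `"dest"` metadata field), not the patent's or the accused product's actual logic.

```python
from collections import defaultdict

# Illustrative sketch (hypothetical, not the patent's implementation) of
# the '065 patent's two key functions: (1) demultiplex packets into
# program-instance-specific buffers using packet overhead (metadata), and
# (2) periodically assign cores based on monitored buffer volumes.

def demultiplex(packets, buffers):
    """Route each packet to the buffer for its destination program
    instance, identified here by the 'dest' field of its metadata."""
    for pkt in packets:
        buffers[pkt["dest"]].append(pkt)

def assign_cores(buffers, num_cores):
    """Give each program instance cores roughly proportional to its
    buffered load (at least one core per non-empty buffer)."""
    loads = {dest: len(q) for dest, q in buffers.items() if q}
    total = sum(loads.values())
    return {dest: max(1, round(num_cores * n / total))
            for dest, n in loads.items()}

buffers = defaultdict(list)
demultiplex([{"dest": "vswitch"}] * 6 + [{"dest": "crypto"}] * 2, buffers)
print(assign_cores(buffers, num_cores=8))  # e.g. {'vswitch': 6, 'crypto': 2}
```

Note that in this sketch core assignment is driven directly by monitored volumes; the infringement dispute (Section IV) is whether the accused QoS/traffic-shaping logic does the same or merely performs congestion control.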

Key Claims at a Glance

  • The complaint asserts infringement of at least independent claim 1 (Compl. ¶88).
  • The essential elements of Claim 1 are:
    • A collection of shared hardware input data ports.
    • An array of hardware buffers, each specific to an individual destination program instance.
    • A logic subsystem for dynamically demultiplexing input data packets from the ports to the program-specific buffers based on overhead information.
    • A logic subsystem for monitoring data volumes at the buffers and periodically assigning processing cores based at least in part on the monitored volumes.
    • A logic subsystem for multiplexing data packets from the buffers to the cores assigned to that program instance.
    • A concluding "wherein" clause describing behavior when the system is part of a multi-stage group of processors.
  • The complaint does not explicitly reserve the right to assert dependent claims for this patent.

Multi-Patent Capsule: U.S. Patent No. 10,318,353

  • Patent Identification: U.S. Patent No. 10,318,353, “Concurrent Program Execution Optimization,” Issued June 11, 2019.
  • Technology Synopsis: This patent relates to systems for processing computer programs that are broken down into a plurality of tasks, with each task being hosted at a different "processing stage." The invention uses a group of multiplexers to connect data between these stages, with at least one multiplexer being a hardware resource dedicated to a local task, enabling efficient pipelined execution (Compl. ¶133).
  • Asserted Claims: Claim 3 (dependent on claim 1) (Compl. ¶133).
  • Accused Features: The complaint alleges that the pipelined processing performed by the N1 core array in the Accused TOP IPUs, particularly in applications like vSwitch, constitutes a "plurality of processing stages." The coherent mesh network is alleged to function as the claimed "group of multiplexers" connecting inter-task communication data between these stages (Compl. ¶¶ 137, 143, 145).

Multi-Patent Capsule: U.S. Patent No. 10,133,599

  • Patent Identification: U.S. Patent No. 10,133,599, “Application Load Adaptive Multi-Stage Parallel Data Processing Architecture,” Issued November 20, 2018.
  • Technology Synopsis: This patent describes a system for dynamic resource management comprising three subsystems. The first allocates processing units of different types (e.g., general-purpose cores, specialized engines) based on application demand and resource quotas. The second selects high-priority program instances, and the third assigns those instances to the allocated processing units, prioritizing placement based on the type of processing unit demanded ('599 Patent, Abstract).
  • Asserted Claims: Claim 1 (Compl. ¶181).
  • Accused Features: The complaint alleges the Accused TOP IPU infringes by using a first subsystem to allocate different types of processing units (N1 cores and encryption engines) based on demand (Compl. ¶185). The traffic shaper is identified as the second subsystem that selects high-priority instances based on QoS requirements, and the packet processing hardware is identified as the third subsystem that assigns those instances to either the N1 cores or the crypto engine (Compl. ¶¶ 187, 190).
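The '599 patent's three-subsystem flow can be sketched abstractly. The unit-type names below follow the complaint's mapping (N1 cores, crypto engines), but the allocation, selection, and placement logic is a hypothetical illustration of the abstract's description, not the patented or accused implementation.

```python
# Hypothetical sketch of the '599 patent's three-subsystem flow:
# (1) allocate typed processing units by demand and quota,
# (2) select high-priority program instances,
# (3) place instances on units of the type each instance demands.

def allocate_units(available, demand, quota):
    """Subsystem 1: per unit type, grant min(demand, quota, available)."""
    return {t: min(demand.get(t, 0), quota.get(t, 0), n)
            for t, n in available.items()}

def select_instances(instances):
    """Subsystem 2: order program instances by priority, highest first."""
    return sorted(instances, key=lambda i: i["priority"], reverse=True)

def place(instances, grants):
    """Subsystem 3: assign selected instances to granted units of the
    demanded type, while units of that type remain."""
    placed, remaining = [], dict(grants)
    for inst in select_instances(instances):
        t = inst["unit_type"]
        if remaining.get(t, 0) > 0:
            remaining[t] -= 1
            placed.append((inst["name"], t))
    return placed

grants = allocate_units(available={"n1_core": 8, "crypto_engine": 2},
                        demand={"n1_core": 5, "crypto_engine": 3},
                        quota={"n1_core": 6, "crypto_engine": 2})
# grants == {"n1_core": 5, "crypto_engine": 2}
```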

Multi-Patent Capsule: U.S. Patent No. 10,310,902

  • Patent Identification: U.S. Patent No. 10,310,902, “System and Method for Input Data Load Adaptive Parallel Processing,” Issued June 4, 2019.
  • Technology Synopsis: Similar to the '065 patent, this patent discloses a system for hosting applications on a processor array. It features subsystems that allocate cores based on data volume in input buffers and processing quotas, assign cores to specific program instances, and establish direct data access from the input buffers to the assigned core ('902 Patent, Abstract).
  • Asserted Claims: Claim 1 (Compl. ¶221).
  • Accused Features: The complaint alleges infringement through the Accused TOP IPU's use of a first subsystem (firmware updates) to allocate N1 cores based on data volume and QoS-related bandwidth allotments (processing quotas) (Compl. ¶¶ 227-228). A second subsystem (QoS and traffic shaping hardware) allegedly assigns cores to different instances, and a third subsystem (the coherent mesh network) establishes direct data access between input buffers and the assigned cores (Compl. ¶¶ 230, 235).

III. The Accused Instrumentality

Product Identification

  • The accused products are Google’s Titanium Offload Processors (TOPs), also referred to as Infrastructure Processing Units (IPUs), which are part of its “Titanium System” used in Google Cloud Platform (GCP) data centers (Compl. ¶¶ 31-32). The complaint specifically identifies the E2000, E2100, and E2200 systems-on-a-chip (SoCs) co-developed with Intel (Compl. ¶33).

Functionality and Market Context

  • The Accused TOP IPUs are described as a "scalable and flexible way to offload network and I/O processing from the host CPU" (Compl. ¶31). Their function is to handle networking tasks like data packet routing, freeing up the main server processor for other workloads (Compl. ¶31).
  • The architecture includes a "networking complex" with a programmable packet processing pipeline and hardware traffic shaper, and a "compute complex" containing an array of Arm N1 Neoverse CPU cores ("N1 cores") (Compl. ¶33).
  • Customer-provided software is loaded onto the N1 cores to perform complex packet-processing tasks (Compl. ¶35). Data is passed from the hardware pipeline to the software on the N1 cores via a System Level Cache (SLC) that is segmented into input buffers (Compl. ¶¶ 55, 96).
  • The complaint describes a diagram from an exhibit that shows the traffic shaper positioned between the packet processing pipeline and the system level cache, illustrating the data flow for handoff to software (Compl. ¶36).
  • The N1 cores are interconnected with each other and with the SLC via a coherent mesh network, such as the Arm CoreLink CMN-600 (Compl. ¶34).
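The data path the complaint describes, packet pipeline to traffic shaper to per-flow SLC input buffers to software on the N1 cores, can be sketched as a minimal model. This is a hypothetical rendering of the complaint's description, not Google's implementation; the flow names and the per-flow rate limit are invented for illustration.

```python
from collections import deque

# Minimal sketch (hypothetical) of the data path described in Section III:
# packet pipeline -> traffic shaper -> per-flow input buffers in the
# System Level Cache (SLC) -> handoff to software on the N1 cores.

class SLC:
    """System Level Cache segmented into per-flow input buffers."""
    def __init__(self, flows):
        self.buffers = {f: deque() for f in flows}

def pipeline_and_shape(packets, slc, rate_limit):
    """Pipeline classifies each packet by flow; the shaper admits at
    most rate_limit packets per flow per interval into the SLC."""
    admitted = {f: 0 for f in slc.buffers}
    for pkt in packets:
        flow = pkt["flow"]
        if admitted[flow] < rate_limit:
            slc.buffers[flow].append(pkt)
            admitted[flow] += 1

slc = SLC(flows=["flow-a", "flow-b"])
pipeline_and_shape([{"flow": "flow-a"}] * 5 + [{"flow": "flow-b"}] * 2,
                   slc, rate_limit=3)
# flow-a is shaped down to 3 buffered packets; flow-b keeps its 2.
```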

IV. Analysis of Infringement Allegations

'078 Patent Infringement Allegations

The following maps each element of independent claim 1 to the alleged infringing functionality, with complaint and patent citations:

  • "a data processing system comprising: an array of processing cores for processing a set of software programs..." — The Accused TOP IPU is a system-on-a-chip that includes an array of Arm N1 Neoverse CPUs ("N1 cores") which execute customer-provided software programs for packet processing. (Compl. ¶¶ 48-51; '078 Patent, col. 8:51-54)
  • "a hardware logic module, referred to as a controller, for repeatedly assigning individual processing cores... to process individual tasks..." — The packet processing pipeline and traffic shaper hardware act as a controller, assigning tasks to individual N1 cores by directing data packets for specific packet flows to core-specific input buffers. (Compl. ¶¶ 52-53; '078 Patent, col. 8:55-58)
  • "a memory providing task-specific memory segments; and" — The System Level Cache (SLC) is segmented into input buffers, with each buffer configured to hold data packets for a given packet flow requiring a specific processing task, thus serving as task-specific memory segments. (Compl. ¶¶ 54-55; '078 Patent, col. 8:59-60)
  • "a hardware logic module, referred to as a cross-connect, for connecting the array of processing cores and the task-specific memory segments..." — The on-chip coherent mesh network (e.g., Arm CMN-600) serves as the cross-connect, providing physical hardware connections between the N1 cores and the input buffers (memory segments) in the SLC. (Compl. ¶¶ 56-57; '078 Patent, col. 8:61-63)
  • "wherein the controller configures... (a) at least one given task-specific multiplexer... or (b) at least one given core-specific multiplexer..." — The coherent mesh network is alleged to perform multiplexing operations that map any input buffer to a respective N1 core, satisfying sub-element (a), and connect all input buffers to each core, satisfying sub-element (b). (Compl. ¶¶ 57-58; '078 Patent, col. 8:64-67)
  • Identified Points of Contention:
    • Scope Questions: A central question may be whether the term "cross-connect" as used in the patent can be construed to read on a general-purpose "coherent mesh network." The defense may argue that "cross-connect" implies a more specific switching fabric structure disclosed in the patent, whereas a mesh network provides a different topology.
    • Technical Questions: The infringement theory characterizes the routing and mapping functions of the coherent mesh network as performing the roles of both a "task-specific multiplexer" and a "core-specific multiplexer." A key technical question is whether the mesh network's operation constitutes these distinct multiplexing functions as required by the claim, or if it performs a more generalized routing function that does not map cleanly onto the claim's specific structural language.

'065 Patent Infringement Allegations

The following maps each element of independent claim 1 to the alleged infringing functionality, with complaint and patent citations:

  • "a collection of hardware input data ports of the processor, where each port... is shared dynamically among data packets..." — The Accused TOP IPU has multiple inputs, including PCIe lanes and Ethernet ports, that receive streams of data packets on a packet-by-packet basis. (Compl. ¶¶ 93-94; '065 Patent, col. 4:45-49)
  • "an array of hardware buffers, where each buffer of the array is specific to an individual destination program instance..." — The hardware system level cache is segmented into input buffers, with each buffer being specific to a distinct packet flow and its corresponding processing program (program instance). (Compl. ¶¶ 95-96; '065 Patent, col. 4:49-52)
  • "a logic subsystem for dynamically, at individual packet granularity, demultiplexing input data packets from said input ports to said destination program instance specific buffers based on a destination program instance indication..." — The packet processing pipeline hardware determines the required processing for each packet from its metadata and routes it to the corresponding destination buffer on an individual packet basis. (Compl. ¶¶ 97-98; '065 Patent, col. 4:53-59)
  • "a logic subsystem for monitoring volumes of data packets... and for periodically assigning processing cores... based at least in part on the respective monitored volumes..." — The QoS and traffic shaping hardware monitors the status of the buffer queues to manage congestion and assigns core bandwidth to packet flows based on these monitored conditions. (Compl. ¶¶ 99-101; '065 Patent, col. 5:1-9)
  • "a logic subsystem for multiplexing data packets dynamically from the destination program instance specific buffers to the processor cores..." — The coherent mesh network acts as a logic subsystem that controls and provides the multiplexed connections from the input buffers in the SLC to the N1 processor cores assigned to that program instance. (Compl. ¶¶ 102-103; '065 Patent, col. 5:10-18)
  • Identified Points of Contention:
    • Scope Questions: The claim requires three distinct "logic subsystems." The complaint maps these claims to different hardware blocks and functions within the Accused TOP IPU (packet pipeline, QoS/traffic shaper, coherent mesh network). A potential dispute will be whether these constitute distinct "subsystems" as contemplated by the patent, or if they are merely different functional aspects of a single, highly integrated packet processing engine.
    • Technical Questions: What evidence does the complaint provide that the "assigning [of] processing cores" is based "at least in part on the respective monitored volumes of packets," as the claim requires? The complaint alleges the QoS and traffic shaping logic does this to maintain QoS (Compl. ¶101), but the defense may argue that core assignment is governed by other factors and that queue monitoring is solely for congestion control, not resource allocation.

V. Key Claim Terms for Construction

  • Term: "cross-connect" (’078 Patent, Claim 1)

    • Context and Importance: Plaintiff alleges the accused "coherent mesh network" is a "cross-connect." The viability of the infringement allegation depends on whether this modern, packet-based interconnect falls within the scope of a term that may have been understood differently at the time of invention. Practitioners may focus on this term to determine if the claim requires a specific switch topology or can read on more general network-on-chip fabrics.
    • Intrinsic Evidence for a Broader Interpretation: The specification may describe the "cross-connect" in functional terms as any hardware for "connecting the array of processing cores and the task-specific memory segments" ('078 Patent, col. 8:61-63), which could support its application to various interconnect types.
    • Intrinsic Evidence for a Narrower Interpretation: The detailed description or figures of the '078 patent may depict the cross-connect as a specific structure, such as a crossbar switch or a particular arrangement of multiplexers, which could be used to argue for a narrower definition that excludes a mesh network.
  • Term: "logic subsystem" (’065 Patent, Claim 1)

    • Context and Importance: Claim 1 recites three distinct "logic subsystems," each performing a different function. The infringement case hinges on mapping these elements to different hardware blocks in the accused device. The defense will likely scrutinize whether these blocks are truly separate "subsystems" or merely functional descriptions of an integrated whole.
    • Intrinsic Evidence for a Broader Interpretation: The patent may use the term "logic subsystem" broadly to refer to any collection of hardware logic that performs a recited function, without requiring it to be a physically separate or standalone unit.
    • Intrinsic Evidence for a Narrower Interpretation: The specification's description or figures may depict the subsystems as structurally distinct blocks with defined interfaces, suggesting they must be more than just different functions performed by the same integrated hardware. The use of three separate limitations for three subsystems may itself imply structural differentiation.

VI. Other Allegations

  • Indirect Infringement: The complaint alleges inducement of infringement for all asserted patents. The factual basis for this allegation is that Google provides customers and developers with materials such as development kits, tutorials, manuals, white papers, and training on how to configure and use the Accused TOP IPU in an infringing manner (Compl. ¶¶ 75, 120).
  • Willful Infringement: Willfulness is alleged for all patents. The allegations are primarily based on knowledge of the patents as of the filing of the complaint, supporting a claim for post-suit willfulness (Compl. ¶¶ 61-62, 107-108). The complaint also makes a conclusory allegation of pre-suit willful blindness (Compl. ¶¶ 63, 109).

VII. Analyst’s Conclusion: Key Questions for the Case

  • A core issue will be one of structural scope: can claim terms rooted in specific hardware architectures, such as "cross-connect" and distinct "logic subsystems," be construed to cover the functions of a modern, highly integrated, and multi-purpose hardware block like a coherent mesh network or a unified packet processing engine? The case may turn on whether the infringement allegations depend on a functional mapping that is inconsistent with the structural requirements of the claims.
  • A key evidentiary question will be one of functional operation: does the accused product’s QoS and traffic shaping logic "periodically assign processing cores" based on "monitored volumes of packets" as required by Claim 1 of the '065 patent, or is its function limited to packet-level scheduling and congestion control, with core allocation being managed by a different mechanism and based on different criteria?
  • A threshold legal question will be the viability of the '353 patent claim: given that independent claim 1 of the '353 patent was cancelled in an Inter Partes Review, the court must decide whether Plaintiff can maintain an infringement action on dependent claim 3. This presents a significant legal challenge to that portion of the lawsuit.