1:25-cv-00587
Onesta IP LLC v. Qualcomm Inc.
I. Executive Summary and Procedural Information
- Parties & Counsel:
  - Plaintiff: Onesta IP, LLC (Delaware)
  - Defendants: Qualcomm Inc. (Delaware); Nothing Technology Ltd. (United Kingdom); OnePlus Technology (Shenzhen) Co., Ltd. (China)
  - Plaintiff’s Counsel: Mintz Levin Cohn Ferris Glovsky and Popeo PC
 
- Case Identification: 1:25-cv-00587, W.D. Tex., 04/17/2025
- Venue Allegations: Venue for Qualcomm is based on its alleged regular and established places of business within the Western District of Texas. Venue for foreign defendants Nothing Technology and OnePlus is asserted on the basis that suits against foreign entities are proper in any judicial district where they are subject to personal jurisdiction.
- Core Dispute: Plaintiff alleges that Defendants’ mobile system-on-chip (SoC) processors, and the smartphones incorporating them, infringe five patents related to processor architecture, task scheduling, and memory management.
- Technical Context: The patents address fundamental technologies for managing graphics and general-purpose computing tasks, memory allocation, and processor power efficiency in complex System-on-Chip (SoC) architectures, which are critical for modern mobile devices.
- Key Procedural History: The complaint does not allege any significant procedural history, such as prior litigation or administrative challenges involving the asserted patents.
Case Timeline
| Date | Event | 
|---|---|
| 2006-06-30 | ’350 Patent Priority Date (Application Filing) | 
| 2010-05-18 | ’350 Patent Issue Date | 
| 2010-09-01 | ’381 Patent Priority Date (Application Filing) | 
| 2011-06-29 | ’943 Patent Priority Date (Application Filing) | 
| 2012-03-29 | ’809 and ’019 Patents Priority Date (Provisional Filing) | 
| 2014-10-07 | ’381 Patent Issue Date | 
| 2015-08-25 | ’809 Patent Issue Date | 
| 2016-12-13 | ’943 Patent Issue Date | 
| 2023-08-29 | ’019 Patent Issue Date | 
| 2025-04-17 | Complaint Filing Date | 
II. Technology and Patent(s)-in-Suit Analysis
U.S. Patent No. 8,854,381 - "Processing Unit That Enables Asynchronous Task Dispatch"
- Patent Identification: U.S. Patent No. 8,854,381, “Processing Unit That Enables Asynchronous Task Dispatch,” issued October 7, 2014. (Compl. ¶11).
The Invention Explained
- Problem Addressed: The patent’s background section describes the inefficiency of conventional Graphics Processing Units (GPUs) that process tasks serially from a single command buffer. It notes that handling a high-priority task requires a time-consuming "context switch," which swaps out the state data of a current task, limiting performance and responsiveness. (’381 Patent, col. 1:59-2:51).
- The Patented Solution: The invention proposes a processing unit architecture with multiple "virtual engines" that can receive different types of tasks (e.g., low-latency and standard-priority) from the operating system in parallel. These tasks are then executed concurrently on a single, shared "shader core" that is configured to manage the state data for multiple tasks simultaneously, thereby avoiding the need for traditional context switching. (’381 Patent, Abstract; col. 3:1-13). A conceptual sketch of this dispatch model appears after this list.
- Technical Importance: This architecture was designed to allow a GPU to handle mixed workloads more efficiently, such as running time-sensitive, general-purpose compute tasks alongside traditional, high-throughput graphics rendering. (’381 Patent, col. 2:52-63).
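For orientation, the following is a minimal, hypothetical sketch (in Python) of the dispatch model the ’381 Patent describes: parallel virtual engines that load per-task state, and a single shared core that keeps state for multiple tasks resident while advancing them concurrently. The class names, task structure, and interleaved execution model are illustrative assumptions; the sketch does not represent the Adreno architecture or the patent’s preferred embodiment.

```python
# Conceptual sketch only: an illustrative model of the architecture the '381
# Patent describes (virtual engines feeding one shared shader core that holds
# state for multiple in-flight tasks). Names are hypothetical.
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Task:
    name: str
    kind: str                                   # "graphics" or "general-compute"
    state: dict = field(default_factory=dict)   # per-task state data
    remaining_steps: int = 3

class SharedShaderCore:
    """Keeps state for several tasks resident at once and advances them
    concurrently (modeled here as interleaving), instead of swapping state
    in and out via a context switch."""
    def __init__(self):
        self.resident_tasks = []

    def accept(self, task: Task):
        self.resident_tasks.append(task)        # task state stays resident

    def run(self):
        while any(t.remaining_steps for t in self.resident_tasks):
            for t in self.resident_tasks:
                if t.remaining_steps:
                    t.remaining_steps -= 1
                    print(f"core advances {t.kind} task {t.name}")

class VirtualEngine:
    """Receives tasks of one priority class and loads their state data."""
    def __init__(self, label: str, core: SharedShaderCore):
        self.label, self.core, self.queue = label, core, deque()

    def receive(self, task: Task):
        task.state["loaded_by"] = self.label    # load state data for the task
        self.queue.append(task)

    def dispatch(self):
        while self.queue:
            self.core.accept(self.queue.popleft())

core = SharedShaderCore()
standard = VirtualEngine("standard-priority", core)
low_latency = VirtualEngine("low-latency", core)
standard.receive(Task("T1", "graphics"))
low_latency.receive(Task("T2", "general-compute"))
standard.dispatch(); low_latency.dispatch()
core.run()   # interleaves T1 (graphics) and T2 (general-compute) steps
```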
Key Claims at a Glance
- The complaint asserts at least claim 5, which depends on independent claim 1. (Compl. ¶72).
- Claim 5, as paraphrased in the complaint, recites an apparatus with the following essential elements (Compl. ¶73):
  - A plurality of engines associated with a first processing unit, configured to receive a plurality of tasks from a scheduling module and load the state data for each task.
  - A shader core associated with the first processing unit, configured to receive tasks from the engines and to execute a first task while executing a second task, based on their respective state data.
  - The first task is a graphics-processing task, and the second task is a general-computing task.
 
- The complaint does not explicitly reserve the right to assert other dependent claims of the ’381 Patent.
U.S. Patent No. 9,519,943 - "Priority-Based Command Execution"
- Patent Identification: U.S. Patent No. 9,519,943, “Priority-Based Command Execution,” issued December 13, 2016. (Compl. ¶12).
The Invention Explained
- Problem Addressed: The patent’s background addresses the latency problem in systems where high-priority computational commands sent to a GPU can be delayed behind a queue of lower-priority rendering commands, preventing the CPU from receiving timely results. (’943 Patent, col. 1:33-45).
- The Patented Solution: The invention describes a processing device with a set of command queues, including at least one dedicated "high priority queue." A command processor is architected to always retrieve and service commands from the high-priority queue before retrieving commands from any other queues, ensuring that critical tasks are executed with minimal delay. (’943 Patent, Abstract; col. 2:60-67). A conceptual sketch of this retrieval order appears after this list.
- Technical Importance: This hardware-level prioritization mechanism allows a system to guarantee faster turnaround for certain workloads, which is critical for applications that rely on the GPU for more than just graphics rendering. (’943 Patent, col. 1:37-45).
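A minimal, hypothetical sketch of the strict-priority retrieval the ’943 Patent claims follows; the queue layout and command strings are illustrative assumptions, not a description of the accused command processor.

```python
# Conceptual sketch only: drain the high-priority queue before retrieving
# from any other queue. Structure and names are hypothetical.
from collections import deque

class CommandProcessor:
    def __init__(self):
        self.high_priority = deque()
        self.other_queues = [deque(), deque()]     # e.g., rendering queues

    def submit(self, command: str, high_priority: bool = False, queue_index: int = 0):
        target = self.high_priority if high_priority else self.other_queues[queue_index]
        target.append(command)

    def retrieve_next(self):
        """Retrieve from the high-priority queue before any other queue."""
        if self.high_priority:
            return self.high_priority.popleft()
        for q in self.other_queues:
            if q:
                return q.popleft()
        return None

cp = CommandProcessor()
cp.submit("draw frame A", queue_index=0)
cp.submit("compute result X", high_priority=True)
cp.submit("draw frame B", queue_index=1)
while (cmd := cp.retrieve_next()) is not None:
    print("processing core executes:", cmd)        # "compute result X" comes out first
```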
Key Claims at a Glance
- The complaint asserts independent claim 11. (Compl. ¶91).
- Claim 11, as paraphrased in the complaint, recites a processing device with the following essential elements (Compl. ¶92):
  - A set of queues configured to hold commands received from a central processing unit (CPU).
  - A command processor configured to retrieve commands from the queues, where the set includes a high priority queue.
  - The command processor is configured to retrieve a high priority command from the high priority queue before retrieving commands from other queues.
  - A processing core configured to execute the commands sent from the command processor.
 
- The complaint does not explicitly reserve the right to assert dependent claims of the ’943 Patent.
Multi-Patent Capsule: U.S. Patent No. 7,717,350
- Patent Identification: U.S. Patent No. 7,717,350, “Portable Computing Platform Having Multiple Operating Modes and Heterogeneous Processors,” issued May 18, 2010. (Compl. ¶13).
- Technology Synopsis: The patent addresses power management in portable devices by employing heterogeneous processors (e.g., a high-power, high-performance processor and a low-power, lower-performance one). The system operates in different modes, selectively activating processors based on system preferences to balance performance needs with battery life conservation. (’350 Patent, Abstract). A conceptual sketch of this mode-dependent core selection appears after this capsule.
- Asserted Claims: Claim 17. (Compl. ¶112).
- Accused Features: The accused Qualcomm SoCs incorporate heterogeneous CPU cores (e.g., Prime, Performance, and Efficiency cores), and their operation is allegedly dependent on system preferences managed by a scheduler to balance performance and power consumption. (Compl. ¶¶ 117, 120).
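As context for the ’350 allegations, a minimal, hypothetical sketch of mode-dependent core selection follows. The core labels track the complaint’s Prime/Performance/Efficiency terminology, but the preference values and selection rule are illustrative assumptions, not Qualcomm’s scheduler.

```python
# Conceptual sketch only: select among heterogeneous cores based on a system
# preference, trading performance against power draw. Values are hypothetical.
CORES = {
    "prime":       {"performance": 3, "power_draw": 3},
    "performance": {"performance": 2, "power_draw": 2},
    "efficiency":  {"performance": 1, "power_draw": 1},
}

def select_core(preference: str) -> str:
    """Favor raw performance in a high-performance mode; otherwise favor
    the core with the lowest power draw to conserve battery."""
    if preference == "high_performance":
        return max(CORES, key=lambda c: CORES[c]["performance"])
    return min(CORES, key=lambda c: CORES[c]["power_draw"])

print(select_core("high_performance"))   # -> prime
print(select_core("battery_saver"))      # -> efficiency
```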
Multi-Patent Capsule: U.S. Patent No. 11,741,019
- Patent Identification: U.S. Patent No. 11,741,019, “Memory Pools in a Memory Model for a Unified Computing System,” issued August 29, 2023. (Compl. ¶14).
- Technology Synopsis: The patent describes a memory model for a unified computing system where processors like a CPU and GPU share memory. It discloses a "mapper" that receives a memory operation and maps it to one of several "virtual memory pools," which correspond to different physical memory resources, thereby creating a unified address space for the processors. (’019 Patent, Abstract). A conceptual sketch of the claimed mapping appears after this capsule.
- Asserted Claims: Claim 11. (Compl. ¶131).
- Accused Features: The accused SoCs allegedly implement a "unified memory architecture" and support shared virtual memory (SVM), which allows the CPU and GPU to share a single address space. This functionality is allegedly enabled by memory controllers and memory management units (MMU/SMMU) that act as the claimed "mapper." (Compl. ¶¶ 137-138).
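To illustrate the claimed "mapper," the hypothetical sketch below routes a memory operation’s address to one of several virtual memory pools, each backed by a different physical resource. The pool names, address ranges, and mapping rule are assumptions for illustration only.

```python
# Conceptual sketch only: a "mapper" that routes a memory operation to one of
# several virtual memory pools backed by different physical resources,
# presenting a unified address space. Layout is a hypothetical assumption.
from typing import NamedTuple

class Pool(NamedTuple):
    name: str
    base: int
    size: int
    backing: str                   # physical memory resource behind the pool

POOLS = [
    Pool("local_pool",  0x0000_0000, 0x4000_0000, "GPU-local memory"),
    Pool("system_pool", 0x4000_0000, 0x8000_0000, "shared system DRAM"),
]

def map_operation(virtual_address: int) -> Pool:
    """Map a memory operation's virtual address to the pool that contains it."""
    for pool in POOLS:
        if pool.base <= virtual_address < pool.base + pool.size:
            return pool
    raise ValueError("address not in any virtual memory pool")

pool = map_operation(0x5000_1000)
print(f"operation maps to {pool.name} ({pool.backing})")   # -> system_pool
```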
Multi-Patent Capsule: U.S. Patent No. 9,116,809
- Patent Identification: U.S. Patent No. 9,116,809, “Memory Heaps in a Memory Model for a Unified Computing System,” issued August 25, 2015. (Compl. ¶15).
- Technology Synopsis: Related to the ’019 Patent, this invention details a method for allocating memory where a memory operation referencing a shared memory address (SMA) is mapped to one of a plurality of "memory heaps." The mapping is based on the SMA, and the result is provided to the processor to complete the operation, unifying memory access for different processors. (’809 Patent, Abstract). A conceptual sketch of this heap mapping appears after this capsule.
- Asserted Claims: Claim 1. (Compl. ¶150).
- Accused Features: The accused SoCs allegedly support shared virtual memory (SVM) and a unified memory architecture, in which memory operations from the CPU or GPU are mapped to shared system memory via hardware such as the System Memory Management Unit (SMMU). (Compl. ¶¶ 156, 158).
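A companion sketch for the ’809 Patent’s heap-based variant follows: the shared memory address (SMA) itself selects the heap, and the mapping result is returned so the requesting processor can complete the operation. The heap layout is an illustrative assumption.

```python
# Conceptual sketch only: map a shared memory address (SMA) to one of several
# memory heaps and hand the result back to the requesting processor.
HEAPS = {
    "cpu_coherent_heap": range(0x0000_0000, 0x2000_0000),
    "gpu_visible_heap":  range(0x2000_0000, 0x6000_0000),
}

def map_sma(sma: int, requestor: str) -> dict:
    """Select a heap based on the SMA, then return the mapping so the
    requesting processor (CPU or GPU) can finish the memory operation."""
    for heap, addresses in HEAPS.items():
        if sma in addresses:
            return {"requestor": requestor, "heap": heap, "sma": hex(sma)}
    raise ValueError("SMA falls outside every heap")

# Both processors resolve the same SMA to the same heap: unified access.
print(map_sma(0x3000_0040, "GPU"))
print(map_sma(0x3000_0040, "CPU"))
```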
III. The Accused Instrumentality
Product Identification
The complaint accuses Qualcomm’s Snapdragon 8 Gen 3 and Snapdragon 8+ Gen 1 mobile processors ("Exemplary Qualcomm SoCs") and downstream products that incorporate them, specifically identifying the OnePlus 13R and Nothing Phone (2) smartphones. (Compl. ¶¶ 43, 74-76).
Functionality and Market Context
The accused SoCs are described as "premium-tier" mobile platforms that integrate a multi-core Kryo CPU and an Adreno 700 series GPU. (Compl. ¶¶ 76, 95). The complaint alleges these SoCs feature a "unified shader architecture," support for "Low Priority Asynchronous Compute (LPAC)," heterogeneous CPU cores for power management, and a "unified memory architecture" with shared virtual memory. (Compl. ¶¶ 78, 79, 117, 137). A screenshot from a product brief for the Snapdragon 8+ Gen 1 describes it as a "premium-tier powerhouse" delivering boosts across "on-device experiences." (Compl. p. 22). The end-user smartphones are alleged to incorporate these SoCs to perform their core computing functions. (Compl. ¶¶ 74, 75).
IV. Analysis of Infringement Allegations
U.S. Patent No. 8,854,381 Infringement Allegations
| Claim Element (from Dependent Claim 5) | Alleged Infringing Functionality | Complaint Citation | Patent Citation | 
|---|---|---|---|
| an apparatus comprising: a plurality of engines associated with a first processing unit... configured to receive... a plurality of tasks and to load a state data associated with each... | The Adreno 700 series GPU allegedly includes a "Command Processor," "Render Front End," and microcontrollers that function as a plurality of engines receiving tasks from the CPU's Kernel Graphics Support Layer (KGSL) driver. | ¶77 | col. 7:1-14 | 
| a shader core associated with the first processing unit... configured to... execute a first task from the plurality while executing a second task from the plurality... | The Adreno GPU's "unified shader architecture," including its "Shader Processors," allegedly supports Low Priority Asynchronous Compute (LPAC), which enables the concurrent execution of different tasks. | ¶¶78-79 | col. 3:9-13 | 
| wherein the first task comprises a graphics-processing task and the second task comprises a general-computing task. | The complaint alleges that the accused GPUs concurrently process both graphics tasks and general-compute tasks, citing Qualcomm documentation regarding asynchronous compute. | ¶80 | col. 8:19-24 | 
Identified Points of Contention
- Scope Questions: A central question may be whether the various functional blocks identified in the Adreno architecture (e.g., "Command Processor," "Render Front End") collectively meet the definition of a "plurality of engines" that receive tasks "substantially in parallel," as contemplated by the patent.
- Technical Questions: The analysis may focus on whether the accused "Low Priority Asynchronous Compute" feature constitutes executing one task "while" another is executing, as required by the claim. A dispute could arise over whether this is true concurrent execution on shared resources or a more conventional form of rapid, fine-grained time-slicing that does not meet the claim limitation. The block diagram of the Adreno 700 series GPU architecture is provided as evidence of the distinct front-end components alleged to be "engines." (Compl. p. 23).
U.S. Patent No. 9,519,943 Infringement Allegations
| Claim Element (from Independent Claim 11) | Alleged Infringing Functionality | Complaint Citation | Patent Citation | 
|---|---|---|---|
| a processing device, comprising: a set of queues, each queue... being configured to hold commands received from a central processing unit (CPU)... | The Adreno 700 series GPU allegedly includes microcontrollers and a command processor that manage a plurality of queues, such as ring buffers, to hold commands sent from the Kryo CPU. | ¶96 | col. 5:35-42 | 
| a command processor... wherein the set of queues includes a high priority queue... wherein the command processor is configured to retrieve a high priority command held in the high priority queue before retrieving commands held in other queues... | The complaint alleges that the accused SoCs can pause a low-priority workload to service a high-priority task, such as UI rendering, which is presented as evidence of the claimed priority-based command retrieval. | ¶98 | col. 2:60-67 | 
| and a processing core configured to execute the received command, wherein the command processor is configured to retrieve commands... and send the retrieved commands to the processing core for execution. | The Adreno GPU's "Shader Processors" and "uSPTPs" allegedly function as the claimed "processing core" that executes the commands retrieved and sent by the command processor. | ¶101 | col. 4:3-5 | 
Identified Points of Contention
- Scope Questions: A likely point of dispute will be the definition of "retrieve." The complaint points to documentation on "context switching" to pause a low-priority task (Compl. ¶98); the defense may argue this is a preemption mechanism that occurs after command retrieval, not the claimed act of selecting from the high-priority queue before retrieving from others.
- Technical Questions: The infringement analysis will likely require evidence of how the Adreno command processor's scheduling logic actually operates. The core question is whether it implements a strict priority retrieval system as claimed, or a more complex, weighted scheduling algorithm where high-priority commands are favored but not necessarily retrieved first in all circumstances. A diagram showing command processing for the Adreno 7xx GPU is offered to illustrate separate processing paths for standard and low-priority asynchronous compute tasks. (Compl. p. 37).
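To make that distinction concrete, the hypothetical sketch below contrasts a strict-priority retriever (the claimed behavior) with a weighted retriever that merely favors the high-priority queue. Neither function describes the Adreno command processor’s actual scheduling logic, which is the evidentiary question.

```python
# Hypothetical contrast only; neither scheduler reflects the accused design.
from collections import deque
import itertools

def strict_priority(high: deque, other: deque):
    """Claimed behavior: the high-priority queue is always drained first."""
    if high:
        return high.popleft()
    return other.popleft() if other else None

_pattern = itertools.cycle(["high", "high", "other"])   # e.g., 2:1 weighting

def weighted(high: deque, other: deque):
    """Alternative behavior: high priority is favored, but a command from
    another queue is periodically retrieved even while high-priority
    commands remain pending."""
    if next(_pattern) == "other" and other:
        return other.popleft()
    return high.popleft() if high else (other.popleft() if other else None)

high_q, other_q = deque(["H1", "H2", "H3"]), deque(["O1", "O2"])
print([weighted(high_q, other_q) for _ in range(5)])    # -> H1, H2, O1, H3, O2
```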
V. Key Claim Terms for Construction
Term: "execute a first task... while executing a second task" (’381 Patent, Claim 1)
Context and Importance
This term is central to the infringement theory for the ’381 Patent. Its construction will determine whether the accused GPU's handling of concurrent workloads, such as its LPAC feature, meets the claim requirements. Practitioners may focus on this term because the distinction between true simultaneous execution and rapid time-slicing or preemption is a critical technical and legal issue.
Intrinsic Evidence for Interpretation
- Evidence for a Broader Interpretation: The specification uses terms like "substantially in parallel" and "concurrently," which could suggest that the tasks do not need to be executing instructions in the exact same clock cycle, but rather that their execution periods overlap significantly. (’381 Patent, col. 3:10-13, 8:26-31).
- Evidence for a Narrower Interpretation: The description of partitioning the shader core’s resources (e.g., SIMDs) for different tasks could support a narrower reading that requires spatial partitioning and simultaneous, parallel execution on distinct hardware subsets. (’381 Patent, col. 5:12-29).
Term: "retrieve a high priority command... before retrieving commands held in other queues" (’943 Patent, Claim 11)
Context and Importance
This phrase defines the core priority mechanism of the ’943 Patent. The case may turn on whether the accused command processor's actions constitute "retrieving" from one queue before another, or if it uses a different form of prioritization, like preempting an already-executing low-priority task.
Intrinsic Evidence for Interpretation
- Evidence for a Broader Interpretation: The patent’s stated goal of reducing latency for high-priority commands could be used to argue for a construction that covers any mechanism that logically prioritizes execution, not just the literal act of fetching from memory. (’943 Patent, col. 1:40-45).
- Evidence for a Narrower Interpretation: The claim language focuses on the sequence of "retrieving" from queues. The specification describes a command processor that checks the high-priority queue first, suggesting the term "retrieve" refers to the pre-execution act of fetching a command from a buffer for processing. (’943 Patent, col. 6:5-15).
VI. Other Allegations
Indirect Infringement
The complaint alleges induced infringement against all Defendants, asserting that by selling the SoCs (Qualcomm) and the smartphones containing them (Nothing, OnePlus), and by providing marketing, user manuals, and technical documentation, they encourage and instruct customers and end-users to operate the products in an infringing manner. (Compl. ¶¶ 81, 84, 102, 105). Contributory infringement is also alleged on the basis that the accused SoCs are a material part of the invention and have no substantial non-infringing uses. (Compl. ¶¶ 83, 104).
Willful Infringement
Willfulness is alleged against all Defendants based on knowledge of the asserted patents obtained "at least as early as the filing of the present Complaint." (Compl. ¶¶ 53, 61, 69). This alleges post-suit willfulness rather than pre-suit knowledge.
VII. Analyst’s Conclusion: Key Questions for the Case
- A central issue will be one of architectural mapping: do the functional blocks described in Qualcomm's public-facing documentation (e.g., "Command Processor," "LPAC," heterogeneous cores) and evidenced by diagrams in the complaint (Compl. pp. 23, 37) correspond to the specific claim elements of "engines," "queues," and "mappers" as defined and enabled by the patent specifications, or is there a fundamental mismatch in architecture?
- A key evidentiary question will be one of functional operation: does the accused system's method of handling priority tasks constitute "retriev[ing]" from a high-priority queue before others as claimed in the ’943 Patent, or is it a post-retrieval preemption mechanism? Similarly, for the ’381 Patent, does the system's concurrent processing constitute executing one task "while" executing another, or is it a form of rapid time-slicing that falls outside the claim's scope?
- A third core question will be one of definitional scope: for the memory management patents (’019 and ’809), can the combination of the accused SoC’s hardware units (e.g., SMMU, GPU MMU) and associated driver software be properly characterized as the claimed "mapper," and do the system's memory regions qualify as the claimed "virtual memory pools" or "heaps" operating within a unified system as the patents describe?