ZiiLabs v. NVIDIA Corp

1:17-cv-01370 (D. Del.)

I. Executive Summary and Procedural Information

  • Parties: ZiiLabs Inc., Ltd. (Plaintiff); NVIDIA Corp. (Defendant)
  • Case Identification: 1:17-cv-01370, D. Del., 10/02/2017
  • Venue Allegations: Venue is asserted based on Defendant's incorporation in Delaware, commission of infringing acts in the state, and regular business conduct within the district.
  • Core Dispute: Plaintiff alleges that Defendant’s graphics processing units (GPUs) and related products infringe four patents related to graphics memory management and processing techniques.
  • Technical Context: The technology at issue concerns methods for managing memory in 3D graphics systems, a critical function for achieving high performance in computationally intensive applications like video games and professional design software.
  • Key Procedural History: The complaint alleges that Defendant had pre-suit knowledge of three of the asserted patents since at least August 7, 2013, and of the fourth patent since at least November 2016, based on notice letters sent by Plaintiff. The complaint also notes that Intel Corporation acquired a license to Plaintiff's patent portfolio in November 2012.

Case Timeline

Date Event
1994-01-01 3DLabs Inc. Ltd. formed (Plaintiff's predecessor)
1995-01-01 3DLabs introduces the GLINT 300SX GPU
1996-01-01 3DLabs launches the Permedia GPU
1999-06-09 Earliest Priority Date for ’615, ’061, and ’425 Patents
2002-01-01 3DLabs acquired by Creative Technology Ltd.
2003-12-31 Earliest Priority Date for ’943 Patent
2004-01-27 ’615 Patent Issued
2006-05-23 ’061 Patent Issued
2009-01-01 3DLabs rebranded as ZiiLabs Inc., Ltd.
2010-05-04 ’425 Patent Issued
2012-11-01 Intel licenses ZiiLabs’ patent portfolio
2013-08-07 Plaintiff allegedly provides notice of ’615, ’061, ’425 Patents to Defendant
2015-08-04 ’943 Patent Issued
2016-11-01 Plaintiff allegedly provides notice of ’943 Patent to Defendant
2017-10-02 Complaint Filed

II. Technology and Patent(s)-in-Suit Analysis

U.S. Patent No. 6,683,615: "Doubly-Virtualized Texture Memory"

  • Patent Identification: U.S. Patent No. 6,683,615, "Doubly-Virtualized Texture Memory," issued January 27, 2004.

The Invention Explained

  • Problem Addressed: The patent describes the difficulty software applications and drivers face in managing the memory on a graphics card ("texture memory") when the required graphical data exceeds the available space (U.S. Patent No. 6,683,615, col. 4:15-28). This forces difficult trade-offs regarding which data to delete and can lead to memory fragmentation and performance degradation (Id.).
  • The Patented Solution: The invention proposes a "doubly virtualized" memory hierarchy for graphics data. The graphics card's dedicated memory is treated as a primary level that can be "paged" into the host computer's main physical memory (RAM), which in turn can be paged into the host's bulk storage (e.g., a hard disk) ('615 Patent, Abstract; col. 5:1-9). This system allows graphics hardware to manage a much larger virtual memory space for textures than is physically present on the graphics card itself, automating memory management in a manner similar to a CPU ('615 Patent, col. 5:1-24). A simplified conceptual sketch of this hierarchy appears after this list.
  • Technical Importance: This architecture aimed to solve the growing problem of massive texture data in 3D applications by applying established virtual memory concepts directly to graphics subsystems, thereby simplifying software development and enabling more complex visual scenes without requiring prohibitively expensive amounts of on-card memory ('615 Patent, col. 4:45-57).
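
To make the claimed architecture concrete, the following Python sketch models a two-level virtualization of graphics memory under illustrative assumptions; the tier names, capacities, and LRU eviction policy are hypothetical and are not drawn from the '615 patent's embodiments or from any accused product.

    # Minimal sketch (illustrative only) of a "doubly virtualized" texture memory
    # hierarchy: on-card graphics memory is paged into host RAM (first level of
    # virtualizing), and host RAM is in turn paged into bulk storage (second level).
    from collections import OrderedDict

    class PagedTier:
        """One tier of the hierarchy with a simple LRU eviction policy."""
        def __init__(self, name, capacity, backing=None):
            self.name = name
            self.capacity = capacity      # number of texture pages this tier holds
            self.backing = backing        # next tier down (None for bulk storage)
            self.pages = OrderedDict()    # page_id -> page data

        def load(self, page_id):
            """Return a page, faulting to the backing tier when it is not resident."""
            if page_id in self.pages:
                self.pages.move_to_end(page_id)        # mark most recently used
                return self.pages[page_id]
            # Page fault at this tier: fetch from the tier below.
            data = self.backing.load(page_id) if self.backing else f"page-{page_id}"
            if len(self.pages) >= self.capacity:
                evicted_id, evicted = self.pages.popitem(last=False)
                if self.backing:                       # simplified write-back of the victim
                    self.backing.pages[evicted_id] = evicted
            self.pages[page_id] = data
            return data

    # Bulk storage can hold "everything"; host RAM and graphics memory are smaller.
    disk = PagedTier("bulk storage", capacity=1_000_000)
    host_ram = PagedTier("host RAM", capacity=8, backing=disk)            # second level
    gfx_mem = PagedTier("graphics memory", capacity=2, backing=host_ram)  # on-card

    for page in [0, 1, 2, 0, 3, 1]:   # texture accesses exceeding on-card capacity
        gfx_mem.load(page)
    print(sorted(gfx_mem.pages))       # only the most recently used pages stay on-card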

Key Claims at a Glance

  • The complaint asserts independent claim 1 and dependent claims 2-10 (Compl. ¶4).
  • Independent Claim 1 includes the following essential elements:
    • A computer system with a CPU, main memory, and a bulk storage unit.
    • Specialized graphics-processing logic with its own specialized graphics memory, separate from the main memory.
    • The main memory is configured to provide a "first level of virtualizing" for the graphics memory.
    • The bulk storage is configured to provide a "second level of virtualizing" for the graphics memory.

U.S. Patent No. 7,050,061: "Autonomous Address Translation in Graphic Subsystem"

  • Patent Identification: U.S. Patent No. 7,050,061, "Autonomous Address Translation in Graphic Subsystem," issued May 23, 2006.

The Invention Explained

  • Problem Addressed: The patent notes that prior art address translation hardware for graphics (such as Intel's GART) was inflexible and outside of the application's control, preventing optimization ('061 Patent, col. 7:6-14). Software-based management of texture memory, conversely, was described as "very difficult (or impossible) to do properly" ('061 Patent, col. 7:35-42).
  • The Patented Solution: The invention discloses a graphics chip with an integrated memory management function that can autonomously handle logical-to-physical address translation ('061 Patent, Abstract). This allows the graphics hardware itself to manage page faults for texture data—fetching needed data from host memory on demand—without requiring intervention from the host CPU, thereby offloading memory management duties from the host system ('061 Patent, col. 9:5-12). A conceptual sketch of this arrangement appears after this list.
  • Technical Importance: By moving memory management tasks from the host driver to the graphics hardware, this approach sought to reduce latency and CPU overhead, a critical step for enabling the performance required by real-time 3D graphics applications ('061 Patent, col. 9:5-12).
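
As context for the claim language, the following Python sketch illustrates autonomous, on-chip logical-to-physical translation with on-demand fetching from host memory. All identifiers (PAGE_SIZE, host_memory, page_table, and so on) are illustrative assumptions, not terms from the '061 patent or descriptions of NVIDIA hardware.

    # Hypothetical sketch: the graphics chip keeps its own page table and, on a
    # miss, fetches the page from host memory itself rather than asking the host
    # CPU to intervene.
    PAGE_SIZE = 4096

    host_memory = {pid: f"texture-page-{pid}" for pid in range(64)}  # pages held by the host
    oncard_memory = {}        # physical frames on the graphics card
    page_table = {}           # logical page id -> on-card frame id (managed on-chip)
    next_frame = 0

    def translate_and_fetch(logical_addr):
        """Logical-to-physical translation performed by the graphics chip itself."""
        global next_frame
        page_id, offset = divmod(logical_addr, PAGE_SIZE)
        if page_id not in page_table:                        # page fault handled on-chip:
            oncard_memory[next_frame] = host_memory[page_id]  # fetch from host memory
            page_table[page_id] = next_frame                  # update the on-chip page table
            next_frame += 1
        frame = page_table[page_id]
        return frame * PAGE_SIZE + offset                     # physical on-card address

    print(translate_and_fetch(5 * PAGE_SIZE + 12))    # faults, then translates
    print(translate_and_fetch(5 * PAGE_SIZE + 200))   # hits the on-chip page table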

Key Claims at a Glance

  • The complaint asserts independent claim 1 and dependent claim 2 (Compl. ¶4).
  • Independent Claim 1 includes the following essential elements:
    • A graphics processing chip comprising rendering acceleration logic.
    • A texture memory management function integrated on the chip.
    • The function manages texture storage in both host memory and "normal texture memory" (i.e., on-card memory).

Multi-Patent Capsules

  • Patent Identification: U.S. Patent No. 7,710,425, "Graphic Memory Management with Invisible Hardware-Managed Page Faulting," issued May 4, 2010.

  • Technology Synopsis: This patent discloses a graphics accelerator that handles page faults for texture data "invisibly" to the host processor ('425 Patent, Abstract). When the accelerator requires texture data located in the host's main memory, the accelerator's own hardware manages the data transfer without interrupting the host CPU, streamlining memory operations and reducing system overhead ('425 Patent, col. 5:6-21). A conceptual sketch of this behavior follows this capsule.

  • Asserted Claims: Claims 1-11, with claim 2 identified as representative (Compl. ¶¶4, 54).

  • Accused Features: The accused functionality is the hardware-level memory management within NVIDIA's GPUs that handles the paging of texture data from host memory (Compl. ¶¶5, 54).
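
The following minimal Python sketch illustrates the "invisible to the host" property described above under assumed names and data structures; it is a conceptual illustration only, not a representation of the '425 patent's embodiments or the accused GPUs.

    # Hypothetical sketch of "invisible" hardware-managed page faulting: when the
    # accelerator needs a texture page resident only in host memory, its own copy
    # logic pulls the page in, and the host CPU is never interrupted.
    host_memory = {pid: f"texels-{pid}" for pid in range(16)}  # pages owned by the host
    accelerator_memory = {}                                    # pages resident on the GPU
    host_interrupts = 0   # would increment in an interrupt-driven design; stays 0 here

    def accelerator_read(page_id):
        """Accelerator-side read; missing pages are pulled in by accelerator hardware."""
        if page_id not in accelerator_memory:
            # Hardware-managed fault: copy the page from host memory without raising
            # an interrupt or invoking a host-side fault handler.
            accelerator_memory[page_id] = host_memory[page_id]
        return accelerator_memory[page_id]

    for pid in (3, 7, 3):
        accelerator_read(pid)
    print(host_interrupts)   # 0: paging stayed invisible to the host processor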

  • Patent Identification: U.S. Patent No. 9,098,943, "Multiple Simultaneous Bin Sizes," issued August 4, 2015.

  • Technology Synopsis: This patent addresses a conflict in "tiled rendering" or "binning" architectures, where a screen is divided into regions (bins) for processing ('943 Patent, Abstract). The invention resolves the tension between needing large bins for efficient geometric processing and small bins for efficient rendering (to fit in on-chip memory) by allowing the "database bin size" to be different from, and a multiple of, the "display bin size" ('943 Patent, Abstract). A conceptual sketch of the two bin granularities follows this capsule.

  • Asserted Claims: Claims 1-26, with claim 1 identified as representative (Compl. ¶¶4, 68).

  • Accused Features: The complaint targets the tiled rendering architectures in NVIDIA's GPUs, specifically alleging they use a multi-bin-size approach to manage graphics processing (Compl. ¶¶5, 68).
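
The following Python sketch illustrates the two-granularity binning concept under assumed, illustrative bin sizes; the constants and helper functions are hypothetical and do not describe the '943 patent's embodiments or NVIDIA's tiled rendering implementation.

    # Illustrative sketch of "multiple simultaneous bin sizes": geometry is sorted
    # into large "database bins" (efficient for geometry processing), each an
    # integer multiple of the small "display bins" that fit in on-chip memory.
    DISPLAY_BIN = 32                    # pixels per side of a display bin (assumed)
    DATABASE_BIN = 4 * DISPLAY_BIN      # database bin size is a multiple of display bin size

    def database_bin_of(x, y):
        """Coarse bin used when sorting primitives during geometry processing."""
        return (x // DATABASE_BIN, y // DATABASE_BIN)

    def display_bins_in(db_bin):
        """The small display bins covered by one database bin, rendered one at a time."""
        bx, by = db_bin
        ratio = DATABASE_BIN // DISPLAY_BIN
        return [(bx * ratio + i, by * ratio + j) for j in range(ratio) for i in range(ratio)]

    # A primitive at pixel (200, 70) is sorted once into a coarse database bin...
    db = database_bin_of(200, 70)
    print(db)                                            # (1, 0)
    # ...and at render time that database bin is walked as 4x4 small display bins.
    print(len(display_bins_in(db)), display_bins_in(db)[:3])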

III. The Accused Instrumentality

Product Identification

The complaint accuses NVIDIA's graphics cards, GPUs, and GPU designs implementing the "Fermi, Kepler, Maxwell, Pascal, and Volta architectures," and products containing them (Compl. ¶5).

Functionality and Market Context

The accused products are high-performance graphics processors that are central to Defendant's business and are widely used in personal computers, workstations, and servers for applications including gaming, professional design, and artificial intelligence. The complaint alleges these products contain sophisticated memory management and rendering pipeline architectures that infringe the asserted patents (Compl. ¶¶1, 5).

IV. Analysis of Infringement Allegations

No probative visual evidence is provided in the complaint.

The complaint references but does not attach claim chart exhibits. The infringement theories are therefore summarized from the complaint's narrative allegations.

’615 Patent Infringement Allegations

The complaint alleges that the Accused Products, when used in a computer system, directly infringe by creating the three-level memory hierarchy recited in the claims (Compl. ¶¶25-26). The theory suggests that the GPU's on-card memory serves as the claimed "specialized graphics memory," the host system's RAM provides the "first level of virtualizing," and the host's disk-based page file provides the "second level of virtualizing" (Compl. ¶¶29, 32).

’061 Patent Infringement Allegations

The complaint alleges that the Accused Products contain a "texture memory management function" that is integrated onto the GPU chip (Compl. ¶¶39-40). This function is alleged to manage texture storage both locally on the graphics card and in the host system's main memory, thereby practicing the elements of at least Claim 1 of the ’061 Patent (Id.).

Identified Points of Contention

  • Scope Questions: For the ’615 Patent, a central question may be whether the ordinary operation of a GPU using a host operating system's memory management constitutes the claimed system where main memory is "configured to provide a first level of virtualizing for said graphics memory." For the ’061 Patent, a dispute may arise over whether NVIDIA’s combination of on-chip hardware and driver software meets the "integrated on said chip" limitation for the claimed memory management function.
  • Technical Questions: What evidence does the complaint provide that the accused GPU architectures actively manage a "second level of virtualizing" into bulk storage, as required by the '615 Patent, beyond what is provided by the host OS? Similarly, for the '061 Patent, what evidence demonstrates that the on-chip hardware, independent of the driver, "manages texture storage in host memory"?

V. Key Claim Terms for Construction

Term: "main memory is configured to provide a first level of virtualizing for said graphics memory" (’615 Patent, Claim 1)

Context and Importance

This term is central to defining the patented system architecture. Its construction will determine whether a standard computer architecture with a GPU using system RAM infringes, or if a more specialized, graphics-centric control over that memory is required. Practitioners may focus on whether "configured to provide" requires action by the graphics subsystem or if it can describe a passive capability of the host system.

Intrinsic Evidence for Interpretation

  • Evidence for a Broader Interpretation: The specification describes the concept broadly as the graphics memory being "paged into host physical memory" ('615 Patent, Abstract), which could suggest that any mechanism enabling this data swapping meets the limitation.
  • Evidence for a Narrower Interpretation: The detailed description illustrates a graphics-specific memory management system, including a Memory Allocator and Download Controller, that actively manages the paging process ('615 Patent, FIG. 10; col. 5:1-9). This may support an interpretation that the graphics subsystem itself must perform the "configuring."

Term: "a texture memory management function, integrated on said chip" (’061 Patent, Claim 1)

Context and Importance

The "integrated on said chip" limitation is a potential focal point for non-infringement arguments. The dispute will likely involve the degree to which the management "function" must be implemented in hardware on the GPU versus in the software driver running on the host CPU.

Intrinsic Evidence for Interpretation

  • Evidence for a Broader Interpretation: The patent states that the claimed function "manages both texture storage in host memory and also texture storage in normal texture memory" ('061 Patent, Claim 1), suggesting that if any part of this management involving host memory is handled by on-chip logic, the claim may be met.
  • Evidence for a Narrower Interpretation: The patent emphasizes "autonomous" hardware-based page faulting that offloads duties from the host ('061 Patent, Abstract; col. 9:5-12). This could support an argument that the core decision-making logic for managing host memory storage must reside on the chip, not in the driver.

VI. Other Allegations

Indirect Infringement

The complaint alleges inducement by asserting that Defendant provides instructions, user guides, and technical support that encourage customers to use the Accused Products in an infringing manner (e.g., installing a graphics card into a host computer) (Compl. ¶¶29, 43, 57, 71). Contributory infringement is also alleged on the basis that the products are not staple commodities and are specifically adapted for infringement (Compl. ¶¶32, 46, 60, 74).

Willful Infringement

The complaint alleges willful infringement for all four patents based on Defendant’s alleged pre-suit knowledge. For the ’615, ’061, and ’425 patents, knowledge is alleged as of an August 7, 2013 notice letter (Compl. ¶¶33, 47, 61). For the ’943 patent, knowledge is alleged as of a November 2016 communication (Compl. ¶75).

VII. Analyst’s Conclusion: Key Questions for the Case

  • A core issue will be one of definitional scope: can the broad, system-level claims of the ’615 patent, describing a memory hierarchy that includes the host's RAM and disk, be construed to cover the standard operation of a graphics card in a modern computer, or do they require a more specialized, graphics-centric control over that hierarchy?
  • A key technical question will be the locus of control for memory management: do the accused NVIDIA GPUs perform the claimed memory management and page faulting functions "autonomously" and "invisibly" via hardware "integrated on said chip" as required by the ’061 and ’425 patents, or is the driver software executing on the host CPU so integral to the process that these on-chip limitations are not met?
  • The case may also turn on a question of architectural functionality: does NVIDIA’s tiled rendering technology practice the specific method of using distinct "database bin" and "display bin" sizes to manage rendering workloads as claimed in the ’943 patent, or does it employ a fundamentally different architectural solution to the same problem?