DCT

6:22-cv-01102

AttestWave LLC v. Google LLC

I. Executive Summary and Procedural Information

  • Parties & Counsel: Plaintiff AttestWave LLC; Defendant Google LLC.
  • Case Identification: 6:22-cv-01102, W.D. Tex., 10/24/2022
  • Venue Allegations: Venue is alleged to be proper based on Defendant maintaining an established place of business within the district and allegedly committing acts of patent infringement there.
  • Core Dispute: Plaintiff alleges that certain of Defendant's products and services infringe a patent related to methods for validating the proper execution and integrity of software operating over a computer network.
  • Technical Context: The technology addresses the challenge of ensuring that client-side software in a network environment operates according to predefined rules and has not been tampered with, a key issue in network security and quality-of-service management.
  • Key Procedural History: The complaint does not mention any prior litigation, inter partes review proceedings, or licensing history related to the patent-in-suit.

Case Timeline

Date Event
2002-03-16 U.S. Patent No. 7,305,704 Priority Date
2007-12-04 U.S. Patent No. 7,305,704 Issue Date
2022-10-24 Complaint Filing Date

II. Technology and Patent(s)-in-Suit Analysis

U.S. Patent No. 7,305,704 - "Management of trusted flow system"

  • Patent Identification: U.S. Patent No. 7,305,704, “Management of trusted flow system,” issued December 4, 2007 (the “’704 Patent”).

The Invention Explained

  • Problem Addressed: The patent’s background section describes a fundamental problem in software-based networks like the internet: unlike traditional telephone networks where the network owner controls the user device, internet users have control over the software on their own machines. This allows a malicious or misbehaving user to modify software to overuse network resources or launch attacks, with no reliable way for the network to proactively verify the integrity of the user's software ('704 Patent, col. 2:32-41).
  • The Patented Solution: The invention proposes a system to validate software integrity remotely. It pairs a client-side “Trusted Flow Generator” (TFG) with a network-side “Trusted Tag Checker” (TTC). The TFG is designed to “interlock” a program's normal operational logic (e.g., sending data packets) with a signal-generating function that creates an unpredictable “security tag” ('704 Patent, col. 2:10-22). This tag is embedded in the data flow. The TTC, located at a network checkpoint like a firewall, independently calculates what the tag should be and compares it to the tag received in the packet ('704 Patent, Abstract; FIG. 1). A match provides assurance that the client software is authentic and operating correctly, allowing the network to trust the flow ('704 Patent, col. 2:18-22). An illustrative generator-side sketch follows this list.
  • Technical Importance: This system aimed to provide a proactive mechanism for ensuring software and traffic-flow integrity, in contrast to the reactive nature of conventional firewalls or intrusion detection systems that typically act only after misbehavior is detected ('704 Patent, col. 2:51-56).
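
For orientation, the minimal Python sketch below illustrates the generator side of this architecture. The shared secret, the HMAC/counter tag derivation, and the packet layout are assumptions chosen for readability; the patent discloses its own algorithms (e.g., the flowcharts of FIGS. 8-11), and nothing in this sketch should be read as the claimed implementation or as the accused functionality.

```python
import hashlib
import hmac

# Illustrative sketch only: the patent does not mandate HMAC, counters, or this
# packet format; it requires an unpredictable tag sequence whose generation is
# "interlocked" with the trusted program's normal operation.

SHARED_SECRET = b"provisioned-out-of-band"  # assumed provisioning; not from the patent


def next_tag(secret: bytes, position: int) -> bytes:
    """Derive the security tag for a given position in the sequence.

    Stands in for the specification's cryptographic pseudo-random generator:
    without the secret, the tag at a position cannot be predicted from earlier tags.
    """
    return hmac.new(secret, position.to_bytes(8, "big"), hashlib.sha256).digest()[:8]


def tfg_send(payload: bytes, position: int) -> dict:
    """Trusted Flow Generator (TFG) side: emitting a packet and generating the
    tag are performed together, so a correctly tagged packet evidences that
    this code path (the trusted software) actually executed."""
    return {"seq": position, "payload": payload, "tag": next_tag(SHARED_SECRET, position)}
```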

Key Claims at a Glance

  • The complaint does not specify which claims it asserts, referring only to "exemplary claims" in a separate exhibit not attached to the publicly filed complaint (Compl. ¶12). Independent claim 1 is representative of the system described.
  • The essential elements of independent claim 1 include:
    • A system for validating proper execution of software modules at a remote location.
    • A “trusted flow generator” (TFG) subsystem at the remote location (e.g., a client device) that executes trusted software.
    • A “trusted tag checker” (TTC) subsystem at a validating location (e.g., a network appliance).
    • The TFG locally generates a “sequence of security tags” that is responsive only to the “proper execution” of the software module.
    • A communications network couples the TFG and TTC.
    • The TTC also generates its own sequence of security tags and validates the remote software by comparing its locally generated tags against the tags received from the TFG (illustrated in the sketch following this list).
  • The complaint reserves the right to assert additional claims of the ’704 Patent (Compl. ¶12).
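
Continuing the generator-side sketch above, the snippet below illustrates the final element, the validation-side comparison: the checker independently regenerates the expected tag and compares it with the tag received in the flow. It reuses SHARED_SECRET, next_tag, and tfg_send from that sketch, and the same caveat applies: this is an illustrative assumption, not the patent's disclosed structure.

```python
import hmac

# Companion to the TFG sketch above; reuses SHARED_SECRET, next_tag, and
# tfg_send defined there. Illustrative only.


def ttc_check(packet: dict) -> bool:
    """Trusted Tag Checker (TTC) side: independently regenerate the expected
    tag for this position in the sequence and compare it with the tag carried
    in the flow. A match is treated as validation of the remote software."""
    expected = next_tag(SHARED_SECRET, packet["seq"])
    return hmac.compare_digest(expected, packet["tag"])


# A packet produced by the generator validates; a packet whose tag was not
# produced by the trusted code path (e.g., fabricated without the secret or
# presented at the wrong sequence position) does not.
assert ttc_check(tfg_send(b"hello", position=0))
assert not ttc_check({"seq": 1, "payload": b"rogue", "tag": b"\x00" * 8})
```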

III. The Accused Instrumentality

Product Identification

  • The complaint does not identify any specific accused products or services by name. It refers to “Exemplary Defendant Products” that are purportedly identified in an “Exhibit B” (Compl. ¶12). This exhibit was not filed with the complaint.

Functionality and Market Context

  • The complaint does not provide sufficient detail for analysis of the accused instrumentality's functionality or market context.

IV. Analysis of Infringement Allegations

The complaint alleges that infringement is detailed in claim charts attached as Exhibit B, which was not provided with the filing (Compl. ¶12, ¶17). The narrative allegations state that the accused products “practice the technology claimed by the ’704 Patent” and “satisfy all elements of the Exemplary ’704 Patent Claims” (Compl. ¶17). Without access to Exhibit B or more specific factual allegations, a detailed infringement analysis is not possible.

No probative visual evidence is provided in the complaint.

  • Identified Points of Contention: Based on the patent’s claims and the generalized nature of the allegations, the infringement analysis may raise several technical and legal questions:
    • Architectural Question: A threshold question will be whether any Google product or service can be shown to map onto the claimed two-part TFG/TTC architecture. This would require evidence of a client-side component that generates security tags interlocked with software execution and a distinct network-side component that validates these specific tags.
    • Technical Question: What evidence does the Plaintiff possess to demonstrate that any security feature in a Google product generates a "sequence of security tags" that is, as the claim requires, "responsive only to proper execution" of a software module? The "interlocking" of a program's operational rules with tag generation is a core concept of the invention, and a potential point of dispute will be whether the accused functionality performs this specific operation.
    • Scope Questions: Claim 1 uses functional language (e.g., "means for validating"). This raises the question of whether its elements are subject to 35 U.S.C. § 112(f). If a court determines they are, the scope of the claims would be limited to the corresponding structures and algorithms disclosed in the patent's specification (such as the flowcharts in FIGS. 8-11) and their equivalents, which could significantly narrow the infringement inquiry.

V. Key Claim Terms for Construction

  • The Term: "security tags"

    • Context and Importance: The definition of this term is fundamental to the scope of the invention. The dispute will likely focus on whether this term covers any form of security data transmitted with a packet or is limited to the specific type of unpredictable, execution-dependent signal described in the patent.
    • Intrinsic Evidence for a Broader Interpretation: The patent states the mechanism involves "signaling and allow[ing] piggybacking of proper signals for various purposes, e.g., authentication" ('704 Patent, col. 1:44-46). The patent also discloses that a security tag can be as simple as a single bit of information ('704 Patent, col. 10:40-41).
    • Intrinsic Evidence for a Narrower Interpretation: The specification strongly links the tags to a "cryptographic pseudo-random generator" whose "output cannot be predicted" ('704 Patent, col. 2:15-18). This supports an interpretation that the tags must be unpredictable and generated via a specific cryptographic process that is "interlocked" with the trusted program's performance ('704 Patent, col. 2:10-14).
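
As a point of reference for the narrower reading, the illustrative HMAC/counter scheme sketched in Section II (an assumption for readability, not the patent's disclosed generator) has the cited property: without the shared secret, a tag at a given position in the sequence cannot be reproduced from tags already observed.

```python
# Continues the illustrative sketch from Section II (an assumption, not the
# patent's disclosed generator). Observing earlier tags does not let an
# outsider produce a later one, because each tag depends on a secret key.
observed = [next_tag(SHARED_SECRET, i) for i in range(3)]  # tags seen on the wire
forged = next_tag(b"guessed-secret", 3)                    # attempt without the secret
assert forged != next_tag(SHARED_SECRET, 3)
```
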
  • The Term: "responsive only to proper execution of each said respective software module"

    • Context and Importance: This phrase defines the critical "interlocking" feature. Practitioners may focus on this term because its interpretation will determine how tightly the tag generation must be linked to the software's behavior to meet the claim limitation.
    • Intrinsic Evidence for a Broader Interpretation: A party could argue this simply means the software is running without crashing or throwing a fatal error.
    • Intrinsic Evidence for a Narrower Interpretation: The specification suggests "proper execution" means conforming to "allowed and defined specifications" and "rules of transmissions," such as a TCP connection's allocated window size ('704 Patent, col. 1:37-39; col. 2:3-6). This supports a narrower reading where the software must not only run but also adhere to specific, predefined operational parameters for its execution to be considered "proper."
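
To make the narrower reading concrete, the hypothetical sketch below gates tag generation on a predefined transmission rule, here an allocated window of bytes in flight, so that a rule violation interrupts the tag sequence even though the program keeps running. The class, the rule, and the helper it reuses (next_tag from the Section II sketch) are illustrative assumptions, not the patent's disclosed structures or the accused functionality.

```python
class RuleBoundGenerator:
    """Hypothetical illustration of the narrower reading: tag generation is
    interlocked with a predefined transmission rule (an allocated window of
    bytes in flight), not merely with the program running without error."""

    def __init__(self, secret: bytes, window_size: int):
        self.secret = secret
        self.window_size = window_size  # allowed unacknowledged bytes (assumed rule)
        self.in_flight = 0
        self.position = 0

    def send(self, payload: bytes) -> dict:
        if self.in_flight + len(payload) > self.window_size:
            # Exceeding the allocated window is not "proper execution" under the
            # narrower reading, so the tag sequence stops and downstream
            # validation fails, even though the program has not crashed.
            raise RuntimeError("window exceeded; tag sequence halted")
        self.in_flight += len(payload)
        packet = {"seq": self.position, "payload": payload,
                  "tag": next_tag(self.secret, self.position)}  # from the Section II sketch
        self.position += 1
        return packet

    def acknowledge(self, n_bytes: int) -> None:
        """Free window capacity as the receiver acknowledges data."""
        self.in_flight = max(0, self.in_flight - n_bytes)
```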

VI. Other Allegations

  • Indirect Infringement: The complaint alleges that Google induces infringement by distributing "product literature and website materials" that instruct end users to use its products in a manner that infringes the ’704 Patent (Compl. ¶15).
  • Willful Infringement: The complaint does not use the term "willful," but it alleges "Actual Knowledge of Infringement" arising from service of the complaint and the claim charts it references but did not file (Compl. ¶14). This allegation forms a basis for potential post-suit willful infringement but does not assert any pre-suit knowledge on Google's part.

VII. Analyst’s Conclusion: Key Questions for the Case

  • A primary issue will be one of evidentiary sufficiency: Given the complaint’s reliance on an unattached exhibit, the initial phase of the case will likely focus on identifying the specific accused Google products and what factual evidence Plaintiff can produce to allege these products implement the claimed TFG/TTC architecture.
  • A key technical question will be one of functional correspondence: Does any security feature in the accused products perform the specific “interlocking” function required by the claims—generating a sequence of security tags demonstrably and exclusively tied to the “proper execution” of a software module—or is there a fundamental mismatch in technical operation?
  • The case may also hinge on claim construction, specifically whether the functional language in the asserted claims will be construed under 35 U.S.C. § 112(f). Such a finding would limit the claims to the specific algorithms disclosed in the specification and their equivalents, creating a potentially higher bar for proving infringement.