
2:19-cv-00183

Accelerated Memory Tech LLC v. F5 Networks Inc

I. Executive Summary and Procedural Information

  • Parties & Counsel: Plaintiff Accelerated Memory Tech LLC; Defendant F5 Networks, Inc.
  • Case Identification: 2:19-cv-00183, W.D. Wash., 02/07/2019
  • Venue Allegations: Venue is alleged to be proper because Defendant F5 Networks, Inc. is a Washington corporation that resides in and maintains a physical presence and established place of business within the Western District of Washington.
  • Core Dispute: Plaintiff alleges that Defendant’s BIG-IP Platform, which provides web application services like dynamic caching and load balancing, infringes a patent related to methods for improving server efficiency by caching intermediate state information.
  • Technical Context: The technology addresses performance bottlenecks in web servers by avoiding the repetitive processing of requests for the same digital resource, a foundational concept in content delivery networks and high-traffic web infrastructure.
  • Key Procedural History: The complaint alleges that Defendant had pre-suit knowledge of the patent and the infringement allegations via a letter and claim chart received on August 16, 2018. Public records associated with the patent indicate that on January 29, 2020, after this complaint was filed, the patent owner filed a statutory disclaimer of all claims (1-8) of the patent-in-suit. Such a disclaimer dedicates the disclaimed claims to the public and may render the infringement claims moot.

Case Timeline

Date Event
1999-05-25 ’062 Patent Priority Date (Application Filing)
2003-01-28 ’062 Patent Issue Date
2018-08-16 Defendant allegedly receives pre-suit notice letter
2019-02-07 Complaint Filing Date
2020-01-29 Disclaimer of all claims of the ’062 Patent filed

II. Technology and Patent(s)-in-Suit Analysis

  • Patent Identification: U.S. Patent No. 6,513,062, “Method, Apparatus, and Computer Program Product for Efficient Server Response Generation Using Intermediate State Caching,” issued January 28, 2003.

The Invention Explained

  • Problem Addressed: The patent describes conventional web servers as "not highly efficient" because they must repeatedly perform the same "rewrite mapping process" for every request, even for the same resource (Compl. ¶9; ’062 Patent, col. 5:40-54). This process, which transforms a public-facing resource name (e.g., a URL) into a server's internal name for that resource (e.g., a file path), is described as computationally expensive and redundant when handling numerous requests for the same content. (’062 Patent, col. 5:7-9).
  • The Patented Solution: The invention proposes to improve efficiency by caching "intermediate state information" generated during the first request for a resource (’062 Patent, Abstract). Specifically, the result of the mapping from the external name to the internal name is stored. (’062 Patent, col. 7:11-14). When a subsequent request for the same resource arrives, the server can retrieve this cached mapping from a data structure, such as the hash table shown in Figure 1, and use it to generate a response without re-performing the expensive mapping process. (’062 Patent, col. 8:51-64; Fig. 1).
  • Technical Importance: This method of caching intermediate processing results, rather than just final content, aimed to reduce server load and improve response times for high-volume, repetitive web requests. (’062 Patent, col. 5:16-25).
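The Figure 1 arrangement described above — a hash table of URI descriptor structures holding the cached intermediate state — can be sketched as follows. The field names are paraphrased from the patent's description (internal name, file type, body length, and pointers to the cached response body and headers); the Python rendering itself is an illustrative assumption, not code from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class URIDescriptor:
    # Paraphrase of the Figure 1 "URI Descriptor data structure":
    # the cached intermediate state for one external resource name.
    internal_name: str             # result of the rewrite mapping
    file_type: str                 # e.g., "text/html"
    body_length: int
    response_body: bytes = b""     # stand-in for a pointer to the cached body
    response_headers: list[str] = field(default_factory=list)

# A Python dict serves as the hash table keyed by external name (URI).
uri_table: dict[str, URIDescriptor] = {}
uri_table["/index.html"] = URIDescriptor(
    internal_name="/var/www/index.html",
    file_type="text/html",
    body_length=1024,
)
```

On a repeat request, a single hash-table lookup on the external name recovers the descriptor, so the server never re-runs the rewrite mapping for that resource.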

Key Claims at a Glance

  • The complaint asserts independent Claim 1. (Compl. ¶15).
  • The essential elements of Claim 1 include:
    • receiving a first request for a first resource;
    • deriving intermediate state information used in generating a first response to said first request, said intermediate state information comprising a result of mapping an external name of the first request for the first resource to an internal name associated with the first resource;
    • caching said intermediate state information;
    • receiving a second request for said first resource;
    • retrieving said intermediate state information; and
    • generating a second response to said second request using said intermediate state information.
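Read as a pipeline, the claimed steps resemble a memoized name-resolution routine. The sketch below is purely illustrative (names such as `rewrite_map` and `IntermediateState` are hypothetical, drawn from neither the patent nor the accused product); it assumes the "expensive" mapping is an arbitrary function whose result is cached by external name.

```python
from dataclasses import dataclass

@dataclass
class IntermediateState:
    # Hypothetical stand-in for the claimed "intermediate state information":
    # the cached result of mapping an external name to an internal name.
    internal_name: str

def rewrite_map(external_name: str) -> str:
    # Stand-in for the "computationally expensive" rewrite mapping process
    # (e.g., translating a URL into a server-internal file path).
    return "/var/www" + external_name

class Server:
    def __init__(self) -> None:
        self.cache: dict[str, IntermediateState] = {}

    def respond(self, external_name: str) -> str:
        # "retrieving said intermediate state information"
        state = self.cache.get(external_name)
        if state is None:
            # "deriving intermediate state information" (first request only)
            state = IntermediateState(rewrite_map(external_name))
            # "caching said intermediate state information"
            self.cache[external_name] = state
        # "generating a ... response ... using said intermediate state information"
        return f"200 OK: {state.internal_name}"

server = Server()
first = server.respond("/index.html")   # first request: mapping derived and cached
second = server.respond("/index.html")  # second request: cached mapping reused
```

The infringement question then reduces to whether the accused platform performs these steps in this order on this kind of cached mapping, not merely whether it caches something.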

III. The Accused Instrumentality

Product Identification

The accused instrumentality is Defendant's "BIG-IP Platform," which includes services such as "WebAccelerator" and "BIG-IP LTM" (Local Traffic Manager). (Compl. ¶¶12, 24).

Functionality and Market Context

The complaint describes the BIG-IP Platform as a system that delivers security, networking, and storage services, performing functions such as handling HTTP requests, dynamic caching, and load balancing. (Compl. ¶¶3, 12). The accused "dynamic caching" feature is alleged to cache data from dynamic web applications by inspecting HTTP requests and identifying patterns. (Compl. ¶20). The complaint also highlights the platform's use of "persistence" in its load-balancing features to direct requests for specific content to a particular cache server. (Compl. ¶24). A screenshot from an F5 technical article explains how to load balance requests based on the requested content (e.g., "site1.com" to "cache1") to improve cache efficiency. (Compl. ¶24).
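The persistence behavior the complaint describes (e.g., "site1.com" always routed to "cache1") can be illustrated with a minimal content-based load-balancing sketch. This is an assumption-laden illustration, not F5 code: the server names and the hash-based assignment are hypothetical, chosen only because deterministic hashing is one common way such routing is implemented.

```python
import hashlib

CACHE_SERVERS = ["cache1", "cache2", "cache3"]  # hypothetical cache pool

def pick_cache(host: str) -> str:
    # Deterministically map a requested hostname to one cache server, so
    # repeated requests for the same content always hit the same cache.
    digest = hashlib.sha256(host.encode()).digest()
    return CACHE_SERVERS[digest[0] % len(CACHE_SERVERS)]

# The same host always resolves to the same cache server:
assert pick_cache("site1.com") == pick_cache("site1.com")
```

A dispute in the case is whether a routing rule of this kind is the claimed cached "mapping" of an external name to an internal name, or a functionally different mechanism.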

IV. Analysis of Infringement Allegations

The complaint includes Figure 1 from the patent, which illustrates a hash table data architecture for storing URI descriptor data structures containing cached intermediate state information (Compl. p. 4).

’062 Patent Infringement Allegations

Claim Element (from Independent Claim 1) | Alleged Infringing Functionality | Complaint Citation | Patent Citation
receiving a first request for a first resource | The BIG-IP Platform receives HTTP requests for a resource, such as content from an application server. | ¶17 | col. 6:49-51
deriving intermediate state information...comprising a result of mapping an external name of the first request for the first resource to an internal name... | The BIG-IP system analyzes an HTTP request to derive intermediate state information, where an external name (e.g., a URL) is mapped to an internal name (e.g., a destination IP address or server name). | ¶¶18-19 | col. 7:11-14
caching said intermediate state information | The BIG-IP Platform's "dynamic caching" feature caches the derived information, such as the association between a website and a specific cache location. | ¶22 | col. 7:27-34
receiving a second request for said first resource | The platform receives repeated requests for the same resource, which are handled through features like session persistence. | ¶25 | col. 8:51-54
retrieving said intermediate state information | The platform uses session persistence to retrieve information directing a subsequent request to the same server that handled the first request. | ¶26 | col. 8:54-61
generating a second response to said second request using said intermediate state information | The platform uses the retrieved information from its dynamic cache and load-balancing system to generate the subsequent response by accessing the appropriate cache. | ¶27 | col. 8:61-64
  • Identified Points of Contention:
    • Scope Questions: A central question for infringement is whether the "intermediate state information" of the patent, described as a specific data structure caching a "rewrite mapping," can be interpreted to cover the load balancing rules and persistence mechanisms of the accused BIG-IP platform. The complaint's theory appears to equate a load balancing rule with the claimed cached "mapping," a connection that may be disputed.
    • Technical Questions: What evidence demonstrates that the accused "dynamic caching" feature performs the specific claimed step of "mapping an external name... to an internal name" and then caching the result of that mapping? The complaint relies on high-level product descriptions, raising the question of whether the underlying technical operation of the BIG-IP platform matches the specific sequence of steps required by the claim.

V. Key Claim Terms for Construction

  • The Term: "intermediate state information"

  • Context and Importance: This term is the central inventive concept. Its construction will be determinative of infringement, as the dispute hinges on whether F5's caching and load-balancing data qualifies as the claimed "intermediate state information."

  • Intrinsic Evidence for Interpretation:

    • Evidence for a Broader Interpretation: The patent summary states the term may comprise "an internal name corresponding to the first resource and a type of the first resource; the first resource; or a plurality of response header lines for the first resource." (’062 Patent, col. 5:28-33). This language suggests the term is not limited to a single form.
    • Evidence for a Narrower Interpretation: Figure 1 and the detailed description illustrate a very specific "URI Descriptor data structure" (1400) containing multiple distinct fields, including an internal name, file type, body length, and pointers to the cached response body and headers. (’062 Patent, col. 6:5-16; Fig. 1). A party could argue the term is limited to this disclosed embodiment.
  • The Term: "mapping an external name... to an internal name"

  • Context and Importance: This term defines the specific action that creates the "intermediate state information." Practitioners may focus on this term because the plaintiff alleges it is met by F5's load balancing logic, which a defendant may argue is a different technical process.

  • Intrinsic Evidence for Interpretation:

    • Evidence for a Broader Interpretation: The specification describes the process simply as a "translation from the URI to an internal name for the resource." (’062 Patent, col. 7:12-14). Plaintiff may argue this broad language covers any function that associates an incoming URL with a specific server or data location.
    • Evidence for a Narrower Interpretation: The patent repeatedly characterizes the conventional "rewrite mapping process" it seeks to avoid as "computationally expensive." (’062 Patent, col. 7:20-22). A defendant may argue that the claimed "mapping" must also refer to such an expensive process, not to a simple and fast hash function used for load balancing.

VI. Other Allegations

  • Indirect Infringement: The complaint alleges induced infringement under 35 U.S.C. § 271(b), asserting that F5, with knowledge of the patent from a pre-suit notice letter, intentionally encourages its customers to infringe by providing user manuals, articles, and other documentation describing how to configure and use the BIG-IP Platform in an infringing manner. (Compl. ¶31).
  • Willful Infringement: The willfulness allegation is predicated on F5 having knowledge of the ’062 Patent and its alleged infringement since at least August 16, 2018, the date it allegedly received a notice letter and claim chart from Plaintiff. (Compl. ¶30).

VII. Analyst’s Conclusion: Key Questions for the Case

  1. A threshold procedural question, which may be dispositive, concerns the effect of the disclaimer: given that the patent owner disclaimed all asserted claims after the suit was initiated, does any viable infringement claim remain, or is the entire case moot?

  2. Setting the disclaimer aside, a core issue would be one of definitional scope: Can the term "intermediate state information," which is described in the patent in the context of caching a computationally expensive "rewrite mapping," be construed to cover the functionally different concepts of load balancing rules and session persistence data used in the accused F5 BIG-IP platform?

  3. A central evidentiary question would be one of technical equivalence: Does the accused platform's high-level "dynamic caching" functionality actually perform the specific, ordered steps of Claim 1—deriving a mapping from an external to an internal name, caching that mapping, and then retrieving that specific mapping for a second request—or is there a fundamental mismatch in the underlying technical operation?