
4:24-cv-03200

University of British Columbia v. Caption Health, Inc.

I. Executive Summary and Procedural Information

  • Parties & Counsel:
  • Case Identification: 5:24-cv-03200, N.D. Cal., 12/20/2024
  • Venue Allegations: Venue is alleged to be proper as Defendants maintain a regular and established place of business within the Northern District of California and have committed acts of infringement in the district.
  • Core Dispute: Plaintiff alleges that Defendants’ AI-driven cardiac ultrasound guidance software infringes patents related to using neural networks for assessing the quality and view category of echocardiographic images.
  • Technical Context: The technology addresses the significant challenge of acquiring high-quality cardiac ultrasound images, a process that traditionally requires years of specialized operator training, by using artificial intelligence to provide real-time feedback and guidance.
  • Key Procedural History: The complaint alleges that Plaintiff's inventors disclosed the patented innovations to the founder of Caption Health's predecessor in May 2017. Plaintiff also alleges it provided Defendant Caption Health with notice of infringement of the '591 patent beginning on May 5, 2022. Defendant GE Healthcare acquired Defendant Caption Health in February 2023. During prosecution of the '029 Patent, an office action rejection based on the publication for the '591 Patent was overcome by establishing common ownership.

Case Timeline

| Date | Event |
|------------|-------|
| 2016-04-21 | U.S. Patent No. 11,129,591 Priority Date |
| 2017-05-01 | Plaintiff's inventors meet with founder of Caption Health's predecessor |
| 2018-08-31 | U.S. Patent No. 10,751,029 Priority Date |
| 2020-08-25 | U.S. Patent No. 10,751,029 Issued |
| 2021-09-28 | U.S. Patent No. 11,129,591 Issued |
| 2022-05-05 | Plaintiff sends first notice letter re: '591 Patent |
| 2023-02-01 | GE Healthcare acquires Caption Health |
| 2024-12-20 | Complaint Filing Date |

II. Technology and Patent(s)-in-Suit Analysis

U.S. Patent No. 11,129,591 - Echocardiographic Image Analysis (Issued Sep. 28, 2021)

The Invention Explained

  • Problem Addressed: The patent background describes the difficulty in capturing high-quality echocardiographic images, even for experienced operators, which can delay diagnosis and treatment (Compl. ¶39; ’591 Patent, col. 1:29-34). It notes that different anatomical views have distinct criteria for what constitutes a "quality" image, making a one-size-fits-all analysis approach ineffective (Compl. ¶40; ’591 Patent, col. 5:62-67).
  • The Patented Solution: The invention is a system that uses neural networks to assess the quality of echocardiographic images. Critically, the system employs an architecture that is specialized for different anatomical "view categories." It proposes using a neural network with both "common shared layers" for general feature extraction and separate "view-specific layers" for quality assessment tailored to each view, which is described as more efficient than using fully separate networks (Compl. ¶¶ 40-41; ’591 Patent, col. 12:10-35, FIG. 8).
  • Technical Importance: This approach provided a method for automating image quality assessment in a computationally efficient manner, reducing the reliance on highly skilled human operators and potentially decreasing the need for repeat scans (Compl. ¶¶ 19, 23; ’591 Patent, col. 5:30-45).
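To make the claimed architecture concrete, the following is a purely illustrative sketch of the "common shared layers" plus "separate view-specific layers" structure described at col. 12:10-35 and Figure 8 of the '591 patent. All function names, parameter values, and view labels are hypothetical placeholders; this is not the patented implementation or the accused software.

```python
# Hypothetical sketch of the '591 patent's Figure 8 concept: common shared
# layers feed separate view-specific quality-assessment heads, so each view
# category has its own "set of neural network parameters."

def shared_layers(image):
    """Stand-in for the common feature-extraction layers: a trivial
    summary statistic in place of real convolutional features."""
    return sum(image) / len(image)

# One distinct parameter set per view category (the claimed "respective set
# of assessment parameters"); the values here are arbitrary placeholders.
VIEW_PARAMS = {
    "PLAX": {"weight": 1.2, "bias": -0.1},
    "AP4":  {"weight": 0.8, "bias": 0.05},
}

def quality_assessment(image, view_category):
    """Route the shared features through the head for the given view."""
    features = shared_layers(image)
    p = VIEW_PARAMS[view_category]
    return p["weight"] * features + p["bias"]

# The same image receives different quality assessments under different
# view categories, because each view applies its own parameter set.
img = [0.2, 0.4, 0.6]
plax_score = quality_assessment(img, "PLAX")
ap4_score = quality_assessment(img, "AP4")
```

The design point the patent emphasizes is visible even in this toy version: the shared front end is computed once, while only the small per-view head differs, which is more efficient than maintaining a fully separate network for each of the view categories.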

Key Claims at a Glance

  • Independent Claim 1 (System):
    • A computer-implemented system for facilitating echocardiographic image analysis.
    • Receives signals for a first echocardiographic image and associates it with a first view category.
    • Determines a first quality assessment value based on the image and its view category.
    • The system repeats this process for a second image associated with a second, different view category.
    • Each view category is associated with a "respective set of assessment parameters," which are a "set of neural network parameters" defining a neural network with input and output layers for quality assessment.
  • Independent Claim 15 (Method):
    • A computer-implemented method mirroring the steps of the system in Claim 1.
    • Receiving signals for images, associating them with different view categories, and determining view-category-specific quality assessment values using sets of assessment parameters.
    • Producing signals representing the quality assessment values.

U.S. Patent No. 10,751,029 - Ultrasonic Image Analysis (Issued Aug. 25, 2020)

The Invention Explained

  • Problem Addressed: The patent identifies that providing only a quality score may be insufficient for inexperienced operators, who also need to know what anatomical view or "image property" they are capturing ('029 Patent, col. 1:22-31). Performing both quality assessment and property assessment (e.g., view categorization) separately can be computationally demanding, hindering implementation on portable devices (Compl. ¶43).
  • The Patented Solution: The invention provides a system that simultaneously determines both a quality assessment value and an image property (such as view category) from a set of ultrasound images. It achieves this efficiency by first "deriving one or more extracted feature representations" from the images, and then using these shared representations as the basis for both determinations ('029 Patent, Abstract; Compl. ¶¶ 43, 49). This combined approach is described as yielding faster processing time compared to separate assessments ('029 Patent, col. 5:46-51).
  • Technical Importance: The claimed method enables efficient, combined image analysis on devices with limited computing power, such as mobile or portable ultrasound systems, by using a shared neural network architecture to reduce processing time and resource usage (Compl. ¶50; ’029 Patent, col. 5:40-51).

Key Claims at a Glance

  • Independent Claim 4 (Method):
    • A computer-implemented method of facilitating ultrasonic image analysis.
    • Receiving signals for a set of ultrasound images.
    • Deriving one or more "extracted feature representations" from the images.
    • Determining, based on the derived representations, a quality assessment value.
    • Determining, based on the same derived representations, an image property associated with the images.
    • Producing signals representing both the quality value and the image property.
  • Independent Claim 23 (System):
    • A system with at least one processor configured to perform the steps of the method in Claim 4.
    • The processor is configured to input an ultrasound image into a "commonly defined first feature extracting neural subnetwork" to generate a "first feature representation."

III. The Accused Instrumentality

Product Identification

  • The accused products are GE Healthcare's "Venue Family" and "Vscan Air SL" point-of-care ultrasound systems that incorporate software named "Caption Guidance" or "Caption AI" (Compl. ¶¶ 52, 67).

Functionality and Market Context

  • The "Caption Guidance" software is an AI-driven tool for echocardiography that provides real-time, on-screen visual guidance to prompt users on probe movements (Compl. ¶¶ 18, 53, 87). The complaint references a screenshot from the accused product's user interface, which displays a list of cardiac views, the ultrasound image, and a "Quality Meter" that rises as the image quality improves (Compl. p. 18). The system is alleged to automatically capture images ("auto-capture") once a preset quality threshold is met (Compl. ¶¶ 60, 72). The complaint also references a workflow diagram from a patent application associated with the accused technology, showing a process of selecting a view, receiving guidance, and acquiring an image (Compl. p. 19). The technology is marketed as enabling even novice users to capture diagnostic-quality cardiac images (Compl. ¶51).

IV. Analysis of Infringement Allegations

'591 Patent Infringement Allegations

| Claim Element (from Independent Claim 1) | Alleged Infringing Functionality | Complaint Citation | Patent Citation |
|---|---|---|---|
| a computer-implemented system for facilitating echocardiographic image analysis...comprising at least one processor... | The Accused Products are computer-implemented systems, such as the Venue Family ultrasound products, that incorporate processors and are used for echocardiographic image analysis (Compl. ¶¶ 23, 68). | ¶68 | col. 1:15-18 |
| associating the first at least one echocardiographic image with a first view category of a plurality of predetermined echocardiographic image view categories... | The Accused Products guide users to acquire images from 10 standard diagnostic views of the heart (e.g., PLAX, Ap4) and associate the captured images with these view categories (Compl. ¶70). | ¶70 | col. 5:46-49 |
| determining, based on the first at least one echocardiographic image and the first view category, a first quality assessment value... | The Accused Products use a "Quality Meter" that provides a view-category-specific quality assessment for the captured image, which is determined by a Deep Convolutional Neural Network trained on millions of images labeled for quality within specific views (Compl. ¶¶ 71, 76). | ¶¶ 71, 76 | col. 6:13-20 |
| wherein each of the plurality of...image view categories is associated with a respective set of assessment parameters, each of the sets of assessment parameters being a set of neural network parameters... | The complaint alleges that the Accused Products use neural networks with parameters trained to provide quality assessment values specific to certain view categories, and that these constitute the claimed "sets of assessment parameters" (Compl. ¶¶ 74, 76). | ¶74 | col. 12:10-21 |
| receiving signals representing a second at least one echocardiographic image...associating the second...image with a second view category...said second view category being different from the first view category... | The Accused Products are configured to guide a user through a sequence of different views (e.g., from an apical four-chamber view to a parasternal long-axis view), performing the quality assessment steps for each distinct view in the sequence (Compl. ¶¶ 61, 73). | ¶73 | col. 17:60-63 |
  • Identified Points of Contention:
    • Scope Questions: A central question may be whether the accused product's AI architecture constitutes distinct "sets of assessment parameters" for each view category as required by the claims. The analysis may focus on whether the accused product uses functionally or structurally separate neural network parameters for each view, or if it uses a single, monolithic neural network where parameters are not neatly divisible by view category.
    • Technical Questions: What evidence does the complaint provide that the accused system's single "Deep Convolutional Neural Network" (Compl. ¶74) operates as a system with multiple "sets of neural network parameters" that are individually associated with each view category? The dispute may turn on the specific software architecture of the Caption AI system.

'029 Patent Infringement Allegations

| Claim Element (from Independent Claim 4) | Alleged Infringing Functionality | Complaint Citation | Patent Citation |
|---|---|---|---|
| receiving signals representing a set of ultrasound images of the subject... | The Accused Products acquire a series of echocardiographic images when an operator scans a patient using an ultrasound transducer (Compl. ¶88). | ¶88 | col. 5:2-4 |
| deriving one or more extracted feature representations from the set of ultrasound images... | The Accused Products allegedly use neural networks (DCNNs) to extract feature representations from the ultrasound images. These representations are described as patterns correlated with image quality and view category (Compl. ¶¶ 89-90). | ¶¶ 89, 90 | col. 5:34-36 |
| determining, based on the derived one or more extracted feature representations, a quality assessment value representing a quality assessment of the set... | The Accused Products use the extracted features to determine a quality assessment value, which is displayed to the user via the "Quality Meter" (Compl. ¶91). | ¶91 | col. 5:36-38 |
| determining, based on the derived one or more extracted feature representations, an image property associated with the set of ultrasound images... | The Accused Products use the same extracted feature representations to determine the view category of the ultrasound images. The system is trained to guide operators to 10 specific diagnostic views (the "image property") (Compl. ¶¶ 92, 94). | ¶¶ 92, 94 | col. 5:38-39 |
| producing signals representing the quality assessment value and the image property... | The Accused Products automatically capture an image and associate it with both the determined quality assessment value and the determined view category, thereby producing signals representing both outputs (Compl. ¶93). | ¶93 | col. 6:2-6 |
  • Identified Points of Contention:
    • Scope Questions: The infringement analysis will likely focus on the term "deriving one or more extracted feature representations." The question will be whether the accused AI system first generates a discrete, intermediate data representation that is then separately used for both quality and property assessment, as the claim language suggests, or if the system architecture generates these outputs in a more integrated, single-step fashion.
    • Technical Questions: Does the accused system's alleged use of "interconnected DL algorithms making 3 simultaneous estimates" (Compl. ¶75) for quality, probe position, and corrective movements map onto the sequential claim structure of first "deriving" a feature representation and then "determining" quality and property from it?

V. Key Claim Terms for Construction

  • The Term: "a set of neural network parameters" ('591 Patent, Claim 1)

    • Context and Importance: This term is central to the core inventive concept of the '591 patent. The claim requires that each view category is associated with a respective set of these parameters. The case may turn on whether the accused product's AI model can be characterized as having distinct "sets" of parameters for each view, or if it is a single, integrated model.
    • Intrinsic Evidence for Interpretation:
      • Evidence for a Broader Interpretation: The specification states that the goal is for "neural network parameters that are eventually used to assess image quality can evaluate quality based on criteria specific to certain view" ('591 Patent, col. 11:25-28). This language could support a more functional definition, where any parameters that result in view-specific assessment qualify, regardless of their architectural separation.
      • Evidence for a Narrower Interpretation: Figure 8 and the accompanying text explicitly describe an architecture with "common shared layers" and "separate view-specific layers" ('591 Patent, col. 12:10-21). A defendant may argue this disclosure limits the "set of neural network parameters" to the parameters within these structurally separate, view-specific portions of the network.
  • The Term: "deriving one or more extracted feature representations" ('029 Patent, Claim 4)

    • Context and Importance: This term establishes the sequential nature of the claimed method: first derive representations, then determine quality and property from those representations. Infringement will depend on whether the accused system's process aligns with this two-step concept. Practitioners may focus on this term because it distinguishes the invention from a single-step, end-to-end model that directly outputs quality and property.
    • Intrinsic Evidence for Interpretation:
      • Evidence for a Broader Interpretation: The abstract broadly refers to "deriving one or more extracted feature representations from the set of ultrasound images" without specifying their structure ('029 Patent, Abstract). This could be argued to cover any internal data state generated by a neural network after processing an input image.
      • Evidence for a Narrower Interpretation: The specification teaches that the architecture may "allow the analyzer to be implemented by a device that does not require an extremely high computing power" because "extracted feature representations for both assessments could be extracted from a commonly defined neural subnetwork" (Compl. ¶43; a similar statement appears at col. 2:5-8 of a related patent). This suggests the "feature representations" are the specific output of a common, efficiency-enabling subnetwork, potentially narrowing the term's scope.

VI. Other Allegations

  • Indirect Infringement: The complaint alleges both induced and contributory infringement. Inducement is based on Defendants selling the Accused Products and providing instructions (e.g., product demos, user guidance) that encourage customers to use the products in an infringing manner (Compl. ¶¶ 78, 97).
  • Willful Infringement: Willfulness is alleged for both patents. For the '591 Patent, the complaint alleges pre-suit knowledge based on a May 2017 meeting between Plaintiff's inventors and the founder of Caption Health's predecessor, as well as a notice letter sent on May 5, 2022 (Compl. ¶¶ 25-26, 82). For the '029 Patent, the complaint alleges Defendants had actual knowledge before the filing of the amended complaint (Compl. ¶101).

VII. Analyst’s Conclusion: Key Questions for the Case

  • A core issue will be one of architectural alignment: Does the accused "Caption AI" system, described as a Deep Convolutional Neural Network, operate using distinct "sets of neural network parameters" for each anatomical view as required by the '591 patent, or is it a monolithic architecture where such "sets" are not structurally or functionally separable?
  • A key evidentiary question will be one of operational sequence: Does the accused system's method of providing "simultaneous estimates" for image quality and guidance align with the '029 patent's claimed process of first "deriving...extracted feature representations" from an image and subsequently "determining" both a quality value and an image property based on those shared representations?
  • A third question will relate to knowledge and intent: Given the alleged pre-suit meeting between the inventors and the defendant's founder, the court will examine what knowledge Defendants possessed regarding the patented technology and when they acquired it, which will be central to the claim of willful infringement.