
5:25-cv-01370

Artificial Intelligence Imaging Association Inc v. Mvision Ai Inc

Key Event: Complaint

I. Executive Summary and Procedural Information

  • Parties & Counsel: Plaintiff Artificial Intelligence Imaging Association Inc; Defendant Mvision Ai Inc
  • Case Identification: 5:25-cv-01370, W.D. Tex., 10/24/2025
  • Venue Allegations: Plaintiff alleges venue is proper in the Western District of Texas because Defendant maintains a "regular and established place of business" in San Antonio and has committed alleged acts of infringement within the district, including selling its software to local healthcare institutions.
  • Core Dispute: Plaintiff alleges that Defendant’s GBS Contour+ radiotherapy planning software infringes a patent related to methods for generating synthetic image data to train machine learning models.
  • Technical Context: The technology at issue involves creating artificial, computer-generated images to serve as training data for artificial intelligence, a technique used to overcome the scarcity of real-world data in specialized fields like medical imaging.
  • Key Procedural History: The complaint states that Plaintiff sent Defendant a formal demand letter on June 9, 2025, identifying the patent-in-suit and alleging infringement, an event that may be central to allegations of willful infringement.

Case Timeline

Date Event
2019-04-25 ’272 Patent Priority Date
2022-02-22 ’272 Patent Issue Date
2023-01-16 Defendant publishes blog post on generating synthetic CT images
2023-04-01 Defendant publishes study on "Clinical Evaluation" of Contour+ models
2024-01-19 Defendant publishes post on synthetic CT performance
2025-04-23 Defendant publishes another clinical evaluation of Contour+ models
2025-06-09 Plaintiff sends demand letter to Defendant
2025-10-24 Complaint Filed

II. Technology and Patent(s)-in-Suit Analysis

U.S. Patent No. 11,257,272 - "Generating Synthetic Image Data for Machine Learning"

  • Patent Identification: U.S. Patent No. 11,257,272, "Generating Synthetic Image Data for Machine Learning," issued February 22, 2022.

The Invention Explained

  • Problem Addressed: The patent's background section describes the significant challenge of acquiring vast and varied image datasets required to train accurate machine learning models for computer vision tasks. It notes that existing datasets are often scarce, limited in scope, and expensive to create through manual photography (’272 Patent, col. 2:19-39).
  • The Patented Solution: The invention provides an automated method for generating synthetic image data. The process involves programmatically assembling a virtual 3D scene by combining elements from different libraries: background images, 3D object models, and texture materials. A "virtual camera" is then placed within this scene, using settings that mimic real-world camera parameters (e.g., intrinsics, calibration) to capture a 2D image of the scene. This allows for the rapid creation of large, diverse, and precisely controlled training datasets (’272 Patent, Abstract; col. 3:22-41). A schematic sketch of this workflow follows the list below.
  • Technical Importance: This automated approach is presented as a solution to the data scarcity bottleneck in AI development, replacing time-consuming manual data collection with a scalable, synthetic alternative (’272 Patent, col. 2:40-51).
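
To make the claimed workflow concrete, the following is a minimal, self-contained sketch of the steps described above. It is illustrative only: every class, function, and file name is hypothetical and stands in for whatever concrete components an implementation would use; the patent itself names engines such as Blender or Unity rather than this toy code.

    import random
    from dataclasses import dataclass

    # Hypothetical stand-ins for the four claimed input libraries and the
    # rendering step; nothing here is taken from the patent's own code.

    @dataclass
    class SceneSpec:
        background: str      # selected background image
        model: str           # selected 3D object model
        texture: str         # texture material applied to the model
        camera: dict         # virtual camera settings (position, intrinsics, ...)

    def compose_scene(backgrounds, models, textures, camera_files):
        """'Constructing a synthetic image scene': select and arrange a
        background and a textured 3D model, then attach camera settings."""
        return SceneSpec(
            background=random.choice(backgrounds),
            model=random.choice(models),
            texture=random.choice(textures),
            camera=random.choice(camera_files),
        )

    def render(scene):
        """Stand-in for the graphics rendering engine: in a real system this
        is where projection coordinates become pixel coordinates."""
        width, height = scene.camera.get("resolution", (4, 4))
        return [[0] * width for _ in range(height)]   # placeholder pixel grid

    def build_training_set(n, backgrounds, models, textures, camera_files):
        dataset = []
        for _ in range(n):
            scene = compose_scene(backgrounds, models, textures, camera_files)
            dataset.append(render(scene))             # append to training dataset
        return dataset

    # Toy libraries standing in for the four claimed inputs
    dataset = build_training_set(
        n=3,
        backgrounds=["warehouse_floor.png", "office_desk.png"],
        models=["chair.obj", "monitor.obj"],
        textures=["steel.mtl", "plastic.mtl"],
        camera_files=[{"resolution": (8, 8), "focal_length_mm": 35}],
    )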

Key Claims at a Glance

  • The complaint asserts infringement of Claim 1 of the ’272 Patent (Compl. ¶¶8, 15, 22).
  • Independent Claim 1 requires the following essential method steps:
    • Receiving, by a processor, a database of background images, an object database including 3D models, a library of texture materials, and a library of camera setting files;
    • Constructing a synthetic image scene rendered by a graphics rendering engine, which includes selecting and arranging a background and a 3D model with texture;
    • Placing a virtual camera with defined settings (e.g., position, intrinsics, calibration);
    • Rendering projection coordinates as pixel coordinates to create the synthetic image (a worked projection example follows this list); and
    • Appending the synthetic image to a training dataset for machine learning computer vision tasks.
  • The complaint alleges infringement of "one or more" claims, suggesting a reservation of rights to assert other claims, including dependent claims (Compl. ¶10).
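
The camera-placement and rendering steps above track standard computer-graphics practice. As a worked illustration (drawn from the textbook pinhole-camera model rather than from the patent's own disclosure), the following shows how camera intrinsics and extrinsics map a 3D scene point through projection coordinates to pixel coordinates; the matrix values are arbitrary.

    import numpy as np

    # Pinhole-camera projection: a standard model consistent with the claim's
    # reference to intrinsics, calibration, and projection-to-pixel rendering.
    K = np.array([[800.0,   0.0, 320.0],    # intrinsics: focal lengths fx, fy
                  [  0.0, 800.0, 240.0],    # and principal point (cx, cy)
                  [  0.0,   0.0,   1.0]])

    R = np.eye(3)                            # virtual camera orientation
    t = np.array([0.0, 0.0, 5.0])            # virtual camera position (translation)

    X_world = np.array([0.2, -0.1, 0.0])     # a vertex of a 3D model in the scene

    X_cam = R @ X_world + t                  # world -> camera coordinates
    proj = K @ X_cam                         # projection coordinates (homogeneous)
    pixel = proj[:2] / proj[2]               # divide by depth -> pixel coordinates

    print(pixel)                             # [352. 224.]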

III. The Accused Instrumentality

Product Identification

  • Product Identification: Defendant Mvision's GBS Contour+ software (Compl. ¶1).

Functionality and Market Context

  • Functionality and Market Context:
    • The GBS Contour+ software is described as an "AI-powered auto-segmentation" tool used in radiotherapy planning to automatically delineate organs-at-risk and lymph nodes in medical scans (Compl. ¶13).
    • The core accused functionality is the software's alleged use of deep learning models, specifically generative adversarial networks (GANs), to generate synthetic CT (Computed Tomography) images from real MR (Magnetic Resonance) images. This process is allegedly used to "enlarge or augment training datasets" for its own AI models (Compl. ¶14). A generic sketch of such an MR-to-CT generator appears after this list.
    • The complaint alleges this synthetic data generation capability is a key feature used to market the product to healthcare providers (Compl. ¶16).
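
The complaint characterizes this functionality only at the level quoted above. As a rough, generic illustration of what a GAN-based MR-to-CT generator can look like, the sketch below shows a minimal pix2pix-style encoder-decoder in PyTorch; it is an assumption-laden toy, not Mvision's implementation, and every architectural choice (layer sizes, image dimensions) is invented for illustration.

    import torch
    import torch.nn as nn

    class MRtoCTGenerator(nn.Module):
        """Generic encoder-decoder generator for MR-to-CT image translation.
        Purely illustrative; not the accused software's architecture."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 64, 4, stride=2, padding=1),   # 1-channel MR slice in
                nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, 4, stride=2, padding=1),
                nn.BatchNorm2d(128),
                nn.LeakyReLU(0.2),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
                nn.BatchNorm2d(64),
                nn.ReLU(),
                nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
                nn.Tanh(),                                   # synthetic CT slice out
            )

        def forward(self, mr_slice):
            return self.decoder(self.encoder(mr_slice))

    # Inference pass: real MR slices in, pixel-based synthetic CT slices out,
    # which are then appended to ("enlarge or augment") a CT training set.
    generator = MRtoCTGenerator()
    mr_batch = torch.randn(8, 1, 256, 256)            # stand-in for real MR slices
    synthetic_ct = generator(mr_batch).detach()
    real_ct = torch.randn(8, 1, 256, 256)             # stand-in for existing CT data
    augmented_ct_set = torch.cat([real_ct, synthetic_ct])

A full GAN would pair this generator with a discriminator and adversarial training; only the generator is sketched here because the complaint's allegations focus on the generated synthetic output.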

IV. Analysis of Infringement Allegations

The complaint provides a chart summarizing its infringement theory for Claim 1 (Compl. pp. 10-11).

'272 Patent Infringement Allegations

Claim Element (from Independent Claim 1) | Alleged Infringing Functionality | Complaint Citation | Patent Citation
receiving, by a processor, a database of background images, an object database including 3D models, a library of texture materials... | The software allegedly receives real MR images as "backgrounds" and uses "anatomical libraries" with implied 3D models for organ rendering. | ¶¶23, 25 | col. 47:41-44
constructing a synthetic image scene rendered by a graphics rendering engine, including selecting and arranging a background... 3D model with texture | A GAN generator allegedly "constructs scenes using 3D modeling techniques" and applies tissue densities, which are equated with textures. | ¶¶23, 25 | col. 47:45-54
placing a virtual camera with defined settings (position data, intrinsics, calibration, capture) | The software allegedly "applies virtual imaging parameters" that simulate CT scanner settings, which are described as being "akin to camera intrinsics/calibration." | ¶¶23, 25 | col. 47:55-60
rendering projection coordinates as pixel coordinates | The accused GAN process is alleged to explicitly output a "pixel-based synthetic CT." | ¶¶25, 11 | col. 47:61-62
appending to a training dataset for ML computer vision tasks | Defendant's blog is quoted as stating that synthetically generated images are used to "enlarge or augment training datasets required by machine learning models." | ¶¶14, 23, 11 | col. 47:63-67
  • Identified Points of Contention:
    • Scope Questions: A principal issue may be whether the term "graphics rendering engine," as used in the patent, can be construed to read on the "GAN generator" allegedly used in the accused software. The complaint anticipates this dispute by arguing for infringement under the doctrine of equivalents, asserting that a GAN performs the same function (rendering) in substantially the same way to achieve the same result (a synthetic image) (Compl. ¶26).
    • Technical Questions: The complaint alleges that the accused software's process of generating synthetic CTs from MR images maps to the claimed steps of receiving four distinct libraries (backgrounds, 3D models, textures, camera settings) and composing a scene. A central technical question is whether the accused GAN-based process actually operates this way, or whether it uses a fundamentally different, integrated learning approach that does not involve the discrete assembly of pre-defined scene components as claimed.

V. Key Claim Terms for Construction

  • The Term: "graphics rendering engine"

  • Context and Importance: This term is central because the complaint’s theory hinges on equating a "GAN generator" with the claimed "graphics rendering engine" (Compl. ¶26). Practitioners may focus on this term because the technical definition will determine whether a machine learning model that generates images by learning a data distribution is the same as a traditional graphics engine that renders images by projecting geometric data.

  • Intrinsic Evidence for Interpretation:

    • Evidence for a Broader Interpretation: The specification provides examples such as "Blender or Unity" but also allows for "a custom scene creation library, framework, program, application, or other implementation," which could support an argument that the term is not limited to conventional 3D renderers (’272 Patent, col. 15:31-34).
    • Evidence for a Narrower Interpretation: The patent’s detailed description of the rendering process focuses on traditional computer graphics concepts like rendering "projection coordinates" into "pixel coordinates," which may support a narrower definition limited to engines that perform geometric projection rather than probabilistic generation (’272 Patent, col. 18:55-67, col. 19:20-27).
  • The Term: "receiving... a database of background images, an object database including 3D models, a library of texture materials, and a library of camera setting files"

  • Context and Importance: Claim 1 recites receiving four distinct data sources as inputs. The infringement allegation relies on mapping features of the accused software to each of these inputs, with some being "implied" (Compl. ¶18b). The case may turn on whether the accused GAN-based system actually uses four separate, discrete inputs as claimed.

  • Intrinsic Evidence for Interpretation:

    • Evidence for a Broader Interpretation: The patent describes the function of these databases in general terms (e.g., the object database "stores objects incorporated into image scenes"), which may support a view that any data source serving that function meets the limitation (’272 Patent, col. 28:3-4).
    • Evidence for a Narrower Interpretation: The claim language ("a database... an object database... a library... and a library") and patent diagrams (e.g., Fig. 8, elements 801-804) depict four distinct inputs. This structure could support an argument that a system must receive these specific, separate data types, rather than, for instance, a single dataset of paired MR/CT images from which a GAN learns a transformation. The sketch below illustrates this structural contrast.
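
To make this structural contrast concrete, the sketch below juxtaposes the four discrete inputs recited in the claim with the kind of single paired MR/CT dataset the narrower reading contemplates; the file names and layout are hypothetical and do not reflect the accused software's actual data.

    # (a) Four discrete input sources, as recited in claim 1
    claimed_inputs = {
        "background_images": ["bg_001.png", "bg_002.png"],
        "object_models_3d":  ["model_a.obj", "model_b.obj"],
        "texture_materials": ["soft_tissue.mtl", "bone.mtl"],
        "camera_settings":   ["scanner_profile_1.json"],
    }

    # (b) A single paired dataset from which a GAN could learn an MR-to-CT mapping
    paired_training_set = [
        {"mr": "patient_01_mr.nii", "ct": "patient_01_ct.nii"},
        {"mr": "patient_02_mr.nii", "ct": "patient_02_ct.nii"},
    ]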

VI. Other Allegations

  • Indirect Infringement: The complaint alleges inducement based on Defendant's marketing materials, user guides, and technical support, which allegedly encourage customers to use the accused synthetic data features (Compl. ¶¶16, 29). It alleges contributory infringement by providing a non-staple "synthetic data generation module" especially designed for the infringing purpose (Compl. ¶¶32-33).
  • Willful Infringement: Willfulness is alleged based on both pre-suit and post-suit knowledge. The complaint alleges pre-suit knowledge based on Defendant's monitoring of the industry (Compl. ¶44) and actual knowledge as of the June 9, 2025 demand letter. It further alleges Defendant continued its infringing conduct after receiving notice (Compl. ¶45).

VII. Analyst’s Conclusion: Key Questions for the Case

  • A core issue will be one of technical equivalence: Can a Generative Adversarial Network (GAN) that learns to translate one image type to another (e.g., MR to CT) be considered equivalent to the claimed "graphics rendering engine," which is described as composing a scene from discrete geometric and textural inputs? The resolution will depend on both claim construction and factual evidence regarding the operational principles of the accused technology.
  • A key evidentiary question will be one of process architecture: Does the accused GBS Contour+ software actually perform the claimed method of receiving four separate input libraries (backgrounds, 3D models, textures, camera settings) and assembling them into a synthetic scene, or does its underlying AI model employ a fundamentally different, end-to-end learning process that does not map onto these discrete steps? Discovery into the software's design will be critical to resolving this factual dispute.