Artificial Intelligence Industry Association Inc v. Parallel Domain Inc
Case No. 3:25-cv-07658 (U.S. District Court, N.D. Cal.)
Key Event: Amended Complaint
I. Executive Summary and Procedural Information
- Parties & Counsel:
- Plaintiff: Artificial Intelligence Industry Association, Inc. (Florida)
- Defendant: Parallel Domain, Inc. (Delaware)
- Plaintiff’s Counsel: Fry Law Corporation
- Case Identification: 3:25-cv-07658, N.D. Cal., 12/04/2025
- Venue Allegations: Plaintiff alleges venue is proper in the Northern District of California because Defendant has its principal place of business in Palo Alto and has committed acts of infringement in the district by selling accused software products to California-based customers.
- Core Dispute: Plaintiff alleges that Defendant’s synthetic data generation platform for machine learning infringes three patents related to generating synthetic images, embedding calibration data into stereoscopic video, and processing stereoscopic 3D video.
- Technical Context: The technology at issue involves the creation of artificial, computer-generated data (synthetic data) to train and test artificial intelligence models, particularly for computer vision tasks in fields like autonomous driving.
- Key Procedural History: The complaint states that prior to filing suit, Plaintiff sent Defendant a formal demand letter identifying the Asserted Patents, and that Defendant rejected Plaintiff’s offer to license the patents and continued its allegedly infringing activities.
Case Timeline
| Date | Event |
|---|---|
| 2015-04-29 | Earliest Priority Date for ’315 Patent and ’693 Patent |
| 2018-03-27 | Issue Date for U.S. Patent No. 9,930,315 |
| 2018-09-11 | Issue Date for U.S. Patent No. 10,075,693 |
| 2019-04-25 | Priority Date for ’272 Patent |
| 2022-02-22 | Issue Date for U.S. Patent No. 11,257,272 |
| 2023-06-01 | Alleged announcement of Defendant's "Data Lab" API |
| 2025-12-04 | Complaint Filing Date |
II. Technology and Patent(s)-in-Suit Analysis
U.S. Patent No. 11,257,272 - “Systems and Methods for Generating Labeled Image Data for Machine Learning Using a Multi-Stage Image Processing Pipeline”
- Issued: February 22, 2022 (the “’272 Patent”)
The Invention Explained
- Problem Addressed: The patent addresses the scarcity of suitable image data for training computer vision (CV) networks. Manually capturing vast, diverse, and specialized datasets (e.g., stereo images with depth data) is expensive, time-consuming, and often results in low-quality or limited data (’272 Patent, col. 2:19-38).
- The Patented Solution: The invention provides an automated method for generating synthetic training data. The system constructs a virtual 3D scene by selecting and arranging elements from databases of background images, 3D models, and textures. It then places a virtual camera within this scene, defines its view using specific camera settings files, and renders synthetic images from that perspective, including additional data channels like depth or optical flow (’272 Patent, Abstract; col. 3:22-43; Fig. 11). A minimal sketch of this flow appears after this list.
- Technical Importance: This automated approach enables the rapid creation of large, highly varied, and perfectly labeled datasets, which is critical for training robust machine learning models and avoiding "overfitting" problems common with limited datasets (’272 Patent, col. 5:40-49).
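To make the claimed flow concrete, the following is a minimal, illustrative Python sketch of the pipeline the patent describes: receiving asset databases, constructing a scene of a given class, placing virtual cameras, and rendering multiple data channels for two scenes of the same class. Every name here (`CameraSettings`, `construct_scene`, `render`) is a hypothetical stand-in; this does not depict any party's actual code.

```python
# Illustrative sketch of the '272 Patent's Claim 17 steps.
# All names are hypothetical stand-ins, not any party's code.
import random
from dataclasses import dataclass

@dataclass
class CameraSettings:
    position: tuple      # virtual camera position within the scene
    orientation: tuple   # yaw, pitch, roll
    fov_deg: float       # field of view

def construct_scene(scene_class, backgrounds, models, textures):
    """Build a synthetic scene of the specified class by selecting and
    arranging elements from the received databases."""
    return {
        "class": scene_class,
        "background": random.choice(backgrounds[scene_class]),
        "objects": [(random.choice(models), random.choice(textures))
                    for _ in range(3)],
    }

def render(scene, camera):
    """Render one camera view; a real graphics engine would emit extra
    channels (depth, optical flow, segmentation) alongside the RGB image."""
    return {"rgb": f"render({scene['class']} @ {camera.position})",
            "depth": "depth-map", "labels": "per-pixel annotations"}

backgrounds = {"highway": ["bg_day.png", "bg_dusk.png"]}
models, textures = ["sedan.obj", "truck.obj"], ["asphalt", "metal"]
cameras = [CameraSettings((0.0, 1.5, 0.0), (0.0, 0.0, 0.0), 90.0),
           CameraSettings((0.0, 1.5, 5.0), (10.0, 0.0, 0.0), 90.0)]

# A first scene, then a second scene of the SAME class captured from the
# SAME camera positions -- the controlled-variation step the claim recites.
dataset = []
for _ in range(2):
    scene = construct_scene("highway", backgrounds, models, textures)
    dataset.extend(render(scene, cam) for cam in cameras)
```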
Key Claims at a Glance
- The complaint asserts at least Claim 17 of the ’272 Patent (Compl. ¶41).
- Independent Claim 17 is a method claim comprising the essential elements of:
- Receiving databases of background images, 3D models, texture materials, scene metadata, and camera setting files.
- Constructing a first synthetic image scene with a specified image scene class using a computer graphics engine.
- Placing a virtual camera to capture a series of camera views.
- Rendering projection coordinates as synthetic images for each camera view.
- Constructing a second synthetic image scene of the same scene class and capturing views from the same camera positions.
- The complaint does not explicitly reserve the right to assert dependent claims for this patent.
U.S. Patent No. 10,075,693 - “Embedding Calibration Metadata Into Stereoscopic Video Files”
- Issued: September 11, 2018 (the “’693 Patent”)
The Invention Explained
- Problem Addressed: When playing back stereoscopic 3D video, the playback device requires camera and sensor parameters (e.g., lens distortion, sensor position) to render the images correctly. This becomes complex when video from different cameras is combined into a single file, as each portion may require different calibration parameters (’693 Patent, col. 1:41-67).
- The Patented Solution: The invention describes a system for embedding calibration and other sensor metadata directly into the stereoscopic video file in real time as it is captured. The data can be embedded using a metadata channel in the video file format, such as subtitle or closed captioning fields, allowing the metadata to be synchronized with the video on a frame-by-frame basis (’693 Patent, Abstract; col. 2:4-13). A sketch of such an embedding appears after this list.
- Technical Importance: This method ensures that calibration data travels with the video content, simplifying playback, enabling accurate rendering in VR environments, and allowing for the combination of footage from multiple sources without losing critical sensor information (’693 Patent, col. 2:7-13).
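The following is a minimal sketch of one way such an embedding could be realized, serializing static calibration once per sequence and time-varying calibration once per frame into SRT-style subtitle cues. The JSON payload format, the frame rate, and all function names are illustrative assumptions, not the patent's required implementation.

```python
# Minimal sketch, assuming an SRT-style subtitle track carrying JSON
# calibration payloads; all names and formats are assumptions.
import json

FPS = 30  # assumed frame rate for cue timing

def srt_timestamp(frame):
    """Convert a frame index into an SRT 'HH:MM:SS,mmm' timestamp."""
    ms = int(frame * 1000 / FPS)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def calibration_track(static_calib, per_frame_calib):
    """Emit SRT cues: the static calibration once, spanning the whole
    sequence, then the time-varying calibration once per frame."""
    n = len(per_frame_calib)
    cues = [f"1\n{srt_timestamp(0)} --> {srt_timestamp(n)}\n"
            f"{json.dumps({'static': static_calib})}\n"]
    for f, calib in enumerate(per_frame_calib):
        cues.append(f"{f + 2}\n{srt_timestamp(f)} --> {srt_timestamp(f + 1)}\n"
                    f"{json.dumps({'frame': f, 'calib': calib})}\n")
    return "\n".join(cues)

static = {"k1": -0.12, "cx": 960, "cy": 540, "model": "fisheye"}
dynamic = [{"yaw": 0.1 * f, "pitch": 0.0, "roll": 0.0} for f in range(3)]
print(calibration_track(static, dynamic))
```

Carrying the metadata in subtitle cues lets the container's own timing machinery keep calibration frame-synchronized with the video, which is the synchronization property the specification emphasizes.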
Key Claims at a Glance
- The complaint asserts at least Claim 1 of the ’693 Patent (Compl. ¶¶ 53, 128).
- Independent Claim 1 is a system claim for recording stereoscopic 3D video, comprising the essential elements of:
- A computer store containing a stereoscopic video feed and contemporaneous metadata feeds.
- A computer processor programmed to obtain the feeds.
- The processor is further programmed to embed the metadata feeds into the stereoscopic video feed in real time.
- The embedding utilizes subtitle or closed captioning metadata fields in the video file format.
- The complaint does not explicitly reserve the right to assert dependent claims for this patent.
U.S. Patent No. 9,930,315 - “Stereoscopic 3D Camera for Virtual Reality Experience”
- Issued: March 27, 2018 (the “’315 Patent”)
- Technology Synopsis: The patent claims methods for recording and processing stereoscopic 3D video, including embedding both static and time-varying calibration data (e.g., lens distortion, motion data from a gyroscope) into the video sequence in real time. The invention also covers methods for stabilizing the captured video by applying transformations to remove undesired motion, making it suitable for playback in a virtual reality environment (’315 Patent, col. 7:50-8:13). A sketch of the stabilization concept appears after this list.
- Asserted Claims: At least Claim 1 (Compl. ¶¶ 58, 116).
- Accused Features: Defendant’s simulation systems are accused of infringing by using stereoscopic cameras or 3D sensors, embedding sensor metadata such as calibration in real time during data generation, and applying motion filtering and stabilization in their 3D-to-2D mapping processes (Compl. ¶59).
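As a rough illustration of the stabilization concept the synopsis describes, the sketch below low-pass filters a gyroscope roll signal to separate intended from undesired motion, then applies the compensating rotation to per-frame pixel coordinates. The smoothing scheme and all names are simplifying assumptions, not either party's method.

```python
# Rough sketch of gyro-driven stabilization, assuming a simple low-pass
# split between intended and undesired motion; illustration only.
import numpy as np

def rotation_z(theta_rad):
    """Homogeneous 2D rotation about the optical axis (roll)."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def stabilize_sequence(frames_pts, gyro_roll, smooth=0.9):
    """Low-pass each frame's gyro roll to estimate intended motion, treat
    the residual as undesired, and rotate the frame to cancel it."""
    drift, out = 0.0, []
    for pts, roll in zip(frames_pts, gyro_roll):
        drift = smooth * drift + (1 - smooth) * roll
        undesired = roll - drift
        out.append(pts @ rotation_z(-undesired).T)
    return out

pts = np.array([[100.0, 50.0, 1.0], [200.0, 80.0, 1.0]])  # pixel coords
stabilized = stabilize_sequence([pts, pts, pts], [0.02, -0.01, 0.03])
```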
III. The Accused Instrumentality
Product Identification
- The accused instrumentalities are Defendant Parallel Domain's software products and services for synthetic data generation, including its simulation APIs, SDKs (specifically the PD-SDK), web tools, PD Replica, Reactor, and various public datasets (Compl. ¶¶ 3, 46, 60).
Functionality and Market Context
- The platform programmatically generates high-fidelity synthetic sensor data—including camera, lidar, and radar data with full annotations—for training and testing machine learning, computer vision, and perception systems (Compl. ¶¶ 3, 46). It allows users to create "digital twins" of real-world environments and procedurally generate vast datasets with precise control over sensor configurations (e.g., camera intrinsics and extrinsics), environmental conditions, and object behaviors (Compl. ¶¶ 62, 65, 66); a hypothetical configuration sketch follows this list.
- The complaint includes a screenshot from a YouTube video titled "Why You Need Synthetic Data," which describes the creation of synthetic images for machine learning training as the foundation of Defendant's business model (Compl. ¶5).
- The complaint alleges that these products are marketed to major entities in the autonomous systems space, including Google, Continental, Woven Planet, and Toyota Research Institute, directly competing with Plaintiff's offerings in the AI imaging market (Compl. ¶¶ 17, 24).
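For orientation, the sketch below shows the kind of per-sensor intrinsic and extrinsic control the complaint attributes to the platform (Compl. ¶¶ 65, 66). The class and field names are hypothetical assumptions chosen for illustration; they are not the actual PD-SDK API.

```python
# Hypothetical illustration of per-sensor intrinsic/extrinsic control of the
# kind the complaint describes; NOT the actual PD-SDK API.
from dataclasses import dataclass

@dataclass
class CameraIntrinsics:
    width: int
    height: int
    fx: float            # focal length x (px)
    fy: float            # focal length y (px)
    cx: float            # principal point x (px)
    cy: float            # principal point y (px)
    k1: float = 0.0      # radial distortion coefficients
    k2: float = 0.0

@dataclass
class SensorExtrinsics:
    x: float             # position in the rig frame (m)
    y: float
    z: float
    yaw: float           # orientation (deg)
    pitch: float
    roll: float

rig = [
    ("front_cam",
     CameraIntrinsics(1920, 1080, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0),
     SensorExtrinsics(1.6, 0.0, 1.4, yaw=0.0, pitch=0.0, roll=0.0)),
    ("rear_cam",
     CameraIntrinsics(1920, 1080, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0),
     SensorExtrinsics(-1.0, 0.0, 1.4, yaw=180.0, pitch=0.0, roll=0.0)),
]
```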
IV. Analysis of Infringement Allegations
’272 Patent Infringement Allegations
| Claim Element (from Independent Claim 17) | Alleged Infringing Functionality | Complaint Citation | Patent Citation |
|---|---|---|---|
| receiving, by a processor, a database of background images, an object database including 3D models, a library of texture materials, and a library of camera setting files | Defendant's platform accesses a database of background environments, a 3D model database with over 3,000 asset types, a texture library, and a camera settings library (CameraIntrinsic class). | ¶¶ 41, 62, 65 | col. 51:22-31 |
| constructing a first synthetic image scene with a specified image scene class using a computer graphics engine | The platform uses a GPU-accelerated graphics rendering engine to construct photorealistic synthetic scenes with pixel-level detail. | ¶¶ 41, 62 | col. 51:32-41 |
| placing a virtual camera to capture a series of camera views | Defendant's system places virtual cameras in scenes using its SensorExtrinsic class, which provides control over position, orientation, and transformation matrices. | ¶¶ 41, 66 | col. 51:42-45 |
| rendering projection coordinates as synthetic images for each camera view | The platform renders 3D scenes to 2D image projections using camera matrices, outputting RGB images and other render channels like depth and segmentation. | ¶¶ 41, 67 | col. 51:50-54 |
| constructing a second synthetic image scene with the same scene class and capturing views with the same camera positions | Defendant's SDK and Data Lab API allow for systematic variation of scenarios while maintaining consistent scene classes to avoid overfitting and ensure balanced datasets. | ¶¶ 41, 68 | col. 51:55-63 |
- Identified Points of Contention:
- Scope Questions: A potential point of dispute may be whether the term "receiving... a database," as used in the patent, reads on accessing an integrated, procedurally generated asset library as alleged. The defense could argue the patent contemplates distinct, pre-existing databases, whereas the accused platform generates assets and variations dynamically.
- Technical Questions: The analysis may question whether Defendant's "procedural generation pipeline" (Compl. ¶62) performs the specific claim step of "constructing a... scene" by "selecting" and "arranging" elements, or if it employs a fundamentally different generative process.
’693 Patent Infringement Allegations
Note: The complaint asserts at least Claim 1 of the ’693 Patent, a system claim (Compl. ¶53). The claim chart in Exhibit D, however, maps the accused product to a method claim titled "A method for mapping stereoscopic data from a three-dimensional space to a two-dimensional space and filtering motion data" (Compl., Ex. D), language that corresponds more closely to method claims elsewhere in the patent family. The following chart summarizes the infringement theory as presented in Exhibit D.
| Claim Element (Mapping a method claim per Ex. D) | Alleged Infringing Functionality | Complaint Citation | Patent Citation |
|---|---|---|---|
| obtaining a video stream from a stereoscopic camera that captures stereoscopic video streams | The platform generates synthetic stereoscopic video streams from simulated stereoscopic cameras with multiple viewpoints. | ¶78 | col. 7:42-50 |
| parameters representing a field of view of the stereoscopic camera | The CameraIntrinsic class provides configurable Field of View (FOV) parameters for each simulated camera. | ¶82 | col. 5:29-37 |
| embedding calibration information into the stereoscopic video sequence in real time | The platform is alleged to embed comprehensive calibration metadata (intrinsics, extrinsics, distortion) during the generation process. | ¶¶ 79, 98 | col. 2:4-13 |
| embedding a static calibration information once per the stereoscopic video sequence | Static parameters including lens distortion (k1-k6), principal point (cx, cy), and camera model are embedded once per sequence. | ¶¶ 92, 97 | col. 13:30-40 |
| embedding a time varying calibration information once per frame | Dynamic per-frame data including sensor poses and orientations (yaw/pitch/roll) are embedded for each frame via the SensorExtrinsic class. | ¶¶ 92, 98 | col. 13:35-44 |
- Identified Points of Contention:
- Scope Questions: For Claim 1, a central question will be whether storing calibration data in a separate "calibration" folder that is accessed programmatically via an SDK (Compl. ¶¶ 54, 80) meets the limitation of "embed[ding] these metadata feeds into the stereoscopic video feed... utilizing subtitle or closed captioning fields." The defense may argue this is a different, non-infringing method of data association; the sketch after this list illustrates the contrast.
- Technical Questions: What evidence does the complaint provide that the metadata is embedded "in real-time" during the simulation/rendering process, as required by Claim 1, rather than being generated and stored as a separate file after the image frames are created?
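The sketch below illustrates the factual contrast underlying both questions: a sidecar "calibration" folder read programmatically (layout (a)) versus metadata carried inside the video container itself (layout (b)). Both layouts are hypothetical constructions for illustration, not either party's actual files.

```python
# Hypothetical layouts illustrating the contested contrast; neither is
# taken from either party's actual files.
import json
import pathlib
import tempfile

root = pathlib.Path(tempfile.mkdtemp())

# (a) Sidecar layout: calibration lives in a folder NEXT TO the video and
# is read programmatically via an SDK; the video file itself carries none.
(root / "calibration").mkdir()
(root / "scene_0.mp4").write_bytes(b"...video...")
(root / "calibration" / "scene_0.json").write_text(
    json.dumps({"fx": 1000.0, "cx": 960.0}))

# (b) Embedded layout: the same metadata serialized into a subtitle track
# INSIDE the container, as the '693 claim language recites.
container = {"video": b"...frames...",
             "subtitles": [json.dumps({"frame": 0, "fx": 1000.0})]}
```

Under layout (a) the video file itself carries no calibration; whether that arrangement can satisfy a limitation requiring embedding "into the stereoscopic video feed" is precisely the construction question discussed in Section V below.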
V. Key Claim Terms for Construction
The Term: "embed... into the stereoscopic video feed... utilizing subtitle or closed captioning fields" (’693 Patent, Claim 1)
- Context and Importance: This term is critical because the complaint alleges infringement based on a system that stores calibration data in a separate folder (Compl. ¶80), not necessarily within the video file's subtitle or caption fields. The viability of the infringement claim against the ’693 Patent may depend entirely on whether this separate-but-linked storage method can be construed as meeting this claim limitation.
- Intrinsic Evidence for Interpretation:
- Evidence for a Broader Interpretation: The specification states that one goal is to "embed the camera and sensor parameters directly into the video file" (’693 Patent, col. 1:53-55). A party might argue that any method that logically and permanently associates the metadata with the video file on a frame-by-frame basis, making it inseparable for playback purposes, achieves this goal, regardless of the specific file format field used.
- Evidence for a Narrower Interpretation: The claim explicitly recites "utilizing subtitle or closed captioning fields." The specification further discusses this specific embodiment: "The remaining metadata can be encoded into the stereoscopic video sequence using subtitle metadata or a table in the metadata header" (’693 Patent, col. 8:12-15). This language suggests that the use of these specific fields is a required element of the invention, not merely an example.
The Term: "constructing a synthetic image scene" (’272 Patent, Claim 17)
- Context and Importance: Practitioners may focus on this term because the accused platform uses a "procedural generation pipeline" (Compl. ¶62). The dispute will likely center on whether this automated, generative process is equivalent to the more discrete steps of "selecting" and "arranging" elements from databases as described in the patent; a sketch contrasting the two readings follows the intrinsic evidence below.
- Intrinsic Evidence for Interpretation:
- Evidence for a Broader Interpretation: The patent's detailed description frames the process broadly, stating that scenes are assembled "by combining background images, scene objects, and textures" (’272 Patent, col. 15:20-22). This could be argued to encompass any method that combines these categories of assets to form a scene.
- Evidence for a Narrower Interpretation: The claim language recites discrete steps of "selecting a background image," "selecting a 3D model," and "arranging" them (’272 Patent, Claim 1, col. 48:6-25). This language, along with flowcharts like Figure 11, could support an interpretation that the invention requires a specific, sequential process of choosing and placing distinct elements, rather than a holistic generative algorithm.
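The hypothetical sketch below contrasts the two readings: discrete "select and arrange" steps on one hand, and a rule-driven procedural generator on the other. Both functions are simplified assumptions for illustration and do not represent the accused pipeline or the patented method.

```python
# Hypothetical contrast between the two claim readings; both functions
# are simplified assumptions, not the accused pipeline.
import random

BACKGROUNDS, MODELS = ["bg_a", "bg_b"], ["car", "truck"]

def select_and_arrange():
    """Narrower reading: discrete steps that select a background, select
    3D models, then arrange each chosen element at an explicit position."""
    scene = {"background": random.choice(BACKGROUNDS), "objects": []}
    for _ in range(2):
        scene["objects"].append({"model": random.choice(MODELS),
                                 "position": (random.uniform(-5, 5), 0.0)})
    return scene

def procedural_generate(seed):
    """Broader reading: a rule-driven generator emits the whole scene from
    a seed, with no user-visible selection or arrangement steps."""
    rng = random.Random(seed)
    lanes = rng.randint(2, 4)
    return {"background": f"road_{lanes}_lanes",
            "objects": [{"model": rng.choice(MODELS),
                         "position": (lane * 3.5, 0.0)}
                        for lane in range(lanes)]}
```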
VI. Other Allegations
- Indirect Infringement: The complaint alleges inducement of infringement based on Defendant’s technical documentation, tutorials, and customer support, which allegedly instruct customers on how to use the platform in an infringing manner (Compl. ¶10). It also alleges contributory infringement, asserting the accused software constitutes a material component especially made for infringement with no substantial non-infringing use (Compl. ¶11).
- Willful Infringement: Willfulness is alleged based on Defendant’s purported pre-suit knowledge of the Asserted Patents via a formal demand letter and its subsequent rejection of a license offer (Compl. ¶¶ 14, 15, 111). The complaint asserts that continued infringement despite this notice constitutes willful and egregious conduct (Compl. ¶¶ 111, 123).
VII. Analyst’s Conclusion: Key Questions for the Case
- A core issue will be one of technical implementation: Does storing calibration data in a separate, programmatically linked "calibration" folder, as the accused system allegedly does, meet the ’693 Patent's specific claim requirement to "embed... into the stereoscopic video feed... utilizing subtitle or closed captioning fields"?
- A second central question will be one of definitional scope: Can the ’272 Patent's claim term "constructing a synthetic image scene," which is described as selecting and arranging discrete elements, be construed to cover the accused platform's use of a "procedural generation pipeline" to create synthetic environments?
- A key damages-related question will be one of intent: Given the complaint’s allegations of a pre-suit demand letter and a rejected license offer, the court will need to determine whether the alleged infringement, if found, was willful, which could expose Defendant to enhanced damages.