3:25-cv-07658
Artificial Intelligence Industry Association Inc v. Parallel Domain Inc
I. Executive Summary and Procedural Information
- Parties & Counsel:
  - Plaintiff: Artificial Intelligence Industry Association, Inc. (Florida)
  - Defendant: Parallel Domain, Inc. (Delaware)
  - Plaintiff’s Counsel: Fry Law Corporation
 
- Case Identification: 3:25-cv-07658, N.D. Cal., 09/09/2025
- Venue Allegations: Plaintiff alleges venue is proper because Defendant has a principal place of business in the district and has committed acts of infringement there, including selling and offering for sale the accused software products to customers in California.
- Core Dispute: Plaintiff alleges that Defendant’s synthetic data generation software products infringe patents related to the creation of stereoscopic 3D video, the embedding of calibration metadata into video files, and the generation of synthetic image data for training machine learning models.
- Technical Context: The technology at issue involves generating high-fidelity synthetic and stereoscopic image data, which is critical for training and validating computer vision algorithms, particularly for autonomous systems and artificial intelligence.
- Key Procedural History: The complaint states that prior to filing, Plaintiff sent Defendant a formal demand letter identifying the Asserted Patents and alleging infringement, which may be relevant to subsequent claims of willful infringement.
Case Timeline
| Date | Event | 
|---|---|
| 2015-04-29 | Priority Date for ’315 Patent | 
| 2015-04-29 | Priority Date for ’693 Patent | 
| 2018-03-27 | Issue Date for U.S. Patent No. 9,930,315 | 
| 2018-09-11 | Issue Date for U.S. Patent No. 10,075,693 | 
| 2018-10-19 | Priority Date for ’272 Patent | 
| 2022-02-22 | Issue Date for U.S. Patent No. 11,257,272 | 
| 2025-09-09 | Complaint Filing Date | 
II. Technology and Patent(s)-in-Suit Analysis
U.S. Patent No. 9,930,315 - "Stereoscopic 3D Camera for Virtual Reality Experience," issued March 27, 2018
The Invention Explained
- Problem Addressed: Creating an immersive virtual reality (VR) experience requires capturing stereoscopic 3D video that accurately reflects how human eyes perceive depth, but manufacturing variances between the lenses and sensors of a stereoscopic camera can introduce distortions that suppress the 3D effect (’315 Patent, col. 3:41-51). Furthermore, stabilizing video captured by a moving camera for a VR headset presents unique technical challenges (’315 Patent, col. 8:7-14).
- The Patented Solution: The invention claims methods for recording and processing stereoscopic 3D video by embedding calibration and motion data (e.g., from a gyroscope) into the video sequence in real-time (’315 Patent, col. 13:20-28). This embedded data is then used during playback to perform stabilization, such as unwrapping distorted fisheye video into a rectilinear format and correcting for unwanted motion, thereby providing a stable and immersive VR experience (’315 Patent, col. 8:39-47; FIG. 1).
- Technical Importance: This approach enables real-time video stabilization tailored for VR playback by embedding the necessary sensor and calibration data directly into the video stream, obviating the need for complex post-processing and facilitating a more seamless user experience (’315 Patent, col. 4:1-4).
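The fisheye-to-rectilinear unwrapping that the specification describes can be illustrated with a toy remap computation. This is a sketch only, assuming an equidistant fisheye projection (r = f·θ); the function name and parameters are hypothetical and not drawn from the patent:

```python
import numpy as np

def rectilinear_to_fisheye_map(w, h, f_rect, f_fish, cx, cy):
    """Build a sampling grid mapping each rectilinear output pixel to a
    source pixel in an equidistant fisheye image (r = f_fish * theta)."""
    u, v = np.meshgrid(np.arange(w) - w / 2.0, np.arange(h) - h / 2.0)
    theta = np.arctan2(np.hypot(u, v), f_rect)  # angle from the optical axis
    phi = np.arctan2(v, u)                      # azimuth around the axis
    r = f_fish * theta                          # equidistant fisheye radius
    map_x = cx + r * np.cos(phi)
    map_y = cy + r * np.sin(phi)
    return map_x, map_y

map_x, map_y = rectilinear_to_fisheye_map(640, 480, f_rect=400.0,
                                          f_fish=300.0, cx=320.0, cy=240.0)
```

A renderer would then bilinearly sample the fisheye frame at `(map_x, map_y)` for each output pixel; per-frame gyroscope data of the kind the claims describe would rotate the ray directions before this projection to stabilize the view.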
Key Claims at a Glance
- The complaint asserts infringement of at least independent Claim 1 (Compl. ¶51).
- Claim 1 is a method for recording stereoscopic 3D video, with essential elements including:
  - recording sequences of stereoscopic images by multiple image sensors;
  - combining the recorded sequences into a stereoscopic video sequence; and
  - embedding calibration information into the stereoscopic video sequence in real time, where the information includes both static calibration data (e.g., lens distortion) and time-varying data (e.g., inertial measurement data).
 
- The complaint does not explicitly reserve the right to assert dependent claims.
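The claimed combination of once-embedded static calibration with per-frame time-varying metadata can be pictured with a toy serialization sketch. The length-prefixed container and chunk tags (`CALB`, `IMUD`, `FRAM`) are invented for illustration and do not reflect any format disclosed in the patent:

```python
import io
import json
import struct

def write_stereo_stream(frames, static_calib, imu_samples, out):
    """Sketch: emit static calibration once, then per-frame IMU metadata
    interleaved with frame payloads in a length-prefixed container."""
    def emit(tag, payload):
        out.write(tag)                          # 4-byte chunk tag
        out.write(struct.pack(">I", len(payload)))
        out.write(payload)
    emit(b"CALB", json.dumps(static_calib).encode())  # static: embedded once
    for frame, imu in zip(frames, imu_samples):
        emit(b"IMUD", json.dumps(imu).encode())       # time-varying: per frame
        emit(b"FRAM", frame)

buf = io.BytesIO()
write_stereo_stream(
    frames=[b"\x00" * 16, b"\x01" * 16],
    static_calib={"lens_distortion": [0.1, -0.02]},
    imu_samples=[{"gyro": [0, 0, 0]}, {"gyro": [0.01, 0, 0]}],
    out=buf,
)
```

The point of the sketch is the temporal structure the claim recites: the static block appears a single time, while inertial data is written contemporaneously with each frame.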
U.S. Patent No. 10,075,693 - "Embedding Calibration Metadata Into Stereoscopic Video Files," issued September 11, 2018
The Invention Explained
- Problem Addressed: A stereoscopic video player requires camera and sensor parameters to properly render video frames. When videos from different cameras are combined, or when camera parameters change during recording, the player needs a way to associate the correct parameters with the corresponding video segments on a frame-by-frame basis (’693 Patent, col. 1:41-67).
- The Patented Solution: The invention provides a system and method for embedding time-sequenced metadata (such as calibration, IMU, and location data) from camera sensors directly into the stereoscopic video feed in real-time as it is recorded (’693 Patent, Abstract). The patent discloses using metadata channels within a video file format, such as subtitle or closed captioning fields, to carry this time-stamped data, allowing a playback device to parse the information and calibrate the video for display (’693 Patent, col. 8:7-14; FIG. 11).
- Technical Importance: This method creates a self-contained video file where per-frame calibration and sensor data are synchronized with the video content, enabling accurate playback even when combining footage from multiple sources or when capture conditions change over time (’693 Patent, col. 2:5-12).
Key Claims at a Glance
- The complaint asserts infringement of at least independent Claim 1 (Compl. ¶63).
- Claim 1 is a computerized system for recording stereoscopic 3D video, with essential elements including:
  - a computer store containing a stereoscopic video feed and contemporaneous metadata feeds from a sensor;
  - a computer processor programmed to obtain the video and metadata feeds; and
  - the processor being further programmed to embed the metadata feeds into the video feed in real-time by encoding them into subtitles or closed captioning metadata fields of the video file format.
 
- The complaint does not explicitly reserve the right to assert dependent claims.
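The specific mechanism this claim recites, carrying time-stamped metadata in a subtitle channel, can be illustrated with a toy sketch that serializes per-frame calibration samples as SubRip (SRT) cues. The function name and the choice of SRT are illustrative assumptions, not disclosures from the patent:

```python
import json

def metadata_to_srt(samples, fps=30.0):
    """Sketch: encode per-frame metadata as SRT subtitle cues so a player's
    existing subtitle parser can recover time-stamped calibration data."""
    def ts(t):
        ms = int(round(t * 1000))
        h, rem = divmod(ms, 3600000)
        m, rem = divmod(rem, 60000)
        s, ms = divmod(rem, 1000)
        return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"
    cues = []
    for i, sample in enumerate(samples):
        start, end = i / fps, (i + 1) / fps
        cues.append(f"{i + 1}\n{ts(start)} --> {ts(end)}\n{json.dumps(sample)}\n")
    return "\n".join(cues)

srt = metadata_to_srt([{"gyro": [0, 0, 0]}, {"gyro": [0.01, 0, 0]}], fps=30.0)
```

Because each cue's timecode spans exactly one frame interval, a playback device can associate each metadata payload with its frame, which is the synchronization behavior the specification attributes to subtitle-field embedding.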
U.S. Patent No. 11,257,272 - "Systems and Methods for Generating Labeled Image Data for Machine Learning Using a Multi-Stage Image Processing Pipeline," issued February 22, 2022
- Technology Synopsis: The ’272 Patent addresses the difficulty and expense of obtaining large, high-quality, and richly annotated datasets required to train computer vision models (’272 Patent, col. 2:18-38). The patented solution is a system and method for automatically generating synthetic image datasets by constructing virtual 3D scenes from databases of objects and backgrounds, capturing views with a virtual camera defined by specific settings files, rendering synthetic images, and providing them to a training dataset (’272 Patent, col. 2:39-51).
- Asserted Claims: The complaint asserts infringement of at least independent Claim 1 and independent Claim 17 (Compl. ¶17, ¶25, ¶74).
- Accused Features: The complaint alleges that Defendant’s systems, which assemble virtual scenes from 3D objects and real-world scans, render synthetic images using configurable camera settings, and use the data to train machine learning models, practice the claimed methods (Compl. ¶26).
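The multi-stage pipeline the ’272 Patent describes, composing virtual scenes from object and background databases, rendering with a configured virtual camera, and emitting labeled training data, can be sketched as follows. Rendering is stubbed with a filename placeholder, and all names and fields are hypothetical:

```python
import random

def generate_labeled_dataset(objects, backgrounds, camera_settings, n, seed=0):
    """Sketch of the claimed pipeline: compose virtual scenes from object and
    background databases, render them with a configured virtual camera, and
    emit (image, annotations) pairs for a training set."""
    rng = random.Random(seed)
    dataset = []
    for i in range(n):
        scene = {
            "background": rng.choice(backgrounds),
            "objects": rng.sample(objects, k=min(2, len(objects))),
            "camera": camera_settings,       # e.g., loaded from a settings file
        }
        image = f"synthetic_{i:05d}.png"     # stand-in for a rendered image
        labels = [{"class": o["class"], "bbox": o["bbox"]}
                  for o in scene["objects"]]
        dataset.append({"image": image, "scene": scene, "annotations": labels})
    return dataset

ds = generate_labeled_dataset(
    objects=[{"class": "car", "bbox": [0, 0, 10, 10]},
             {"class": "pedestrian", "bbox": [5, 5, 8, 8]}],
    backgrounds=["city", "highway"],
    camera_settings={"fov_deg": 90, "resolution": [1920, 1080]},
    n=3,
)
```

The structurally important point is that annotations are generated from the scene description itself, which is why synthetic pipelines of this kind yield "richly annotated" data without manual labeling.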
III. The Accused Instrumentality
Product Identification
- The accused instrumentalities are Defendant’s "simulation APIs, SDKs, and web tools for machine learning, computer vision, and perception systems," including "PD Replica for generating digital twins from real-world capture data" (Compl. ¶1).
Functionality and Market Context
- The complaint alleges that Defendant’s products are used to generate high-fidelity synthetic data for training and testing machine learning models (Compl. ¶1, ¶16). The system allegedly allows users to generate synthetic data with "exact sensor configurations, environments, weather conditions, and annotation rulesets" (Compl. ¶1).
- Functionality described in the complaint includes generating frames from multiple virtual cameras, computing calibration data (extrinsics and intrinsics) for those cameras, and storing that metadata alongside the image data, which users can access via a software development kit (PD-SDK) (Compl. ¶30). The system is also alleged to generate "digital twins" from real-world captures to create synthetic datasets for training perception models (Compl. ¶23).
- The complaint positions the accused products as serving industries such as automotive autonomy and robotics, where high-fidelity simulation and synthetic data are used for machine learning optimization (Compl. ¶24).
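The output layout the complaint describes, image frames stored per camera with computed extrinsics and intrinsics in a separate "calibration" folder, can be sketched as below. The directory names and JSON fields are hypothetical, loosely patterned on the complaint's description rather than on any documented Parallel Domain format:

```python
import json
import tempfile
from pathlib import Path

def export_capture(root, frames_by_camera, calib_by_camera):
    """Sketch of the alleged layout: per-camera image frames, with computed
    intrinsics/extrinsics stored as JSON in a sibling 'calibration' folder
    rather than embedded in the video files themselves."""
    root = Path(root)
    for cam, frames in frames_by_camera.items():
        cam_dir = root / "rgb" / cam
        cam_dir.mkdir(parents=True, exist_ok=True)
        for i, frame in enumerate(frames):
            (cam_dir / f"{i:06d}.png").write_bytes(frame)
    calib_dir = root / "calibration"
    calib_dir.mkdir(parents=True, exist_ok=True)
    for cam, calib in calib_by_camera.items():
        (calib_dir / f"{cam}.json").write_text(json.dumps(calib, indent=2))

root = Path(tempfile.mkdtemp())
export_capture(
    root,
    frames_by_camera={"camera_front": [b"\x89PNG-stub"]},
    calib_by_camera={"camera_front": {
        "intrinsics": {"fx": 1000.0, "fy": 1000.0},
        "extrinsics": {"translation": [0.0, 0.0, 0.0]},
    }},
)
```

Note the structural contrast with the ’693 claim language: here the calibration metadata sits beside the image data as separate files, which is precisely the distinction identified in Section IV as a likely point of contention.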
IV. Analysis of Infringement Allegations
’315 Patent Infringement Allegations
| Claim Element (from Independent Claim 1) | Alleged Infringing Functionality | Complaint Citation | Patent Citation | 
|---|---|---|---|
| a method for recording stereoscopic 3D video, comprising: recording sequences of stereoscopic images by multiple image sensors... | Defendant’s systems are alleged to use stereoscopic cameras or 3D sensors for capturing and simulating depth-aware data. | ¶35 | col. 9:60-63 | 
| combining the recorded sequences of stereoscopic images into a stereoscopic video sequence; | Defendant’s systems allegedly generate simulated video feeds from this captured stereoscopic or 3D sensor data. | ¶35 | col. 4:50-52 | 
| and embedding calibration information into the stereoscopic video sequence in a real time, said calibration information comprising a static calibration information...and a time varying calibration information... | Defendant’s systems are alleged to embed sensor metadata, including calibration and annotations, into the simulated video feeds for real-time processing. | ¶35 | col. 13:23-44 | 
Identified Points of Contention:
- Scope Questions: A central question may be whether Defendant’s generation of synthetic data from virtual sensors constitutes "recording sequences of stereoscopic images by multiple image sensors" within the meaning of a claim rooted in the context of physical camera hardware. The complaint alleges infringement via "simulating" the claimed steps (Compl. ¶35).
- Technical Questions: The complaint alleges the embedding of metadata occurs in "real-time processing" (Compl. ¶35). The evidentiary burden will be on Plaintiff to demonstrate that Defendant's system performs this embedding contemporaneously with data generation, as required by the "in real time" limitation, rather than as a subsequent batch process.
’693 Patent Infringement Allegations
| Claim Element (from Independent Claim 1) | Alleged Infringing Functionality | Complaint Citation | Patent Citation | 
|---|---|---|---|
| a computerized system for recording stereoscopic three-dimensional (3D) video...comprising: (a) a computer store containing data, wherein the data comprises: a stereoscopic video feed...and a plurality of contemporaneous metadata feeds... | Defendant’s systems are alleged to generate frames from multiple virtual cameras and store them as image data, alongside corresponding computed calibration metadata stored in a "calibration" folder. | ¶30 | col. 7:15-24 | 
| b) a computer processor...programmed to...embed...the plurality of contemporaneous metadata feeds into the stereoscopic video feed... | Defendant’s systems allegedly provide image data with "embedded calibration parameters" that users access via the PD-SDK for downstream use. | ¶30 | col. 8:1-7 | 
| by encoding the contemporaneous metadata feeds into the subtitles or closed captioning metadata fields of the video file format... | The complaint alleges that storing calibration metadata alongside image data in a folder for access via an SDK infringes this limitation. | ¶30, ¶27 | col. 8:9-14 | 
Identified Points of Contention:
- Scope Questions: A primary dispute will likely concern whether providing calibration data in a separate "calibration" folder, accessible via an SDK, meets the claim limitation of "embedding" the metadata "into the stereoscopic video feed by encoding" it "into the subtitles or closed captioning metadata fields of the video file format." This raises a question of whether the accused functionality is structurally and functionally equivalent to the specific mechanism recited in the claim.
- Technical Questions: Does the accused system operate in "real-time" as required by the claim? The complaint alleges real-time embedding in a conclusory manner (Compl. ¶27), but the description of generating and storing data in folders (Compl. ¶30) could suggest an offline or batch process, which may be a point of factual dispute.
No probative visual evidence provided in complaint.
V. Key Claim Terms for Construction
For the ’315 Patent:
- The Term: "embedding calibration information into the stereoscopic video sequence in a real time"
- Context and Importance: The temporal aspect of this term is critical. Defendant's potential non-infringement argument may hinge on showing its process is not "real time." Practitioners may focus on whether "in real time" requires the embedding to be contemporaneous with the image capture/generation, or if it can encompass a rapid, subsequent process.
- Intrinsic Evidence for Interpretation:
  - Evidence for a Broader Interpretation: The specification discusses embedding parameters "at the time of capture" but also describes processing video frames "prior to being encoded into the video file," which could suggest a brief but permissible delay between capture and embedding (e.g., ’315 Patent, col. 4:53-56).
  - Evidence for a Narrower Interpretation: Claim 1 contrasts "static calibration information" embedded once with "time varying calibration information" embedded "once per frame," suggesting a tight temporal link between the frame's creation and the embedding of its associated time-varying data (’315 Patent, col. 13:31-38).
 
For the ’693 Patent:
- The Term: "embedding...by encoding the contemporaneous metadata feeds into the subtitles or closed captioning metadata fields of the video file format"
- Context and Importance: This term defines the specific technical mechanism for embedding. The infringement case may depend on whether Defendant's method of providing metadata in a separate folder for SDK access can be construed as meeting this limitation. Practitioners may focus on whether this claim language requires the metadata to be physically integrated within the data structure of the video file itself.
- Intrinsic Evidence for Interpretation:
  - Evidence for a Broader Interpretation: The patent's goal is to allow a player to "utilize the parameters during the play-back of the video file" (’693 Patent, col. 2:10-12). An argument could be made that any method that makes the parameters available to the player in a synchronized manner achieves this goal, even if not literally in the subtitle track.
  - Evidence for a Narrower Interpretation: The claim language is highly specific, reciting "subtitles or closed captioning metadata fields." The specification reinforces this, stating metadata can be encoded "using subtitle metadata or a table in the metadata header" (’693 Patent, col. 8:12-14). This specific disclosure of technical implementation may support a narrower construction limited to modification of the video file structure itself.
 
VI. Other Allegations
- Indirect Infringement: The complaint alleges induced infringement under 35 U.S.C. § 271(b), asserting that Defendant encourages infringing use through its "detailed technical documentation, tutorials, and customer support services" (Compl. ¶1, ¶39). Contributory infringement under § 271(c) is alleged on the basis that Defendant’s software modules are material components "exclusively designed and marketed for the infringing functionalities" with no substantial non-infringing use (Compl. ¶41).
- Willful Infringement: Willfulness is alleged based on Defendant’s continued infringement despite having received a pre-suit demand letter from Plaintiff, which allegedly provided actual knowledge of the Asserted Patents (Compl. ¶3, ¶39, ¶46).
VII. Analyst’s Conclusion: Key Questions for the Case
- A core issue will be one of structural and functional scope: can the act of providing calibration metadata in a separate data folder, accessible via an SDK, be construed to meet the specific claim limitation of "embedding...by encoding...into the subtitles or closed captioning metadata fields of the video file format"? This question will likely be central to the infringement analysis of the ’693 patent.
- A second key question will be one of definitional interpretation: does generating data from virtual cameras and sensors in a simulation environment constitute "recording sequences of stereoscopic images by multiple image sensors" as recited in the claims of the ’315 patent, which are described in the context of physical hardware?
- A third question, potentially dispositive, will be evidentiary and temporal: what evidence will emerge to show whether the accused systems perform the claimed "embedding" of metadata "in real time" (as required by the ’315 and ’693 patents), or instead as a non-contemporaneous, offline process that might fall outside the scope of the claims?