DCT

5:25-cv-01197

Plus One Robotics Inc v. Artificial Intelligence Industry Association Inc

I. Executive Summary and Procedural Information

  • Parties & Counsel:
  • Case Identification: 5:25-cv-01197, W.D. Tex., 09/23/2025
  • Venue Allegations: Plaintiff Plus One Robotics alleges venue is proper in the Western District of Texas because Defendants sent multiple demand letters to its CEO in San Antonio, allegedly monitor and leverage unrelated patent litigation against Plus One pending in the district, and employ individuals based in Texas.
  • Core Dispute: Plaintiff seeks a declaratory judgment that its robotic vision systems do not infringe five patents asserted by Defendants, which are described as non-practicing patent monetization entities.
  • Technical Context: The dispute is in the field of robotic vision systems, which use artificial intelligence and 3D imaging to enable robots to perceive and interact with objects, a technology critical to modern logistics and industrial automation.
  • Key Procedural History: The complaint details a pre-suit licensing dispute initiated by Defendants in mid-2025. This dispute included letters, meetings, and a draft complaint from Defendants that identified three of the five patents now at issue. The complaint alleges that Defendants threatened to interfere with Plaintiff's separate, pending litigation in the same district (Fortna Systems, Inc. v. Plus One Robotics, Inc.) and to contact Plaintiff's customers if licensing demands were not met.

Case Timeline

Date Event
2015-04-29 Earliest Priority Date for ’315, ’5693, ’919, ’9693 Patents
2018-03-27 ’315 Patent Issued
2018-03-27 ’5693 Patent Issued
2018-04-17 ’919 Patent Issued
2018-10-19 Earliest Priority Date for ’272 Patent
2021-04-13 ’9693 Patent Issued
2022-02-22 ’272 Patent Issued
2025-06-12 Defendants send initial letter to Plaintiff (Compl. ¶19)
2025-07-18 Defendants send "FINAL DEMAND" letter to Plaintiff (Compl. ¶20)
2025-07-28 Plaintiff responds to Defendants via counsel (Compl. ¶22)
2025-08-11 Defendants respond to Plaintiff's counsel (Compl. ¶23)
2025-09-08 Defendants send Plaintiff a draft complaint (Compl. ¶25)
2025-09-09 Parties hold videoconference meeting (Compl. ¶26)
2025-09-18 Parties hold second videoconference meeting (Compl. ¶26)
2025-09-19 Plaintiff requests additional time to consider offer (Compl. ¶28)
2025-09-21 Defendants set settlement deadline and threaten filing (Compl. ¶29)
2025-09-23 Plaintiff files Complaint for Declaratory Judgment (Compl. p. 1)

II. Technology and Patent(s)-in-Suit Analysis

No probative visual evidence provided in complaint.

U.S. Patent No. 11,257,272 - "Generating Synthetic Image Data for Machine Learning," Issued February 22, 2022

The Invention Explained

  • Problem Addressed: The patent addresses the challenge of training accurate machine learning models for computer vision tasks, which requires vast amounts of specialized and varied image data. It notes that manually capturing sufficient real-world images is often expensive, time-consuming, and results in limited datasets (’272 Patent, col. 2:5-18, 2:30-39).
  • The Patented Solution: The invention provides a system for automatically generating large, diverse datasets of synthetic images. The system assembles a virtual 3D scene by combining background images, 3D models of objects, and texture materials from various libraries, and then uses a "virtual camera" to capture images of that scene (’272 Patent, Abstract; col. 3:23-40). The settings for this virtual camera (e.g., lens properties, position) can be precisely controlled and varied to mimic the performance and potential defects of real-world cameras, thereby creating realistic training data (’272 Patent, col. 4:1-12).
  • Technical Importance: This automated approach to data generation allows for the rapid creation of tailored, high-volume datasets needed to train robust neural networks for complex tasks like object recognition and depth sensing, without the logistical constraints of real-world photography (’272 Patent, col. 2:40-51).
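The generation pipeline described above can be illustrated with a toy sketch. Everything below (the class and function names, the parameter values, the rendering stub) is a hypothetical stand-in for a real graphics engine; it only mirrors the patent's described flow of assembling a scene from a background, 3D models, and textures, then sweeping one virtual-camera parameter to produce a series of labeled views:

```python
# Illustrative sketch only; all names here are hypothetical, not from the patent.
from dataclasses import dataclass

@dataclass(frozen=True)
class CameraSettings:
    focal_length_mm: float
    position_z: float  # the one parameter we will vary incrementally

@dataclass
class SyntheticScene:
    background: str
    objects: list   # 3D models drawn from an object "database"
    textures: dict  # texture material applied per object

def render(scene: SyntheticScene, cam: CameraSettings) -> dict:
    """Stand-in for a graphics-engine render: returns a labeled training sample."""
    return {
        "image_plane": (cam.focal_length_mm, cam.position_z),  # unique per view
        "labels": list(scene.objects),  # ground-truth labels come for free
    }

def generate_training_data(scene, base: CameraSettings, steps: int, dz: float):
    # Incrementally vary one camera parameter to get a series of camera views,
    # each with a unique image plane (cf. claim 17's "incrementally varying" step).
    samples = []
    for i in range(steps):
        cam = CameraSettings(base.focal_length_mm, base.position_z + i * dz)
        samples.append(render(scene, cam))
    return samples

scene = SyntheticScene(
    background="warehouse.png",
    objects=["box_17", "tote_3"],  # from a hypothetical object database
    textures={"box_17": "cardboard", "tote_3": "plastic"},
)
data = generate_training_data(scene, CameraSettings(35.0, 1.0), steps=5, dz=0.25)
print(len(data), len({s["image_plane"] for s in data}))  # 5 views, 5 unique planes
```

The point of the sketch is only that labeled data comes out of the loop automatically, which is the efficiency the patent claims over manual photography.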

Key Claims at a Glance

  • The complaint asserts non-infringement of independent claim 17 (Compl. ¶32).
  • Claim 17 essential elements:
    • A method for generating synthetic image data.
    • Constructing a first synthetic image scene using a computer graphics engine.
    • Populating a foreground portion of the scene with 3D models from an object database and covering them with texture materials from a library.
    • Incrementally varying at least one camera parameter to provide a series of camera views, each with a unique image plane.
    • Using the training data comprising the synthetic image scenes to train a machine learning system.
  • The complaint also makes a general reference to non-infringement of Claim 1, an apparatus claim with similar elements (Compl. ¶35).

U.S. Patent No. 9,930,315 - "Stereoscopic 3D Camera for Virtual Reality Experience," Issued March 27, 2018

The Invention Explained

  • Problem Addressed: When recording and playing back stereoscopic 3D video, camera and sensor parameters (e.g., lens distortion, sensor size) are critical for proper rendering. These parameters can be unique to each camera and may even change during recording. Combining footage from different cameras into a single video file creates a challenge for a playback device, which needs to know the correct, and potentially different, parameters for each segment of the video (’9693 Patent, col. 1:24-col. 2:14).
  • The Patented Solution: The invention proposes a method for embedding "time varying calibration information" directly into a stereoscopic video sequence on a "once per frame" basis (’315 Patent, Claim 1). By embedding this data within each frame, for example using video steganography, the system ensures that a playback device can always access the precise calibration data needed to correctly render that specific frame, regardless of its source or any changes in camera parameters during recording (’315 Patent, Claim 1).
  • Technical Importance: This method allows for the seamless combination and playback of 3D video from multiple, disparate sources and accommodates dynamic changes in camera settings, enhancing the flexibility and quality of 3D video production and consumption (’9693 Patent, col. 1:59-col. 2:2).
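As an illustration of the embedding concept, the toy sketch below hides a per-frame calibration payload in the least-significant bits of raw pixel bytes, one rudimentary form of steganography. It is not the patent's actual method; the payload format and all function names are assumptions for illustration, and a real system would use a codec-aware scheme:

```python
# Hypothetical LSB-steganography sketch of "embedding calibration per frame".
import json

def embed_lsb(frame: bytearray, payload: bytes) -> bytearray:
    # Expand the payload into individual bits, most-significant bit first.
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(frame):
        raise ValueError("frame too small for payload")
    out = bytearray(frame)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the LSB only
    return out

def extract_lsb(frame: bytearray, n_bytes: int) -> bytes:
    bits = [frame[i] & 1 for i in range(n_bytes * 8)]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

# Time-varying calibration: a (possibly different) payload for every frame.
calib = json.dumps({"fx": 1050.2, "baseline_mm": 63.1}).encode()
frame = bytearray(range(256)) * 16  # stand-in for one frame's raw pixel data
stego = embed_lsb(frame, calib)
assert extract_lsb(stego, len(calib)) == calib  # playback side recovers it
```

Because the payload travels inside the frame itself, a player never needs out-of-band knowledge of which camera produced which segment, which is the flexibility the patent emphasizes.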

Key Claims at a Glance

  • The complaint asserts non-infringement of independent claim 1 (Compl. ¶37).
  • Claim 1 essential elements:
    • A method for recording stereoscopic 3D video.
    • Combining recorded sequences of stereoscopic images into a stereoscopic video sequence.
    • Embedding "time varying calibration information once per frame" of the sequence.
    • Combining multiple stereoscopic video sequences from multiple recording devices.
    • Embedding the calibration information from the multiple sequences into each frame of the combined sequence "using a video steganography process."
  • The complaint notes this patent is one of four related patents in the suit concerning stereoscopic 3D cameras for virtual reality experiences (Compl. ¶¶14-18).

Multi-Patent Capsules

  • U.S. Patent No. 10,075,693, "Embedding Calibration Metadata Into Stereoscopic Video Files," Issued March 27, 2018
    • Technology Synopsis: This patent addresses the need to embed camera and sensor parameters into a video file captured by a stereoscopic camera. The solution involves embedding this metadata, which can be fixed for the whole file or change frame-by-frame, directly into the video file, such as by encoding it into the subtitle or closed captioning fields, allowing a player to decode and utilize the parameters during playback (’5693 Patent, col. 2:3-14; col. 7:60-col. 8:12).
    • Asserted Claims: Independent claim 1 is asserted for non-infringement (Compl. ¶42).
    • Accused Features: The complaint alleges that Plaintiff's products do not infringe because they do not record video, use a time-sequenced set of metadata, or embed data into video files using subtitle or closed captioning fields (Compl. ¶44).
  • U.S. Patent No. 9,948,919, "Stereoscopic 3D camera for virtual reality experience," Issued April 17, 2018
    • Technology Synopsis: This patent, based on its title and the allegations, relates to the playback and rendering of stereoscopic video. The claimed method appears to involve receiving a stereoscopic video sequence, determining a fisheye distortion parameter for display pixels, and rendering the corrected video on a stereoscopic display (Compl. ¶47).
    • Asserted Claims: Independent claim 1 is asserted for non-infringement (Compl. ¶47).
    • Accused Features: The complaint alleges non-infringement because Plaintiff's products are not video playback devices, do not record video, and do not use fisheye lenses (Compl. ¶49).
  • U.S. Patent No. 10,979,693, "Stereoscopic 3D camera for virtual reality experience," Issued April 13, 2021
    • Technology Synopsis: This patent claims a method for filtering motion data from video frames. The method involves comparing sets of frames to characterize motion, filtering that motion, and then applying a series of matrix operations to a reference frame to obtain a modified, stabilized reference frame, a process relevant to video stabilization (’9693 Patent, Abstract; Claim 1).
    • Asserted Claims: Independent claim 1 is asserted for non-infringement (Compl. ¶52).
    • Accused Features: The complaint alleges non-infringement because Plaintiff's products use static cameras and therefore have no motion to filter; consequently, they do not perform the claimed steps of comparing frames to characterize motion or applying matrix operations to filter it (Compl. ¶54).
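The motion-filtering idea summarized for the ’9693 patent (characterize motion across frames, filter it, then apply matrix operations to a reference frame) can be sketched in simplified form. The 1-D translation model, function names, and numbers below are illustrative assumptions only, not the claimed method:

```python
# Toy stabilization sketch: filter jittery per-frame motion, then correct
# each reference point with a homogeneous transform matrix.

def moving_average(xs, k=3):
    """Low-pass filter the raw per-frame motion (the 'filtering' step)."""
    half = k // 2
    return [
        sum(xs[max(0, i - half):i + half + 1]) / len(xs[max(0, i - half):i + half + 1])
        for i in range(len(xs))
    ]

def translation_matrix(dx):
    # 3x3 homogeneous transform encoding a horizontal shift of dx pixels.
    return [[1.0, 0.0, dx], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

def apply(mat, point):
    # Matrix-vector product: the "matrix operations" applied to a reference frame.
    return tuple(sum(mat[r][c] * point[c] for c in range(3)) for r in range(3))

raw_motion = [0.0, 4.0, -3.0, 5.0, -4.0, 4.0]  # jittery camera shake per frame
smooth = moving_average(raw_motion)
# Correction = smoothed path minus observed path, applied as a matrix op.
corrections = [translation_matrix(s - m) for s, m in zip(smooth, raw_motion)]
stabilized = [apply(m, (10.0, 20.0, 1.0)) for m in corrections]
```

The sketch also makes the non-infringement theory concrete: with a static camera, `raw_motion` would be all zeros, every correction matrix would be the identity, and there would be nothing to filter or transform.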

III. The Accused Instrumentality

Product Identification

  • The accused instrumentalities are Plaintiff’s core products, identified as PickOne Vision Software, Yonder, InductOne, and DepalOne (Compl. ¶2).

Functionality and Market Context

  • The complaint describes these products as innovative robotic vision systems (Compl. ¶3). Functionally, the complaint alleges these systems rely on analyzing static images and use training data derived from real, not synthetic, images (Compl. ¶34, ¶39). It is alleged that the systems utilize static cameras, which means there is no motion between video frames to be analyzed or filtered as required by certain asserted patents (Compl. ¶34, ¶54).

IV. Analysis of Infringement Allegations

11,257,272 Infringement Allegations

| Claim Element (from Independent Claim 17) | Alleged Infringing Functionality | Complaint Citation | Patent Citation |
|---|---|---|---|
| constructing, by a computer graphics engine, a first synthetic image scene | The complaint alleges Plaintiff's products do not construct synthetic image scenes. | ¶34 | col. 4:23-34 |
| populating a foreground portion of the first synthetic image scene with 3D models from the object database | The complaint alleges Plaintiff's products do not use an object database of 3D models. | ¶34 | col. 16:5-15 |
| incrementally varying at least one parameter included in the camera settings file to provide a first series of camera views, wherein each camera view has a unique image plane | The complaint alleges Plaintiff's systems use static cameras and do not use a series of camera views with unique image planes. | ¶34 | col. 4:46-50 |
| using a training data comprising synthetic image scenes to train a machine learning system | The complaint alleges Plaintiff's products use data from real images, not synthetic image scenes, for training. | ¶34 | col. 2:40-51 |
  • Identified Points of Contention:
    • Technical Question: The central dispute appears to be factual: does Plaintiff's machine learning model training rely on "synthetic image scenes" generated from 3D models, or on data derived from "real images"? The complaint alleges the latter (Compl. ¶34).
    • Scope Question: Does the term "synthetic image scene" as used in the patent encompass real-world images that have been augmented or processed, or is its scope limited to scenes constructed primarily from computer-generated 3D models as described in the specification?

9,930,315 Infringement Allegations

| Claim Element (from Independent Claim 1) | Alleged Infringing Functionality | Complaint Citation | Patent Citation |
|---|---|---|---|
| combining recorded sequences of stereoscopic images into a stereoscopic video sequence | The complaint alleges Plaintiff's products analyze static images and do not record or combine sequences into a video. | ¶39 | ’9693 Patent, col. 4:50-54 |
| embedding a time varying calibration information once per frame of the stereoscopic video sequence | The complaint alleges Plaintiff's products do not embed time-varying calibration information as they do not process video. | ¶39 | ’9693 Patent, col. 4:18-23 |
| combining multiple stereoscopic video sequences recorded by multiple stereoscopic recording devices | The complaint alleges Plaintiff's products do not combine multiple video sequences. | ¶39 | ’9693 Patent, col. 1:59-62 |
| embedding calibration information from the multiple stereoscopic video sequences... using a video steganography process | The complaint alleges Plaintiff's products do not perform this embedding step. | ¶39 | ’9693 Patent, col. 4:18-23 |
  • Identified Points of Contention:
    • Technical Question: The primary dispute is factual: do Plaintiff's products operate on "stereoscopic video sequences" or on "static images"? The complaint alleges the latter, which would place the products' functionality outside the claimed method (Compl. ¶39).
    • Scope Question: The allegation raises the question of whether a rapid succession of analyzed static images could, under some interpretation, constitute a "video sequence." The complaint, however, frames this as a clear operational difference.

V. Key Claim Terms for Construction

For the ’272 Patent:

  • The Term: "synthetic image scene"
  • Context and Importance: This term is central because Plaintiff’s core non-infringement argument is that its products are trained on "real images, not synthetic image scenes" (Compl. ¶34). The case may turn on whether the data Plaintiff uses falls within the scope of this term.
  • Intrinsic Evidence for Interpretation:
    • Evidence for a Narrower Interpretation: The specification repeatedly describes the process of creating a "synthetic image scene" by actively "assembling 2D backgrounds, objects, and textures" and populating it with "3D models" from a database (’272 Patent, col. 3:27-30; col. 15:20-22). This language suggests a scene constructed from computer-generated elements, not a captured real-world image.
    • Evidence for a Broader Interpretation: The patent's goal is to create "realistic" training data that "simulat[es] the actual use and performance of existing camera devices" (’272 Patent, col. 4:8-16). An argument could be made that heavily processed or augmented real-world images, designed to simulate different conditions, serve the same purpose and could be considered "synthetic" in a broader sense.

For the ’315 Patent and related patents:

  • The Term: "stereoscopic video sequence"
  • Context and Importance: Plaintiff alleges its products analyze "static images, rather [than] video" (Compl. ¶39). Therefore, whether the data processed by Plaintiff's systems constitutes a "stereoscopic video sequence" is a threshold question for infringement.
  • Intrinsic Evidence for Interpretation:
    • Evidence for a Narrower Interpretation: Claim 1 of the ’315 patent refers to "combining recorded sequences" and embedding information "once per frame," language that strongly implies a temporal series of images captured to represent motion. The ’5693 patent further discusses processing frames "during the recording process" and during "play-back," reinforcing a temporal, motion-capture context (’5693 Patent, col. 1:30-31; col. 2:13-14).
    • Evidence for a Broader Interpretation: The patent does not appear to provide an explicit definition of "video sequence" that would distinguish it from a sufficiently rapid series of static images. However, the consistent context of recording, playback, and per-frame processing throughout the specification points toward the conventional understanding of video as capturing motion over time.

VI. Other Allegations

The complaint seeks a declaratory judgment of non-infringement "directly or indirectly" (Compl., Prayer for Relief, ¶(i)), but does not provide specific factual allegations for the court to analyze regarding theories of indirect or willful infringement. The complaint's focus is on denying the elements of direct infringement.

VII. Analyst’s Conclusion: Key Questions for the Case

This declaratory judgment action appears to hinge on fundamental, fact-based questions regarding the operation of Plaintiff's technology, rather than nuanced claim construction disputes. The key questions for the court will likely be:

  • A core evidentiary question of data origin: Is the machine learning training data used by Plaintiff’s products, as alleged, derived from "real images," or does it meet the definitional and functional requirements of the "synthetic image scenes" constructed from 3D models as claimed in the ’272 patent?
  • A central technical question of operational mode: Do Plaintiff’s products, as alleged, operate by analyzing "static images," or do they process a temporal "stereoscopic video sequence" as required by the claims of the ’315, ’5693, ’919, and ’9693 patents?
  • A question of functional mismatch: Assuming the complaint's technical assertions are accurate, does the alleged use of static cameras and real-image training data create a functional gap so wide that the accused products cannot perform the core steps recited in the asserted claims, such as filtering motion data or embedding time-varying calibration information into a video stream?