3:24-cv-00902

Motive Tech Inc v. Samsara Inc

Key Events
Amended Complaint

I. Executive Summary and Procedural Information

  • Parties & Counsel:
  • Case Identification: 3:24-cv-00902, N.D. Cal., 07/09/2025
  • Venue Allegations: Venue is asserted in the Northern District of California on the basis that both parties maintain their principal places of business in San Francisco, and a substantial portion of the acts giving rise to the litigation occurred there.
  • Core Dispute: Plaintiff alleges that Defendant’s vehicle dashcam and fleet management platform infringe patents related to automated camera calibration and AI-based driver distraction detection.
  • Technical Context: The lawsuit concerns technology in the vehicle telematics and fleet management industry, where AI-powered dashcams are used to enhance driver safety and operational efficiency.
  • Key Procedural History: The complaint alleges a multi-year campaign of misconduct by Defendant, including creating fraudulent customer accounts to access Plaintiff's platform, misappropriating trade secrets by hiring former employees, and engaging in false advertising. The filing follows a prior lawsuit initiated by Defendant against Plaintiff in Delaware, which was subsequently transferred to the Northern District of California.

Case Timeline

Date Event
2013-01-01 Motive founded (as KeepTruckin)
2013-12-01 Motive launches its first Android- and iPhone-based platform
2015-01-01 Motive begins designing its first proprietary Vehicle Gateway (LBB-1)
2015-01-01 Samsara is incorporated
2015-08-21 Motive's first Vehicle Gateway is pictured and dated
2016-01-01 Samsara releases its first Vehicle Gateway
2016-04-10 First alleged creation of a fake Motive account by a Samsara employee
2017-04-01 Motive acquires AI startup Ingrain
2018-01-01 Motive releases its first road-facing dashcam
2019-02-01 Samsara releases its AI dashcam
2021-06-15 Priority Date for ’243 Patent
2021-08-01 Motive releases its AI Dashcam
2021-10-04 Priority Date for ’580 Patent and ’276 Patent
2024-01-16 ’580 Patent Issued
2024-08-13 ’243 Patent Issued
2024-11-05 ’276 Patent Issued
2025-07-09 Complaint Filing Date

II. Technology and Patent(s)-in-Suit Analysis

U.S. Patent No. 11,875,580 - "Camera Initialization for Lane Detection and Distance Estimation Using Single-View Geometry"

The Invention Explained

  • Problem Addressed: The patent addresses the challenge of calibrating vehicle-mounted cameras, such as dashcams, that are not installed in a predefined or fixed position. Traditional solutions either required such precise installations, which are prone to human error, or relied on expensive auxiliary sensors like radar or Lidar to determine camera orientation and position relative to the road. (’580 Patent, col. 1:10-22).
  • The Patented Solution: The invention provides a method to automatically initialize a single, adjustable camera without extra hardware. The system receives video from the camera, uses a predictive model (e.g., a neural network) to identify a horizon line in the images, and computes camera parameters like height and viewing angle from this line. (’580 Patent, col. 1:36-45). To ensure accuracy, this process includes a human-in-the-loop verification step: the system overlays the predicted horizon line on the video and transmits it to an "annotator device for manual review" and confirmation before finalizing the parameters. (’580 Patent, col. 1:45-51).
  • Technical Importance: This approach allows for the use of more flexible and less expensive hardware for advanced driver-assistance systems by automating the critical calibration process for retrofitted, non-standard camera installations. (’580 Patent, col. 1:23-28).
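
To illustrate the single-view geometry the patent relies on, the sketch below shows one standard way a detected horizon line constrains a camera parameter (here, pitch relative to the road plane). This is a textbook pinhole-camera relationship offered for orientation only; the patent does not necessarily disclose this exact computation, and the intrinsic values used are hypothetical.

```python
import math

def pitch_from_horizon(horizon_row, principal_row, focal_px):
    """Estimate camera pitch (radians) from the image row of the horizon.

    For an ideal pinhole camera, the horizon projects to the row where viewing
    rays run parallel to the road plane, so the vertical offset from the
    principal point encodes the tilt: pitch = atan((v_horizon - c_y) / f_y).
    """
    return math.atan((horizon_row - principal_row) / focal_px)

# Hypothetical 1080p dashcam: principal point at row 540, focal length 1000 px.
# A horizon detected 100 px below image center implies a downward tilt.
pitch = pitch_from_horizon(640, 540, 1000.0)
```

Under this model, once the predictive model supplies the horizon row, a parameter like pitch follows from fixed camera intrinsics, which is consistent with the patent's premise that no auxiliary sensors are needed.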

Key Claims at a Glance

  • The complaint asserts independent claim 1. (Compl. ¶159).
  • The essential elements of claim 1 are:
    • Receiving a video from a camera device over a network.
    • Identifying lines, including a horizon line, in the video using a predictive model.
    • Computing a camera parameter based on the identified lines.
    • Overlaying the lines on the video.
    • Transmitting the overlaid video to an annotator device for manual review.
    • Receiving a confirmation from the annotator device that the lines are accurate.
    • Transmitting the camera parameter data to the camera device.
  • The complaint does not explicitly reserve the right to assert dependent claims for this patent.
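
The claim elements above describe a sequential, human-in-the-loop workflow. The following sketch arranges them in claim order as a single server-side function; every callable and value is a hypothetical stand-in, not an implementation disclosed by the patent or attributed to either party's product.

```python
def initialize_camera(frames, predict_lines, compute_params, send_for_review):
    """Sketch of the '580 claim-1 sequence.

    predict_lines stands in for the predictive model (output includes a
    horizon line); send_for_review stands in for transmission to the
    annotator device, returning True when the lines are confirmed accurate.
    """
    lines = predict_lines(frames)                    # identify lines, incl. horizon
    params = compute_params(lines)                   # e.g., height, viewing angle
    overlaid = [(frame, lines) for frame in frames]  # overlay step (schematic)
    if send_for_review(overlaid):                    # manual review + confirmation
        return params                                # transmitted to camera device
    return None

# Hypothetical usage with stub callables and illustrative parameter values.
params = initialize_camera(
    frames=["frame0", "frame1"],
    predict_lines=lambda frames: ["horizon"],
    compute_params=lambda lines: {"height_m": 1.4, "pitch_deg": 5.0},
    send_for_review=lambda overlaid: True,
)
```

The key structural point for infringement purposes is that parameter transmission is gated on the annotator's confirmation, not performed unconditionally.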

U.S. Patent No. 12,136,276 - "Camera Initialization for Lane Detection and Distance Estimation Using Single-View Geometry"

The Invention Explained

  • Problem Addressed: As a continuation of the ’580 Patent, this patent addresses the same fundamental problem of calibrating adjustable, single-view cameras in vehicles without requiring fixed positioning or costly additional sensors. (’276 Patent, col. 1:15-24).
  • The Patented Solution: The ’276 Patent claims a method that emphasizes an interactive refinement process. After an initial horizon line is detected and overlaid on an image, the image is sent to a computing device. The system then receives a modification of that line, described as "a new line at a second position," from the computing device. (’276 Patent, claim 1). The final camera parameter is then computed based on this user-modified new line, suggesting a workflow where a user can manually correct or adjust the system's initial prediction to improve calibration accuracy. (’276 Patent, col. 1:50-56).
  • Technical Importance: This method provides an interactive calibration workflow, which may enhance the accuracy and reliability of camera initialization compared to a purely automated or simple confirmation-based system.
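
The distinguishing step of the ’276 Patent, recomputing the parameter from a user-supplied replacement line rather than the model's prediction, can be sketched as follows. The pitch formula reuses the standard pinhole relationship for illustration; the intrinsics and the fallback behavior are assumptions, not disclosures from the patent.

```python
import math

def recalibrate(predicted_row, user_row, principal_row, focal_px):
    """Sketch of the '276 refinement step: when the user drags the overlaid
    horizon to a new row ("a new line at a second position"), the camera
    parameter is recomputed from that new line; absent a modification, the
    model's predicted row is used."""
    chosen_row = predicted_row if user_row is None else user_row
    return math.atan((chosen_row - principal_row) / focal_px)

# Hypothetical case: model predicted row 600, user corrected it to row 560.
corrected_pitch = recalibrate(600, 560, 540, 1000.0)
```

The design point is that the final parameter depends on the second (modified) position whenever one is supplied, which is the interactive behavior the claim recites.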

Key Claims at a Glance

  • The complaint asserts independent claim 1. (Compl. ¶251).
  • The essential elements of claim 1 are:
    • Receiving an image of a roadway from a vehicle-installed camera.
    • Detecting a horizon line in the image.
    • Overlaying a line on the image.
    • Transmitting the overlaid image to a computing device over a network.
    • Receiving a modification of the line from the computing device (a new line at a second position).
    • Computing a camera parameter based on the new, modified line.
    • Transmitting the camera parameter data to the camera device.
  • The complaint does not explicitly reserve the right to assert dependent claims for this patent.

U.S. Patent No. 12,062,243 - "Distracted Driving Detection Using A Multi-Task Training Process"

Technology Synopsis

The patent addresses shortcomings in conventional AI models for detecting driver distraction, which often perform poorly on real-world data. (Compl. ¶61). It discloses a "multi-task model" that uses a single "backbone network" coupled with multiple specialized "prediction heads" for different sub-tasks (e.g., mobile phone detection, face detection, pose estimation). By training these heads simultaneously and minimizing a joint loss function, the core network learns more robust features, leading to more accurate distraction predictions. (Compl. ¶62; ’243 Patent, Abstract).
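
The multi-task objective described above can be sketched in a few lines: several prediction heads consume the same backbone features, and a joint loss combines the per-head losses so that every sub-task's error shapes the shared representation. The weighted-sum combination, the task names, and all callables below are illustrative assumptions, not details taken from the ’243 Patent.

```python
def joint_loss(features, heads, targets, weights):
    """Sketch of a multi-task training objective: `heads` maps a task name to
    a (predict_fn, loss_fn) pair operating on shared backbone features, and
    the joint loss is a weighted sum of the per-task losses. Minimizing this
    single scalar propagates every sub-task's gradient into the backbone."""
    total = 0.0
    for task, (predict, loss) in heads.items():
        total += weights[task] * loss(predict(features), targets[task])
    return total

# Hypothetical heads standing in for sub-tasks like phone and face detection.
heads = {
    "phone_detection": (lambda f: 2.0 * f, lambda pred, t: abs(pred - t)),
    "face_detection": (lambda f: f + 1.0, lambda pred, t: abs(pred - t)),
}
loss = joint_loss(
    1.0, heads,
    targets={"phone_detection": 2.0, "face_detection": 3.0},
    weights={"phone_detection": 1.0, "face_detection": 2.0},
)
```

The architectural point relevant to the infringement theory is that the heads are coupled to one backbone through a single loss, as opposed to independently trained, free-standing detection modules.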

Asserted Claims

Independent claim 1 is asserted. (Compl. ¶264).

Accused Features

The complaint accuses Samsara's CM31 and CM32 dashcams and its "Samsara Cloud" platform of infringement. It alleges these products train and use machine learning models with neural network backbones and specific "event-detection modules" to process images and detect distracted driving events like mobile phone use. (Compl. ¶¶266-267).

III. The Accused Instrumentality

Product Identification

The accused instrumentalities are Samsara’s in-vehicle dashcam devices (including models CM31 and CM32) and its software-as-a-service platform, referred to as the "Connected Operations Cloud" or "Samsara Cloud." The complaint groups these components together as the "Infringing Products." (Compl. ¶¶63-65).

Functionality and Market Context

The Infringing Products operate together to provide fleet management services. The dashcams capture image and video data from the vehicle, while the cloud platform receives this data to provide analytics and control interfaces for customers. (Compl. ¶64). The complaint alleges that the accused system performs automatic calibration of the dashcams and uses AI models to detect unsafe driving behaviors. (Compl. ¶¶107, 161, 266). For example, a side-by-side user interface comparison provided in the complaint suggests Samsara's product offers electronic logging features similar to Motive's. (Compl. p. 16, ¶34). The complaint positions Samsara as a direct competitor that entered the market after Motive and has allegedly struggled to match Motive's technology, resorting to copying and false advertising. (Compl. ¶¶29, 34).

IV. Analysis of Infringement Allegations

U.S. Patent No. 11,875,580 Infringement Allegations

The complaint maps each element of independent claim 1 to the accused functionality as follows (parentheticals cite the complaint and the ’580 Patent):

  • "receiving, over a network from a camera device, a video comprising a set of image frames": The Infringing Products "receive image and video data from dashcams" which are connected to its software platform. (Compl. ¶161; col. 5:1-11)
  • "identifying one or more lines in the video using a predictive model, the one or more lines including a horizon line": The Infringing Products "perform calibration of the dashcams via predictive models." (Compl. ¶161; col. 5:26-34)
  • "computing at least one camera parameter based on the one or more lines": The Infringing Products "perform calibration of the dashcams," which inherently involves computing camera parameters. (Compl. ¶161; col. 5:40-45)
  • "overlaying the one or more lines on the video to generate an overlaid video": The system provides "user-control of such dashcams via a web-based portal and/or mobile application," which suggests a graphical interface where lines may be overlaid for user interaction. (Compl. ¶161; col. 5:46-51)
  • "transmitting the overlaid video to an annotator device for manual review": The system's "web-based portal and/or mobile application" allegedly functions as the annotator device, receiving data from the core platform for user review and control. (Compl. ¶161; col. 5:52-55)
  • "receiving a confirmation from the annotator device, the confirmation indicating that the one or more lines are accurate": The system's "user-control" function implies that user input, serving as a confirmation, is received from the web portal or mobile application. (Compl. ¶161; col. 6:1-4)
  • "transmitting data representing at least one camera parameter to the camera device": After calibration via user control, the resulting camera parameters are used by the system, which implies transmission of the parameter data. (Compl. ¶161; col. 6:28-30)

Identified Points of Contention

  • Scope Questions: The analysis may focus on whether the "user-control ... via a web-based portal" alleged in the complaint (Compl. ¶161) meets the claim requirements of an "annotator device" performing "manual review" and providing "confirmation." A court may need to determine if a fleet manager adjusting a setting constitutes an "annotator" in the sense contemplated by the patent.
  • Technical Questions: The complaint alleges the use of "predictive models" for "calibration," but provides limited detail on the specific technical operation. (Compl. ¶161). A central question may be whether Samsara's models perform the specific function of identifying a "horizon line" and computing parameters from that line, as required by the claim, or if they use a different calibration methodology.

U.S. Patent No. 12,136,276 Infringement Allegations

The complaint maps each element of independent claim 1 to the accused functionality as follows (parentheticals cite the complaint and the ’276 Patent):

  • "receiving an image of a roadway recorded by a camera device installed within a vehicle": The accused dashcams "receive image and video data" as part of their operation. (Compl. ¶253; col. 5:1-11)
  • "detecting a horizon line in the image": The Infringing Products allegedly "perform calibration of the dashcams via predictive models," which the complaint equates with detecting a horizon line. (Compl. ¶253; col. 5:26-34)
  • "overlaying a line on the image to generate an overlaid image": The alleged "user-control ... via a web-based portal" implies the generation of a graphical overlay for user interaction; a "Feature Differentiation" chart allegedly produced by Samsara shows its dashcams can detect events like rolling stops, which requires on-screen analysis. (Compl. ¶¶104, 253; col. 5:46-51)
  • "transmitting the overlaid image to a computing device over a network": Data is transmitted from the dashcam to the "software-as-a-service platform" and made accessible via a "web-based portal," which functions as the computing device. (Compl. ¶253; col. 5:52-55)
  • "receiving a modification of the line from the computing device, the modification comprising a new line at a second position": The "user-control" functionality allegedly allows a user to provide input via the portal that modifies the system's state, which the complaint maps to this limitation. (Compl. ¶253; col. 6:1-4)
  • "computing a camera parameter based on the new line": The complaint alleges that the user's input through the "user-control" feature affects the final "calibration" of the dashcam. (Compl. ¶253; col. 5:40-45)
  • "transmitting data representing the camera parameter to the camera device": The resulting parameters from the user-controlled calibration are transmitted for use by the system. (Compl. ¶253; col. 6:28-30)

Identified Points of Contention

  • Scope Questions: Similar to the ’580 Patent, a primary dispute may be whether general "user-control" of calibration settings qualifies as "receiving a modification of the line... comprising a new line at a second position." This raises the question of whether the claim requires a specific graphical line-editing function that may or may not be present in the accused products.
  • Technical Questions: Evidence will be required to establish that Samsara's calibration process involves the specific sequence of detecting a line, transmitting it for modification, receiving a modified line, and re-computing parameters based on that specific modification, as opposed to a more generalized settings adjustment workflow.

V. Key Claim Terms for Construction

  • The Term: "annotator device" (from ’580 Patent, claim 1) and "computing device" (from ’276 Patent, claim 1)

  • Context and Importance: The infringement theory for both lead patents appears to depend on casting Samsara's "web-based portal and/or mobile application" as the claimed "annotator device" or "computing device" where a user performs review or modification. (Compl. ¶¶161, 253). The construction of this term will be critical to determining whether a standard user interface for fleet managers falls within the scope of a device intended for "manual review" and "confirmation" or "modification" of a predicted horizon line.

  • Intrinsic Evidence for Interpretation:

    • Evidence for a Broader Interpretation: The specification states that an "annotator device can comprise a workstation or web-based application allowing for human review," which could be read broadly to include any computer or mobile device used by a customer to interact with the system. (’580 Patent, col. 5:65-68).
    • Evidence for a Narrower Interpretation: The patent repeatedly refers to a "human annotator" performing "manual review," which may suggest a more specialized role or a more structured verification workflow than a typical end-user adjusting a setting. (’580 Patent, col. 6:6-12). The term "annotator" itself may imply a task more specific than general "user control."
  • The Term: "identifying one or more lines in the video using a predictive model, the one or more lines including a horizon line" (from ’580 Patent, claim 1)

  • Context and Importance: This term is central to the technical mechanism of infringement. The complaint alleges that Samsara's "calibration ... via predictive models" meets this limitation. (Compl. ¶161). The dispute will likely turn on whether Samsara's accused calibration technology actually identifies a geometric "horizon line" as the basis for its calculations, or if it uses other visual cues or methods that do not require explicit horizon detection. The complaint provides visuals of Samsara's marketing, which claim its AI can detect "tailgating" and "distracted driving," suggesting the system performs some form of advanced image analysis. (Compl. p. 6).

  • Intrinsic Evidence for Interpretation:

    • Evidence for a Broader Interpretation: The term "predictive model" is broad, and the patent suggests it can be "a convolutional neural network." (’580 Patent, col. 5:29-31). This could encompass a wide range of modern AI-based image analysis systems.
    • Evidence for a Narrower Interpretation: The claim explicitly requires the model to identify "a horizon line." The specification is heavily focused on this specific geometric feature as the input for computing camera parameters like height and road plane normal. (’580 Patent, col. 1:40-45). An accused system that calibrates without specifically identifying and using a horizon line may be argued to fall outside the claim's scope.

VI. Other Allegations

  • Indirect Infringement: The complaint does not contain specific counts for indirect or contributory infringement in its causes of action for patent infringement.
  • Willful Infringement: The complaint does not allege willful infringement and does not request enhanced damages for patent infringement in its prayer for relief.

VII. Analyst’s Conclusion: Key Questions for the Case

  • A core issue will be one of definitional scope: can the term "annotator device," used in the context of a structured "manual review" and "confirmation" workflow, be construed to cover a general-purpose "web-based portal" used by a fleet manager for "user-control" of camera settings?
  • A key evidentiary question will be one of technical mechanism: what proof exists that Samsara's automated camera "calibration" system performs the specific, multi-step process recited in the claims—namely, using a predictive model to identify a geometric "horizon line" and computing parameters based on that identified or user-modified line?
  • A central question for the ’243 Patent will be one of architectural equivalence: does Samsara's use of distinct "event-detection modules" for tasks like phone usage detection meet the claim requirement of a multi-head architecture where different "prediction heads" are coupled to a common "backbone network" to improve the network's training through a joint loss function?