PTAB
IPR2026-00108
Samsara Inc. v. Motive Technologies, Inc.
1. Case Identification
- Case #: IPR2026-00108
- Patent #: 12,062,243
- Filed: November 14, 2025
- Petitioner(s): Samsara Inc.
- Patent Owner(s): Motive Technologies, Inc.
- Challenged Claims: 1-33
2. Patent Overview
- Title: Distracted Driver Detection
- Brief Description: The ’243 patent describes a computer-implemented, multitask training pipeline for detecting distracted driving. The system uses a convolutional neural network (CNN) with a shared feature-extraction backbone coupled to multiple task-specific prediction heads, such as a distraction-classification head, an object-detection head, and a pose-estimation head, to jointly learn and classify driver behaviors in real time.
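The shared-backbone/multiple-heads layout described above can be sketched in code. This is a purely illustrative, stdlib-only sketch: every function name, value, and output format here is hypothetical and is not drawn from the ’243 patent or any cited reference; the point is only the architectural shape, in which the backbone runs once and all heads consume the same features.

```python
# Illustrative sketch of a multi-task layout: one shared feature
# extractor ("backbone") feeding several task-specific heads.
# All names, shapes, and thresholds below are hypothetical.

def backbone(frame):
    """Stand-in for a CNN feature extractor: maps a frame to features."""
    return [sum(frame) / len(frame), max(frame), min(frame)]

def distraction_head(features):
    """Stand-in classification head: maps features to a categorical label."""
    return "distracted" if features[0] > 0.5 else "attentive"

def detection_head(features):
    """Stand-in object-detection head: emits a (hypothetical) bounding box."""
    return {"x": features[1], "y": features[2], "w": 1.0, "h": 1.0}

def pose_head(features):
    """Stand-in pose-estimation head: emits (hypothetical) keypoints."""
    return [(features[0], features[1])]

def run_pipeline(frame):
    # The key architectural point: the backbone runs once per frame,
    # and every head consumes the SAME shared feature vector.
    features = backbone(frame)
    return {
        "distraction": distraction_head(features),
        "detections": detection_head(features),
        "pose": pose_head(features),
    }

print(run_pipeline([0.9, 0.2, 0.7]))
```

In a real system the backbone would be a deep convolutional network and each head a small learned sub-network, but the dataflow (shared features fanned out to parallel heads) is the same.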
3. Grounds for Unpatentability
Ground 1: Obviousness over Chun and Chao - Claims 1-2, 6-14, 18-20, and 24-33 are obvious over Chun in view of Chao.
- Prior Art Relied Upon: Chun (a 2019 conference paper on driver and seat belt detection) and Chao (Application # 2020/0242381).
- Core Argument for this Ground:
- Prior Art Mapping: Petitioner argued that Chun taught the core architecture of the ’243 patent: a multi-task CNN for in-vehicle monitoring that uses a shared feature pyramid network (FPN) backbone with multiple prediction heads for tasks like pose estimation and seatbelt detection. However, Chun did not explicitly name a distraction-classification head. Chao supplied this missing element by teaching a CNN architecture with a classification head that consumes backbone features to output specific categorical distraction labels (e.g., “Drink,” “Call,” “Text”). Chao also expressly taught using loss minimization and backpropagation to train the network, and storing the trained model in memory for inference.
- Motivation to Combine (for §103 grounds): A POSITA would combine these references because both addressed the same problem of distracted driving using compatible, modular CNN architectures. Petitioner asserted that Chun identified distracted behaviors (e.g., drinking, phone use) as the problem it was solving; adding Chao’s explicit classification head was the natural next step to convert the features Chun already extracted into the desired categorical distraction outputs.
- Expectation of Success (for §103 grounds): Success was expected because adding a standard classification head to a shared FPN backbone was a routine multi-task extension. The combination involved applying a known technique (Chao's classification head) to a known system (Chun's backbone) to yield a predictable result.
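The loss-minimization training that Ground 1 attributes to Chao can be sketched abstractly as minimizing a weighted sum of per-head losses over shared parameters. The toy below is a hedged illustration of that general technique only: the single parameter, the quadratic losses, and the weights are all invented for the example, and a finite-difference gradient stands in for backpropagation.

```python
# Toy sketch of multi-task training by loss minimization. One shared
# parameter "w" stands in for backbone weights; each head contributes
# its own (hypothetical) loss, and training minimizes their weighted sum.

def multitask_loss(w, weights=(1.0, 0.5, 0.5)):
    cls_loss = (w - 1.0) ** 2   # distraction-classification head
    det_loss = (w - 2.0) ** 2   # object-detection head
    pose_loss = w ** 2          # pose-estimation head
    a, b, c = weights
    return a * cls_loss + b * det_loss + c * pose_loss

def train(w=0.0, lr=0.1, steps=200):
    for _ in range(steps):
        # Finite-difference gradient, standing in for backpropagation.
        eps = 1e-6
        grad = (multitask_loss(w + eps) - multitask_loss(w - eps)) / (2 * eps)
        w -= lr * grad
    return w

print(round(train(), 3))  # converges toward the joint minimum at w = 1.0
```

For this choice of weights the total gradient is 4w - 4, so gradient descent converges to w = 1.0, the point that best balances the three competing head losses.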
Ground 2: Obviousness over Chun, Chao, and Shanmugamani - Claims 3-5, 10, 15-17, and 21-23 are obvious over Chun and Chao further in view of Shanmugamani.
- Prior Art Relied Upon: Chun, Chao, and Shanmugamani (a 2018 book on deep learning for computer vision).
- Core Argument for this Ground:
- Prior Art Mapping: This ground built upon the Chun-Chao combination to address dependent claims reciting specific, conventional CNN components. Petitioner argued that claims 3-5 require particular internal structures for the object-detection and pose-estimation heads, such as a bounding-box regression network, and hidden layers comprising specific sequences of convolution, batch normalization, and activation layers. While Chun and Chao taught the high-level system, Shanmugamani was presented as a contemporaneous textbook that explicitly taught these exact implementation details and provided standard TensorFlow recipes for building such detection and classification heads.
- Motivation to Combine (for §103 grounds): A POSITA implementing the system of Chun (which used TensorFlow) would naturally consult a reference like Shanmugamani for standard TensorFlow layer patterns and implementation details. The motivation was simply to build the object-detection and pose-estimation heads of the primary combination using the conventional, off-the-shelf components Shanmugamani taught for those exact tasks.
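The convolution, batch normalization, activation sequence recited by the dependent claims addressed in Ground 2 is a conventional hidden-layer pattern. The stdlib-only sketch below illustrates that ordering on a 1-D toy input; it is an assumption-laden simplification (no learned batch-norm scale/shift, no channels) and not the implementation of any cited reference.

```python
# Illustrative sketch of the conventional hidden-layer sequence:
# convolution -> batch normalization -> activation. Hypothetical,
# 1-D, stdlib-only; real heads use multi-channel 2-D layers.

def conv1d(xs, kernel):
    """Valid 1-D convolution (cross-correlation, as in CNN libraries)."""
    k = len(kernel)
    return [sum(xs[i + j] * kernel[j] for j in range(k))
            for i in range(len(xs) - k + 1)]

def batch_norm(xs, eps=1e-5):
    """Normalize to zero mean, unit variance (no learned scale/shift here)."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / (var + eps) ** 0.5 for x in xs]

def relu(xs):
    """Rectified linear activation."""
    return [max(0.0, x) for x in xs]

def hidden_block(xs, kernel):
    # The claimed ordering: convolution, then batch norm, then activation.
    return relu(batch_norm(conv1d(xs, kernel)))

out = hidden_block([1.0, 2.0, 3.0, 4.0], [0.5, 0.5])
print(out)
```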
Ground 3: Obviousness over Zhengyang and He - Claims 1-6, 9-18, and 20-24 are obvious over Zhengyang in view of He.
- Prior Art Relied Upon: Zhengyang (Chinese Publication # CN111860253) and He (a 2017 conference paper on the Mask R-CNN framework).
- Core Argument for this Ground:
- Prior Art Mapping: Petitioner argued that Zhengyang taught a unified, multi-task CNN for driver-attribute recognition with a shared SE-ResNet backbone and multiple prediction heads for tasks like detecting smoking and phone calls (distraction classification) and object localization. However, Zhengyang did not explicitly disclose a pose-estimation head. He provided this element, teaching a modular keypoint (pose-estimation) head that could be readily integrated into a shared backbone architecture (Mask R-CNN) to run in parallel with object detection and segmentation heads.
- Motivation to Combine (for §103 grounds): A POSITA would combine the references to improve the accuracy of Zhengyang’s driver state recognition system, a goal explicitly stated in Zhengyang. Adding a pose-estimation head from He would provide highly discriminative features for distraction detection (e.g., head tilt, arm position). The combination was straightforward because He’s head was designed to be a modular addition to a multi-task framework like Zhengyang's.
- Expectation of Success (for §103 grounds): A POSITA would have had a high expectation of success because both references employed standard deep learning building blocks, including ResNet-style backbones and modular task heads. Adding a pose-estimation head to a shared backbone was a well-understood practice.
Additional Grounds: Petitioner asserted an additional obviousness challenge (Ground 4) based on Zhengyang, He, and Chao for claims 7-8, 19, and 25-33. This ground primarily used Chao to explicitly teach performing downstream actions (e.g., alerts, vehicle control modifications) based on the distraction tags generated by the Zhengyang-He system.
4. Relief Requested
- Petitioner requests institution of IPR and cancellation of claims 1-33 of the ’243 patent as unpatentable.