DCT

2:24-cv-00722

NEC Corp v. Yi Tech Inc

I. Executive Summary and Procedural Information

  • Parties & Counsel:
  • Case Identification: 2:24-cv-00722, E.D. Tex., 09/03/2024
  • Venue Allegations: Venue is alleged to be proper because the Defendant is a foreign corporation not resident in the United States, which may be sued in any judicial district.
  • Core Dispute: Plaintiff alleges that Defendant’s surveillance cameras, associated software, and cloud services infringe six patents related to object detection, video analysis, user interface controls, and machine learning for surveillance systems.
  • Technical Context: The technology at issue resides in the field of AI-enhanced video surveillance, a market focused on improving security and automation through features like intelligent object recognition, customizable alerts, and machine learning-based video analysis.
  • Key Procedural History: The complaint alleges that Plaintiff sent a letter to Defendant on June 3, 2024, offering to license the patents-in-suit and identifying specific accused products. Defendant allegedly did not engage in licensing negotiations, which forms the basis for Plaintiff's allegations of pre-suit knowledge and willful infringement.

Case Timeline

Date Event
2014-09-16 U.S. Patent No. 10,223,619 Priority Date
2015-02-17 U.S. Patent No. 10,970,995 Priority Date
2015-03-19 U.S. Patent No. 11,373,061 Priority Date
2015-03-27 U.S. Patent No. 10,769,468 Priority Date
2017-03-17 U.S. Patent No. 10,706,336 Priority Date
2018-05-07 U.S. Patent No. 11,537,814 Priority Date
2019-03-05 U.S. Patent No. 10,223,619 Issued
2020-07-07 U.S. Patent No. 10,706,336 Issued
2020-09-08 U.S. Patent No. 10,769,468 Issued
2021-04-06 U.S. Patent No. 10,970,995 Issued
2022-06-28 U.S. Patent No. 11,373,061 Issued
2022-12-27 U.S. Patent No. 11,537,814 Issued
2024-06-03 Plaintiff sent notice letter to Defendant
2024-09-03 Complaint Filed

II. Technology and Patent(s)-in-Suit Analysis

U.S. Patent No. 11,373,061 - "Object Detection Device, Object Detection Method, and Recording Medium"

  • Issued: June 28, 2022

The Invention Explained

  • Problem Addressed: The patent’s background section states that, at the time of the invention, creating high-quality "teacher data" for training object detection models was inefficient, often requiring multiple operational steps by a user or sacrificing accuracy for ease of use (Compl. ¶28; ’061 Patent, col. 2:14-30).
  • The Patented Solution: The invention claims to solve this problem with a system that streamlines the data refinement process. The system first detects an object in an image and displays it, then allows a user to define a more precise "second region" by indicating at least two points on the image, and finally "generates teacher data" based on this user-defined second region to improve the detection model (Compl. ¶27, ¶29; ’061 Patent, Abstract).
  • Technical Importance: This approach seeks to improve the accuracy of machine learning models by providing an efficient, interactive method for users to correct or refine the data used for training object detectors (Compl. ¶29).

Key Claims at a Glance

  • The complaint asserts at least independent claim 1 (Compl. ¶30). A conceptual sketch of the recited workflow follows the element listing below.
  • Claim 1 of the ’061 Patent recites a data generating system comprising:
    • at least one memory storing instructions; and
    • at least one processor configured to execute instructions to:
      • detect an object from an image;
      • control a display device to display a first region of the detected object;
      • receive, through an input device, an input defining a second region in the image, the input being performed by indicating at least two points in the image; and
      • generate teacher data based on the second region.
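
For illustration only, the following minimal Python sketch mirrors the workflow recited in claim 1 (detect an object, display a first region, accept a two-point input defining a second region, and generate teacher data). It is a conceptual sketch under stated assumptions: every function, type, and value is hypothetical and does not describe the patentee's embodiments or the accused products' code.

```python
# Conceptual sketch only: a hypothetical data-generating flow mirroring the
# elements of claim 1 of the '061 patent (detect, display a first region,
# accept a two-point user input defining a second region, emit teacher data).
# All names and data structures are illustrative, not drawn from any party's code.
from dataclasses import dataclass
from typing import Tuple

Point = Tuple[int, int]          # (x, y) pixel coordinates
Box = Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max)


@dataclass
class TeacherExample:
    """One unit of 'teacher data': an image reference plus a refined region label."""
    image_id: str
    region: Box
    label: str


def detect_object(image_id: str) -> Box:
    """Stand-in for a trained detector; returns a coarse 'first region'."""
    return (40, 60, 200, 220)  # placeholder detection


def display_first_region(image_id: str, region: Box) -> None:
    """Stand-in for controlling a display device to show the detected region."""
    print(f"[display] {image_id}: first region {region}")


def region_from_two_points(p1: Point, p2: Point) -> Box:
    """Build the user-defined 'second region' from at least two indicated points."""
    (x1, y1), (x2, y2) = p1, p2
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))


def generate_teacher_data(image_id: str, second_region: Box, label: str) -> TeacherExample:
    """Package the refined region as teacher data for later model (re)training."""
    return TeacherExample(image_id=image_id, region=second_region, label=label)


if __name__ == "__main__":
    img = "frame_0001"
    first = detect_object(img)                              # detect an object from an image
    display_first_region(img, first)                        # display the first region
    second = region_from_two_points((50, 70), (190, 210))   # input indicating two points
    print(generate_teacher_data(img, second, label="person"))
```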

U.S. Patent No. 10,223,619 - "Video Monitoring Apparatus, Control Apparatus, Control Method, and Non-Transitory Readable Storage Medium"

  • Issued: March 5, 2019

The Invention Explained

  • Problem Addressed: The patent identifies a limitation in conventional video analysis systems, which were capable of detecting only simple behaviors like a person crossing a line, but struggled to detect more complicated, multi-step behaviors (Compl. ¶46; ’619 Patent, col. 1:19-33, 1:64-67).
  • The Patented Solution: The invention provides a system that can recognize complex behaviors by evaluating a logical relationship between two or more distinct events. The apparatus includes separate storage for a first condition (for a first event) and a second condition (for a second event), and an output circuit that generates a signal only when a pre-designated logical relationship (e.g., event A THEN event B) between the two detected events is satisfied (Compl. ¶45, ¶47; ’619 Patent, Abstract, col. 2:6-13).
  • Technical Importance: This technology enables more sophisticated surveillance by moving beyond simple, isolated event triggers to the recognition of compound behaviors, allowing for more context-aware and meaningful alerts (’619 Patent, col. 1:67-2:2).

Key Claims at a Glance

  • The complaint asserts at least independent claim 1 (Compl. ¶48). A conceptual sketch of the recited event logic follows the element listing below.
  • Claim 1 of the ’619 Patent recites a video monitoring apparatus comprising:
    • a first storage which stores a first condition for detecting a first event from a video;
    • a second storage which stores a second condition for detecting a second event from the video;
    • an output circuit which outputs a signal indicating that a logical relationship designated in advance by the first event and the second event is satisfied;
    • a first input circuit which receives an input of the first condition;
    • a second input circuit which receives an input of the second condition; and
    • a conditional operator input circuit which receives an input of a conditional operator indicating the logical relationship.
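
For illustration only, the following Python sketch shows one way a pre-designated logical relationship (a contemporaneous "AND" or a sequential "THEN") between two separately stored event conditions could be evaluated. It is a hypothetical sketch; the predicates, operator names, and time window are assumptions rather than a description of the claimed apparatus or the accused system. The difference between the two operators foreshadows the "first event"/"second event" dispute discussed in Sections IV and V.

```python
# Conceptual sketch only: evaluating a pre-designated logical relationship
# between two separately configured event conditions, in the spirit of
# claim 1 of the '619 patent. Names and thresholds are illustrative.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Detection:
    kind: str        # e.g., "loitering", "entry", "motion", "human"
    timestamp: float


# "First condition" and "second condition" held separately as predicates.
first_condition: Callable[[Detection], bool] = lambda d: d.kind == "loitering"
second_condition: Callable[[Detection], bool] = lambda d: d.kind == "entry"


def relationship_satisfied(detections: List[Detection],
                           operator: str = "THEN",
                           window_s: float = 3600.0) -> bool:
    """Output-circuit analogue: signal when the designated relationship holds.

    "AND"  -> both events observed (any order).
    "THEN" -> a first event is followed by a second event within window_s.
    """
    firsts = [d for d in detections if first_condition(d)]
    seconds = [d for d in detections if second_condition(d)]
    if operator == "AND":
        return bool(firsts) and bool(seconds)
    if operator == "THEN":
        return any(0 <= s.timestamp - f.timestamp <= window_s
                   for f in firsts for s in seconds)
    raise ValueError(f"unsupported operator: {operator}")


if __name__ == "__main__":
    events = [Detection("loitering", 100.0), Detection("entry", 1900.0)]
    print(relationship_satisfied(events, "THEN"))  # True: loitering, then entry
    print(relationship_satisfied(events, "AND"))   # True: both events occurred
```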

Multi-Patent Capsule: U.S. Patent No. 10,769,468

  • Patent Identification: "Mobile Surveillance Apparatus, Program, and Control Method," issued September 8, 2020 (Compl. ¶11).
  • Technology Synopsis: The patent addresses poor operability in mobile surveillance applications where a single user action, like a finger slide, could have multiple meanings. The invention provides a method to switch between different processes (e.g., setting an event detection region versus changing the display range) based on a distinct touch operation performed at multiple places in a predetermined order, enhancing user control (Compl. ¶62, ¶64; ’468 Patent, col. 1:20-41, 2:9-12).
  • Asserted Claims: At least Claim 9 is asserted (Compl. ¶65).
  • Accused Features: The complaint accuses features in the YI Home Application that allow users to interact with a surveillance image on a touch panel to perform processes like setting an "Activity Detection Zone" (Compl. ¶66, ¶67, ¶71).
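
For illustration only, the following Python sketch shows one hypothetical way a touch sequence hitting predetermined places in a predetermined order could select between two processes, as described in the synopsis above. The coordinates, region names, and process names are assumptions, not the ’468 patent's embodiments or the YI Home Application's implementation.

```python
# Conceptual sketch only: switching between two processes (set a detection
# zone vs. pan the displayed range) based on whether a touch sequence hits
# predetermined places in a predetermined order, in the spirit of the '468
# patent's description. Coordinates and process names are illustrative.
from typing import List, Tuple

Point = Tuple[int, int]

# Hypothetical "predetermined places" (screen rectangles) that must be
# touched in this order to enter zone-editing mode.
PREDETERMINED_PLACES = [
    (0, 0, 100, 100),       # top-left corner area
    (540, 860, 640, 960),   # bottom-right corner area
]


def _inside(p: Point, rect: Tuple[int, int, int, int]) -> bool:
    x, y = p
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1


def select_process(touches: List[Point]) -> str:
    """Return which process a touch/slide sequence should drive."""
    ordered_hit = (len(touches) >= len(PREDETERMINED_PLACES) and
                   all(_inside(t, r) for t, r in zip(touches, PREDETERMINED_PLACES)))
    return "set_detection_region" if ordered_hit else "change_display_range"


if __name__ == "__main__":
    print(select_process([(50, 50), (600, 900)]))    # set_detection_region
    print(select_process([(300, 300), (320, 340)]))  # change_display_range
```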

Multi-Patent Capsule: U.S. Patent No. 10,706,336

  • Patent Identification: "Recognition in Unlabeled Videos with Domain Adversarial Learning and Knowledge Distillation," issued July 7, 2020 (Compl. ¶12).
  • Technology Synopsis: The patent addresses the challenge of recognizing objects in unlabeled videos when training data consists of labeled still images. The solution involves pre-training a recognition engine on labeled still images and then adapting it to the video domain using techniques like domain adversarial learning, which leverages the still images, synthetically degraded versions of those images, and the unlabeled video frames to improve recognition accuracy in the target video domain (Compl. ¶80, ¶81; ’336 Patent, Abstract).
  • Asserted Claims: At least Claim 1 is asserted (Compl. ¶82).
  • Accused Features: The complaint accuses features marketed as “Advance Human Detection” and “Automated AI alerts,” which allegedly use convolutional neural networks (CNNs) to recognize objects in video sequences (Compl. ¶82, ¶83, ¶84).
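
For illustration only, the following sketch shows a generic gradient-reversal domain-adversarial training step of the kind commonly used to adapt an image-trained recognizer toward unlabeled video frames. It assumes PyTorch is available, every module name and dimension is invented, and it is not presented as the ’336 patent's claimed method or the accused products' implementation.

```python
# Conceptual sketch only: a generic gradient-reversal domain-adversarial
# training step (one common way to adapt a recognizer trained on labeled
# still images toward unlabeled video frames). NOT the '336 patent's claimed
# method or any party's code; assumes PyTorch; all names/dimensions invented.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128), nn.ReLU())
label_classifier = nn.Linear(128, 10)    # trained on labeled still images
domain_classifier = nn.Linear(128, 2)    # still image (0) vs. video frame (1)
params = (list(feature_extractor.parameters()) + list(label_classifier.parameters())
          + list(domain_classifier.parameters()))
optimizer = torch.optim.SGD(params, lr=1e-3)

# Toy batches: labeled stills (source domain) and unlabeled video frames (target domain).
stills, still_labels = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
frames = torch.randn(8, 3, 32, 32)

feats_src = feature_extractor(stills)
feats_tgt = feature_extractor(frames)

# Supervised classification loss on the labeled source domain only.
cls_loss = F.cross_entropy(label_classifier(feats_src), still_labels)

# Adversarial domain loss through the gradient-reversal layer pushes the
# feature extractor toward domain-invariant features.
feats_all = torch.cat([feats_src, feats_tgt])
domains = torch.cat([torch.zeros(8, dtype=torch.long), torch.ones(8, dtype=torch.long)])
dom_loss = F.cross_entropy(domain_classifier(GradReverse.apply(feats_all, 1.0)), domains)

optimizer.zero_grad()
(cls_loss + dom_loss).backward()
optimizer.step()
```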

Multi-Patent Capsule: U.S. Patent No. 10,970,995

  • Patent Identification: "System for Monitoring Event Related Data," issued April 6, 2021 (Compl. ¶13).
  • Technology Synopsis: The patent addresses surveillance scenarios where a camera's fixed view may not adequately capture the full extent of an event. The invention is a control method that detects an event, identifies the type of event, and then controls a predetermined imaging range of the camera based on that identified type, allowing the system to dynamically adjust its view to better capture the event (Compl. ¶97, ¶99; ’995 Patent, col. 1:19-37).
  • Asserted Claims: At least Claim 6 is asserted (Compl. ¶100).
  • Accused Features: The complaint targets products with object detection and imaging range adjustment, such as PTZ cameras with "Motion Tracking," which allegedly identify an event type (e.g., human detection) and control the camera's imaging range accordingly (Compl. ¶100, ¶101, ¶106).
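
For illustration only, the following Python sketch shows a hypothetical control step that selects a predetermined imaging range based on the identified event type, as the synopsis above describes. The event types, PTZ presets, and interface are assumptions, not the ’995 patent's embodiments or the accused cameras' firmware.

```python
# Conceptual sketch only: choosing a predetermined imaging range based on the
# identified type of a detected event, in the spirit of the '995 patent's
# described control method. Event types, presets, and the PTZ interface are
# illustrative assumptions, not any party's actual API.
from typing import Dict, Tuple

# Hypothetical preset imaging ranges keyed by event type: (pan, tilt, zoom).
PRESET_RANGES: Dict[str, Tuple[float, float, float]] = {
    "human":   (0.0, -5.0, 2.0),   # zoom in on a person
    "vehicle": (15.0, 0.0, 1.2),   # widen toward the driveway
    "default": (0.0, 0.0, 1.0),
}


def classify_event(detection: dict) -> str:
    """Stand-in for the step that identifies the type of the detected event."""
    return detection.get("type", "default")


def control_imaging_range(detection: dict) -> Tuple[float, float, float]:
    """Control the camera's predetermined imaging range based on the event type."""
    event_type = classify_event(detection)
    pan, tilt, zoom = PRESET_RANGES.get(event_type, PRESET_RANGES["default"])
    print(f"[ptz] event={event_type} -> pan={pan} tilt={tilt} zoom={zoom}")
    return pan, tilt, zoom


if __name__ == "__main__":
    control_imaging_range({"type": "human", "bbox": (120, 80, 220, 300)})
```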

Multi-Patent Capsule: U.S. Patent No. 11,537,814

  • Patent Identification: "Data Providing System and Data Collection System," issued December 27, 2022 (Compl. ¶14).
  • Technology Synopsis: This patent addresses the problem of skewed training data, which degrades the accuracy of machine learning models. The invention provides a system that identifies an object using a learned model and then determines whether to transmit that data to a central computer based on the result, thereby enabling the collection of data that can be used to improve the model's identification accuracy (Compl. ¶115, ¶116, ¶117; ’814 Patent, col. 2:2-23, Abstract).
  • Asserted Claims: At least Claim 1 is asserted (Compl. ¶118).
  • Accused Features: The complaint accuses products with AI learning functionality that allegedly identify objects using a learned model and transmit data to a server for processing (Compl. ¶118, ¶119).
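
For illustration only, the following Python sketch shows one hypothetical policy for deciding whether locally identified data should be transmitted to a central computer based on the identification result (for example, low confidence or an under-represented class). The thresholds and field names are assumptions, not the ’814 patent's claimed system or the accused products' behavior.

```python
# Conceptual sketch only: deciding whether to transmit locally captured data
# to a central computer based on the result of identification by a learned
# model (e.g., send low-confidence or under-represented cases to help
# rebalance training data). Thresholds and field names are illustrative.
from typing import Dict, Optional


def identify(sample: Dict) -> Dict:
    """Stand-in for on-device identification by a learned model."""
    return {"label": sample.get("label", "unknown"),
            "confidence": sample.get("confidence", 0.0)}


def should_transmit(result: Dict, class_counts: Dict[str, int],
                    conf_threshold: float = 0.6, max_per_class: int = 1000) -> bool:
    """Transmit when the model is unsure or the class is under-represented."""
    under_represented = class_counts.get(result["label"], 0) < max_per_class
    return result["confidence"] < conf_threshold or under_represented


def process(sample: Dict, class_counts: Dict[str, int]) -> Optional[Dict]:
    result = identify(sample)
    if should_transmit(result, class_counts):
        return {"payload": sample, "result": result}   # would be sent to the server
    return None                                        # kept local / discarded


if __name__ == "__main__":
    counts = {"person": 5000, "bicycle": 12}
    print(process({"label": "bicycle", "confidence": 0.92}, counts))  # transmitted
    print(process({"label": "person", "confidence": 0.95}, counts))   # not transmitted
```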

III. The Accused Instrumentality

Product Identification

  • The accused instrumentalities are a range of surveillance products sold under the "YI" and "Kami" brands, including but not limited to the YI Pro 2, YI Home 3 Camera, Kami Doorbell Camera, and Kami Outdoor Security Camera. The infringement allegations also extend to associated software, such as the YI Home Application and Kami Home application, and related cloud services like KamiCloud (Compl. ¶30, ¶48).

Functionality and Market Context

  • The accused products are AI-powered smart cameras that perform real-time analysis of video feeds (Compl. ¶20). Key accused features include "Smart Detection," "Advanced Human Detection," and facial recognition, which are designed to distinguish between different types of motion (e.g., humans, animals, vehicles) to reduce false alarms (Compl. ¶30, ¶36, ¶48). Users can define specific "activity detection zone(s)" or "exclusion zones" within the camera's field of view via a mobile application, which controls when alerts are sent to their smartphones (Compl. ¶30, ¶37). The complaint provides a screenshot from a product webpage illustrating how a user can draw a box to define an "Activity Zone" for motion alerts (Compl. p. 19). These features position the products in the competitive smart home security market, where intelligent and customizable alerts are a primary value proposition.
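
For illustration only, the following Python sketch shows the kind of zone-based alert filtering the complaint describes (notify only when a qualifying detection falls within a user-drawn zone). It is a hypothetical sketch whose geometry and names are assumptions, and it provides context for the later question of whether such zones merely filter alerts or also generate "teacher data."

```python
# Conceptual sketch only: zone-based alert filtering of the kind described in
# the complaint's account of "activity zones" (notify only when detected
# motion overlaps a user-drawn rectangle). Geometry and names are
# illustrative assumptions, not a description of the accused products' code.
from typing import Tuple

Box = Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max)


def overlaps(a: Box, b: Box) -> bool:
    """True when two axis-aligned rectangles intersect."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])


def should_alert(detection_box: Box, activity_zone: Box, is_human: bool,
                 human_only: bool = True) -> bool:
    """Send a notification only for motion inside the zone (optionally humans only)."""
    if human_only and not is_human:
        return False
    return overlaps(detection_box, activity_zone)


if __name__ == "__main__":
    zone = (100, 100, 400, 400)                                       # user-drawn activity zone
    print(should_alert((150, 150, 200, 250), zone, is_human=True))    # True: inside zone
    print(should_alert((500, 500, 550, 560), zone, is_human=True))    # False: outside zone
```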

IV. Analysis of Infringement Allegations

U.S. Patent No. 11,373,061 Infringement Allegations

Claim Element (from Independent Claim 1) and Alleged Infringing Functionality, with Complaint and Patent Citations

  • “an object detection device comprising at least one processor configured to execute instructions to: detect an object from an image”: The accused products include processors that execute instructions to detect objects using features like “Smart Detection” and “Advanced Human Detection” (Compl. ¶31; ’061 Patent, col. 1:35-39).
  • “control a display device to display a first region of the detected object”: In combination with the YI Home Application, the products control a display device (e.g., a smartphone) to display a region showing the detected object (Compl. ¶31; ’061 Patent, col. 1:40-43).
  • “receive, through an input device, an input defining a second region in the image, the input being performed by indicating at least two points in the image”: The complaint alleges this element is met by features that allow users to set “activity detection zones and/or exclusion zones,” referencing tutorial videos that show users how to define these regions (Compl. ¶32, ¶37; ’061 Patent, col. 1:44-48).
  • “and generate teacher data based on the second region”: The complaint alleges the products generate teacher data from the user-defined second region, pointing to marketing claims that the products “Help Improve Smart Detection Accuracy” (Compl. ¶32, ¶30; ’061 Patent, col. 1:49-51).
  • Identified Points of Contention:
    • Scope Question: A primary issue may be whether the user action of drawing a rectangular "activity zone" in the accused application, as shown in a tutorial screenshot (Compl. p. 19), constitutes "indicating at least two points" as recited in the claim.
    • Technical Question: A key factual question will be what evidence supports the allegation that the user-defined zones are used for "generating teacher data" to retrain or update the underlying detection model, as opposed to merely acting as a real-time filter for user notifications.

U.S. Patent No. 10,223,619 Infringement Allegations

Claim Element (from Independent Claim 1) and Alleged Infringing Functionality, with Complaint and Patent Citations

  • “a first storage which stores a first condition for detecting a first event from a video”: The accused products store a first user-defined condition, such as enabling “Motion Detection” in the Smart Detection Settings menu (Compl. ¶49, ¶51; ’619 Patent, col. 2:7-8).
  • “a second storage which stores a second condition for detecting a second event from the video”: The products also store a second user-defined condition, such as enabling “Human Detection” or defining a specific “Activity Zone” (Compl. ¶49, ¶51; ’619 Patent, col. 2:9-11). A screenshot shows these distinct settings options (Compl. p. 18).
  • “an output circuit which outputs a signal indicating that a logical relationship designated in advance by the first event and the second event is satisfied”: The system outputs a signal (a smartphone notification) only when the combined conditions are met (e.g., motion is detected AND it is classified as human), satisfying an implicit logical “AND” relationship (Compl. ¶49; ’619 Patent, col. 2:11-13).
  • “a first input circuit...a second input circuit...and a conditional operator input circuit...”: The user interface of the mobile application, which allows users to toggle settings and draw zones, allegedly functions as the claimed input circuits for receiving the conditions and their logical relationship (Compl. ¶50; ’619 Patent, col. 6:8-15).
  • Identified Points of Contention:
    • Scope Question: The analysis may focus on whether applying multiple conditions (e.g., "is motion" and "is human") to a single detected occurrence meets the claim language of detecting a "first event" and a "second event." The patent's examples describe sequential actions, such as "break-opening a key and then enter thereinto" (’619 Patent, col. 2:1-2), raising the question of whether the claim requires two temporally or logically distinct events.
    • Technical Question: A question may arise as to whether the software toggles and settings menus in the accused application can be mapped to the distinct hardware-like structures recited in the claim (e.g., “first storage,” “second storage,” “output circuit,” “input circuit”).

V. Key Claim Terms for Construction

For the ’061 Patent:

  • The Term: "generating teacher data"
  • Context and Importance: This term is central to the patent's purpose. The infringement case may depend on whether the accused products' functionality of setting "activity zones" constitutes more than a simple alert filter and is actually used to refine or retrain the object detection model, as the phrase "teacher data" implies in the machine learning context.
  • Intrinsic Evidence for Interpretation:
    • Evidence for a Broader Interpretation: The patent states its objective is to provide an object detection device that “efficiently creates good-quality teacher data” (’061 Patent, col. 2:29-30), which could suggest that any user input that improves system performance falls within the term's scope.
    • Evidence for a Narrower Interpretation: The patent describes a “learning unit” that uses the “training data to learn the dictionary” (’061 Patent, Abstract). This suggests a specific machine learning process in which a model's parameters (the “dictionary”) are updated, a potentially narrower function than simply applying a user-defined filter.

For the ’619 Patent:

  • The Term: "a first event" and "a second event"
  • Context and Importance: The claim requires two distinct events to be detected before a logical relationship is evaluated. The dispute will likely focus on whether the accused products, which apply multiple conditions to a single motion detection, are actually detecting two separate "events."
  • Intrinsic Evidence for Interpretation:
    • Evidence for a Broader Interpretation: The claim requires only a "logical relationship" between the events, which could be argued to include a contemporaneous "AND" condition applied to a single occurrence (e.g., an event is both "motion" AND "human").
    • Evidence for a Narrower Interpretation: The specification provides examples of complex, sequential behaviors, such as a thief "staying in front of a door for 60 minutes or more" and then "entering forcibly through the door," or a vehicle "passing a road westbound" and then "passing a road eastbound" to detect a U-turn (’619 Patent, col. 5:20-33). This suggests the invention is directed at two distinct occurrences separated in time or space.

VI. Other Allegations

  • Indirect Infringement: The complaint alleges inducement of infringement, stating that Defendant provides user manuals, websites, and YouTube tutorial videos that instruct customers on how to use the accused features, such as setting up "Activity Detection Zones" (Compl. ¶35, ¶37, ¶53, ¶54). Contributory infringement is also alleged on the basis that the accused products contain components material to the patented inventions that are not staple articles of commerce (Compl. ¶39, ¶56).
  • Willful Infringement: The complaint alleges willful infringement based on Defendant's alleged knowledge of the patents since at least June 3, 2024, the date of a notice letter from Plaintiff. It is alleged that Defendant continued its infringing activities despite an objectively high likelihood that its actions constituted infringement (Compl. ¶42, ¶59).

VII. Analyst’s Conclusion: Key Questions for the Case

  • A core issue will be one of definitional scope: can the patent language of detecting a "first event" and a "second event" (’619 Patent) be construed to cover the application of multiple simultaneous conditions to a single instance of motion, or does the patent's specification limit the claims to detecting a sequence of distinct occurrences?
  • A key evidentiary question will be one of technical function: what evidence demonstrates that the accused products' user-customizable "activity zones" are used for the claimed purpose of "generating teacher data" to improve an underlying AI model (’061 Patent), as opposed to functioning as a simple, non-learning filter for user alerts?
  • The case will also examine the structural mapping of claim elements to accused functionality: can the asserted apparatus claims, which recite distinct functional circuits and storage units (e.g., ’619 Patent), be met by the integrated software features of a modern smart camera application, or is there a fundamental mismatch between the claimed architecture and the accused system's implementation?