DCT

2:24-cv-00720

NEC Corp v. Anker Innovations Technology Co Ltd

Key Events
Amended Complaint

I. Executive Summary and Procedural Information

  • Parties & Counsel: Plaintiff NEC Corporation; Defendants including Anker Innovations Technology Co., Ltd.
  • Case Identification: 2:24-cv-00720, E.D. Tex., 11/13/2025
  • Venue Allegations: Plaintiff alleges venue is proper as to the foreign Defendants under 28 U.S.C. § 1391(c)(3), which provides that a non-resident of the United States may be sued in any judicial district.
  • Core Dispute: Plaintiff alleges that Defendants' eufy brand of smart home security products infringes six U.S. patents related to AI-driven image processing, object detection, and video surveillance system management.
  • Technical Context: The technology at issue resides in the field of intelligent video surveillance, a commercially significant market for consumer smart-home security and enterprise-level monitoring applications.
  • Key Procedural History: The complaint alleges that Plaintiff sent a letter to Defendants on June 3, 2024, identifying five of the six asserted patents and accused Anker products, and offering to enter into license negotiations. This pre-suit notice may form the basis for allegations of willful infringement.

Case Timeline

Date Event
2012-07-31 U.S. Patent No. 10,999,635 Priority Date
2013-05-31 U.S. Patent No. 9,953,240 Priority Date
2013-06-28 U.S. Patent No. 11,210,526 Priority Date
2013-09-26 U.S. Patent No. 10,037,467 Priority Date
2015-02-17 U.S. Patent No. 10,970,995 Priority Date
2018-04-24 U.S. Patent No. 9,953,240 Issued
2018-05-07 U.S. Patent No. 11,537,814 Priority Date
2018-07-31 U.S. Patent No. 10,037,467 Issued
2021-04-06 U.S. Patent No. 10,970,995 Issued
2021-05-04 U.S. Patent No. 10,999,635 Issued
2021-12-28 U.S. Patent No. 11,210,526 Issued
2022-12-27 U.S. Patent No. 11,537,814 Issued
2024-06-03 Plaintiff sends pre-suit notice letter to Defendants
2025-11-13 Complaint Filed

II. Technology and Patent(s)-in-Suit Analysis

U.S. Patent No. 9,953,240 - "Image Processing System, Image Processing Method, and Recording Medium for Detecting a Static Object"

The Invention Explained

  • Problem Addressed: The patent's background section explains that conventional video surveillance techniques struggled to generate an appropriate background model in environments with constant change, such as areas with high foot traffic, making it difficult to detect objects that have been left behind or people standing still (’240 Patent, col. 1:43-50).
  • The Patented Solution: The invention addresses this by identifying "static areas" where motion is below a threshold. It then generates several background images using image data from different time spans (e.g., a short-term, medium-term, and long-term background). By comparing these different background images, the system can identify differences corresponding to static objects and classify those objects based on how long they have remained stationary ('240 Patent, Abstract; col. 2:38-48). A simplified sketch of this multi-window comparison appears after this list.
  • Technical Importance: This method allows for more reliable detection of loitering individuals or abandoned objects in dynamic environments, a key function for security and public safety applications ('240 Patent, col. 1:25-28).
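
For orientation only, the following is a minimal Python sketch of the kind of multi-time-span background comparison the specification describes. It is a simplification built on stated assumptions: the window lengths, the averaging-based background model, and the difference threshold are illustrative choices, not the patented algorithm or the accused implementation.

```python
import numpy as np

def mean_background(frames):
    """Collapse a list of grayscale frames (2-D arrays) into one background image."""
    return np.mean(np.stack(frames), axis=0)

def classify_static_pixels(frames_short, frames_medium, frames_long, diff_threshold=25.0):
    """Build backgrounds over three time spans and bucket differing pixels by
    how long the change appears to have persisted (all parameters hypothetical)."""
    bg_short = mean_background(frames_short)    # e.g., frames from the last 1 minute
    bg_medium = mean_background(frames_medium)  # e.g., the last 5 minutes
    bg_long = mean_background(frames_long)      # e.g., the last 15 minutes

    # A difference between the short- and medium-term backgrounds suggests a newly
    # static object; if the difference also holds against the long-term background,
    # the object has been stationary for longer.
    recent = np.abs(bg_short - bg_medium) > diff_threshold
    persistent = np.abs(bg_short - bg_long) > diff_threshold

    labels = np.full(bg_short.shape, "background", dtype=object)
    labels[recent & ~persistent] = "recently_static"
    labels[recent & persistent] = "long_static"
    return labels
```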

Key Claims at a Glance

  • The complaint asserts at least independent claim 1 (Compl. ¶44).
  • Claim 1 requires a system with a memory and processor configured to:
    • Identify static areas from input images where motion is below a threshold value.
    • Generate a first image from a first time span, a second image from a second time span, and a third image from a third time span, all using the identified static areas.
    • Compare the first and second images to identify an area of difference.
    • Classify static objects based on a length of a static time period by comparing the first, second, and third images.
  • The complaint reserves the right to assert additional claims (Compl. ¶44).

U.S. Patent No. 10,037,467 - "Information Processing System"

The Invention Explained

  • Problem Addressed: The patent notes that prior art systems for searching for objects in video were often limited to facial features and could not effectively extract appropriate "feature quantities" for other search objects, leading to imprecise search results (’467 Patent, col. 1:58-65).
  • The Patented Solution: The invention proposes a system that detects an object (e.g., a person) and identifies multiple "object elements" from that object (e.g., face, clothes). For each element, the system selects the "best frame" that satisfies a specific criterion (e.g., the clearest face shot). It then extracts a "feature quantity" (e.g., biometric data, color information) from that best frame and associates it with the object, creating a richer data profile for more accurate searching ('467 Patent, Abstract; col. 2:9-26). An illustrative sketch of this per-element selection follows this list.
  • Technical Importance: This approach enables more robust and multi-faceted searching for individuals or objects in video footage by leveraging a combination of features beyond just the face ('467 Patent, col. 3:36-40).
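
As a rough illustration of the claimed per-element "best frame" bookkeeping, the sketch below keeps, for each object element of a tracked object, the feature quantity from the frame that best satisfies a selection criterion. The quality score, element names, and data layout are assumptions made for illustration, not the patent's or the accused products' actual design.

```python
from dataclasses import dataclass, field

@dataclass
class ElementObservation:
    frame_id: int          # frame-specifying information
    quality: float         # score against a frame selection criterion (e.g., sharpness)
    feature: list          # feature quantity extracted from that frame

@dataclass
class TrackedObject:
    object_id: int
    best: dict = field(default_factory=dict)  # element name -> best observation so far

    def update(self, element: str, obs: ElementObservation):
        """For each object element (e.g., "face", "clothes"), retain the observation
        from the frame that best satisfies the criterion, keeping the feature
        quantity associated with its source frame."""
        current = self.best.get(element)
        if current is None or obs.quality > current.quality:
            self.best[element] = obs

# Example: as frames are processed, update() is called per detected element, so the
# stored profile pairs each feature quantity with the frame it was extracted from.
person = TrackedObject(object_id=1)
person.update("face", ElementObservation(frame_id=42, quality=0.9, feature=[0.1, 0.7]))
person.update("face", ElementObservation(frame_id=57, quality=0.4, feature=[0.2, 0.6]))
assert person.best["face"].frame_id == 42  # the higher-quality frame is kept
```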

Key Claims at a Glance

  • The complaint asserts at least independent claim 1 (Compl. ¶62).
  • Claim 1 requires a system with a processor configured to:
    • Detect and track an object in moving image data, and detect a plurality of object elements from that object.
    • Extract a feature quantity of each of the object elements from a frame image.
    • Select the frame image that satisfies a predefined frame selection criterion for each object element.
    • Associate information specifying the selected frame image with the feature quantity extracted from it.
  • The complaint reserves the right to assert additional claims (Compl. ¶62).

U.S. Patent No. 10,970,995 - "System for Monitoring Event Related Data"

  • Technology Synopsis: The patent describes a solution to the problem of surveillance cameras failing to capture the entirety of an event that extends beyond the camera's initial field of view (’995 Patent, col. 1:24-29). The invention is a system that uses sensor data to detect an event, identifies the type of event, and then automatically controls a camera's imaging range (e.g., pan, tilt, zoom) based on that event type to ensure the full event is captured appropriately ('995 Patent, col. 1:30-37; Abstract). A schematic sketch of this event-driven camera control follows this list.
  • Asserted Claims: At least independent claim 6 is asserted (Compl. ¶80).
  • Accused Features: The complaint accuses the "AI Auto Tracking" and "Smart A.I. Tracking" features in various eufy cameras, which allegedly use AI to detect an event (e.g., a person, pet, or vehicle) and automatically adjust the camera's position to follow the detected object (Compl. ¶81, ¶83, ¶86). The "See it All with 360° AI Auto Tracking" visual in the complaint shows a camera with pan and tilt capabilities for tracking subjects (Compl. ¶83).
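
To make the event-type-driven control idea concrete, here is a minimal sketch in which a detected event type selects a pan/tilt/zoom response. The Camera class, the preset table, and the event taxonomy are hypothetical placeholders, not the accused "AI Auto Tracking" implementation.

```python
class Camera:
    """Stand-in for a pan/tilt/zoom camera controller (hypothetical interface)."""
    def point_at(self, location): print(f"pan/tilt toward {location}")
    def set_zoom(self, level): print(f"zoom set to {level}x")
    def track(self, location): print(f"tracking object near {location}")

# Assumed mapping from identified event type to how the imaging range is adjusted.
EVENT_PRESETS = {
    "person":  {"zoom": 2.0, "track": True},
    "pet":     {"zoom": 1.5, "track": True},
    "vehicle": {"zoom": 1.0, "track": False},
}

def handle_event(camera, event_type, location):
    """Adjust the camera's imaging range based on the type of detected event."""
    preset = EVENT_PRESETS.get(event_type)
    if preset is None:
        return  # unrecognized event types leave the imaging range unchanged
    camera.point_at(location)
    camera.set_zoom(preset["zoom"])
    if preset["track"]:
        camera.track(location)

handle_event(Camera(), "person", (120, 45))
```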

U.S. Patent No. 10,999,635 - "Image Processing System, Image Processing Method, and Program"

  • Technology Synopsis: The patent addresses the challenge of accurately re-identifying and tracking a person across the fields of view of multiple, separate cameras (’635 Patent, col. 1:20-35). The patented solution is a display control system that registers individuals and presents a user interface with selectable tabs for each registered person. Selecting a tab displays a window with video clips and a timeline diagram showing that specific person's appearances across the different cameras in the system ('635 Patent, Abstract; col. 2:1-8). A data-structure sketch of such a per-person timeline follows this list.
  • Asserted Claims: At least independent claim 5 is asserted (Compl. ¶98).
  • Accused Features: The complaint targets the "Cross-Camera Tracking Function" within the eufy Security App. This feature is alleged to use "BionicMind™ A.I." to identify the same individual across multiple cameras and merge the separate video clips into a single, stitched video for review (Compl. ¶105). A screenshot of the "Security Report" shows "Cross-Camera Stories" that compile clips of an individual from different locations (Compl. ¶102).
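
The sketch below shows one way the underlying data for a per-person, cross-camera timeline could be organized: each registered person maps to time-ordered appearances across cameras. This is an illustrative data layout only; neither the patent's display control system nor eufy's "Cross-Camera Tracking" is known to be implemented this way.

```python
from collections import defaultdict

# person_id -> list of appearances across cameras (illustrative layout)
appearances = defaultdict(list)

def register_appearance(person_id, camera_id, start, end, clip_ref):
    """Record that a registered person appeared on a given camera."""
    appearances[person_id].append(
        {"camera": camera_id, "start": start, "end": end, "clip": clip_ref}
    )

def timeline_for(person_id):
    """Return the person's appearances in time order, i.e., the data from which a
    per-person tab with clips and a timeline diagram could be rendered."""
    return sorted(appearances[person_id], key=lambda a: a["start"])

register_appearance("person_A", "front_door", 10, 25, "clip_001")
register_appearance("person_A", "backyard", 40, 55, "clip_002")
print(timeline_for("person_A"))
```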

U.S. Patent No. 11,210,526 - "Video Surveillance System, Video Processing Apparatus, Video Processing Method, and Video Processing Program"

  • Technology Synopsis: The patent identifies a limitation in existing machine learning-based surveillance systems: an inability to improve their analytical accuracy through user feedback during operation (’526 Patent, col. 1:39-47). The invention provides a system where an operator can designate a region in a video, assign a new or corrected category to the event in that region, and have that information accumulated as new "learned data." This allows the underlying video analysis model to be dynamically trained and improved over time ('526 Patent, Abstract; col. 1:53-62). A sketch of this feedback loop follows this list.
  • Asserted Claims: At least independent claim 1 is asserted (Compl. ¶117).
  • Accused Features: The complaint accuses the "Enhance My AI" and "Video Donation Program" features of the eufy ecosystem. These features allegedly allow users to "donate" recorded videos to "train our AI," thereby providing new labeled data to improve the accuracy of the system's detection capabilities for objects like people, packages, and vehicles (Compl. ¶119, ¶122). The "Enhance My AI" interface shown in the complaint allows users to join the "Video Donation Program" (Compl. ¶119).
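
A minimal sketch of the feedback loop described above: an operator's region/category correction is accumulated as new learned data and periodically folded back into the analysis model. The model interface and batch threshold are hypothetical assumptions; nothing here reflects how "Enhance My AI" actually operates.

```python
learned_data = []

def submit_feedback(video_id, frame_index, region, corrected_category):
    """Accumulate an operator's designated region and corrected category."""
    learned_data.append({
        "video": video_id,
        "frame": frame_index,
        "region": region,             # e.g., (x, y, width, height)
        "label": corrected_category,  # new or corrected event category
    })

def maybe_retrain(model, batch_size=100):
    """Once enough corrections accumulate, use them as additional training data."""
    if len(learned_data) >= batch_size:
        model.fit(learned_data)       # placeholder for any retraining interface
        learned_data.clear()
```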

U.S. Patent No. 11,537,814 - "Data Providing System and Data Collection System"

  • Technology Synopsis: The patent addresses the problem of skewed training data, where a machine learning model may become highly accurate for one type of observation but inaccurate for another due to an imbalanced dataset (’814 Patent, col. 2:2-14). The patented system identifies an object, determines whether the captured data is "transmission target data" (e.g., data for which the model has low identification accuracy), and selectively transmits that target data to a central computer to improve the model. The system also includes functions for controlling data transmission timing ('814 Patent, Abstract). A sketch of this selective-transmission logic follows this list.
  • Asserted Claims: At least independent claim 1 is asserted (Compl. ¶134).
  • Accused Features: The complaint alleges that the accused products create a data providing system by identifying objects using local AI and determining whether to transmit data (e.g., a notification) to a central computer (server) at a predetermined timing. A specific accused feature is the ability for users to set an "Interval time (mins) between event notification and recording," which is alleged to be part of the claimed timing control (Compl. ¶135, ¶139). The complaint includes a screenshot of the eufy app showing this "Interval time (mins)" setting (Compl. ¶137).
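
For illustration, the sketch below combines the two ideas in the synopsis: treating low-confidence detections as "transmission target data" and gating uploads by a minimum interval, loosely analogous to the "Interval time (mins)" setting. The threshold, interval, and transport callback are assumptions, not the accused system's behavior.

```python
import time

CONFIDENCE_THRESHOLD = 0.6     # assumed cutoff for "low identification accuracy"
MIN_INTERVAL_SECONDS = 5 * 60  # assumed minimum spacing between transmissions
_last_sent = 0.0

def maybe_transmit(detection, confidence, send_to_server):
    """Send a detection to the central computer only if the local model was
    uncertain about it and the minimum interval since the last send has elapsed."""
    global _last_sent
    is_target = confidence < CONFIDENCE_THRESHOLD   # transmission target data?
    now = time.time()
    if is_target and (now - _last_sent) >= MIN_INTERVAL_SECONDS:
        send_to_server(detection)
        _last_sent = now
        return True
    return False
```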

III. The Accused Instrumentality

Product Identification

  • The complaint collectively refers to the accused products as "the Accused '240 Eufy Products" and "the Accused '467 Eufy Products," among others. These include a wide range of eufy-branded smart home security devices, such as the Video Doorbell E340, eufyCam S330, HomeBase S380, and Pet Camera Pro N140, as well as the accompanying eufy Security Application and the "BionicMind AI Service" (Compl. ¶44, ¶62-63).

Functionality and Market Context

  • The accused products form a video surveillance ecosystem that uses local and cloud-based AI to provide features such as package detection, human and pet recognition, and object tracking (Compl. ¶47, ¶68). The complaint alleges these products are part of Anker's "eufy" brand, which focuses on smart home devices (Compl. ¶5). Key accused functionalities include "Delivery Guard" for monitoring packages, "AI Features" for identifying different types of objects, "AI Auto Tracking" for following moving objects, "Cross-Camera Tracking" for stitching videos of the same person from multiple cameras, and "Enhance My AI" for user-assisted AI training (Compl. ¶50, ¶68, ¶86, ¶105, ¶122).

IV. Analysis of Infringement Allegations

U.S. Patent No. 9,953,240 Infringement Allegations

The complaint maps the elements of independent claim 1 to the accused functionality as follows:

  • "identify static areas from input images captured at a plurality of time points, wherein, in the static areas, the input images include motion indicating a value smaller than a threshold value": The accused products' "Package Detection" feature identifies static areas in the camera's field of view to detect a stationary package left on a porch (Compl. ¶45; '240 Patent, col. 2:50-55).
  • "generate a first image using the static areas...captured in a first time span..., a second image...in a second time span..., and a third image...in a third time span": The system allegedly generates multiple background images over different time intervals to determine that an object has remained static for a specific duration, such as in the "Delivery Guard" feature (Compl. ¶46; col. 4:3-17).
  • "compare the first image and the second image and identify an area having a difference": The system allegedly compares background models to detect the appearance of a new static object (the package) in the scene (Compl. ¶46; col. 4:18-21).
  • "and classify static objects...according to a length of a static time period, based on a comparison of the first image, the second image, and the third image": The "Delivery Guard" feature classifies a static object as a "package" and monitors it over time, sending alerts if it is approached, which allegedly constitutes classification based on its static duration (Compl. ¶46; col. 4:22-29).
  • Identified Points of Contention:
    • Technical Questions: A primary question will be evidentiary: what proof does the complaint offer that the accused "Delivery Guard" feature operates by generating three distinct images from three time spans and comparing them, as required by claim 1? The complaint alleges this functionality (Compl. ¶46) but supports it with high-level marketing materials that do not detail the underlying algorithm.
    • Scope Questions: Does the accused system's "Package Detection" feature "classify" the object "according to a length of a static time period," or does it classify it based on object recognition (i.e., the AI identifies the object as a box)? The defense may argue that the classification is based on the object's identity, not its static duration, potentially placing it outside the scope of this claim limitation.

U.S. Patent No. 10,037,467 Infringement Allegations

The complaint's element-by-element allegations for independent claim 1 are as follows:

  • "detect and track an object in moving image data, and detect a plurality of object elements from the object...": The accused products' "AI Features" detect and track objects such as a "Human" and detect "object elements" from that object, such as a "Face" (Compl. ¶63, ¶68; '467 Patent, col. 2:9-14).
  • "extract a feature quantity of each of the object elements from a frame image constituting the moving image data": To perform "Facial (Human) Recognition," the system allegedly extracts feature quantities (e.g., biometric data) from the detected face element (Compl. ¶64; col. 2:15-18).
  • "select the frame image satisfying a frame selection criterion for each of the object elements...": The system allegedly selects an optimal frame image (e.g., one with a clear, front-facing view) to ensure accurate facial recognition, thereby satisfying a frame selection criterion (Compl. ¶64; col. 2:18-21).
  • "and associate frame specifying information for specifying the selected frame image with the feature quantity of the object element extracted from the selected frame image": The extracted feature quantity is allegedly stored and linked with information identifying the frame from which it was extracted to enable later searching and person identification (Compl. ¶64; col. 2:22-26).
  • Identified Points of Contention:
    • Scope Questions: Does the accused system detect a "plurality of object elements" from the same object and perform discrete frame selection for each element? The complaint points to "Human Detection" and "Facial Detection," which raises the question of whether the system treats these as a single analysis or as separate analyses of multiple distinct elements (e.g., face, clothes), as the patent contemplates.
    • Technical Questions: The infringement theory rests on the premise that to achieve accurate recognition, the system must be "selecting" a frame based on a "criterion." It will be a question of fact whether the accused AI's operation meets this specific claimed step or uses a different methodology for processing facial data from a video stream.

V. Key Claim Terms for Construction

  • Term: "classify static objects... according to a length of a static time period" ('240 Patent, Claim 1)

  • Context and Importance: This term's construction is central to whether the accused "Package Detection" infringes. The dispute will likely focus on whether the classification is based on an object's identity (AI recognizes a box) or its behavior over time (the system determines it has been stationary for X minutes). Practitioners may focus on this term because it distinguishes a temporal classification from a purely object-recognition-based classification.

  • Intrinsic Evidence for Interpretation:

    • Evidence for a Broader Interpretation: The claim language is functional and does not mandate a specific algorithm. Any system that outputs different classifications or triggers different actions based on how long an object is static could be argued to fall within its scope. The specification's objective is to detect "a person standing still for a certain period of time or longer" ('240 Patent, col. 1:25-27).
    • Evidence for a Narrower Interpretation: The specification's detailed embodiments consistently describe a method of comparing background images generated from multiple, distinct time windows (e.g., 1-minute, 5-minute, 10-minute windows) to categorize the static duration ('240 Patent, Fig. 6). A defendant may argue that the term should be limited to this disclosed multi-window comparison method.
  • Term: "object elements" ('467 Patent, Claim 1)

  • Context and Importance: The scope of infringement depends on whether the accused system detects multiple, distinct "elements" from a single "object." The complaint focuses on "Human" as the object and "Face" as an element. The case may turn on whether other detected items (e.g., "Vehicle," "Pet") are considered separate objects or if the system is capable of analyzing multiple elements (e.g., face and clothes) from a single person object as the patent describes.

  • Intrinsic Evidence for Interpretation:

    • Evidence for a Broader Interpretation: The claim defines the term functionally as "an element of the object set in advance and detectable from the object" ('467 Patent, col. 2:12-14), suggesting any pre-defined, detectable feature could qualify.
    • Evidence for a Narrower Interpretation: The specification provides specific examples of object elements (which it calls "modals"), such as "face", "clothes", and "sex/age" as being elements of a "person" ('467 Patent, col. 5:50-54). A defendant might argue the term should be construed as being limited to such physical parts or attributes of a single entity, and that features like "Human" and "Vehicle" are treated as distinct objects by the accused system, not an object and its element.

VI. Other Allegations

  • Indirect Infringement: For all asserted patents, the complaint alleges induced infringement. The allegations are based on Defendants providing instructional materials—including user manuals, support websites, and YouTube tutorial videos—that allegedly instruct customers on how to use the infringing features, such as "Delivery Guard," "AI Auto Tracking," and "Cross-Camera Tracking" (Compl. ¶49-51, ¶67-69, ¶85-87, ¶104-106, ¶121-123, ¶138-140). Contributory infringement is also alleged on the basis that the accused products are especially made for use in an infringing manner and are not staple articles of commerce (Compl. ¶53, ¶71, ¶89, ¶108, ¶125, ¶142).
  • Willful Infringement: The complaint alleges that Defendants had pre-suit knowledge of the '240, '467, '995, '635, and '526 patents as of at least June 3, 2024, the date of Plaintiff's notice letter (Compl. ¶55, ¶73, ¶91, ¶110, ¶127). For the '814 patent, knowledge is alleged as of "at least the filing of this Complaint," which may support a claim for post-suit willful infringement (Compl. ¶144).

VII. Analyst’s Conclusion: Key Questions for the Case

  • A central issue will be one of technical implementation versus claimed method: does the accused eufy AI functionality, described in high-level marketing terms, actually operate using the specific, multi-step processes required by the patent claims? For example, does "Delivery Guard" generate three separate background images as required by the '240 patent, and does the facial recognition feature select an optimal frame for multiple distinct "object elements" as required by the '467 patent?
  • A key legal question will concern definitional scope: can claim terms rooted in specific algorithmic descriptions, such as "classify...according to a length of a static time period," be construed broadly enough to cover the functionality of a general-purpose AI object recognition engine that achieves a similar outcome through different means?
  • The case will also present an evidentiary question regarding user-driven AI training: does Anker's "Enhance My AI" program, which solicits video donations from users to "train our AI," constitute direct infringement of the '526 patent's method for dynamic learning and the '814 patent's method for targeted data collection?