2:24-cv-00720
NEC Corp v. Anker Innovations Technology Co Ltd
I. Executive Summary and Procedural Information
- Parties & Counsel:
- Plaintiff: NEC Corporation (Japan)
- Defendants: Anker Innovations Technology Co., Ltd. (People's Republic of China) and Anker Innovations Ltd. (Hong Kong)
- Plaintiff’s Counsel: Patton Tidwell & Culbertson, LLP; Mayer Brown LLP
 
- Case Identification: 2:24-cv-00720, E.D. Tex., 09/03/2024
- Venue Allegations: Venue is alleged to be proper as Defendants are not resident in the United States and may be sued in any judicial district pursuant to 28 U.S.C. § 1391(c)(3).
- Core Dispute: Plaintiff alleges that Defendants' "eufy" brand of smart home surveillance products, including video doorbells and security cameras, infringes six U.S. patents related to image processing, object detection, and AI-driven surveillance analysis.
- Technical Context: The technology at issue involves advanced video analysis techniques for smart security systems, such as differentiating static objects from background motion, identifying specific features of an object from video frames, and tracking subjects across multiple cameras.
- Key Procedural History: The complaint states that Plaintiff sent a letter to Defendants on June 3, 2024, offering to license five of the six patents-in-suit and identifying accused products. The complaint alleges that Defendants did not engage in licensing discussions, a fact which may be used to support allegations of willful infringement.
Case Timeline
| Date | Event | 
|---|---|
| 2012-07-31 | '635 Patent Priority Date | 
| 2013-05-31 | '240 Patent Priority Date | 
| 2013-06-28 | '526 Patent Priority Date | 
| 2013-09-26 | '467 Patent Priority Date | 
| 2015-02-17 | '995 Patent Priority Date | 
| 2018-04-24 | '240 Patent Issue Date | 
| 2018-05-07 | '814 Patent Priority Date | 
| 2018-07-31 | '467 Patent Issue Date | 
| 2021-04-06 | '995 Patent Issue Date | 
| 2021-05-04 | '635 Patent Issue Date | 
| 2021-12-28 | '526 Patent Issue Date | 
| 2022-12-27 | '814 Patent Issue Date | 
| 2024-06-03 | Plaintiff sends notice letter to Defendants regarding five of the asserted patents | 
| 2024-09-03 | Complaint Filing Date | 
II. Technology and Patent(s)-in-Suit Analysis
U.S. Patent No. 9,953,240 - "Image Processing System, Image Processing Method, and Recording Medium for Detecting a Static Object"
- Patent Identification: U.S. Patent No. 9,953,240, “Image Processing System, Image Processing Method, and Recording Medium for Detecting a Static Object,” issued April 24, 2018.
The Invention Explained
- Problem Addressed: The patent describes the difficulty conventional motion detection technology faced in creating an accurate background model for a video scene, especially in environments with high congestion or many moving objects, which complicated the detection of objects that had become stationary (Compl. ¶30; ’240 Patent, col. 1:46-50).
- The Patented Solution: The invention proposes a system that identifies "static areas" from input images and generates multiple background images using different time spans (e.g., a short-term, medium-term, and long-term background). By comparing these different background models, the system can identify an area of difference and classify a static object based on the length of time it has remained stationary (Compl. ¶31; ’240 Patent, col. 2:38-40).
- Technical Importance: This approach allows for more accurate detection of static objects, such as a package left on a porch, by distinguishing the object from both transient motion and gradual changes to the background scene over time (Compl. ¶31).
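To make the claimed mechanism concrete, the following is a minimal illustrative sketch, in Python, of one way a three-time-span background comparison of this kind could be structured. The buffer sizes, thresholds, and duration labels are hypothetical assumptions and are not drawn from the patent claims, the specification, or the accused products.

```python
# Illustrative sketch only: a three-time-span background comparison in the
# spirit of the '240 patent's described solution. Buffer sizes, thresholds,
# and labels are hypothetical and do not describe the accused products.
from collections import deque

import numpy as np

MOTION_THRESHOLD = 10                 # hypothetical per-pixel motion threshold
SHORT, MEDIUM, LONG = 30, 300, 3000   # hypothetical time spans, in frames


class StaticObjectClassifier:
    def __init__(self):
        # One rolling buffer of "static" pixel data per time span.
        self.buffers = {span: deque(maxlen=span) for span in (SHORT, MEDIUM, LONG)}
        self.prev_frame = None

    def process(self, frame: np.ndarray):
        """Ingest one grayscale frame; return a map labeling static durations."""
        if self.prev_frame is None:
            self.prev_frame = frame.astype(float)
            return None

        # 1. Identify static areas: pixels whose frame-to-frame change is
        #    below the threshold value.
        current = frame.astype(float)
        static_mask = np.abs(current - self.prev_frame) < MOTION_THRESHOLD
        self.prev_frame = current

        # Accumulate the static areas of this frame into each time-span buffer.
        masked = np.where(static_mask, current, np.nan)
        for buf in self.buffers.values():
            buf.append(masked)

        # 2. Generate the first, second, and third images, one per time span,
        #    by averaging the accumulated static areas.
        backgrounds = {span: np.nanmean(np.stack(list(buf)), axis=0)
                       for span, buf in self.buffers.items()}

        # 3. Compare pairs of background images to identify areas of difference.
        short_vs_medium = np.abs(backgrounds[SHORT] - backgrounds[MEDIUM]) > MOTION_THRESHOLD
        medium_vs_long = np.abs(backgrounds[MEDIUM] - backgrounds[LONG]) > MOTION_THRESHOLD

        # 4. Classify static objects by how long they have been stationary.
        labels = np.zeros(frame.shape, dtype=np.uint8)   # 0 = long-standing background
        labels[short_vs_medium] = 1                      # static on the order of SHORT
        labels[~short_vs_medium & medium_vs_long] = 2    # static on the order of MEDIUM
        return labels
```

The only point of the sketch is the claimed structure: one background image per time span, pairwise comparison to locate areas of difference, and duration-based classification derived from which pairs differ.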
Key Claims at a Glance
- The complaint asserts at least independent claim 1 (Compl. ¶32).
- The essential elements of Claim 1 include a memory storing instructions and at least one processor configured to execute them to:
- Identify static areas from input images where motion is below a threshold value;
- Generate a first, second, and third image using static areas from a first, second, and third time span, respectively;
- Compare the first and second images to identify an area having a difference; and
- Classify static objects according to a length of a static time period based on a comparison of the first, second, and third images.
 
- The complaint reserves the right to pursue additional claims (Compl. ¶32).
U.S. Patent No. 10,037,467 - "Information Processing System"
- Patent Identification: U.S. Patent No. 10,037,467, “Information Processing System,” issued July 31, 2018.
The Invention Explained
- Problem Addressed: The patent notes that prior art systems for searching objects in video were often limited to face detection and struggled to extract appropriate feature quantities for other object types, which could lead to imprecise search results (Compl. ¶48; ’467 Patent, col. 1:58-65).
- The Patented Solution: The invention describes a system that detects an object in video and then identifies a plurality of "object elements" (e.g., face, clothes) from that object. It extracts feature data for each element and selects the best frame image for each element based on a pre-set "frame selection criterion." The system then associates the feature data from that best frame with information specifying the frame, storing it for more precise searching (Compl. ¶49; ’467 Patent, Abstract).
- Technical Importance: This method enables a more granular and accurate search capability for objects in video by analyzing distinct components of an object and using the optimal frame for each component's analysis (Compl. ¶49).
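As an illustration of the claimed per-element selection, the Python sketch below applies a pre-set criterion (here, largest bounding-box area, echoing the prior-art criteria quoted in Section V) to choose one frame per object element and associate that frame's identifier with the feature extracted from it. The data model, criterion, and example values are hypothetical assumptions, not descriptions of the patent's embodiments or the accused products.

```python
# Illustrative sketch of per-element frame selection and feature association
# in the spirit of the '467 patent summary. The detection data, the "largest
# area" criterion, and the data layout are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ElementDetection:
    frame_index: int           # frame specifying information
    bbox_area: float           # value consulted by the frame selection criterion
    feature: list[float]       # feature quantity extracted from this frame


@dataclass
class TrackedObject:
    object_id: int
    # One list of candidate detections per predefined object element.
    elements: dict[str, list[ElementDetection]] = field(default_factory=dict)

    def add_detection(self, element: str, det: ElementDetection) -> None:
        self.elements.setdefault(element, []).append(det)

    def best_frames(self) -> dict[str, ElementDetection]:
        """Apply the pre-set frame selection criterion (here: largest bounding
        box area) independently to each object element, associating the
        selected frame's index with the feature extracted from that frame."""
        return {element: max(dets, key=lambda d: d.bbox_area)
                for element, dets in self.elements.items()}


# Example: a person tracked across three frames, with "face" and "clothes"
# elements detected separately.
person = TrackedObject(object_id=1)
person.add_detection("face", ElementDetection(10, 900.0, [0.1, 0.4]))
person.add_detection("face", ElementDetection(11, 1500.0, [0.2, 0.5]))
person.add_detection("clothes", ElementDetection(12, 4000.0, [0.7, 0.3]))

selected = person.best_frames()
print({element: det.frame_index for element, det in selected.items()})
# -> {'face': 11, 'clothes': 12}
```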
Key Claims at a Glance
- The complaint asserts at least independent claim 1 (Compl. ¶50).
- The essential elements of Claim 1 include a processor configured to:
- Detect and track an object in moving image data, and detect a plurality of predefined object elements from the object;
- Extract a feature quantity of each object element from a frame image;
- Select the frame image that satisfies a predefined frame selection criterion for each object element; and
- Associate information specifying the selected frame with the feature quantity extracted from that frame.
 
- The complaint reserves the right to pursue additional claims (Compl. ¶50).
U.S. Patent No. 10,970,995 - "System for Monitoring Event Related Data"
- Patent Identification: U.S. Patent No. 10,970,995, “System for Monitoring Event Related Data,” issued April 6, 2021.
- Technology Synopsis: The patent addresses surveillance systems that could not adequately capture events extending beyond a camera's immediate surveillance area (Compl. ¶66). The solution is a control method that uses sensor data to detect an event, identify the type of event, and then control a camera’s imaging range (e.g., pan, tilt, zoom) based on the specific type of event detected to appropriately capture it (Compl. ¶67).
- Asserted Claims: At least claim 6 (Compl. ¶68).
- Accused Features: The complaint accuses features such as "AI Auto Tracking," which uses AI to detect humans, vehicles, or pets and automatically adjust the camera’s pan and tilt to keep the subject in view (Compl. ¶69, 71).
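For context on the claimed control flow, the sketch below maps a detected event type to a pre-configured pan/tilt/zoom setting. The event types, presets, camera interface, and classifier are hypothetical assumptions and are not taken from the complaint or the accused products.

```python
# Illustrative sketch of event-type-driven camera control in the spirit of the
# '995 patent synopsis above. All names and values are hypothetical.
from dataclasses import dataclass


@dataclass
class PtzCommand:
    pan: float    # degrees
    tilt: float   # degrees
    zoom: float   # 1.0 = widest view


# Pre-configured imaging ranges keyed by the identified event type.
PTZ_BY_EVENT = {
    "person_at_door":   PtzCommand(pan=0.0,   tilt=-10.0, zoom=2.0),
    "vehicle_in_drive": PtzCommand(pan=35.0,  tilt=0.0,   zoom=1.5),
    "pet_in_yard":      PtzCommand(pan=-20.0, tilt=-25.0, zoom=1.0),
}


def classify_event(sensor_reading: dict) -> str:
    # Placeholder: a real system would infer the event type from PIR, radar,
    # audio, or image data; here we simply read a labeled field.
    return sensor_reading.get("label", "unknown")


def handle_sensor_event(sensor_reading: dict, camera) -> None:
    """Detect an event from sensor data, identify its type, and adjust the
    camera's imaging range based on that type."""
    event_type = classify_event(sensor_reading)
    command = PTZ_BY_EVENT.get(event_type)
    if command is not None:
        camera.move(command.pan, command.tilt, command.zoom)  # hypothetical camera API
```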
U.S. Patent No. 10,999,635 - "Image Processing System, Image Processing Method, and Program"
- Patent Identification: U.S. Patent No. 10,999,635, “Image Processing System, Image Processing Method, and Program,” issued May 4, 2021.
- Technology Synopsis: The patent addresses the problem of re-identifying a person across multiple cameras, which previously required significant human involvement and was prone to errors (Compl. ¶84). The invention provides a display control system that registers individuals appearing across multiple cameras and presents a user interface with switchable windows for each person, including a diagram or timeline showing which cameras captured that person and when (Compl. ¶85).
- Asserted Claims: At least claim 5 (Compl. ¶86).
- Accused Features: The accused "Cross-Camera Tracking" function, which allegedly uses "BionicMind™ A.I. to identify individuals captured on camera in each video" and "merges these footage into a single video" when the same person appears across multiple cameras (Compl. ¶93).
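A minimal sketch of the kind of cross-camera person registry such a feature implies is shown below: each new appearance is matched against registered individuals, and each person accumulates a per-camera timeline of the sort a switchable-window display would present. The similarity measure, threshold, and data model are hypothetical assumptions, not the complaint's description of BionicMind.

```python
# Illustrative sketch of a per-person, per-camera appearance record in the
# spirit of the '635 patent synopsis. Matching logic and thresholds are
# hypothetical.
from dataclasses import dataclass, field

import numpy as np

MATCH_THRESHOLD = 0.8  # hypothetical cosine-similarity threshold


@dataclass
class Sighting:
    camera_id: str
    start: float   # seconds since epoch
    end: float


@dataclass
class RegisteredPerson:
    person_id: int
    embedding: np.ndarray
    sightings: list[Sighting] = field(default_factory=list)


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def register_sighting(people: list[RegisteredPerson], embedding: np.ndarray,
                      sighting: Sighting) -> RegisteredPerson:
    """Match a new appearance against registered individuals; append it to the
    matching person's cross-camera record, or register a new person."""
    for person in people:
        if cosine(person.embedding, embedding) >= MATCH_THRESHOLD:
            person.sightings.append(sighting)
            return person
    person = RegisteredPerson(person_id=len(people) + 1,
                              embedding=embedding, sightings=[sighting])
    people.append(person)
    return person


def timeline(person: RegisteredPerson) -> list[tuple[str, float, float]]:
    """The data behind a 'which cameras saw this person, and when' display."""
    return sorted((s.camera_id, s.start, s.end) for s in person.sightings)
```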
U.S. Patent No. 11,210,526 - "Video Surveillance System, Video Processing Apparatus, Video Processing Method, and Video Processing Program"
- Patent Identification: U.S. Patent No. 11,210,526, “Video Surveillance System, Video Processing Apparatus, Video Processing Method, and Video Processing Program,” issued December 28, 2021.
- Technology Synopsis: The patent addresses the limitations of static machine learning models in surveillance, where analytical accuracy could not be improved during operation (Compl. ¶103). The solution is a system with dynamic learning, where a video analyzer detects an event of a predetermined category, and an operator can then designate a part of that detected object to generate a new category, with the system accumulating video data of this new category as learned data to improve future analysis (Compl. ¶102, 104).
- Asserted Claims: At least claim 1 (Compl. ¶105).
- Accused Features: The complaint points to the "Enhance My AI" and "Video Donation Program," which allows users to submit recorded videos to help train the AI to "detect and recognize targets such as human, package, vehicle," thereby allowing for the creation and refinement of detection categories through user input (Compl. ¶110).
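The claimed dynamic-learning loop can be pictured with the short Python sketch below, in which an operator-designated portion of a detection is accumulated as learned data for a newly created category. Every name and the data layout are hypothetical assumptions rather than descriptions of the patent's embodiments or the accused programs.

```python
# Illustrative sketch of the dynamic-learning loop summarized above for the
# '526 patent: an analyzer detects an object of a known category, an operator
# designates part of it as a new category, and the system accumulates that
# data for later retraining. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Detection:
    category: str
    frame: object                        # image data; kept abstract in this sketch
    bbox: tuple[int, int, int, int]      # (x, y, width, height)


@dataclass
class LearnedDataStore:
    samples: dict[str, list[Detection]] = field(default_factory=dict)

    def designate_new_category(self, detection: Detection,
                               sub_bbox: tuple[int, int, int, int],
                               new_category: str) -> None:
        """Operator designates part of a detected object as a new category;
        that region is accumulated as learned data for the category."""
        sample = Detection(category=new_category, frame=detection.frame, bbox=sub_bbox)
        self.samples.setdefault(new_category, []).append(sample)

    def training_batch(self, category: str) -> list[Detection]:
        """Accumulated learned data later used to improve the analyzer."""
        return self.samples.get(category, [])
```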
U.S. Patent No. 11,537,814 - "Data Providing System and Data Collection System"
- Patent Identification: U.S. Patent No. 11,537,814, “Data Providing System and Data Collection System,” issued December 27, 2022.
- Technology Synopsis: This patent addresses the problem of skewed training data for machine learning models, where a camera might capture an object (e.g., a car) frequently in one manner but infrequently in another, leading to biased accuracy (Compl. ¶120). The invention is a data providing system that identifies an object using an AI model and then determines whether that data should be transmitted to a computer (e.g., for model retraining) based on the result of the initial identification, thereby improving the model with targeted data (Compl. ¶121).
- Asserted Claims: At least claim 1 (Compl. ¶122).
- Accused Features: The complaint alleges that the eufy system’s local AI identifies objects and that users are instructed to adjust settings, such as the "interval time between event notification and recording," which relates to the patented concept of determining whether and when to transmit data based on an identification event (Compl. ¶123, 127).
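One way to picture "determining whether to transmit based on the identification result" is the balance-keeping sketch below, which uploads a sample only when its identified class is under-represented in the data already sent. The class-share target and upload interface are hypothetical assumptions and do not describe the accused eufy settings.

```python
# Illustrative sketch of the '814 patent concept summarized above: a local AI
# model identifies an object, and the device decides whether to transmit the
# sample for retraining based on that identification, here to counteract
# skewed class frequencies. Class names, thresholds, and the upload call are
# hypothetical.
from collections import Counter

observed_counts: Counter[str] = Counter()
TARGET_SHARE = 0.25  # hypothetical: keep each class near a quarter of uploads


def should_transmit(identified_class: str) -> bool:
    """Transmit only when this class is under-represented among the samples
    already sent, so the training set does not skew toward frequent captures."""
    total = sum(observed_counts.values())
    if total == 0:
        return True
    return observed_counts[identified_class] / total < TARGET_SHARE


def handle_capture(frame_bytes: bytes, identified_class: str, upload) -> None:
    """Run after the on-device model identifies an object in a capture."""
    if should_transmit(identified_class):
        upload(frame_bytes, identified_class)   # hypothetical transmission call
    observed_counts[identified_class] += 1
```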
III. The Accused Instrumentality
Product Identification
- The accused instrumentalities are a broad range of Anker’s "eufy" branded smart security products, including but not limited to the Video Doorbell S330 and E340, various eufyCam and SoloCam models, the HomeBase S380 hub, and associated software services like the "BionicMind AI Service" and the "eufy security application" (Compl. ¶32, 50, 68, 86, 105, 122).
Functionality and Market Context
- The accused products constitute a smart home surveillance ecosystem that captures video and uses local and/or cloud-based AI to provide intelligent alerts and features (Compl. ¶56). Key functionalities cited in the complaint include "Delivery Guard," which detects when a package is delivered and monitors it; "AI Auto Tracking," which physically follows subjects; "Cross-Camera Tracking," which creates a unified video timeline for an individual seen by multiple cameras; and user-assisted AI training through a "Video Donation Program" (Compl. ¶35, 71, 93, 110). The complaint presents a promotional image for "Guard Deliveries with Smart AI," which shows package detection and face recognition capabilities (Compl. ¶35, p. 11). Another included document describes how the AI features can perform "Human Detection, Facial Detection, Pet Detection, and Crying Detection" using an embedded AI chip (Compl. ¶56, p. 24).
IV. Analysis of Infringement Allegations
U.S. Patent No. 9,953,240 Infringement Allegations
| Claim Element (from Independent Claim 1) | Alleged Infringing Functionality | Complaint Citation | Patent Citation | 
|---|---|---|---|
| identify static areas from input images captured at a plurality of time points, wherein, in the static areas, the input images include motion indicating a value smaller than a threshold value | The accused products' "Delivery Guard" feature identifies a stationary package left on a porch. The "Activity Zone" feature allows users to define areas for motion detection, which implies a threshold-based system for distinguishing static areas from motion. | ¶33 | col. 2:38-44 | 
| generate a first image using the static areas of respective input images captured in a first time span... a second image using... a second time span... and a third image using... a third time span... | The complaint alleges the products generate these images to classify static objects according to a static time period, such as knowing how long a package has been present. | ¶34 | col. 2:45-56 | 
| compare the first image and the second image and identify an area having a difference | The system can alert a user when someone approaches a delivered package, which requires comparing a baseline state (package only) with a new state (package plus a person) to identify a difference. | ¶34 | col. 2:57-59 | 
| classify static objects included in the input images according to a length of a static time period, based on a comparison of the first image, the second image, and the third image | The system’s ability to monitor a package over time and send pickup reminders is alleged to meet this limitation by classifying the package as a static object for a specific duration. | ¶34 | col. 2:60-65 | 
- Identified Points of Contention:
- Technical Question: What evidence does the complaint provide that the accused products specifically generate three distinct background images from three corresponding time spans and compare them as required by the claim? The complaint alleges this functionality but does not detail the underlying mechanism.
- Scope Question: Does the "Delivery Guard" feature, which recognizes a package has been left unattended, perform the specific claimed step of "classify[ing] static objects... according to a length of a static time period" through a "comparison of the first image, the second image, and the third image," or does it achieve a similar result through a different, non-infringing technical method?
 
U.S. Patent No. 10,037,467 Infringement Allegations
| Claim Element (from Independent Claim 1) | Alleged Infringing Functionality | Complaint Citation | Patent Citation | 
|---|---|---|---|
| detect and track an object in moving image data, and detect a plurality of object elements from the object... | The eufy AI system is advertised as performing "Human Detection" and "Facial Detection," which represent distinct elements of a single object (a person). | ¶51 | col. 1:10-14 | 
| extract a feature quantity of each of the object elements from a frame image constituting the moving image data | The "BionicMind" AI feature performs "Facial Recognition" to differentiate between family and strangers, which requires extracting unique feature data from detected faces. | ¶52 | col. 1:15-18 | 
| select the frame image satisfying a frame selection criterion for each of the object elements... | The complaint alleges that in performing functions like facial recognition, the system necessarily selects an optimal frame for analysis based on pre-set criteria. | ¶52 | col. 1:19-22 | 
| associate frame specifying information for specifying the selected frame image with the feature quantity of the object element extracted from the selected frame image | The system associates an identified person with their recognized facial features, which are necessarily derived from specific frames, to provide notifications for recognized individuals. | ¶52 | col. 1:23-27 | 
- Identified Points of Contention:
- Technical Question: What evidence supports the allegation that the accused AI performs an explicit step of "select[ing] the frame image satisfying a frame selection criterion" for each object element (e.g., face, clothing) independently, as opposed to a holistic analysis of the video stream?
- Scope Question: Does a general AI-based object recognition process, which may internally prioritize certain frames for analysis, meet the specific, ordered limitation of selecting a frame based on a "criterion being set in advance"?
 
V. Key Claim Terms for Construction
- The Term: "classify static objects... according to a length of a static time period" ('240 Patent, Claim 1) 
- Context and Importance: The viability of the infringement allegation for the '240 Patent may depend on whether this term requires a system that explicitly calculates and categorizes the duration of an object's stillness (e.g., "static for 1 minute," "static for 5 minutes"), or if it can be read more broadly to cover a system that simply recognizes an object has been static for some undefined period before triggering an action. 
- Intrinsic Evidence for Interpretation:
- Evidence for a Broader Interpretation: The claim language itself does not require specific enumerated time categories, suggesting any classification based on duration could suffice.
- Evidence for a Narrower Interpretation: The specification describes differentiating between objects left for different time periods, stating the system achieves "improved static object detection" ('240 Patent, col. 9:8-9), which may support an argument that the term requires a more sophisticated, quantitative classification of time.
 
- The Term: "frame selection criterion" ('467 Patent, Claim 1) 
- Context and Importance: This term is central to the '467 Patent infringement theory. Practitioners may focus on this term because the complaint does not specify what criteria the accused products allegedly use. The dispute will likely center on whether the accused AI's internal workings include a discrete step that maps to this claim element, or if its process is fundamentally different. 
- Intrinsic Evidence for Interpretation:
- Evidence for a Broader Interpretation: The patent states the criterion is "set in advance for each of the object elements" but does not limit the type of criterion, which could support a broad reading that covers any rule-based selection within an AI algorithm ('467 Patent, col. 1:20-22).
- Evidence for a Narrower Interpretation: The background section discusses prior art criteria like "a frame image having a largest face area or a frame image in which a human face is oriented in the front direction" ('467 Patent, col. 1:36-40). A defendant may argue that the term should be construed as being limited to such explicit, geometric, or quality-based criteria, rather than an implicit weighting within a neural network.
 
VI. Other Allegations
- Indirect Infringement: The complaint alleges both induced and contributory infringement for all six patents. The inducement allegations are based on Defendants providing user manuals, online support articles, and tutorial videos that allegedly instruct customers on how to use the infringing functionalities, such as setting up "Delivery Guard," "AI Auto Tracking," and "Cross-Camera Tracking" (Compl. ¶37-39, 55-57, 73-75, 92-94, 109-111, 126-128).
- Willful Infringement: The complaint alleges willful infringement for all six patents. For the '240, '467, '995, '635, and '526 patents, willfulness is based on alleged pre-suit knowledge stemming from the notice letter sent on June 3, 2024 (Compl. ¶43-44, 61-62, 79-80, 98-99, 115-116). For the '814 patent, willfulness is based on knowledge since at least the filing of the complaint (Compl. ¶132-133).
VII. Analyst’s Conclusion: Key Questions for the Case
- A core issue will be one of evidentiary proof versus conclusory allegation: The complaint makes several allegations that directly track the patent claim language (e.g., generating three distinct images from three time spans under the '240 patent; selecting a frame based on a "criterion" under the '467 patent) without providing documentary evidence detailing how the accused products perform these specific steps. A key question will be whether discovery reveals that the accused AI systems operate in the manner claimed or use a technically distinct, non-infringing method to achieve similar end-user results.
- A second key issue will concern system-level infringement and user action: The asserted patents cover complex systems and methods involving user interaction, local device processing, and potentially cloud services. A central question for the court will be how the various components of the eufy ecosystem (camera, HomeBase, app, user) collectively perform the claimed steps and whether all steps of any given method claim are attributable to a single actor, or if the allegations rely on a "divided infringement" theory.
- A third question will be one of claim scope and AI technology: The case raises the question of how patent claims drafted before the widespread adoption of modern neural networks should be construed to read on "black box" AI systems. The dispute may turn on whether the claimed logical steps can be found in the functioning of the accused AI, or if the AI's decision-making process is fundamentally different from the step-by-step methods disclosed in the patents.