6:23-cv-00716
Rafqa Star LLC v. Google LLC
I. Executive Summary and Procedural Information
- Parties & Counsel:
- Plaintiff: Rafqa Star, LLC (Texas)
- Defendant: Google LLC (Delaware)
- Plaintiff’s Counsel: Devlin Law Firm LLC
- Case Identification: 6:23-cv-00716, W.D. Tex., 10/16/2023
- Venue Allegations: Venue is based on Google maintaining an established place of business within the Western District of Texas, specifically an office in Austin.
- Core Dispute: Plaintiff alleges that Google’s augmented reality navigation and viewing features infringe a patent related to providing video and tactile feedback to a user of a portable electronic device that is in motion.
- Technical Context: The technology at issue is mobile augmented reality (AR), which overlays computer-generated information, such as navigational cues, onto a live video feed of the real world from a device's camera.
- Key Procedural History: The asserted patent claims an earliest priority date of March 11, 2011, stemming from a long chain of continuation and divisional applications. The complaint alleges the inventor’s efforts in the AR field date back to 2011.
Case Timeline
| Date | Event |
|---|---|
| 2011-03-11 | ’441 Patent Priority Date |
| 2023-09-12 | ’441 Patent Issue Date |
| 2023-10-16 | Complaint Filing Date |
II. Technology and Patent(s)-in-Suit Analysis
U.S. Patent No. 11,756,441 - "Methods, Systems, and Computer Program Products for Providing Feedback to a User of a Portable Electronic in Motion"
- Issued: September 12, 2023 (’441 Patent)
The Invention Explained
- Problem Addressed: The patent addresses the challenge of directing the attention of a user who is in motion and engaged with a portable electronic device (PED) (e.g., a smartphone) to a specific object or location in their physical environment (Compl. ¶10; ’441 Patent, Abstract).
- The Patented Solution: The invention describes a system that detects that the PED is in motion, captures video of the user's surroundings, and then presents a video output on the display to direct the user's attention toward a specific object. A key aspect is that this video output is combined with a separate user interface element and a tactile output, both based on the object's location and the device's movement, to guide the user (’441 Patent, col. 2:37-43, Fig. 2). The system is designed to provide this feedback when the user is moving in a direction different from the direction of the target object, effectively correcting the user's path (’441 Patent, col. 36:16-24).
- Technical Importance: The complaint asserts that the patented concepts improve the functioning of augmented reality technologies by presenting location-relevant information based on detected movement (Compl. ¶19).
Key Claims at a Glance
- The complaint asserts infringement of at least independent claim 1 (Compl. ¶33).
- The essential elements of independent claim 1 include (illustrated in the sketch following this list):
- Causing a video capture device to capture video input while the portable electronic device is moving in a "first direction" that is different from a "second direction" toward a first object.
- Receiving the video data from that capture.
- Presenting a "video output" on the display to direct the user in the "second direction" toward the object.
- Simultaneously presenting a "user interface element" on the display that is based on the object's location and the device's movement.
- Causing a "tactile output device" to provide a "tactile output" in conjunction with the video presentation.
- Capturing and presenting additional video as the user moves in the second direction toward the object.
- The complaint reserves the right to assert additional claims (Compl. ¶32).
III. The Accused Instrumentality
Product Identification
The accused instrumentalities are the "Google Navigate with Live View feature within the Android operating system, along with ARCore, ARKit, Google Extended Reality (XR), Google Immersive View," and associated hardware (e.g., Google Pixel devices) and backend servers that support these features (Compl. ¶27).
Functionality and Market Context
The complaint focuses on augmented reality features that overlay navigational and location-based information onto a mobile device's live camera feed (Compl. ¶¶27-28). These features are described as being part of a broader ecosystem of mobile devices, operating systems, and location-based services offered by Google (Compl. ¶20). The complaint alleges these instrumentalities embody a system for detecting the movement of a portable electronic device and providing feedback according to the claimed invention (Compl. ¶30).
IV. Analysis of Infringement Allegations
The complaint references an infringement analysis in an "Exhibit 2," which was not filed with the complaint (Compl. ¶32). Therefore, a detailed claim chart comparison is not possible. The narrative infringement theory alleges that the Accused Instrumentalities, when used, perform the steps of claim 1 by detecting device motion and presenting augmented reality video to a user (Compl. ¶¶30-31).
No probative visual evidence is provided in the complaint.
- Identified Points of Contention: Based on the claim language and the general description of the accused technology, the infringement analysis may focus on several key questions:
- Technical Question (Tactile Output): Claim 1 requires a "tactile output" that is provided "with at least a portion of the presentation of the video output." A central question will be whether the Accused Instrumentalities generate such a specific, coordinated tactile feedback, as distinct from a generic notification vibration. The complaint does not allege any specific facts related to a tactile output.
- Scope Question (Directional Limitation): The claim requires capturing video while moving in a "first direction" that is "different than a second direction towards a first object," and then presenting output to direct the user along that second direction. The analysis will question whether the accused AR navigation features are designed to, or in fact do, operate in this specific "course-correction" scenario, or if they primarily function when the user is already oriented toward the destination.
- Technical Question (UI Element): Claim 1 requires both a "video output for directing the user" and a separate "user interface element." A potential issue is whether the AR overlays in Google's products (e.g., arrows, signs) constitute a single "video output" or if they can be separated into the two distinct elements required by the claim.
V. Key Claim Terms for Construction
The Term: "tactile output... with at least a portion of the presentation of the video output" (claim 1)
- Context and Importance: This term is critical because standard AR navigation does not typically involve integrated tactile feedback. The infringement case may depend on whether a generic device vibration can meet this limitation, or if a more specific, coordinated haptic response is required. Practitioners may focus on this term because the claim language "with... the presentation of the video output" suggests a closer relationship than an independent notification.
- Intrinsic Evidence for Interpretation:
- Evidence for a Broader Interpretation: The patent does not appear to explicitly define "tactile output" in a narrow way, which could leave room for an argument that any vibration felt by the user while viewing the video output meets the limitation.
- Evidence for a Narrower Interpretation: Claim 1 recites the tactile output is "based on the location of the first object and the movement of the portable electronic device" (’441 Patent, col. 36:37-40). This suggests the tactile feedback is not arbitrary but is functionally linked to the navigational task, potentially arguing against a generic notification satisfying the claim.
The Term: "user interface element" (claim 1)
- Context and Importance: Claim 1 requires this element in addition to the "video output for directing the user." The viability of the infringement claim may turn on whether Google's AR overlays can be parsed into these two distinct components.
- Intrinsic Evidence for Interpretation:
- Evidence for a Broader Interpretation: The term itself is broad. An argument could be made that the directional arrows are the "video output" and informational text (e.g., street names, distance) is the separate "user interface element."
- Evidence for a Narrower Interpretation: The patent describes the user interface element as including a "tooltip and text" or "balloon and text" (’441 Patent, col. 36:61-67), which could be construed as requiring a discrete, pop-up style element rather than text simply overlaid on the video.
VI. Other Allegations
- Indirect Infringement: The complaint alleges that Google supplies the Accused Instrumentalities with knowledge of the ’441 Patent and with knowledge that they "constitute critical and material parts of the claimed inventions" (Compl. ¶30). This language forms the factual basis for a potential claim of contributory infringement.
- Willful Infringement: The complaint does not use the word "willful" but does request a declaration that the case is "exceptional under 35 U.S.C. § 285" and an award of attorneys' fees (Compl. Prayer for Relief ¶C). The allegation of knowledge is based on the date Google received notice of the lawsuit, which would only support a theory of post-suit willful infringement (Compl. ¶30).
VII. Analyst’s Conclusion: Key Questions for the Case
This dispute appears to center on highly specific claim requirements and whether they are present in Google's mass-market augmented reality products. The case will likely turn on the following core questions:
- A central evidentiary question will be one of technical capability: Can the plaintiff demonstrate that Google’s Accused Instrumentalities generate a "tactile output" that is functionally and temporally coordinated with the AR video display, as required by claim 1, or is this element entirely absent from the accused products?
- A key issue of claim scope will be: Can the phrase "a first direction that is different than a second direction" be construed broadly to cover any general user movement, or does it narrowly require a specific "off-course" to "on-course" correction scenario that the accused products may not implement?
- A further question of claim interpretation will be one of distinction: Do the AR overlays in Google’s products represent a single, integrated "video output," or can they be factually and legally separated into both the "video output for directing the user" and the distinct "user interface element" that claim 1 requires?