PTAB
PGR2025-00075
OneSource Solutions International, Inc. v. Hippocratic AI, Inc.
Petition
1. Case Identification
- Case #: PGR2025-00075
- Patent #: 12,142,371
- Petitioner(s): OneSource Solutions International, Inc. (“OSSI”)
- Patent Owner(s): Hippocratic AI, Inc.
- Challenged Claims: 1-20
2. Patent Overview
- Title: Multi-Turn Conversational System with Second Opinion Module
- Brief Description: The ’371 patent describes a system architecture using a first large language model (LLM) to conduct a multi-turn, human-like conversation guided by a task checklist. A second LLM, functioning as a "second opinion module," performs research in parallel to validate or supplement the primary conversation.
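For orientation only, the following is a minimal sketch of how the arrangement described above could be organized: a first model drives a multi-turn dialogue from a task checklist while a second model performs research in parallel. The model-calling functions, checklist items, and canned replies are hypothetical stand-ins invented for illustration; nothing here is drawn from the ’371 specification.

```python
"""Minimal sketch of the arrangement described in the Patent Overview.

Hypothetical stand-ins only: the two model-calling functions, the checklist
items, and the canned replies are invented for illustration and are not
drawn from the '371 specification.
"""

from concurrent.futures import ThreadPoolExecutor


def primary_llm_turn(checklist_item: str, history: list[str]) -> str:
    """Stand-in for the first LLM: produces the next checklist-guided question."""
    return f"Assistant: {checklist_item}"


def second_opinion_research(history: list[str]) -> str:
    """Stand-in for the second LLM: researches a portion of the conversation."""
    return f"Second opinion (over {len(history)} prior turns): no concerns flagged."


def run_conversation(checklist: list[str], user_replies: list[str]) -> list[str]:
    """Drive a multi-turn dialogue from the checklist; research runs in parallel."""
    history: list[str] = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        for item, reply in zip(checklist, user_replies):
            history.append(primary_llm_turn(item, history))
            # Launch the second-opinion research concurrently with the turn.
            research = pool.submit(second_opinion_research, list(history))
            history.append(f"User: {reply}")
            # Join the parallel research result back into the conversation record.
            history.append(research.result())
    return history


if __name__ == "__main__":
    checklist = ["Can you confirm your identity?", "What symptoms are you experiencing?"]
    replies = ["This is Jane Doe.", "A mild headache since yesterday."]
    for line in run_conversation(checklist, replies):
        print(line)
```

The thread pool is used only to make the "in parallel" aspect concrete; the overview does not tie the architecture to any particular concurrency mechanism.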
3. Grounds for Unpatentability
Ground 1: Obviousness of Claims 1 and 19 over Brown, the ’968 application, and the ’651 application
- Prior Art Relied Upon: Brown (Patent 9,824,188), Application # 2024/0185968 (’968 application), and Application # 2023/0245651 (’651 application).
- Core Argument for this Ground:
- Prior Art Mapping: Petitioner argued that Brown taught a medical chatbot capable of multi-turn dialogue and user intent determination. The ’968 application added the limitation of a checklist-based query system to guide a predefined dialogue. Finally, the ’651 application disclosed a parallel-running analysis module that collaborates with a conversational AI to perform in-depth analysis and generate recommendations, directly corresponding to the claimed “second opinion module.”
- Motivation to Combine: A POSITA would combine these references to enhance a known medical chatbot (Brown) with the structured safety of a checklist (’968 application) and the improved accuracy of a parallel validation module (’651 application). This combination would predictably result in a more robust and reliable conversational system for regulated fields like healthcare.
- Expectation of Success: Petitioner asserted success would be expected as the combination involved integrating known, compatible technologies to achieve their intended functions.
Ground 2: Obviousness of Claims 1 and 19 over the ’644, ’694, and ’854 patents
- Prior Art Relied Upon: Patent 10,748,644 (’644 patent), Patent 11,348,694 (’694 patent), and Patent 11,977,854 (’854 patent).
- Core Argument for this Ground:
- Prior Art Mapping: Petitioner asserted this alternative combination also rendered the independent claims obvious. The ’644 patent taught a system for mental health assessment with a conversational interface, control logic, and a concurrent "runtime model server logic" that functions as a second opinion module. The ’694 patent disclosed a checklist-based conversational system for determining a user's medical condition. The ’854 patent explicitly taught using an LLM trained with at least one thousand gradient update iterations, meeting the final limitation of claim 1 (a toy illustration of this limitation appears after this ground).
- Motivation to Combine: A POSITA would be motivated to integrate the checklist-based approach of the ’694 patent into the multi-module conversational system of the ’644 patent to improve its structure and reliability. Incorporating the standard training techniques from the ’854 patent was a routine step.
- Expectation of Success: The combination was portrayed as a straightforward integration of established components to achieve a predictable improvement in conversational AI for medical applications.
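As a purely illustrative aside on the final limitation mapped above, the toy loop below shows what counting "at least one thousand gradient update iterations" amounts to. The one-parameter model and data are invented for demonstration; this is not an LLM and nothing here is taken from the ’854 patent or any other cited reference.

```python
"""Toy illustration of the "at least one thousand gradient update iterations"
limitation attributed to the '854 patent in Ground 2. The one-parameter model
and data below are invented for demonstration; this is not an LLM and is not
taken from any cited reference.
"""

# Toy data generated from y = 3 * x, so the optimum weight is 3.0.
data = [(x, 3.0 * x) for x in range(1, 11)]

weight = 0.0
learning_rate = 0.001
gradient_updates = 0

# Perform a fixed budget of gradient update iterations (one update per step).
while gradient_updates < 1000:
    for x, y in data:
        error = weight * x - y
        gradient = 2 * error * x            # derivative of (w*x - y)^2 w.r.t. w
        weight -= learning_rate * gradient  # one gradient update
        gradient_updates += 1
        if gradient_updates >= 1000:
            break

print(f"gradient updates performed: {gradient_updates}, learned weight: {weight:.3f}")
```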
Ground 3: Obviousness of Claim 2 over Brown in view of Lee
- Prior Art Relied Upon: Brown (Patent 9,824,188) and Lee (Patent 11,843,565).
- Core Argument for this Ground:
- Prior Art Mapping: Dependent claim 2 adds control logic comprising trigger detection, question insertion, and answer classification, with the trigger detection and answer classification modules comprising second and third LLMs. Petitioner argued Brown taught the core control logic functionalities (trigger detection, question insertion, answer classification). While Brown did not specify using separate LLMs for these tasks, Lee taught using language and classification models to extract contextual information and provide higher-quality responses (a sketch of such a modular decomposition appears after this ground).
- Motivation to Combine: A POSITA would be motivated to use multiple, specialized LLMs as taught by Lee to implement the distinct logical functions in Brown's system. This modular approach would be an obvious way to improve the accuracy and quality of the conversational system.
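For illustration only, the sketch below decomposes the claim 2 control logic into separate trigger-detection, question-insertion, and answer-classification modules. The rule-based stubs merely stand in for the second and third LLMs; the trigger keyword and follow-up question are invented and do not come from Brown, Lee, or the ’371 patent.

```python
"""Illustrative decomposition of the claim 2 control logic into separate
trigger-detection, question-insertion, and answer-classification modules.
The rule-based stubs below merely stand in for the second and third LLMs;
the trigger keyword and follow-up question are invented and do not come
from Brown, Lee, or the '371 patent.
"""

from typing import Optional


def trigger_detection_model(user_turn: str) -> bool:
    """Stand-in for a 'second LLM': flags turns that warrant an inserted question."""
    return "pain" in user_turn.lower()


def answer_classification_model(user_turn: str) -> str:
    """Stand-in for a 'third LLM': classifies the user's answer into a category."""
    return "symptom_report" if "pain" in user_turn.lower() else "other"


def question_insertion(category: str) -> str:
    """Inserts a follow-up question chosen from the answer's category."""
    follow_ups = {"symptom_report": "On a scale of 1 to 10, how severe is the pain?"}
    return follow_ups.get(category, "Can you tell me more about that?")


def control_logic(user_turn: str) -> Optional[str]:
    """Route a user turn through the three modules; return any inserted question."""
    if not trigger_detection_model(user_turn):
        return None
    return question_insertion(answer_classification_model(user_turn))


if __name__ == "__main__":
    print(control_logic("I have been having chest pain."))   # follow-up inserted
    print(control_logic("Just confirming my appointment."))  # no trigger -> None
```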
Additional Grounds: Petitioner asserted that all claims are also unpatentable under §112(a) for lack of enablement and written description, under §112(b) for indefiniteness, and under §101 as being directed to patent-ineligible abstract ideas. The §101 challenge argued the claims merely automate the mental steps of following a script, monitoring a conversation, and consulting a reference, without an inventive concept that improves computer technology.
4. Key Technical Contentions (Beyond Claim Construction)
- Lack of Enablement (§112(a)): Petitioner argued the specification fails to provide enabling technical detail for its core components. The “second-opinion module” was described as a “black-box,” with no disclosure of how it is invoked, how it ingests data, or how its parallel processing is synchronized with the primary LLM (see the illustrative sketch after this list). Similarly, the patent allegedly failed to explain how abstract “control signals” are generated or injected into the LLM’s context to construct the downstream conversation, requiring undue experimentation.
- Indefiniteness (§112(b)): Petitioner contended that key claim terms lack reasonable certainty. Terms such as “research into a portion of the... conversation,” “control signals contribute in part,” and “kickout category” were argued to be vague and subjective, with no objective boundaries defined in the specification. This ambiguity allegedly made it impossible for a POSITA to determine the scope of the claims.
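To make the enablement dispute concrete, the sketch below shows one conceivable way a second-opinion task could be launched in parallel with a primary conversation turn and synchronized with it. It is offered only to illustrate the kind of implementation detail the petition says the specification omits; the asyncio pattern and stub coroutines are assumptions, not anything disclosed in the ’371 patent.

```python
"""One conceivable pattern for launching a second-opinion task in parallel with
a primary conversation turn and synchronizing on both results. Offered only to
illustrate the kind of implementation detail the petition says the '371
specification omits; the asyncio pattern and stub coroutines are assumptions,
not anything disclosed in the patent.
"""

import asyncio


async def primary_turn(prompt: str) -> str:
    """Stand-in for the first LLM generating the next conversational reply."""
    await asyncio.sleep(0.1)  # simulated model latency
    return f"primary reply to: {prompt}"


async def second_opinion(conversation_excerpt: str) -> str:
    """Stand-in for the second LLM researching a portion of the conversation."""
    await asyncio.sleep(0.2)  # simulated research latency
    return f"research note on: {conversation_excerpt}"


async def handle_turn(prompt: str) -> tuple:
    """Run both tasks concurrently and synchronize before continuing the dialogue."""
    reply, note = await asyncio.gather(primary_turn(prompt), second_opinion(prompt))
    return reply, note


if __name__ == "__main__":
    print(asyncio.run(handle_turn("Patient reports dizziness.")))
```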
5. Relief Requested
- Petitioner requested the institution of a Post-Grant Review and the cancellation of claims 1-20 of the ’371 patent in their entirety.