PTAB
IPR2025-01399
CrowdStrike, Inc. v. Skysong Innovations, LLC
Petition
1. Case Identification
- Case #: IPR2025-01399
- Patent #: 11,775,831
- Filed: September 26, 2025
- Petitioner(s): CrowdStrike, Inc.
- Patent Owner(s): Skysong Innovations, LLC
- Challenged Claims: 1-11
2. Patent Overview
- Title: Reducing Computation in Convolutional Neural Networks
- Brief Description: The ’831 patent discloses techniques to reduce computation in Convolutional Neural Networks (CNNs). The method involves performing an initial, low-precision approximate computation on multiple data sets using only their most significant bits (MSBs), identifying the data set that exhibits a maximum value, and then performing a full-precision computation only on that specific data set to conserve computational resources.
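For reference, the technique described above can be sketched in a few lines of Python. This is an illustrative reading of the claim language only, not the patent's actual implementation; the bit-widths, data, and function names are hypothetical.

```python
# Hypothetical sketch of the claimed technique: score each data set using only
# its most significant bits (MSBs), pick the data set with the maximum
# approximate value, then recompute only that data set at full precision.

def msb_truncate(value: int, total_bits: int, msb_bits: int) -> int:
    """Keep only the `msb_bits` most significant bits of an unsigned value."""
    shift = total_bits - msb_bits
    return (value >> shift) << shift

def dot(weights, values):
    return sum(w * v for w, v in zip(weights, values))

def select_then_refine(weights, data_sets, total_bits=8, msb_bits=4):
    # Low-precision pass: approximate score for every data set, MSBs only.
    approx = [
        dot(weights, [msb_truncate(v, total_bits, msb_bits) for v in ds])
        for ds in data_sets
    ]
    winner = max(range(len(data_sets)), key=lambda i: approx[i])
    # Full-precision pass: exact computation on the winning data set alone.
    return winner, dot(weights, data_sets[winner])

weights = [1, 2, 1]
data_sets = [[200, 10, 30], [40, 50, 60], [90, 100, 110]]
idx, exact = select_then_refine(weights, data_sets)
```

Only one full-precision dot product is ever computed, which is the asserted source of the computational savings.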
3. Grounds for Unpatentability
Ground 1: Obviousness over Ujiie and Moons - Claims 1, 10, and 11 are obvious over Ujiie in view of Moons.
- Prior Art Relied Upon: Ujiie (a 2016 IEEE conference paper titled "Approximated Prediction Strategy for Reducing Power Consumption of Convolutional Neural Network Processor") and Moons (a 2016 conference paper titled "Energy-Efficient ConvNets Through Approximate Computing").
- Core Argument for this Ground:
- Prior Art Mapping: Petitioner argued that Ujiie discloses the core architecture of the challenged claims, and Moons supplies the missing detail regarding approximation using MSBs. Ujiie taught a "LazyConvPool" (LCP) method that first performs a lightweight, approximate convolution ("AppConv") to predict a feature window, then performs a single, exact full-precision convolution ("Conv") only on the predicted window. This two-step process of approximate-then-exact computation allegedly mapped to the structure of claim 1. However, Ujiie's method of approximation, "Sign Connect," only used a single bit for kernel weights. Moons was cited to remedy this difference, as it taught "precision scaling"—reducing the number of bits for both weights and inputs to save energy. Critically, Moons taught quantizing inputs to a lower bit-width (e.g., 4-10 bits) and explicitly stated that in precision scaling, the "MSB-bit should always be placed at the MSB position." The combination of Ujiie’s two-step framework with Moons’s technique of performing the approximation using a set of MSBs allegedly rendered claim 1 obvious. Dependent claims 10 (applying the CNN to image analysis) and 11 (the first iteration approximating the full precision value) were also argued to be taught by the combination.
- Motivation to Combine: A Person of Ordinary Skill in the Art (POSITA) would combine Moons's precision scaling with Ujiie's LCP framework to improve system accuracy while retaining energy efficiency. Ujiie's Sign Connect approximation was aggressive and led to performance degradation on complex images. Moons’s multi-bit quantization offered a better, more flexible trade-off, providing higher accuracy than Ujiie's method while still achieving significant energy savings. A POSITA would thus be motivated to replace Ujiie's less accurate approximation method with the superior and more flexible technique taught by Moons to strike a better balance between accuracy and efficiency.
- Expectation of Success: A POSITA would have a reasonable expectation of success because the combination involved substituting one known approximation technique (Sign Connect) for another (precision scaling) within a predictable system. Both references addressed the same problem of reducing power consumption in CNNs, making the integration of their teachings straightforward.
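The asserted Ujiie-Moons combination can be illustrated with a short sketch: an approximate convolution ("AppConv") at a reduced bit-width predicts which window in a pooling region will be the maximum, and a single exact convolution ("Conv") is then run on that window. Per Moons, both weights and inputs are quantized with the MSB held in place. The names, kernel values, and bit-widths below are hypothetical.

```python
# Illustrative sketch of the petitioner's Ujiie-Moons mapping (hypothetical
# values): AppConv scores every window at reduced precision; Conv runs at
# full precision only on the predicted window.

def quantize(x: int, total_bits: int, kept_bits: int) -> int:
    """Moons-style precision scaling: drop LSBs, keep MSBs at the MSB position."""
    shift = total_bits - kept_bits
    return (x >> shift) << shift

def conv(kernel, window):
    return sum(k * w for k, w in zip(kernel, window))

def lazy_conv_pool(kernel, windows, total_bits=8, kept_bits=3):
    # AppConv: low-precision scores, quantizing both weights and inputs.
    approx = [
        conv([quantize(k, total_bits, kept_bits) for k in kernel],
             [quantize(w, total_bits, kept_bits) for w in window])
        for window in windows
    ]
    predicted = max(range(len(windows)), key=lambda i: approx[i])
    # Conv: one exact, full-precision convolution on the predicted window.
    return conv(kernel, windows[predicted])

result = lazy_conv_pool([64, 32], [[100, 20], [60, 200], [30, 40]])
```

The substitution the petition urges is confined to the `quantize` step: Ujiie's single-bit "Sign Connect" approximation is replaced with Moons's multi-bit precision scaling, leaving the two-step predict-then-compute framework unchanged.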
Ground 2: Obviousness over Ujiie, Moons, and Kaul - Claims 2-6 and 8-9 are obvious over Ujiie and Moons in view of Kaul.
- Prior Art Relied Upon: Ujiie, Moons, and Kaul (a 2016 IEEE conference paper titled "A 21.5M-Query-Vectors/s 3.37nJ/Vector Reconfigurable k-Nearest-Neighbor Accelerator with Adaptive Precision...").
- Core Argument for this Ground:
- Prior Art Mapping: This ground built upon the Ujiie-Moons combination to address claim 2, which introduced an iterative process for situations where the first approximation results in a tie (i.e., the "first set of values are the same"). The base Ujiie-Moons combination did not explicitly teach resolving such ties. Petitioner asserted that Kaul provided the solution. Kaul taught an "adaptive precision" method for k-Nearest-Neighbor (kNN) computations, which were described as key building blocks for computer vision. This method started with a low-accuracy computation using MSBs and, if the result was inconclusive, performed "[i]terative refinement" by advancing from MSB to LSB "with a successive 2b [2 bits]...in each iteration." Petitioner argued this directly taught the limitation of claim 2: performing a second, higher-precision iteration using a larger set of MSBs to resolve ambiguity from the first iteration. This logic was extended to dependent claims 3-6 and 8-9, which related to processing the results of the second iteration.
- Motivation to Combine: A POSITA implementing the Ujiie-Moons system would inevitably encounter scenarios where the low-precision first pass resulted in a tie, a known problem in the field. To resolve this, a POSITA would look to known techniques for improving precision when a low-cost computation is insufficient. Kaul provided a well-understood technique for this exact problem—iteratively increasing bit-width—in the same field of computer vision. The motivation was to solve a predictable problem with a known, analogous solution to make the overall system more robust.
- Expectation of Success: The combination was asserted to be predictable. Kaul’s kNN adaptive precision technique was compatible with Ujiie’s CNN processing, as both involve vector-based multiplication and addition operations where bit-width directly impacts precision and efficiency. Applying Kaul's iterative refinement to the Ujiie-Moons framework was argued to be a straightforward application of a known principle to achieve a predictable outcome.
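Kaul's adaptive-precision refinement, as mapped to claim 2, can be sketched as follows. The starting bit-width and the 2-bit step mirror the "[i]terative refinement...with a successive 2b" language quoted above, but the function names and values are illustrative assumptions, not Kaul's actual design.

```python
# Hedged sketch of Kaul-style adaptive precision: begin with a few MSBs; if
# two candidates tie, widen the MSB set by 2 bits per iteration (advancing
# MSB toward LSB) until the tie is broken or full precision is reached.

def truncate(x: int, total_bits: int, kept_bits: int) -> int:
    shift = total_bits - kept_bits
    return (x >> shift) << shift

def adaptive_argmax(scores, total_bits=8, start_bits=2, step=2):
    kept = start_bits
    while kept <= total_bits:
        approx = [truncate(s, total_bits, kept) for s in scores]
        best = max(approx)
        leaders = [i for i, a in enumerate(approx) if a == best]
        if len(leaders) == 1:      # tie resolved at this precision
            return leaders[0], kept
        kept += step               # iterative refinement: add 2 more bits
    return leaders[0], total_bits  # still tied even at full precision

scores = [170, 180, 55]
idx, bits_used = adaptive_argmax(scores)
```

On this input the first 2-bit pass cannot separate 170 from 180, so one refinement step to 4 bits resolves the tie, which is the scenario the petition maps onto the "second iteration using a larger set of MSBs" limitation of claim 2.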
Additional Grounds: Petitioner asserted alternative obviousness challenges for claims 2-6 and 8-9 based on adding Ishii (Application # 2016/0259995) to the Ujiie-Moons-Kaul combination to explicitly teach checking for a tie. Additional grounds for claim 7 were asserted based on combinations including Ravindran (Application # 2016/0259994) to teach using specific 3x3 convolution kernel sizes, which was argued to be a common and obvious design choice.
4. Relief Requested
- Petitioner requests institution of an inter partes review and cancellation of claims 1-11 of the ’831 patent as unpatentable.