2:18-cv-00321
SRC Labs LLC v. Microsoft Corp
I. Executive Summary and Procedural Information
- Parties & Counsel:
  - Plaintiff: SRC Labs, LLC (Texas) and Saint Regis Mohawk Tribe (New York)
  - Defendant: Microsoft Corporation (Washington)
  - Plaintiff’s Counsel: Keller Rohrback L.L.P. and Shore Chan DePumpo LLP
 
- Case Name: SRC Labs, LLC v. Microsoft Corporation
- Case Identification: 2:18-cv-00321, W.D. Wash., 08/03/2018
- Venue Allegations: Venue is alleged to be proper in the Western District of Washington because Microsoft is a Washington corporation headquartered in the district.
- Core Dispute: Plaintiffs allege that Microsoft’s FPGA-based hardware accelerators and associated cloud computing services (e.g., Bing, Azure) infringe six patents related to reconfigurable, multiprocessor computer architectures.
- Technical Context: The patents concern architectures that integrate reconfigurable processors, such as Field-Programmable Gate Arrays (FPGAs), into multiprocessor systems to accelerate computationally intensive tasks, a key technology for modern large-scale data centers.
- Key Procedural History: The complaint alleges Microsoft received actual notice of several patents-in-suit via a letter from SRC Computers on June 23, 2010, followed by discussions in 2015 concerning a potential acquisition of SRC's patent portfolio. Subsequent to the filing of this complaint, inter partes review (IPR) proceedings were initiated against all asserted patents. According to the IPR certificates attached to the patent documents, all claims asserted in this complaint have since been either cancelled by the USPTO or statutorily disclaimed by the patent owner.
Case Timeline
| Date | Event | 
|---|---|
| 1997-12-17 | Priority Date for ’152 and ’110 Patents | 
| 2000-06-13 | ’152 Patent Issued | 
| 2001-06-12 | ’110 Patent Issued | 
| 2001-06-22 | Priority Date for ’687 Patent | 
| 2002-08-13 | ’687 Patent Issued | 
| 2002-10-31 | Priority Date for ’324 and ’800 Patents | 
| 2004-11-23 | Priority Date for ’524 Patent | 
| 2007-05-29 | ’324 Patent Issued | 
| 2008-09-02 | ’524 Patent Issued | 
| 2009-11-17 | ’800 Patent Issued | 
| 2010-06-23 | SRC Computers sends notice letter to Microsoft | 
| 2010-12-01 | Microsoft allegedly begins design of Catapult project | 
| 2013-01-01 | Microsoft's Catapult pilot program goes into production | 
| 2015-01-01 | Large-scale production of Catapult in Bing and Azure begins | 
| 2015-09-30 | Microsoft is offered opportunity to acquire SRC patent portfolio | 
| 2018-08-03 | Plaintiffs' First Amended Complaint Filed | 
II. Technology and Patent(s)-in-Suit Analysis
U.S. Patent No. 6,076,152 - Multiprocessor computer architecture incorporating a plurality of memory algorithm processors in the memory subsystem (Issued Jun. 13, 2000)
The Invention Explained
- Problem Addressed: The patent’s background section describes a trade-off in computer design: general-purpose processors are flexible but relatively slow for specific tasks, while custom, single-function hardware is fast but inflexible. It notes that early reconfigurable computers could not adequately scale to high-performance, multi-processor environments. (’152 Patent, col. 1:10-52).
- The Patented Solution: The invention proposes a multiprocessor architecture that integrates user-reconfigurable hardware elements, termed Memory Algorithm Processors (“MAPs”), directly into the memory subsystem. By placing these MAPs (e.g., FPGAs) in a location globally accessible to all system processors, the architecture aims to offload specific, computationally intensive algorithms to dedicated hardware, thereby accelerating performance without sacrificing the flexibility of the main processors. (’152 Patent, Abstract; col. 2:53-65).
- Technical Importance: This architecture represents an approach to creating hybrid computing systems that combine the performance of custom hardware with the programmability of general-purpose CPUs in a scalable manner. (’152 Patent, col. 3:1-17).
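For illustration only, the claimed operating model can be sketched in software. The class and address range below are analyst assumptions used to visualize the claim language, not code from the patent or the accused system: a write to a reserved, globally addressable location triggers a user-configured algorithm on the operand, and any processor can read the result back.

```python
# Conceptual sketch (not from the patent): a memory bank in which
# reserved addresses are backed by "memory algorithm processors"
# (MAPs). A write to a MAP address applies a user-configured
# algorithm to the operand; any data processor can then read the
# result from the same memory-addressable location.

class MemoryBank:
    MAP_BASE = 0x1000  # hypothetical address range reserved for MAPs

    def __init__(self):
        self._ram = {}
        self._maps = {}  # address -> configured algorithm

    def configure_map(self, address, algorithm):
        """Reconfigure the MAP at `address` with a user-defined algorithm."""
        self._maps[address] = algorithm

    def write(self, address, operand):
        if address in self._maps:
            # The MAP intercepts the write and transforms the operand.
            self._ram[address] = self._maps[address](operand)
        else:
            self._ram[address] = operand

    def read(self, address):
        return self._ram[address]

bank = MemoryBank()
bank.configure_map(MemoryBank.MAP_BASE, lambda x: x * x)  # e.g., squaring

bank.write(MemoryBank.MAP_BASE, 12)    # the write operation carries the operand
print(bank.read(MemoryBank.MAP_BASE))  # prints 144
```

The sketch highlights the two features the claims emphasize: the MAP is individually memory addressable, and its algorithm runs on an operand received from a write operation.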
Key Claims at a Glance
- The complaint asserts independent claims 1 and 11, among others (Compl. ¶107).
- Essential elements of independent claim 11 include:
  - A plurality of data processors
  - A memory bank with a data bus and an address bus
  - A plurality of memory algorithm processors located within the memory bank
  - Means for coupling the memory algorithm processors to the data and address buses, making them individually memory addressable by all data processors
  - The memory algorithm processors are configurable to perform an algorithm on an operand received from a write operation.
 
- The complaint reserves the right to assert dependent claims 2-7, 12, 15, 18, and 21 (Compl. ¶107).
U.S. Patent No. 6,247,110 - Multiprocessor computer architecture incorporating a plurality of memory algorithm processors in the memory subsystem (Issued Jun. 12, 2001)
The Invention Explained
- Problem Addressed: As a continuation of the ’152 patent, the ’110 patent addresses the same technical problem of balancing processing speed and flexibility in scalable, high-performance computer systems. (’110 Patent, col. 1:13-55).
- The Patented Solution: The ’110 patent discloses the same core architecture as its parent: integrating reconfigurable MAPs, such as FPGAs, into the memory subsystem to function as hardware accelerators that are globally accessible to the system's main microprocessors. The goal is to offload user-defined algorithms to these specialized hardware units to enhance overall system performance. (’110 Patent, Abstract; col. 2:5-12).
- Technical Importance: This patent continues the development of a hybrid computing architecture aimed at accelerating specific computational tasks within a general-purpose, multiprocessor framework. (’110 Patent, col. 3:10-24).
Key Claims at a Glance
- The complaint asserts independent claims 1 and 11, among others (Compl. ¶122).
- Essential elements of independent claim 11 include:
  - A plurality of data processors
  - A memory bank with a data bus and an address bus
  - A plurality of reconfigurable memory algorithm processors within the memory bank
  - Means for coupling the memory algorithm processors to the data and address buses, making them individually memory addressable by all data processors
  - The memory algorithm processors are configurable to perform an algorithm on an operand received from a write operation.
 
- The complaint reserves the right to assert dependent claims 2-7, 12, 15, 18, and 21 (Compl. ¶122).
Multi-Patent Capsule: U.S. Patent No. 6,434,687
- Patent Identification: U.S. Patent No. 6,434,687, "System and method for accelerating web site access and processing utilizing a computer system incorporating reconfigurable processors operating under a single operating system image," Issued Aug. 13, 2002.
- Technology Synopsis: This patent describes using a hybrid computer system with reconfigurable processors (MAPs) to accelerate web server tasks, such as processing demographic data to customize web content. The system operates under a single operating system image, allowing the standard microprocessors and reconfigurable MAPs to share resources seamlessly. (’687 Patent, Abstract).
- Asserted Claims: 1-5, 10-13, 18, and 25 (Compl. ¶137).
- Accused Features: Microsoft's online services, including Bing (Ranking, Selection, DNN, CNN), that utilize the Catapult FPGA Accelerators (Compl. ¶137).
Multi-Patent Capsule: U.S. Patent No. 7,225,324
- Patent Identification: U.S. Patent No. 7,225,324, "Multi-adaptive processing systems and techniques for enhancing parallelism and performance of computational functions," Issued May 29, 2007.
- Technology Synopsis: This patent discloses methods for data processing in a reconfigurable computing system that enhance parallelism. It describes techniques such as creating "systolic walls" of functional units to process different dimensions of data concurrently, allowing for multi-dimensional pipelining of computations for applications like seismic imaging or fluid dynamics. (’324 Patent, Abstract).
- Asserted Claims: 1, 8, 9, 17, 18, 21, 22, and 23 (Compl. ¶152).
- Accused Features: Various Microsoft services and applications, including Bing, Brainwave, Azure Accelerated Networking, and data compression, that run on the "role or soft-shell portion of an FPGA in a Catapult Board" (Compl. ¶152).
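The "systolic wall" technique the ’324 synopsis describes can be illustrated in miniature. The sketch below is the analyst's own one-dimensional simplification, not SRC's or Microsoft's implementation: a chain of functional units in which, on each clock tick, every stage operates on the value it holds and passes it to its neighbor, so successive data points move through the pipeline concurrently rather than one at a time.

```python
# Illustrative 1-D "systolic" pipeline (analyst's simplification):
# each clock tick, every stage transforms the value it holds and
# hands it to the next stage, so multiple data points are in flight
# at once.

def systolic_pipeline(stages, stream):
    """Run `stream` through a chain of single-operation stages,
    one datum entering per clock tick."""
    regs = [None] * len(stages)  # pipeline registers between stages
    out = []
    # Feed the stream, then flush with empty ticks to drain the pipe.
    for datum in list(stream) + [None] * len(stages):
        # Shift: the last stage emits; each stage consumes its predecessor.
        emitted = regs[-1]
        for i in range(len(stages) - 1, 0, -1):
            regs[i] = None if regs[i - 1] is None else stages[i](regs[i - 1])
        regs[0] = None if datum is None else stages[0](datum)
        if emitted is not None:
            out.append(emitted)
    return out

# Three functional units applied in sequence to a data stream.
stages = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
print(systolic_pipeline(stages, [1, 2, 3, 4]))  # prints [1, 3, 5, 7]
```

The patent extends this idea to multiple dimensions ("systolic walls" processing different dimensions of data concurrently), but the parallelism-through-pipelining principle is the same.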
Multi-Patent Capsule: U.S. Patent No. 7,421,524
- Patent Identification: U.S. Patent No. 7,421,524, "Switch/network adapter port for clustered computers employing a chain of multi-adaptive processors in a dual in-line memory module format," Issued Sep. 2, 2008.
- Technology Synopsis: This patent describes a switch/network adapter port (SNAP) that uses multi-adaptive processors (MAPs) in a standard memory module format (e.g., DIMM). This allows the MAPs to leverage the high-bandwidth, low-latency memory bus for inter-computer communication in a cluster, rather than relying on the slower peripheral PCI bus. (’524 Patent, Abstract).
- Asserted Claims: 1, 2, 13, and 15 (Compl. ¶164).
- Accused Features: Microsoft's FPGA Accelerators, including Catapult v2, v3, and v4 (Compl. ¶164).
Multi-Patent Capsule: U.S. Patent No. 7,620,800
- Patent Identification: U.S. Patent No. 7,620,800, "Multi-adaptive processing systems and techniques for enhancing parallelism and performance of computational functions," Issued Nov. 17, 2009.
- Technology Synopsis: As a continuation of the ’324 patent, this patent further discloses systems and techniques for enhancing parallelism in reconfigurable computing. It describes multi-dimensional pipeline and systolic wavefront computations for various applications, including seismic analysis, search algorithms, and bioinformatics. (’800 Patent, Abstract).
- Asserted Claims: 1, 8, 9, 17, 18, 21, 22, and 23 (Compl. ¶179).
- Accused Features: Microsoft services and applications running on the Catapult FPGA boards, including Bing, Brainwave, and data compression applications (Compl. ¶179).
III. The Accused Instrumentality
Product Identification
The complaint identifies the accused instrumentalities as Microsoft's "FPGA Accelerators," specifically including the "Catapult v2 (Pikes Peak, Storey Peak), Catapult v3 (Dragontail Peak, Longs Peak, Nicholas Peak), Catapult v4 (Storm Peak)" systems (Compl. ¶107, ¶123). It also accuses Microsoft's online services that utilize these accelerators, including Bing, Azure, and Office 365 (Compl. ¶75).
Functionality and Market Context
The complaint alleges that Microsoft's "Project Catapult" deploys FPGAs as an addition to each data center server, creating an "acceleration fabric" throughout the datacenter (Compl. ¶73). The complaint includes a diagram of the "Catapult V2 architecture" showing an FPGA on a server blade connected to CPUs, DRAM, and a network switch (Compl. p. 16). This architecture is alleged to feature a "Shell" for handling I/O and a "Role" for application logic, and to use a Direct Memory Access (DMA) interface to access main system memory (Compl. ¶69, ¶70). The complaint asserts that Microsoft's investment in FPGAs for this purpose was so significant that it "shifted the worldwide chip market" and led to Intel's $16.7 billion acquisition of FPGA manufacturer Altera (Compl. ¶79, ¶81).
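The shell/role split the complaint attributes to Catapult can be sketched conceptually. All class and method names below are illustrative assumptions, not Microsoft's API: the "Shell" owns the fixed I/O plumbing (network, DMA access to host memory), while the "Role" is the swappable application logic loaded onto the FPGA for a given service.

```python
# Hedged sketch of the alleged shell/role architecture (names are
# the analyst's, not Microsoft's): fixed infrastructure logic hosts
# interchangeable application logic and mediates DMA access to
# host memory.

class Shell:
    """Fixed infrastructure logic: I/O, DMA, and role hosting."""

    def __init__(self, host_memory):
        self.host_memory = host_memory  # stands in for DMA-accessible DRAM
        self.role = None

    def load_role(self, role):
        # Reconfiguring the FPGA swaps the application logic without
        # touching the I/O infrastructure.
        self.role = role

    def handle_request(self, address):
        operand = self.host_memory[address]  # DMA read from host DRAM
        result = self.role.process(operand)  # application ("role") logic
        self.host_memory[address] = result   # DMA write-back
        return result

class RankingRole:
    """Illustrative role: a stand-in for service logic such as ranking."""
    def process(self, operand):
        return sorted(operand, reverse=True)

dram = {0x40: [3, 1, 2]}
shell = Shell(dram)
shell.load_role(RankingRole())
print(shell.handle_request(0x40))  # prints [3, 2, 1]
```

This separation matters to the infringement theories because several counts accuse only the "role or soft-shell portion" of the FPGA, while the claim-construction disputes turn on how the shell's DMA path to main memory is characterized.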
IV. Analysis of Infringement Allegations
’152 Patent Infringement Allegations
| Claim Element (from Independent Claim 11) | Alleged Infringing Functionality | Complaint Citation | Patent Citation | 
|---|---|---|---|
| a plurality of data processors for executing at least one application program... | Microsoft's datacenter servers contain multiple CPUs that execute applications. | ¶68 | col. 2:41-43 | 
| a memory bank having a data bus and an address bus connected to said plurality of data processors | The servers include main system memory (DRAM) connected to the CPUs. | ¶68 | col. 2:43-45 | 
| a plurality of memory algorithm processors within said memory bank... | The Catapult architecture deploys FPGAs as an "addition to each data center server" to create an "acceleration fabric." | ¶73 | col. 2:45-50 | 
| means coupling said plurality of individual memory algorithm processors to said data bus and to said address bus; | The Catapult V2 architecture diagram shows the FPGA connected to the CPU and DRAM via interfaces like QPI and PCIe Gen3. | ¶68 | col. 8:58-67 | 
| said plurality of individual memory algorithm processors being individually memory addressable by all of said plurality of data processors | The "elastic reconfigurable acceleration fabric" allegedly allows harnessing thousands of FPGAs for a single service, suggesting they are addressable resources. | ¶74 | col. 9:1-4 | 
’110 Patent Infringement Allegations
| Claim Element (from Independent Claim 11) | Alleged Infringing Functionality | Complaint Citation | Patent Citation | 
|---|---|---|---|
| a plurality of data processors for executing at least one application program... | Microsoft's datacenter servers contain multiple CPUs that execute applications. | ¶68 | col. 2:48-50 | 
| a memory bank having a data bus and an address bus connected to said plurality of data processors | The servers include main system memory (DRAM) connected to the CPUs. | ¶68 | col. 2:51-53 | 
| a plurality of reconfigurable memory algorithm processors within said memory bank... | The Catapult architecture deploys FPGAs as an "addition to each data center server" to create an "acceleration fabric." | ¶73 | col. 8:6-9 | 
| means coupling said plurality of individual memory algorithm processors to said data bus and to said address bus; | The Catapult V2 architecture diagram shows the FPGA connected to the CPU and DRAM via interfaces like QPI and PCIe Gen3. | ¶68 | col. 7:1-8 | 
| said plurality of memory algorithm processors being individually configurable to perform an identified algorithm on an operand that is received from a write operation... | The FPGAs in the Catapult architecture are described as having a reconfigurable "Role" for application logic. | ¶69 | col. 8:10-16 | 
- Identified Points of Contention:
  - Scope Questions: A primary question concerns the interpretation of "within said memory bank." The complaint describes the accused FPGAs as being on server blades and connected via PCIe (Compl. p. 16), which raises the question of whether this configuration meets the claim requirement of being structurally "within" the memory bank, as depicted in the patents' figures (’152 Patent, Fig. 3).
  - Technical Questions: Another question relates to the functional definition of a "memory algorithm processor." The patents describe this element as performing an algorithm on an operand received from a "write operation to the memory array" (’152 Patent, col. 2:38-39). It is an open question whether the complaint provides evidence that Microsoft's FPGA accelerators, which perform complex tasks like search ranking, operate in this specific manner.
 
V. Key Claim Terms for Construction
- The Term: "memory algorithm processor"
  - Context and Importance: This term is the core of the invention claimed in the ’152 and ’110 patents. Its construction will be critical to determining whether Microsoft's FPGA accelerators, which are general-purpose reconfigurable hardware, fall within the scope of this more specific term.
  - Intrinsic Evidence for a Broader Interpretation: The specification refers to the MAP as a "reconfigurable functional unit" that performs "user definable algorithms" and may comprise an FPGA, language that could support a broader definition covering general-purpose accelerators (’152 Patent, col. 2:53-65).
  - Intrinsic Evidence for a Narrower Interpretation: The claim language and specification repeatedly tie the processor's function to a specific action: performing an algorithm on an operand "received from a write operation to the memory array" (’152 Patent, col. 2:38-39, cl. 11). This suggests a specific mode of operation that may be narrower than the general function of the accused accelerators.
 
- The Term: "within said memory bank"
  - Context and Importance: This structural limitation defines the physical or logical location of the "memory algorithm processor". Practitioners may focus on this term because the accused Catapult architecture places the FPGA on a server blade connected via a standard bus (PCIe), which may not be considered "within" a memory bank in a literal sense.
  - Intrinsic Evidence for a Broader Interpretation: A party could argue that "within" should be interpreted functionally to mean part of the memory subsystem, allowing for components that are tightly coupled to memory controllers and main memory, even if not physically on a memory module.
  - Intrinsic Evidence for a Narrower Interpretation: The patent's Figure 3 explicitly shows the "MAP Assembly" as a component inside the block labeled "Memory Bank." This depiction suggests a tight physical integration that may not read on a separate accelerator card connected via a peripheral bus.
 
VI. Other Allegations
- Willful Infringement: The complaint alleges willful infringement of all asserted patents. The allegations are based on Microsoft's alleged pre-suit knowledge, stemming from a June 23, 2010 notice letter from SRC Computers identifying several of the asserted patents (Compl. ¶48, p. 13). The complaint further alleges that Microsoft's engineers "carefully evaluated each of SRC's patents" during acquisition discussions in 2015, and subsequently "blatantly and intentionally copied the inventions" (Compl. ¶¶113-116).
VII. Analyst’s Conclusion: Key Questions for the Case
- A threshold procedural issue dominates the case: claim viability. Given that every asserted claim across all six patents was cancelled or disclaimed in inter partes review proceedings that concluded after the complaint was filed, the central question is whether any basis for an infringement action remains.
- A core issue of locational scope would have been dispositive: can the term "within said memory bank," which the patent figures depict as a component integrated into the memory bank assembly, be construed to cover Microsoft's Catapult FPGA, which is implemented on a server blade and connected to the memory subsystem via a standard interface like PCIe?
- A key question of functional operation would also have been central: do the accused Catapult accelerators, which execute complex, high-level application logic for services like search ranking, function as the claimed "memory algorithm processors," which the patent specification consistently describes as performing their function on an operand received specifically from a "write operation to the memory array"?