
WO2025095978A1 - Repair data controller for memories with serial repair interfaces - Google Patents

Repair data controller for memories with serial repair interfaces

Info

Publication number
WO2025095978A1
Authority
WO
WIPO (PCT)
Prior art keywords
repair data
memory
repair
circuit
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2023/078692
Other languages
French (fr)
Inventor
Mayank Parasrampuria
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC
Priority to PCT/US2023/078692
Priority to TW113141201A (publication TW202533244A)
Publication of WO2025095978A1


Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C29/00Checking stores for correct operation; Subsequent repair; Testing stores during standby or offline operation
    • G11C29/70Masking faults in memories by using spares or by reconfiguring
    • G11C29/78Masking faults in memories by using spares or by reconfiguring using programmable devices
    • G11C29/84Masking faults in memories by using spares or by reconfiguring using programmable devices with improved access time or stability
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C29/00Checking stores for correct operation; Subsequent repair; Testing stores during standby or offline operation
    • G11C29/04Detection or location of defective memory elements, e.g. cell constructional details, timing of test signals
    • G11C29/08Functional testing, e.g. testing during refresh, power-on self testing [POST] or distributed testing
    • G11C29/12Built-in arrangements for testing, e.g. built-in self testing [BIST] or interconnection details
    • G11C29/44Indication or identification of errors, e.g. for repair
    • G11C29/4401Indication or identification of errors, e.g. for repair for self repair
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C29/00Checking stores for correct operation; Subsequent repair; Testing stores during standby or offline operation
    • G11C29/70Masking faults in memories by using spares or by reconfiguring
    • G11C29/78Masking faults in memories by using spares or by reconfiguring using programmable devices
    • G11C29/785Masking faults in memories by using spares or by reconfiguring using programmable devices with redundancy programming schemes
    • G11C29/789Masking faults in memories by using spares or by reconfiguring using programmable devices with redundancy programming schemes using non-volatile cells or latches
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C29/00Checking stores for correct operation; Subsequent repair; Testing stores during standby or offline operation
    • G11C29/70Masking faults in memories by using spares or by reconfiguring
    • G11C29/86Masking faults in memories by using spares or by reconfiguring in serial access memories, e.g. shift registers, CCDs, bubble memories
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C29/00Checking stores for correct operation; Subsequent repair; Testing stores during standby or offline operation
    • G11C29/70Masking faults in memories by using spares or by reconfiguring
    • G11C29/76Masking faults in memories by using spares or by reconfiguring using address translation or modifications

Definitions

  • This specification relates to devices having the capability to perform memory repair sharing, specifically in components that operate via a serial repair interface.
  • Memory repair is necessary to correct faults contained within memory structures, which can be present from manufacturing defects or can develop over time in operation. The repair interface used by any given device is dependent on many factors and can differ between manufacturers and designers.
  • Repair interfaces are available in both parallel and serial formats.
  • In a parallel repair interface, multiple spare memory structures can be exchanged directly for faulty address data. This can be performed in various ways, including a column-based exchange, a row-based exchange, or a combination of both.
  • In a serial repair interface, there are multiple unique addresses for each memory structure that must be sequenced correctly to effectuate a repair.
  • This specification describes a hardware controller and parallel mapping circuit for providing repair data to a memory circuit at an IP block (or device) of a system-on-chip (“SoC”).
  • In general, a memory circuit includes memory macros that can have redundancy or repair features.
  • For example, a 1-kilobyte (KB) memory array includes a main sector and a spare (or redundant) sector. Each of the main sector and the spare sector can include multiple rows and banks. The main sector can include multiple main rows and main banks, whereas the spare/redundant sector can include one or more spare rows and one or more spare banks.
  • The memory macro has a redundancy feature that allows for repair data to be written to a spare row in response to detecting a fault associated with a main row.
  • The redundancy feature of a memory macro is activated and controlled using an interface of the memory. For example, the interface may be a serial repair interface or a parallel repair interface.
  • In a serial repair interface, locations or sectors of a memory structure are programmed and/or repaired serially, for example, by sequentially traversing memory addresses for the corresponding memory locations. This repair or programming is performed via a serial repair interface, which controls the selection of addresses that identify memory locations of a redundant memory sector that receives repair data when a fault is detected at the corresponding location of a main memory sector. Repairing memory using a serial repair interface usually occurs one memory structure at a time, with the serial interface repairing each address of the memory structure in succession. Relative to parallel repair interfaces, repair sharing in serial repair interfaces typically requires different repair processes, at least because of the serial or sequential way in which repair operations are performed on the individual addresses of each memory structure. Additionally, parallel repair interfaces allow for concurrent repair operations that offer certain benefits and advantages in speed and/or efficiency over serial repair interfaces.
  • The hardware controller and parallel mapping circuit of the disclosed techniques can be used to implement repair sharing via a serial repair interface to realize some (or all) of the repair speed and efficiency advantages normally associated with parallel repair formats. For example, these techniques can be used to reduce boot times, reduce an overall circuit footprint of an SoC, and provide additional benefits to the system, such as improved latency and response times when launching or executing an application at a user/mobile device that includes the SoC.
  • An example solution utilizes built-in self-repair (BISR) circuitry and associated logic. BISR circuitry can be configured to perform a repair operation by obtaining and transferring repair data to a spare memory cell when a chip that includes the memory is powered up.
  • The BISR circuitry can include one or more custom BISR loaders (CBLs) and Virtual Memory Wrappers (VMWs).
  • In some examples, one CBL is assigned to each BISR chain within the SoC.
  • The BISR chains within the SoC may be organized in different ways. For example, one chain or multiple chains may be assigned to each device or IP block within the SoC.
  • Additionally, one or more VMWs can be assigned to each device or IP block. For example, the SoC can include a Tensor Processing Unit (TPU) that is assigned five VMWs for its different memory structures, whereas a Digital Signal Processor (DSP) on the SoC may only be assigned a single VMW for its memory structure(s).
  • Other combinations and quantities of VMWs to memory structures in a device or IP block are also within the scope of this disclosure.
  • The hardware controller and parallel mapping circuit can be used to execute repair operations during an example boot sequence.
  • In some examples, once a boot-up process occurs on the SoC, a BISR loading operation is completed by a BISR controller at the SoC level.
  • The BISR controller is configured to generate a completion signal in response to completing a BISR loading operation to load a subset of repair data for implementing repair operations.
  • The completion signal can be represented as a bisr_done parameter and may be referenced within this disclosure as a bisr_done signal. This bisr_done signal is then provided to each CBL in each device or IP block within the SoC, for example, by the BISR controller.
  • Each CBL initiates a clock (e.g., a local clock) in response to receiving the bisr_done signal from the BISR controller.
  • For example, the CBL can initiate the clock concurrent with detection of the bisr_done signal or shortly thereafter.
  • In some examples, this clock signal increments based on a clock frequency associated with the SoC. Other examples use other sources of timing.
  • Each CBL is configured to generate a load enable signal based on a corresponding clock signal.
  • The corresponding clock signal may be locally generated, passed from a higher-level SoC processor, or both.
  • The CBL can iteratively increment the bisr_load enable signal based on a clock frequency of its corresponding clock signal and iteratively pass a respective instance of an incremented load enable signal to a corresponding VMW in its BISR chain.
  • In some examples, this load enable signal is an incrementing bisr_load signal.
  • Each VMW conducts its repair operation for one or more addresses of a memory based on its corresponding bisr_load signal.
  • For example, a memory address is selected and the VMW is used to apply repair data in response to detecting an incremented bisr_load signal from the CBL.
  • To perform repair operations across the SoC, each VMW contains at least one BISR and a Pseudo Parallel Map (PPM) that receives the bisr_load signal and maps the repair operation to a memory address. Because each memory structure within the IP block may have a differing number of memory addresses, the number of required repair operations is potentially different across VMWs. To compensate for differentiation in the number of memory addresses across IP blocks, each PPM within each VMW includes a dummy register. A size of the dummy register is determined and/or configured based on a threshold amount of memory addresses across all VMWs serviced by the CBL, where the threshold amount can be defined or indicated based on an integer value, N.
  • For example, a value of N can be assigned based on the largest number of memory addresses expected in a particular IP block.
  • In general, innovative aspects of the subject matter described in this specification can be implemented in a method for providing repair data to a memory circuit, the method including: loading, using a first interface, repair data to a pseudo parallel mapping circuit coupled to the memory circuit; generating, by a repair data controller, control signals that control shifting the repair data from the pseudo parallel mapping circuit to the memory circuit; and shifting, using a second interface, the repair data from the pseudo parallel mapping circuit to redundant sectors of the memory circuit based on multiple input/output (I/O) maps of the memory circuit.
  • Other implementations include a system for providing repair data to a memory circuit, the system including: a processing device; and a non-transitory machine-readable storage device storing instructions that are executable by the processing device to cause performance of operations including: loading, using a first interface, repair data to a pseudo parallel mapping circuit coupled to the memory circuit; generating, by a repair data controller, control signals that control shifting the repair data from the pseudo parallel mapping circuit to the memory circuit; and shifting, using a second interface, the repair data from the pseudo parallel mapping circuit to redundant sectors of the memory circuit based on multiple input/output (I/O) maps of the memory circuit.
  • Other implementations include a non-transitory machine-readable storage device storing instructions used to provide repair data to a memory circuit, the instructions being executable by a processing device to cause performance of operations including: loading, using a first interface, repair data to a pseudo parallel mapping circuit coupled to the memory circuit; generating, by a repair data controller, control signals that control shifting the repair data from the pseudo parallel mapping circuit to the memory circuit; and shifting, using a second interface, the repair data from the pseudo parallel mapping circuit to redundant sectors of the memory circuit based on multiple input/output (I/O) maps of the memory circuit.
  • In some examples, the loading of the repair data includes: loading, using the first interface, respective portions of the repair data in parallel to a built-in self-repair register of the pseudo parallel mapping circuit.
  • In some examples, the shifting of the repair data includes: serially shifting, using the second interface, respective portions of the repair data from the built-in self-repair register of the pseudo parallel mapping circuit to the redundant sectors of the memory circuit.
  • In some examples, shifting the repair data includes: shifting, by the first register, a respective portion of the repair data to a redundant sector of the memory circuit that corresponds to a main sector of the memory circuit, where the respective portion of repair data is shifted to the redundant sector based on a fault at the corresponding main sector.
  • In some examples, generating the control signals includes generating multiple clock signals, and shifting the repair data includes shifting the repair data based on the multiple clock signals, such that respective portions of the repair data are shifted serially over multiple clock cycles using the second interface.
  • In some examples, the pseudo parallel mapping circuit includes a dummy register, and shifting the repair data includes: performing one or more serial shifts using the dummy register when a size of a corresponding I/O map is less than a size of the dummy register.
  • In some examples, the dummy register is configured to maintain a constant shift cycle when the repair data is provided to a memory instance of the memory circuit.
  • In some examples, a size of the dummy register coincides with a maximum size of an I/O map in a particular memory instance of the memory circuit.
  • Some examples additionally include: generating a repair signature indicating one or more faults at the memory circuit; and activating a redundancy feature of a memory macro of the memory circuit based on the repair signature, where, in response to activating the redundancy feature of the memory macro, the repair data is loaded to the pseudo parallel mapping circuit and shifted to the redundant sectors of the memory circuit to address the one or more faults indicated by the repair signature (a sketch of this flow follows this list of features).
  • In some examples, the first interface is a parallel interface of the pseudo parallel mapping circuit, and the second interface is a serial interface of the pseudo parallel mapping circuit.
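To make the repair-signature feature above concrete, the following is a minimal Python sketch of that flow. It is illustrative only: the function names and data shapes are assumptions, not interfaces defined by this specification.

```python
# Hedged sketch of the repair-signature flow: a signature lists faulty
# locations; a non-empty signature activates the redundancy feature, which
# loads repair data and shifts it to the redundant sector. All names here
# are illustrative assumptions, not the specification's interfaces.

def generate_repair_signature(main_sector, expected):
    """Return the indices at which the main sector disagrees with the
    expected data (i.e., the one or more detected faults)."""
    return [i for i, (got, want) in enumerate(zip(main_sector, expected)) if got != want]

def apply_redundancy(main_sector, expected, redundant_sector):
    signature = generate_repair_signature(main_sector, expected)
    if signature:                       # redundancy feature activated
        for i in signature:             # repair data loaded and shifted
            redundant_sector[i] = expected[i]
    return signature

main = [1, 0, 1, 1]
redundant = [0, 0, 0, 0]
print(apply_redundancy(main, [1, 1, 1, 1], redundant))  # -> [1]
```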
  • Implementing the disclosed system and techniques, or a related design, allows for a reduction in the overall circuit footprint for a One-Time Programmable (OTP) memory.
  • The disclosed techniques also result in a lower physical system footprint for repair infrastructure.
  • The reduced repair chain length of this system can result in improved system boot-up times or repair register reloading times, as well as the ability to use an example "FAST Loading" feature of a given memory macro.
  • The reduced boot time also allows the direct repair of memory addresses needed for the boot process, which would otherwise be unrepairable. Also achieved is a reduction in the maximum chain length and the total number of repair bits required.
  • Because the CBL and VMWs in this example are configured to work with existing serial repair interfaces, the solution does not require any changes to external hardware or software to implement (e.g., it is vendor independent).
  • FIG. 1 illustrates an example SoC.
  • FIG. 2 illustrates an example system including a custom BISR loader (CBL) and a virtual memory wrapper (VMW).
  • FIG. 3 illustrates an example system that incorporates a CBL and a collection of VMWs into an example IP block.
  • FIG. 4 illustrates a detailed view of an example CBL.
  • FIG. 5 illustrates a detailed view of an example PPM.
  • FIG. 6 illustrates an example process of providing repair data using the techniques of this specification.
  • FIG. 1 is a block diagram of an example computing system 100 that includes a system-on-chip 102 (“SoC 102”).
  • In some implementations, the SoC 102 includes multiple SoCs.
  • The SoC 102 includes a central processing unit 104 ("CPU 104"), a shared memory 106 ("memory 106"), a repair data controller 108, and an IP/circuit block 110.
  • The CPU 104 can be a general-purpose CPU (e.g., a single-core or multi-core CPU).
  • In some examples, the CPU 104 generates one or more indicators, such as an app-launch indicator or a function call that is triggered in response to executing or launching an application at a user device. For example, the application can be a camera application that uses an imaging sensor to generate image data or a gaming application that requires substantial memory and graphics processing resources to render graphical content of the game.
  • The CPU 104 also generates one or more application values, such as pixel values or frame rate. The application values may be associated with a function call, may be descriptive of an event that occurs during execution of the application, or both.
  • The memory 106 is a system memory, a shared memory, or both. In the example of FIG. 1, memory 106 is depicted external to circuit block 110. However, memory 106 can include portions of memory that are: i) specific to circuit block 110, ii) external to circuit block 110, or iii) both.
  • The memory 106 can be a random access memory of the SoC 102, such as a dynamic random access memory (DRAM), a synchronous DRAM (SDRAM), or a double data rate (DDR) SDRAM.
  • In some implementations, aspects of memory 106 are configured as a shared scratchpad memory that supports parallel access of its memory resources by two or more processors of the circuit block 110.
  • The memory 106 can also include various other types of memory, such as high bandwidth memory (HBM), narrow memory (e.g., for storing 8-bit values), wide memory (e.g., for storing 16-bit or 32-bit values), etc.
  • The repair data controller 108 is implemented in hardware and software. Aspects of the repair data controller 108 can also be implemented as firmware of the SoC 102 or firmware of a device of the SoC 102, such as the CPU 104.
  • The repair data controller 108 includes a virtual memory wrapper 120 and a custom BISR loader 122 implemented in hardware and software. Each of the virtual memory wrapper 120 and the custom BISR loader 122 can include memory resources (e.g., flip-flops, registers, buffers, etc.) that are implemented in hardware and control logic (e.g., programmed code) that is implemented in software.
  • The virtual memory wrapper 120 and custom BISR loader 122 are described in more detail below with reference to the example of FIG. 2.
  • The repair data controller 108 is configured to enable parallel repair data functions for a memory macro having a serial repair interface, and to generate control signaling and associated data values that are used to provide repair data to memory circuits in devices of the system 100.
  • The devices can include processors, processor cores, or special-purpose processing devices, such as individual IP devices of the circuit block 110.
  • The circuit block 110 can include an image signal processor (ISP) 112, a tensor processing unit (TPU) 114, a digital signal processor (DSP) 116, and a graphics processing unit (GPU) 118.
  • The circuit block 110 is alternatively referred to as an IP block 110, where the IP block can include one or more proprietary hardware elements.
  • For example, each of the ISP 112, TPU 114, DSP 116, and GPU 118 can be a respective proprietary IP block (or IP device) of a particular entity or device manufacturer.
  • The repair data controller 108 uses operations performed by the virtual memory wrapper 120 and custom BISR loader 122 to dynamically control, execute, and/or manage repair operations at the SoC 102 in support of heterogeneous compute operations that involve executing an application at a device that includes the SoC 102. More specifically, the repair data controller 108 is configured to generate control signaling 124 and use one or more discrete signal values of the control signaling 124 to arrange and synchronize repair data operations across memory circuits of system 100, such as memory circuits in the processing units that are included among the IP block 110, the CPU 104, or both. In some implementations, each processor (e.g., ISP 112, TPU 114, DSP 116, GPU 118, or CPU 104) of the SoC 102 includes multiple cores, and the repair data controller 108 can generate control signaling 124 to synchronize or otherwise manage repair data operations at each core of the processors.
  • In some implementations, the system 100, including its SoC 102, is an integrated circuit of an example user/client device, consumer electronic device, or mobile device.
  • Each of these devices can be an item such as a smartphone 130a, tablet 130b, laptop 130c, or smartwatch or wearable device 130d.
  • The devices may also include other items such as an eNotebook, netbook, smart speaker, or mobile computer.
  • In other implementations, the system 100, including its SoC 102, is an integrated circuit of a desktop computer, network server, or related cloud-based asset.
  • FIG. 2 illustrates an example repair data controller 108 that includes a custom BISR loader (CBL) 122 and a virtual memory wrapper (VMW) 120.
  • CBL custom BISR loader
  • VMW virtual memory wrapper
  • BISR registers 224 are also included in the repair data controller 108.
  • The FIG. 2 illustration of the BISR registers 224 of the repair data controller 108 is a representative example.
  • In some examples, BISR registers 224 are components added to the SoC 102 by a third-party vendor. For example, the BISR registers 224 may be configured based on vendor-specific design preferences.
  • A signal is transmitted to the CBLs 122 of the repair data controller 108 included on the SoC 102. In some examples, this signal is a bisr_done signal 202 transmitted by a BISR controller (not illustrated).
  • The bisr_done signal 202 includes a repair signature that indicates faults in a memory structure of the SoC 102 or an IP block 110 within the SoC 102.
  • Each CBL 122 in the SoC 102 generates (or passes) a clock signal (bisr_clk) 212 based on a bisr_done signal 202 received from the BISR controller.
  • The bisr_clk signal 212 is supplied as a clock signal to a CLK input of the PPM 226 in a VMW 120. More specifically, the bisr_clk signal 212 is used to control and/or trigger a serial shifting of repair data from BISR registers 224 to memory structure 208 to implement repair operations at the memory structure.
  • For example, a first pulse of the bisr_clk signal 212 causes the PPM 226 to obtain or receive repair data from its associated BISR register 224.
  • The repair data is transferred to the redundant memory addresses 213 of the PPM 226 via a parallel interface (e.g., parallel BISR ports 214) that provides a communication interface between the BISR register 224 and the PPM 226.
  • The CBL 122 references the repair signature included in the bisr_done signal 202.
  • The CBL 122 sends an enable signal 210 (bisr_load_en) to the PPM 226, which receives this signal as serial_shift 215.
  • The PPM 226 then shifts in (bisr_SI) 207 one iteration of repair data from the redundant memory addresses 213 to the main sector memory addresses 209.
  • The PPM 226 can be configured to repeat this process for each pulse of bisr_clk 212.
  • For example, the PPM 226 can execute a sequential shift operation to shift respective iterations of repair data to corresponding address locations of memory structure 208. This iterative shifting of repair data can be repeated until data at the main sector addresses 209 matches the corresponding redundant memory address 213 (e.g., redundant data for a given I/O map entry), with a shift out 211 of corrupted data (bisr_SO).
  • A counter within the PPM 226 is also incremented for each iterative shift of repair data via repair data signal 207. When the counter reaches a preset limit, the CBL 122 pauses or disables further shift-in operations.
  • In some implementations, this preset limit is a constant (e.g., "N" clock cycles or pulses) that is set based on the largest number of memory addresses 209 expected within the memory structures 208 and VMWs 120 serviced by the CBL 122 (e.g., an I/O map size in a particular memory instance or structure).
  • For example, a particular memory structure 208 can contain 1,500 unique memory addresses 209, representing the largest number of unique memory addresses across 100 different memory structures 208.
  • In this example, the value of N would be set to 1,500, such that a shift-in operation performed via the repair data signal 207 is conducted for each of the 1,500 memory addresses 209.
  • Likewise, the shared counter value of N would result in complete repairs for the other 100 memory structures 208 and VMWs 120, which comparatively have a smaller number of unique memory addresses 209.
  • In some implementations, the value of N is set based on the largest expected number of unique memory addresses 209 in the IP block 110 or SoC 102.
  • In other implementations, the preset limit used by the CBL 122 to pause or disable shift-in operations via signal path 207 can be based on other values of N, different signals received by the CBL 122, or both.
  • Because each memory structure 208 within the IP block 110 may have a differing number of memory addresses 209, the number of required repair operations is potentially different across VMWs 120.
  • However, only one bisr_load_en signal 210 is generated by the CBL 122 and sent to each VMW 120.
  • To compensate, each PPM 226 within each VMW 120 can configure a dummy register based on the largest expected number of memory addresses 209 across all VMWs 120 serviced by the CBL 122 (e.g., a maximum I/O map size in a particular memory instance or structure).
  • The dummy register size can be statically or dynamically configured.
  • For example, a value of N can be assigned based on the largest number of memory addresses 209 expected in a particular IP block 110, as in the sketch below.
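As a rough illustration of this sizing rule, consider the Python sketch below. It is a simplified, assumption-based model, not the disclosed circuitry; the value 1,500 is borrowed from the example above.

```python
# Illustrative sketch (assumptions, not the disclosed implementation):
# every PPM served by one CBL sees the same N shift cycles; PPMs with
# smaller I/O maps pad the difference with dummy-register shifts.

N = 1500  # largest number of unique memory addresses served by this CBL

def shift_plan(io_map_size, n=N):
    """Return (repair shifts, dummy shifts) for one PPM over n cycles."""
    return io_map_size, n - io_map_size

for size in (1500, 900, 64):
    real, dummy = shift_plan(size)
    assert real + dummy == N  # a constant shift cycle across all VMWs
    print(f"I/O map of {size}: {real} repair shifts, {dummy} dummy shifts")
```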
  • The VMW 120 and the PPM 226 can be implemented in hardware, software, or both.
  • In some examples, the VMW 120 is implemented in software while the PPM 226 is implemented in hardware.
  • Different hardware and software arrangements for the VMW 120 and/or the PPM 226 are within the scope of this disclosure.
  • In some implementations, a reset signal 206 is sent to the BISR registers 224, the PPM 226, the memory structures 208, or a combination of these components.
  • FIG. 3 illustrates an example system 300 that incorporates a CBL 122 and a collection of VMWs 120 into an example IP block 110. Additionally, example system 300 includes a "fast loading" data stream 306, a power management unit (PMU) 310, and a collection of multiplexers (MUXs) 312.
  • In some implementations, each IP block 110 contains multiple memory structures, each with an associated VMW 120.
  • The bisr_load_en 210 and bisr_clk 212 signals originate in a shared CBL 122 that transmits them to each of the VMWs 120 in the IP block 110.
  • A shared BISR shift-in signal 302 is sent from a BISR controller (not illustrated) to each of the BISR registers 224 to transfer repair data to each of the PPMs 226 within the VMWs 120.
  • A common data shift-out 304 is then returned to the BISR controller from the last BISR register 224 in the series.
  • In some examples, BISR registers 224 and BISR controllers can include a "fast loading" mode where additional repair data is transmitted to each of the BISR registers 224 on the SoC 102 or IP block 110 in a fast-loading data stream 306.
  • This fast-loading data stream 306 is provided by the BISR controller, a one-time programmable (OTP) memory, or another component external to the SoC 102 or IP block 110. In some examples, utilizing a fast-loading mode is desirable to achieve faster SoC boot-up times.
  • The fast-loading repair data stream 306 is combined in a multiplexer 312 with the repair data received from the BISR register 224. The combined data stream is then provided to the associated PPM 226 within the VMW 120, as in the sketch below.
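One plausible reading of this multiplexing step is sketched below in Python; the select-line behavior ("fast_load") is an assumption for illustration, as the specification does not detail the MUX control.

```python
# Minimal sketch (assumed behavior) of multiplexer 312: select between the
# fast-loading stream and the BISR register as the repair-data source for
# the PPM. The 'fast_load' select line is a hypothetical name.

def repair_data_mux(bisr_register_bit, fast_stream_bit, fast_load):
    """Drive the PPM input from the fast-loading stream when fast-loading
    mode is enabled; otherwise pass the BISR register data through."""
    return fast_stream_bit if fast_load else bisr_register_bit

assert repair_data_mux(0, 1, fast_load=True) == 1   # fast-loading mode
assert repair_data_mux(0, 1, fast_load=False) == 0  # normal BISR path
```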
  • When the repair operation by the CBL 122 is completed, the CBL sends a completion signal (bisr_done) 308 to other components within the SoC 102.
  • In some examples, the bisr_done signal 308 is sent to a PMU 310 within the SoC 102.
  • When the PMU 310 receives the bisr_done signal 308, it performs various operations, for example, enabling the SoC 102 or other components to utilize the newly repaired memory structures.
  • FIG. 4 illustrates a detailed view 400 of an example CBL 122.
  • The example CBL 122 includes clock gates 404 and 414, a counter 408, sticky flops 402 and 406, comparators 410 and 412, and various other logic gates and circuitry. While certain logic gates are pictured, other logic gate combinations are possible. In some implementations, the logic gates and circuitry of the CBL 122 are implemented using digital circuits that process discrete signals, e.g., corresponding to discrete voltage and current values. For these implementations, references to "low" or "high" values and/or signals can correspond to binary or bit values of "0" and "1," respectively.
  • The CBL 122 receives a clock signal 401 from an external source, for example, at a clock frequency maintained by a master clock of the SoC 102.
  • In some implementations, this external clock signal 401 is ultimately used to generate the bisr_clk signal 212 of the CBL 122.
  • The clock signal 401 is received by the CBL 122 at a sticky flop 402 and clock gate 404.
  • In other implementations, the clock signal used to generate bisr_clk 212 originates within the CBL 122.
  • When the CBL 122 receives the bisr_done signal 202 from the BISR controller, this signal is received by sticky flop 402.
  • Receiving the bisr_done signal 202 causes the sticky flop 402 to enable the clock gate 404 and the counter 408.
  • To maintain the enabled state, the enable output signal of sticky flop 402 is returned to the input of sticky flop 402 through OR gate 403.
  • Upon being enabled by sticky flop 402, clock gate 404 sends a clock signal to clock gate 414, counter 408, and sticky flop 406.
  • Upon being enabled by sticky flop 402, counter 408 starts/initiates a count sequence and generates a count/counter signal.
  • This counter signal is transmitted from counter 408 to two comparators 410 and 412.
  • One comparator 410 is set to a counter value of "0," such that a high value is passed when the counter is not enabled or at a zero value.
  • This high signal is inverted by an inverter 411 (NOT gate) such that the resulting value of bisr_load_en 210 for a counter value of zero is low.
  • The second comparator 412 is set to a preset value (e.g., N as described with reference to FIG. 2) and passes a low value for a counter value below this setpoint.
  • This low value is inverted by NOT gate 409 and provided to OR gate 413, which additionally accepts the output from comparator 410. If either input to OR gate 413 is high (e.g., at a non-zero counter value below the setpoint), OR gate 413 passes a high signal and enables clock gate 414.
  • Upon the counter 408 incrementing to a non-zero value, the output of comparator 410 shifts to a low value, which is then inverted to a high value by NOT gate 411. This high value is then transmitted from the CBL 122 as bisr_load_en 210, enabling downstream components as discussed above in FIGS. 2 and 3. Additionally, when comparator 410 passes a high value (i.e., at a zero counter value), an enable signal is passed to clock gate 414.
  • The output of NOT gate 409 is provided to AND gate 407, which also receives input from sticky flop 402. While the counter is below the maximum setpoint of comparator 412, NOT gate 409 passes a high signal to AND gate 407 (which additionally receives a high signal from the enabled sticky flop 402). Upon the count reaching the setpoint value, comparator 412 triggers a low output from NOT gate 409, and AND gate 407 shifts to passing a low value, which disables the counter 408. In summary, when the counter 408 reaches the setpoint value, the logic within the CBL 122 disables the counter 408 so that it ceases to increment.
  • With each pulse of the gated clock, the counter 408 increments the count value. Upon the count reaching the setpoint, the second comparator 412 passes a high value, which is then inverted by NOT gate 409 to a low value.
  • This low value is received by OR gate 413 (which is already in receipt of a low value from comparator 410 for a non-zero count value), which then passes a low value and disables clock gate 414.
  • While clock gate 414 is enabled by the signal from OR gate 413 (e.g., at counter values below the setpoint), clock gate 414 passes the clock signal 401 received from clock gate 404 as bisr_clk 212 to the downstream components as discussed above in FIGS. 2 and 3. Upon being disabled by OR gate 413, clock gate 414 ceases passing the bisr_clk signal 212.
  • The CBL 122 also receives a reset signal 206 from an external source.
  • This reset signal 206 is received by NOT gate 405, which inverts any high reset signal 206 to a low value.
  • The low output of NOT gate 405 is received by sticky flop 406, which in response sends a low value to both AND gates 407 and 409.
  • When AND gate 407 receives a low signal from sticky flop 406, it passes a low signal to counter 408 and disables the counter.
  • When AND gate 409 receives a low signal from sticky flop 406, a reset signal is passed to counter 408, which resets the count to a zero value.
  • While certain logical operations are described above, these are simply representative examples of how the functionality of the CBL 122 is implemented in circuitry. Other circuitry configurations within the scope of this disclosure that achieve the same or similar functionality of the CBL 122 are possible. Moreover, changes in the functionality of the CBL 122 in the different examples within this specification may require different implementations of the circuitry and logic described above.
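A behavioral Python model of this sequencing is sketched below. It is a simplified, assumption-laden abstraction of the gate-level description above (one tick per external clock edge), not the disclosed circuit itself.

```python
# Behavioral sketch of the CBL: after bisr_done arrives, a counter runs
# from 0 up to a preset N; bisr_clk pulses are gated while the count is
# below N, and bisr_load_en is high only for non-zero counts. Tick-level
# timing is an assumption made for readability.

class CBLModel:
    def __init__(self, n):
        self.n = n               # setpoint of the second comparator (412)
        self.count = 0           # counter 408
        self.enabled = False     # sticky flop 402

    def on_bisr_done(self):
        self.enabled = True      # sticky: stays set until reset

    def tick(self):
        """One external clock edge; returns (bisr_clk pulse, bisr_load_en)."""
        if not self.enabled or self.count >= self.n:
            return (False, False)     # OR gate 413 low: clock gate 414 closed
        load_en = self.count != 0     # comparator 410 inverted by NOT gate 411
        self.count += 1               # counter increments on the gated clock
        return (True, load_en)

    def reset(self):                  # reset signal 206
        self.count = 0
        self.enabled = False

cbl = CBLModel(n=3)
cbl.on_bisr_done()
print([cbl.tick() for _ in range(5)])
# -> [(True, False), (True, True), (True, True), (False, False), (False, False)]
```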
  • FIG. 5 illustrates a detailed view of an example PPM 226.
  • The example PPM 226 includes a BISR 510 and a dummy register 520, each of which contains sticky flops and multiplexers.
  • Because the maximum value of the counter is based on the highest expected number of unique memory addresses 209, and one CBL 122 may service many different IP blocks 110 within the SoC 102, there may be instances where other memory structures 208 serviced by the CBL 122 have a lower number of memory addresses 209 (e.g., a smaller I/O map size) than the maximum value of the counter of the CBL 122 (e.g., the value of N).
  • To compensate, each PPM 226 includes a dummy register 520 that contains additional sticky flops 522.
  • This arrangement is merely representative, and other examples of the dummy register operation can include more (or fewer) data addresses and sticky flops 522.
  • For example, a dummy register can include significantly more data addresses and sticky flops 522, such as hundreds or thousands.
  • The multiplexers 524 of the dummy register also receive an input from a tie-0 value 526.
  • When conducting a repair operation, the PPM 226 is first loaded with repair data via a parallel interface. In some examples, this is performed with parallel BISR ports 214, which transmit each instance of repair data to an associated multiplexer 504 and sticky flop 502.
  • BISRs can have different numbers of multiplexers 504 (e.g., multiplexers 504a through 504n) and sticky flops 502 (e.g., sticky flops 502a through 502n).
  • During the shift phase, each sticky flop 502 serially shifts in 211 the repair data through its associated multiplexer 504 to the next sticky flop 502 in the series, as in the sketch below.
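The two phases described above (a parallel load, then serial shifting padded by the dummy register) can be modeled with the simplified Python sketch below; the data layout and shift direction are assumptions, not the disclosed RTL.

```python
# Simplified PPM sketch: phase 1 loads repair bits into the sticky-flop
# chain in parallel; phase 2 shifts one bit toward memory per clock, with
# dummy flops (fed by a tie-0 value) padding shorter I/O maps to N cycles.

class PseudoParallelMap:
    def __init__(self, io_map_size, max_size):
        self.flops = [0] * io_map_size               # sticky flops 502
        self.dummy = [0] * (max_size - io_map_size)  # dummy register flops 522

    def parallel_load(self, repair_bits):
        """Each multiplexer selects its parallel BISR port, so every flop
        captures its repair bit in a single cycle."""
        self.flops = list(repair_bits)

    def serial_shift(self):
        """Each multiplexer selects its neighbor; one bit leaves the chain
        per clock pulse, and the tie-0 value feeds the head of the chain."""
        chain = self.dummy + self.flops
        out = chain.pop()            # bit presented to the memory structure
        chain.insert(0, 0)           # tie-0 input at the head
        self.dummy = chain[:len(self.dummy)]
        self.flops = chain[len(self.dummy):]
        return out

ppm = PseudoParallelMap(io_map_size=3, max_size=5)
ppm.parallel_load([1, 0, 1])
print([ppm.serial_shift() for _ in range(5)])  # -> [1, 0, 1, 0, 0]
```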
  • FIG. 6 illustrates an example process 600 of providing repair data using the techniques of this specification.
  • The process 600 includes loading repair data, using a first interface, to a pseudo parallel mapping (PPM) circuit coupled to a memory circuit (610).
  • In some examples, this first interface includes a set of parallel BISR ports 214 that receive repair data from BISR registers 224 (like those discussed above with reference to FIG. 2).
  • The PPM 226 contains multiple sticky flops and multiplexers that correspond to the number of unique memory addresses in the memory circuit that will receive repair data.
  • The process 600 includes generating, at a repair data controller, control signals that control shifting the repair data from the pseudo parallel mapping (PPM) circuit to the memory circuit (620).
  • In some examples, the repair data controller is a custom BISR loader (CBL) 122 that generates shift-in commands using a counter.
  • The limit of the counter of the CBL 122 is based on the largest number of unique memory addresses across all memory circuits serviced by the CBL 122.
  • The process 600 includes shifting, using a second interface, the repair data from the pseudo parallel mapping (PPM) circuit to redundant sectors of the memory circuit based on multiple input/output (I/O) maps of the memory circuit (630).
  • In some examples, the second interface is the connection between the PPM 226 and the memory structure 208 that allows the transmission of repair data. A compact end-to-end sketch of process 600 follows.
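Tying the steps together, the end-to-end flow of process 600 can be sketched as below, reusing the CBLModel and PseudoParallelMap sketches above; the composition is an illustrative assumption, not the disclosed design.

```python
# End-to-end sketch of process 600: (610) parallel load through the first
# interface, (620) control signals generated by the repair data controller,
# (630) serial shifting through the second interface. Reuses the CBLModel
# and PseudoParallelMap classes from the sketches above.

def process_600(repair_bits, io_map_size, n):
    ppm = PseudoParallelMap(io_map_size=io_map_size, max_size=n)
    ppm.parallel_load(repair_bits)          # step 610: first (parallel) interface
    cbl = CBLModel(n=n)
    cbl.on_bisr_done()                      # step 620: control-signal generation
    shifted = []
    while True:
        pulse, _load_en = cbl.tick()
        if not pulse:                       # clock gated off after n cycles
            break
        shifted.append(ppm.serial_shift())  # step 630: second (serial) interface
    return shifted                          # bits delivered toward redundant sectors

print(process_600([1, 0, 1], io_map_size=3, n=5))  # -> [1, 0, 1, 0, 0]
```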
  • Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
  • Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus.
  • The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
  • Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
  • The term "data processing apparatus" refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • The apparatus can also be, or further include, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
  • The apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • A computer program (which may also be referred to or described as a program, software, a software application, an app, a module, a software module, a script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • A program may, but need not, correspond to a file in a file system.
  • A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code.
  • A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a data communication network.
  • For a system of one or more computers to be configured to perform particular operations or actions means that the system has installed on it software, firmware, hardware, or a combination of them that in operation cause the system to perform the operations or actions.
  • For one or more computer programs to be configured to perform particular operations or actions means that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform the operations or actions.
  • As used in this specification, an "engine," or "software engine," refers to a software-implemented input/output system that provides an output that is different from the input.
  • An engine can be an encoded block of functionality, such as a library, a platform, a software development kit ("SDK"), or an object.
  • Each engine can be implemented on any appropriate type of computing device, e.g., servers, mobile phones, tablet computers, notebook computers, music players, e-book readers, laptop or desktop computers, PDAs, smart phones, or other stationary or portable devices, that includes one or more processors and computer readable media. Additionally, two or more of the engines may be implemented on the same computing device, or on different computing devices.
  • The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output.
  • The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA or an ASIC, or by a combination of special purpose logic circuitry and one or more programmed computers.
  • Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit.
  • Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both.
  • The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data.
  • The central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
  • However, a computer need not have such devices.
  • Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.
  • Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user, and a keyboard and pointing device, e.g., a mouse, trackball, or a presence-sensitive display or other surface by which the user can provide input to the computer.
  • Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's device in response to requests received from the web browser.
  • Also, a computer can interact with a user by sending text messages or other forms of message to a personal device, e.g., a smartphone, running a messaging application, and receiving responsive messages from the user in return.
  • Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface, a web browser, or an app through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components.
  • The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.
  • The computing system can include clients and servers.
  • A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • In some embodiments, a server transmits data, e.g., an HTML page, to a user device, e.g., for purposes of displaying data to and receiving user input from a user interacting with the device, which acts as a client.
  • Data generated at the user device, e.g., a result of the user interaction, can be received at the server from the device.

Landscapes

  • For Increasing The Reliability Of Semiconductor Memories (AREA)
  • Information Transfer Systems (AREA)
  • Hardware Redundancy (AREA)

Abstract

Systems, methods, and media for repair sharing in serial repair interfaces. The techniques include: loading, using a first interface, repair data to a pseudo parallel mapping circuit coupled to a memory circuit; generating, by a repair data controller, control signals that control shifting the repair data from the pseudo parallel mapping circuit to the memory circuit; and shifting, using a second interface, the repair data from the pseudo parallel mapping circuit to redundant sectors of the memory circuit based on a plurality of input/output (I/O) maps of the memory circuit.

Description

REPAIR DATA CONTROLLER FOR MEMORIES WITH SERIAL REPAIR INTERFACES
BACKGROUND
This specification relates to devices having the capability to perform memory repair sharing, specifically in components that operate via a serial repair interface.
Memory repair is necessary to correct faults contained within memory structures, which can be present from manufacturing defects or can develop over time in operation. The repair interface used by any given device is dependent on many factors and can differ between manufacturers and designers.
Repair interfaces are available in both parallel and serial formats. In a parallel repair interface, multiple spare memory structures can be exchanged directly for faulty address data. This can be performed in various ways, including a column-based exchange, a row-based exchange, or a combination of both. In a serial repair interface, there are multiple unique addresses for each memory structure that must be sequenced correctly to effectuate a repair.
SUMMARY
This specification describes a hardware controller and parallel mapping circuit for providing repair data to a memory circuit at an IP block (or device) of a system-on-chip (“SoC”).
In general, a memory circuit includes memory macros that can have redundancy or repair features. For example, a 1-kilobyte (KB) memory array includes a main sector and a spare (or redundant) sector. Each of the main sector and the spare sector can include multiple rows and banks. The main sector can include multiple main rows and main banks, whereas the spare/redundant sector can include one or more spare rows and one or more spare banks. The memory macro has a redundancy feature that allows for repair data to be written to a spare row in response to detecting a fault associated with a main row. The redundancy feature of a memory macro is activated and controlled using an interface of the memory. For example, the interface may be a serial repair interface or a parallel repair interface. A minimal sketch of this redundancy feature follows.
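To illustrate the redundancy feature just described, here is a minimal Python sketch of a memory macro that remaps a faulty main row to a spare row; the class and method names are assumptions for illustration, not structures defined by this specification.

```python
# Hypothetical sketch: a memory macro with a main sector and a spare
# sector. Writing repair data to a spare row remaps accesses from the
# faulty main row. All names are illustrative assumptions.

class MemoryMacro:
    def __init__(self, main_rows, spare_rows, width):
        self.main = [[0] * width for _ in range(main_rows)]
        self.spare = [[0] * width for _ in range(spare_rows)]
        self.remap = {}        # faulty main row index -> spare row index
        self.next_spare = 0

    def repair_row(self, faulty_row, repair_data):
        """Redundancy feature: write repair data to a spare row and remap
        accesses from the faulty main row to it."""
        self.spare[self.next_spare] = list(repair_data)
        self.remap[faulty_row] = self.next_spare
        self.next_spare += 1

    def read(self, row):
        # Reads of a repaired row are served by the spare sector.
        return self.spare[self.remap[row]] if row in self.remap else self.main[row]

macro = MemoryMacro(main_rows=8, spare_rows=2, width=4)
macro.repair_row(3, [1, 0, 1, 1])       # fault detected at main row 3
assert macro.read(3) == [1, 0, 1, 1]
```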
In a serial repair interface, locations or sectors of a memory structure are programmed and/or repaired serially, for example, by sequentially traversing memory addresses for the corresponding memory locations. This repair or programming is performed via a serial repair interface, which controls the selection of addresses that identify memory locations of a redundant memory sector that receives repair data when a fault is detected at the corresponding location of a main memory sector. Repairing memory using a serial repair interface usually occurs one memory structure at a time, with the serial interface repairing each address of the memory structure in succession. Relative to parallel repair interfaces, repair sharing in serial repair interfaces typically requires different repair processes, at least because of the serial or sequential way in which repair operations are performed on the individual addresses of each memory structure. Additionally, parallel repair interfaces allow for concurrent repair operations that offer certain benefits and advantages in speed and/or efficiency over serial repair interfaces.
The hardware controller and parallel mapping circuit of the disclosed techniques can be used to implement repair sharing via a serial repair interface to realize some (or all) of the repair speed and efficiency advantages normally associated with parallel repair formats. For example, these techniques can be used to reduce boot times, reduce an overall circuit footprint of an SoC, and provide additional benefits to the system, such as improved latency and response times when launching or executing an application at a user/mobile device that includes the SoC.
An example solution of a repair data controller (RDC) for memories with a serial repair interface utilizes built-in self-repair (BISR) circuitry and associated logic. In general, BISR methods are used to improve the design yield of memory macros by replacing faulty elements or cells of a memory with spare ones. For example, BISR circuitry can be configured to perform a repair operation by obtaining and transferring repair data to a spare memory cell when a chip that includes the memory is powered up. The BISR circuitry can include one or more custom BISR loaders (CBLs) and Virtual Memory Wrappers (VMWs). In some examples, one CBL is assigned to each BISR chain within the SoC. The BISR chains within the SoC may be organized in different ways. For example, one chain or multiple chains may be assigned to each device or IP block within the SoC. Additionally, in some examples, one or more VMWs can be assigned to each device or IP block. For example, the SoC can include a Tensor Processing Unit (TPU) that is assigned five VMWs for its different memory structures, whereas a Digital Signal Processor (DSP) on the SoC may only be assigned a single VMW for its memory structure(s). Other combinations and quantities of VMWs to memory structures in a device or IP block are also within the scope of this disclosure.
The hardware controller and parallel mapping circuit can be used to execute repair operations during an example boot sequence. In some examples, once a boot-up process occurs on the SoC, a BISR loading operation is completed by a BISR controller at the SoC level. The BISR controller is configured to generate a completion signal in response to completing a BISR loading operation to load a subset of repair data for implementing repair operations. The completion signal can be represented as a bisr_done parameter and may be referenced within this disclosure as a bisr_done signal. This bisr_done signal is then provided to each CBL in each device or IP block within the SoC, for example, by the BISR controller. Each CBL initiates a clock (e.g., a local clock) in response to receiving the bisr_done signal from the BISR controller. For example, the CBL can initiate the clock concurrent with detection of the bisr_done signal or shortly thereafter. In some examples, this clock signal increments based on a clock frequency associated with the SoC. Other examples use other sources of timing.
Each CBL is configured to generate a load enable signal based on a corresponding clock signal. The corresponding clock signal may be locally generated, passed from a higher-level SoC processor, or both. The CBL can iteratively increment the bisr_load enable signal based on a clock frequency of its corresponding clock signal and iteratively pass a respective instance of an incremented load enable signal to a corresponding VMW in its BISR chain. In some examples, this load enable signal is an incrementing bisr_load signal. Each VMW conducts its repair operation for one or more addresses of a memory based on its corresponding bisr_load signal. For example, a memory address is selected and the VMW is used to apply repair data in response to detecting an incremented bisr_load signal from the CBL. For example, a bisr_load signal of N=1 can cause the VMW to select and repair memory address "IOMap[1]," while a bisr_load signal of N=2 causes the VMW to select and repair "IOMap[2]." This process continues until all memory addresses have been sequentially repaired across the entire memory structure. Once all memory addresses associated with each VMW have been repaired, the clock signal associated with the CBL is disabled and the bisr_load signal is reset for the next boot process. A detailed description of this process is presented later in this specification, and a brief sketch follows.
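As a brief sketch of this sequencing (the class and mapping below are illustrative assumptions, not the disclosed design):

```python
# Hedged sketch: a VMW repairs one I/O-map entry per increment of the
# incoming bisr_load count, so count n selects "IOMap[n]" in the text's
# notation. Names and data shapes are assumptions.

class VMWSketch:
    def __init__(self, io_map, repair_data):
        self.io_map = io_map            # addresses in serial repair order
        self.repair_data = repair_data  # address -> repair value
        self.repaired = {}

    def on_bisr_load(self, n):
        """A bisr_load count of n selects and repairs the n-th address."""
        if 1 <= n <= len(self.io_map):
            addr = self.io_map[n - 1]
            self.repaired[addr] = self.repair_data.get(addr, 0)

vmw = VMWSketch(["IOMap[1]", "IOMap[2]"], {"IOMap[1]": 7})
for n in (1, 2):                        # the CBL increments bisr_load
    vmw.on_bisr_load(n)
print(vmw.repaired)                     # -> {'IOMap[1]': 7, 'IOMap[2]': 0}
```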
To perform repair operations across the SoC, each VMW contains at least one BISR and a Pseudo Parallel Map (PPM) that receives the bisr_load signal and maps the repair operation to a memory address. Because each memory structure within the IP block may have a differing number of memory addresses, the number of required repair operations is potentially different across VMWs. To compensate for differentiation in the number of memory addresses across IP blocks, each PPM within each VMW includes a dummy register. A size of the dummy register is determined and/or configured based on a threshold amount of memory addresses across all VMWs serviced by the CBL, where the threshold amount can be defined or indicated based on an integer value, N. For example, a value of N can be assigned to the largest amount of memory addresses expected in a particular IP block. In this example, a VMW in a different IP block with a BISR chain length of N-7 memory addresses can use the PPM to generate a dummy register of D=7 such that the VMW can accept the maximum value ("N") of the bisr_load signal. A detailed description of the operation of the VMW is presented later in this specification.
In general, innovative aspects of the subject matter described in this specification can be implemented in a method for providing repair data to a memory circuit, the method including: loading, using a first interface, repair data to a pseudo parallel mapping circuit coupled to the memory circuit; generating, by a repair data controller, control signals that control shifting the repair data from the pseudo parallel mapping circuit to the memory circuit; and shifting, using a second interface, the repair data from the pseudo parallel mapping circuit to redundant sectors of the memory circuit based on multiple input/output (I/O) maps of the memory circuit.
Other implementations include a system for providing repair data to a memory circuit, the system including: a processing device; and a non-transitory machine-readable storage device storing instructions that are executable by the processing device to cause performance of operations including: loading, using a first interface, repair data to a pseudo parallel mapping circuit coupled to the memory circuit; generating, by a repair data controller, control signals that control shifting the repair data from the pseudo parallel mapping circuit to the memory circuit; and shifting, using a second interface, the repair data from the pseudo parallel mapping circuit to redundant sectors of the memory circuit based on multiple input/output (I/O) maps of the memory circuit.
Other implementations include a non-transitory machine-readable storage device storing instructions used to provide repair data to a memory circuit, the instructions being executable by a processing device to cause performance of operations including: loading, using a first interface, repair data to a pseudo parallel mapping circuit coupled to the memory circuit; generating, by a repair data controller, control signals that control shifting the repair data from the pseudo parallel mapping circuit to the memory circuit; and shifting, using a second interface, the repair data from the pseudo parallel mapping circuit to redundant sectors of the memory circuit based on multiple input/output (I/O) maps of the memory circuit. The foregoing and other implementations can each optionally include one or more of the following features, alone or in combination.
In some examples, the loading of the repair data includes: loading, using the first interface, respective portions of the repair data in parallel to a built-in self-repair register of the pseudo parallel mapping circuit.
In some examples, the shifting the repair data includes: serially shifting, using the second interface, respective portions of the repair data from the built-in self-repair register of the pseudo parallel mapping circuit to the redundant sectors of the memory circuit.
In some examples, shifting the repair data includes: shifting, by the first register, a respective portion of the repair data to a redundant sector of the memory circuit that corresponds to a main sector of the memory circuit; where the respective portion of repair data is shifted to the redundant sector based on a fault at the corresponding main sector.
Some examples additionally include: generating the control signals includes generating multiple clock signals; and shifting the repair data includes shifting the repair data based on the multiple clock signals, such that respective portions of the repair data are shifted serially over the multiple clock cycles using the second interface.
In some examples, the pseudo parallel mapping circuit includes a dummy register and shifting the repair data includes: performing one or more serial shifts using the dummy register when a size of a corresponding I/O map is less than a size of the dummy register.
In some examples, the dummy register is configured to maintain a constant shift cycle when the repair data is provided to a memory instance of the memory circuit.
In some examples, a size of the dummy register coincides with a maximum size of an I/O map in a particular memory instance of the memory circuit.
Some examples additionally include: generating a repair signature indicating one or more faults at the memory circuit; and activating a redundancy feature of a memory macro of the memory circuit based on the repair signature; where, in response to activating the redundancy feature of the memory macro, the repair data is loaded to the pseudo parallel mapping circuit and shifted to the redundant sectors of the memory circuit to address the one or more faults indicated by the repair signature.
In some examples, the first interface is a parallel interface of the pseudo parallel mapping circuit; and the second interface is a serial interface of the pseudo parallel mapping circuit.

Particular embodiments of the subject matter described in this specification can be implemented so as to realize one or more of the following advantages.
Implementing the disclosed system and techniques, or a related design, allows for a reduction in the overall circuit footprint for a One-Time Programmable (OTP) memory. Relative to existing designs for repair circuitry, the disclosed technique results in a lower physical system footprint for the repair infrastructure. Additionally, the reduced repair chain length of this system can result in improved system boot-up times or repair register reloading times, as well as the ability to use an example "FAST Loading" feature of a given memory macro. The reduced boot time also allows the direct repair of memory addresses needed for the boot process, which would otherwise be unrepairable. Also achieved is a reduction in the maximum chain length and the total repair bits required. Finally, because the CBL and VMWs in this example are configured to work with existing serial repair interfaces, the design does not require any changes to external hardware or software to implement (e.g., a solution that is vendor independent).
The details of one or more embodiments of the subject matter of this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an example SoC.
FIG. 2 illustrates an example system including a custom BISR loader (CBL) and a virtual memory wrapper (VMW).
FIG. 3 illustrates an example system that incorporates CBL and a collection of VMWs into an example IP block.
FIG. 4 illustrates a detailed view of an example CBL.
FIG. 5 illustrates a detailed view of an example PPM.
FIG. 6 illustrates an example process of providing repair data using the techniques of this specification.
Like numbers between the drawings indicate similar components.
DETAILED DESCRIPTION
FIG. 1 is a block diagram of an example computing system 100 that includes a system-on-chip 102 ("SoC 102"). In some implementations, system 100 includes multiple SoCs. The SoC 102 includes a central processing unit 104 ("CPU 104"), a shared memory 106 ("memory 106"), a repair data controller 108, and an IP/circuit block 110.
The CPU 104 can be a general purpose CPU (e.g., a single or multi-core CPU). The CPU 104 generates one or more indicators, such as an app-launch indicator or a function call that is triggered in response to executing or launching an application at a user device. For example, the application can be a camera application that uses an imaging sensor to generate image data or a gaming application that requires substantial memory and graphics processing resources to render graphical content of the game. The CPU 104 also generates one or more application values, such as pixel values or frame rate. The application values may be associated with a function call, may be descriptive of an event that occurs during execution of the application, or both.
The memory 106 is a system memory, shared memory, or both. In the example of FIG. 1, memory 106 is depicted external to circuit block 110. However, memory 106 can include portions of memory that are: i) specific to circuit block 110, ii) external to circuit block 110, or iii) both. The memory 106 can be a random access memory of the SoC 102, such as a dynamic random access memory (DRAM), a synchronous DRAM (SDRAM), or double data rate (DDR) SDRAM.
In some implementations, aspects of memory 106 are configured as a shared scratchpad memory that supports parallel access of its memory resources by two or more processors of the circuit block 110. The memory 106 can also include various other types of memory, such as high bandwidth memory (HBM), narrow memory (e.g., for storing 8-bit values), wide memory (e.g., for storing 16-bit or 32-bit values), etc.
The repair data controller 108 is implemented in hardware and software. Aspects of the repair data controller 108 can also be implemented as firmware of the SoC 102 or firmware of a device of the SoC 102, such as the CPU 104. The repair data controller 108 includes a virtual memory wrapper 120 and a custom BISR loader 122 implemented in hardware and software. Each of the virtual memory wrapper 120 and the custom BISR loader 122 can include memory resources (e.g., flip-flops, registers, buffers, etc.) that are implemented in hardware and control logic (e.g., programmed code) that is implemented in software. The virtual memory wrapper 120 and custom BISR loader 122 are described in more detail below with reference to the example of FIG. 2.
Aspects of the repair data controller 108 can be implemented as a software routine (or module) of the CPU 104, for example, using one or more hardware resources of the CPU 104, such as registers, buffers, etc. In some implementations, the CPU 104 is configured as an instruction and vector data processing engine that processes data obtained from a system memory of the SoC 102, such as memory 106.
As described in detail below, the repair data controller 108 is configured to enable parallel repair data functions for a memory macro having a serial repair interface, and to generate the control signaling and associated data values that are used to provide repair data to memory circuits in devices of the system 100. The devices can include processors, processor cores, or special-purpose processing devices, such as individual IP devices of the circuit block 110.
The circuit block 110 can include an image signal processor (ISP) 112, a tensor processing unit (TPU) 114, a digital signal processor (DSP) 116, and a graphics processing unit (GPU) 118. The circuit block 110 is referred to alternatively as an IP block 110, where the IP block can include one or more proprietary hardware elements. For example, each of the ISP 112, TPU 114, DSP 116, and GPU 118 can be a respective proprietary IP block (or IP device) of a particular entity or device manufacturer.
The repair data controller 108 uses operations performed by the virtual memory wrapper 120 and custom BISR loader 122 to dynamically control, execute, and/or manage repair operations at the SoC 102 in support of heterogeneous compute operations that involve executing an application at a device that includes the SoC 102. More specifically, the repair data controller 108 is configured to generate control signaling 124 and use one or more discrete signal values of the control signaling 124 to arrange and synchronize repair data operations across memory circuits of system 100, such as memory circuits in the processing units that are included among the IP block 110, the CPU 104, or both. In some implementations, each processor (e.g., ISP 112, TPU 114, DSP 116, GPU 118, or CPU 104) of the SoC 102 includes multiple cores and the repair data controller 108 can generate control signaling 124 to synchronize or otherwise manage repair data operations at each core of the processors.
In the example of FIG. 1, the system 100 and its SoC 102 form an integrated circuit of an example user/client device, consumer electronic device, or mobile device. Each of these devices can be example items such as a smartphone 130a, tablet 130b, laptop 130c, or smartwatch or wearable device 130d. The devices may also include other items such as an eNotebook, Netbook, smart speaker, or mobile computer. In some implementations, the system 100, including its SoC 102, is an integrated circuit of a desktop computer, network server, or related cloud-based asset.

FIG. 2 illustrates an example repair data controller 108 that includes a custom BISR loader (CBL) 122 and a virtual memory wrapper (VMW) 120. Also included in the repair data controller 108 are BISR registers 224, a Pseudo Parallel Map (PPM) 226 that contains redundant memory addresses 213, and a memory structure 208 that contains multiple main sector memory addresses 209. The FIG. 2 illustration of the BISR registers 224 of the repair data controller 108 is a representative example. In some examples, BISR registers 224 are components added to the SoC 102 by a third-party vendor. In these examples, the BISR registers 224 may be configured based on vendor-specific design preferences.
When a boot-up process is completed on the SoC 102, a signal is transmitted to the CBLs 122 of the repair data controller 108 included on the SoC 102. In some examples, this signal is a bisr_done signal 202 transmitted by a BISR controller (not illustrated). In some examples, the bisr_done signal 202 includes a repair signature that indicates faults in a memory structure of the SoC 102 or an IP block 110 within the SoC 102. Each CBL 122 in the SoC 102 generates (or passes) a clock signal (bisr_clk) 212 based on a bisr_done signal 202 received from the BISR controller. The bisr_clk signal 212 is supplied as a clock signal to a CLK input of the PPM 226 in a VMW 120. More specifically, the bisr_clk signal 212 is used to control and/or trigger a serial shifting of repair data from BISR registers 224 to memory structure 208 to implement repair operations at the memory structure.
For example, a first pulse of the bisr_clk signal 212 causes the PPM 226 to obtain or receive repair data from its associated BISR register 224. In some examples, the repair data is transferred to the redundant memory addresses 213 of the PPM 226 via a parallel interface (e.g., parallel BISR ports 214) that provides a communication interface between the BISR register 224 and the PPM 226. In some examples, when loading repair data, the CBL 122 references the repair signature included in the bisr_done signal 202. On the second pulse of the bisr_clk signal 212, the CBL 122 sends an enable signal 210 (bisr_load_en) to the PPM 226, which receives this signal as serial_shift 215. The PPM 226 then shifts in (bisr_SI) 207 one iteration of repair data from the redundant memory addresses 213 to the main sector memory addresses 209.
The PPM 226 can be configured to repeat this process for each pulse of bisr_clk 212. For each pulse of bisr_clk 212, the PPM 226 can execute a sequential shift operation to shift respective iterations of repair data to corresponding address locations of memory structure 208. This iterative shifting of repair data can be repeated until data at the main sector addresses 209 matches a corresponding redundant memory address 213 (e.g., redundant data for "IOMap[0]" is now at the "IOMap[0]" main sector address). As each iteration of repair data is shifted via repair data signal 207, a shift out 211 of corrupted data (bisr_SO) is received and transferred to a redundant memory address 213. In some examples, a counter within the PPM 226 is also incremented for each iterative shift of repair data via repair data signal 207.
With each pulse of bisr_clk 212, and while bisr_load_en 210 is enabled, a serial shift via signal 207 of repair data is conducted from the redundant memory addresses 213 to the main sector memory addresses 209. When a preset limit is reached (e.g., the last pulse of bisr_clk 212), the bisr_load_en signal 210 is disabled and the PPM 226 ceases shifting in 207 repair data from redundant memory addresses 213. In some examples, this preset limit is a constant (e.g., "N" clock cycles or pulses) that is set based on the largest number of memory addresses 209 expected within the memory structures 208 and VMWs 120 serviced by the CBL 122 (e.g., an I/O map size in a particular memory instance or structure). For example, a particular memory structure 208 can contain 1,500 unique memory addresses 209, representing the largest number of unique memory addresses across 100 different memory structures 208. In this example, the value of N would be set to 1,500, such that a shift-in operation performed via the repair data signal 207 is conducted for each of the 1,500 memory addresses 209. Because this particular memory structure 208 contains the largest number of unique memory addresses 209, the shared counter value of N would result in complete repairs for the other memory structures 208 and VMWs 120, which comparatively have a smaller number of unique memory addresses 209. In other examples, the value of N is set based on the largest expected number of unique memory addresses 209 in the IP block 110 or SoC 102. Alternatively or in addition, the preset limit used by the CBL 122 to pause or disable shift-in operations, via signal path 207, can be based on other values of N, different signals received by the CBL 122, or both.
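As a rough illustration, the pulse-by-pulse exchange of repair data and corrupted data can be sketched as follows. This simplified Python model indexes addresses directly rather than modeling a true serial chain, and the name shift_repair_data is hypothetical.

    # Simplified model: on each bisr_clk pulse while bisr_load_en is
    # asserted, one iteration of repair data moves into the main sector
    # (bisr_SI) and the displaced corrupted data is returned to the
    # redundant side (bisr_SO); shifting stops at the preset limit N.
    def shift_repair_data(redundant, main, n_max):
        for pulse in range(min(n_max, len(main))):
            shifted_out = main[pulse]        # corrupted data shifted out
            main[pulse] = redundant[pulse]   # repair data shifted in
            redundant[pulse] = shifted_out   # stored on the redundant side
        # bisr_load_en is deasserted once the preset limit is reached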
Because each memory structure 208 within the IP block 110 may have a differing number of memory addresses 209, the number of required repair operations is potentially different across VMWs 120. However, in the above example, only one bisr_load_en signal 210 is generated by the CBL 122 and sent to each VMW 120. To compensate for this variation in the number of memory addresses across IP blocks 110, each PPM 226 within each VMW 120 can configure a dummy register based on the largest expected amount of memory addresses 209 across all VMWs 120 serviced by the CBL 122 (e.g., an I/O map size in a particular memory instance or structure). For example, the dummy register size, D, can be configured based on the formula: D = N - "BISR Reg Length of the memory," where D and N are integer values. In some implementations, the dummy register size can be statically or dynamically configured. As described above, a value of N can be assigned based on the largest amount of memory addresses 209 expected in a particular IP block 110. In this example, a VMW 120 in a different IP block 110 with a BISR chain length of N-7 memory addresses 209 uses the PPM 226 to generate a dummy register of D=7 such that the VMW 120 can accept the maximum value ("N") of the bisr_load_en signal 210. The operation and structure of the dummy register is described in more detail with reference to the example of FIG. 5.
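The sizing rule above can be expressed directly, as in the following sketch; the function name is illustrative, and the numeric check assumes the N=1,500 scenario described with reference to FIG. 2.

    # Dummy register sizing per the formula in this section:
    # D = N - "BISR Reg Length of the memory"
    def dummy_register_size(n_max, bisr_chain_length):
        d = n_max - bisr_chain_length
        assert d >= 0, "N must cover the longest BISR chain"
        return d

    # A BISR chain of N-7 addresses yields a dummy register of D=7.
    assert dummy_register_size(1500, 1493) == 7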
Each of the VMW 120 and the PPM 226 can be implemented in hardware, software, or both. In some examples, the VMW 120 is implemented in software while the PPM 226 is implemented in hardware. In other examples, different implementations of hardware and software arrangements for VMW 120 and/or PPM 226 are within the scope of this disclosure. In some examples, a reset signal 206 is sent to the BISR registers 224, PPM 226, memory structures 208, or a combination of these components.
FIG. 3 illustrates an example system 300 that incorporates CBL 122 and a collection of VMWs 120 into an example IP block 110. Additionally, example system 300 includes a "fast loading" data stream 306, a power management unit (PMU) 310, and a collection of multiplexers (MUXs) 312. In some examples, each IP block 110 contains multiple memory structures, each with an associated VMW 120. In this example, bisr_load_en 210 and bisr_clk 212 originate in a shared CBL 122 that transmits these signals to each of the VMWs 120 in the IP block 110.
Additionally, a shared BISR shift-in signal 302 is sent from a BISR controller (not illustrated) to each of the BISR registers 224 to transfer repair data to each of the PPMs 226 within the VMWs 120. A common data shift out 304 is then returned to the BISR controller from the last BISR register 224 in the series. In some examples, BISR registers 224 and BISR controllers can include a "fast loading" mode where additional repair data is transmitted to each of the BISR registers 224 on the SoC 102 or IP block 110 in a fast-loading data stream 306. This fast-loading data stream 306 is provided by the BISR controller, a one-time-programmable (OTP) memory, or another component external to the SoC 102 or IP block 110. In some examples, utilizing a fast-loading mode is desirable to achieve faster SoC boot-up times. In the example where fast loading is possible, the fast-loading repair data stream 306 is combined in a multiplexer 312 with the repair data received from the BISR register 224. The combined data stream is then provided to the associated PPM 226 within the VMW 120.
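For illustration, the per-VMW selection performed by each multiplexer 312 can be approximated as a simple two-input choice; the mode flag and function name below are hypothetical.

    # Each MUX 312 forwards either the serial repair data from a BISR
    # register 224 or the external fast-loading stream 306 to the PPM 226.
    def mux_ppm_input(fast_load_mode, bisr_data, fast_stream_data):
        return fast_stream_data if fast_load_mode else bisr_data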
In some examples, when the repair operation by the CBL 122 is completed, the CBL sends a completion signal (bisr_done) 308 to other components within the SoC 102. In some examples, the bisr_done signal 308 is sent to a PMU 310 within the SoC 102. When the PMU 310 receives the bisr_done signal 308, the PMU 310 performs various operations, for example, enabling the SoC 102 or other components to utilize the newly repaired memory structures.
FIG. 4 illustrates a detailed view 400 of an example CBL 122. The example CBL 122 includes clock gates 404 and 414, a counter 408, sticky flops 402 and 406, comparators 410 and 412, and other various logic gates and circuitry. While certain logic gates are pictured, other logic gate combinations are possible. In some implementations, the logic gates and circuitry of CBL 122 are implemented using digital circuits that process discrete signals, e.g., corresponding to discrete voltages and current values. For these implementations, references to "low" or "high" values and/or signals can correspond to binary or bit values of "0" and "1," respectively.
In some examples, the CBL 122 receives a clock signal 401 from an external source, for example, a clock frequency maintained in a master clock of the SoC 102. In this example, this external clock signal 401 is used to ultimately generate the bisr_clk signal 212 of the CBL 122. In this example, the clock signal 401 is received by the CBL 122 at a sticky flop 402 and clock gate 404. In other examples, the clock signal used to generate bisr_clk 212 originates within the CBL 122.
In some examples, when CBL 122 receives the bisr_done signal 202 from the BISR controller, this signal is received by sticky flop 402. In this example, receiving the bisr_done signal 202 causes the sticky flop 402 to enable the clock gate 404 and counter 408. Additionally, the enable output signal of sticky flop 402 is returned to the input of sticky flop 402 through OR gate 403. Upon being enabled by sticky flop 402, clock gate 404 sends a clock signal to clock gate 414, counter 408, and sticky flop 406.
Additionally, upon being enabled by sticky flop 402, counter 408 starts/initiates a count sequence and generates a count/counter signal. This counter signal is transmitted from counter 408 to two comparators 410 and 412. One comparator 410 is set to a counter value of "0," such that a high value is passed when the counter is not enabled or at a zero value. This high signal is inverted by an inverter 411 (NOT gate) such that the resulting value of bisr_load_en 210 for a counter value of zero is low. Additionally, the second comparator 412 is set to a preset value (e.g., N as described with reference to FIG. 2) and passes a low value for a counter value below this setpoint. The low output from comparator 412 is inverted by NOT gate 409 and passed to OR gate 413, which additionally accepts the output from comparator 410. If either input to OR gate 413 is high (e.g., a non-zero counter value below the setpoint), OR gate 413 passes a high signal and enables clock gate 414.
Upon the counter 408 incrementing to a non-zero value, the output of comparator 410 shifts to a low value, which is then inverted to a high value by NOT gate 411. This high value is then transmitted from the CBL 122 as bisr_load_en 210, enabling downstream components as discussed above in FIGS. 2 and 3. Additionally, when comparator 410 passes a high value (i.e., at non-zero counter values), an enable signal is passed to clock gate 414.
Additionally, in some examples, the output of NOT gate 409 is provided to AND gate 407, which also receives input from sticky flop 402. While the counter is below the maximum setpoint of comparator 412, NOT gate 409 passes a high signal to AND gate 407 (which additionally receives a high signal from enabled sticky flop 402). Upon the count reaching the setpoint value, and comparator 412 triggering a low output from NOT gate 409, AND gate 407 shifts to passing a low value output, which disables the counter 408. In summary, when the counter 408 reaches the setpoint value, the logic within the CBL 122 is such that the counter 408 is then disabled and ceases to increment.
With each pulse of the clock signal 401, the counter 408 increments the count value. Upon reaching a preset value (e.g., N as described with reference to FIG. 2), the second comparator 412 passes a high value which is then inverted by NOT gate 409 to a low value. This low value is received by OR gate 413 (which is already in receipt of a low value from comparator 410 for a non-zero count value), which then passes a low value and disables clock gate 414.
While clock gate 414 is enabled by the signal from OR gate 413 (e.g., non-zero counter values below the setpoint), clock gate 414 passes the clock signal 401 received from clock gate 404 as bisr_clk 212 to the downstream components as discussed above in FIGS. 2 and 3. Upon being disabled by OR gate 413, clock gate 414 will cease passing a bisr_clk signal 212.
In some examples, CBL 122 also receives a reset signal 206 from an external source. In some examples, this reset signal 206 is received by NOT gate 405, which inverts any high reset signal 206 to a low value. The low output of NOT gate 405 is received by sticky flop 406, which in response sends a low value to both AND gates 407 and 409. When AND gate 407 receives a low signal from sticky flop 406, it passes a low signal to counter 408 and disables the counter. Additionally, when AND gate 409 receives a low signal from sticky flop 406, a reset signal is passed to counter 408, which resets the count to a zero value.
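For illustration only, the gate-level behavior described with reference to FIG. 4 can be approximated with a small state model in Python. The class and method names are hypothetical, and the model collapses the clock gates, comparators, sticky flops, and reset path into equivalent conditional logic.

    # Behavioral approximation of the CBL control logic (not RTL):
    # sticky flop 402 latches bisr_done, counter 408 counts pulses of
    # clock 401, comparator 410 drives bisr_load_en for non-zero counts,
    # and comparator 412 stops the clock and counter at the setpoint N.
    class CblModel:
        def __init__(self, n_max):
            self.n_max = n_max      # setpoint of comparator 412
            self.count = 0          # state of counter 408
            self.enabled = False    # state of sticky flop 402

        def on_bisr_done(self):
            self.enabled = True     # sticky flop latches the done signal

        def on_clk_pulse(self):
            # Returns (bisr_clk, bisr_load_en) for one pulse of clock 401.
            if not self.enabled or self.count >= self.n_max:
                return (False, False)       # clock gate 414 disabled
            self.count += 1                 # counter 408 increments
            return (True, self.count > 0)   # load enable at non-zero counts

        def reset(self):
            self.count = 0                  # reset signal 206 clears the count
            self.enabled = False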
While certain logical operations are described above, these are simply representative examples of how the functionality of CBL 122 is implemented in circuitry. Other circuitry configurations within the scope of this disclosure that achieve the same or similar functionality of CBL 122 are possible. Moreover, changes in the functionality of CBL 122 in the different examples within this specification may require different implementations of the circuitry and logic described above.
FIG. 5 illustrates a detailed view of an example PPM 226. The example PPM 226 includes a BISR 510 and dummy register 520, each of which contains sticky flops and multiplexers.
As described with reference to FIG. 2, because the maximum value of the counter is based on the highest expected number of unique memory addresses 209, and one CBL 122 may service many different IP blocks 110 within the SoC 102, there may be instances where other memory structures 208 serviced by the CBL 122 have a lower number of memory addresses 209 (e.g., a smaller size I/O map) than the maximum value of the counter of the CBL 122 (e.g., the value of N). For example, if a BISR 510 with 5 sticky flops 502, and 5 data addresses for associated memory addresses 209 in a memory structure 208, is directed to perform 10 memory data shift-in operations (e.g., N=10), the memory data shifted into the memory structure 208 would ultimately be sent to the wrong address since the number of shift operations performed by the BISR 510 is greater than what is needed by the memory structure 208. To compensate for the differentiation in unique memory addresses 209 across different IP blocks 110 serviced by the CBL 122, and to maintain a constant shift cycle, each PPM 226 includes a dummy register 520 that contains additional sticky flops 522.
In the above example for the memory structure 208 with 5 unique memory addresses 209, a dummy register 520 can include 5 sticky flops 522 (e.g., D=5) such that when the total number of shift-in operations, via signal 207, is conducted by the BISR 510, the memory data in the sticky flops 502 of the BISR 510 is ultimately sent to the correct address (e.g., N=BISR+D). The above example is merely representative, and other examples of the above operation of the dummy register can include more (or fewer) data addresses and sticky flops 522. For example, a dummy register can include significantly more data addresses and sticky flops 522, such as hundreds or thousands. In some examples, the multiplexers 524 also receive an input from a tie-0 value 526.
As described above with reference to FIG. 2, in some examples, when conducting a repair operation the PPM 226 is first loaded with repair data via a parallel interface. In some examples, this is performed with parallel BISR ports 214, which transmit each instance of repair data to an associated multiplexer 504 and sticky flop 502. Different examples of BISRs can have different numbers of multiplexers 504 (e.g., multiplexers 504a through 504n) and sticky flops 502 (e.g., sticky flops 502a through 502n). Upon the BISR 510 receiving a shift-in command through serial shift 215, each sticky flop 502 serially shifts in 211 the repair data through its associated multiplexer 504 to the next sticky flop 502 in the series. As a counter value of counter 408 increments, shift commands are routed through serial shift 215. A result of this operation is that the repair data is shifted, via signal path 207, from one address to the next in series (e.g., a different sticky flop). This shift operation continues until the repair data is transferred from the BISR 510, through the dummy register 520, to the end address in the memory structure 208 connected to the PPM 226.
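A minimal sketch of the constant-length shift chain of FIG. 5 follows, assuming the chain emits one value per pulse from the flop nearest the memory and back-fills with the tie-0 value. The exact ordering at the memory depends on its I/O map, and the names here are illustrative.

    # BISR flops are parallel loaded, then the full chain (BISR flops
    # plus D dummy flops) is shifted serially for exactly N pulses, so
    # every VMW serviced by the CBL sees the same shift count.
    def ppm_shift_chain(repair_data, dummy_size, n_max):
        chain = list(repair_data) + [0] * dummy_size  # tie-0 dummy flops
        assert len(chain) == n_max, "N equals BISR length plus D"
        shifted_out = []
        for _ in range(n_max):               # one shift per serial_shift pulse
            shifted_out.append(chain.pop())  # tail flop feeds the memory
            chain.insert(0, 0)               # tie-0 fills the head flop
        return shifted_out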
FIG. 6 illustrates an example process 600 of providing repair data using the techniques of this specification.
The process 600 includes loading repair data, using a first interface, to a pseudo parallel mapping (PPM) circuit coupled to a memory circuit (610). In some examples, this first interface includes a set of parallel BISR ports 214 that receive repair data from BISR registers 224 (like those discussed above with reference to FIG. 2). In some examples, the PPM 226 contains multiple sticky flops and multiplexers that correspond to the number of unique memory addresses in the memory circuit that will receive repair data.
The process 600 includes generating, at a repair data controller, control signals that control shifting the repair data from the pseudo parallel mapping (PPM) circuit to the memory circuit (620). In some examples, the repair data controller is a custom BISR loader (CBL) 122 that generates shift-in commands using a counter. In some examples, the limit of the counter of the CBL 122 is based on the largest number of unique memory addresses across all memory circuits serviced by the CBL 122.
The process 600 includes shifting, using a second interface, the repair data from the pseudo parallel mapping (PPM) circuit to redundant sectors of the memory circuit based on multiple input/output (I/O) maps of the memory circuit (630). In some examples, the second interface is the connection between the PPM 226 and memory structure 208 that allows the transmission of repair data.
Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. Alternatively or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can also be, or further include, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (which may also be referred to or described as a program, software, a software application, an app, a module, a software module, a script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a data communication network.
For a system of one or more computers to be configured to perform particular operations or actions means that the system has installed on it software, firmware, hardware, or a combination of them that in operation cause the system to perform the operations or actions. For one or more computer programs to be configured to perform particular operations or actions means that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform the operations or actions.
As used in this specification, an "engine," or "software engine," refers to a software implemented input/output system that provides an output that is different from the input. An engine can be an encoded block of functionality, such as a library, a platform, a software development kit ("SDK"), or an object. Each engine can be implemented on any appropriate type of computing device, e.g., servers, mobile phones, tablet computers, notebook computers, music players, e-book readers, laptop or desktop computers, PDAs, smart phones, or other stationary or portable devices, that includes one or more processors and computer readable media. Additionally, two or more of the engines may be implemented on the same computing device, or on different computing devices.
The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA or an ASIC, or by a combination of special purpose logic circuitry and one or more programmed computers.
Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. The central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and pointing device, e.g., a mouse, trackball, or a presence sensitive display or other surface by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's device in response to requests received from the web browser. Also, a computer can interact with a user by sending text messages or other forms of message to a personal device, e.g., a smartphone, running a messaging application, and receiving responsive messages from the user in return.
Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface, a web browser, or an app through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits data, e.g., an HTML page, to a user device, e.g., for purposes of displaying data to and receiving user input from a user interacting with the device, which acts as a client. Data generated at the user device, e.g., a result of the user interaction, can be received at the server from the device.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially be claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain cases, multitasking and parallel processing may be advantageous.
What is claimed is:

Claims

1. A method for providing repair data to a memory circuit, the method comprising: loading, using a first interface, repair data to a pseudo parallel mapping circuit coupled to the memory circuit; generating, by a repair data controller, control signals that control shifting the repair data from the pseudo parallel mapping circuit to the memory circuit; and shifting, using a second interface, the repair data from the pseudo parallel mapping circuit to redundant sectors of the memory circuit based on a plurality of input/output (I/O) maps of the memory circuit.
2. The method of claim 1, wherein the loading the repair data comprises: loading, using the first interface, respective portions of the repair data in parallel to a built-in self-repair register of the pseudo parallel mapping circuit.
3. The method of claim 2, wherein the shifting the repair data comprises: serially shifting, using the second interface, respective portions of the repair data from the built-in self-repair register of the pseudo parallel mapping circuit to the redundant sectors of the memory circuit.
4. The method of claim 3, wherein shifting the repair data comprises: shifting, by the first register, a respective portion of the repair data to a redundant sector of the memory circuit that corresponds to a main sector of the memory circuit; wherein the respective portion of repair data is shifted to the redundant sector based on a fault at the corresponding main sector.
5. The method of claim 3, wherein: generating the control signals comprises generating a plurality of clock signals; and shifting the repair data comprises shifting the repair data based on the plurality of clock signals, such that respective portions of the repair data are shifted serially over the plurality of clock cycles using the second interface.
6. The method of claim 3, wherein the pseudo parallel mapping circuit comprises a dummy register and shifting the repair data comprises: performing one or more serial shifts using the dummy register when a size of a corresponding I/O map is less than a size of the dummy register.
7. The method of claim 6, wherein the dummy register is configured to maintain a constant shift cycle when the repair data is provided to a memory instance of the memory circuit.
8. The method of claim 7, wherein a size of the dummy register coincides with a maximum size of an I/O map in a particular memory instance of the memory circuit.
9. The method of claim 1, further comprising: generating a repair signature indicating one or more faults at the memory circuit; and activating a redundancy feature of a memory macro of the memory circuit based on the repair signature; wherein, in response to activating the redundancy feature of the memory macro, the repair data is loaded to the pseudo parallel mapping circuit and shifted to the redundant sectors of the memory circuit to address the one or more faults indicated by the repair signature.
10. The method of claim 1, wherein: the first interface is a parallel interface of the pseudo parallel mapping circuit; and the second interface is a serial interface of the pseudo parallel mapping circuit.
11. A system for providing repair data to a memory circuit, the system comprising: a processing device; and a non-transitory machine-readable storage device storing instructions that are executable by the processing device to cause performance of operations comprising: loading, using a first interface, repair data to a pseudo parallel mapping circuit coupled to the memory circuit; generating, by a repair data controller, control signals that control shifting the repair data from the pseudo parallel mapping circuit to the memory circuit; and shifting, using a second interface, the repair data from the pseudo parallel mapping circuit to redundant sectors of the memory circuit based on a plurality of input/output (I/O) maps of the memory circuit.
12. The system of claim 11, wherein the loading the repair data comprises: loading, using the first interface, respective portions of the repair data in parallel to a built-in self-repair register of the pseudo parallel mapping circuit.
13. The system of claim 12, wherein the shifting the repair data comprises: serially shifting, using the second interface, respective portions of the repair data from the built-in self-repair register of the pseudo parallel mapping circuit to the redundant sectors of the memory circuit.
14. The system of claim 13, wherein shifting the repair data comprises: shifting, by the first register, a respective portion of the repair data to a redundant sector of the memory circuit that corresponds to a main sector of the memory circuit; wherein the respective portion of repair data is shifted to the redundant sector based on a fault at the corresponding main sector.
15. The system of claim 13, wherein: generating the control signals comprises generating a plurality of clock signals; and shifting the repair data comprises shifting the repair data based on the plurality of clock signals, such that respective portions of the repair data are shifted serially over the plurality of clock cycles using the second interface.
16. The system of claim 13, wherein the pseudo parallel mapping circuit comprises a dummy register and shifting the repair data comprises: performing one or more serial shifts using the dummy register when a size of a corresponding I/O map is less than a size of the dummy register.
17. The system of claim 16, wherein the dummy register is configured to maintain a constant shift cycle when the repair data is provided to a memory instance of the memory circuit.
18. The system of claim 17, wherein a size of the dummy register coincides with a maximum size of an I/O map in a particular memory instance of the memory circuit.
19. The system of claim 11, wherein: the first interface is a parallel interface of the pseudo parallel mapping circuit; and the second interface is a serial interface of the pseudo parallel mapping circuit.
20. A non-transitory machine-readable storage device storing instructions used to provide repair data to a memory circuit, the instructions being executable by a processing device to cause performance of operations comprising: loading, using a first interface, repair data to a pseudo parallel mapping circuit coupled to the memory circuit; generating, by a repair data controller, control signals that control shifting the repair data from the pseudo parallel mapping circuit to the memory circuit; and shifting, using a second interface, the repair data from the pseudo parallel mapping circuit to redundant sectors of the memory circuit based on a plurality of input/output (I/O) maps of the memory circuit.