
WO2025155834A1 - Systèmes et procédés de génération d'images normalisées de sections de tissu biologique - Google Patents

Systèmes et procédés de génération d'images normalisées de sections de tissu biologique

Info

Publication number
WO2025155834A1
Authority
WO
WIPO (PCT)
Prior art keywords
stain
image
pixel intensity
intensity values
computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2025/012052
Other languages
English (en)
Inventor
Jeffrey LA
Krishnan Raghunathan
Jocelyn SILVESTER
Jay THIAGARAJAH
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Boston Childrens Hospital
Original Assignee
Boston Childrens Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Boston Childrens Hospital filed Critical Boston Childrens Hospital
Publication of WO2025155834A1 publication Critical patent/WO2025155834A1/fr
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 5/92 Dynamic range modification of images or parts thereof based on global image properties
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 15/00 Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G01N 15/01 Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials specially adapted for biological cells, e.g. blood cells
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 15/00 Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G01N 15/10 Investigating individual particles
    • G01N 15/14 Optical investigation techniques, e.g. flow cytometry
    • G01N 15/1429 Signal processing
    • G01N 15/1433 Signal processing using image recognition
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 15/00 Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G01N 15/10 Investigating individual particles
    • G01N 15/14 Optical investigation techniques, e.g. flow cytometry
    • G01N 15/1468 Optical investigation techniques, e.g. flow cytometry with spatial resolution of the texture or inner structure of the particle
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 21/00 Microscopes
    • G02B 21/24 Base structure
    • G02B 21/241 Devices for focusing
    • G02B 21/244 Devices for focusing using image analysis techniques
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 1/00 Sampling; Preparing specimens for investigation
    • G01N 1/28 Preparing specimens for investigation including physical details of (bio-)chemical methods covered elsewhere, e.g. G01N 33/50, C12Q
    • G01N 1/30 Staining; Impregnating; Fixation; Dehydration; Multistep processes for preparing samples of tissue, cell or nucleic acid material and the like for analysis
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 15/00 Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G01N 15/10 Investigating individual particles
    • G01N 2015/1006 Investigating individual particles for cytology
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 15/00 Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G01N 15/10 Investigating individual particles
    • G01N 15/14 Optical investigation techniques, e.g. flow cytometry
    • G01N 2015/1497 Particle shape
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10056 Microscopic image
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H 10/40 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for data related to laboratory analysis, e.g. patient specimen analysis
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Definitions

  • Histopathological examination of sections of biological tissue plays a pivotal role in the diagnosis and prognosis of numerous medical conditions (e.g., carcinoma, dysplasia, benign tumors, celiac disease, ulcerative colitis, etc.).
  • this examination typically involves preparing thin tissue sections, staining the sections with specific dyes, and then manually scrutinizing slides of the prepared tissue sections under a microscope to identify and/or characterize morphologic structures within the tissue sections.
  • the stain channel may include a range of pixel intensity values corresponding to a range of stain concentrations
  • the method may further include generating multiple images for the stain channel, each of which captures a different subrange in the stain channel's range of pixel intensity values
  • the multiple images may include a stain-high image that captures a subrange of pixel intensity values that are higher than the pixel intensity values of any of the other subranges in the range of pixel intensity values
  • the lower bound may include pixel intensity values from the stain-high image.
  • creating the segmentation map from the lower bound may include applying a correction term to the lower bound and creating the segmentation map from the corrected lower bound.
  • the method may further include generating, from the input image, multiple stain channels, including the stain channel indicating the stain that stains for nuclei, where each stain channel within the multiple stain channels corresponds to a different stain applied to the biological tissue section, generating, from the multiple stain channels, an upper bound, and generating the correction term based on the upper bound.
  • each stain channel within the multiple stain channels may include a different range of pixel intensity values corresponding to a different range of stain concentrations
  • the method may further include generating multiple images for each stain channel within the multiple stain channels
  • each of the multiple images may include a stain-background image, a stain-high image, and a stain-low image, each of which captures pixel intensity values within a different subrange of the corresponding stain channel's range of pixel intensity values
  • the stain-background image may capture a subrange of pixel intensity values corresponding to a background level of a stain indicated by the corresponding stain channel
  • the stain-high image may capture a subrange of pixel intensity values that are higher than the pixel intensity values in the subranges of the stain-background image and the stain-low image
  • the stain-low image may capture a subrange of pixel intensity values that are higher than the pixel intensity values in the subrange of the stain-background image and lower than the pixel intensity values in the subrange of the stain-high image
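The background/low/medium/high partitioning described above can be sketched as follows. The specific cut points (fixed fractions of the channel's intensity range) are illustrative assumptions; the text does not prescribe where the subrange boundaries fall:

```python
# Sketch: splitting one stain channel into background/low/medium/high
# images, each capturing a different subrange of the channel's range of
# pixel intensity values. Cut points are assumed for illustration.

def split_stain_channel(channel, cuts=(0.1, 0.4, 0.7)):
    """channel: 2-D list of stain intensities in [0, 1].
    Returns four binary masks, one per subrange."""
    c_bg, c_low, c_high = cuts
    names = ("background", "low", "medium", "high")
    images = {name: [[0] * len(row) for row in channel] for name in names}
    for i, row in enumerate(channel):
        for j, v in enumerate(row):
            if v < c_bg:
                images["background"][i][j] = 1
            elif v < c_low:
                images["low"][i][j] = 1
            elif v < c_high:
                images["medium"][i][j] = 1
            else:
                images["high"][i][j] = 1
    return images

channel = [[0.05, 0.2], [0.5, 0.9]]
imgs = split_stain_channel(channel)
# each pixel lands in exactly one subrange image
```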
  • the techniques described herein relate to a computer-implemented method of stain normalization image processing for digitized biological tissue images including identifying, from an input image of a biological tissue section, multiple stain channels, each indicating a different stain of multiple stains applied to the biological tissue section and including a range of pixel intensity values corresponding to a range of concentrations for that stain, generating, for each stain channel within the multiple stain channels, multiple images based on relative stain concentration, where each image within the multiple images captures pixel intensity values within a different segment of the stain channel's range of pixel intensity values, and outputting one or more stain-normalized images based on one or more of the multiple images generated for each stain channel.
  • the techniques described herein relate to a computer-implemented method for constructing scale invariant images of biological tissue sections, the method including identifying multiple nuclei of cells within an image of a section of biological tissue, setting a length scale based on a size of one or more of the multiple nuclei, and scaling, for a morphological structure other than a nucleus that is represented in the image, a parameter related to length using the length scale that is based on the size of the one or more nuclei.
  • setting the length scale based on the size of one or more of the multiple nuclei may include setting the length scale based on an average size of the multiple nuclei.
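As a rough illustration of nucleus-based length scaling, the sketch below derives a linear scale from the average nucleus size and expresses another measurement in those units. The example area values, and the choice to interpret "size" as pixel area with a square-root conversion to a linear scale, are assumptions for illustration:

```python
import math

# Sketch: making length measurements scale invariant by expressing them
# in units derived from the average nucleus size. Nucleus sizes here are
# hypothetical pixel areas; real values would come from nuclei identified
# upstream in the pipeline.

def length_scale(nucleus_areas_px):
    # average nucleus area; its square root gives a linear length scale
    mean_area = sum(nucleus_areas_px) / len(nucleus_areas_px)
    return math.sqrt(mean_area)

def scale_length(length_px, scale):
    # express a pixel length in nucleus-sized units
    return length_px / scale

nucleus_areas = [98, 104, 101, 97]    # example areas in pixels
scale = length_scale(nucleus_areas)   # linear scale in px per unit
villus_height_px = 420
villus_height_units = scale_length(villus_height_px, scale)
```

Expressed this way, a villus height measured at different magnifications maps to the same number of nucleus-sized units.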
  • each of the multiple images may further include one or more stain-medium images including a subrange of pixel intensity values that are between the pixel intensity values in the subranges of the stain-high images and the stain-low images.
  • the upper bound may include pixels based on one or more of the images generated for each of the multiple stain channels except for the stain-background images.
  • the stain channel may indicate a stain that stains for nuclei.
  • the stain may include at least one of a hematoxylin stain, or a fluorescent stain that binds to DNA.
  • the multiple stain channels may include a hematoxylin channel, corresponding to a hematoxylin stain, and an eosin channel, corresponding to an eosin stain.
  • the method may further include identifying, from the input image and/or a stain-normalized image generated from the input image, multiple nuclei, and setting a length scale based on a size of one or more of the multiple nuclei.
  • the method may further include determining a quantitative histological measure of the morphologic structure using the length scale (e.g., a villus-height to crypt-depth ratio of a villus-crypt pair).
  • identifying the morphologic structure may include determining that the morphological structure satisfies a quality metric.
  • the method may further include, in response to determining that the morphological structure satisfies the quality metric, generating a user interface, including a display of the morphological structure, that visually draws attention to the morphological structure, visually indicates that the morphological structure satisfies the quality metric, and/or is prioritized in a queue of user interfaces, where each user interface in the queue may include an image of a different section of the biological tissue.
  • the method may further include determining that an image of an additional section of the biological tissue either doesn't include any instance of the morphological structure or includes an additional instance of the morphological structure that doesn't satisfy the quality metric, in response to determining that the morphological structure in the section of the biological tissue satisfies the quality metric, adding the input image to a digital queue of usable images for a pathologist to review, and in response to determining that the image of the additional section of the biological tissue either doesn't include any instance of the morphological structure or includes an additional instance of the morphologic structure that doesn't satisfy the quality metric, precluding the image of the additional section from the digital queue of usable images.
  • FIGS. 2–3 and FIGS. 5–8 are flow charts of processes 200, 300, 500, 600, 700, and 800, respectively, that may be implemented in some embodiments for an automated biological tissue analysis framework.
  • FIG. 4 is an exemplary block diagram showing an aspect of an exemplary stain normalization process.
  • FIG. 9 is an exemplary implementation of a computing device that may be used in a system implementing techniques described herein.
  • FIGS. 10, 21, 35, and 40–43 are exemplary graphs describing features of one implementation of the disclosed automated biological tissue analysis framework.
  • FIGS. 11–20, 22–34, and 36–37 are exemplary images of biological tissue used to illustrate exemplary implementations of the disclosed automated biological tissue analysis framework.
  • Such methods typically normalize colors by deconvolving the colors of an image into three color channels: red, green, and blue.
  • staining protocols used to prepare biological tissue samples predominantly manifest in the red channel, resulting in a disproportionately heightened representation within this channel.
  • conventional methods fail to yield a useful result when normalizing images of stained tissue sections into red, green, and blue channels.
  • some techniques disclosed herein include normalizing pixel values for an image of biological tissue using a novel type of channel (e.g., a channel capable of meaningfully delineating stain concentration variance within a stained image).
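One common way to obtain such a stain-concentration channel from RGB (not necessarily the method claimed in this publication) is to project each pixel's optical density onto a reference stain vector, in the spirit of Ruifrok–Johnston color deconvolution. The hematoxylin vector below is a widely cited reference value, not taken from this document, and a single projection is a simplification of full multi-stain unmixing:

```python
import math

# Sketch: mapping an RGB pixel to a hematoxylin-concentration value by
# projecting its optical density onto a reference stain vector.
# H_VECTOR is an assumed reference hematoxylin OD vector.

H_VECTOR = (0.65, 0.70, 0.29)

def optical_density(rgb):
    # Beer-Lambert: OD = -log10(I / I0), with I0 = 255 and +1 to avoid log(0)
    return tuple(-math.log10((c + 1) / 256.0) for c in rgb)

def hematoxylin_concentration(rgb):
    od = optical_density(rgb)
    norm = math.sqrt(sum(v * v for v in H_VECTOR))
    return sum(o * h / norm for o, h in zip(od, H_VECTOR))

# darker, bluish pixels (heavily hematoxylin-stained) score higher than
# near-white background pixels
stained = hematoxylin_concentration((70, 60, 120))
background = hematoxylin_concentration((245, 245, 245))
```

The resulting per-pixel concentrations form a stain channel whose intensity range can then be segmented as described elsewhere in this document.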
  • tissue boundaries can also be complicated by tissue heterogeneity (e.g., variations in composition and/or density).
  • some of the techniques described herein include novel approaches to identifying morphologic structures and/or features (e.g., from stain- and scale-normalized images that correct for noise, artifacts, non-tissue elements, etc.). These approaches can improve the uniformity and accuracy of morphological measurements and/or feature extraction across diverse tissue samples, reducing measurement discrepancies that may arise from human error or variations in manual techniques.
  • the disclosed tissue analysis system may analyze each image in a set of images (e.g., where each image corresponds to a different section of a biological tissue sample) to identify well-oriented images.
  • the tissue analysis system may select well-oriented images for a histopathologist’s review and/or prioritize well-oriented images in a queue generated for a histopathologist’s review.
  • features described herein improve the functioning of a computer. For example, by enabling machine learning models to handle variations encountered in clinical practice, some techniques described herein can enable a computer to analyze histopathological images.
  • FIG. 1 illustrates a block diagram of a system 100 with which some embodiments may operate for analyzing tissue.
  • System 100 may, in some cases, perform techniques described below in connection with FIGs.2–43.
  • system 100 can include a computing device 102.
  • Computing device 102 may be any type or form of device that may perform functions directed at automated histopathological tissue analysis.
  • system 100 can include a tissue analysis facility 104 that, for example, may be configured to perform one or more of the acts described in connection with FIGs. 2–43.
  • the tissue analysis facility 104 may be configured to (1) transform an input image 106 of a section of biological tissue 108 into a normalized image 110 and/or (2) algorithmically analyze the normalized image 110 (e.g., to identify a morphological structure and/or feature, such as morphological structure 112 and/or a feature of the morphological structure 112).
  • the input image 106 may represent an image of the biological tissue section 108 (e.g., captured via a microscope 114).
  • FIG. 2 describes an example of an integrated framework showing how one or more features of the illustrative frameworks of FIGs. 3–8 may work together, in some embodiments.
  • FIGs. 3–4 illustrate a framework for generating a stain-normalized image of a biological tissue section that may be used in some embodiments.
  • FIG.5 shows a process that may be used in some embodiments for generating a scale-normalized image of a biological tissue section.
  • FIG. 6 is a flowchart of an illustrative process of some embodiments for generating a segmentation map from a lower bound determined from a stain channel indicating a stain that stains for nuclei.
  • FIG.7 illustrates some techniques for identifying a morphological structure from a stain-normalized image, that may be used in some embodiments.
  • FIG.8 illustrates a process that may be used in some embodiments for identifying a villus-crypt pair from a stain-normalized image of a section of intestinal tissue.
  • FIG.2 provides a flow chart of a process 200 that may be implemented in some embodiments by a system, such as system 100 in FIG.1, to process and/or analyze an image of a section of biological tissue.
  • the staining process may include an immunohistochemistry (IHC) staining process in which antibodies labeled with different stains are used to help detect the presence (or absence) of specific proteins within the section of biological tissue.
  • systems and techniques are described herein in connection with images of fixed tissue sections, it should be appreciated that the systems and techniques described herein (e.g., for inferring structural information) can be used with images of any biological tissue, including tissues that do not result from sectioning.
  • the stained section is manually analyzed (e.g., by a pathologist viewing the stained section under a microscope).
  • the staining process described at step 210 results in stained tissue sections that are subject to a great deal of inter-laboratory and/or intra-laboratory variation in staining. As mentioned previously, such variability may arise from a variety of factors (e.g., differing laboratory protocols, reagent and batch variations, etc.).
  • an input image of the stained section is captured (e.g., input image 106).
  • the input image may represent a digital image that is represented in a color model, such as an RGB (Red-Green-Blue) color model.
  • each pixel of the input image may be defined by multiple channels, one channel for each color in the color model, and each pixel may include an intensity value for each of the multiple channels.
  • the input image may represent an RGB image defined by three color channels (a red color channel, a green color channel, and a blue color channel), and each of the pixels in the input image may include three intensity values (one for each of the three color channels).
  • the input image may be captured using any technique designed for capturing images of biological tissue sections.
  • FIG. 4 depicts an exemplary additional block diagram of the system 100 in which the system identifies a first stain channel 400 and a second stain channel 402 from the input image 106.
  • the first stain channel 400 indicates a first stain 404 and captures a range of pixel intensity values 406 for the input image 106 corresponding to stain concentrations for the first stain 404.
  • the second stain channel 402 indicates a second stain 408 and captures a range of pixel intensity values 410 for the input image 106 corresponding to stain concentrations for the second stain 408.
  • the system may identify stain channels corresponding to any stain, depending on which stains were applied to the biological tissue section (e.g., as described at step 210 of FIG. 2).
  • the biological tissue section may have been stained with hematoxylin and eosin.
  • the system may identify a hematoxylin channel, corresponding to the hematoxylin stain, and an eosin channel, corresponding to the eosin stain.
  • the range may further include one or more stain-moderate (e.g., stain-medium) segments, with a subrange of pixel intensity values that fall between the subranges of the stain-high segment and the stain-low segment.
  • one or more (e.g., each) of the images generated for a stain channel may be binarized (e.g., to yield a black-and-white image). An image may be binarized in a variety of ways.
  • an image may be binarized by assigning values of the image that are at and/or above a threshold value to a first binary value (e.g., 1) and values of the image that are at and/or below the threshold value to a second binary value (e.g., 0).
  • the threshold value for an image may be dynamically determined based on the values of the image.
  • the threshold value may represent a value corresponding to a median and/or mode of the values of the image. Binarizing the images may result in a variety of benefits.
  • binarizing the images may help yield crisply delineated borders (e.g., boundaries), which may be used for the distance transform and/or segmentation processes that will be described later (e.g., in connection with FIGS. 5–8). Processes for generating the various segments of the range are described in greater detail below (e.g., in connection with the section of this application labeled “Staining Invariance”).
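A minimal sketch of the dynamic-threshold binarization described above, using the median of the image's own values as the threshold (one of the options the text mentions):

```python
# Sketch: binarizing a stain-channel image with a threshold determined
# dynamically from the image's values (here the median). Pixels at or
# above the threshold become 1; pixels below it become 0.

def binarize(image):
    values = sorted(v for row in image for v in row)
    mid = len(values) // 2
    if len(values) % 2:
        threshold = values[mid]
    else:
        threshold = (values[mid - 1] + values[mid]) / 2
    return [[1 if v >= threshold else 0 for v in row] for row in image]

image = [[0.1, 0.9], [0.8, 0.2]]
binary = binarize(image)   # median threshold is 0.5 here
```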
  • FIG. 4 depicts an embodiment in which four images are generated for the first stain channel 400 (image 412 with a stain-high segment of range 406, image 414 with a stain-medium segment of range 406, image 416 with a stain-low segment of range 406, and image 418 with a stain-background segment of range 406) and four images are generated for the second stain channel 402 (image 420 with a stain-high segment of range 410, image 422 with a stain-medium segment of range 410, image 424 with a stain-low segment of range 410, and image 426 with a stain-background segment of range 410).
  • an image (e.g., each of the images) generated for a stain channel may capture one or more dynamics of the varying levels of pixel intensity across the pixel intensity values of the image.
  • the image may capture spatial dynamics indicating variations in pixel intensity across two or more regions of the image. Once identified, these spatial dynamics may be used in a variety of ways.
  • spatial dynamics may be used for segmentation (e.g., to partition an image into distinct homogeneous regions), mathematical morphological transformations (e.g., to enhance or suppress various features in an image using operations such as dilation, erosion, opening, and/or closing), and/or morphological feature identification (e.g., to extract and analyze shape-related features from an image).
  • the image may capture contrast dynamics indicating a variation between one or more of the relatively highest pixel intensity values within the image and one or more of the relatively lowest pixel intensity values within the image. Once identified, these contrast dynamics may be used in a variety of ways.
  • contrast dynamics may be used for stain normalization (e.g., the images generated for a stain channel may be normalized by shifting the values of an image based on the relative contrast in each image).
  • the image may capture brightness and/or saturation dynamics corresponding to the relative quantities of pixel intensity values designated as high and pixel intensity values designated as low.
  • these brightness and/or saturation dynamics may be used in a variety of ways. For example, brightness and/or saturation dynamics may be used when performing a distance transform (e.g., to identify the distance of pixels in the image to a nearest edge).
  • the distance transform may be used in a variety of contexts (e.g., for nuclei parameterization as will be discussed in connection with the steps of FIG.5).
  • the image may capture distribution dynamics indicating a distribution of pixel intensity values captured by the image. Once identified, these distribution dynamics may be used in a variety of ways. For example, distribution dynamics may be used for stain normalization and/or feature identification.
  • the system may output one or more stain-normalized images (e.g., normalized image 110 in FIG. 1) that are based on one or more of the images generated at step 320 for each stain channel.
  • the one or more stain-normalized images may include or represent an image of a lower bound or corrected lower bound (e.g., with one or more of the features described below in connection with FIG. 6). Additionally or alternatively, the one or more stain-normalized images may represent or include one or more of the images generated for each stain channel at step 320. In one embodiment, one or more of the images generated at step 320 may be digitally assembled to yield the one or more stain-normalized images.
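The digital-assembly step might be sketched as assigning each per-segment mask a fixed target intensity and compositing them into one normalized channel. The target levels below are illustrative assumptions; the text only says the generated images may be digitally assembled:

```python
# Sketch: assembling per-segment binary masks back into a single
# stain-normalized channel by mapping each subrange to a fixed target
# intensity. TARGETS values are assumed for illustration.

TARGETS = {"background": 0.0, "low": 0.25, "medium": 0.6, "high": 1.0}

def assemble(images):
    """images: dict of same-shape binary masks keyed by segment name."""
    first = next(iter(images.values()))
    rows, cols = len(first), len(first[0])
    out = [[0.0] * cols for _ in range(rows)]
    for name, mask in images.items():
        level = TARGETS[name]
        for i in range(rows):
            for j in range(cols):
                if mask[i][j]:
                    out[i][j] = level
    return out

masks = {
    "background": [[1, 0], [0, 0]],
    "low":        [[0, 1], [0, 0]],
    "medium":     [[0, 0], [1, 0]],
    "high":       [[0, 0], [0, 1]],
}
normalized = assemble(masks)
```

Because every input image is remapped to the same target levels, two slides with different staining intensity end up on a common scale.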
  • SCALE NORMALIZATION FRAMEWORK: Process 500 in FIG. 5 begins with step 510, where the system (e.g., the tissue analysis facility 104 in FIG. 1) may identify nuclei across multiple cells within an image of a section of biological tissue (e.g., an input image such as the input image 106 and/or a normalized image generated from the input image such as normalized image 110 in FIG. 1).
  • the system may identify the nuclei in a variety of ways. In some examples, the system may identify the nuclei from one or more stain channels identified from an input image and/or one or more stain-normalized images (e.g., a stain-high image) generated for the one or more stain channels (e.g., as described in connection with step 320 of FIG. 3).
  • the system may identify the nuclei from a stain channel that indicates a stain that stains for nuclei (e.g., a hematoxylin stain, a fluorescent stain that binds to DNA, a Feulgen stain, a methyl green stain, a Giemsa stain, etc.) and/or from one or more stain-normalized images generated for the stain channel that indicates the stain that stains for nuclei (e.g., a hematoxylin-high image, a Giemsa-high image, etc.).
  • the system may identify the nuclei (e.g., from the one or more stain-normalized images) using mathematical morphology.
  • the system may (1) identify (e.g., using mathematical morphology) discrete structures within the one or more stain-normalized images, (2) determine a distribution of relative sizes for the discrete structures, (3) determine a peak distribution within the distribution of relative sizes, and (4) determine that the structures within the peak distribution are nuclei. This process for identifying nuclei may be effective because nuclei may be of a relatively uniform size.
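The four-step procedure above can be sketched as a connected-components pass followed by a size-distribution peak. The ±50% tolerance around the modal component size is an illustrative assumption:

```python
from collections import Counter, deque

# Sketch: identifying candidate nuclei in a binarized nucleus-stain image
# by (1) finding discrete connected structures, (2) building a
# distribution of their sizes, (3) locating the peak of that
# distribution, and (4) keeping structures near the peak, since nuclei
# tend to be of relatively uniform size.

def connected_components(mask):
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    components = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                components.append(pixels)
    return components

def likely_nuclei(mask):
    comps = connected_components(mask)
    peak_size, _ = Counter(len(c) for c in comps).most_common(1)[0]
    return [c for c in comps if 0.5 * peak_size <= len(c) <= 1.5 * peak_size]

mask = [
    [1, 1, 0, 0, 1, 1],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0, 0],
]
nuclei = likely_nuclei(mask)   # the lone pixel is rejected as debris
```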
  • the system may set a length scale based on a size of one or more of the nuclei.
  • the system may determine the size of the nuclei in a variety of ways. In examples in which a level of magnification applied to the biological tissue is known, the size of the nuclei may be measured using area and/or direct measurements. In some examples (e.g., in examples in which a level of magnification is not known), the system may determine the size of the nuclei using an intensity map (e.g., representing the distribution of pixel intensity values across an image associated with the nuclei such as the one or more stain-normalized images discussed in connection with step 510).
  • intensity map may refer to any type or form of image that represents the distribution of intensity values across an image. The intensity map may visualize the distribution of intensity values in a variety of ways.
  • the intensity map may visualize varying magnitudes of intensity values with varying shades (e.g., of color or grayscale) and/or with varying elevations.
  • the intensity map may represent a heat map in which the magnitudes of the intensity values are visualized with different shades of color and/or a geographic hill in which the magnitudes of the intensity values are visualized with different elevations.
  • an intensity map may simply refer to a set (a distribution) of intensity values corresponding to an image.
  • the intensity values represented in the intensity map may correspond to measures of intensity for a variety of characteristics determined for a pixel.
  • the intensity values may represent a distance from a designated (e.g., nearest) boundary (e.g., where the highest intensity values correspond to pixels that are the farthest from a boundary and the lowest intensity values correspond to pixels that are the closest to the boundary).
  • the intensity map may be generated for the one or more stain-normalized images by applying a distance transform to the one or more stain-normalized images.
  • distance transform may refer to an operation that assigns each pixel value within an image (e.g., a binary stain-normalized image) the distance to the nearest pixel with a different value (e.g., the distance to the nearest background pixel), resulting in a grayscale image in which each pixel’s intensity value corresponds to its distance from the pixel’s nearest boundary (e.g., edge).
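A minimal sketch of the distance transform just described, using SciPy's Euclidean distance transform (the specific library call is an assumption; the disclosure does not prescribe an implementation):

```python
import numpy as np
from scipy import ndimage

# Distance transform of a small binary image: each foreground pixel
# is assigned its Euclidean distance to the nearest background pixel,
# yielding a grayscale map of distance-to-boundary.
mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True                      # a 5x5 foreground block
dist = ndimage.distance_transform_edt(mask)
print(dist[3, 3])                          # center pixel: 3 px from the edge
```

Pixels on the foreground's rim receive a value of 1, and the value grows toward the interior, which is exactly the "highest values farthest from a boundary" behavior described above.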
  • the system identifies pixels of relatively high Hematoxylin concentration from a raw input (e.g., RGB) image.
  • FIG. 31 depicts an exemplary raw input image of a tissue section stained with Hematoxylin
  • FIG. 32 depicts an exemplary H-high image generated for a Hematoxylin stain channel identified for the raw input image.
  • the system reduces noise of the image (e.g., to fill in small gaps or holes in the image of the tissue).
  • the system may reduce the noise of an image using any type of operation.
  • the system reduces noise by performing a dilation operation on the H-high image followed by an erosion operation using a cross-shaped kernel.
  • After reducing noise in the H-high image, the system performs a distance transform (e.g., calculating the distance of each pixel within the H-high image to a nearest edge) to generate a distance map (e.g., depicted in FIG. 34).
  • the distance map may represent a local skeleton of the H-high image (e.g., corresponding to the pixels with the maximum distance values generated by the distance transform).
  • the distance values of the distance map may be plotted as a histogram (e.g., as depicted in FIG.35). Then, the most common distance (e.g., peak or maximum) identified in the histogram may be determined to be the average width of nuclei.
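The pipeline in the preceding bullets — noise cleanup by a box-kernel dilation followed by a cross-kernel erosion, a distance transform, and the most common distance along the local ridgeline — can be sketched as below. The kernel shapes follow the text; the function name and the use of regional maxima as a stand-in for the skeleton are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def modal_half_width(binary: np.ndarray) -> float:
    """Estimate the typical object half-width without demarcating
    individual objects, as described for connected nuclei."""
    # noise cleanup: [3x3] box dilation followed by [3x3] cross erosion
    cross = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool)
    cleaned = ndimage.binary_erosion(
        ndimage.binary_dilation(binary, structure=np.ones((3, 3), bool)),
        structure=cross)
    # distance of every foreground pixel to its nearest edge
    dist = ndimage.distance_transform_edt(cleaned)
    # regional maxima approximate the skeleton's local ridgeline
    ridge = (dist == ndimage.maximum_filter(dist, size=3)) & cleaned
    values = dist[ridge]
    if values.size == 0:
        raise ValueError("no foreground ridgeline found")
    # histogram of ridgeline distances; the peak is the typical half-width
    counts = np.bincount(np.round(values).astype(int))
    return float(np.argmax(counts))
```

Because only distances to the nearest edge are histogrammed, a chain of touching nuclei still contributes its per-nucleus half-width, which is why the text notes this works even when nuclei are connected.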
  • the system may determine the size of nuclei without completely demarcating individual nuclei. This advantage may be especially useful in images that capture a series of connected nuclei. By measuring the minimum distance to an edge (e.g., boundary), the calculation may yield an average size for a series of nuclei, even if the nuclei are connected.
[0087] After determining the size of one or more of the nuclei captured by the input image, the system may set the length scale in a variety of ways. In some examples, the system may set the length scale based on an average size of the nuclei.
  • the system may set the length scale based on a mean size of multiple (e.g., all) of the identified nuclei, a median size for the identified nuclei, and/or a mode size of the identified nuclei.
  • the average size may be determined using distance values of a distance map (e.g., plotted as a histogram), as described in the specific example corresponding to FIGs. 31–35.
  • the length scale may be implemented by functionalizing mathematical morphology, image analysis, and/or image processing parameters related to pixel length and pixel count to perform morphological operations and image analysis and/or processing operations that are invariant across a range of input image scales, resolutions, magnifications, and/or other parameters.
  • the system may scale, for a morphological structure (other than a nucleus) that is represented in the image, a parameter related to length using the length scale set at step 512.
  • the image of the biological tissue section may represent an image of a section of intestinal tissue
  • the morphological structure may represent a villus-crypt pair
  • the parameter related to length may be a villus-height to crypt-depth ratio for the villus-crypt pair.
  • the stain channel may indicate a stain that stains for nuclei.
  • the stain channel may indicate any type or form of stain that stains for nuclei (e.g., a hematoxylin stain, a fluorescent stain that binds to DNA, a Feulgen stain, a methyl green stain, a Giemsa stain, etc.).
  • the stain channel may represent a set of multiple stain channels (e.g., first stain channel 400 and second stain channel 402 in FIG.4).
  • the system may generate the stain channel using one or more of the stain normalization features described as part of the stain normalization framework described in connection with FIGS.3 and 4. As described in connection with FIGS.3 and 4, such features may include (1) identifying multiple stain channels for an input image (e.g., first stain channel 400 and second stain channel 402) and (2) generating a set of images for each of the stain channels (e.g., images 412–426 in FIG.4).
  • Each image for a stain channel may include a different segment of the stain channel’s range of pixel intensity values.
  • a set of images generated for a stain channel may include a stain-high image, a stain-medium image, a stain-low image, and a stain-background image.
  • the system may determine a lower bound from the stain channel (i.e., the stain channel indicating the stain that stains for nuclei).
  • the lower bound may represent pixel intensity values from a stain-high image generated for the stain channel (e.g., image 412 generated for first stain channel 400 as described in connection with FIG. 4).
  • the lower bound may represent all pixel intensity values from the stain-high image generated for the stain channel.
  • the system may also determine an upper bound. In these examples, the system may determine the upper bound from multiple stain channels identified at step 610 (e.g., each of the stain channels). In one such example, the upper bound may represent pixel intensity values from the images generated for the multiple stain channels. In one specific example, the upper bound may represent all of the pixel intensity values captured by all of the images generated for each of the stain channels, except for any stain-background images generated for the stain channels.
[0093] At step 630, the system may create a segmentation map based on the lower bound.
  • the system may identify a morphological structure and/or feature of the biological tissue section from the segmentation map.
  • the system may create the segmentation map from the lower bound in a variety of ways. In some examples, the system may create the segmentation map from a corrected lower bound. In these examples, the system may (1) apply a correction term to the lower bound to determine a corrected lower bound and (2) create the segmentation map from the corrected lower bound. In one example, this correction term may be based on the upper bound discussed at step 620 and further described in connection with the section of this application labeled “Upper bound tissue boundary estimate.” The system may create the segmentation map in a variety of ways, as will be discussed in greater detail in connection with FIG.
  • identifying the morphological structure may include determining whether the morphological structure satisfies a quality metric (e.g., determining that the morphological structure is well-oriented).
  • the stain channels may include a hematoxylin stain channel (e.g., first stain channel 400) and an eosin stain channel (e.g., second stain channel 402 in FIG. 4).
  • the system may identify the stain channels in a variety of ways (e.g., as explained in connection with step 310 of FIG. 3 and with FIG. 4).
  • the system may normalize, based on the stain channels, each pixel intensity value for the input image to a normalized value to yield one or more stain-normalized images of the intestinal tissue section.
  • the system may normalize the pixel intensity values in a variety of ways (e.g., as explained in connection with steps 320 and 330 of FIG. 3 and with FIG. 4). In addition, or as an alternative, to normalizing the pixel intensity values, the system may normalize the scale of the input image (e.g., using one or more of the features or processes described in connection with FIG. 5) to yield a scale-normalized image of the intestinal tissue section.
  • the system may identify one or more villus-crypt pairs of the intestinal tissue section from the one or more stain-normalized (and/or scale-normalized) images of the biological tissue section (e.g., normalized image 110 in FIG. 1).
  • the system may identify the villus-crypt pairs by identifying a central line, corresponding to a section of the intestinal tissue, from which both a first line, corresponding to a villus, and a second line, corresponding to a crypt, are projected in different (e.g., opposite) directions.
  • the central line may correspond to an intensity center (e.g., with one or more of the intensity center features described above in connection with FIG. 7). In one embodiment, this intensity center may correspond to a center of intensity for a lower bound and/or a corrected lower bound generated from one or more of the one or more stain-normalized images.
  • ViCE codifies the human interpretation of images of stained biological tissue (e.g., hematoxylin (H) and eosin (E) images).
  • I_S is a transformation of I_RGB that results in an image with intensity values corresponding to inferred stain S concentration values
  • ( ) represents the observer’s interpretation of the relations of inferred stain concentrations with a length scale, l_relative.
  • Staining Invariance: Physical tissue properties determine stain affinity (A_S). Staining protocol parameters like dye concentration and dye set-time directly influence stain concentration (I_S). In one embodiment, assuming independence between stain affinity and an effective staining protocol parameter, these dynamics may be modeled as follows: where
  • a proportionality between stain affinity and concentration follows: [0108] In one embodiment, it is assumed that such a proportionality can be adequately described by a positive linear translation ( ⁇ ) on and a translation ( ⁇ ) of stain affinity: [0109]
  • the disclosed framework sets the number of threshold classes c to be four. In examples in which two dyes are used for the sample, this produces eight binary images that capture relative stain concentration information resilient against staining protocol discrepancies between lab technicians and laboratories and form the basis of all segmentation maps. These images are labeled as the following: {H-high, H-mid, H-low, H-bkgd}, {E-high, E-mid, E-low, E-bkgd}. Objects formed from the collection of adjacent, similarly mapped pixels can then be grouped and described by their H-class and E-class and morphological features.
  • ViCE may preferentially utilize functions agnostic to translation and scale transformations.
  • a tetraclass threshold on I_S via a Kittler-Illingworth minimum error thresholding method (e.g., Otsu’s method)
  • the sets of binary images {S-high, S-mid, S-low, S-bkgd} and {A_S-high, A_S-mid, A_S-low, A_S-bkgd} are equivalent and contain staining information that is independent of staining protocol.
  • ViCE estimates the diameter of the average nucleus within each image to define a length scale, i.e., [l].
  • the instant application identifies nuclei as a basis for defining length scale due to their relative invariance in size.
  • every operation within ViCE can be inherently scaled with image resolution, magnification, and compression. Most notably, the size of every structuring element within every MM method can be a function of the estimated average nucleus diameter.
  • for example, length-related parameters such as pixel width(crypt) may be expressed as multiples of pixel width(nucleus) (e.g., with pixel width(nucleus) = 3 px)
  • ViCE generates segmentation maps, extracts features, and evaluates tissue quality from staining protocol independent images of relative stain concentration levels with parameterized operations including variables that individually scale by the size of each image’s nuclei.
  • Given a set of HE images with ranges of varied and unknown magnifications, resolutions, imaging devices and methods, and tissue staining protocols, ViCE can automatically deploy quantitative histology methods via scale- and staining-protocol-agnostic methods.
  • Mathematical Morphology allows one to process and analyze digital images based on the size and shape of geometrical structures. Morphological operations, used in morphological filters, feature extractions, and smoothing techniques, use structuring elements of a priori determined size, shape, and offset. These operations tend to simplify image data while preserving their essential shape characteristics.
  • ViCE primarily uses disk-shaped structuring elements whose radii (e.g., every radius) directly scale with the radius of the average nucleus found in each image. This allows for direct physical interpretations of all morphological techniques, especially since many morphological methods derive from the erosion and dilation of an object by a structuring element. Having clear digital-to-physical analogs can be a powerful tool for exploratory analysis.
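A sketch of a disk-shaped structuring element whose radius scales with the per-image nucleus radius, as described above. The function names and the multiplier k are illustrative assumptions, not values from the disclosure.

```python
import numpy as np
from scipy import ndimage

def disk(radius: int) -> np.ndarray:
    """Disk-shaped structuring element with the given pixel radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def open_scaled(binary: np.ndarray, nucleus_radius_px: float,
                k: float = 1.5) -> np.ndarray:
    """Morphological opening whose element scales with the per-image
    nucleus radius, so the operation behaves consistently across
    resolutions and magnifications (k is an illustrative multiplier)."""
    se = disk(max(1, round(k * nucleus_radius_px)))
    return ndimage.binary_opening(binary, structure=se)
```

Because the structuring element is expressed in nucleus radii rather than raw pixels, the same call removes sub-nuclear debris at any image resolution, which is the physical interpretability the passage emphasizes.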
  • ViCE Improved Color Deconvolution Method
  • Color deconvolution “de-stains” RGB image channels by producing an image whose channels correspond to the specific stains, with pixel values proportional to stain concentration at that particular pixel location, instead of red, green, and blue channels and their respective intensities (Ruifrok et al. 2001; Landini et al. 2020).
  • ViCE introduces a beneficial color deconvolution method that produces results at least two orders of magnitude faster than the currently available MATLAB and ImageJ methods. Through matrix manipulations alone, the disclosed framework can reduce the number of function callbacks by a number equal to the number of image pixels, thereby greatly reducing execution time without precision loss relative to the original method.
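The vectorization idea described above — replacing a per-pixel loop with matrix operations on the flattened optical-density image — can be sketched in NumPy as follows. This is not the disclosed MATLAB implementation; the function name is an assumption and the H&E stain vectors shown are approximate published values used only for illustration.

```python
import numpy as np

def color_deconvolve(rgb: np.ndarray, stain_matrix: np.ndarray) -> np.ndarray:
    """Vectorized color deconvolution (Ruifrok & Johnston style).

    Converts RGB to optical density and recovers per-pixel stain
    concentrations C from OD = C @ M with a single matrix product
    over all pixels at once, instead of one call per pixel.
    `stain_matrix` rows are unit-length stain OD vectors.
    """
    od = -np.log((rgb.astype(float) + 1.0) / 256.0)   # optical density
    flat = od.reshape(-1, 3)
    conc = flat @ np.linalg.pinv(stain_matrix)        # all pixels at once
    return conc.reshape(rgb.shape[:2] + (stain_matrix.shape[0],))

# illustrative (approximate) H, E, residual OD vectors, normalized to unit length
HE = np.array([[0.65, 0.70, 0.29],
               [0.07, 0.99, 0.11],
               [0.27, 0.57, 0.78]])
HE /= np.linalg.norm(HE, axis=1, keepdims=True)
```

The single `flat @ pinv(M)` product is where the speedup comes from: the per-pixel work collapses into one BLAS call, consistent with the passage's point about eliminating one function callback per image pixel.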
  • the initial segmentation map is that of the tissue from the background. Direct segmentation often relies on the nontrivial partitioning of non-viable tissues of varying size, shape, and stain concentration from tissue. In some examples, ViCE instead approaches tissue segmentation indirectly by using upper bound (bwUB) and lower bound (bwLB) estimates of the true tissue boundary throughout its methods.
  • While the perimeter of bwUB significantly overlaps with the true boundary, bwUB often fuses adjacent villi. While the villus length measured through bwLB will always be lower than the true value, bwLB does not fuse adjacent villi brushed against one another. Fusing within bwLB requires overlapping enterocyte nuclei, which are located near the basal membrane, formed during slide preparation. Although bwLB systematically underestimates villus heights, the objects commonly fused in bwUB do not pass quality control procedures, thereby removing all adjacent villi within contact of one another.
Lollipop Stems, Heads, and Bases
[0128] In some examples, ViCE depicts all VC candidates as lollipops.
  • the filters include successive 2nd-degree polynomial fits along a frame length of forty nuclei, or roughly four crypt widths. Fitting with such low-order polynomials across forty-nuclei-long frames effectively smooths jagged segments that are less than a dozen nuclei wide into continuous regions. This smooths out divots and cracks formed from goblet cells, poorly oriented tissue, and damaged tissue along bwLB’s perimeter. Additionally, this method does not significantly impact villi, as the villus tip, the region where the villus base merges into the tissue base, and the space between adjacent villi all include parabolic curves, which minimizes the residual from the filter’s 2nd-order polynomial fittings.
  • ViCE finds the villus height correction terms by mapping the pixel addresses of villus tips onto the distance transform of bwUB. This provides the pixel distance of the villus tip location to the nearest edge within the upper bound estimate, bwUB. This distance is the villus height correction term and is depicted as the radius of the lollipop head.
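A minimal sketch of the correction-term lookup just described: map a villus-tip pixel address onto the distance transform of bwUB, and read off the tip's pixel distance to the nearest upper-bound edge (the "lollipop head" radius). The function name and SciPy call are assumptions.

```python
import numpy as np
from scipy import ndimage

def tip_correction(bwUB: np.ndarray, tip_rc: tuple) -> float:
    """Villus height correction term: distance of the villus-tip pixel
    (row, col) to the nearest edge of the upper-bound tissue mask."""
    dist = ndimage.distance_transform_edt(bwUB)
    return float(dist[tip_rc])
```

Adding this term to the bwLB-derived height compensates for the systematic underestimate of the lower-bound boundary noted earlier.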
  • Stand: Demarcation Parameters
[0134] After finding the lollipop stems and heads, each lollipop’s base is found through MM and trigonometry. This produces every lollipop’s demarcation width and demarcation angle, used in determining the width and angle of the crypt search windows used when finding the best crypt near every lollipop.
  • Generating crypt maps may involve distinguishing between identically stained nuclei within crypts, extracellular matrix, and enterocytes.
  • ViCE uses its 2D extension of the 1D MM filtering (MMF) method used by the original authors to improve lunar penetrating radar data (Zhang et al., 2019). Similar to bandpass filters, ViCE’s MMF extracts information specific to a range defined by the scale of two structuring elements of different size. This filter distinguishes between identically stained nuclei by disproportionately affecting nuclei surrounded by H-signal versus individual nuclei. This partitions crypt nuclei from extracellular and enterocyte nuclei.
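The bandpass-like behavior described above can be illustrated as the difference of two morphological openings at different scales: structures larger than the small element but smaller than the large one are kept. This is a sketch of the general idea, not the cited MMF method itself, and the radii are assumptions.

```python
import numpy as np
from scipy import ndimage

def _disk(r: int) -> np.ndarray:
    """Disk-shaped structuring element of pixel radius r."""
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return (x * x + y * y) <= r * r

def mm_bandpass(binary: np.ndarray, r_small: int, r_large: int) -> np.ndarray:
    """Keep structures between two scales: opening with the small disk
    removes sub-scale specks, opening with the large disk isolates the
    super-scale bulk, and their difference retains the in-between band."""
    keep = ndimage.binary_opening(binary, structure=_disk(r_small))
    drop = ndimage.binary_opening(binary, structure=_disk(r_large))
    return keep & ~drop
```

In the crypt-mapping context, a filter of this kind can respond differently to nuclei embedded in surrounding H-signal than to isolated nuclei, which is the partitioning behavior the passage attributes to the MMF.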
  • Crypt segmentation maps are then found with disk-shaped structuring element radii of [0, 1.5 ⁇ ⁇ - ] and additional MM smoothing and filtering methods, detailed in Supplementary Materials.
  • VC candidates are then cropped and rotated and individually processed. Forming VC pairs may involve the selection of the “best” nearby crypt. This is interpreted as the longest crypt whose center of mass is within the villus’ search window.
  • Quality Control Metrics are based on seven physical evaluation metrics. Each metric outputs one of the following: Good, Okay, and Poor. From these outputs, a final Quality result is determined, where Good Quality lollipops correspond to well-oriented VC pairs. The individual evaluation metrics are described below.
1. Tissue Damage Metric
  • Lollipop border and apical enterocyte overlap percentage.
    • The lollipop border is the region between the upper bound (bwUB) and lower bound (bwLB) estimates of the tissue boundary.
    • Apical enterocytes contain moderate to strong signals. The border of a true villus ought to contain enterocytes.
  • Villus tip quality
    • Measure of intactness of villus tip
2. Orientation Metric
  • Threshold of max crypt and search window overlap percent
    • Crypts should overlap vertically relative to their search window. Well-oriented crypts should have overlap percentages > 90%
  • Min crypt aspect ratio
    • Crypts should span the mucosal layer. Good crypts have aspect ratios > 3
  • Min crypt area
    • If the size of a VC pair’s best local crypt is unreasonably small, the VC pair is not well-oriented.
3.
  • FIG. 11 is an exemplary uniform background correction result.
  • FIG. 12 is an exemplary tissue quality evaluation summary.
  • FIG. 13 depicts an exemplary contrast enhancement through color model manipulation of color deconvolution results and
  • FIG. 14 depicts an exemplary crypt segmentation map.
  • FIG. 15 depicts exemplary color deconvolution results (left) combined hematoxylin (red) and eosin (green), and gray scale color deconvolution results of (middle) eosin and (right) hematoxylin.
  • FIG.16 depicts exemplary tissue boundary estimates: (left) upper bound, (middle) lower bound, and (right) smoothed lower bound estimate.
  • FIG. 17 depicts an exemplary stain concentration density map of (left) eosin and (right) hematoxylin.
  • FIG.18 depicts an example of an automatically generated close-up figure of a Good Quality lollipop, highlighting villus mask and “ideal” crypt choice. The figure title contains the lollipop index, quality score, villus height, crypt depth, and their ratio.
  • FIG. 19 provides examples of the automatically generated and saved intermediate results: (left) crypt segmentation map, (middle) crop and rotation result, and (right) lollipop mask representing the villus mask.
  • FIG. 20 provides an exemplary tissue quality evaluation summary.
  • FIG. 22 shows raw images (e.g., RGB input images).
  • FIG.23 shows a high relative hematoxylin signal image (corresponding to the raw images of FIG. 22).
  • FIG. 24 shows a moderate relative hematoxylin signal image (corresponding to the raw images of FIG. 22).
  • FIG. 25 shows a low relative hematoxylin signal image (corresponding to the raw images of FIG. 22).
  • FIG. 26 shows a background relative hematoxylin signal image (corresponding to the raw images of FIG.22).
  • FIG. 27 shows a high relative eosin signal image (corresponding to the raw images of FIG. 22).
  • FIG. 28 shows a moderate relative eosin signal image (corresponding to the raw images of FIG. 22).
  • FIG. 29 shows a low relative eosin signal image (corresponding to the raw images of FIG. 22).
  • FIG. 30 shows a background relative eosin signal image (corresponding to the raw images of FIG.22).
  • Section S4 Determining ViCE’s Length Scale [0144]
  • length related parameters may directly scale by the estimated average nucleus diameter. Nuclei are easily identifiable due to DNA’s strong affinity to hematoxylin (H), producing distinct purplish blue objects.
  • FIG. 31 shows a raw image (e.g., an input RGB image).
  • FIG. 32 shows an image where we have identified pixels of relatively high H concentration.
  • FIG. 33 shows an image where we observe that the centers of nuclei typically contain pixels from H-strong.
  • the nuclei diameter may be estimated by mapping the skeleton of the binarized image H-strong with its distance transform.
  • This method can be sensitive to salt and pepper noise. We can resolve this with dilation of a [3x3] box-shaped kernel followed by an erosion of a [3x3] cross-shaped kernel.
  • FIG. 34 shows an exemplary distance transform.
  • As shown in FIG. 35, by collecting the distance transform pixel values at each skeletonized pixel, we have effectively collected the pixel distance of local ridgelines to their nearest edge.
  • FIG. 35 depicts a chart with regional max values from distance transform. [0147] Since the objects within H-strong are expected to be objects composed of adjacent nuclei, it follows that the most common width should be the width of individual nuclei.
  • FIG.38 provides an exemplary depiction of a ViCE help screen, containing icon description.
  • FIG. 39 provides an exemplary depiction of a user interface of ViCE in use.
  • Operating ViCE can be as simple as entering an image’s location and pressing RUN. File locations can either be entered within the large text field, capable of copying and pasting text, or through a dialog box, generated for point and click file selection by pressing OPEN FILE. Toggle batch processing ON by pressing the FAST FORWARD icon.
  • FIG. 40 shows the Area Under the Curve measured using the ROC curve. The area under the curve was 0.84.
  • a “functional facility,” however instantiated, is a structural component of a computer system that, when integrated with and executed by one or more computers, causes the one or more computers to perform a specific operational role.
  • a functional facility may be a portion of or an entire software element.
  • a functional facility may be implemented as a function of a process, or as a discrete process, or as any other suitable unit of processing. If techniques described herein are implemented as multiple functional facilities, each functional facility may be implemented in its own way; all need not be implemented the same way.


Abstract

The present disclosure provides methods, systems, and computer-readable media for stain-normalization image processing of digitized biological tissue images. One method includes identifying a plurality of stain channels from an input image of a biological tissue section, generating a plurality of images for each stain channel based on relative stain concentration, and outputting stain-normalized images based on the generated images. A method of constructing scale-invariant images of biological tissue sections includes identifying nuclei in an image, setting a length scale based on the size of the nuclei, and scaling parameters for other morphological structures using the length scale. A method of identifying morphological structures includes generating a stain channel from an input image, determining a lower bound, creating a segmentation map, and identifying a morphological structure from the segmentation map. Systems and computer-readable media for implementing these methods are also provided.
PCT/US2025/012052 2024-01-19 2025-01-17 Systems and methods for generating normalized images of biological tissue sections Pending WO2025155834A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202463622925P 2024-01-19 2024-01-19
US202463622894P 2024-01-19 2024-01-19
US63/622,925 2024-01-19
US63/622,894 2024-01-19

Publications (1)

Publication Number Publication Date
WO2025155834A1 true WO2025155834A1 (fr) 2025-07-24

Family

ID=96472081

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2025/012052 Pending WO2025155834A1 (fr) 2024-01-19 2025-01-17 Systems and methods for generating normalized images of biological tissue sections

Country Status (1)

Country Link
WO (1) WO2025155834A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100136549A1 (en) * 2008-09-16 2010-06-03 Historx, Inc. Reproducible quantification of biomarker expression
US20160307305A1 (en) * 2013-10-23 2016-10-20 Rutgers, The State University Of New Jersey Color standardization for digitized histological images
US20180357765A1 (en) * 2015-09-23 2018-12-13 Koninklijke Philips N.V. Image processing method and apparatus for normalisation and artefact correction
US20210295994A1 (en) * 2020-03-18 2021-09-23 International Business Machines Corporation Pre-processing whole slide images in cognitive medical pipelines
US20230184658A1 (en) * 2017-12-06 2023-06-15 Ventana Medical Systems, Inc. Method of storing and retrieving digital pathology analysis results
US20230317253A1 (en) * 2017-08-04 2023-10-05 Ventana Medical Systems, Inc. Automatic assay assessment and normalization for image processing



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25742479

Country of ref document: EP

Kind code of ref document: A1