
WO2018220282A1 - Analysis of myocyte contraction characteristics


Info

Publication number
WO2018220282A1
Authority
WO
WIPO (PCT)
Legal status
Ceased
Application number
PCT/FI2018/050416
Other languages
French (fr)
Inventor
Kim LARSSON
Katriina AALTO-SETÄLÄ
Current Assignee
Tays Sydankeskus Oy
Tty-Saatio
Tampereen Yliopisto
Original Assignee
Tays Sydankeskus Oy
Tty-Saatio
Tampereen Yliopisto
Application filed by Tays Sydankeskus Oy, Tty-Saatio, Tampereen Yliopisto
Publication of WO2018220282A1

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Definitions

  • Figure 11 illustrates a single peak of a combined contraction signal according to an example;
  • Figure 12A illustrates a single peak of a first combined contraction signal and a temporally corresponding segment of a second combined contraction signal according to respective examples;
  • Figure 12B illustrates a single peak of a combined contraction signal derived as a sample-wise ratio of the first and second contraction signals of Figure 12A;
  • Figure 13 illustrates a block diagram of some components of an apparatus for implementing the analysis method according to an example.
  • HiPSC-derived human CMs described in the foregoing constitute an example of cultured or otherwise derived human cells.
  • HiPSCs can be obtained from any individual, including those carrying a certain genotype, by reprogramming already differentiated adult cells, such as skin fibroblasts, into a pluripotent state. Such hiPSCs can then be differentiated into the cell type of interest, and disease- and genotype-specific hiPSCs can be used to obtain differentiated cells, for example CMs, that carry the disease-causing phenotype and genotype.
  • it has been shown that the genotype and phenotype of the cells so derived are similar to those of the actual cells of the individual, e.g. the hiPSC-derived human CMs carry the same mutation as the CMs in the heart of the individual; see e.g. Lahti A. L., V. J. Kujala, H. Chapman, A. P. Koivisto, M. Pekkanen-Mattila, E. Kerkelä, J. Hyttinen, K. Kontula, H. Swan, B. R. Conklin, S. Yamanaka, O. Silvennoinen, and K. Aalto-Setälä, Model for long QT syndrome type 2 using human iPS cells demonstrates arrhythmogenic characteristics in cell culture, Dis. Model. Mech. 5:220-230, 2012.
  • differentiated cells can also be obtained by the so-called direct differentiation method. This method enables the induction of differentiated cells, e.g. human CMs, directly from another differentiated cell type, e.g. from fibroblasts, thereby bypassing the stem cell state applied in the hiPSC approach outlined in the foregoing.
  • the term 'derived human CM' is used to refer to a human CM derived from a human cell different from a CM, e.g. by using the hiPSC method, the direct differentiation method or another suitable method known in the art.
  • human CMs may also be derived from embryonic stem cells, while in this case their clinical characteristics may remain unknown.
  • the cell of other type used as basis for deriving the human CM may be referred to in the following as a source cell.
  • suitable types of source cells that are relatively straightforward to obtain include dermal fibroblasts or keratinocytes, blood cells such as leucocytes, mucosal cells and endothelial cells.
  • Derived human CMs provide interesting possibilities for non-invasive study of individual CMs to enable analysis of their contraction characteristics and, in particular, any deviations from contraction characteristics of a healthy human CM. Consequently, results of such analysis are potentially usable, for example, in detection of genetic disorders having a cardiac effect and/or in detection of cardiac side effects of a drug during development or testing of the drug.
  • references are predominantly made to visual analysis of a CM in singular, while the analysis techniques described herein readily generalize into visual analysis of a plurality of CMs provided e.g. as a cell aggregate, as a cell sheet or as a larger cellular structure such as a semitransparent heart.
  • the expression 'semitransparent' refers to an arrangement of CMs where their structure is detectable in pixel values as a function of contraction/relaxation.
  • the one or more CMs under study may comprise human CMs, e.g. primary human CMs or derived human CMs described in the foregoing.
  • while visual analysis of derived human CMs constitutes a framework of special interest for application of the analysis technique described herein, the visual analysis technique described in the present disclosure is equally applicable to CMs of other origin as well, such as animal CMs or cell lines.
  • while this disclosure predominantly makes use of a CM as an example, the disclosed technique is applicable to analysis of any myocytes or other cells having contraction characteristics of some kind.
  • the one or more CMs under analysis may comprise a single dissociated CM.
  • the one or more CMs under analysis may comprise a single CM that is part of a cluster of CMs.
  • the subject of analysis includes a plurality of (two or more) CMs that constitute a cluster or aggregate of CMs or a selected part of such cluster or aggregate.
  • the analysis of one or more CMs disclosed herein is based on a time series of digital images that depicts one or more CMs over a period of time.
  • the sequence of digital images may be obtained from a digital video camera or the sequence of digital images may comprise digital images converted from respective analogue images obtained from a storage medium such as an analog tape, a digital tape, CD, DVD, etc.
  • such a time series may be referred to as a digital video sequence, whereas an image of the video sequence may be referred to as a frame (or e.g. as a video frame or as an image frame).
  • the video sequence employs a suitable predefined frame rate, for example 50 frames per second (fps) or above.
  • the applied frame rate and period of time covered by the video sequence may be selected according to circumstances, especially in view of the desired reliability with respect to the final point resolution of the analysis. Increasing the frame rate and the duration of the video sequence typically improves reliability and/or accuracy of the analysis, while it also involves a higher number of frames to be analyzed and hence in many cases results in a higher computational load.
  • the time period covered by the video sequence, i.e. its duration, may likewise be selected according to circumstances.
  • the frames of the video sequence provide a fixed or substantially fixed field of view to the one or more CMs under study.
  • the one or more CMs under study are depicted in the same or substantially the same position of the image area throughout the frames of the video sequence.
  • the resolution of the image as a number of pixels may be selected e.g. such that the portion of the images depicting the one or more CMs under study, hence constituting a region of interest within the images, includes at least one pixel per plane (thus a theoretical minimum of four pixels), while in an exemplifying hardware setting 1600 pixels may be needed to provide a sufficient image resolution for enabling accurate enough analysis of contraction/relaxation characteristics of the one or more CMs under study.
  • the minimum number of pixels is dependent on several factors that may include one or more of the following: the applied hardware, the selected black/white distribution of pixels defined by the changes in their intensity due to CM deformation, subtle pixel-value variations across the plane in the ROI of the one or more CMs under study, and the pixel size (in other words, the area of an image covered by a single pixel).
  • Frames of the video sequence are provided as respective monochromatic digital images.
  • intensity information is provided while all pixels represent the same color.
  • a common type of a monochromatic image is a greyscale image where each pixel represents the intensity of white color in a range from total absence of light (black) to a maximum light intensity (white) with a predefined number of different shades of gray in between.
  • the pixel intensity may be expressed as a value within a suitable scale from a predefined minimum to a predefined maximum, e.g. from 0 to 1.
  • 8 or 16 bits are used to express the pixel intensity, thereby enabling a scale that represents 256 or 65536 different intensity levels, respectively.
  • value 0 indicates the minimum intensity (black in case of a grayscale image)
  • the maximum value enabled by the employed number of bits, e.g. 255 or 65535, indicates the maximum intensity (white in case of a grayscale image); these conventions are illustrated by the sketch below.
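As a concrete illustration of the intensity conventions above, the following sketch (Python with NumPy; the toy frame content is an assumption of this illustration, not data from the disclosure) maps a 16-bit grayscale frame onto the [0, 1] scale:

```python
import numpy as np

# A tiny hypothetical 16-bit grayscale frame: 0 is black, 65535 is white.
frame_u16 = np.array([[    0, 16384],
                      [49151, 65535]], dtype=np.uint16)

# Map onto the [0, 1] scale mentioned in the text.
frame_unit = frame_u16.astype(np.float64) / np.iinfo(np.uint16).max
print(frame_unit.round(2))   # [[0.   0.25] [0.75 1.  ]]
```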
  • Figure 2A schematically illustrates the content of a frame of the video sequence that depicts a cluster of CMs 102 against its background 104
  • Figure 2B shows an actual grayscale image that constitutes the frame of the video sequence from which the schematic illustration is drawn, thereby also depicting the cluster of CMs 102 and the background 104
  • Figure 2B further illustrates a first region of interest 106 and a second region of interest 108, referred to in the following as the ROI and the ROI2, respectively.
  • the ROI2 covers a sub-area of an image frame that depicts a desired portion of the background 104 throughout the frames of the video sequence.
  • the ROI covers a sub-area of an image frame that depicts at least part of the cluster of CMs 102 throughout the frames of the video sequence.
  • Figure 2C further depicts image content of the first region of interest 106, i.e. ROI, illustrated in Figure 2B.
  • while the ROI and the ROI2 are depicted in Figure 2B, respectively, as a rectangular region and as a circular region, this is a non-limiting example selected for visual clarity and simplicity of illustration, and regions having a shape different from a rectangle and/or a circle may be employed instead (e.g. an elliptical region, a hexagonal region, a region of an arbitrary shape, etc.).
  • Figure 3 depicts a flow chart that outlines a method 300 for analysis of contraction characteristics of one or more myocytes, e.g. the cluster of CMs 102, on basis of the video sequence described in the foregoing.
  • the analysis within the framework of the method 300 relies on the observation that the intensity of an image portion that depicts the one or more CMs under study varies within a contraction/relaxation cycle of the one or more CMs due to their movement and/or deformation resulting from the contraction-relaxation during the contraction/relaxation cycle.
  • the changes in intensity are typically subtle and therefore careful analysis of intensity changes in the image depicting the one or more CMs under study is needed to provide reliable and accurate characterization of the contraction/relaxation cycle.
  • the method 300 outlines an approach that improves the accuracy of capturing the changes in pixel intensity, thereby providing improved analysis of the contraction characteristics of the one or more CMs under study.
  • a frame of the video sequence may be referred to as an image frame or as a video frame.
  • the method 300 proceeds to obtaining, for each frame Fk, a respective reference plane level value bk that is descriptive of a reference intensity level in the respective image frame Fk, as indicated in block 304, and obtaining a reference image RF that is descriptive of said one or more myocytes in a relaxed state, as indicated in block 306.
  • the reference intensity level indicated by the reference plane level value bk may be derived, for example, on basis of intensity level in the respective frame Fk, e.g. on basis of a certain sub-area of the frame Fk. Examples in this regard are provided in the following.
  • the method 300 further comprises extracting, from each frame Fk, a respective upper plane frame Fk,u and/or a lower plane frame Fk,l in dependence of the reference plane level value bk, as indicated in block 308.
  • the method 300 further comprises extracting, from the reference image RF, a respective upper plane reference frame Rk,u and/or a lower plane reference frame Rk,l in dependence of the reference plane level value bk, as indicated in block 310.
  • operations pertaining to blocks 308 and 310 may include extracting a pair of the upper plane frame Fk,u and the upper plane reference frame Rk,u, a pair of the lower plane frame Fk,l and the lower plane reference frame Rk,l, or both of these pairs for the frames Fk of the sequence.
  • one or more subsets of pixel positions may be extracted from each frame Fk.
  • the upper plane and lower plane referred to in the foregoing serve as respective examples of subsets of pixel positions.
  • the method 300 proceeds to deriving, for each frame Fk, a respective upper plane difference frame Dk,u on basis of the upper plane frame Fk,u and the upper plane reference frame Rk,u, and a respective lower plane difference frame Dk,l on basis of the lower plane frame Fk,l and the lower plane reference frame Rk,l, as indicated in block 312.
  • the upper plane difference frame Dk,u and the lower plane difference frame Dk,l are descriptive of differences between the image frame Fk and the reference image RF in the respective one of the upper and lower planes. Examples of deriving the difference frames Dk,u and Dk,l are provided in the following.
  • the method 300 further proceeds to composing respective contraction/relaxation signals that are descriptive of contraction characteristics of said one or more CMs on basis of pixel values of the upper plane difference frames Dk,u and on basis of pixel values of the lower plane difference frames Dk,l derived for the frames of the video sequence, as indicated in block 314; an end-to-end sketch of blocks 304 to 314 is given below.
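The following compact sketch summarizes one possible reading of blocks 304 to 314 in Python with NumPy. The function name, the (K, H, W) array layout, the boolean-mask ROIs and the relaxed-frame indices are assumptions of this sketch, not part of the disclosed method:

```python
import numpy as np

def contraction_signals(frames, roi, roi2, relaxed_idx):
    """Sketch of blocks 304-314 of method 300.

    frames      : (K, H, W) float array holding the grayscale video sequence.
    roi, roi2   : (H, W) boolean masks for the CM area and the background area.
    relaxed_idx : indices of frames depicting the CMs in a relaxed state.
    """
    K = frames.shape[0]
    b = frames[:, roi2].mean(axis=1)               # block 304: b_k per frame
    rf = frames[relaxed_idx][:, roi].mean(axis=0)  # block 306: reference image RF
    s = np.zeros((4, K))
    for k in range(K):
        f = frames[k][roi] - b[k]                  # normalized frame F'_k
        r = rf - b[k]                              # normalized reference R'_k
        f_u = np.where(f >= 0, f, 0.0)             # block 308: upper plane F_k,u
        f_l = np.where(f < 0, -f, 0.0)             # block 308: lower plane F_k,l
        r_u = np.where(r >= 0, r, 0.0)             # block 310: R_k,u
        r_l = np.where(r < 0, -r, 0.0)             # block 310: R_k,l
        d_u, d_l = f_u - r_u, f_l - r_l            # block 312: D_k,u and D_k,l
        s[:, k] = (d_u[d_u > 0].sum(), -d_u[d_u < 0].sum(),
                   d_l[d_l > 0].sum(), -d_l[d_l < 0].sum())  # block 314
    return s / roi.sum()                           # per-ROI-pixel normalization
```

Rows s[0] to s[3] would then correspond to the first to fourth contraction signals discussed further below.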
  • obtaining the reference plane level bk for frames of the video sequence may comprise computing the reference plane levels bk on basis of pixel values within a sub-area of the image frame Fk that depicts the background 104.
  • the reference plane level bk is defined separately for each frame Fk in order to compensate for small tonic changes and/or minor fluctuations in ambient light etc. that may occur over time, thereby making the reference plane level bk a value that truly represents the intensity of the background 104 in the respective frame Fk.
  • operations of block 304 involve computing a respective reference plane level bk for each video frame Fk on basis of pixel values within the ROI2 (described in the foregoing), thereby making the reference plane level bk representative of the intensity of the background 104 in the frame Fk.
  • a suitable function of pixel values within the ROI2 in the frame Fk may be applied for computation of the respective reference plane level bk.
  • Non-limiting examples of such a suitable function include an average or a median of the pixel values within the ROI2, as illustrated by the sketch below.
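As a minimal illustration of the two functions just mentioned, the sketch below computes an average-based and a median-based reference plane level bk per frame; the synthetic video and the corner placement of the ROI2 are assumptions of this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(0.5, 0.02, size=(100, 64, 64))  # synthetic unit-scale video

# Hypothetical background region ROI2: a block in one image corner.
roi2 = np.zeros((64, 64), dtype=bool)
roi2[:10, :10] = True

b_mean = frames[:, roi2].mean(axis=1)          # average-based b_k, one per frame
b_median = np.median(frames[:, roi2], axis=1)  # median-based b_k, one per frame
print(b_mean.shape, b_median.shape)            # (100,) (100,)
```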
  • operations of block 304 may further involve receiving a selection of the ROI2 and/or a function to be applied for computation of the reference plane level bk on basis of pixel values within the ROI2.
  • the information that defines the ROI2 in the image frames Fk may be received, for example, as an indication of a user-selected image plane area that serves as the ROI2.
  • the user selection may be carried out by displaying a frame of the video sequence on an electronic display to the user, prompting the user to employ a suitable user-interface mechanism known in the art to select a sub-area of the image area that is to serve as the ROI2, and converting the user-selected sub-area of the image area into information that defines the ROI2.
  • in another example, operations of block 304 involve computing a respective reference plane level bk for each video frame Fk on basis of pixel values within the ROI (described in the foregoing), thereby using the image sub-area that represents the one or more CMs under analysis as basis for defining the respective reference plane level bk for the frame Fk.
  • also in this case, a suitable function of pixel values within the ROI in the frame Fk may be applied for computation of the respective reference plane level bk.
  • suitable functions include an average or a median of the pixel values within the ROI, whereas a further example involves defining the reference plane level bk in another suitable manner on basis of the pixel values within the ROI.
  • the approach that involves computing the reference plane level bk on basis of pixel values within the ROI may be suitable for example in a scenario where the frames of the video sequence do not clearly illustrate an area that serves as the background 104.
  • in a further example, operations of block 304 involve using a predefined value for the reference plane level bk across all video frames Fk of the video sequence.
  • Non-limiting examples in this regard include employing a mid-point between the respective minimum and maximum pixel values enabled by the number of bits employed to express the pixel values (e.g. between 0 and 65535 in case of 16-bit pixel values) or a mid-point between respective minimum and maximum observed pixel values across the frames Fk of the video sequence, e.g. an average of the respective minimum and maximum pixel values.
  • in yet another example, operations of block 304 involve applying simple linear regression to the reference plane levels bk in order to account for a slope (a trend) across the video sequence of frames Fk.
  • Figure 4A provides an illustration of pixel values within the ROI of a single frame Fk according to an example
  • Figure 4B illustrates the value of the reference plane level bk as a function of frame index k (and hence as a function of time) computed using two different example approaches: the lower curve indicates the average of the pixel values within the ROI2, while the upper curve indicates the average of pixel values within the ROI.
  • obtaining the reference image RF may comprise computing the reference image RF on basis of pixel values in selected one or more frames of the video sequence.
  • the reference image RF computation may rely on pixel values within the ROI (described in the foregoing) in the selected one or more frames of the video sequence.
  • the information that defines the ROI in the image plane of the video frames Fk may be received, for example, as an indication of a respective user-selected image plane area that serves as the ROI, along the lines described in the foregoing for obtaining the information that defines the ROI2, mutatis mutandis.
  • the following description assumes computation of the reference image RF based on pixel values within the ROI as well as carrying out the operations of the blocks 308 to 312 for pixels within the ROI only. However, the description generalizes into carrying out the corresponding operations e.g. for the full image area or for another sub-area of the image frames.
  • the reference image RF may be defined as a sub-area defined by the ROI in a single selected frame or the reference image RF may be computed as an average image of the sub-area defined by the ROI in two or more selected frames.
  • the pixel value of the reference image RF may be computed as an average of pixel values in spatially corresponding pixel positions in the selected frames.
  • the selected one or more frames each represent the cluster of CMs 102 depicted by the video sequence in a relaxed state (i.e. in a non-contracted state).
  • while a single frame of the video sequence may be used to define the reference image RF, such that the pixel values within the ROI of the single frame as such constitute the reference image RF without additional computation (of an average), using a higher number of frames, e.g. two, three or more, for computation of the reference image RF typically improves the accuracy and reliability of the analysis.
  • Selection of the one or more frames for computation of the reference image RF may be based, for example, on user selection: in this regard, the user selection may be carried out by computing an average of the pixel values within the ROI for at least a sub-series of the frames Fk of the video sequence, displaying a curve that depicts the computed average as a function of frame number k (and hence as a function of time) on an electronic display to the user, prompting the user to employ a suitable user-interface mechanism known in the art to select the one or more frames that represent the cluster of CMs 102 in a relaxed state, and receiving the user selection of the one or more frames for computation of the reference image RF.
  • the frames that represent the cluster of CMs 102 in a relaxed state are those that represent local minima of the displayed representative curve.
  • an automated mechanism may be employed instead of user-selection: in this regard, instead of displaying the curve that depicts the computed average as a function of frame number k, a processing rule may be employed to identify local minima of the average and to (e.g.) randomly select a desired number of the local minima to serve as the one or more frames that are used for definition / computation of the reference image RF; a sketch of such an automated selection is given below.
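The automated selection just described could be sketched as follows; the use of scipy.signal.find_peaks and the default of three selected frames are assumptions of this sketch:

```python
import numpy as np
from scipy.signal import find_peaks

def reference_image(frames, roi, n_select=3, seed=0):
    """Average the ROI content of randomly chosen local minima of the mean
    ROI-intensity curve (frames likely in a relaxed state) into RF."""
    mean_curve = frames[:, roi].mean(axis=1)    # average ROI intensity per frame
    minima, _ = find_peaks(-mean_curve)         # local minima of the curve
    rng = np.random.default_rng(seed)
    chosen = rng.choice(minima, size=min(n_select, minima.size), replace=False)
    return frames[chosen][:, roi].mean(axis=0)  # reference image RF over the ROI
```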
  • operations to carry out the extraction of the upper plane frame Fk,u and the lower plane frame Fk,l from the frame Fk, and extraction of the upper plane reference frame Rk,u and the lower plane reference frame Rk,l from the reference image RF, in dependence of the reference plane level bk may involve converting each frame Fk and the reference image RF into a respective normalized frame by using the reference plane level bk, e.g. as F'k = Fk - bk and R'k = RF - bk.
  • the normalized frame F'k and the normalized reference image R'k represent the sub-area of the image area defined by the ROI scaled in such a way that the pixel values of the frame Fk and the reference image RF that are higher than the reference plane level bk are provided as positive values, whereas the pixel values of the frame Fk and the reference image RF that are lower than the reference plane level bk are provided as negative values.
  • Figure 5A provides an illustration of pixel values of the normalized frame F'k according to an example;
  • Figure 5B provides an illustration of pixel values of the normalized reference image R'k according to an example.
  • pixels of an image frame are typically represented by unsigned values (e.g. at 8 or 16 bits) and hence in such a framework negative pixel values are not possible. Therefore, it may be necessary to convert pixel values of the image frames Fk and pixel values of the reference image RF into corresponding matrices of signed pixel values before proceeding to compute the respective normalized frames F'k and normalized reference images R'k and to carry out any subsequent operations as matrix operations, in order to enable application of negative pixel values where necessary; see the sketch below.
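A minimal illustration of the signed conversion, with hypothetical 16-bit pixel values and a hypothetical reference plane level:

```python
import numpy as np

frame_u16 = np.array([[40000, 30000],
                      [33000, 28000]], dtype=np.uint16)
b_k = 32768                                      # reference plane level b_k

# Subtracting in uint16 would wrap around at zero; convert to a signed
# type first so that negative pixel values can be represented.
normalized = frame_u16.astype(np.int64) - b_k    # normalized frame F'_k
print(normalized)    # [[ 7232 -2768] [  232 -4768]]
```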
  • the upper plane frame Fk,u may be extracted by selecting only those pixel positions of the normalized frame F'k that have a zero value or a positive value.
  • the lower plane frame Fk,l may be extracted by selecting only those pixel positions of the normalized frame F'k that have a negative value: for the upper plane frame Fk,u the pixel values for the selected pixel positions are the corresponding pixel values of the normalized frame F'k, whereas for the lower plane frame Fk,l the pixel values for the selected pixel positions are the complements (negations) of the corresponding pixel values of the normalized frame F'k.
  • the pixel values for the remaining (non-selected) pixel positions may be set to zero.
  • the operations of block 310 may involve extracting the upper plane reference frame Rk,u by selecting only those pixel positions of the normalized reference image R'k that have a zero value or a positive value and deriving the lower plane reference frame Rk,l by selecting only those pixel positions of the normalized reference image R'k that have a negative value, where setting the pixel values in selected and non-selected pixel positions of the upper and lower plane reference frames Rk,u and Rk,l may be carried out as described in the foregoing for the upper and lower plane frames Fk,u and Fk,l, respectively, mutatis mutandis; a sketch of the plane extraction is given below.
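The plane extraction described in the preceding items could be sketched as below; the pixel values continue the toy example above:

```python
import numpy as np

f_norm = np.array([[7232, -2768],
                   [ 232, -4768]])            # normalized frame F'_k

# Upper plane: keep zero/positive values; remaining positions are set to zero.
f_upper = np.where(f_norm >= 0, f_norm, 0)    # F_k,u
# Lower plane: keep the complements (negations) of the negative values.
f_lower = np.where(f_norm < 0, -f_norm, 0)    # F_k,l

# The same extraction applied to the normalized reference image R'_k
# yields R_k,u and R_k,l.
print(f_upper)    # [[7232    0] [ 232    0]]
print(f_lower)    # [[   0 2768] [   0 4768]]
```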
  • the upper plane frame Fk,u and the upper plane reference frame Rk,u obtained as a consequence of operations described in the foregoing for blocks 308 and 310 hence represent, respectively, those pixels within the ROI in the frame Fk and the reference image RF that have a value higher than the reference plane level bk, whereas the lower plane frame Fk,l and the lower plane reference frame Rk,l also obtained as a consequence of operations described in the foregoing for blocks 308 and 310 represent, respectively, those pixels of the ROI in the frame Fk and the reference image RF that have a value lower than the reference plane level bk.
  • Figures 6A, 6B, 6C and 6D provide, respectively, illustrations of pixel values of the upper plane frame Fk,u, the upper plane reference frame Rk,u, the lower plane frame Fk,l and the lower plane reference frame Rk,l, derived on basis of the normalized frame F'k and the normalized reference image R'k illustrated in the example of Figures 5A and 5B.
  • the upper plane difference frame Dk,u is descriptive of absolute change in pixel values between the frame Fk and the reference image RF in the upper plane, e.g. derived as the difference Dk,u = Fk,u - Rk,u, whereas
  • the lower plane difference frame Dk,l is descriptive of absolute change in pixel values between the frame Fk and the reference image RF in the lower plane, e.g. derived as the difference Dk,l = Fk,l - Rk,l.
  • Figures 7A and 7B provide, respectively, illustrations of pixel values of the upper plane difference frame Dk,u and the lower plane difference frame Dk,l, derived on basis of the upper plane frame Fk,u, the upper plane reference frame Rk,u, the lower plane frame Fk,l and the lower plane reference frame Rk,l illustrated in the example of Figures 6A to 6D.
  • the upper plane and the lower plane serve as non-limiting examples, and derivation of the difference frames Dk,u and Dk,l depends on the applied subsets; a sketch of the difference-frame derivation is given below.
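Under the reading sketched earlier, the difference frames could be formed as plain per-plane subtractions; the reference-plane values below are hypothetical:

```python
import numpy as np

# Plane frames of the current frame and of the reference image (toy values).
f_upper = np.array([[7232,    0], [232,    0]])   # F_k,u
r_upper = np.array([[6000,  100], [500,    0]])   # R_k,u
f_lower = np.array([[   0, 2768], [  0, 4768]])   # F_k,l
r_lower = np.array([[  50, 3000], [  0, 4000]])   # R_k,l

d_upper = f_upper - r_upper   # upper plane difference frame D_k,u
d_lower = f_lower - r_lower   # lower plane difference frame D_k,l
print(d_upper)   # [[1232 -100] [-268    0]]
print(d_lower)   # [[ -50 -232] [   0  768]]
```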
  • the composition of the contraction signals may comprise one or more of the operations described in the following, e.g. in dependence of availability of the upper plane difference frame Dk,u and/or the lower plane difference frame Dk,l.
  • operations of block 314 involve extracting, for each frame Fk, at least one movement frame on basis of the upper plane difference frame Dk,u and/or the lower plane difference frame Dk,l, which may serve as basis for deriving a respective one of the first to fourth contraction signals s1, s2, s3 and s4.
  • the first to fourth contraction signals s1, s2, s3 and s4 may be composed, e.g. by volume integration across the respective movement frames over the frames Fk of the video sequence, to the extent the respective ones of the first to fourth movement frames Mk,1, Mk,2, Mk,3 and Mk,4 are made available.
  • one or more of the following movement frames may be extracted: a first movement frame Mk,1 that comprises the positive-valued pixels of the upper plane difference frame Dk,u; a second movement frame Mk,2 that comprises the negative-valued pixels of the upper plane difference frame Dk,u; a third movement frame Mk,3 that comprises the positive-valued pixels of the lower plane difference frame Dk,l; and a fourth movement frame Mk,4 that comprises the negative-valued pixels of the lower plane difference frame Dk,l.
  • the first movement frame Mk,1 may also be referred to as the frame of true white pixels because they are positive-valued pixels that originate from the upper plane frame Fk,u.
  • the second movement frame Mk,2 may also be referred to as the frame of pseudo white pixels because they are negative-valued pixels that originate from the upper plane frame Fk,u.
  • the pixels of the first movement frame Mk,1 and the second movement frame Mk,2 may be combined into a first combined movement frame that represents pixels of the upper plane.
  • the third movement frame Mk,3 may also be referred to as the frame of true black pixels because they are positive-valued pixels that originate from the lower plane frame Fk,l.
  • the fourth movement frame Mk,4 may also be referred to as the frame of pseudo black pixels because they are negative-valued pixels that originate from the lower plane frame Fk,l.
  • the pixels of the third movement frame Mk,3 and the fourth movement frame Mk,4 may be combined into a second combined movement frame that represents pixels of the lower plane; a sketch of the movement-frame extraction is given below.
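A sketch of splitting the difference frames into the four movement frames; storing the negative-valued pixels as magnitudes is an assumption of this sketch:

```python
import numpy as np

d_upper = np.array([[1232, -100], [-268,   0]])   # D_k,u
d_lower = np.array([[ -50, -232], [   0, 768]])   # D_k,l

m1 = np.where(d_upper > 0,  d_upper, 0)   # M_k,1: "true white" pixels
m2 = np.where(d_upper < 0, -d_upper, 0)   # M_k,2: "pseudo white" pixels
m3 = np.where(d_lower > 0,  d_lower, 0)   # M_k,3: "true black" pixels
m4 = np.where(d_lower < 0, -d_lower, 0)   # M_k,4: "pseudo black" pixels

upper_combined = m1 + m2                  # first combined movement frame
lower_combined = m3 + m4                  # second combined movement frame
```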
  • Figures 8A, 8B, 8C and 8D provide, respectively, illustrations of the first movement frame Mk,1, the second movement frame Mk,2, the third movement frame Mk,3 and the fourth movement frame Mk,4, derived on basis of the upper plane difference frame Dk,u and the lower plane difference frame Dk,l depicted in Figures 7A and 7B, respectively.
  • the volume integration across a movement frame referred to above is provided as a sum of pixel values within the respective movement frame, e.g. within one of the first to fourth movement frames Mk,1, Mk,2, Mk,3 and Mk,4.
  • the contraction signals s1, s2, s3, s4 may be further normalized by dividing values of the time series that constitutes the respective contraction signal s1, s2, s3 and s4 by the number of pixels within the ROI, as in the sketch below.
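The volume integration and the ROI-size normalization could then look as follows; the three toy movement frames and the four-pixel ROI are assumptions of this illustration:

```python
import numpy as np

# One movement frame per video frame k (here: three tiny toy frames for
# one of M_k,1 ... M_k,4).
movement_frames = [np.array([[1232, 0], [ 0, 0]]),
                   np.array([[ 900, 0], [50, 0]]),
                   np.array([[  10, 0], [ 0, 0]])]

# Volume integration: sum of pixel values within each movement frame ...
s = np.array([m.sum() for m in movement_frames], dtype=float)
# ... optionally normalized by the number of pixels within the ROI.
s_normalized = s / 4                     # four pixels in this toy ROI
print(s, s_normalized)                   # [1232. 950. 10.] [308. 237.5 2.5]
```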
  • Figure 9A depicts an example of contraction signals s1, s2, s3 and s4 derived, respectively, on basis of the movement frames Mk,1, Mk,2, Mk,3 and Mk,4 depicted in Figures 8A to 8D, whereas Figure 9B depicts the respective normalized contraction signals.
  • operations of block 314 optionally comprise composing one or more of the first to fourth contraction signals s1, s2, s3, s4 on basis of only those pixel values of the respective one of the upper and lower plane difference frames Dk,u, Dk,l whose absolute value exceeds a predefined (non-zero) threshold value.
  • This exclusion of the pixel values that fail to exceed the predefined threshold value may be referred to as filtering.
  • the filtering operation may be applied to the movement frames Mk,1, Mk,2, Mk,3 and Mk,4 by applying the volume integration only to those pixel positions where the pixel value exceeds the predefined threshold.
  • the predefined threshold may be set, for example, in dependence of the background level bk, e.g. as a predefined percentage of an average of the background level bk over the frames of the video sequence. In this regard, the predefined percentage may be selected e.g. from the range 1% to 5%; for example, a predefined percentage of 3% may be used.
  • the filtering operation may serve to improve the analysis via reducing noise and disturbances that may have an effect on the outcome of the analysis, as illustrated by the sketch below.
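A sketch of the threshold filtering, reusing the toy difference frame from above and the 3% example percentage:

```python
import numpy as np

d_upper = np.array([[1232, -100], [-268, 0]])   # D_k,u (toy values)
b_avg = 32768.0                                 # average reference plane level
threshold = 0.03 * b_avg                        # e.g. 3% of the average b_k

# Keep only pixels whose absolute value exceeds the threshold; the volume
# integration then ignores sub-threshold (noise-like) pixels.
filtered = np.where(np.abs(d_upper) > threshold, d_upper, 0)
print(threshold)   # 983.04
print(filtered)    # [[1232    0] [   0    0]]
```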
  • Each of the contraction signals s1, s2, s3 and s4 (and/or respective normalized versions thereof) serves to capture a different aspect of contraction/relaxation characteristics of the cluster of CMs depicted in the video sequence under analysis, and hence they provide as such valuable information concerning contraction characteristics of the cluster of CMs under study, possibly to an extent that enables e.g. a medical practitioner to evaluate the need for further analysis or even draw a diagnosis on basis of the contraction characteristics.
  • Figure 10A depicts the differences between the contraction signals s1, s2, s3, s4 for a single contraction/relaxation cycle, whereas Figures 10B and 10C depict the differences in further detail for scaled versions of the contraction signals s1, s2, s3, s4:
  • Figure 10A depicts a single contraction/relaxation cycle of the contraction signals s1, s2, s3, s4 as derived from the upper and lower plane difference frames Dk,u and Dk,l.
  • Figure 10C depicts a peak part of the contraction/relaxation cycle depicted in Figure 10B.
  • with the contraction signals s1, s2, s3, s4 serving to capture different aspects of the contraction/relaxation characteristics, for the peak of the contraction signals s1, s2, s3, s4 illustrated in Figures 10A, 10B and 10C the first contraction signal s1 exhibits a longer baseline-to-peak period than the other contraction signals s2, s3, s4, whereas the third contraction signal s3 exhibits the maximal rise time among the contraction signals s1, s2, s3, s4.
  • one or more of the contraction signals s1, s2, s3 and s4 may be processed further in order to derive a further signal and/or other information that is descriptive of further aspects of the beating characteristics of the cluster of CMs under study.
  • two or more of the contraction signals s1, s2, s3, s4 may be combined into a combined contraction signal, e.g. as a linear combination of selected two or more contraction signals s1, s2, s3, s4 (e.g. by computing a respective element-wise linear combination of temporally corresponding elements of the selected two or more contraction signals s1, s2, s3, s4).
  • as further examples, a combined contraction signal may be derived as a sample-wise product of two or more of the contraction signals s1, s2, s3, s4 or as a sample-wise ratio of two of the contraction signals s1, s2, s3, s4.
  • Figure 12A depicts a single contraction/relaxation cycle of a first contraction signal sa and a second contraction signal sb, both scaled such that the resulting signal level is normalized (e.g. scaled by a suitable scaling factor) to values around unity.
  • moreover, two or more (different) linear combinations of two or more contraction signals s1, s2, s3, s4 may be combined into a combined contraction signal e.g. by a sample-wise multiplication or sample-wise division; see the sketch below.
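The combinations mentioned in the preceding items could be sketched as below; the equal weights and the guard epsilon for the ratio are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
s1 = rng.random(500) + 1.0              # toy contraction signal
s2 = rng.random(500) + 1.0              # another toy contraction signal

combined_linear = 0.5 * s1 + 0.5 * s2   # element-wise linear combination
combined_product = s1 * s2              # sample-wise product
combined_ratio = s1 / (s2 + 1e-9)       # sample-wise ratio (division guarded)
```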
  • the method 300 described in the foregoing via a number of examples may be implemented by an apparatus that comprises respective processing means for implementing the steps of the method 300, e.g. those described through blocks 302 to 314 or a limited subset thereof.
  • the processing means may be provided by hardware means, by software means, or by a combination of hardware means and software means.
  • Figure 13 illustrates a block diagram of some components of an exemplifying apparatus 400.
  • the apparatus 400 may comprise further components, elements or por- tions that are not depicted in Figure 13.
  • the apparatus 400 may be employed to implement the method 300.
  • the apparatus 400 comprises a processor 416 and a memory 415 for storing data and computer program code 417.
  • the memory 415 and a portion of the computer program code 417 stored therein may be further arranged to, with the processor 416, implement the function(s) described in the foregoing in context of the method 300.
  • the apparatus 400 may comprise a communication portion 412 for communication with other devices.
  • the communication portion 412, if present, comprises at least one communication apparatus that enables wired or wireless communication with other apparatuses.
  • a communication apparatus of the communication portion 412 may also be referred to as a respective communication means.
  • the apparatus 400 may further comprise user I/O (input/output) components 418 that may be arranged, possibly together with the processor 416 and a portion of the computer program code 417, to provide a user interface for receiving input from a user of the apparatus 400 and/or providing output to the user of the apparatus 400 to control at least some aspects of operation of the method 300 implemented by the apparatus 400.
  • the user I/O components 418 may comprise hardware components such as a display, a touchscreen, a touchpad, a mouse, a keyboard, and/or an arrangement of one or more keys or buttons, etc.
  • the user I/O components 418 may be also referred to as peripherals.
  • the processor 416 may be arranged to control operation of the apparatus 400 e.g. in accordance with a portion of the computer program code 417 and possibly further in accordance with the user input received via the user I/O components 418 and/or in accordance with information received via the communication portion 412. Although the processor 416 is depicted as a single component, it may be implemented as one or more separate processing components.
  • although the memory 415 is depicted as a single component, it may be implemented as one or more separate components, some or all of which may be integrated/removable and/or may provide permanent / semi-permanent / dynamic / cached storage.
  • the computer program code 417 stored in the memory 415 may comprise computer-executable instructions that control one or more aspects of operation of the apparatus 400 when loaded into the processor 416.
  • the computer-executable instructions may be provided as one or more sequences of one or more instructions.
  • the processor 416 is able to load and execute the computer program code 417 by reading the one or more sequences of one or more instructions included therein from the memory 415.
  • the one or more sequences of one or more instructions may be configured to, when executed by the processor 416, cause the apparatus 400 to carry out operations, procedures and/or functions described in the foregoing in context of the method 300.
  • the apparatus 400 may comprise at least one processor 416 and at least one memory 415 including the computer program code 417 for one or more programs, the at least one memory 415 and the computer program code 417 configured to, with the at least one processor 416, cause the apparatus 400 to perform operations, procedures and/or functions described in the foregoing in context of the method 300.
  • the computer programs stored in the memory 415 may be provided e.g. as a respective computer program product comprising at least one computer-readable non-transitory medium having the computer program code 417 stored thereon; the computer program code, when executed by the apparatus 400, causes the apparatus 400 at least to perform operations, procedures and/or functions described in the foregoing in context of the method 300.
  • the computer-readable non-transitory medium may comprise a memory device or a record medium such as a CD-ROM, a DVD, a Blu-ray disc or another article of manufacture that tangibly embodies the computer program.
  • the computer program may be provided as a signal configured to reliably transfer the computer program.
  • reference(s) to a processor should not be understood to encompass only programmable processors, but also dedicated circuits such as field-programmable gate arrays (FPGA), application-specific circuits (ASIC), signal processors, etc.

Abstract

According to an example embodiment, a method for analyzing contraction characteristics of one or more myocytes on basis of a time series of image frames that constitute a video sequence that depicts said one or more myocytes is provided. The exemplifying method comprises obtaining, for each image frame, a respective reference plane level value that is descriptive of a reference intensity level in the respective image frame; obtaining a reference image that is descriptive of said one or more myocytes in a relaxed state; extracting from each image frame and the reference image respective one or more subsets of pixel positions in dependence of the reference plane level value obtained for the respective image frame; deriving, for each image frame in each of said one or more subsets, a respective difference frame that is descriptive of differences between the image frame and the reference image in the respective subset of pixel positions; and composing, on basis of pixel values in respective difference frames pertaining to said one or more subsets of pixel positions, respective one or more contraction signals that are descriptive of contraction characteristics of said one or more myocytes.

Description

Analysis of myocyte contraction characteristics
FIELD OF THE INVENTION
The present invention relates to non-invasive analysis of contraction characteristics of myocytes, i.e. muscle cells. In particular, example embodiments of the present invention relate to a method, to an apparatus and to a computer program for analyzing contraction characteristics of myocytes, such as human cardiomyocytes (CMs).
BACKGROUND
Genetic disorders having cardiac effects are, typically, potentially lethal without proper therapy or medication, and therefore it is of essential importance to detect signs of such a disorder early on. Moreover, cardiac side effects are one of the most common reasons for withdrawal of a drug from the market, and therefore reliably capturing any potential cardiac side effects of a drug already during the development phase would be highly beneficial.
Studying cardiomyocytes (CMs) is a challenging task, since primary CMs taken from a living heart and placed into culture dishes stop beating and start to dedifferentiate. Moreover, the procedure of extracting a test sample e.g. from a human heart in order to obtain one or more human CMs may constitute a high-risk procedure. As an alternative to using primary human CMs extracted from a living heart, human induced pluripotent stem cells (hiPSCs) provide an interesting option for such studies. HiPSCs can be reprogrammed from other mature cells, such as skin fibroblasts, and further differentiated into hiPSC-derived CMs, which opens interesting possibilities for studying human CMs in vitro. Also human embryonic stem cell derived CMs constitute a potential source of human CMs, while in this case the clinical phenotype of the cell donor remains unknown.
One way to study the biomechanical properties of cultured human CMs involves usage of video microscopy, which allows for non-invasive analysis of beating characteristics of cultured human CMs, such as hiPSC-derived CMs introduced in the foregoing. Such analysis techniques may be referred to as video-based analysis (of CMs). An example of a video-based analysis technique is described in the International patent application no. PCT/FI2013/050905, published as WO 2014/102449 A1. Video-based analysis techniques typically result in an output signal that is descriptive of contraction characteristics of the human CMs under analysis, such as velocity or force of contraction. Figure 1 depicts an example of such an output signal, where the x axis (the horizontal axis) represents a frame number (thereby representing time) and the y axis (the vertical axis) represents a measure of contraction/relaxation. As can be seen from the illustration of Figure 1, the output signal consists of a series of peaks that may slightly vary in height, shape and (temporal) spacing. Therein, the peaks, from the initiation of the rise phase to the end of the decay back to baseline, represent (temporal) positions of contractions of the human CMs under analysis, whereas the baseline segments of the output signal between the peaks represent a relaxed state (i.e. a non-contracted state) of the human CMs under analysis. A sketch of how such an output signal could be characterized is given below.
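For orientation only, the following sketch characterizes an output signal of the kind shown in Figure 1 with a generic peak detector. The synthetic signal, the use of scipy and all parameter values are assumptions of this illustration, not part of the referenced analysis techniques:

```python
import numpy as np
from scipy.signal import find_peaks

fps = 50.0                                  # assumed frame rate
t = np.arange(0, 10, 1 / fps)               # 10 s of "video" time
# Toy stand-in for the output signal: one peak per second over a flat baseline.
signal = np.exp(-((t % 1.0) - 0.3) ** 2 / 0.005)

peaks, _ = find_peaks(signal, height=0.5)   # frame indices of the peaks
beat_intervals = np.diff(peaks) / fps       # peak-to-peak spacing in seconds
print(len(peaks), beat_intervals.mean())    # 10 peaks, ~1.0 s apart
```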
While known video-based analysis techniques provide a useful way of non-invasive analysis of beating characteristics of hiPSC-derived human CMs, there is still room for improvement in order to reach an improved reliability in comparison to non-invasive motion recording techniques. In this regard, it is worth noting that motion-based analysis techniques are different from the patch clamp technique that is considered the reference technique for obtaining quantitative measures of membrane events such as membrane-voltage and membrane-current fluxes in CMs and other types of excitable cells. Since conventional patch clamp is restricted to recording of a single cell at a time, video-based non-invasive analysis of sufficient reliability would be a valuable tool for speeding up the analysis of CM contraction/relaxation characteristics also via its capability for analysis of a number of CMs at a time.
SUMMARY
Therefore, it is an object of the present invention to provide a video-based analysis technique for estimating contraction/relaxation characteristics of one or more myocytes, such as human CMs, which technique provides reliable results and is straightforward to apply while it is not harmful for the myocytes it serves to analyze.
These objects of the invention are reached by a method, by an apparatus and by a computer program as defined by the respective independent claims.
According to an example embodiment, a method for analyzing contraction characteristics of one or more myocytes on basis of a time series of image frames that constitute a video sequence that depicts said one or more myocytes is provided, the method comprising obtaining, for each image frame, a respective reference plane level value that is descriptive of a reference intensity level in the respective image frame; obtaining a reference image that is de- scriptive of said one or more myocytes in a relaxed state; extracting from each image frame and the reference image respective one or more subsets of pixel positions in dependence of the reference plane level value obtained for the respective image frame; deriving, for each image frame in each of said one or more subsets, a respective difference frame that is descriptive of differences between the image frame and the reference image in the respective subset of pixel positions; and composing, on basis of pixel values in respective difference frames pertaining to said one or more subsets of pixel positions, respective one or more contraction signals that are descriptive of contraction characteristics of said one or more myocytes. According to another example embodiment, an apparatus for analyzing contraction characteristics of one or more myocytes on basis of a time series of image frames that constitute a video sequence that depicts said one or more myocytes is provided, the apparatus comprising means for obtaining, for each image frame, a respective reference plane level value that is descriptive of a reference intensity level in the respective image frame; means for obtaining a reference image that is descriptive of said one or more myocytes in a relaxed state; means for extracting from each image frame and the reference image respective one or more subsets of pixel positions in dependence of the reference plane level value obtained for the respective image frame; means for de- riving, for each image frame in each of said one or more subsets, a respective difference frame that is descriptive of differences between the image frame and the reference image in the respective subset of pixel positions; and means for composing, on basis of pixel values in respective difference frames pertaining to said one or more subsets of pixel positions, respective one or more con- traction signals that are descriptive of contraction characteristics of said one or more myocytes.
According to another example embodiment, an apparatus for analyzing contraction characteristics of one or more myocytes on basis of a time series of image frames that constitute a video sequence that depicts said one or more myocytes is provided, the apparatus comprising at least one processor; and at least one memory including computer program code, which when executed by the at least one processor, causes the apparatus to obtain, for each image frame, a respective reference plane level value that is descriptive of a reference intensity level in the respective image frame; obtain a reference image that is descriptive of said one or more myocytes in a relaxed state; extract from each image frame and the reference image respective one or more subsets of pixel positions in dependence of the reference plane level value obtained for the respective image frame; derive, for each image frame in each of said one or more subsets, a respective difference frame that is descriptive of differences between the image frame and the reference image in the respective subset of pixel positions; and compose, on basis of pixel values in respective difference frames pertaining to said one or more subsets of pixel positions, respective one or more contraction signals that are descriptive of contraction characteristics of said one or more myocytes. According to another example embodiment, a computer program is provided, the computer program comprising computer readable program code configured to cause performing at least a method according to the example embodiment described in the foregoing when said program code is executed on a computing apparatus.
The computer program according to an example embodiment may be embodied on a volatile or a non-volatile computer-readable record medium, for example as a computer program product comprising at least one computer readable non-transitory medium having program code stored thereon, which program, when executed by an apparatus, causes the apparatus at least to perform the operations described hereinbefore for the computer program according to an example embodiment of the invention.
The exemplifying embodiments of the invention presented in this patent application are not to be interpreted to pose limitations to the applicability of the appended claims. The verb "to comprise" and its derivatives are used in this patent application as an open limitation that does not exclude the existence of also unrecited features. The features described hereinafter are mutually freely combinable unless explicitly stated otherwise.
Some features of the invention are set forth in the appended claims. Aspects of the invention, however, both as to its construction and its method of operation, together with additional objects and advantages thereof, will be best understood from the following description of some example embodiments when read in connection with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
The embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, where Figure 1 schematically illustrates a curve that is descriptive of contraction/relaxation characteristics of a human CM as a function of time;
Figure 2A schematically illustrates the content of an image frame of a video sequence that depicts one or more CMs as a function of time and that serves as the basis for analysis according to an example; Figure 2B illustrates a grayscale image that constitutes an image frame of a video sequence (e.g. a video recording) that serves as the basis for analysis according to an example;
Figure 2C illustrates a grayscale image that depicts image content of a region of interest (ROI) within the grayscale image illustrated in Figure 2B.
Figure 3 illustrates an analysis method in accordance with an example embodiment;
Figure 4A illustrates pixel values in a single frame according to an example;
Figure 4B illustrates mean values of the reference plane level and the ROI within images of the video sequence as a function of time according to respective examples;
Figures 5A and 5B illustrate pixel values in a normalized frame and in a normalized reference image according to respective examples;
Figures 6A to 6D illustrate pixel values in an upper plane frame (extracted from an image frame depicting the one or more CMs in a contracted state), in an upper plane reference frame, in a lower plane frame (extracted from said image frame depicting the one or more CMs in a contracted state) and in a lower plane reference frame according to respective examples;
Figures 7A and 7B illustrate pixel values in an upper plane difference frame and in a lower plane difference frame according to respective examples;
Figures 8A to 8D illustrate pixel values in a first movement frame, in a second movement frame, in a third movement frame and in a fourth movement frame according to respective examples;
Figure 9A illustrates first, second, third and fourth contraction signals according to an example;
Figure 9B illustrates normalized first, second, third and fourth contraction signals according to an example; Figures 10A to 10C illustrate a single contraction/relaxation cycle of a combined contraction signal according to respective examples;
Figure 11 illustrates a single peak of a combined contraction signal according to an example; Figure 12A illustrates a single peak of a first combined contraction signal and a temporally corresponding segment of a second combined contraction signal according to respective examples;
Figure 12B illustrates a single peak of a combined contraction signal derived as a sample-wise ratio of the first and second contraction signals of Figure 12A; and
Figure 13 illustrates a block diagram of some components of an apparatus for implementing the analysis method according to an example.
DETAILED DESCRIPTION OF SOME EMBODIMENTS
Along the lines discussed in the foregoing, recently developed techniques to reprogram human cells into pluripotent stem cells provide interesting possibilities to study differentiated human cells, which have not been available before due to the too risky extraction procedure or due to rapid dedifferentiation of the primary cells in culture. Such techniques provide, for example, interesting possibilities to study human cells of several types, for example muscle cells such as CMs, on basis of visual data depicting one or more human cells, where the visual data may comprise an image, a plurality of images and/or a video stream.
HiPSC-derived human CMs described in the foregoing constitute an example of cultured or otherwise derived human cells. HiPSCs can be obtained from any individual, also from those carrying a certain genotype, by reprogramming already differentiated adult cells, such as skin fibroblasts, into a pluripotent state. Such hiPSCs can then be differentiated into the cell type of interest and, with disease and genotype specific hiPSCs, to obtain differentiated cells, for example CMs that carry the disease causing phenotype and genotype. In this regard, it has been shown that the genotype and phenotype of the cells so derived are similar to those of the actual cells of the individual, e.g. the hiPSC-derived human CMs carry the same mutation as the CMs in the heart of the individual, see e.g. Lahti A. L., V. J. Kujala, H. Chapman, A. P. Koivisto, M. Pekkanen-Mattila, E. Kerkela, J. Hyttinen, K. Kontula, H. Swan, B. R. Conklin, S. Yamanaka, O. Silvennoinen, and K. Aalto-Setala, Model for long QT syndrome type 2 using human iPS cells demonstrates arrhythmogenic characteristics in cell culture, Dis. Model. Mech. 5:220-230, 2012.
As another example, differentiated cells can also be obtained by the so-called direct differentiation method. This method enables the induction of differentiated cells, e.g. human CMs, directly from another differentiated cell type, e.g. from fibroblasts, thereby bypassing the stem cell state applied in the hiPSC approach outlined in the foregoing.
In the following, the term derived human CM is used to refer to a human CM derived from a human cell different from a CM, e.g. by using the hiPSC method, the direct differentiation method or another suitable method known in the art. As an example of such a further method, human CMs may also be derived from embryonic stem cells, while in this case their clinical characteristics may remain unknown. The cell of another type used as the basis for deriving the human CM may be referred to in the following as a source cell. Non-limiting examples of suitable types of source cells that are relatively straightforward to obtain include dermal fibroblasts or keratinocytes, blood cells such as leucocytes, mucosal cells and endothelial cells. Derived human CMs provide interesting possibilities for non-invasive study of individual CMs to enable analysis of their contraction characteristics and, in particular, of any deviations from the contraction characteristics of a healthy human CM. Consequently, results of such analysis are potentially usable, for example, in detection of genetic disorders having a cardiac effect and/or in detection of cardiac side effects of a drug during development or testing of the drug.
In the following, various examples concerning a technique for estimating contraction/relaxation characteristics of one or more CMs on basis of visual data are described. Throughout the description, references are predominantly made to visual analysis of a CM in singular, while the analysis techniques described herein readily generalize into visual analysis of a plurality of CMs provided e.g. as a cell aggregate, as a cell sheet or as a larger cellular structure such as a semitransparent heart. In this regard, the expression 'semitransparent' refers to an arrangement of CMs where their structure is detectable in pixel values as a function of contraction/relaxation. The one or more CMs under study may comprise human CMs, e.g. (primary) human CMs or derived human CMs described in the foregoing. However, even though visual analysis of derived human CMs constitutes a framework of special interest for application of the analysis technique described herein, the visual analysis technique described in the present disclosure is equally applicable to CMs of other origin as well, such as animal CMs or cell lines. Moreover, while this disclosure predominantly makes use of a CM as an example, the disclosed technique is applicable to analysis of any myocytes or other cells having contraction characteristics of some kind. The one or more CMs under analysis may comprise a single dissociated CM, e.g. a single dissociated derived human CM, that is not attached to any other CM and that is hence depicted in the visual data in isolation from other CMs. In another example, the one or more CMs under analysis may comprise a single CM that is part of a cluster of CMs. In a further example, the subject of analysis includes a plurality of (two or more) CMs that constitute a cluster or aggregate of CMs or a selected part of such a cluster or aggregate.
The analysis of one or more CMs disclosed herein is based on a time series of digital images that depicts one or more CMs over a period of time. The sequence of digital images may be obtained from a digital video camera or the sequence of digital images may comprise digital images converted from respective analogue images obtained from a storage medium such as an analog tape, a digital tape, CD, DVD, etc. Without losing generality, such a time series may be referred to as a digital video sequence, and an image of the video sequence may be referred to as a frame (or e.g. as a video frame or as an image frame). The video sequence employs a suitable predefined frame rate, for example 50 frames per second (fps) or above. The applied frame rate and the period of time covered by the video sequence may be selected according to circumstances, especially in view of the desired reliability with respect to the final point resolution of the analysis. Increasing the frame rate and the duration of the video sequence typically improves reliability and/or accuracy of the analysis, while it also involves a higher number of frames to be analyzed and hence in many cases results in a higher computational load. As a non-limiting example, the time period covered by the video sequence (i.e. its duration) may be in the range from a few seconds to a few hours.

The frames of the video sequence provide a fixed or substantially fixed field of view to the one or more CMs under study. In other words, the one or more CMs under study are depicted in the same or substantially the same position of the image area throughout the frames of the video sequence. As an example, the resolution of the image as a number of pixels may be selected e.g. such that the portion of the images depicting the one or more CMs under study, hence constituting a region of interest within the images, includes at least one pixel per plane (thus a theoretical minimum of four pixels), while in an exemplifying hardware setting 1600 pixels may be needed to provide a sufficient image resolution for enabling accurate enough analysis of contraction/relaxation characteristics of the one or more CMs under study. The minimum number of pixels is dependent on several factors that may include one or more of the following: the applied hardware, the selected black/white distribution of pixels defined by the changes in their intensity due to CM deformation, subtle pixel-value changes across the plane in the ROI of the one or more CMs under study, and the pixel size (in other words, the area of an image covered by a single pixel).

Frames of the video sequence are provided as respective monochromatic digital images. As known in the art, in such images only intensity information is provided while all pixels represent the same color. A common type of a monochromatic image is a grayscale image where each pixel represents the intensity of white color in a range from total absence of light (black) to a maximum light intensity (white) with a predefined number of different shades of gray in between. While analysis based on grayscale images provides an approach of special interest, a similar scale of color intensity can be applied for other colors as well and monochromatic images other than grayscale images may be applied as basis for the analysis in some examples. The pixel intensity may be expressed as a value within a suitable scale from a predefined minimum to a predefined maximum, e.g. from 0 to 1.
In some formats 8 or 16 bits are used to express the pixel intensity, thereby enabling a scale that represents 256 or 65536 different intensity levels, respectively. Typically, it is assumed that the value 0 indicates the minimum intensity (black in case of a grayscale image) whereas the maximum value enabled by the employed number of bits (e.g. 255 or 65535) indicates the maximum intensity (white in case of a grayscale image). As a non-limiting example concerning characteristics of frames of the video sequence, Figure 2A schematically illustrates the content of a frame of the video sequence that depicts a cluster of CMs 102 against its background 104, whereas Figure 2B shows an actual grayscale image that constitutes the frame of the video sequence from which the schematic illustration is drawn, thereby also depicting the cluster of CMs 102 and the background 104. Figure 2B further illustrates a first region of interest 106 and a second region of interest 108, referred to in the following as ROI and ROI1, respectively. Therein, the ROI1 covers a sub-area of an image frame that depicts a desired portion of the background 104 throughout the frames of the video sequence, whereas the ROI covers a sub-area of an image frame that depicts at least part of the cluster of CMs 102 throughout the frames of the video sequence. Figure 2C further depicts the image content of the first region of interest 106, i.e. the ROI, illustrated in Figure 2B.
Even though the ROI and the ROI1 are depicted in Figure 2B, respectively, as a rectangular region and as a circular region, this is a non-limiting example selected for visual clarity and simplicity of illustration and regions having a shape different from a rectangle and/or a circle may be employed instead (e.g. an elliptical region, a hexagonal region, a region of an arbitrary shape, etc.).
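By way of illustration only (the following sketch is not part of the original disclosure), the ROI and the ROI1 may be represented in software e.g. as boolean masks over the image area; the Python/NumPy sketch below assumes frames are provided as NumPy arrays of shape (height, width), and all function names are introduced here for illustration.

import numpy as np

def rectangular_mask(shape, top, left, height, width):
    # Boolean mask selecting a rectangular sub-area of the image area,
    # e.g. the ROI depicting at least part of the one or more CMs.
    mask = np.zeros(shape, dtype=bool)
    mask[top:top + height, left:left + width] = True
    return mask

def circular_mask(shape, center_row, center_col, radius):
    # Boolean mask selecting a circular sub-area of the image area,
    # e.g. the ROI1 depicting a portion of the background.
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    return (rows - center_row) ** 2 + (cols - center_col) ** 2 <= radius ** 2

A mask so defined may be applied to a frame as frame[mask], which yields the pixel values within the respective region as a one-dimensional array; the subsequent sketches in this description operate on such arrays.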
Figure 3 depicts a flow chart that outlines a method 300 for analysis of contraction characteristics of one or more myocytes, e.g. the cluster of CMs 102, on basis of the video sequence described in the foregoing. The analysis within the framework of the method 300 relies on the observation that the intensity of an image portion that depicts the one or more CMs under study varies within a contraction/relaxation cycle of the one or more CMs due to their movement and/or deformation resulting from the contraction-relaxation during the contraction/relaxation cycle. However, the changes in intensity (brightness) are typically subtle and therefore careful analysis of intensity changes in the image depicting the one or more CMs under study is needed to provide reliable and accurate characterization of the contraction/relaxation cycle. In this regard, the method 300 outlines an approach that improves the accuracy of capturing the changes in pixel intensity, thereby providing improved analysis of the contraction characteristics of the one or more CMs under study.
The method 300 commences by acquiring the video sequence that depicts one or more CMs, as indicated in block 302. Some characteristics of the video sequence are described in the foregoing. In the following, we refer to the frames of the acquired video sequence as Fk, k = 1, ..., K, where k denotes the index of the frame in the sequence and K denotes the total number of frames. A frame of the video sequence may be referred to as an image frame or as a video frame.
The method 300 proceeds to obtaining, for each frame Fk, a respective reference plane level value bk that is descriptive of a reference intensity level in the respective image frame Fk, as indicated in block 304, and obtaining a reference image RF that is descriptive of said one or more myocytes in a relaxed state, as indicated in block 306. The reference intensity level indicated by the reference plane level value bk may be derived, for example, on basis of the intensity level in the respective frame Fk, e.g. on basis of a certain sub-area of the frame Fk. Examples in this regard are provided in the following.
The method 300 further comprises extracting, from each frame Fk, a respective upper plane frame Fk,u and/or a lower plane frame Fk,l in dependence of the reference plane level value bk, as indicated in block 308. The method 300 further comprises extracting, from the reference image RF, a respective upper plane reference frame Rk,u and/or a lower plane reference frame Rk,l in dependence of the reference plane level value bk, as indicated in block 310. In other words, operations pertaining to block 308 may include extracting a pair of the upper plane frame Fk,u and the upper plane reference frame Rk,u, a pair of the lower plane frame Fk,l and the lower plane reference frame Rk,l, or both of these pairs for the frames Fk of the sequence. In general, one or more subsets of pixel positions may be extracted from each frame Fk. In this regard, the upper plane and the lower plane referred to in the foregoing serve as respective examples of subsets of pixel positions. Regardless of this generic applicability, for clarity and brevity of description, exemplifying details of the method 300 are described herein with references to extraction of the upper plane (cf. the upper plane frame Fk,u and the upper plane reference frame Rk,u) and the lower plane (cf. the lower plane frame Fk,l and the lower plane reference frame Rk,l). In this regard, examples of decomposing the frame Fk and the reference image RF into a respective pair of upper and lower plane frames Fk,u, Rk,u, Fk,l and Rk,l are provided in the following. Therefrom, the method 300 proceeds to deriving, for each frame Fk, a respective upper plane difference frame Dk,u on basis of the upper plane frame Fk,u and the upper plane reference frame Rk,u, and a respective lower plane difference frame Dk,l on basis of the lower plane frame Fk,l and the lower plane reference frame Rk,l, as indicated in block 312. The upper plane difference frame Dk,u and the lower plane difference frame Dk,l are descriptive of differences between the image frame Fk and the reference image RF in the respective one of the upper and lower planes. Examples of deriving the difference frames Dk,u and Dk,l are provided in the following.
The method 300 further proceeds to composing respective contraction/relaxation signals that are descriptive of contraction characteristics of said one or more CMs on basis of pixel values of the upper plane difference frames Dk,u and on basis of pixel values of the lower plane difference frames Dk,l derived for the frames of the video sequence, as indicated in block 314.
The method steps outlined in the foregoing with references to blocks 302 to 314 of Figure 3 provide an outline of the analysis of contraction characteristics applicable for one or more myocytes, e.g. the cluster of CMs 102 depicted in the image frames of the video sequence, while the operations described for each of the blocks 302 to 314 may be provided in one of multiple ways without departing from the outline set by the method 300. In the following, operations for each of the blocks 304 to 314 are described via respective non-limiting examples that predominantly refer to one or more CMs as a representative of the one or more myocytes under study.
Referring back to block 304, obtaining the reference plane level bk for frames of the video sequence may comprise computing the reference plane levels bk on basis of pixel values within a sub-area of the image frame Fk that depicts the background 104. The reference plane level bk is defined separately for each frame Fk in order to compensate for small tonic changes and/or minor fluctuations in ambient light etc. that may occur over time, thereby making the reference plane level bk a value that truly represents the intensity of the background 104 in the respective frame Fk. In an example, operations of block 304 involve computing a respective reference plane level bk for each video frame Fk on basis of pixel values within the ROI1 (described in the foregoing), thereby making the reference plane level bk representative of the intensity of the background 104 in the frame Fk. In general, a suitable function of pixel values within the ROI1 in the frame Fk may be applied for computation of the respective reference plane level bk. Non-limiting examples of such a suitable function include an average or a median of the pixel values within the ROI1. In this regard, operations of block 304 may further involve receiving a selection of the ROI1 and/or a function to be applied for computation of the reference plane level bk on basis of pixel values within the ROI1. The information that defines the ROI1 in the image frames Fk may be received, for example, as an indication of a user-selected image plane area that serves as the ROI1. In this regard, the user selection may be carried out by displaying a frame of the video sequence on an electronic display to the user, prompting the user to employ a suitable user-interface mechanism known in the art to select a sub-area of the image area that is to serve as the ROI1, and converting the user-selected sub-area of the image area into information that defines the ROI1.
In another example, operations of block 304 involve computing a respective reference plane level bk for each video frame Fk on basis of pixel values within the ROI (described in the foregoing), thereby using the image sub-area that represents the one or more CMs under analysis as basis for defining the respective reference plane level bk for frame Fk. Along the lines of the previous example, also in this case a suitable function of pixel values within the ROI in the frame Fk may be applied for computation of the respective reference plane level bk. Also in this scenario non-limiting examples of suitable functions include an average or a median of the pixel values within the ROI, whereas a further example involves defining the reference plane level bk e.g. as an average value of the lowest pixel value within the ROI and the highest pixel value within the ROI in frame Fk. The approach that involves computing the reference plane level bk on basis of pixel values within the ROI may be suitable for example in a scenario where the frames of the video sequence do not clearly depict an area that serves as the background 104.
In a further example, operations of block 304 involve using a predefined value for the reference plane level bk across all video frames Fk of the video sequence. Non-limiting examples in this regard include employing a mid-point between the respective minimum and maximum pixel values enabled by the number of bits employed to express the pixel values (e.g. between 0 and 65535 in case of 16-bit pixel values) or a mid-point between respective minimum and maximum observed pixel values across the frames Fk of the video sequence, e.g. an average of the respective minimum and maximum pixel values. In a further example, operations of block 304 involve applying simple linear regression to the reference plane levels bk in order to account for a slope (a gradual trend) across the frames Fk of the video sequence.
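As a non-authoritative sketch of the alternatives outlined above (assuming frames are available as NumPy arrays and boolean masks such as those sketched earlier are used for the ROI and the ROI1; all function names are illustrative), the reference plane level bk may be computed e.g. as follows.

import numpy as np

def b_from_background(frame, roi1_mask):
    # Average intensity of the background sub-area (the ROI1).
    return float(frame[roi1_mask].mean())

def b_from_roi(frame, roi_mask):
    # Mid-point of the lowest and highest pixel value within the ROI.
    roi_pixels = frame[roi_mask]
    return 0.5 * (float(roi_pixels.min()) + float(roi_pixels.max()))

def b_fixed(bit_depth=16):
    # Mid-point of the value range enabled by the pixel format.
    return (2 ** bit_depth - 1) / 2.0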
In order to illustrate the relationship between pixel values within the ROI and the reference plane level bk, Figure 4A provides an illustration of pixel values within the ROI of a single frame Fk according to an example, whereas Figure 4B illustrates the value of the reference plane level bk as a function of frame index k (and hence as a function of time) computed using two different example approaches: the lower curve indicates the average of the pixel values within the ROI1, while the upper curve indicates the average of pixel values within the ROI. Referring to block 306, obtaining the reference image RF may comprise computing the reference image RF on basis of pixel values in selected one or more frames of the video sequence. In an example, the reference image RF computation may rely on pixel values within the ROI (described in the foregoing) in the selected one or more frames of the video sequence. The information that defines the ROI in the image plane of the video frames Fk may be received, for example, as an indication of a respective user-selected image plane area that serves as the ROI, along the lines described in the foregoing for obtaining the information that defines the ROI1, mutatis mutandis. The following description assumes computation of the reference image RF based on pixel values within the ROI as well as carrying out the operations of the blocks 308 to 312 for pixels within the ROI only. However, the description generalizes into carrying out the corresponding operations e.g. for pixels extracted from two or more sub-areas of the image area or for all pixels of the frames Fk of the video sequence. As a particular example, the reference image RF may be defined as a sub-area defined by the ROI in a single selected frame or the reference image RF may be computed as an average image of the sub-area defined by the ROI in two or more selected frames. In this regard, for each pixel position, the pixel value of the reference image RF may be computed as an average of pixel values in spatially corresponding pixel positions in the selected frames. The selected one or more frames each represent the cluster of CMs 102 depicted by the video sequence in a relaxed state (i.e. in a non-contracted state). Even though a single frame of the video sequence may be used to define the reference image RF, such that the pixel values within the ROI of the single frame as such constitute the reference image RF without additional computation (of an average), using a higher number of frames, e.g. two, three or more, for computation of the reference image RF typically improves the accuracy and reliability of the analysis.
Selection of the one or more frames for computation of the reference image RF may be based, for example, on user selection: in this regard, the user selection may be carried out by computing an average of the pixel values within the ROI for at least a sub-series of the frames Fk of the video sequence, displaying a curve that depicts the computed average as a function of frame number k (and hence as a function of time) on an electronic display to the user, prompting the user to employ a suitable user-interface mechanism known in the art to select the one or more frames that represent the cluster of CMs 102 in a relaxed state, and receiving the user selection of the one or more frames for computation of the reference image RF. Therein, the frames that represent the cluster of CMs 102 in a relaxed state are those that represent local minima of the displayed representative curve. In a variation of this example, an automated mechanism may be employed instead of user selection: in this regard, instead of displaying the curve that depicts the computed average as a function of frame number k, a processing rule may be employed to identify local minima of the average and to (e.g.) randomly select a desired number of the local minima to serve as the one or more frames that are used for definition/computation of the reference image RF.
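A minimal sketch of computing the reference image RF is given below, assuming the frames are stacked into a NumPy array of shape (K, height, width) and that relaxed frames are identified automatically as local minima of the per-frame ROI average; the availability of SciPy and all names used here are assumptions introduced for illustration only.

import numpy as np
from scipy.signal import argrelmin

def reference_image(frames, roi_mask, num_relaxed=3):
    # Per-frame average intensity within the ROI, as a function of k.
    roi_means = frames[:, roi_mask].mean(axis=1)
    # Frames at local minima of the average depict the relaxed state.
    minima = argrelmin(roi_means)[0]
    if minima.size == 0:
        minima = roi_means.argsort()  # fall back to the lowest averages
    selected = minima[:num_relaxed]
    # Pixel-wise average of the ROI content of the selected frames.
    return frames[selected][:, roi_mask].mean(axis=0)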
Referring to blocks 308 and 310, operations to carry out the extraction of the upper plane frame Fk,u and the lower plane frame Fk,l from the frame Fk and extraction of the upper plane reference frame Rk,u and the lower plane reference frame Rk,l from the reference image RF in dependence of the reference plane level bk may involve converting each frame Fk and the reference image RF into a respective normalized frame by using the reference plane level bk:
- A normalized frame F'k is computed by subtracting the reference plane level bk from the frame Fk, i.e. F'k = Fk - bk;
- A normalized reference image R'k is computed by subtracting the reference plane level bk from the reference image RF, i.e. R'k = RF - bk. Herein, the normalized frame F'k and the normalized reference image R'k represent the sub-area of the image area defined by the ROI scaled in such a way that the pixel values of the frame Fk and the reference image RF that are higher than the reference plane level bk are provided as positive values, whereas the pixel values of the frame Fk and the reference image RF that are lower than the reference plane level bk are provided as negative values. Figure 5A provides an illustration of pixel values of the normalized frame F'k according to an example, whereas Figure 5B provides an illustration of pixel values of the normalized reference image R'k according to an example. It should be noted that in many image formats pixels of an image frame are represented by unsigned values (e.g. at 8 or 16 bits) and hence in such a framework negative pixel values are not possible. Therefore, it may be necessary to convert pixel values of the image frames Fk and pixel values of the reference image RF into corresponding matrices of signed pixel values before proceeding to compute the respective normalized frames or images F'k and R'k and to carry out any subsequent operations as matrix operations, in order to enable application of negative pixel values where necessary.
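A minimal sketch of the normalization step, including the conversion to signed values noted above, is given below; the pixel values within the ROI are assumed to be held as one-dimensional NumPy arrays and the function name is illustrative.

import numpy as np

def normalize(frame_roi, reference_roi, b_k):
    # Convert unsigned pixel values to a signed type so that negative
    # values are representable, then subtract the reference plane level.
    f_norm = frame_roi.astype(np.float64) - b_k      # F'k = Fk - bk
    r_norm = reference_roi.astype(np.float64) - b_k  # R'k = RF - bk
    return f_norm, r_norm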
Referring back to the operations related to block 308, the upper plane frame Fk,u may be extracted by selecting only those pixel positions of the normalized frame F'k that have a zero value or a positive value, whereas the lower plane frame Fk,l may be extracted by selecting only those pixel positions of the normalized frame F'k that have a negative value: for the upper plane frame Fk,u the pixel values for the selected pixel positions are the corresponding pixel values of the normalized frame F'k, whereas for the lower plane frame Fk,l the pixel values for the selected pixel positions are the complements (i.e. the absolute values) of the corresponding pixel values of the normalized frame F'k. In both the upper plane frame Fk,u and the lower plane frame Fk,l the pixel values for the remaining pixel positions may be set to zero.
There are a plurality of ways to implement this decomposition of pixels of the normalized frame F'k into the corresponding upper plane frame Fk,u and the lower plane frame Fk,l. As an example, the upper plane frame Fk,u may be extracted by computing Fk,u = (abs(F'k) + F'k)/2 or Fk,u = (sqrt((F'k)^2) + F'k)/2 for each pixel position of F'k, where abs denotes the absolute value operation, sqrt denotes the square root operation and ^2 denotes the power of two operation. Along similar lines, the lower plane frame Fk,l may be extracted by computing Fk,l = (abs(F'k) - F'k)/2 or Fk,l = (sqrt((F'k)^2) - F'k)/2 for each pixel position of F'k. Along similar lines, the operations of block 310 may involve extracting the upper plane reference frame Rk,u by selecting only those pixel positions of the normalized reference image R'k that have a zero value or a positive value and deriving the lower plane reference frame Rk,l by selecting only those pixel positions of the normalized reference image R'k that have a negative value, where setting the pixel values in selected and non-selected pixel positions of the upper and lower plane reference frames Rk,u and Rk,l may be carried out as described in the foregoing for the upper and lower plane frames Fk,u and Fk,l, respectively, mutatis mutandis.
As described in the foregoing for the upper and lower plane frames Fk,u and Fk,l, there are a plurality of ways to implement the decomposition of pixels of the normalized reference image R'k into the corresponding upper plane reference frame Rk,u and the lower plane reference frame Rk,l. As an example in this regard, the upper plane reference frame Rk,u may be extracted by computing Rk,u = (abs(R'k) + R'k)/2 or Rk,u = (sqrt((R'k)^2) + R'k)/2 for each pixel position of R'k, and the lower plane reference frame Rk,l may be extracted by computing Rk,l = (abs(R'k) - R'k)/2 or Rk,l = (sqrt((R'k)^2) - R'k)/2 for each pixel position of R'k.
The upper plane frame Fk,u and the upper plane reference frame Rk,u obtained as a consequence of operations described in the foregoing for blocks 308 and 310 hence represent, respectively, those pixels within the ROI in the frame Fk and the reference image RF that have a value higher than the reference plane level bk, whereas the lower plane frame Fk,l and the lower plane reference frame Rk,l also obtained as a consequence of operations described in the foregoing for blocks 308 and 310 represent, respectively, those pixels of the ROI in the frame Fk and the reference image RF that have a value lower than the reference plane level bk. As an example in this regard, Figures 6A, 6B, 6C and 6D provide, respectively, illustrations of pixel values of the upper plane frame Fk,u, the upper plane reference frame Rk,u, the lower plane frame Fk,l and the lower plane reference frame Rk,l, derived on basis of the normalized frame F'k and the normalized reference image R'k illustrated in the example of Figures 5A and 5B.
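The abs-based decomposition described above admits a compact vectorized sketch (illustrative only; the variable `values` holds the signed, normalized pixel values within the ROI as produced by the `normalize` helper sketched earlier).

import numpy as np

def decompose(values):
    # Upper plane: original value where values >= 0, zero elsewhere.
    upper = (np.abs(values) + values) / 2.0
    # Lower plane: absolute value where values < 0, zero elsewhere.
    lower = (np.abs(values) - values) / 2.0
    return upper, lower

# f_upper, f_lower = decompose(f_norm)  # Fk,u and Fk,l
# r_upper, r_lower = decompose(r_norm)  # Rk,u and Rk,l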
Referring to block 312, the upper plane difference frame Dk,u is descriptive of the absolute change in pixel values between the frame Fk and the reference image RF in the upper plane, whereas the lower plane difference frame Dk,l is descriptive of the absolute change in pixel values between the frame Fk and the reference image RF in the lower plane. The upper plane difference frame Dk,u may be derived by subtracting the pixel values of the upper plane reference frame Rk,u from the (spatially) corresponding pixel values of the upper plane frame Fk,u, i.e. by computing the pixel-wise difference Dk,u = Fk,u - Rk,u. The lower plane difference frame Dk,l may be derived by subtracting the pixel values of the lower plane reference frame Rk,l from the (spatially) corresponding pixel values of the lower plane frame Fk,l, i.e. by computing the pixel-wise difference Dk,l = Fk,l - Rk,l. Figures 7A and 7B provide, respectively, illustrations of pixel values of the upper plane difference frame Dk,u and the lower plane difference frame Dk,l, derived on basis of the upper plane frame Fk,u, the upper plane reference frame Rk,u, the lower plane frame Fk,l and the lower plane reference frame Rk,l illustrated in the example of Figures 6A to 6D. As pointed out in the foregoing, the upper plane and the lower plane serve as non-limiting examples and derivation of the difference frames Dk,u and Dk,l depends on the applied subsets. As an example, in case only the upper plane is considered, only the upper plane difference frame Dk,u needs to be derived, whereas in a scenario where only the lower plane is considered, only the lower plane difference frame Dk,l needs to be derived. Consequently, in the subsequent steps of the method 300, only the difference frame(s), e.g. the upper plane difference frame Dk,u and/or the lower plane difference frame Dk,l, that are derived in block 312 are considered in the composition of the contraction signal(s). Referring to block 314, the composition of the contraction signals may comprise one or more of the following (e.g. in dependence of availability of the upper plane difference frame Dk,u and/or the lower plane difference frame Dk,l):
- deriving a first contraction signal s1 on basis of positive pixel values of the upper plane difference frames Dk,u;
- deriving a second contraction signal s2 on basis of negative pixel values of the upper plane difference frames Dk,u;
- deriving a third contraction signal s3 on basis of positive pixel values of the lower plane difference frames Dk,l; and
- deriving a fourth contraction signal s4 on basis of negative pixel values of the lower plane difference frames Dk,l.
In an example, operations of block 314 involve extracting, for each frame Fk, at least one movement frame on basis of the upper plane difference frame Dk,u and/or the lower plane difference frame Dk,l, which may serve as basis for deriving a respective one of the first to fourth contraction signals s1, s2, s3 and s4. In particular, the first to fourth contraction signals s1, s2, s3 and s4 may be composed to the extent the respective ones of the first to fourth movement frames Mk,1, Mk,2, Mk,3 and Mk,4 are made available. As an example, one or more of the following movement frames may be extracted (a code sketch illustrating these extractions is provided after the list):
A first movement frame Mk,1 may be extracted by selecting only those pixel positions of the upper plane difference frame Dk,u that have a positive value, while the non-selected pixel positions of the upper plane difference frame Dk,u may be represented in the first movement frame Mk,1 by zero values. This may be accomplished, for example, by computing Mk,1 = (abs(Dk,u) + Dk,u)/2 or Mk,1 = (sqrt((Dk,u)^2) + Dk,u)/2 for each pixel position of the upper plane difference frame Dk,u. The first movement frame Mk,1 may also be referred to as the frame of true white pixels because they are positive-valued pixels that originate from the upper plane frame Fk,u.
A second movement frame Mk,2 may be extracted by selecting only those pixel positions of the upper plane difference frame Dk,u that have a negative value, while the non-selected pixel positions of the upper plane difference frame Dk,u may be represented in the second movement frame Mk,2 by pixels of zero value. This may be accomplished, for example, by computing Mk,2 = (abs(Dk,u) - Dk,u)/2 or Mk,2 = (sqrt((Dk,u)^2) - Dk,u)/2 for each pixel position of the upper plane difference frame Dk,u. The second movement frame Mk,2 may also be referred to as the frame of pseudo white pixels because they are negative-valued pixels that originate from the upper plane frame Fk,u. The pixels of the first movement frame Mk,1 and the second movement frame Mk,2 may be combined into a first combined movement frame that represents pixels of the upper plane.
A third movement frame Mk,3 may be extracted by selecting only those pixel positions of the lower plane difference frame Dk,l that have a positive value, while the non-selected pixel positions of the lower plane difference frame Dk,l may be represented in the third movement frame Mk,3 by pixels of zero value. This may be accomplished, for example, by computing Mk,3 = (abs(Dk,l) + Dk,l)/2 or Mk,3 = (sqrt((Dk,l)^2) + Dk,l)/2 for each pixel position of the lower plane difference frame Dk,l. The third movement frame Mk,3 may also be referred to as the frame of true black pixels because they are positive-valued pixels that originate from the lower plane frame Fk,l.
A fourth movement frame Mk,4 may be extracted by selecting only those pixel positions of the lower plane difference frame Dk,l that have a negative value, while the non-selected pixel positions of the lower plane difference frame Dk,l may be represented in the fourth movement frame Mk,4 by zero values. This may be accomplished, for example, by computing Mk,4 = (abs(Dk,l) - Dk,l)/2 or Mk,4 = (sqrt((Dk,l)^2) - Dk,l)/2 for each pixel position of the lower plane difference frame Dk,l. The fourth movement frame Mk,4 may also be referred to as the frame of pseudo black pixels because they are negative-valued pixels that originate from the lower plane frame Fk,l. The pixels of the third movement frame Mk,3 and the fourth movement frame Mk,4 may be combined into a second combined movement frame that represents pixels of the lower plane. As an example in this regard, Figures 8A, 8B, 8C and 8D provide, respectively, illustrations of the first movement frame Mk,1, the second movement frame Mk,2, the third movement frame Mk,3 and the fourth movement frame Mk,4, derived on basis of the upper plane difference frame Dk,u and the lower plane difference frame Dk,l depicted in Figures 7A and 7B, respectively.
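Combining the difference frames of block 312 with the four movement-frame extractions above gives the following sketch (illustrative names only; the inputs are the plane frames produced by the `decompose` helper sketched earlier).

import numpy as np

def movement_frames(f_upper, f_lower, r_upper, r_lower):
    d_upper = f_upper - r_upper             # Dk,u = Fk,u - Rk,u
    d_lower = f_lower - r_lower             # Dk,l = Fk,l - Rk,l
    m1 = (np.abs(d_upper) + d_upper) / 2.0  # Mk,1: true white pixels
    m2 = (np.abs(d_upper) - d_upper) / 2.0  # Mk,2: pseudo white pixels
    m3 = (np.abs(d_lower) + d_lower) / 2.0  # Mk,3: true black pixels
    m4 = (np.abs(d_lower) - d_lower) / 2.0  # Mk,4: pseudo black pixels
    return m1, m2, m3, m4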
The procedure of composing one of the contraction signals s1, s2, s3 and s4 on basis of the movement frames Mk,1, Mk,2, Mk,3 and Mk,4 may involve one or more of the following (e.g. in dependence of availability of the first to fourth movement frames Mk,1 to Mk,4):
- Derive the first contraction signal s1 by carrying out volume integration across the first movement frame Mk,1 to compute a contraction level Sk,1 at frame index k for frames k = 1, ..., K, and arrange the computed contraction levels Sk,1 into a time series that constitutes the first contraction signal s1.
- Derive the second contraction signal s2 by carrying out volume integration across the second movement frame Mk,2 to compute a contraction level Sk,2 at frame index k for frames k = 1, ..., K, and arrange the computed contraction levels Sk,2 into a time series that constitutes the second contraction signal s2.
- Derive the third contraction signal s3 by carrying out volume integration across the third movement frame Mk,3 to compute a contraction level Sk,3 at frame index k for frames k = 1, ..., K, and arrange the computed contraction levels Sk,3 into a time series that constitutes the third contraction signal s3.
- Derive the fourth contraction signal s4 by carrying out volume integration across the fourth movement frame Mk,4 to compute a contraction level Sk,4 at frame index k for frames k = 1, ..., K, and arrange the computed contraction levels Sk,4 into a time series that constitutes the fourth contraction signal s4.

In an example, the volume integration across a movement frame referred to above is provided as a sum of pixel values within the respective movement frame, e.g. within one of the first to fourth movement frames Mk,1, Mk,2, Mk,3 and Mk,4. The contraction signals s1, s2, s3, s4 may be further normalized by dividing values of the time series that constitutes the respective contraction signal s1, s2, s3 and s4 by the number of pixels within the ROI. As an example, normalization of the first contraction signal s1 may involve dividing each of the contraction levels Sk,1, k = 1, ..., K, by the number of pixels within the ROI. Normalization of the contraction signals s1, s2, s3, s4 derived on basis of a certain video sequence serves to make them more readily comparable to contraction signals derived from other video sequences that may depict other CMs and/or that may be analyzed by using a ROI of different shape and/or size. As an example concerning the contraction signals s1, s2, s3, s4 and normalized versions thereof, Figure 9A depicts an example of contraction signals s1, s2, s3 and s4 derived, respectively, on basis of the movement frames Mk,1, Mk,2, Mk,3 and Mk,4 depicted in Figures 8A to 8D, whereas Figure 9B depicts the respective normalized contraction signals.
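Putting the preceding sketches together, the normalized contraction signals s1 to s4 may be composed e.g. as follows (illustrative only: `frames_roi` is a NumPy array of shape (K, N) holding the ROI pixel values of the K frames, `reference_roi` is the reference image over the same N pixel positions, `b_levels` holds the per-frame reference plane levels, and the helpers `normalize`, `decompose` and `movement_frames` are those sketched above).

import numpy as np

def contraction_signals(frames_roi, reference_roi, b_levels):
    K, num_pixels = frames_roi.shape
    signals = np.zeros((4, K))
    for k in range(K):
        f_norm, r_norm = normalize(frames_roi[k], reference_roi, b_levels[k])
        f_u, f_l = decompose(f_norm)
        r_u, r_l = decompose(r_norm)
        for i, m in enumerate(movement_frames(f_u, f_l, r_u, r_l)):
            signals[i, k] = m.sum()  # volume integration over the ROI
    return signals / num_pixels      # normalize by the ROI pixel count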
In an example, operations of block 314 optionally comprise composing one or more of the first to fourth contraction signals s1, s2, s3, s4 on basis of only those pixel values of the respective one of the upper and lower plane difference frames Dk,u, Dk,l whose absolute value exceeds a predefined (non-zero) threshold value. This exclusion of the pixel values that fail to exceed the predefined threshold value may be referred to as filtering. In case one or more of the contraction signals s1, s2, s3, s4 are composed via definition of the respective movement frames Mk,1, Mk,2, Mk,3, Mk,4, the filtering operation may be applied to the movement frames Mk,1, Mk,2, Mk,3, Mk,4 by applying the volume integration only to those pixel positions where the pixel value exceeds the predefined threshold. The predefined threshold may be set, for example, in dependence of the background level bk, e.g. as a predefined percentage of an average of the background level bk over the frames of the video sequence. In this regard, the predefined percentage may be selected e.g. from the range 1% to 5%; for example, a predefined percentage of 3% may be used. The filtering operation may serve to improve the analysis via reducing noise and disturbances that may have an effect on the outcome of the analysis. Each of the contraction signals s1, s2, s3 and s4 (and/or respective normalized versions thereof) serves to capture a different aspect of the contraction/relaxation characteristics of the cluster of CMs depicted in the video sequence under analysis and hence they provide as such valuable information concerning contraction characteristics of the cluster of CMs under study, possibly to an extent that enables e.g. a medical practitioner to evaluate the need for further analysis or even draw a diagnosis on basis of the contraction characteristics. To illustrate this aspect using exemplifying contraction signals s1 to s4, Figure 10A depicts the differences between the contraction signals s1, s2, s3, s4 for a single contraction/relaxation cycle, whereas Figures 10B and 10C depict the differences in further detail for scaled versions of the contraction signals s1, s2, s3, s4:
- Figure 10A depicts a single contraction/relaxation cycle of the contraction signals s1, s2, s3, s4 as derived from the upper and lower plane difference frames Dk,u and Dk,l.
- Figure 10B depicts the single contraction/relaxation cycle of modified contraction signals s1, s2, s3, s4 where the peak height in each of the contraction signals s1, s2, s3, s4 is scaled to unity.
- Figure 10C depicts a peak part of the contraction/relaxation cycle depicted in Figure 10B.
As an example of the contraction signals s1, s2, s3, s4 serving to capture different aspects of the contraction/relaxation characteristics, for the peak of the contraction signals s1, s2, s3, s4 illustrated in Figures 10A, 10B and 10C, the first contraction signal s1 exhibits a longer baseline-to-peak duration than the other contraction signals s2, s3, s4, whereas the third contraction signal s3 exhibits the maximal rise time among the contraction signals s1, s2, s3, s4.
However, one or more of the contraction signals s1, s2, s3 and s4 (and/or respective normalized versions thereof) may be processed further in order to derive a further signal and/or other information that is descriptive of further aspects of the beating characteristics of the cluster of CMs under study. As an example in this regard, two or more of the contraction signals s1, s2, s3, s4 may be combined into a combined contraction signal, e.g. as a linear combination of selected two or more contraction signals s1, s2, s3, s4 (e.g. by computing a respective element-wise linear combination of temporally corresponding elements of the selected two or more contraction signals s1, s2, s3, s4). As an example in this regard, Figure 11 depicts a single contraction/relaxation cycle of a combined contraction signal s_sum derived as s_sum = s1 + s2 + s3 + s4 (e.g. as an element-wise sum Sk,sum = Sk,1 + Sk,2 + Sk,3 + Sk,4, k = 1, ..., K) and scaled such that the peak height of the combined contraction signal s_sum is at unity.
As other examples, a combined contraction signal may be derived as a sample-wise product of two or more of the contraction signals s1, s2, s3, s4 or as a sample-wise ratio of two of the contraction signals s1, s2, s3, s4. As examples in this regard, a first combined contraction signal sa may be derived as a sum of the first contraction signal s1 and the fourth contraction signal s4, i.e. sa = s1 + s4, and a second combined contraction signal sb may be derived as a sum of the second contraction signal s2 and the third contraction signal s3, i.e. sb = s2 + s3. As an illustrative example, Figure 12A depicts a single contraction/relaxation cycle of the first combined contraction signal sa and the second combined contraction signal sb, both scaled such that the resulting signal level is normalized (e.g. scaled by a suitable scaling factor) to values around unity. In a yet further example, two or more (different) linear combinations of two or more contraction signals s1, s2, s3, s4 may be combined into a combined contraction signal e.g. by a sample-wise multiplication or sample-wise division. As an illustrative example in this regard, Figure 12B depicts a single contraction/relaxation cycle of a combined contraction signal sc derived as a sample-wise division of the first combined contraction signal sa by the second combined contraction signal sb (derived e.g. as Sk,c = Sk,a / Sk,b), followed by a scaling (by a suitable scaling factor).
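The combined signals discussed above then follow as simple element-wise operations on the rows of the `signals` array returned by the sketch in the foregoing (illustrative only; the sample-wise division assumes sb has no zero-valued samples).

s1, s2, s3, s4 = signals            # each row is a time series of length K
s_sum = s1 + s2 + s3 + s4           # element-wise sum (cf. Figure 11)
s_a = s1 + s4                       # first combined contraction signal
s_b = s2 + s3                       # second combined contraction signal
s_c = s_a / s_b                     # sample-wise ratio (cf. Figure 12B)
s_sum_scaled = s_sum / s_sum.max()  # scale peak height to unity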
The method 300 described in the foregoing via a number of examples may be implemented by an apparatus that comprises respective processing means for implementing the steps of the method 300, e.g. those described through blocks 302 to 314 or a limited subset thereof. The processing means may be provided by hardware means, by software means, or by a combination of hardware means and software means. As an example in this regard, Figure 13 illustrates a block diagram of some components of an exemplifying apparatus 400. The apparatus 400 may comprise further components, elements or portions that are not depicted in Figure 13. The apparatus 400 may be employed to implement the method 300.
The apparatus 400 comprises a processor 416 and a memory 415 for storing data and computer program code 417. The memory 415 and a portion of the computer program code 417 stored therein may be further arranged to, with the processor 416, implement the function(s) described in the foregoing in context of the method 300.
The apparatus 400 may comprise a communication portion 412 for communication with other devices. The communication portion 412, if present, comprises at least one communication apparatus that enables wired or wireless communication with other apparatuses. A communication apparatus of the communication portion 412 may also be referred to as a respective communication means. The apparatus 400 may further comprise user I/O (input/output) components 418 that may be arranged, possibly together with the processor 416 and a portion of the computer program code 417, to provide a user interface for receiving input from a user of the apparatus 400 and/or providing output to the user of the apparatus 400 to control at least some aspects of operation of the method 300 implemented by the apparatus 400. The user I/O components 418 may comprise hardware components such as a display, a touchscreen, a touchpad, a mouse, a keyboard, and/or an arrangement of one or more keys or buttons, etc. The user I/O components 418 may be also referred to as peripherals. The processor 416 may be arranged to control operation of the apparatus 400 e.g. in accordance with a portion of the computer program code 417 and possibly further in accordance with the user input received via the user I/O components 418 and/or in accordance with information received via the communication portion 412. Although the processor 416 is depicted as a single component, it may be implemented as one or more separate processing components. Similarly, although the memory 415 is depicted as a single component, it may be implemented as one or more separate components, some or all of which may be integrated/removable and/or may provide permanent/semi-permanent/dynamic/cached storage.
The computer program code 417 stored in the memory 415 may comprise computer-executable instructions that control one or more aspects of operation of the apparatus 400 when loaded into the processor 416. As an example, the computer-executable instructions may be provided as one or more sequences of one or more instructions. The processor 416 is able to load and execute the computer program code 417 by reading the one or more sequences of one or more instructions included therein from the memory 415. The one or more sequences of one or more instructions may be configured to, when executed by the processor 416, cause the apparatus 400 to carry out operations, procedures and/or functions described in the foregoing in context of the method 300.
Hence, the apparatus 400 may comprise at least one processor 416 and at least one memory 415 including the computer program code 417 for one or more programs, the at least one memory 415 and the computer program code 417 configured to, with the at least one processor 416, cause the apparatus 400 to perform operations, procedures and/or functions described in the foregoing in context of the method 300.
The computer programs stored in the memory 415 may be provided e.g. as a respective computer program product comprising at least one computer-readable non-transitory medium having the computer program code 417 stored thereon, which computer program code, when executed by the apparatus 400, causes the apparatus 400 at least to perform operations, procedures and/or functions described in the foregoing in context of the method 300. The computer-readable non-transitory medium may comprise a memory device or a record medium such as a CD-ROM, a DVD, a Blu-ray disc or another article of manufacture that tangibly embodies the computer program. As another example, the computer program may be provided as a signal configured to reliably transfer the computer program.
Reference(s) to a processor should not be understood to encompass only programmable processors, but also dedicated circuits such as field-programmable gate arrays (FPGA), application specific circuits (ASIC), signal processors, etc. Features described in the preceding description may be used in combinations other than the combinations explicitly described.
Although functions have been described with reference to certain features, those functions may be performable by other features whether described or not. Although features have been described with reference to certain embodiments, those features may also be present in other embodiments whether described or not.

Claims

1 . A method for analyzing contraction characteristics of one or more myocytes on basis of a time series of image frames that constitute a video se- quence that depicts said one or more myocytes, the method comprising a) obtaining, for each image frame, a respective reference plane level value that is descriptive of a reference intensity level in the respective image frame; b) obtaining a reference image that is descriptive of said one or more myo- cytes in a relaxed state; c) extracting from each image frame and the reference image respective one or more subsets of pixel positions in dependence of the reference plane level value obtained for the respective image frame; d) deriving, for each image frame in each of said one or more subsets, a respective difference frame that is descriptive of differences between the image frame and the reference image in the respective subset of pixel positions; and e) composing, on basis of pixel values in respective difference frames pertaining to said one or more subsets of pixel positions, respective one or more contraction signals that are descriptive of contraction characteristics of said one or more myocytes.
2. A method according to claim 1, wherein said time series of image frames depicts said one or more myocytes against a background and wherein the method step a) comprises computing the reference plane level value on basis of pixel values of a first region of interest, ROI, within the image plane of the given image frame, wherein the first ROI defines one of the following: a sub-area of the image area that depicts a portion of the background, a sub-area of the image area that depicts at least part of said one or more myocytes.
3. A method according to claim 2, wherein computing the reference plane level value on basis of pixel values of the first ROI comprises computing an average of the pixel values within the first ROI.
4. A method according to any of claims 1 to 3, wherein the method step b) comprises computing the reference image on basis of one or more image frames that depict said one or more myocytes in a relaxed state.
5. A method according to claim 4, comprising computing the reference image as a pixel-wise average of the pixel values across said one or more image frames.
6. A method according to any of claims 1 to 5, wherein the method steps b) to d) are carried out on basis of pixel values of a second ROI within the image area in said one or more image frames, wherein the second ROI defines a sub-area of the image plane that depicts at least part of said one or more myocytes.
7. A method according to any of claims 1 to 6, wherein said one or more subsets comprise one or both of an upper plane and a lower plane, and wherein the method step c) with respect to extracting the upper plane comprises, for a given image frame, deriving a respective upper plane frame as a first subset of pixel positions of the given image frame, wherein the first subset includes those pixel positions of the given image frame that have a pixel value that is higher than the reference plane level value obtained for the given image frame, and deriving a respective upper plane reference frame as a third subset of pixel positions of the reference image, wherein the third subset includes those pixel positions of the reference image that have a pixel value that is higher than the reference plane level value obtained for the given image frame; and wherein the method step c) with respect to extracting the lower plane comprises, for a given image frame, deriving a respective lower plane frame as a second subset of pixel positions of the given image frame, wherein the second subset includes those pixel positions of the given image frame that have a pixel value that is lower than the reference plane level value obtained for the given image frame, and deriving a respective lower plane reference frame as a fourth subset of pixel positions of the reference image, wherein the fourth subset includes those pixel positions of the reference image that have a pixel value that is lower than the reference plane level value obtained for the given image frame.
8. A method according to claim 7, further comprising, for the given frame,
deriving a respective normalized image frame by subtracting, from pixel values of the given frame, the reference plane level value obtained for the given image frame; and
deriving a respective normalized reference image by subtracting, from pixel values of the reference image, the reference plane level value obtained for the given image frame,
wherein
deriving the respective upper plane frame comprises extracting those pixels of the normalized image frame that have a positive value,
deriving the respective lower plane frame comprises extracting absolute values of those pixels of the normalized image frame that have a negative value,
deriving the respective upper plane reference frame comprises extracting those pixels of the normalized reference image that have a positive value, and
deriving the respective lower plane reference frame comprises extracting absolute values of those pixels of the normalized reference image that have a negative value.
9. A method according to claim 8, further comprising setting, for each of the upper plane frame and the lower plane frame, pixel values of non-extracted pixel positions of the respective normalized image frame to zero; and setting, for each of the upper plane reference frame and the lower plane reference frame, pixel values of non-extracted pixel positions of the respective normalized reference image to zero.
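The normalization and plane extraction of claims 7 to 9 could look as follows; a sketch under the assumption that frames are floating-point NumPy arrays, with the helper name `extract_planes` chosen here purely for illustration.

```python
import numpy as np

def extract_planes(frame, reference, level):
    """Split frame and reference image into upper/lower planes (claims 7-9).

    Lower-plane pixels are stored as absolute values (claim 8); pixel
    positions not extracted into a plane are set to zero (claim 9).
    """
    norm_f = frame - level        # normalized image frame
    norm_r = reference - level    # normalized reference image
    upper_frame = np.where(norm_f > 0, norm_f, 0.0)
    lower_frame = np.where(norm_f < 0, -norm_f, 0.0)
    upper_ref = np.where(norm_r > 0, norm_r, 0.0)
    lower_ref = np.where(norm_r < 0, -norm_r, 0.0)
    return upper_frame, lower_frame, upper_ref, lower_ref
```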
10. A method according to claim 8 or 9, wherein the method step d) comprises, for the given frame, deriving the upper plane difference frame by subtracting pixel values of the upper plane reference frame from spatially corresponding pixel values of the upper plane frame; and deriving the lower plane difference frame by subtracting pixel values of the lower plane reference frame from spatially corresponding pixel values of the lower plane frame.
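Given the planes above, step d) of claim 10 reduces to two pixel-wise subtractions; a one-function sketch using the same hypothetical variable names:

```python
def plane_difference_frames(upper_frame, lower_frame, upper_ref, lower_ref):
    """Upper/lower plane difference frames: frame minus reference (claim 10)."""
    return upper_frame - upper_ref, lower_frame - lower_ref
```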
11. A method according to any of claims 1 to 10, wherein the method step e) comprises composing the respective one or more contraction signals on basis of those pixel values of the difference frames pertaining to the respective subset of pixel positions whose absolute value exceeds a predefined threshold value.
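One way to realize the thresholding of claim 11 is to zero out difference pixels whose magnitude does not exceed the predefined threshold before they are summed into the contraction signals; the threshold value itself is application-specific and not given in the claim.

```python
import numpy as np

def suppress_small_differences(diff: np.ndarray, threshold: float) -> np.ndarray:
    """Keep only difference pixels whose absolute value exceeds the threshold."""
    return np.where(np.abs(diff) > threshold, diff, 0.0)
```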
12. A method according to any of claims 1 to 11, wherein said one or more subsets comprise one or both of an upper plane and a lower plane and wherein the method step e) comprises one or more of the following:
deriving a first contraction signal on basis of positive pixel values of the upper plane difference frames;
deriving a second contraction signal on basis of negative pixel values of the upper plane difference frames;
deriving a third contraction signal on basis of positive pixel values of the lower plane difference frames; and
deriving a fourth contraction signal on basis of negative pixel values of the lower plane difference frames.
13. A method according to claim 12, wherein
deriving the first contraction signal comprises computing, for each image frame, a respective first contraction level value as a sum of positive pixel values of the respective upper plane difference frame and arranging the computed first contraction level values in respective positions of a first time series to form the first contraction signal;
deriving the second contraction signal comprises computing, for each image frame, a respective second contraction level value as the absolute value of a sum of negative pixel values of the respective upper plane difference frame and arranging the computed second contraction level values in respective positions of a second time series to form the second contraction signal;
deriving the third contraction signal comprises computing, for each image frame, a respective third contraction level value as a sum of positive pixel values of the respective lower plane difference frame and arranging the computed third contraction level values in respective positions of a third time series to form the third contraction signal; and
deriving the fourth contraction signal comprises computing, for each image frame, a respective fourth contraction level value as the absolute value of a sum of negative pixel values of the respective lower plane difference frame and arranging the computed fourth contraction level values in respective positions of a fourth time series to form the fourth contraction signal.
14. A method according to claim 13, wherein
deriving the first contraction signal comprises forming, for each image frame, a respective first movement frame by extracting those pixels of the upper plane difference frame that have a positive value and computing the respective first contraction level value as a sum of pixel values of the first movement frame;
deriving the second contraction signal comprises forming, for each image frame, a respective second movement frame by extracting those pixels of the upper plane difference frame that have a negative value, multiplying values of the extracted pixels by -1 and computing the respective second contraction level value as a sum of pixel values of the second movement frame;
deriving the third contraction signal comprises forming, for each image frame, a respective third movement frame by extracting those pixels of the lower plane difference frame that have a positive value and computing the respective third contraction level value as a sum of pixel values of the third movement frame; and
deriving the fourth contraction signal comprises forming, for each image frame, a respective fourth movement frame by extracting those pixels of the lower plane difference frame that have a negative value, multiplying values of the extracted pixels by -1 and computing the respective fourth contraction level value as a sum of pixel values of the fourth movement frame.
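A sketch of the per-frame contraction level values of claims 13 and 14, applied to one difference frame at a time; here the boolean masks `diff > 0` / `diff < 0` play the role of the positive and negative movement frames of claim 14.

```python
import numpy as np

def contraction_level_values(diff: np.ndarray):
    """Per-frame samples: (sum of positive pixels, |sum of negative pixels|)."""
    positive = float(diff[diff > 0].sum())
    negative = float(-diff[diff < 0].sum())
    return positive, negative

# Applied per frame to the upper plane difference frame this yields samples
# of the first and second contraction signals; applied to the lower plane
# difference frame, samples of the third and fourth.
```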
15. A method according to any of claims 12 to 14, further comprising combining two or more of said first, second, third and fourth contraction signals into a combined contraction signal that is descriptive of contraction characteristics of said one or more myocytes.
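Claim 15 leaves the combination rule open; an element-wise sum of the four time series is one simple choice, shown purely as an illustration rather than as the claimed method.

```python
import numpy as np

def combined_contraction_signal(s1, s2, s3, s4) -> np.ndarray:
    """One possible combination: element-wise sum of the four signals."""
    return np.asarray(s1) + np.asarray(s2) + np.asarray(s3) + np.asarray(s4)
```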
16. A method according to any of claims 1 to 15, wherein said image frames comprise monochrome images.
17. A method according to any of claims 1 to 16, wherein said myocytes comprise human cardiomyocytes.
18. An apparatus comprising means for performing the method according to any of claims 1 to 17.
19. An apparatus for analyzing contraction characteristics of one or more myocytes on basis of a time series of image frames that constitute a video sequence that depicts said one or more myocytes, wherein the apparatus comprises at least one processor; and at least one memory including computer program code, which when executed by the at least one processor, causes the apparatus to:
obtain, for each image frame, a respective reference plane level value that is descriptive of a reference intensity level in the respective image frame;
obtain a reference image that is descriptive of said one or more myocytes in a relaxed state;
extract from each image frame and the reference image respective one or more subsets of pixel positions in dependence of the reference plane level value obtained for the respective image frame;
derive, for each image frame in each of said one or more subsets, a respective difference frame that is descriptive of differences between the image frame and the reference image in the respective subset of pixel positions; and
compose, on basis of pixel values in respective difference frames pertaining to said one or more subsets of pixel positions, respective one or more contraction signals that are descriptive of contraction characteristics of said one or more myocytes.
20. A computer program comprising computer readable program code configured to cause performing of the method of any of claims 1 to 17 when said program code is executed on a computing apparatus.
PCT/FI2018/050416 2017-06-02 2018-06-01 Analysis of myocyte contraction characteristics Ceased WO2018220282A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FI20175498 2017-06-02
FI20175498 2017-06-02

Publications (1)

Publication Number Publication Date
WO2018220282A1 (en) 2018-12-06

Family

ID=62631131

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2018/050416 Ceased WO2018220282A1 (en) 2017-06-02 2018-06-01 Analysis of myocyte contraction characteristics

Country Status (1)

Country Link
WO (1) WO2018220282A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6493041B1 (en) * 1998-06-30 2002-12-10 Sun Microsystems, Inc. Method and apparatus for the detection of motion in video
US20060275745A1 (en) * 2005-06-01 2006-12-07 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Method for optically determining dynamic behaviour of contracting muscle cells
US20130034272A1 (en) * 2010-04-12 2013-02-07 Ge Healthcare Uk Limited System and method for determining motion of a biological object
WO2014102449A1 (en) 2012-12-27 2014-07-03 Tampereen Yliopisto Visual cardiomyocyte analysis

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ALEX STALIN ET AL: "BSFD: BACKGROUND SUBTRACTION FRAME DIFFERENCE ALGORITHM FOR MOVING OBJECT DETECTION AND EXTRACTION", JOURNAL OF THEORETICAL AND APPLIED INFORMATION TECHNOLOGY, 28 February 2014 (2014-02-28), pages 623 - 628, XP055223103, Retrieved from the Internet <URL:http://www.jatit.org/volumes/Vol60No3/20Vol60No3.pdf> [retrieved on 20151023] *
LAHTI A. L.; V. J. KUJALA; H. CHAPMAN; A. P. KOIVISTO; M. PEKKANEN-MATTILA; E. KERKELA; J. HYTTINEN; K. KONTULA; H. SWAN; B. R. C: "Model for long QT syndrome type 2 using human iPS cells demonstrates arrhythmogenic characteristics in cell culture", DIS. MODEL. MECH., vol. 5, 2012, pages 220 - 230
ROSE H ET AL: "SIMULTANEOUS MEASUREMENT OF CONTRACTION AND OXYGEN CONSUMPTION IN CARDIAC MYOCYTES", AMERICAN JOURNAL OF PHYSIOLOGY, AMERICAN PHYSIOLOGICAL SOCIETY, US, vol. 261, no. 4, PART 02, November 1991 (1991-11-01), pages H1329 - H1334, XP008055276, ISSN: 0002-9513 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110866906A (en) * 2019-11-12 2020-03-06 安徽师范大学 Three-dimensional culture human myocardial cell pulsation detection method based on image edge extraction
CN114998231A (en) * 2022-05-24 2022-09-02 江苏艾玮得生物科技有限公司 Transient analysis method and device for myocardial microspheres

Similar Documents

Publication Publication Date Title
EP2939212B1 (en) Visual cardiomyocyte analysis
Garnavi et al. Automatic segmentation of dermoscopy images using histogram thresholding on optimal color channels
US9064143B2 (en) System and method for determining motion of a biological object
WO2006118060A1 (en) Skin state analyzing method, skin state analyzing device, and recording medium on which skin state analyzing program is recorded
US9569845B2 (en) Method and system for characterizing cell motion
JP5405994B2 (en) Image processing apparatus, image processing method, image processing system, and skin evaluation method
US20180342078A1 (en) Information processing device, information processing method, and information processing system
CN106780436B (en) Medical image display parameter determination method and device
US9070004B2 (en) Automatic segmentation and characterization of cellular motion
CN105701806B (en) Depth image-based Parkinson tremor motion characteristic detection method and system
JP2007252891A (en) Estimation method of evaluation value by visual recognition of beauty of skin
JP4383352B2 (en) Histological evaluation of nuclear polymorphism
JP2009082338A (en) Skin discrimination method using entropy
WO2018220282A1 (en) Analysis of myocyte contraction characteristics
CN107567631A (en) Tissue sample analysis technology
Maddah et al. Automated, non-invasive characterization of stem cell-derived cardiomyocytes from phase-contrast microscopy
JP2007252892A (en) Estimation method of evaluation value by visual recognition of three-dimensional shape of skin surface
US20250225657A1 (en) Methods of processing optical images and applications thereof
JP7678481B2 (en) Skin surface analysis device and skin surface analysis method
JP7274936B2 (en) Cell image compression device, method and program
Wu et al. Portable skin analyzer based on smartphone
US12458276B2 (en) Methods for non-invasive, label-free imaging of cellular immune response in human skin using a nonlinear optical microscopy imaging system
US20250329016A1 (en) Digital image analysis
WO2020261455A1 (en) Cell function evaluation method and cell analysis device
JP2021036830A5 (en)

Legal Events

Date Code Title Description

121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 18731884; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase
     Ref country code: DE

122  Ep: pct application non-entry in european phase
     Ref document number: 18731884; Country of ref document: EP; Kind code of ref document: A1