
WO2015160286A1 - Arrangements and methods in a microscope system - Google Patents


Info

Publication number
WO2015160286A1
Authority
WO
WIPO (PCT)
Prior art keywords
candidate
subobjects
resolution
region
information
Prior art date
Legal status
Ceased
Application number
PCT/SE2014/000050
Other languages
French (fr)
Inventor
Anders Rosenqvist
Current Assignee
Teknikpatrullen AB
Original Assignee
Teknikpatrullen AB
Priority date
Filing date
Publication date
Application filed by Teknikpatrullen AB


Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365Control or image processing arrangements for digital or video microscopes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N15/00Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G01N15/10Investigating individual particles
    • G01N15/14Optical investigation techniques, e.g. flow cytometry
    • G01N15/1429Signal processing
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N15/00Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G01N15/10Investigating individual particles
    • G01N15/14Optical investigation techniques, e.g. flow cytometry
    • G01N15/1429Signal processing
    • G01N15/1433Signal processing using image recognition
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N15/00Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G01N15/10Investigating individual particles
    • G01N15/14Optical investigation techniques, e.g. flow cytometry
    • G01N15/1434Optical arrangements
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/693Acquisition
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N15/00Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G01N15/10Investigating individual particles
    • G01N2015/1006Investigating individual particles for cytology

Definitions

  • the present invention relates to arrangements for object search and object depicting in a microscope system as well as methods for operating the arrangements.
  • Automatic microscope systems like, for example, the DM96 and the DM1200 systems from CellaVision AB are sold in increasing numbers. Such automatic microscope systems can, depending on model and application software, be used for analysis of, for example, blood smears and body fluid samples.
  • Figure 1 shows schematically an example of how an automatic microscope system can be composed.
  • Automatic microscope systems can search for interesting subobjects within an object 109, the object often being some biological sample placed on a glass slide, and then depict at least the interesting subobjects using a high magnification, high resolution setting for the optical subsystem 110 and the electronic image sensor 112 of the microscope system.
  • the object 109 which can be held in place by an object holder 108, is positioned using some positioning arrangement 106, which in turn may have a separate drive unit 104.
  • the object 109 is often illuminated by light transmitted from an illumination source 102 and through suitable holes in the positioning arrangement 106.
  • a single control unit 100 is controlling all sensors and actuators while also receiving, processing and analysing the images from the electronic image sensor 112.
  • Resolution can be used to describe the maximum ability of the optical subsystem 110 to resolve small details of the object 109 and can be expressed in line pairs per micrometer of object. Magnification is important for adapting a detailed optical image, produced by an optical subsystem, to a sensor, for example a human eye or an electronic image sensor. If the magnification is not high enough for the actual combination of optical resolution and sensor, some of the details in the optical image on the sensor are lost when sampled by the sensor (undersampling) and the resolution will be limited by the sensor.
  • If the magnification is too high, there is a great number of pixels per micrometer of object and the resolution will be limited by the resolution in the optical image on to the sensor.
  • pixel density: the number of pixels per micrometer of object in an image
  • resolution: the resolvable details in an image, measured in line pairs per micrometer of object
  • the optical subsystem 110 has only one single, high magnification, high resolution, setting. Since the area of the electronic image sensor 112 is limited, the high magnification results in a narrow field of (simultaneous) view of the object 109.
  • the DiffMaster Octavia has to search for possible interesting subobjects using one narrow field of view at a time and under focus control in order not to miss any possible interesting subobjects.
  • the search and the automatic focus control are performed by the control unit 100 using images from the electronic image sensor 112. The result is an automatic microscope system with a low number of parts and with a very good resolution for depicting subobjects, but at the price of a quite slow search speed.
  • the search for possible interesting subobjects is made using a low magnification, low resolution setting for the optical subsystem 110 resulting in a set of candidate subobjects and then the depicting of the candidate subobjects is made using a high magnification, high resolution setting for the optical subsystem 110 by returning to their previously determined positions.
  • the result is a quite fast system, with a very good resolution for depicting subobjects but at the price of an increased number of parts compared to the DiffMaster Octavia.
  • An increased number of parts may in addition require extra costs for the hardware, more required space and more adjustments, and may also result in lower reliability and added weight of the microscope system.
  • the optical subsystem 110 of the DM96 system contains a motorised revolver equipped with multiple microscope objectives in combination with a single relay lens unit with a fixed back focal length which is used in combination with all of the objectives. These parts of the optical subsystem 110 are not shown in Figure 1.
  • the relay lens unit accommodates the light from the objective currently in use on to a single fixed mount camera with an electronic image sensor 112. There is a single illumination source 102 used by all the objectives.
  • the DM1200 system uses up to three fixed mount objectives in combination with two movable relay lens units and a single movable camera with an electronic image sensor 112.
  • the two relay lens units have one fixed back focal length each.
  • By lining up the camera with one selected relay lens unit and one selected objective a number of resulting resolutions and magnifications can be achieved.
  • microscopy immersion oil is often used between the optical subsystem 110 and the object 109 in order to achieve the highest possible optical resolution, which in turn may be required for a reliable analysis result from the automatic microscope system.
  • the automatic microscope system may not be able to go back and search for more possible interesting subobjects in case the set of candidate subobjects was found, during depicting, to include too many false positives, i.e. too few interesting subobjects.
  • the automatic microscope system may have to search for a great excess of possible interesting subobjects before applying the immersion oil and switching to a high magnification, high resolution setting. As a result, most of the time the great excess is not fully used, leading to a waste of time during search.
  • the need for excess may, however, be eliminated if the automatic microscope system is capable of producing high magnification, high resolution images and low magnification, low resolution images more or less simultaneously and without any time consuming switching of mechanical settings of the optical subsystem 110 or the electronic image sensor 112.
  • a microscope may use a plurality of illumination colours and, due to colour filters and beam splitters inside the optical subsystem, conduct the light along different optical paths for different colours, resulting in different magnifications and resolutions for different colours.
  • If some colour with low magnification is used for searching for possible interesting subobjects while the remaining colours with higher magnifications are used for depicting the candidate subobjects, and if a single electronic image sensor is used for capturing all these colours, it will not be possible to capture enough information for high resolution, high magnification, colour images.
  • With two separate electronic image sensors, and possibly some additional beam splitters and/or colour filters, it may however be possible.
  • Neither multiple optical paths nor multiple electronic image sensors are shown in Figure 1. The result is a fast system, with a very good resolution for depicting subobjects, but at the price of an increased number of parts compared to the DiffMaster Octavia, especially if the depicting is to be made with all colours.
  • a "scene analysis apparatus" may, inside the optical subsystem, use a beam splitter and conduct the light from the object and the objective along two parallel optical paths, each with a separate electronic image sensor. There is one path providing low magnification, low resolution images with a wider field of view of the object and one path providing simultaneous high magnification, high resolution images with a narrow field of view of the object.
  • the disclosed apparatus searches for possible interesting subobjects using the low resolution images. While the apparatus moves previously found candidate subobjects into the field of view of the high resolution image sensor, the movement also positions a new part of the object for search in the field of view of the low resolution image sensor. The result is a fast system, with a very good resolution for depicting subobjects, but still at the price of an increased number of parts compared to the DiffMaster Octavia.
  • the object is achieved wholly or partly with an automatic microscope system with an optical subsystem that can produce distorted images of an object on to an electronic image sensor in combination with a method for searching and depicting the object.
  • the distorted images of the object have, simultaneously, at least two regions per image. There is a first region having at least a first resolution providing information that is at least good enough for detection of subobjects of interest. There is also a second region having at least a second (higher) resolution providing information that is at least good enough for depicting subobjects with a very good resolution.
  • the object is navigated using a positioning arrangement, while information from the first region of the distorted images is used for detecting interesting subobjects and storing these together with estimated coordinates and while information from the second region of the distorted images is used for depicting previously detected interesting subobjects, when present in the second region.
  • a control unit executes the method for searching and depicting, by sequentially determining new coordinates to visit according to a navigation algorithm, by sequentially controlling the positions of the object relative to the optical subsystem according to the new coordinates, by performing detection in new areas of the object that are brought into the first region and, when at least one detected subobject is present in the second region, by depicting such subobjects.
  • the method for searching and depicting can provide reconstructed versions of the distorted images as a way of normalising the magnification and the "pixel density" of the images.
  • the reconstructed versions of the distorted images can be used by humans or by image analysis algorithms, making an automatic microscope according to the present invention at least as useful as a prior art system.
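Reconstruction as described above can be sketched as an inverse pixel mapping: for every pixel of the normalised output image, the forward distortion function tells which pixel of the distorted sensor image to sample. The sketch below is illustrative only; the radial model, the coefficient `k1` and the nearest-neighbour sampling are assumptions, not the patent's actual method.

```python
import numpy as np

# Hypothetical barrel-type radial distortion: r_d = r_u * (1 + k1 * r_u^2).
# k1 < 0 means the magnification decreases away from the optical axis.
def radial_distort(r_u, k1=-1e-4):
    return r_u * (1.0 + k1 * r_u**2)

def reconstruct(distorted, out_shape, k1=-1e-4):
    """Nearest-neighbour reconstruction: for every pixel of the normalised
    output image, use the forward distortion function to find which pixel
    of the distorted sensor image to copy."""
    h, w = out_shape
    cy = (distorted.shape[0] - 1) / 2.0   # optical axis assumed at sensor centre
    cx = (distorted.shape[1] - 1) / 2.0
    oy, ox = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.zeros(out_shape, dtype=distorted.dtype)
    for y in range(h):
        for x in range(w):
            du, dv = x - ox, y - oy               # undistorted coordinates
            r_u = np.hypot(du, dv)
            scale = radial_distort(r_u, k1) / r_u if r_u > 0 else 1.0
            sx = int(round(cx + du * scale))
            sy = int(round(cy + dv * scale))
            if 0 <= sx < distorted.shape[1] and 0 <= sy < distorted.shape[0]:
                out[y, x] = distorted[sy, sx]
    return out
```

In practice interpolation (e.g. bilinear) would be used instead of nearest-neighbour, and the distortion function would come from calibration of the actual optical subsystem.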
  • the speed of an automatic microscope according to the present invention depends on the rate of successfully depicted true subobjects of interest. This rate, in turn, depends on both the rate of depicted candidates and on the ratio of true subobjects among the depicted candidates.
  • the rate of depicted candidates of an automatic microscope according to the present invention will to some extent depend on the possible number of simultaneous subobjects in the second region.
  • the size and shape of the second region will depend on the depicting resolution required for the actual candidates and can therefore be adaptive.
  • the field of view of high magnification and high resolution is however probably fixed and quite narrow.
  • the ratio of true subobjects among the depicted candidates depends, for the present invention as well as for the prior art, on the ratio of false positive detections.
  • An advantage of the present invention is that although detection is performed using a first region having at least a first resolution, the detection can be tailored to the resolution of the actual position within the first region using information from a compression chart. Thereby the number of false positives can be kept as low as possible.
  • Another, similar, advantage of the present invention is that some of the detected subobjects can be evaluated using additional, improved resolution, information from a third region of a later distorted image but still before arriving in the second region.
  • Figure 1 which has already been discussed above, shows schematically an example of an automatic microscope system
  • Figure 2 shows schematically some different embodiments of an optical subsystem for achieving distorted images
  • Figure 3 shows some simple examples of image distortion
  • Figure 4 is a flow chart describing a method for detecting and depicting subobjects according to an embodiment of the present invention
  • Figure 5 is a flow chart describing a reconstruction method according to an embodiment of the present invention.
  • Figure 6 is a block diagram of physical and signal processing principles for a supervision method according to an embodiment of the present invention.
  • Figure 7 is a flow chart describing a method, according to an embodiment of the present invention, for supervision of a reconstruction method. Description of Embodiments
  • Fig. 1 which has been discussed in the Background Art section, shows schematically an example of an automatic microscope system according to an embodiment of the invention.
  • the microscope, which can be, for example, a brightfield microscope, a darkfield microscope or a phase contrast microscope, comprises the following main components: a control unit 100 for controlling the automatic microscope as well as for processing and analysis of electronic images acquired from an electronic image sensor 112, a positioning arrangement 106 for carrying an object holder 108 which in turn can carry an object 109, a drive unit 104 for driving the positioning arrangement, an illumination source 102 for illuminating the object, and an optical subsystem 110 for producing optical images of the object 109 on to the electronic image sensor 112.
  • the microscope may also comprise an optional oil drop unit 114 for dropping immersion oil on to the object and optional position sensors 116 for providing information on the state of the positioning arrangement 106 to the control unit 100.
  • the control unit 100 can be a personal computer or an embedded system.
  • the control unit 100 can contain configurable hardware, such as a field programmable gate array (FPGA).
  • the control unit can contain software for performing steps of some method according to the invention or contain configuration information for configurable hardware.
  • the control unit 100 can also, at least partly, be implemented using remote hardware like a server in a computer network.
  • the positioning arrangement 106 is movable in a plane essentially perpendicular to the optical axis 111 of the optical subsystem 110, so that different parts of the object 109 can be imaged by the optical subsystem 110.
  • the drive unit 104 can contain drive mechanisms like electrical motors and other actuators for moving the positioning arrangement 106.
  • the drive unit 104 can also contain power electronics for motors and other actuators.
  • the object holder 108 can be designed for manual or automatic placement and removal of the object 109.
  • the object 109 can be any type of object to be examined in a microscope. It can be a biological object on a glass slide, such as a tissue sample or a blood sample in which cells are to be identified and examined, but it can also be a non-biological object.
  • the optional oil drop unit 114 can contain an oil tank and a mouthpiece, which are connected by a pump or simply by an on/off valve if gravitational feed is provided.
  • the mouthpiece can be placed somewhere where it can be reached by the object using the positioning arrangement 106.
  • the optional position sensors 116 can provide the control unit 100 with information on where the movable parts of the positioning arrangement 106 are situated. The information can be coarse, like from on/off switches that react when a certain point is reached, or analog information on the position of some parts of the positioning arrangement.
  • the optical subsystem 110 has at least one setting.
  • a setting of the optical subsystem 110 corresponds to producing a type of optical image of the object 109 on to the electronic image sensor 112.
  • the type of optical image can be, but is not limited to, a 10x magnification low resolution image, a 50x magnification medium resolution image or a 100x magnification high resolution image.
  • the resolution may however be further limited by the illumination from the illumination source and/or the pixel pitch of the electronic image sensor.
  • the optical subsystem 110 has at least one setting which achieves significant distortion when producing an optical image of the object 109 on to the electronic image sensor 112.
  • the distortion can correspond to a magnification (of the optical image on the electronic image sensor) that decreases with the distance to the optical axis 111.
  • Today, high resolution fields of view of more than 220 µm in diameter at the object can be achieved.
  • a commercially available Olympus objective for oil immersion brightfield microscopy marked "Olympus PlanN 100x/1.25 Oil F.N. 22" can, at 100x magnification, produce an image with a diameter of 22 mm without any significant distortion or curvature of field.
  • Such an image corresponds to a field of view, at the object, of 22 mm/100x, which is 220 µm.
  • An electronic image sensor 112 with 640 times 480 pixels and a pixel pitch of 10 µm combined with a magnification of 100x uses only approximately 64 µm * 48 µm of the 220 µm diameter object area.
  • the potential gain in object area, if distortion can be used in order to compress the optical image, is thus roughly 220*220/64/48, which is approximately 16 times. This gain may correspond to detecting one white blood cell in every other distorted image, instead of detecting one white blood cell in every 32nd 100x magnification distortion free image.
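The field-of-view and area-gain figures above can be checked with a few lines of arithmetic (all values are taken directly from the text):

```python
# Verification of the figures quoted in the text.

fov_um = 22_000 / 100          # 22 mm image circle at 100x magnification
assert fov_um == 220.0         # field of view at the object: 220 um

sensor_w_um = 640 * 10 / 100   # 640 pixels, 10 um pitch, 100x -> 64 um
sensor_h_um = 480 * 10 / 100   # 480 pixels, 10 um pitch, 100x -> 48 um
assert (sensor_w_um, sensor_h_um) == (64.0, 48.0)

# Rough gain in object area if distortion compresses the 220 um circle
# onto the sensor (square approximation 220*220 as used in the text):
gain = 220 * 220 / (sensor_w_um * sensor_h_um)
assert 15.7 < gain < 16.0      # "approximately 16 times"
```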
  • the requirements on the optical subsystem 110 can be relaxed compared to the requirements on the traditional high fidelity images described above.
  • the relaxation depends on the desired resolution in different regions of the distorted optical image on the electronic image sensor.
  • In a first region (of the optical image on the electronic image sensor), which is at least a bit off from the optical axis 111, the magnification is lower than on the optical axis 111.
  • the resolution in the corresponding first region of images from the sensor is (due to lower magnification and less dense sampling) lower. Thereby the requirements on resolution of the optical subsystem may also be lowered for this first region, meaning that some spherical aberration (SA), coma and curvature of field can pass unnoticed.
  • significant distortion is desired and the distortion may even be different for different colours (lateral chromatic aberration, L-CA) as long as the resolution is sufficient for detecting interesting subobjects.
  • the result shall, after reconstruction, be high fidelity high resolution (colour) images.
  • Distortion and L-CA (but not much more) can be relaxed in the second region, since reconstructed images for different colours can be scaled and merged, see WO0055667A1, where, however, the images are not distorted. If there is significant A-CA in the second region, it can be handled by focusing separately for different colours, followed by scaling and merging, see WO0055667A1.
  • In Fig. 2, three different embodiments of a setting of the optical subsystem for achieving distorted images are shown.
  • the optical subsystem 200 contains a specially designed objective 202, and a relay lens unit 204.
  • the optical subsystem 210 contains a standard objective 212, a relay lens unit 214 and a curved mirror 216 that compresses (causes barrel distortion to) the optical image in two dimensions on its way to the electronic image sensor 112.
  • Alternatively, a mirror can be used that eliminates the need for a relay lens unit.
  • Such a mirror may only give distortion along one dimension.
  • the optical subsystem 220 contains a standard objective 222, an angular magnification unit 224 and a fisheye type lens unit 226.
  • the angular magnification unit can be designed for parallel light in, to suit an infinity focus objective, and parallel light out, and to have an angular magnification suitable for giving a desired second region magnification when combined with the fisheye type lens unit 226 and the objective 222.
  • the angular magnification unit can work like a telescope.
  • the angular magnification unit 224 will adapt the quite small angles of the parallel bundles out of the standard objective 222 to fit the fisheye type lens unit 226 where the field of view, which can be used for maximum distortion, may be up to ±90 degrees.
  • the objectives 202, 212, 222 can be designed for immersion oil between object 109 and objective.
  • the objectives 202, 212, 222 can be designed for infinity focus meaning that points of the object 109, which are in focus and within the field of view at the front of the objective, result in bundles (one per point) of parallel rays at the back of the objective.
  • the objectives 202 and 212 can have finite back focal lengths, making the corresponding relay lens unit unnecessary but at the risk of redesign if the magnification has to be changed due to a change of sensor pixel pitch.
  • the relay lens units 204 and 214 can have back focal lengths suitable for giving the desired magnifications in the second region, i.e. close to the optical axis 111, when combined with the other components of the corresponding embodiment.
  • While the distortion is designed to be achieved inside the objective 202, at the mirror 216 and inside the fisheye type lens 226, respectively, the other optical components of each embodiment may also be designed to contribute to the distortion.
  • Fig. 3A shows an example of so called radial distortion, meaning that the distortion is a function of the radial distance to an optical axis 111 of the optical subsystem 110.
  • To the left in Fig. 3A, three concentric circles 300, 302 and 304 with increasing radii are shown as they can appear on an electronic image sensor 112 if the corresponding optical subsystem 110 is substantially free from distortion.
  • To the right in Fig. 3A corresponding, still concentric, circles 310, 312 and 314 are shown after one possible distortion.
  • the optical axis 111 goes through the origins both to the left and to the right in Fig. 3A.
  • the magnification to the right decreases, due to the distortion, with the distance to the optical axis 111.
  • In Fig. 3B, the radial distortion function 330 used in the example of Fig. 3A is shown.
  • r_u and r_d correspond to undistorted radius and distorted radius, respectively.
  • the positions of the three dots 320, 322 and 324 in Fig. 3B correspond to pairs of (r_u, r_d) of the three concentric circles of Fig. 3A.
  • the slope of r_d with respect to r_u decreases with the distance to the optical axis, corresponding to decreasing magnification.
  • Equations 1-3 which are radial components of Brown's distortion model, show that x_u and y_u are distorted by the same factor, which is equivalent to preservation of the angle.
  • r_d can be computed using r_d = r_u * (1 + K_1*r_u^2 + K_2*r_u^4 + K_3*r_u^6), i.e. the radial form of Brown's distortion model referenced above.
  • An object at a radius corresponding to 320 will have its details magnified according to the slope (d r_d/d r_u) in 320, while an object at a radius corresponding to 324 will have its details magnified according to the slope (d r_d/d r_u) in 324, which is lower than the one in 320. Therefore an object at 320 will have a higher "pixel density" (i.e. occupy more pixels) at the electronic image sensor 112 than an equally sized object at 324. The result is that there will be a higher "pixel density" and thus more possible detail information for an object at 320 than for a corresponding object at 324. This effect can also be seen in Fig. 3A.
  • the distortion function affects, via the magnification, how densely an object is sampled at the electronic image sensor 112 and thereby the resolutions at different regions of the electronic image sensor 112.
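The relation between a radial distortion function, its slope and the local magnification can be sketched as follows; the Brown-type model and the coefficient K1 are hypothetical choices for illustration, not values from the patent:

```python
# Hypothetical smooth radial distortion function (Brown-type):
#   r_d = r_u * (1 + K1 * r_u^2),  K1 < 0 (barrel distortion).
K1 = -2.0e-3

def r_d(r_u):
    return r_u * (1.0 + K1 * r_u**2)

def local_magnification(r_u):
    # The slope d r_d / d r_u, i.e. the local radial magnification.
    return 1.0 + 3.0 * K1 * r_u**2

# On the optical axis the nominal magnification applies ...
assert local_magnification(0.0) == 1.0
# ... and it falls off with the distance to the axis, so an object near
# the axis occupies more sensor pixels than an equal object further out:
assert local_magnification(2.0) > local_magnification(8.0)
```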
  • reconstruction can be used to increase the "pixel density"
  • the distortion will still limit the resolution after reconstruction, since the reconstruction is based on the limited information in the images of the electronic image sensor 112.
  • the resolutions, after reconstruction, for different regions of the image of the electronic image sensor 112, are defined by the setting of the optical subsystem 110 and its corresponding distortion function and by properties like the pixel pitch of the electronic image sensor 112. For example, there may be a first region corresponding to the resolution of a traditional 10x-25x magnification low resolution objective for detection of interesting subobjects. There can be a second region, which after reconstruction can give images corresponding to a traditional 100x magnification high resolution objective for detailed depicting of subobjects.
  • Reconstruction is a way of normalising the magnification and the "pixel density" for easier understanding of the images by humans or for easier processing and analysis of the images by algorithms.
  • the "pixel density" after reconstruction can be chosen to correspond to at least twice the highest resolution in the reconstructed image, according to the Nyquist sampling theorem.
  • the electronic image sensor 112 may be mounted in various positions relative to the optical axis
  • the image sensor 112 can be mounted symmetrically around the optical axis 111.
  • the image sensor can be mounted with the optical axis 111 close to an edge or a corner of the image sensor
  • the mount can be adjustable, by hand or by some actuator.
  • the position of the image sensor 112 with respect to the optical axis 111 will affect the distortion, magnification and resolution in different regions of the distorted images from the electronic image sensor 112.
  • Fig. 3C shows another example of radial distortion.
  • To the left in Fig. 3C there are three circles 340, 342 and 344 with identical radii but with different displacements in the x-direction relative to the optical axis 111, which is in the origin.
  • To the right in Fig. 3C the resulting appearances are shown according to the same radial distortion function as in Fig. 3A.
  • the circle 344, which is centered on the optical axis to the left, is still a circle 354 around the optical axis to the right.
  • the other two circles 340 and 342 to the left do not appear as circles to the right, since the circles 340 and 342 do not have a constant radius with respect to the optical axis and since r_d is not proportional to r_u.
  • When the radial distortion function is essentially a straight line, r_d is proportional to r_u and the shape of the objects is preserved.
  • the displacement of the undistorted circles 340 and 342 causes a change in shape after distortion. That effect is useful for example when supervising or estimating a radial distortion function.
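The shape-change effect can be illustrated numerically: mapping a centred and a displaced circle through a hypothetical radial distortion function (the coefficient k1 is an assumption) shows that the centred circle stays circular while the displaced circle does not:

```python
import numpy as np

def distort(points, k1=-2.0e-3):
    # Radial distortion about the optical axis at the origin: each point
    # is scaled by (1 + k1 * r_u^2) along its own radius, so the angle to
    # the axis is preserved (as in Eq. 1-3).
    r2 = (points**2).sum(axis=1, keepdims=True)
    return points * (1.0 + k1 * r2)

def radius_spread(pts):
    # Max minus min distance from the shape's own centroid:
    # zero for a circle, positive otherwise.
    c = pts.mean(axis=0)
    r = np.hypot(*(pts - c).T)
    return r.max() - r.min()

t = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)

# A circle centred on the optical axis (like 344) stays a circle:
assert radius_spread(distort(circle)) < 1e-9
# A displaced circle (like 340) is no longer a circle after distortion:
assert radius_spread(distort(circle + np.array([8.0, 0.0]))) > 0.01
```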
  • In the examples above, the distortion function is a smooth radial distortion function.
  • the distortion function does not, however, necessarily have to be a smooth radial distortion function like in Eq. 1-3. It does not even have to be a radial distortion function. It is possible to take care of other distortions, for example caused by different optical parts not being properly aligned, but then the equivalents of Eq. 1-3 get more complicated and there will be additional parameters to supervise and/or estimate.
  • the illumination source 102 can be a single source that supports all settings of the optical subsystem 110.
  • the illumination source 102 can consist of multiple subsources or have settings of its own in order to support the settings of the optical subsystem.
  • the illumination source or subsources may use light emitting diodes (LEDs) or incandescent light bulbs.
  • the illumination source 102 can also contain power electronics or optics for forming the illumination in a way that suits the object 109 or the optical subsystem 110.
  • the optical axis 111 may not always be straight, see Fig. 2B.
  • the optical axis can be a reference for radial distortion functions and radial distortion models.
  • the electronic image sensor 112 can be a two-dimensional CCD sensor or CMOS sensor or some other suitable sensor.
  • Fig. 4 is a flow chart showing possible steps of an example method for controlling an automatic microscope according to the invention.
  • the object 109 of study is assumed to be in the object holder 108 and the settings of the illumination source 102, the optical subsystem 110, the electronic image sensor 112 and other parts are assumed to be set.
  • the object of study is positioned in a desired position relative to the optical subsystem. This can, for example, be in a corner of a possibly rectangular area of the object 109.
  • a distorted image from the electronic image sensor 112 is acquired.
  • the image is distorted due to a distortion function, possibly a radial distortion function.
  • autofocusing can be performed, which is well known to a person skilled in the art.
  • at least parts of the distorted image may be stored for future use.
  • detection of possible interesting subobjects is performed using information from a first region of the acquired distorted image.
  • the resolution is at or above a first level, referred to as the "first resolution".
  • the detection can be performed in at least three ways, which differ not only in how they are performed but also in how much memory, energy and computational resources they can require.
  • a first way of detection can be to detect based only on a corresponding (partly) reconstructed image.
  • a second way of detection can be to detect based only on the distorted image.
  • the application dependent detection criterion that is used may be adjusted according to the distortion/magnification and the resolution at the actual position in the distorted image. By such adjustment, the detection performance may be improved.
  • a third way of detection can be to use the distorted image for pre-detection of possible areas of interest, to perform reconstruction of the areas of interest and then to detect interesting subobjects based on reconstructed images of the areas of interest.
  • pixel density and resolution are not synonyms. While detecting using a (partly) reconstructed image it can be beneficial to use a compression chart that describes the optical compression of details that occurs when the optical subsystem produces a distorted image of the object onto the electronic image sensor 112.
  • the resolution can, as pointed out earlier, be expressed in line pairs per micrometer of object, while the compression chart can provide, for example, a compression factor that, if multiplied with the resolution on the optical axis, gives at least an estimate of the reduced resolution in different parts of the reconstructed image. Equation 4 above can be used for computing the compression factor as the slope of r_d with respect to r_u.
  • the compression factor, which for barrel distortion is less than or equal to one, will be different in different parts of the reconstructed image.
  • Such a compression chart can be organised as an image.
  • Such a compression chart can alternatively be expressed using mathematical expressions that describe borders between different regions of different compressions within the reconstructed image.
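Assuming a polynomial radial distortion model r_d = r_u(1 + K1·r_u² + K2·r_u⁴) as an illustrative stand-in for Eq. 1-3, the compression factor of Equation 4 can be sketched as the analytic slope of r_d with respect to r_u; the coefficient values are assumptions.

```python
K1, K2 = -0.05, 0.002  # illustrative barrel-distortion coefficients

def compression_factor(r_u):
    """Slope of r_d with respect to r_u (Equation 4) for the model
    r_d = r_u * (1 + K1*r_u**2 + K2*r_u**4)."""
    return 1.0 + 3.0 * K1 * r_u**2 + 5.0 * K2 * r_u**4

def estimated_resolution(axis_resolution, r_u):
    """Estimate of the reduced resolution (line pairs per micrometer
    of object) at radius r_u: the on-axis resolution multiplied by
    the compression factor."""
    return axis_resolution * compression_factor(r_u)
```

Evaluating these functions over a grid of pixel coordinates would produce a compression chart organised as an image, as described above.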
  • At step 406, information on detected possible subobjects, now called candidate subobjects, is stored, possibly by storing at least estimated coordinates of the candidate subobjects in a list or in some other searchable storage structure. Parts of the distorted image, parts of a reconstructed image, settings of illumination, filters, exposure times, the estimated size of the candidate subobject or other information can also be stored.
  • the list of candidate subobjects is evaluated in order to determine whether there may be any candidate subobjects in the second region of the current distorted image. If so, the method proceeds to step 410. If not, the method skips to step 414.
  • the second region has a resolution that for all parts of this second region is at or above a second level, referred to as the "second resolution". The second resolution is good enough for high resolution depicting of subobjects.
  • the position of a candidate subobject is depicted using the second region of a distorted image that is properly focused at least in the area corresponding to the candidate subobject. Then a high resolution image of the area corresponding to the candidate subobject can be formed.
  • the high resolution image can consist of a (partly) reconstructed image.
  • the high resolution image can also consist of pixels from parts of distorted images together with information needed for later reconstruction.
  • the high resolution image of the candidate can be checked in order to determine that it contains an interesting subobject.
  • a software counter of depicted subobjects can be incremented and the high resolution image can be stored.
  • the information on detected subobjects can be updated to show that the actual candidate subobject has been depicted, possibly by deleting the present candidate from the list/storage structure.
  • the list/storage structure of candidate subobjects can be further evaluated in order to determine whether there may be any candidate subobjects in a third (middle resolution) region of the current distorted image. If so, the method proceeds to step 416. If not, the method skips to step 420.
  • the position of a candidate subobject is depicted using the third region of a distorted image.
  • At step 418, information from the third region of the distorted image, although not good enough for high resolution depicting, can be used for evaluating and possibly confirming the actual candidate subobject. If, after an evaluation using the additional information from the third region, it is found that the candidate subobject is a false positive candidate, the information on detected subobjects can be updated to show that the present candidate subobject is no longer a candidate and that no depicting should be performed for that candidate.
  • At step 420, it is determined whether the combined searching and depicting according to the method should go on or not.
  • a software counter of depicted subobjects may indicate that a sufficient number of subobjects has been depicted and the method can be ended.
  • the method can detect that a complete preset area of the object 109 has been searched and that all candidate subobjects from that area have been depicted and then end the method.
  • the method goes on and the next desired position for the object 109 relative to the optical subsystem 110 is determined.
  • methods for prediction of the best focusing position may also be performed. Such prediction methods are known to the person skilled in the art and are not described here.
  • the next desired (relative) position can be determined according to a navigation algorithm.
  • the navigation algorithm can be really simple: As long as there are candidate subobjects remaining to depict, go to the closest one and, while there, also search the corresponding first region and store new candidate objects. If there currently are no candidate subobjects, determine the next desired position in order to maximise the usefulness of the first region of the next distorted image.
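The simple navigation policy just described might be sketched as follows; the coordinate representation and the fixed search step are illustrative assumptions.

```python
import math

def next_position(current, candidates, search_step=(1.0, 0.0)):
    """Minimal navigation sketch: if candidate subobjects remain,
    go to the closest one; otherwise step onward so that the first
    region covers fresh, unsearched area. Coordinates are (x, y)
    tuples and the search step is an illustrative constant."""
    if candidates:
        return min(candidates, key=lambda c: math.dist(current, c))
    return (current[0] + search_step[0], current[1] + search_step[1])
```

A more elaborate navigation algorithm would replace the two branches with a prioritisation over searching, evaluating and depicting, as discussed in the following points.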
  • the navigation algorithm can be a bit more complicated: If steps corresponding to steps 414, 416 and 418 are implemented, it is possible to evaluate candidates using a third region with at least a third resolution. This evaluation can be performed in order to not waste the limited area of the second region of future distorted images on false positive candidates.
  • the third region can be used if it just happens to contain a candidate subobject, or the navigation algorithm may use the third region in a more active way by changing the priorities of activities like maximizing the useful search area of the first region, evaluating possibly false candidates and depicting candidates. It may save time to pause the depicting and instead evaluate a number of candidates using the third region of a single distorted image and possibly find out that only one of the evaluated candidates is a true positive candidate.
  • the navigation algorithm may benefit from being even more complicated:
  • the method may be searching for both a first kind and a second kind of subobjects simultaneously.
  • the navigation algorithm may also have to prioritise not only searching, evaluating and depicting but also the two kinds of subobjects.
  • the second kind of subobjects may be complicated and result in a high ratio of false positive candidates when detected using first region information only.
  • the navigation algorithm may be designed to work in a way that minimises the risk for unsearched areas of the object.
  • the navigation algorithm may be designed to work in a way that minimises the risk for the same subobject being detected/depicted twice, especially if there is significant play in the positioning arrangement 106.
  • the second region corresponds to a limited field of view of the object, so there may exist large subobjects or dense clusters of multiple subobjects that cannot be depicted using the second region of a single distorted image.
  • the large/clustered subobjects may be handled at steps 406, 410, 412, 414, 418 and/or 422:
  • the candidate subobject may seem too large or seem to be part of a dense, too large cluster of multiple subobjects.
  • the method may then determine that multiple images of second region resolution are needed in order to depict the candidate subobject.
  • a large or clustered candidate subobject can be stored as a plurality of candidate subobjects with different coordinates. Perhaps the target number of depicted candidates may have to be adjusted as well.
  • the method can acquire and merge multiple images of second region resolution in order to depict the candidate subobject.
  • the method may also check that the whole candidate (clustered) subobject was depicted.
  • a segmentation of a possible cluster may be performed.
  • information from a third region can be used in order to gain information on the size of the large/clustered subobjects.
  • the navigation algorithm may have to navigate sequentially to multiple parts of large/clustered subobjects. Since the high magnification high resolution images of existing systems are also limited in field of view, the problem of large/clustered subobjects is not unique to the present invention and is therefore not further discussed.
  • the control unit 100 can use registering of consecutive distorted images after reconstruction in order to supervise the position of the object 109 relative to the optical subsystem 110, as is well known to a person skilled in the art, see for example EP1377865B1, where, however, the images used have not been reconstructed.
  • the electronic image sensor 112 may be mounted in various positions relative to the optical axis 111, and the position can affect the shape and size of the first, second and third regions.
  • the navigation algorithm can be chosen to be more or less complicated. As a part of designing an automatic microscope according to the present invention, it is probably advantageous to simulate the interaction of the position of the sensor 112 relative to the optical axis 111, the distortion function of the optical subsystem 110, the type(s) of subobjects, how subobjects of interest may be distributed over the object, the detection criterion, the use of evaluation, the dynamics of the positioning arrangement 106, the
  • Fig. 5 is a flow chart describing a reconstruction method, which is based on a distortion model of the distortion function.
  • the distortion model can be of the same form as the distortion function of Eq. 1-3 with the only difference that the true parameters of the distortion function may not be perfectly known and that the parameters of the distortion model are estimated and/or simulated.
  • an area, of the reconstructed image, to be reconstructed is selected.
  • the "pixel density" can also be selected, if not default.
  • an initial pixel of the area to be reconstructed and with undistorted coordinates (x_u,y_u) is chosen.
  • the distorted coordinates (x_d,y_d) corresponding to (x_u,y_u) are computed according to the distortion model and relative to an assumed position (x0_est,y0_est) of the optical axis 111 in the x-y-plane of the electronic image sensor 112.
  • At step 512, information in the distorted image is accessed. The (x_d,y_d) computed at step 508 may not correspond perfectly to a single pixel of the distorted image. If not, the distorted image can be interpolated in order to determine a pixel value for (x_d,y_d). A person skilled in the art is familiar with image interpolation.
  • the pixel value that was determined for (x_d,y_d) is assigned to (x_u,y_u) of the reconstructed image.
  • At step 520, the method continues with step 524 until all pixels in the area to be reconstructed have been assigned.
  • a new not yet assigned pixel with undistorted coordinates (x_u,y_u) is chosen and the loop continues at step 508.
  • the reconstruction can, especially at step 508, be performed colourwise with different parameter values in the distortion model.
  • the radius r_u of Equation 3 is evaluated for a radial distortion model. It can then be efficient to also compute the compression as the slope of r_d with respect to r_u as a function of r_u using Equation 4, if computation of a compression chart is desired.
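The reconstruction loop of steps 504-524 can be sketched as below, assuming a one-coefficient radial distortion model and nearest-neighbour lookup in place of the interpolation of step 512; the function name, the coefficient and the pixel-grid choices are illustrative.

```python
import numpy as np

def reconstruct(distorted, x0_est, y0_est, k1_est):
    """Sketch of steps 504-524: for every undistorted pixel
    (x_u, y_u), compute (x_d, y_d) from the distortion model and
    copy the corresponding value from the distorted image.
    Nearest-neighbour rounding stands in for real interpolation."""
    h, w = distorted.shape
    out = np.zeros_like(distorted)
    for y_u in range(h):
        for x_u in range(w):
            dx, dy = x_u - x0_est, y_u - y0_est
            r_u2 = dx * dx + dy * dy
            s = 1.0 + k1_est * r_u2          # r_d / r_u for this model
            x_d, y_d = x0_est + dx * s, y0_est + dy * s
            xi, yi = int(round(x_d)), int(round(y_d))
            if 0 <= xi < w and 0 <= yi < h:   # step 512: access pixel
                out[y_u, x_u] = distorted[yi, xi]
    return out
```

With k1_est = 0 the model is the identity and the reconstruction reproduces the input, which is a convenient sanity check for an implementation.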
  • Reconstruction from distorted image to undistorted image can be performed according to a distortion model, for example in step 508 of the exemplified reconstruction method above. Since successful reconstruction depends on the correctness of the distortion model used and since reconstructed images may be used for supervision of the position of the object 109 relative to the optical subsystem 110 as well, it is possible that the automatic microscope system, from a safety and/or reliability point of view, must contain supervision of the distortion model.
  • Fig. 6 is a block diagram for showing how the physical and signal processing principles of a possible distortion model supervision method can interact.
  • Block 600 corresponds to the object 109.
  • Block 604 corresponds to acquiring a first distorted image of the object.
  • Block 608 corresponds to the first distorted image.
  • Block 612 corresponds to a possible translation of the object 109 by the distances dx in the x-direction and dy in the y-direction respectively.
  • Block 616 corresponds to the object 109 after the translation.
  • Block 620 corresponds to acquiring a second distorted image of the object.
  • Block 624 corresponds to the second distorted image. The acquiring of the first and second distorted images is assumed to be performed according to the same, but possibly not completely known, physical parameters.
  • the physical parameters include the distortion function, f1, and parameters x0 and y0 for the position of the optical axis 111 in the x-y-plane of the electronic image sensor 112.
  • the distortion function may be of the same form as the distortion function of Eq. 1-3. If so, the distortion function can be represented by the parameters K1, K2, ...
  • Blocks 628 and 632 correspond to reconstruction of the first and second distorted images under the influence of an estimated distortion model f1_est and an estimated position (x0_est,y0_est) of the optical axis 111 in the x-y-plane of the electronic image sensor 112.
  • the parameter f1_est may be represented by K1_est, K2_est, ... but it will still be denoted f1_est below.
  • Blocks 636 and 640 correspond to the reconstructed first and second images under the influence of an estimated distortion model f1_est and an estimated position (x0_est,y0_est), while block 644 corresponds to a detranslation of the second reconstructed image according to estimates dx_est and dy_est of the corresponding physical parameters dx and dy at block 612.
  • Blocks 604, 620 and 612 correspond to real world physical processes based on possibly not completely known physical parameters dx, dy, x0, y0 and f1.
  • Blocks 628, 632 and 644 correspond to signal processing processes based on estimated parameters dx_est, dy_est, x0_est, y0_est and f1_est. If the estimated parameters are close to their corresponding physical parameters, and if blocks 628, 632 and 644 are implemented closely enough to inverses of blocks 604, 620 and 612 respectively, the supposedly overlapping areas of 636 and the detranslated 640 will look very much alike.
  • Fig. 7 is a flow chart showing an example of a method for supervision of the distortion function and, possibly, for supervision of the whole reconstruction method as well.
  • a suitable object 109 is positioned relative to the optical subsystem 110.
  • the object can be a general object under analysis or a specially made calibration object.
  • a first distorted image of the object 109 is acquired.
  • the object 109 is translated some distance/s in x and/or y corresponding to dx and dy.
  • a second distorted image of the translated object is acquired. It should overlap the first distorted image of step 704 in order to be useful.
  • initial values of the parameters dx_est, dy_est, x0_est, y0_est and f1_est are determined. It is an advantage if dx_est and dy_est have values that are quite close to the translation distances dx and dy in step 708. It is also an advantage if the values of x0_est, y0_est and f1_est correspond to the last known values for these parameters, i.e. the values that are supposed to be supervised.
  • the first distorted image is reconstructed according to x0_est, y0_est and f1_est.
  • the second distorted image is reconstructed according to x0_est, y0_est and f1_est and then detranslated according to dx_est and dy_est.
  • the comparison measure can, for example, be some general measure that is used when registering images, see, for example, WO9931622A1.
  • At step 736, the values of the estimated parameters and the corresponding comparison measure are stored for later use.
  • At step 740, it is determined if the current comparison measure is good enough.
  • since the comparison measure may depend on the details of the object, it may not be possible to compare it to a fixed tolerance. Therefore it may not be possible to end the method after its first pass.
  • the method prepares for another loop by updating the estimated parameters according to some parameter search method. If the task of the supervision method is a mere determination of dx_est and dy_est for position (translation) supervision, it can be sufficient to use a parameter search method that varies dx_est and dy_est until an optimum of the comparison measure with respect to these two parameters has been found. If there is no good enough fit for the current x0_est, y0_est and f1_est, that can be a sign of the distortion model not being correct and that there is a need for changing the task to supervising the position (x0,y0) of the optical axis 111 in the x-y-plane of the electronic image sensor 112 as well.
  • the parameter search method will have to vary x0_est and y0_est as well in order to determine if there is a better estimate than the current (x0_est, y0_est). If the fit is still not good enough, it is possible to open up f1_est for estimation as well.
  • the parameter search method can be based on, for example, the Nelder-Mead (simplex) method. If the supervision method uses the implementation of the reconstruction method for its own reconstructions, see steps 720 and 724, the reconstruction method will, to some extent, be supervised as well.
  • since the values of the comparison measure computed at step 732 can depend on the object, on the resolution and the distortion of the optical subsystem and on the electronic image sensor, it is hard to give a general definition of "good enough". Therefore estimation of some of the parameters described above can preferably be performed as part of a service routine, since it may take some time and/or benefit from using a known calibration object that has patterns that facilitate convergence of the parameter estimates.
  • Another possible way of normalising the values of the comparison measure could be to use objects that comprise a sample mounted on a glass slide with a sample free area having a specially designed calibration pattern. Still, the best calibration pattern can depend on the resolution and the distortion of the optical subsystem and on the electronic image sensor.
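As a concrete illustration of the translation-supervision case of Fig. 7, the sketch below replaces the Nelder-Mead search with an exhaustive integer grid search over (dx_est, dy_est) and uses a sum-of-squared-differences comparison measure; both simplifications, and all names, are assumptions made for brevity.

```python
import numpy as np

def mismatch(img1, img2, dx_est, dy_est):
    """Comparison measure: sum of squared differences over the
    overlap of the first image and the detranslated second image
    (integer shifts only in this sketch)."""
    h, w = img1.shape
    a = img1[dy_est:, dx_est:]
    b = img2[:h - dy_est, :w - dx_est]
    return float(np.sum((a - b) ** 2))

def estimate_translation(img1, img2, max_shift=3):
    """Grid-search stand-in for the parameter search of Fig. 7,
    restricted to dx_est and dy_est: return the shift giving the
    smallest comparison measure."""
    return min(((dx, dy) for dx in range(max_shift + 1)
                         for dy in range(max_shift + 1)),
               key=lambda s: mismatch(img1, img2, *s))
```

Opening up x0_est, y0_est and f1_est for estimation would add those parameters to the search space, at which point a derivative-free optimiser such as Nelder-Mead becomes more attractive than exhaustive search.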
  • the compression chart can be determined as a part of the reconstruction.

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Analytical Chemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Dispersion Chemistry (AREA)
  • Biochemistry (AREA)
  • Multimedia (AREA)
  • Pathology (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Theoretical Computer Science (AREA)
  • Microscopes, Condenser (AREA)

Abstract

In an automatic microscope system with an optical subsystem (110) producing distorted images of an object (109) on an electronic image sensor (112), the object (109) is navigated while information from a first region of the distorted images is used for detecting interesting subobjects and while information from a second region, with higher magnification and resolution than the first, is used for depicting previously detected subobjects. During the navigation, a control unit (100) controls the positions of the object (109) relative to the optical subsystem (110) by executing a method for searching and depicting.

Description

Title
ARRANGEMENTS AND METHODS IN A MICROSCOPE SYSTEM

Technical Field

The present invention relates to arrangements for object search and object depicting in a microscope system as well as methods for operating the arrangements.

Background Art
Automatic microscope systems like for example the DM96 and the DM1200 systems from CellaVision AB are sold in increasing numbers. Such automatic microscope systems can, depending on model and application software, be used for analysis of for example blood smears and body fluid samples.
Figure 1 shows schematically an example of how an automatic microscope system can be composed. Automatic microscope systems can search for interesting subobjects within an object 109, the object often being some biological sample placed on a glass slide, and then depict at least the interesting subobjects using a high magnification, high resolution setting for the optical subsystem 110 and the electronic image sensor 112 of the microscope system. The object 109, which can be held in place by an object holder 108, is positioned using some positioning arrangement 106, which in turn may have a separate drive unit 104. The object 109 is often illuminated by light transmitted from an illumination source 102 and through suitable holes in the positioning arrangement 106. In addition, there may be an automatic oil drop unit 114 for application of immersion oil and some position sensors 116 for sensing some states of the positioning arrangement 106. In Figure 1 a single control unit 100 controls all sensors and actuators while also receiving, processing and analysing the images from the electronic image sensor 112.
Although high magnification and high resolution often go hand in hand in optical subsystems, the expressions are not synonyms. Resolution can be used to describe the maximum ability of the optical subsystem 110 to resolve small details of the object 109 and can be expressed in line pairs per micrometer of object. Magnification is important for adapting a detailed optical image, produced by an optical subsystem, to a sensor, for example a human eye or an electronic image sensor. If the magnification is not high enough for the actual combination of optical resolution and sensor, some of the details in the optical image on the sensor are lost when sampled by the sensor (undersampling) and the resolution will be limited by the sensor. If the magnification is too high, there is a great number of pixels per micrometer of object and the resolution will be limited by the resolution in the optical image on the sensor. Throughout this description, the number of pixels per micrometer of object in an image will be referred to as the "pixel density", while the resolvable details in an image, measured in line pairs per micrometer of object, will be referred to as the "resolution", which is the maximum possible resolution independently of the existence of small subobjects.
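The undersampling condition described above can be made concrete with a standard Nyquist-style sampling argument; the factor of two is textbook sampling theory, not a figure taken from this text, and the function names are illustrative.

```python
def min_pixel_density(resolution_lp_per_um):
    """Smallest pixel density (pixels per micrometer of object)
    that avoids undersampling an image with the given resolution
    (line pairs per micrometer of object): at least two pixels
    per resolvable line pair."""
    return 2.0 * resolution_lp_per_um

def is_undersampled(pixel_density, resolution_lp_per_um):
    """True if the sensor-side pixel density, rather than the
    optics, limits the achievable resolution."""
    return pixel_density < min_pixel_density(resolution_lp_per_um)
```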
In the first product from CellaVision AB, the DiffMaster Octavia, the optical subsystem 110 has only one single, high magnification, high resolution, setting. Since the area of the electronic image sensor 112 is limited, the high magnification results in a narrow field of (simultaneous) view of the object 109.
Since a high resolution microscope objective, in addition, has a limited depth of focus, the DiffMaster Octavia has to search for possible interesting subobjects using one narrow field of view at a time and under focus control in order to not miss any possible interesting subobjects. The search and the automatic focus control is performed by the control unit 100 using images from the electronic image sensor 112. The result is an automatic microscope system with a low number of parts and with a very good resolution for depicting subobjects, but at the price of a quite slow search speed.
In later products from CellaVision AB, like the DM96 and DM1200 systems, the search for possible interesting subobjects is made using a low magnification, low resolution setting for the optical subsystem 110 resulting in a set of candidate subobjects and then the depicting of the candidate subobjects is made using a high magnification, high resolution setting for the optical subsystem 110 by returning to their previously determined positions. The result is a quite fast system, with a very good resolution for depicting subobjects but at the price of an increased number of parts compared to the DiffMaster Octavia. An increased number of parts may in addition require extra costs for the hardware, more required space and more adjustments and may also result in lower reliability and added weight of the microscope system.
In more detail, the optical subsystem 110 of the DM96 system contains a motorised revolver equipped with multiple microscope objectives in combination with a single relay lens unit with a fixed back focal length which is used in combination with all of the objectives. These parts of the optical subsystem 110 are not shown in Figure 1. The relay lens unit accommodates the light from the objective currently in use onto a single fixed mount camera with an electronic image sensor 112. There is a single illumination source 102 used by all the objectives.
The DM1200 system uses up to three fixed mount objectives in combination with two movable relay lens units and a single movable camera with an electronic image sensor 112. The two relay lens units have one fixed back focal length each. By lining up the camera with one selected relay lens unit and one selected objective, a number of resulting resolutions and magnifications can be achieved. Placed below each fixed mount objective there is a separate illumination source 102.

In microscopy, immersion oil is often used between the optical subsystem 110 and the object 109 in order to achieve the highest possible optical resolution, which in turn may be required for a reliable analysis result from the automatic microscope system. Since the lowest magnification, lowest resolution objectives, on the other hand, are designed to be used without immersion oil, it is not practical to use a low resolution objective once immersion oil has been applied to the object 109. Therefore, the automatic microscope system may not be able to go back and search for more possible interesting subobjects in case the set of candidate subobjects was found, during depicting, to include too many false positives, i.e. too few interesting subobjects. As a consequence, the automatic microscope system may have to search for a great excess of possible interesting subobjects before applying the immersion oil and switching to a high magnification, high resolution setting. As a result, most of the time the great excess is not fully used, leading to a waste of time during search.
The need for excess may, however, be eliminated if the automatic microscope system is capable of producing high magnification, high resolution images and low magnification, low resolution images more or less simultaneously and without any time consuming switching of mechanical settings of the optical subsystem 110 or the electronic image sensor 1 12.
In US3895854 it is disclosed how a microscope may use a plurality of illumination colours and, due to colour filters and beam splitters inside the optical subsystem, conduct the light along different optical paths for different colours, resulting in different magnifications and resolutions for different colours. However, if some colour with low magnification is used for searching for possible interesting subobjects while the remaining colours with higher magnifications are used for depicting the candidate subobjects and if a single electronic image sensor is used for capturing all these colours, it will not be possible to capture enough information for high resolution, high magnification, colour images. With two separate electronic image sensors and possibly some additional beamsplitters and/or colour filters, it may however be possible. Neither multiple optical paths nor multiple electronic image sensors are shown in Figure 1. The result is a fast system, with a very good resolution for depicting subobjects, but at the price of an increased number of parts compared to the DiffMaster Octavia, especially if the depicting is to be made with all colours.
In US4061914 it is disclosed how a "scene analysis apparatus" may, inside the optical subsystem, use a beam splitter and conduct the light from the object and the objective along two parallel optical paths, each with a separate electronic image sensor. There is one path providing low magnification, low resolution images with a wider field of view of the object and one path providing simultaneous high magnification, high resolution images with a narrow field of view of the object. The disclosed apparatus searches for possible interesting subobjects using the low resolution images. While the apparatus moves previously found candidate subobjects into the field of view of the high resolution image sensor, the movement also positions a new part of the object for search in the field of view of the low resolution image sensor. The result is a fast system, with a very good resolution for depicting subobjects, but still at the price of an increased number of parts compared to the DiffMaster Octavia.
Thus there is still a need for a fast automatic microscope system consisting of a low number of parts and which produces very good resolution images of interesting subobjects.
Summary of Invention
It is an object of the present invention to enable an improved combination of low number of parts, high speed and very good resolution depicting in automatic microscope systems for searching and depicting subobjects of interest.
The object is achieved wholly or partly with an automatic microscope system with an optical subsystem that can produce distorted images of an object onto an electronic image sensor in combination with a method for searching and depicting the object.
The distorted images of the object have, simultaneously, at least two regions per image. There is a first region having at least a first resolution providing information that is at least good enough for detection of subobjects of interest. There is also a second region having at least a second (higher) resolution providing information that is at least good enough for depicting subobjects with a very good resolution.
According to the method for searching and depicting, the object is navigated using a positioning arrangement, while information from the first region of the distorted images is used for detecting interesting subobjects and storing these together with estimated coordinates and while information from the second region of the distorted images is used for depicting previously detected interesting subobjects, when present in the second region. A control unit executes the method for searching and depicting, by sequentially determining new coordinates to visit according to a navigation algorithm, by sequentially controlling the positions of the object relative to the optical subsystem according to the new coordinates, by performing detection in new areas of the object that are brought into the first region and, when at least one detected subobject is present in the second region, by depicting such subobjects.
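The sequence described above can be outlined as a short sketch. All interfaces (acquire, detect_first, in_second, depict, navigate, done) are illustrative stand-ins supplied by the surrounding system, not names from the claims:

```python
def search_and_depict(acquire, detect_first, in_second, depict, navigate, done):
    """Skeleton of the claimed sequence: move, search the first region,
    depict stored candidates that fall in the second region, repeat."""
    candidates = []            # stored candidates with estimated coordinates
    position = (0.0, 0.0)
    while not done():
        image = acquire(position)                    # distorted image
        candidates += detect_first(image, position)  # search in first region
        for c in [c for c in candidates if in_second(c, position)]:
            depict(image, c)                         # depict in second region
            candidates.remove(c)
        position = navigate(candidates, position)    # new coordinates to visit
    return candidates
```

The control unit (or remote hardware) would supply real implementations of the callables; the skeleton only fixes the order of the steps.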
By using distorted images of the object it is possible to have simultaneous low magnification, low resolution images with a wide field of view and high magnification, high resolution images with a narrow field of view like in the systems disclosed in US3895854 and US4061914, but now with the advantage of not necessarily increasing the number of parts in the optical subsystem or the number of electronic image sensors.
The method for searching and depicting can provide reconstructed versions of the distorted images as a way of normalising the magnification and the "pixel density" of the images. The reconstructed versions of the distorted images can be used by humans or by image analysis algorithms, making an automatic microscope according to the present invention at least as useful as a prior art system.
The speed of an automatic microscope according to the present invention depends on the rate of successfully depicted true subobjects of interest. This rate, in turn, depends on both the rate of depicted candidates and on the ratio of true subobjects among the depicted candidates.
The rate of depicted candidates of an automatic microscope according to the present invention will to some extent depend on the possible number of simultaneous subobjects in the second region. The size and shape of the second region will depend on the depicting resolution required for the actual candidates and can therefore be adaptive. In an automatic microscope according to the prior art, the field of view of high magnification and high resolution is however probably fixed and quite narrow.
The ratio of true subobjects among the depicted candidates depends, for the present invention as well as for the prior art, on the ratio of false positive detections. An advantage of the present invention is that although detection is performed using a first region having at least a first resolution, the detection can be tailored to the resolution of the actual position within the first region using information from a compression chart. Thereby the number of false positives can be kept as low as possible.
Another, similar, advantage of the present invention is that some of the detected subobjects can be evaluated using additional, improved resolution, information from a third region of a later distorted image but still before arriving in the second region.
Brief Description of Drawings
Figure 1, which has already been discussed above, shows schematically an example of an automatic microscope system;
Figure 2 shows schematically some different embodiments of an optical subsystem for achieving distorted images;
Figure 3 shows some simple examples of image distortion;
Figure 4 is a flow chart describing a method for detecting and depicting subobjects according to an embodiment of the present invention;
Figure 5 is a flow chart describing a reconstruction method according to an embodiment of the present invention;
Figure 6 is a block diagram of physical and signal processing principles for a supervision method according to an embodiment of the present invention;
Figure 7 is a flow chart describing a method, according to an embodiment of the present invention, for supervision of a reconstruction method.
Description of Embodiments
The invention is described more fully hereinafter with reference to the accompanying drawings, in which examples of embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the specific embodiments set forth herein. It should also be noted that these embodiments are not mutually exclusive. Thus, components or features from one embodiment may be assumed to be present or used in another embodiment, where such inclusion is suitable.
Composition of the Automatic Microscope
Fig. 1, which has been discussed in the Background Art section, shows schematically an example of an automatic microscope system according to an embodiment of the invention.
The microscope, which can be, for example, a brightfield microscope, a darkfield microscope or a phase contrast microscope, comprises the following main components: a control unit 100 for controlling the automatic microscope as well as for processing and analysis of electronic images acquired from an electronic image sensor 112, a positioning arrangement 106 for carrying an object holder 108 which in turn can carry an object 109, a drive unit 104 for driving the positioning arrangement, an illumination source 102 for illuminating the object, and an optical subsystem 110 for producing optical images of the object 109 on to the electronic image sensor 112. In addition, the microscope may also comprise an optional oil drop unit 114 for dropping immersion oil on to the object and optional position sensors 116 for providing information on the state of the positioning arrangement 106 to the control unit 100.
The control unit 100 can be a personal computer or an embedded system. The control unit 100 can contain configurable hardware, such as a field programmable gate array (FPGA). The control unit can contain software for performing steps of some method according to the invention or contain configuration information for configurable hardware. The control unit 100 can also, at least partly, be implemented using remote hardware like a server in a computer network.
The positioning arrangement 106 is movable in a plane essentially perpendicular to the optical axis 111 of the optical subsystem 110 in order to allow different parts of the object 109 to be imaged by the optical subsystem 110.
The drive unit 104 can contain drive mechanisms like electrical motors and other actuators for moving the positioning arrangement 106. The drive unit 104 can also contain power electronics for motors and other actuators.
The object holder 108 can be designed for manual or automatic placement and removal of the object 109.
The object 109 can be any type of object to be examined in a microscope. It can be a biological object on a glass slide, such as a tissue sample or a blood sample in which cells are to be identified and examined, but it can also be a non-biological object.
The optional oil drop unit 114 can contain an oil tank and a mouthpiece, which are connected by a pump or simply by an on/off valve if gravitational feed is provided. The mouthpiece can be placed somewhere where it can be reached by the object using the positioning arrangement 106. The optional position sensors 116 can provide the control unit 100 with information on where the movable parts of the positioning arrangement 106 are situated. The information can be coarse, like from on/off switches that react when a certain point is reached, or analog information on the position of some parts of the positioning arrangement.
The optical subsystem 110 has at least one setting. A setting of the optical subsystem 110 corresponds to producing a type of optical image of the object 109 on to the electronic image sensor 112. The type of optical image can be, but is not limited to, a 10x magnification low resolution image, a 50x magnification medium resolution image or a 100x magnification high resolution image. The resolution may however be further limited by the illumination from the illumination source and/or the pixel pitch of the electronic image sensor.
According to the present invention, the optical subsystem 110 has at least one setting which achieves significant distortion when producing an optical image of the object 109 on to the electronic image sensor 112. The distortion can correspond to a magnification (of the optical image on the electronic image sensor) that decreases with the distance to the optical axis 111. Today, high resolution fields of view of more than 220 μm in diameter at the object can be achieved. A commercially available Olympus objective for oil immersion brightfield microscopy marked "Olympus PlanN 100x/1.25 Oil F.N. 22" can, at 100x magnification, produce an image with a diameter of 22 mm without any significant distortion or curvature of field. Such an image corresponds to a field of view, at the object, of 22 mm/100x, which is 220 μm. An electronic image sensor 112 with 640 times 480 pixels and a pixel pitch of 10 μm, combined with a magnification of 100x, uses only approximately 64 μm * 48 μm of the 220 μm diameter object area. The potential gain in object area, if distortion can be used in order to compress the optical image, is thus roughly 220*220/64/48, which is approximately 16 times. This gain may correspond to detecting one white blood cell in every other distorted image, instead of detecting one white blood cell in every 32nd 100x magnification distortion free image.
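The field-of-view arithmetic above can be checked directly; this small calculation only restates the numbers quoted in the text:

```python
# Field of view at the object for the distortion free 100x case:
fov_diameter_um = 22e3 / 100          # 22 mm image circle / 100x -> 220 um

# Object area actually used by a 640 x 480 sensor with 10 um pixel pitch:
used_w_um = 640 * 10 / 100            # 64 um at the object
used_h_um = 480 * 10 / 100            # 48 um at the object

# Potential gain in object area if distortion compresses the whole
# 220 um circle onto the sensor (the rough 220*220/64/48 estimate):
gain = fov_diameter_um ** 2 / (used_w_um * used_h_um)
print(round(gain, 1))                 # approximately 16
```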
Traditionally, designers of microscope optics have tried to avoid distortion. Actually, designers have been striving for high fidelity images over a wide field of view, by requiring limited spherical aberration (SA), limited coma, limited distortion, limited curvature of field and limited chromatic aberrations (CA), both axial (A-CA) and lateral (L-CA). However, according to the present invention, a setting of the optical subsystem 110 with significant distortion does not necessarily have to produce high fidelity images over the whole field of view. According to the present invention, distorted electronic images can be reconstructed using a method of the invention described in the "Function of the Microscope" section below.
According to some embodiments of the invention the requirements on the optical subsystem 110 can be relaxed compared to the requirements on the traditional high fidelity images described above. The relaxation depends on the desired resolution in different regions of the distorted optical image on the electronic image sensor.
In a first region (of the optical image on the electronic image sensor) which is at least a bit off from the optical axis 111, the magnification is lower than on the optical axis 111. The resolution in the corresponding first region of images from the sensor is (due to lower magnification and less dense sampling) lower. Thereby the requirements on resolution of the optical subsystem may also be lowered for this first region, so that some SA, coma and curvature of field can pass unnoticed. In this first region, significant distortion is desired and the distortion may even be different for different colours (L-CA), as long as the resolution is sufficient for detecting interesting subobjects.
In a second region, closer to and including the optical axis, the result shall, after reconstruction, be high fidelity high resolution (colour) images. Distortion and L-CA (but not much more) can be relaxed in the second region, since reconstructed images for different colours can be scaled and merged, see WO0055667A1, where, however, the images are not distorted. If there is significant A-CA in the second region, it can be handled by focusing separately for different colours, followed by scaling and merging, see WO0055667A1.
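The scale-and-merge idea referenced above can be sketched roughly as follows. This is a hedged illustration only: the per-colour relative magnification factors, the nearest-neighbour resampling and the plane layout are assumptions for the sketch, not taken from WO0055667A1:

```python
def scale_plane(plane, s, cx, cy):
    """Rescale one colour plane (list of rows) about the optical axis at
    (cx, cy) by sampling where that colour, with relative magnification s,
    actually landed on the sensor; compensates lateral CA."""
    h, w = len(plane), len(plane[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx = round(cx + (x - cx) * s)
            sy = round(cy + (y - cy) * s)
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = plane[sy][sx]
    return out

def merge_planes(planes, scales, cx, cy):
    """Scale each colour plane by its own factor, then merge per pixel."""
    scaled = [scale_plane(p, s, cx, cy) for p, s in zip(planes, scales)]
    h, w = len(planes[0]), len(planes[0][0])
    return [[tuple(sp[y][x] for sp in scaled) for x in range(w)]
            for y in range(h)]
```

A real implementation would interpolate rather than pick the nearest pixel, and the factors would come from calibration of the actual optical subsystem.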
In Fig. 2, three different embodiments of a setting of the optical subsystem for achieving distorted images are shown.
According to an embodiment shown in Fig. 2A, the optical subsystem 200 contains a specially designed objective 202, and a relay lens unit 204.
With a specially designed objective 202, an even larger field of view, with distortion and relaxed requirements for the first and second regions as described above, can be achieved, since the relaxed requirements make it easier for the designer of the objective and since the distortion allows for the optical image to be compressed in diameter already inside the objective 202. During the history of optical design, distortion of the desired type (barrel distortion) has often been a problem when using short focal lengths and positive lenses, and the objective designer has traditionally had to fight distortion. In the embodiment shown in Fig. 2A such distortion is instead desired.
According to an embodiment shown in Fig. 2B, the optical subsystem 210 contains a standard objective 212, a relay lens unit 214 and a curved mirror 216 that compresses (causes barrel distortion to) the optical image in two dimensions on its way to the electronic image sensor 112. In another embodiment, not shown, it may be possible to use a mirror that eliminates the need for a relay lens unit. In yet another embodiment the mirror may only give distortion along one dimension.
According to an embodiment shown in Fig. 2C, the optical subsystem 220 contains a standard objective 222, an angular magnification unit 224 and a fisheye type lens unit 226. The angular magnification unit can be designed for parallel light in, to suit an infinity focus objective, and parallel light out, and to have an angular magnification suitable for giving a desired second region magnification when combined with the fisheye type lens unit 226 and the objective 222. The angular magnification unit can work like a telescope. The angular magnification unit 224 will adapt the quite small angles of the parallel bundles out of the standard objective 222 to fit the fisheye type lens unit 226, where the field of view, which can be used for maximum distortion, may be up to +-90 degrees.
The objectives 202, 212, 222 can be designed for immersion oil between object 109 and objective. The objectives 202, 212, 222 can be designed for infinity focus, meaning that points of the object 109, which are in focus and within the field of view at the front of the objective, result in bundles (one per point) of parallel rays at the back of the objective. The objectives 202 and 212 can have finite back focal lengths, making the corresponding relay lens unit unnecessary, but at the risk of redesign if the magnification has to be changed due to a change of sensor pixel pitch.
The relay lens units 204 and 214 can have back focal lengths suitable for giving the desired magnifications in the second region, i.e. close to the optical axis 111, when combined with the other components of the corresponding embodiment.
Although the distortion is designed to be achieved inside the objective 202, at the mirror 216 and inside the fisheye type lens 226 respectively, the other optical components of each embodiment may also be designed to contribute to the distortion.
When distortion is achieved, it is an advantage, from a signal to noise ratio perspective, if the illumination of the pixels of the electronic image sensor 112 is quite even for an evenly illuminated object 109. Therefore it may be an advantage to have such a requirement on the design of the optical subsystem.
In Fig. 3, some examples of distortion are shown. Fig. 3A shows an example of so called radial distortion, meaning that the distortion is a function of the radial distance to an optical axis 111 of the optical subsystem 110. To the left in Fig. 3A, three concentric circles 300, 302 and 304 with increasing radii are shown as they can appear on an electronic image sensor 112 if the corresponding optical subsystem 110 is substantially free from distortion. To the right in Fig. 3A, the corresponding, still concentric, circles 310, 312 and 314 are shown after one possible distortion. The optical axis 111 goes through the origins both to the left and to the right in Fig. 3A. The magnification to the right decreases, due to the distortion, with the distance to the optical axis 111.
In Fig. 3B the radial distortion function 330 used in the example in Fig. 3A is shown. Here r_u and r_d correspond to undistorted radius and distorted radius, respectively. The positions of the three dots 320, 322 and 324 in Fig. 3B correspond to pairs of (r_u, r_d) of the three concentric circles of Fig. 3A. The slope of r_d with respect to r_u decreases with the distance to the optical axis, corresponding to decreasing magnification.
For a general point in a radially distorted image, its distance to the optical axis is distorted according to the radial distortion function while its angle in the x-y plane is preserved. The Equations 1-3 below, which are radial components of Brown's distortion model, show that x_u and y_u are distorted by the same factor, which is equivalent to preservation of the angle.
x_d = x_u*(1 + K1*r_u^2 + K2*r_u^4 + ...) (Equation 1)
y_d = y_u*(1 + K1*r_u^2 + K2*r_u^4 + ...) (Equation 2)
and where
r_u = sqrt(x_u*x_u + y_u*y_u) (Equation 3)
and Ki is the i:th radial distortion coefficient. Following from Equations 1 and 2, r_d can be computed using
r_d = r_u*(1 + K1*r_u^2 + K2*r_u^4 + ...) (Equation 4)
An object at a radius corresponding to 320 will have its details magnified according to the slope (d r_d/d r_u) at 320, while an object at a radius corresponding to 324 will have its details magnified according to the slope at 324, which is lower than the one at 320. Therefore an object at 320 will have a higher "pixel density" (i.e. occupy more pixels) at the electronic image sensor 112, and thus more possible detail information, than an equally sized object at 324. This effect can also be seen in Fig. 3A. Thus, the distortion function affects, via the magnification, how densely an object is sampled at the electronic image sensor 112 and thereby the resolutions in different regions of the electronic image sensor 112. Although reconstruction can be used to increase the "pixel density", the distortion will still limit the resolution after reconstruction, since the reconstruction is based on the limited information in the images of the electronic image sensor 112.
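Equations 1-4 and the slope d r_d/d r_u can be sketched directly. The coefficient values below are purely illustrative; the two points made in the text are that both coordinates are scaled by the same radial factor (angle preservation) and that the slope, obtained by differentiating Equation 4, falls below one off-axis for barrel distortion:

```python
def distort(x_u, y_u, K=(-0.5, 0.05)):
    """Brown's radial model (Equations 1-3): x_u and y_u are scaled by
    the same factor, so the angle in the x-y plane is preserved."""
    r2 = x_u * x_u + y_u * y_u                # r_u squared (Equation 3)
    f = 1.0
    for i, Ki in enumerate(K, start=1):
        f += Ki * r2 ** i                     # 1 + K1*r_u^2 + K2*r_u^4 + ...
    return x_u * f, y_u * f                   # Equations 1 and 2

def compression_factor(r_u, K=(-0.5, 0.05)):
    """Slope d r_d / d r_u of Equation 4. A value below one means details
    at radius r_u are compressed (sampled less densely) on the sensor."""
    s = 1.0
    for i, Ki in enumerate(K, start=1):
        s += (2 * i + 1) * Ki * r_u ** (2 * i)
    return s
```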
The resolutions, after reconstruction, for different regions of the image of the electronic image sensor 112, are defined by the setting of the optical subsystem 110 and its corresponding distortion function and by properties like the pixel pitch of the electronic image sensor 112. For example, there may be a first region corresponding to the resolution of a traditional 10x-25x magnification low resolution objective for detection of interesting subobjects. There can be a second region, which after reconstruction can give images corresponding to a traditional 100x magnification high resolution objective for detailed depicting of subobjects. There can also be a third region corresponding to a traditional 25x-50x magnification middle resolution objective. Reconstruction is a way of normalising the magnification and the "pixel density" for easier understanding of the images by humans or for easier processing and analysis of the images by algorithms. The "pixel density" after reconstruction can be chosen to be (at least) twice the highest resolution in the reconstructed image, according to the Nyquist sampling theorem.
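A minimal undistortion sketch, assuming the radial model of Equation 4 with a single, purely illustrative coefficient: each pixel of the reconstructed grid is mapped forward through the distortion function and filled by nearest-neighbour lookup in the distorted image. A real reconstruction would interpolate and choose the output pixel pitch according to the Nyquist criterion mentioned above:

```python
def reconstruct(distorted, cx, cy, K1=-2e-6):
    """Resample a distorted image (list of rows) onto an undistorted grid
    of the same size, with the optical axis at pixel (cx, cy)."""
    h, w = len(distorted), len(distorted[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r_u2 = (x - cx) ** 2 + (y - cy) ** 2
            f = 1.0 + K1 * r_u2                 # Equation 4, one term
            xd = round(cx + (x - cx) * f)       # forward map into the
            yd = round(cy + (y - cy) * f)       # distorted image
            if 0 <= xd < w and 0 <= yd < h:
                out[y][x] = distorted[yd][xd]
    return out
```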
The electronic image sensor 112 may be mounted in various positions relative to the optical axis 111. The image sensor 112 can be mounted symmetrically around the optical axis 111. The image sensor can be mounted with the optical axis 111 close to an edge or a corner of the image sensor 112. The mount can be adjustable, by hand or by some actuator. The position of the image sensor 112 with respect to the optical axis 111 will affect the distortion, magnification and resolution in different regions of the distorted images from the electronic image sensor 112.
Fig. 3C shows another example of radial distortion. To the left in Fig. 3C there are three circles 340, 342 and 344 with identical radii but with different displacements in the x-direction relative to the optical axis 111, which is at the origin. To the right in Fig. 3C the resulting appearances are shown, according to the same radial distortion function as in Fig. 3A. The circle 344, which is centred on the optical axis to the left, is still a circle 354 around the optical axis to the right. The other two circles 340 and 342 to the left do not appear as circles to the right, since they do not have a constant radius with respect to the optical axis and since r_d is not proportional to r_u. For a traditional setting of the optical subsystem 110 the radial distortion function is essentially a straight line, where r_d is proportional to r_u and where the shape of the objects is preserved.
Thus, the displacement of the undistorted circles 340 and 342 causes a change in shape after distortion. That effect is useful for example when supervising or estimating a radial distortion function.
In the examples, the distortion function is a smooth radial distortion function. However, the distortion function does not have to be smooth like in Eq. 1-3, or even radial. It is possible to take care of other distortions, for example caused by different optical parts not being properly aligned, but then the equivalents of Eq. 1-3 get more complicated and there will be additional parameters to supervise and/or estimate.
The illumination source 102 can be a single source that supports all settings of the optical subsystem 110. The illumination source 102 can consist of multiple subsources or have settings of its own in order to support the settings of the optical subsystem. The illumination source or subsources may use light emitting diodes (LEDs) or incandescent light bulbs. The illumination source 102 can also contain power electronics or optics for forming the illumination in a way that suits the object 109 or the optical subsystem 110.
The optical axis 111 may not always be straight, see Fig. 2B. The optical axis can be a reference for radial distortion functions and radial distortion models.
The electronic image sensor 112 can be a two-dimensional CCD sensor or CMOS sensor or some other suitable sensor.
Function of the Microscope
As described above, the resolutions, after reconstruction, for different regions of the image of the electronic image sensor 112, can be different. Therefore some regions are only good enough for detection of possible interesting subobjects, while other regions are good enough for high magnification, high resolution colour depicting of subobjects. Since different regions of the image have different, more or less simultaneous, uses, special methods for controlling an automatic microscope according to the invention will now be described in more detail. The features of the methods and their interaction with the distorted images are described with reference to Fig. 4, 5 and 7. Fig. 4 is a flow chart showing possible steps of an example method for controlling an automatic microscope according to the invention. When the method begins, the object 109 of study is assumed to be in the object holder 108 and the settings of the illumination source 102, the optical subsystem 110, the electronic image sensor 112 and other parts are assumed to be set.
At step 400, the object of study is positioned in a desired position relative to the optical subsystem. This can, for example, be in a corner of a possibly rectangular area of the object 109.
At step 402, a distorted image from the electronic image sensor 112 is acquired. The image is distorted due to a distortion function, possibly a radial distortion function. In connection with step 402, autofocusing can be performed, which is well known to a person skilled in the art. In connection with step 402, at least parts of the distorted image may be stored for future use.
At step 404, detection of possible interesting subobjects is performed using information from a first region of the acquired distorted image. For all parts of this first region the resolution is at least at or above a first level, referred to as the "first resolution". The detection can be performed in at least three ways, which differ not only in how they are performed but also in how much memory, energy and computational resources they can require. A first way of detection can be to detect based only on a corresponding (partly) reconstructed image. A second way of detection can be to detect based only on the distorted image. Possibly, the application dependent detection criterion that is used may be adjusted according to the distortion/magnification and the resolution at the actual position in the distorted image. By such adjustment, the detection performance may be improved.
A third way of detection can be to use the distorted image for pre-detection of possible areas of interest, to perform reconstruction of the areas of interest and then to detect interesting subobjects based on reconstructed images of the areas of interest.
As pointed out above, "pixel density" and resolution are not synonyms. While detecting using a (partly) reconstructed image it can be beneficial to use a compression chart that describes the optical compression of details that occurs when the optical subsystem produces a distorted image of the object on to the electronic image sensor 112. The resolution can, as pointed out earlier, be expressed in linepairs per micrometer at the object, while the compression chart can provide, for example, a compression factor that, if multiplied with the resolution on the optical axis, gives at least an estimate of the reduced resolution in different parts of the reconstructed image. Equation 4 above can be used for computing the compression factor as the slope of r_d with respect to r_u. The compression factor, which for barrel distortion is less than or equal to one, will be different in different parts of the reconstructed image. Such a compression chart can be organised as an image. Such a compression chart can alternatively be expressed using mathematical expressions that describe borders between different regions of different compressions within the reconstructed image. As a simple example of using a compression chart, suppose that there are interesting subobjects ("Type A") that all have a small characteristic detail and that there also exist similar non-interesting subobjects ("Type B") that differ from "Type A" only by missing that characteristic detail. When a subobject of "Type A" or "Type B" has been found at some particular position in a reconstructed image, the compression chart of that reconstructed image can be consulted. If the resolution, after compression, at that particular position is high enough for reliable discrimination of "Type A" from "Type B", then only the subobjects having the small detail ("Type A") are kept as candidates. If not, all subobjects looking like "Type A" or "Type B" are kept as candidates, since there is simply not enough resolution yet for a reliable discrimination.
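The "Type A"/"Type B" decision just described can be condensed into a few lines. The resolution figures, the threshold and the ideal 'kind' labels are hypothetical; the point shown is only how the compression chart gates the discrimination:

```python
AXIS_RESOLUTION_LP_PER_UM = 1.0   # resolution on the optical axis (assumed)
DISCRIMINATION_NEED = 0.6         # resolution needed to see the small detail

def keep_as_candidates(subobjects, compression_chart):
    """subobjects: list of (kind, position), kind 'A' (has the detail) or
    'B' (lacks it). compression_chart maps position -> factor <= 1.
    Keep only 'A' where discrimination is reliable, both kinds elsewhere."""
    kept = []
    for kind, pos in subobjects:
        local_res = AXIS_RESOLUTION_LP_PER_UM * compression_chart[pos]
        if local_res >= DISCRIMINATION_NEED:
            if kind == 'A':            # reliable: drop the look-alikes
                kept.append((kind, pos))
        else:
            kept.append((kind, pos))   # not reliable yet: keep both types
    return kept
```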
At step 406, information on detected possible subobjects, now called candidate subobjects, is stored, possibly by storing at least estimated coordinates of the candidate subobjects in a list or in some other searchable storage structure. Parts of the distorted image, parts of a reconstructed image, settings of illumination, filters, exposure times, estimated size of the candidate subobject or other information can also be stored.
At step 408, the list of candidate subobjects is evaluated in order to determine whether there may be any candidate subobjects in the second region of the current distorted image. If so, the method proceeds to step 410. If not, the method skips to step 414. For all parts of the second region the resolution is at least at or above a second level, referred to as the "second resolution". The second resolution is good enough for high resolution depicting of subobjects.
At step 410, the position of a candidate subobject is depicted using the second region of a distorted image that is properly focused at least in the area corresponding to the candidate subobject. Then a high resolution image of the area corresponding to the candidate subobject can be formed. The high resolution image can consist of a (partly) reconstructed image. The high resolution image can also consist of pixels from parts of distorted images together with information needed for later reconstruction.
At step 412, the high resolution image of the candidate can be checked in order to determine that it contains an interesting subobject. In case the checking gives a positive result, a software counter of depicted subobjects can be incremented and the high resolution image can be stored. The information on detected subobjects can be updated to show that the actual candidate subobject has been depicted, possibly by deleting the present candidate from the list/storage structure.
At step 414, an optional step, the list/storage structure of candidate subobjects can be further evaluated in order to determine whether there may be any candidate subobjects in a third (middle resolution) region of the current distorted image. If so, the method proceeds to step 416. If not, the method skips to step 420.
At step 416, the position of a candidate subobject is depicted using the third region of a distorted image.
At step 418, information from the third region of the distorted image, although not good enough for high resolution depicting, can be used for evaluating and possibly confirming the actual candidate subobject. If, after an evaluation using the additional information from the third region, it is found that the candidate subobject is a false positive, the information on detected subobjects can be updated to show that the present candidate subobject is no longer a candidate and that no depicting should be performed for that candidate.
At step 420, it is determined if the combined searching and depicting according to the method should go on or not. A software counter of depicted subobjects may indicate that a sufficient number of subobjects has been depicted and the method can be ended. As an alternative, the method can detect that a complete preset area of the object 109 has been searched and that all candidate subobjects from that area have been depicted and then end the method.
At step 422, the method goes on and the next desired position for the object 109 relative to the optical subsystem 110 is determined. In connection with step 422, methods for prediction of the best focusing position may also be performed. Such prediction methods are known to the person skilled in the art and are not described here. The next desired (relative) position can be determined according to a navigation algorithm. The navigation algorithm can be really simple: As long as there are candidate subobjects remaining to depict, go to the closest one and, while there, also search the corresponding first region and store new candidate subobjects. If there currently are no candidate subobjects, determine the next desired position in order to maximise the usefulness of the first region of the next distorted image.
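The simple navigation rule just described can be written down in a few lines; the lateral step length is an arbitrary stand-in for "maximise the usefulness of the first region of the next distorted image":

```python
from math import hypot

def next_position(candidates, pos, step=(150.0, 0.0)):
    """Simple navigation rule: while candidates remain, go to the closest
    one; otherwise step sideways so the first region covers fresh,
    unsearched object area. Coordinates and step size are illustrative."""
    if candidates:
        return min(candidates,
                   key=lambda c: hypot(c[0] - pos[0], c[1] - pos[1]))
    return (pos[0] + step[0], pos[1] + step[1])
```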
The navigation algorithm can be a bit more complicated: If steps corresponding to steps 414, 416 and 418 are implemented, it is possible to evaluate candidates using a third region with at least a third resolution. This evaluation can be performed in order not to waste the limited area of the second region of future distorted images on false positive candidates. The third region can be used if it just happens to contain a candidate subobject, or the navigation algorithm may use the third region in a more active way by changing the priorities between activities like maximising the useful search area of the first region, evaluating possibly false candidates and depicting candidates. It may save time to pause the depicting and instead evaluate a number of candidates using the third region of a single distorted image, possibly finding out that only one of the evaluated candidates is a true positive candidate.
The navigation algorithm may benefit from being even more complicated: the method may be searching for both a first kind and a second kind of subobjects simultaneously. The navigation algorithm then has to prioritise not only among searching, evaluating and depicting but also between the two kinds of subobjects. For example, the second kind of subobjects may be complicated and result in a high ratio of false positive candidates when detected using first region information only.
The navigation algorithm may be designed to minimise the risk of leaving areas of the object unsearched.
The navigation algorithm may also be designed to minimise the risk of the same subobject being detected/depicted twice, especially if there is significant play in the positioning arrangement 106.
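The simple navigation policy described above can be sketched as follows. The function name, the tuple-based candidate store and the list of unsearched positions are hypothetical simplifications of the searchable storage structure and of the not yet searched areas of the object:

```python
import math

def next_position(current_xy, candidates, unsearched):
    """Sketch of a simple navigation algorithm (hypothetical helper).

    candidates: estimated (x, y) coordinates of stored candidate
    subobjects; unsearched: centre coordinates of object areas whose
    first regions have not been searched yet.
    """
    cx, cy = current_xy
    dist = lambda p: math.hypot(p[0] - cx, p[1] - cy)
    if candidates:
        # Go to the closest remaining candidate; while there, the first
        # region of the same distorted image is searched as well.
        return min(candidates, key=dist)
    if unsearched:
        # No candidates left: as a crude stand-in for "maximise the
        # usefulness of the next first region", go to the nearest
        # unsearched area.
        return min(unsearched, key=dist)
    return None  # nothing left to do; the method can end at step 420
```

A real policy would additionally weigh stage dynamics, evaluation of possibly false candidates and overlap constraints, as discussed in the surrounding text.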
The second region corresponds to a limited field of view of the object, so there may exist large subobjects, or dense clusters of multiple subobjects, that cannot be depicted using the second region of a single distorted image. With the present invention, such large/clustered subobjects may be handled at steps 406, 410, 412, 414, 418 and/or 422. At step 406, the candidate subobject may appear too large, or appear to be part of a dense, too large cluster of multiple subobjects. The method may then determine that multiple images of second region resolution are needed in order to depict the candidate subobject. Possibly, a large or clustered candidate subobject can be stored as a plurality of candidate subobjects with different coordinates. Perhaps the target number of depicted candidates may have to be adjusted as well. At step 410, if the candidate is stored as a single large object, the method can acquire and merge multiple images of second region resolution in order to depict the candidate subobject. At step 410, the method may also check that the whole candidate (clustered) subobject was depicted. At step 412, a segmentation of a possible cluster may be performed. At steps 414, 416 and 418, information from a third region can be used in order to gain information on the size of the large/clustered subobjects. At step 422, the navigation algorithm may have to navigate sequentially to multiple parts of large/clustered subobjects. Since the high magnification, high resolution images of existing systems are also limited in field of view, the problem of large/clustered subobjects is not unique to the present invention and is therefore not further discussed.
In addition, the control unit 100 can use registration of consecutive distorted images after reconstruction in order to supervise the position of the object 109 relative to the optical subsystem 110, as is well known to a person skilled in the art; see for example EP1377865B1, where, however, the images used have not been reconstructed.
As pointed out above, the electronic image sensor 112 may be mounted in various positions relative to the optical axis 111, and the position can affect the shape and size of the first, second and third regions. As also pointed out above, the navigation algorithm can be chosen to be more or less complicated. As a part of designing an automatic microscope according to the present invention, it is probably advantageous to simulate the interaction of the position of the sensor 112 relative to the optical axis 111, the distortion function of the optical subsystem 110, the type(s) of subobjects, how subobjects of interest may be distributed over the object, the detection criterion, the use of evaluation, the dynamics of the positioning arrangement 106, the performance of the sensor 112 and other design parameters, such as the geometrical overlap between consecutive distorted images, in order to evaluate the potential speed and detection performance of the automatic microscope.
Reconstruction of Images
Reconstruction, which has been mentioned in connection with multiple steps of the method of Fig. 4, will now be described in more detail with reference to Fig. 5. Fig. 5 is a flow chart describing a reconstruction method based on a distortion model of the distortion function. The distortion model can be of the same form as the distortion function of Eq. 1-3, the only difference being that the true parameters of the distortion function may not be perfectly known, whereas the parameters of the distortion model are estimated and/or simulated.
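Since Eq. 1-3 are not reproduced in this section, a common polynomial radial model of the form r_d = r_u·(1 + K1·r_u² + K2·r_u⁴ + ...) can be assumed for illustration; the function name and parameter layout below are assumptions, not the patent's actual equations:

```python
import math

def distort(x_u, y_u, x0, y0, K):
    """Hypothetical polynomial radial distortion model: map undistorted
    coordinates (x_u, y_u) to distorted coordinates (x_d, y_d) relative
    to the optical axis position (x0, y0) in the sensor plane.

    r_d = r_u * (1 + K[0]*r_u**2 + K[1]*r_u**4 + ...)
    """
    dx, dy = x_u - x0, y_u - y0
    r_u = math.hypot(dx, dy)
    # Radial scale factor: 1 plus a polynomial in even powers of r_u.
    scale = 1.0 + sum(k * r_u ** (2 * (i + 1)) for i, k in enumerate(K))
    return x0 + dx * scale, y0 + dy * scale
```

Estimating the K parameters (rather than knowing them exactly) is what turns this distortion function into a distortion model.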
At step 500, the area of the reconstructed image that is to be reconstructed is selected. The "pixel density" can also be selected, if not default.
At step 504, an initial pixel of the area to be reconstructed, with undistorted coordinates (x_u, y_u), is chosen.
At step 508, the distorted coordinates (x_d, y_d) corresponding to (x_u, y_u) are computed according to the distortion model, relative to an assumed position (x0_est, y0_est) of the optical axis 111 in the x-y-plane of the electronic image sensor 112.
At step 512, information in the distorted image is accessed. The coordinates (x_d, y_d) computed at step 508 may not correspond exactly to a single pixel of the distorted image. If not, the distorted image can be interpolated in order to determine a pixel value for (x_d, y_d). A person skilled in the art is familiar with image interpolation.
At step 516, the pixel value that was determined for (x_d, y_d) is assigned to (x_u, y_u) of the reconstructed image.
At step 520, the method checks whether all pixels in the area to be reconstructed have been assigned; if not, it continues with step 524.
At step 524, a new, not yet assigned, pixel with undistorted coordinates (x_u, y_u) is chosen and the loop continues at step 508.
Since the distortion in the optical subsystem may be different for different colours (lateral chromatic aberration, L-CA), the reconstruction can, especially at step 508, be performed colourwise with different parameter values in the distortion model. In connection with step 508, the radius r_u of Equation 3 is evaluated for a radial distortion model. It can then be efficient to also compute the compression, as the slope of r_d with respect to r_u as a function of r_u, using Equation 4, if computation of a compression chart is desired.
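Steps 500-524 can be sketched as a pixel loop. Since Eq. 1-3 are not reproduced here, this minimal version assumes a polynomial radial model r_d = r_u·(1 + K1·r_u² + K2·r_u⁴ + ...), and it uses nearest-neighbour access at step 512 where a real implementation would typically interpolate:

```python
import numpy as np

def reconstruct(distorted, area, x0_est, y0_est, K_est):
    """Sketch of the reconstruction loop of Fig. 5 (steps 500-524),
    under an assumed polynomial radial distortion model."""
    (u0, u1), (v0, v1) = area              # step 500: area to reconstruct
    h, w = distorted.shape
    out = np.zeros((v1 - v0, u1 - u0), dtype=distorted.dtype)
    for y_u in range(v0, v1):              # steps 504/520/524: pixel loop
        for x_u in range(u0, u1):
            # Step 508: undistorted -> distorted via the model.
            dx, dy = x_u - x0_est, y_u - y0_est
            r2 = dx * dx + dy * dy
            scale = 1.0 + sum(k * r2 ** (i + 1) for i, k in enumerate(K_est))
            x_d, y_d = x0_est + dx * scale, y0_est + dy * scale
            # Step 512: access the distorted image (nearest neighbour
            # here; bilinear interpolation would be the usual choice).
            xi, yi = int(round(x_d)), int(round(y_d))
            if 0 <= xi < w and 0 <= yi < h:
                out[y_u - v0, x_u - u0] = distorted[yi, xi]  # step 516
    return out
```

For colourwise reconstruction (L-CA), the loop would simply be run once per colour channel with channel-specific K_est.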
Supervision of the Distortion Model
Reconstruction from a distorted image to an undistorted image can be performed according to a distortion model, for example in step 508 of the reconstruction method exemplified above. Since successful reconstruction depends on the correctness of the distortion model used, and since reconstructed images may also be used for supervision of the position of the object 109 relative to the optical subsystem 110, the automatic microscope system may, from a safety and/or reliability point of view, have to contain supervision of the distortion model.
Fig. 6 is a block diagram showing how the physical and signal processing principles of a possible distortion model supervision method can interact.
Block 600 corresponds to the object 109. Block 604 corresponds to acquiring a first distorted image of the object. Block 608 corresponds to the first distorted image. Block 612 corresponds to a possible translation of the object 109 by the distances dx in the x-direction and dy in the y-direction, respectively. Block 616 corresponds to the object 109 after the translation. Block 620 corresponds to acquiring a second distorted image of the object. Block 624 corresponds to the second distorted image. The acquiring of the first and second distorted images is assumed to be performed according to the same, but possibly not completely known, physical parameters. In this example, the physical parameters include the distortion function, f1, and parameters x0 and y0 for the position of the optical axis 111 in the x-y-plane of the electronic image sensor 112. The distortion function may be of the same form as the distortion function of Eq. 1-3. If so, the distortion function can be represented by the parameters K1, K2, ...
Blocks 628 and 632 correspond to reconstruction of the first and second distorted images under the influence of an estimated distortion model f1_est and an estimated position (x0_est, y0_est) of the optical axis 111 in the x-y-plane of the electronic image sensor 112. The parameter f1_est may be represented by K1_est, K2_est, ..., but it will still be denoted f1_est below.
Blocks 636 and 640 correspond to the reconstructed first and second images under the influence of an estimated distortion model f1_est and an estimated position (x0_est, y0_est), while block 644 corresponds to a detranslation of the second reconstructed image according to estimates dx_est and dy_est of the corresponding physical parameters dx and dy at block 612.
Blocks 604, 620 and 612, on the one hand, correspond to real world physical processes based on possibly not completely known physical parameters dx, dy, x0, y0 and f1. Blocks 628, 632 and 644, on the other hand, correspond to signal processing processes based on estimated parameters dx_est, dy_est, x0_est, y0_est and f1_est. If the estimated parameters are close to their corresponding physical parameters, and if blocks 628, 632 and 644 are implemented closely enough to inverses of blocks 604, 620 and 612 respectively, the supposedly overlapping areas of 636 and the detranslated 640 will look very much alike. However, as soon as some parameter estimate deviates from its corresponding physical parameter, the supposedly overlapping areas of 636 and the detranslated 640 will differ. Assuming that the object is complicated enough, and that dx and dy are not both too small, two or more simultaneous parameter deviations can hardly mask each other, and the principles of Fig. 6 can be used in a number of methods, which will now be discussed with reference to Fig. 7.
Fig. 7 is a flow chart showing an example of a method for supervision of the distortion function and, possibly, for supervision of the whole reconstruction method as well.
At step 700, a suitable object 109 is positioned relative to the optical subsystem 110. The object can be a general object under analysis or a specially made calibration object. At step 704, a first distorted image of the object 109 is acquired.
At step 708, the object 109 is translated some distance(s) in x and/or y, corresponding to dx and dy. At step 712, a second distorted image of the translated object is acquired. It should overlap the first distorted image of step 704 in order to be useful.
At step 716, initial values of the parameters dx_est, dy_est, x0_est, y0_est and f1_est are determined. It is an advantage if dx_est and dy_est have values that are quite close to the translation distances dx and dy of step 708. It is also an advantage if the values of x0_est, y0_est and f1_est correspond to the last known values for these parameters, i.e. the values that are supposed to be supervised.
At step 720, the first distorted image is reconstructed according to x0_est, y0_est and f1_est. At step 724, the second distorted image is reconstructed according to x0_est, y0_est and f1_est and then detranslated according to dx_est and dy_est.
At step 728, the supposedly overlapping areas, possibly determined using x0_est and y0_est, are compared, and at step 732 a comparison measure is computed. The comparison measure can, for example, be some general measure used when registering images; see, for example, WO9931622A1.
At step 736 the values of the estimated parameters and the corresponding comparison measure are stored for later use.
At step 740, it is determined if the current comparison measure is good enough. However, since the comparison measure may depend on the details of the object, it may not be possible to compare it to a fixed tolerance. Therefore, it may not be possible to end the method after its first pass.
At step 744, the method prepares for another loop by updating the estimated parameters according to some parameter search method. If the task of the supervision method is a mere determination of dx_est and dy_est for position (translation) supervision, it can be sufficient to use a parameter search method that varies dx_est and dy_est until an optimum of the comparison measure with respect to these two parameters has been found. If there is no good enough fit for the current x0_est, y0_est and f1_est, that can be a sign that the distortion model is not correct and that the task has to be changed to supervising the position (x0, y0) of the optical axis 111 in the x-y-plane of the electronic image sensor 112 as well. If so, the parameter search method will have to vary x0_est and y0_est as well, in order to determine if there is a better estimate than the current (x0_est, y0_est). If the fit is still not good enough, it is possible to open up f1_est for estimation as well.
The parameter search method can be based on, for example, the Nelder-Mead (simplex) method. If the supervision method uses the implementation of the reconstruction method for its own reconstructions, see steps 720 and 724, the reconstruction method will, to some extent, be supervised as well.
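The translation-only case of the parameter search at step 744 can be sketched with SciPy's Nelder-Mead implementation. The synthetic blob object, the border trimming and the sum-of-squared-differences measure are illustrative stand-ins for the comparison of steps 728-732, not the patent's actual measure:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import shift as nd_shift

def comparison_measure(params, im1, im2):
    """Sum of squared differences between im1 and im2 detranslated by
    the estimates (dx_est, dy_est); stand-in for steps 728-732."""
    dx_est, dy_est = params
    # Detranslate im2 (block 644 / step 724) with bilinear interpolation.
    det = nd_shift(im2, (-dy_est, -dx_est), order=1)
    # Compare only an interior area to avoid edge effects (step 728).
    core = (slice(8, -8), slice(8, -8))
    return float(np.sum((im1[core] - det[core]) ** 2))

# Synthetic object: a smooth blob, and the same blob translated by
# the "physical" parameters dx = 3.0, dy = 2.0.
y, x = np.mgrid[0:64, 0:64]
im1 = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 60.0)
im2 = np.exp(-((x - 35) ** 2 + (y - 34) ** 2) / 60.0)

# Step 744: Nelder-Mead search starting near the true translation.
res = minimize(comparison_measure, x0=[2.0, 1.0], args=(im1, im2),
               method="Nelder-Mead")
dx_est, dy_est = res.x
```

Opening up x0_est, y0_est and the distortion parameters for estimation would simply extend the parameter vector handed to the optimiser.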
Since the values of the comparison measure computed at step 732 can depend on the object, on the resolution and distortion of the optical subsystem and on the electronic image sensor, it is hard to give a general definition of "good enough". Therefore, estimation of some of the parameters described above can preferably be performed as part of a service routine, since it may take some time and/or benefit from using a known calibration object with patterns that facilitate convergence of the parameter estimates. Another possible way of normalising the values of the comparison measure could be to use objects that comprise a sample mounted on a glass slide with a sample free area having a specially designed calibration pattern. Still, the best calibration pattern can depend on the resolution and the distortion of the optical subsystem and on the electronic image sensor.
It can be seen as an advantage if the physical parameters x0, y0 and f1 are fixed and stable, resulting in less need for supervision, change detection and re-estimation of x0_est, y0_est and f1_est.
As long as the physical parameters x0, y0 and f1 are unchanged, the resolutions of different regions of the reconstructed images will be unchanged, and an already computed compression chart may be reused for, for example, detection at step 404. However, if a change in the physical parameters x0, y0 and f1 is detected, the compression chart may need to be updated as well.
As pointed out above, the compression chart can be determined as a part of the reconstruction.
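Since Eq. 4 is not reproduced in this section, the compression can be illustrated for an assumed polynomial radial model r_d = r_u·(1 + K1·r_u² + K2·r_u⁴ + ...); term-by-term differentiation gives the slope directly, and a compression chart is just this function tabulated over the radii occurring in the distorted image:

```python
def compression(r_u, K):
    """Slope d(r_d)/d(r_u) of the hypothetical polynomial radial model
    r_d = r_u * (1 + K[0]*r_u**2 + K[1]*r_u**4 + ...), obtained by
    differentiating each term k * r_u**(2*(i+1) + 1) analytically."""
    return 1.0 + sum((2 * (i + 1) + 1) * k * r_u ** (2 * (i + 1))
                     for i, k in enumerate(K))
```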
It will be appreciated that the foregoing description and the accompanying drawings represent non-limiting examples of the methods and apparatus taught herein. As such, the invention is not limited to the specific embodiments provided in the foregoing description and accompanying drawings, but is instead limited only by the following claims and their legal equivalents.

Claims

1. In an automatic microscope system comprising an object (109), an optical subsystem (110; 200; 210; 220) and an electronic image sensor (112) for producing electronic images of the object and a positioning arrangement (106) for positioning the object relative to the optical subsystem, a method for detecting and depicting subobjects of a first kind within the object, the method characterized by comprising the steps of acquiring (402) a second distorted image from the electronic image sensor (112), detecting (404) candidate subobjects of the first kind based upon information from a first region of the second distorted image, the first region having at least a first resolution, storing (406) information on a candidate subobject of the first kind in a searchable storage structure of candidate subobjects of the first kind, the stored information including at least estimated coordinates of the candidate subobject,
depicting (408, 410) a candidate subobject, if present, according to any storage structure of candidate subobjects of the first kind, within a second region of the second distorted image, the second region having at least a second resolution, the second resolution being higher than the first resolution,
determining (422) desired next coordinates of the object (109) according to a navigation algorithm, and
positioning (400) the object (109) relative to the optical subsystem (110) according to the desired next coordinates.
2. A method as claimed in claim 1, wherein the detecting step uses information from a reconstructed version of a part of the second distorted image.
3. A method as claimed in claim 1, wherein the detecting step uses information from a compression chart.
4. A method as claimed in claim 1, wherein the depicting step uses information from a reconstructed version of a part of the second distorted image.
5. A method as claimed in claim 1, further comprising the step of evaluating and updating (414, 416, 418) a candidate subobject, if present, according to any searchable storage structure of candidate subobjects of the first kind, within a third region of the second distorted image, the third region having at least a third resolution, the third resolution being between first and second resolutions, based upon information from the third region.
6. A method as claimed in claim 1, wherein the desired next coordinates are expressed as coordinates of the object (109) relative to the optical subsystem (110).
7. A method as claimed in claim 1, wherein the navigation algorithm (422) determines desired next coordinates based upon estimated coordinates of candidate subobjects stored in the searchable storage structure of candidate subobjects of the first kind.
8. A method as claimed in claim 1, where the navigation algorithm (422) uses information from a compression chart in order to determine a border of at least one of the first and second regions.
9. A method as claimed in claim 5, wherein the step of evaluating and updating (418) uses information from a reconstructed version of a part of the second distorted image.
10. A method as claimed in claim 5, wherein the step of evaluating and updating (418) uses information from a compression chart.
11. A method as claimed in claim 2, 4 or 9, wherein the reconstructed version of a part of the second distorted image is based upon (508) at least one distortion model.
12. A method as claimed in claim 3, 8 or 10, wherein the compression chart (508) is based on at least one distortion model.
13. A method according to claim 11, further comprising the steps of supervising the distortion model and detecting a possible change in a distortion model parameter.
14. A method according to claim 13, further comprising the step of estimating a parameter of the distortion model.
15. An automatic microscope system comprising an object (109), an optical subsystem (110; 200; 210; 220) and an electronic image sensor (112) for producing electronic images of the object and a positioning arrangement (106) for positioning the object relative to the optical subsystem, and a configurable control unit (100), characterized in that the optical subsystem (110; 200; 210; 220) comprises at least one setting for producing distorted images of the object (109) onto the electronic image sensor (112) and that the control unit (100) is configured for performing a method comprising the steps of
acquiring (402) a second distorted image from the electronic image sensor (112), detecting (404) candidate subobjects of the first kind based upon information from a first region of the second distorted image, the first region having at least a first resolution, storing (406) information on a candidate subobject of the first kind in a searchable storage structure of candidate subobjects of the first kind, the stored information including at least estimated coordinates of the candidate subobject,
depicting (408, 410) a candidate subobject, if present, according to any storage structure of candidate subobjects of the first kind, within a second region of the second distorted image, the second region having at least a second resolution, the second resolution being higher than the first resolution,
determining (422) desired next coordinates of the object (109) according to a navigation algorithm, and
positioning (400) the object (109) relative to the optical subsystem (110) according to the desired next coordinates.
16. A memory device comprising information for configuring a control unit (100) to perform the steps of a method according to claim 1.
PCT/SE2014/000050 2014-04-15 2014-04-15 Arrangements and methods in a microscope system Ceased WO2015160286A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/SE2014/000050 WO2015160286A1 (en) 2014-04-15 2014-04-15 Arrangements and methods in a microscope system


Publications (1)

Publication Number Publication Date
WO2015160286A1 true WO2015160286A1 (en) 2015-10-22

Family

ID=50780831

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2014/000050 Ceased WO2015160286A1 (en) 2014-04-15 2014-04-15 Arrangements and methods in a microscope system

Country Status (1)

Country Link
WO (1) WO2015160286A1 (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3895854A (en) 1973-10-18 1975-07-22 Coulter Electronics Chromatic method and apparatus for conducting microscopic examinations at a plurality of magnifications
US4061914A (en) 1974-11-25 1977-12-06 Green James E Method and apparatus for dual resolution analysis of a scene
WO1999031622A1 (en) 1997-12-18 1999-06-24 Cellavision Ab Feature-free registration of dissimilar images using a robust similarity metric
JPH11249021A (en) * 1998-03-03 1999-09-17 Nikon Corp Image display system
WO2000055667A1 (en) 1999-03-18 2000-09-21 Cellavision Ab A chromatically uncompensated optical system for composing colour images
EP1377865B1 (en) 2001-04-12 2009-01-28 Cellavision AB A method in microscopy and a microscope, where subimages are recorded and puzzled in the same coordinate system to enable a precise positioning of the microscope stage


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019102272A1 (en) * 2017-11-24 2019-05-31 Sigtuple Technologies Private Limited Method and system for reconstructing a field of view
US11269172B2 (en) * 2017-11-24 2022-03-08 Sigtuple Technologies Private Limited Method and system for reconstructing a field of view


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14726012

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.02.2017)

122 Ep: pct application non-entry in european phase

Ref document number: 14726012

Country of ref document: EP

Kind code of ref document: A1