WO2016027524A1 - Information processing apparatus, information processing method, program, and electro-optical device - Google Patents
- Publication number
- WO2016027524A1 (PCT/JP2015/063387; JP2015063387W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- optical system
- optical
- image
- feature amount
- information processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
Definitions
- the present disclosure relates to an information processing apparatus, an information processing method, a program, and an electro-optical device.
- stereoscopic display devices such as so-called head-mounted displays that allow a user to visually recognize a stereoscopic image by separately outputting a left-eye image and a right-eye image to the user have become widespread.
- The stereoscopic display device is a device that allows the user to perceive an image stereoscopically by recognizing the left-eye image and the right-eye image.
- It is therefore important to evaluate whether the stereoscopic image provided by the stereoscopic display device is appropriate from the viewpoint of comfort.
- Patent Document 1 discloses a technique for evaluating a stereoscopic video obtained by fusing a left-eye image and a right-eye image using image processing for obtaining an optical flow.
- the stereoscopic image is evaluated by calculating the optical flow.
- In view of the above circumstances, the present disclosure proposes an information processing apparatus, an information processing method, and a program capable of quantitatively and rapidly measuring a change in an optical state that may occur in an optical system provided in an optical apparatus.
- the present disclosure proposes an electro-optical device in which an optical system provided in the optical device is adjusted based on a feature amount that quantitatively represents a change in an optical state that can occur in the optical system.
- According to the present disclosure, there is provided an information processing apparatus including: a corresponding position specifying unit that, using a plurality of pieces of imaging data obtained by capturing a plurality of random pattern images output from an optical device having a predetermined optical system in situations where the state of the optical system differs, the pieces of imaging data each corresponding to a state of the optical system, specifies positions corresponding to each other among the plurality of captured images; and a feature amount calculation unit that calculates, based on the specification result obtained by the corresponding position specifying unit, a feature amount representing a difference in the optical system between the plurality of captured images of interest.
- According to the present disclosure, there is also provided an information processing method including: capturing a plurality of random pattern images output from an optical apparatus having a predetermined optical system in situations where the state of the optical system differs, and generating a plurality of pieces of imaging data each corresponding to a state of the optical system; specifying positions corresponding to each other among the plurality of captured images by calculating, using the plurality of pieces of imaging data, an optical flow between the plurality of captured images at at least three arbitrary points in the captured images corresponding to the imaging data; and calculating, based on the result of specifying the mutually corresponding positions, a feature amount representing a difference in the optical system between the plurality of captured images of interest.
- According to the present disclosure, there is further provided a program for a computer having an imaging unit that captures a plurality of random pattern images output from an optical apparatus having a predetermined optical system in situations where the state of the optical system differs and generates a plurality of pieces of imaging data each corresponding to a state of the optical system, the program causing the computer to calculate, using the plurality of pieces of imaging data captured by the imaging unit, an optical flow between the plurality of captured images at at least three arbitrary points in the captured images corresponding to the imaging data, to specify positions corresponding to each other among the plurality of captured images, and to calculate a feature amount representing a difference in the optical system between the plurality of captured images of interest.
- According to the present disclosure, there is also provided an electro-optical device including: a predetermined optical system that displays a predetermined image on a predetermined display screen; an imaging unit that captures a plurality of random pattern images output from an optical device having the predetermined optical system in situations where the state of the optical system differs, and generates a plurality of pieces of imaging data each corresponding to a state of the optical system; a corresponding position specifying unit that specifies positions corresponding to each other among the plurality of captured images by calculating, using the plurality of pieces of imaging data captured by the imaging unit, an optical flow between the plurality of captured images at at least three arbitrary points in the captured images corresponding to the imaging data; a feature amount calculation unit that calculates, based on the specification result obtained by the corresponding position specifying unit, a feature amount representing a difference in the optical system between the plurality of captured images of interest; and an adjustment unit that acquires feature amount information based on the feature amount calculated by the feature amount calculation unit and adjusts the optical system based on the feature amount information.
- According to the present disclosure, an optical flow between a plurality of captured images is calculated at at least three or more arbitrary points in the captured images corresponding to the imaging data, using a plurality of pieces of imaging data captured in situations where the state of the optical system differs.
- positions corresponding to each other between the plurality of captured images are specified.
- a feature amount representing a difference in the optical system between the plurality of captured images of interest is calculated.
- FIG. 3 is a block diagram illustrating an example of a configuration of an information processing device according to a first embodiment of the present disclosure.
- FIG. 2 is a block diagram illustrating an example of the configuration of the arithmetic processing unit included in the information processing apparatus according to the embodiment.
- FIG. 3 is a block diagram illustrating an example of a hardware configuration of an information processing apparatus and an electro-optical device according to an embodiment of the present disclosure.
- FIG. 1 is a block diagram illustrating an example of the configuration of the information processing apparatus according to the present embodiment.
- FIG. 2 is a block diagram illustrating an example of a configuration of an arithmetic processing unit included in the information processing apparatus according to the present embodiment.
- FIGS. 3 to 4B are explanatory diagrams showing an example of the random pattern image generation process according to the present embodiment.
- FIGS. 5 and 6 are explanatory diagrams showing an example of the optical flow calculation process according to the present embodiment.
- FIGS. 7 to 9 are explanatory diagrams showing an example of the feature amount calculation process according to the present embodiment.
- FIG. 10 is an explanatory diagram showing an example of the state determination process of the optical system according to the present embodiment.
- The information processing apparatus 10 according to the present embodiment captures a random pattern image output from the optical system 3 provided in the optical apparatus 1 and calculates a feature amount that characterizes the optical state of the optical system 3, thereby functioning as a measuring device that measures the optical state of the optical system 3.
- The optical apparatus 1 to be measured by the information processing apparatus 10 according to the present embodiment is not particularly limited; it only needs to have a known optical system 3 capable of realizing a plurality of different optical states.
- various images are output to the outside through the optical system 3.
- Examples of the optical device 1 include an optical device having a zoom optical system, an optical device having a stereo optical system, or an optical device that generates a stereo image.
- Examples of the optical apparatus having the zoom optical system include a camera, a camcorder, and a telescope.
- Examples of the optical apparatus having a stereo optical system include a microscope and binoculars.
- Optical devices that generate stereo images include stereoscopic display devices such as head-mounted displays.
- the information processing apparatus 10 mainly includes an imaging unit 101, an arithmetic processing unit 103, and a storage unit 105, as illustrated in FIG.
- the imaging unit 101 includes various lenses and various imaging elements such as a charge coupled device (CCD) and a complementary metal oxide semiconductor (CMOS).
- The lens and the image sensor constituting the imaging unit 101 are not particularly limited, and any known lens and image sensor can be used as long as they can appropriately capture the image output from the optical device 1.
- The imaging unit 101 may have a known drive mechanism, such as various motors and actuators, in order to change the installation position of the lens and the image sensor relative to the optical device 1.
- The imaging unit 101 captures a plurality of random pattern images output from the optical device 1 having the predetermined optical system 3 in situations where the state of the optical system 3 differs, and generates a plurality of pieces of imaging data each corresponding to a state of the optical system 3.
- the random pattern image is a pattern image in which the randomness of the pattern arrangement is mathematically guaranteed, and there is no portion having the same pattern in a certain pattern image. Therefore, in such a random pattern image, the value of the autocorrelation coefficient at the position of interest is 1, and the value of the cross-correlation coefficient between the position of interest and other positions is 0.
- Such a random pattern image can be created by overlaying and synthesizing images with fine textures, and can also be created by using an M-sequence signal composed of random numbers consisting of 0 and 1.
- the random pattern image captured by the imaging unit 101 may be a random pattern image created as described above, but it is more preferable to use a synthetic random pattern image generated by the arithmetic processing unit 103 described later.
- By using a random pattern image as described above as the image to be output from the optical device 1 and performing arithmetic processing on the captured image obtained by capturing it, every pixel constituting the captured image can be used as a feature point when calculating the optical flow described later. Therefore, when calculating the optical flow for such a captured image, the optical flow can be calculated at an arbitrary point constituting the image.
- The "situation in which the state of the optical system 3 is different" may be realized, for example, by the presence of a plurality of optical systems 3 having different states in the optical device 1, such as the optical system that outputs the left-eye image and the optical system that outputs the right-eye image in a head-mounted display (HMD).
- The situation in which the state of the optical system 3 is different may also be realized by changing the state of various optical elements in a single optical system 3 present in the optical device 1, for example between the state in which a wide-angle image is output and the state in which a telephoto image is output in an optical device having a zoom optical system.
- The situation where the state of the optical system 3 is different may also be realized by a plurality of optical systems 3 that may be in different states between a plurality of optical devices 1, such as the optical system 3 included in a reference optical device 1 and the optical system 3 included in an optical device 1 to be inspected.
- a plurality of pieces of imaging data generated by the imaging unit 101 and corresponding to the state of the optical system 3 are output to the arithmetic processing unit 103.
- the arithmetic processing unit 103 is implemented by, for example, a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), an input device, an output device, a communication device, and the like.
- The arithmetic processing unit 103 controls the imaging process performed by the imaging unit 101 and performs various arithmetic processes on the plurality of pieces of imaging data generated by the imaging unit 101, thereby calculating a feature amount that characterizes the state of the optical system 3 provided in the optical apparatus 1.
- the arithmetic processing unit 103 can also determine the state of the optical system 3 provided in the optical device 1 based on the calculated feature amount. Further, the arithmetic processing unit 103 can generate a random pattern image used in the optical device 1 and output it to the optical device 1.
- the storage unit 105 is realized by, for example, a RAM or a storage device included in the information processing apparatus 10 according to the present embodiment.
- The storage unit 105 may appropriately record various databases used for processing in the imaging unit 101 and the arithmetic processing unit 103, various programs including applications used for the various arithmetic processes executed by these processing units, various parameters that need to be saved during processing, the progress of processing, and the like.
- the storage unit 105 can be freely accessed by each processing unit such as the imaging unit 101 and the arithmetic processing unit 103 to write and read data.
- The arithmetic processing unit 103 includes an imaging control unit 111, a random pattern image generation unit 113, a data output unit 115, a data acquisition unit 117, a corresponding position specifying unit 119, a feature amount calculation unit 121, and an optical system state determination unit 123.
- the imaging control unit 111 is realized by, for example, a CPU, a ROM, a RAM, a communication device, and the like.
- the imaging control unit 111 comprehensively controls imaging processing in the imaging unit 101.
- The imaging control unit 111 can also cooperate with the data output unit 115 described later to output the composite random pattern image generated by the random pattern image generation unit 113 described later to the optical device 1 to be imaged.
- the imaging unit 101 included in the information processing apparatus 10 according to the present embodiment can capture a random pattern image output from the optical device 1 including the predetermined optical system 3 at an appropriate timing.
- the random pattern image generation unit 113 is realized by a CPU, a ROM, a RAM, and the like, for example.
- the random pattern image generation unit 113 generates a random pattern image used in the optical device 1 based on the M-sequence signal.
- the random pattern image generation unit 113 may generate a combined random pattern image by combining random pattern images based on at least two types of M-sequence signals.
- a random pattern image can be obtained by assigning such M-sequence signals to the pixels constituting the image.
- a random pattern image generated by assigning an M-sequence signal for each pixel is hereinafter referred to as an “M1 image” for convenience.
- A random pattern image generated by assigning an M-sequence signal to each pixel region defined by p × p pixels (p ≥ 2) will be referred to as an "Mp image" for convenience.
- The degree m of the irreducible polynomial h(m) used when generating the random pattern image is preferably determined according to the number of pixels of the image (the image corresponding to the imaging data) generated by the imaging unit 101. Specifically, when the image (captured image) corresponding to the imaging data is composed of P pixels, it is preferable to select an integer m that satisfies m ≥ log2(P). By selecting such an integer m, it is possible to prevent a repeated pattern from appearing in the image.
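- As an illustration of this construction (a minimal sketch under stated assumptions, not the patent's own implementation), the code below generates an M-sequence with a Fibonacci linear-feedback shift register and assigns one bit per p × p pixel block to form an M1 or Mp image. The tap table and the image size are illustrative assumptions.

```python
import numpy as np

def m_sequence(degree, taps, length):
    """Generate `length` bits of a maximal-length (M-)sequence with a Fibonacci
    LFSR. `taps` lists the 1-based bit positions of a primitive polynomial of
    the given degree; the period is then 2**degree - 1."""
    state = [1] * degree                      # any non-zero seed works
    bits = np.empty(length, dtype=np.uint8)
    for i in range(length):
        bits[i] = state[-1]                   # output bit
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]       # shift the register
    return bits

def m_image(height, width, dot=1):
    """Build an "Mp" random pattern image: one M-sequence bit per dot x dot
    block. The degree m is chosen so that 2**m - 1 >= number of blocks,
    i.e. m >= log2(P), so that no repeated pattern appears in the image."""
    bh, bw = -(-height // dot), -(-width // dot)          # ceiling division
    n_blocks = bh * bw
    degree = max(2, int(np.ceil(np.log2(n_blocks + 1))))
    # Tap positions of known primitive polynomials (illustrative table only;
    # any primitive polynomial of the chosen degree may be substituted).
    taps_table = {15: (15, 14), 16: (16, 14, 13, 11), 17: (17, 14),
                  18: (18, 11), 19: (19, 18, 17, 14), 20: (20, 17),
                  21: (21, 19), 22: (22, 21), 23: (23, 18)}
    taps = taps_table.get(degree, (degree, degree - 1))   # fallback is not guaranteed primitive
    bits = m_sequence(degree, taps, n_blocks).reshape(bh, bw)
    image = np.kron(bits, np.ones((dot, dot), dtype=np.uint8)) * 255
    return image[:height, :width]

# Example: a 512 x 512 "M1" image (1-pixel dots) and an "M3" image (3 x 3-pixel dots)
m1_image = m_image(512, 512, dot=1)
m3_image = m_image(512, 512, dot=3)
```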
- the random pattern image used for generating the synthesized random pattern image may be selected according to the window size that is a processing unit in the optical flow calculation process performed by the corresponding position specifying unit 119 described later.
- The random pattern image generation unit 113 preferably selects at least a first random pattern image in which the size of the dots constituting the random pattern image is smaller than the window size, and a second random pattern image in which the size of the dots is larger than the window size.
- In this way, the pattern constituting the random pattern image is not completely crushed even if the optical system 3 to be imaged is blurred, and the corresponding position specifying unit 119 in the subsequent stage can perform more accurate calculation processing.
- the optical flow can be calculated even when the window size in the optical flow calculation is relatively small. Accordingly, it is possible to realize a highly accurate optical flow calculation process while suppressing an increase in the calculation amount.
- For example, as shown in FIG. 4A, the random pattern image generation unit 113 may generate a composite random pattern image using only random pattern images with odd dot sizes, such as M1, M3, M5, M7, M9, and so on. Also, as shown in FIG. 4B, the random pattern image generation unit 113 may generate a composite random pattern image using only random pattern images with even dot sizes, such as M2, M4, M6, M8, M10, and so on.
- The low-pass filter is not particularly limited, and a known filter can be used; for example, a Fourier transform such as a fast Fourier transform (FFT), an inverse Fourier transform, and a frequency filter may be combined, or an averaging filter may be used.
- It has become clear that it is more preferable to set the window size at the time of optical flow calculation to 2 to 3 times the dot size. Therefore, when generating a composite random pattern image, it is preferable that the random pattern image generation unit 113 use, in the processing shown in FIG. 3, a plurality of random pattern images whose dot size is 1/2 or less of the window size. It is even more preferable that the random pattern image generation unit 113 use a plurality of random pattern images whose dot size is 1/3 or less of the window size when generating the composite random pattern image.
- the random pattern image generation unit 113 outputs the composite random pattern image generated in this way to the data output unit 115.
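- A minimal sketch of one way such a composite could be produced, assuming simple weighted blending of a fine-dot and a coarse-dot Mp image (using the `m_image` helper from the earlier sketch). The patent leaves the exact synthesis method open, so the 50/50 weighting is an assumption.

```python
import numpy as np

def composite_random_pattern(fine_img, coarse_img, weight=0.5):
    """Blend a fine-dot and a coarse-dot random pattern image into a single
    composite pattern: the fine dots keep texture inside a small optical-flow
    window, while the coarse dots survive blur in the optical system under
    test. The blending weight is only an illustrative choice."""
    blended = (weight * fine_img.astype(np.float32)
               + (1.0 - weight) * coarse_img.astype(np.float32))
    return np.clip(blended, 0, 255).astype(np.uint8)

# With an optical-flow window of about 7 pixels, dot sizes of 1 and 3 pixels
# satisfy the "dot size <= 1/2 (or 1/3) of the window size" guideline above.
# composite = composite_random_pattern(m_image(512, 512, dot=1),
#                                      m_image(512, 512, dot=3))
```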
- the data output unit 115 is realized by, for example, a CPU, a ROM, a RAM, a communication device, and the like.
- the data output unit 115 outputs the combined random pattern image generated by the random pattern image generation unit 113 to the optical device 1 that is the imaging target of the imaging unit 101.
- the composite random pattern image is output through the optical system 3.
- the data acquisition unit 117 is realized by, for example, a CPU, a ROM, a RAM, a communication device, and the like.
- the data acquisition unit 117 acquires a plurality of pieces of imaging data generated by the imaging unit 101 and outputs them to a corresponding position specifying unit 119 described later.
- the corresponding position specifying unit 119 is realized by, for example, a CPU, a ROM, a RAM, and the like.
- The corresponding position specifying unit 119 uses the plurality of pieces of imaging data captured by the imaging unit 101 to calculate the optical flow between the plurality of captured images at arbitrary points, at least three or more in number (preferably three or more points not on a straight line), in the captured images corresponding to the imaging data. Thereafter, the corresponding position specifying unit 119 specifies positions corresponding to each other among the plurality of captured images based on the calculated optical flow.
- the point for calculating the optical flow may be any point in the captured image, and may be a lattice point in the captured image or may not be a lattice point.
- a plurality of points can be considered as schematically illustrated in FIG.
- one of the plurality of captured images generated by the imaging unit 101 will be referred to as an image A, and the other one will be referred to as an image B.
- A point in the image A will be denoted as (Mx, My), and the corresponding point in the image B as (Px, Py).
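- As a concrete, hypothetical realization of this step, the sketch below tracks a uniform grid of points from the image A to the image B with OpenCV's pyramidal Lucas-Kanade optical flow. The grid density, window size, and the use of `cv2.calcOpticalFlowPyrLK` are illustrative choices; the patent does not prescribe a particular optical-flow algorithm.

```python
import cv2
import numpy as np

def corresponding_points(image_a, image_b, n_per_axis=10, window=9):
    """Track a uniform grid of points from 8-bit grayscale image A to image B
    with pyramidal Lucas-Kanade optical flow, returning the surviving point
    pairs (Mx, My) in A and (Px, Py) in B."""
    h, w = image_a.shape[:2]
    xs = np.linspace(window, w - 1 - window, n_per_axis)
    ys = np.linspace(window, h - 1 - window, n_per_axis)
    pts_a = np.array([[x, y] for y in ys for x in xs],
                     dtype=np.float32).reshape(-1, 1, 2)
    pts_b, status, _err = cv2.calcOpticalFlowPyrLK(
        image_a, image_b, pts_a, None,
        winSize=(window, window), maxLevel=3)
    ok = status.ravel() == 1                  # keep successfully tracked points
    return pts_a.reshape(-1, 2)[ok], pts_b.reshape(-1, 2)[ok]

# pts_a, pts_b = corresponding_points(image_a, image_b)
```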
- the corresponding position specifying unit 119 sets an appropriate window size in cooperation with the random pattern image generation unit 113 prior to the calculation of the optical flow.
- the specific size of the window size to be set is not particularly limited, and may be set as appropriate according to the required optical flow calculation accuracy.
- the corresponding position specifying unit 119 sets the number of points for calculating the optical flow to at least 3 points.
- The feature amount calculation unit 121 described later performs a process of specifying a geometric transformation matrix that associates the image A and the image B of interest. As described later, two types of geometric transformation that can occur between the two images are considered: affine transformation and projective transformation.
- When adopting a model in which the geometric transformation characterizing the two images is an affine transformation, the number of unknowns in the 3 × 3 affine transformation matrix is six. Therefore, when the affine transformation model is adopted in the feature amount calculation unit 121 described later, the corresponding position specifying unit 119 needs at least three points (which supply six values to the process). When adopting a model in which the geometric transformation characterizing the two images is a projective transformation, the number of unknowns in the 3 × 3 projective transformation matrix is eight. Therefore, when the projective transformation model is adopted in the feature amount calculation unit 121 described later, the corresponding position specifying unit 119 needs at least four points (which supply eight values to the process).
- The present inventor attempted the optical flow calculation process for 4 to 8000 points, using a random pattern image and images obtained by rotating, translating, and enlarging or reducing the random pattern image.
- When converted images with almost no distortion were used for verification, no significant difference was found in the accuracy of the optical flow calculation. Accordingly, in the actual measurement of the optical system 3, if an accurate affine transformation or projective transformation holds between the two images to be compared, the optical flow calculation result is considered to have sufficient accuracy even if the number of points for calculating the optical flow is three (in the case of affine transformation) or four (in the case of projective transformation).
- It was also found that an optical flow with sufficient accuracy can be calculated by setting the number of points for calculating the optical flow to 1000 points or less (for example, about several tens to several hundred points) and arranging the points uniformly over the entire image.
- By calculating the optical flow between the two images of interest in this manner, the corresponding position specifying unit 119 identifies, as schematically shown in FIG., the point of the image B corresponding to the point (Mx, My) in the image A from among the points constituting the image B. Thereby, the corresponding position specifying unit 119 can specify that the point of the image B corresponding to the point (Mx, My) in the image A is the point (Px, Py). The corresponding position specifying unit 119 performs such a corresponding position specifying process on the points for which the optical flow has been calculated.
- When the corresponding position specifying unit 119 calculates the optical flow for a sufficiently large number of points, it may perform the corresponding position specifying process only for as many points as are needed to obtain sufficient accuracy in the feature amount calculation process of the feature amount calculation unit 121 described later, or it may perform the corresponding position specifying process for all points for which the optical flow has been calculated.
- When the corresponding position specifying unit 119 has specified the correspondence between the points of interest in the two images based on the optical flow calculation result, it outputs the obtained specification result to the feature amount calculation unit 121.
- the feature amount calculation unit 121 is realized by, for example, a CPU, a ROM, a RAM, and the like.
- the feature amount calculation unit 121 calculates a feature amount representing a difference in the optical system between a plurality of focused images based on the specification result obtained by the corresponding position specification unit 119.
- Specifically, the feature amount calculation unit 121 uses the point-correspondence specification result obtained by the corresponding position specifying unit 119 to calculate a transformation matrix representing the geometric correspondence between the first image (image A) corresponding to the first imaging data and the second image (image B) corresponding to the second imaging data. In addition, the feature amount calculation unit 121 uses the calculated transformation matrix to calculate feature amounts such as a shift of the optical system, an image size change caused by the optical system, a tilt of the optical system, and a distortion of the optical system (hereinafter also referred to as "optical system feature amounts").
- As schematically shown in FIG. 7, the feature amount calculation unit 121 considers two types of geometric transformation representing the geometric correspondence between the image A and the image B: affine transformation and projective transformation. Hereinafter, the arithmetic processing performed by the feature amount calculation unit 121 will be specifically described for both the affine transformation model and the projective transformation model.
- the feature amount calculation processing in the feature amount calculation unit 121 when the affine transformation model is adopted will be specifically described.
- When the point (Mx, My) in the image A and the point (Px, Py) in the image B are related by an affine transformation, the relationship can be expressed as in the following Expression 101.
- the matrix A is a transformation matrix representing an affine transformation as represented by the following equation 103.
- The affine transformation matrix A has six unknowns a11 to a23 among its matrix elements.
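- For reference, the relationship that the text attributes to Expressions 101 and 103 presumably has the following form (an inferred reconstruction; the original expressions are not reproduced on this page):

$$
\begin{pmatrix} P_x \\ P_y \\ 1 \end{pmatrix}
=
A
\begin{pmatrix} M_x \\ M_y \\ 1 \end{pmatrix},
\qquad
A =
\begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
0 & 0 & 1
\end{pmatrix}.
$$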
- the affine transformation includes a translation operation represented by the following equation 105, a rotation operation represented by equation 107, a skew (shearing) operation represented by equation 109, and an enlargement / reduction operation represented by equation 111.
- In Expression 105, the matrix elements Tx and Ty correspond to the position shift amounts in the x-axis direction and the y-axis direction, respectively.
- The angle θ in Expression 107 corresponds to the rotation angle.
- The angle φ in Expression 109 corresponds to the skew angle.
- The matrix elements Sx and Sy in Expression 111 correspond to the size change amounts in the x-axis direction and the y-axis direction, respectively.
- In the present embodiment, the feature amount calculation unit 121 performs the feature amount calculation process assuming the affine transformation matrix A represented by the following Expression 103′, which is the transformation matrix obtained when an enlargement/reduction operation, a skew operation, a rotation operation, and a translation operation are performed in this order.
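- A plausible reconstruction of the component operations (Expressions 105 to 111) and of the composition of Expression 103′, consistent with the definitions above, is given below; the sign and skew conventions are assumptions, since the original expressions are not reproduced on this page.

$$
T = \begin{pmatrix} 1 & 0 & T_x \\ 0 & 1 & T_y \\ 0 & 0 & 1 \end{pmatrix},\quad
R = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix},\quad
K = \begin{pmatrix} 1 & \tan\phi & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},\quad
S = \begin{pmatrix} S_x & 0 & 0 \\ 0 & S_y & 0 \\ 0 & 0 & 1 \end{pmatrix},
$$

$$
A = T\,R\,K\,S \qquad \text{("TRKS": scaling, then skew, then rotation, then translation).}
$$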
- The feature amount calculation unit 121 calculates the affine transformation matrix A represented by Expression 103′ on the basis of the following Expressions 113a and 113b, using the point-correspondence specification result obtained by the corresponding position specifying unit 119.
- Expressions 113a and 113b include a process of calculating the inverse of a 3 × 3 matrix; this inverse can be calculated using the general solution for the inverse of a 3 × 3 matrix.
- In these expressions, the sigma symbol indicates that the sum of the values represented by each element is calculated over all of the points to be used, and N indicates the number of points used for the calculation.
- By performing the operations represented by Expressions 113a and 113b based on the calculation result of the optical flow, the feature amount calculation unit 121 can specify the matrix elements a11 to a23 of the affine transformation matrix A represented by Expressions 103 and 103′.
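- The sketch below solves the same least-squares fit with a standard linear solver and then recovers the shift, rotation, skew, and scale from the estimated matrix under the TRKS model. The closed-form decomposition and its sign conventions are assumptions consistent with the description above, not a transcription of Expressions 113a, 113b, and 115 to 127.

```python
import numpy as np

def fit_affine(pts_a, pts_b):
    """Least-squares estimate of the 3x3 affine matrix A mapping points
    (Mx, My) in image A to points (Px, Py) in image B."""
    pts_a = np.asarray(pts_a, dtype=np.float64)
    pts_b = np.asarray(pts_b, dtype=np.float64)
    n = len(pts_a)
    X = np.zeros((2 * n, 6))
    y = np.zeros(2 * n)
    X[0::2, 0:2], X[0::2, 2] = pts_a, 1.0
    X[1::2, 3:5], X[1::2, 5] = pts_a, 1.0
    y[0::2], y[1::2] = pts_b[:, 0], pts_b[:, 1]
    a11, a12, a13, a21, a22, a23 = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.array([[a11, a12, a13], [a21, a22, a23], [0.0, 0.0, 1.0]])

def affine_features(A):
    """Recover (Tx, Ty, theta, phi, Sx, Sy) assuming the "TRKS" model
    A = T @ R @ K @ S with positive scale factors (an assumption)."""
    (a11, a12, a13), (a21, a22, a23) = A[0], A[1]
    theta = np.arctan2(a21, a11)                            # rotation angle
    s_x = np.hypot(a11, a21)                                # scale along x
    s_y = -np.sin(theta) * a12 + np.cos(theta) * a22        # scale along y
    phi = np.arctan2(np.cos(theta) * a12 + np.sin(theta) * a22, s_y)  # skew angle
    return {"Tx": a13, "Ty": a23, "theta": theta, "phi": phi,
            "Sx": s_x, "Sy": s_y}
```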
- The feature amount calculation unit 121 also calculates the standard deviation σ of the conversion error represented by the following Expression 127 as a feature amount that represents a distortion that cannot be expressed by the affine transformation, or the degree of image collapse.
- In Expression 127, (Px, Py) is the value of the point specified by the corresponding position specifying unit 119, and (Px, Py) with a hat symbol is the estimated value obtained when (Mx, My) is affine-transformed using the obtained affine transformation matrix A.
- A small value of the standard deviation σ indicates that the amount of distortion or collapse is small, and a large value of the standard deviation σ indicates that the amount of distortion or collapse is large.
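- A sketch of this residual measure, assuming an RMS-style statistic over the tracked points (the precise normalization of Expression 127 is not shown on this page):

```python
import numpy as np

def conversion_error_std(A, pts_a, pts_b):
    """Spread of the residual between the affine-predicted points
    A @ (Mx, My, 1) and the observed (Px, Py): a measure of the distortion
    that the affine model cannot express."""
    pts_a = np.asarray(pts_a, dtype=np.float64)
    pts_b = np.asarray(pts_b, dtype=np.float64)
    ones = np.ones((len(pts_a), 1))
    predicted = (np.hstack([pts_a, ones]) @ A.T)[:, :2]   # estimated (Px, Py)
    residuals = np.linalg.norm(predicted - pts_b, axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))
```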
- By performing the above arithmetic processing, the feature amount calculation unit 121 calculates the optical system feature amounts, expressed by Expressions 115 to 127, that represent differences in the state (optical state) of the optical system 3.
- In affine transformation models other than "TRKS", "TSRK", "TSKR", "TRSK", "TKSR", and "TKRS", which perform the translation operation last, the above four matrix operations are of course still performed, but the elements (a13, a23) indicating the parallel movement amount (position shift amount) of the affine transformation model do not become Tx and Ty. For this reason, the Tx and Ty obtained as a result of such a calculation cannot be treated as the vertical shift amount and the horizontal shift amount.
- the feature amount calculation unit 121 handles the expression 103 ', which is an affine transformation model of "TRKS", as the transformation matrix A of the affine transformation model.
- In addition to the translation operation represented by Expression 105, the rotation operation represented by Expression 107, the skew (shearing) operation represented by Expression 109, and the enlargement/reduction operation represented by Expression 111, the projective transformation includes a projection operation. In the expression representing the projection operation, two matrix elements correspond to the projection coefficients in the x-axis direction and the y-axis direction, respectively.
- In the present embodiment, the feature amount calculation unit 121 performs the feature amount calculation process assuming the projective transformation matrix represented by the following Expression 133′, which is the transformation matrix obtained when an enlargement/reduction operation, a skew operation, a rotation operation, a parallel movement operation, and a projection operation are performed in this order.
- Since the projective transformation matrix H represents exactly the same transformation even if all of its elements are multiplied by a constant, the feature amount calculation unit 121 assumes the projective transformation matrix H′ represented by the following Expression 133″ and performs the feature amount calculation process.
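- Given the eight unknowns h11 to h32 and the scale invariance noted above, the normalized matrix H′ of Expression 133″ presumably has the following form (an inferred reconstruction):

$$
H' =
\begin{pmatrix}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & 1
\end{pmatrix},
\qquad
\begin{pmatrix} P_x \\ P_y \\ 1 \end{pmatrix}
\sim
H'
\begin{pmatrix} M_x \\ M_y \\ 1 \end{pmatrix}.
$$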
- the feature quantity calculation unit 121 calculates the projective transformation matrix H ′ represented by the expression 133 ′′ based on the following expression 135 using the identification result of the point correspondence obtained by the corresponding position specifying unit 119.
- Expression 135 includes a process of calculating the inverse of an 8 × 8 matrix; such an inverse matrix can be calculated using a known numerical operation program.
- the sigma symbol indicates that the sum of the values represented by each element is calculated for all of the points to be used, and N indicates the number of points to be used for the calculation.
- The feature amount calculation unit 121 can specify each of the matrix elements h11 to h32 of the projective transformation matrix H expressed by Expressions 133 and 133″ by performing the calculation expressed by Expression 135 based on the calculation result of the optical flow.
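- A sketch of this linear estimation, fixing h33 = 1 and solving the eight unknowns by least squares (a standard DLT-style formulation assumed to match the spirit of Expression 135; `cv2.findHomography` would be a library alternative):

```python
import numpy as np

def fit_homography(pts_a, pts_b):
    """Estimate the projective transformation H' (with h33 fixed to 1) that
    maps (Mx, My) to (Px, Py), by linear least squares over h11..h32."""
    rows, rhs = [], []
    for (mx, my), (px, py) in zip(pts_a, pts_b):
        rows.append([mx, my, 1, 0, 0, 0, -mx * px, -my * px]); rhs.append(px)
        rows.append([0, 0, 0, mx, my, 1, -mx * py, -my * py]); rhs.append(py)
    h, *_ = np.linalg.lstsq(np.asarray(rows, dtype=np.float64),
                            np.asarray(rhs, dtype=np.float64), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)
```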
- the feature quantity calculation unit 121 calculates the optical system feature quantity based on the following formulas 137 to 151 using the value of the obtained projective transformation matrix H.
- Instead of the transformation matrix of Expression 133′, in which the enlargement/reduction operation, the skew operation, the rotation operation, the translation operation, and the projection operation are performed in order, the feature amount calculation unit 121 may assume the projective transformation matrix H represented by the following Expression 153 and perform the feature amount calculation process.
- The projective transformation matrix H represented by Expression 153 is the transformation matrix obtained when an enlargement/reduction operation, a skew operation, a rotation operation, a projection operation, and a parallel movement operation are performed in this order.
- In this case, the feature amount calculation unit 121 performs the calculation represented by Expression 135 based on the calculation result of the optical flow to specify each of the matrix elements h11 to h32 of the projective transformation matrix H represented by Expressions 133 and 153. After that, by combining Expression 133 and Expression 153, the feature amount calculation unit 121 can calculate the optical system feature amounts in the same manner as when Expression 133″ is used.
- It is conceivable that the translation operation and the projection operation should be performed as late as possible (that is, one of the two operations is performed last and the other second from last). Therefore, as the projective transformation model according to the present embodiment, it is preferable to use the "BTRKS" model represented by Expressions 133′ and 133″ or the "TBRKS" model represented by Expression 153.
- the affine transformation model and the projective transformation model employed in the feature amount calculation unit 121 according to the present embodiment have been specifically described.
- the feature amount calculation unit 121 calculates, for example, an optical system feature amount as illustrated in FIG.
- the optical system feature amount is a quantification of the state of the optical system 3, and the state of the optical system 3 can be objectively determined by referring to the feature amount.
- As described above, the feature amount calculation unit 121 can use two types of geometric transformation models, the affine transformation model and the projective transformation model. It is preferable to use these two types of models selectively from the viewpoints shown in FIG. 9.
- When the optical system 3 to be imaged by the imaging unit 101 is an optical system in which there is no tilt of the display image, or in which the tilt of the display image is negligible, the affine transformation model is preferably used. The affine transformation model has a small number of unknowns, so a highly accurate feature amount calculation result can be obtained while suppressing the calculation cost.
- On the other hand, when the optical system 3 to be imaged by the imaging unit 101 is an optical system in which the tilt of the display image is not negligible, the projective transformation model is preferably used.
- The tilt of the display image refers to a situation where the display image is not directly facing the imaging unit 101.
- For example, the tilt of the display image includes the case where, along the optical axis connecting the imaging unit 101 and the optical system 3, the upper end of the display image is located on the far side (or the near side) with respect to the lower end, and the case where the right end of the display image is located on the far side (or the near side) with respect to the left end.
- As shown in FIG. 9, the affine transformation model is advantageous, for example, (a) when measuring the zoom amount (magnification) of a zoom optical system, (b) when only the feature amounts excluding the tilt need to be measured, (c) when measuring an HMD having an optical system that generates a stereo image from a single display, and (d) when measuring a stereo optical system such as a microscope or binoculars.
- Which model the feature amount calculation unit 121 adopts may be set in advance, prior to the measurement of the optical system 3, in accordance with the above policy and the characteristics of the optical system 3 to be measured.
- The feature amount calculation unit 121 outputs information on the optical system feature amounts calculated as described above, as shown in FIG. 8, to the optical system state determination unit 123. The feature amount calculation unit 121 may also output the information on the calculated optical system feature amounts to various information processing apparatuses existing outside the information processing apparatus 10, such as various computers and servers.
- the optical system state determination unit 123 is realized by, for example, a CPU, a ROM, a RAM, and the like.
- The optical system state determination unit 123 determines the state of the optical system 3 based on the optical system feature amounts calculated by the feature amount calculation unit 121. Specifically, a determination threshold is set in advance in the optical system state determination unit 123 for each of the optical system feature amounts shown in FIG. 8, and the state of the optical system 3 is determined by comparing the magnitude of each calculated optical system feature amount with the corresponding determination threshold.
- The determination result of the state of the optical system obtained by the optical system state determination unit 123 may be output to the user of the information processing apparatus 10 by various methods, or may be directly fed back to the optical device 1. By using such a determination result, the state of the optical system 3 provided in the optical apparatus 1 can be grasped specifically, and the optical system 3 can be adjusted more easily.
- FIG. 10 simply shows an example of an optical system state determination process when a head-mounted display having a stereo optical system is a processing target.
- For example, the optical system state determination unit 123 refers to the calculated feature amounts related to size (Sx, Sy); when a feature amount is not approximately 1 (that is, when its deviation from 1 exceeds a predetermined threshold), it can determine that the magnification differs between the left and right optical systems.
- The optical system state determination unit 123 pays attention to the rotation angle (θ); when the feature amount is not approximately 0 (that is, when it exceeds a predetermined threshold), it can determine that the optical system is tilted (that is, that the optical system is rotated).
- The optical system state determination unit 123 pays attention to the skew angle (φ); when the feature amount is not approximately 0 (that is, when it exceeds a predetermined threshold), it can determine that the optical system is distorted.
- The optical system state determination unit 123 focuses on the shift amounts (Tx, Ty); when a feature amount is not approximately 0 (that is, when it exceeds a predetermined threshold), it can determine that the mounting position or angle of the optical system is incorrect, or that the left and right optical systems are mounted in reverse. In particular, in the case of a stereo optical system, the shift amount Ty is a feature amount representing the parallax in the optical system 3, so this determination result is an important guideline when evaluating a stereo optical system.
- The optical system state determination unit 123 pays attention to the projection coefficients; when a feature amount is not approximately 0 (that is, when it exceeds a predetermined threshold), it can determine that the optical system is tilted or that the display has fallen over.
- The optical system state determination unit 123 focuses on the standard deviation (σ) of the conversion error; when the feature amount is not approximately 0 (that is, when it exceeds a predetermined threshold), it can determine that the optical system is distorted.
- In this manner, the optical system state determination unit 123 can determine the state of the optical system 3 based on the optical system feature amounts that are sequentially calculated.
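- A minimal sketch of this threshold comparison, assuming the feature dictionary produced by the earlier `affine_features` sketch plus the residual `sigma`; the feature names and threshold values are illustrative assumptions, not values taken from the patent.

```python
def judge_optical_state(features, thresholds):
    """Compare each optical-system feature amount with a preset threshold and
    flag the corresponding condition (cf. the determination table of FIG. 8)."""
    return {
        "magnification_mismatch": (abs(features["Sx"] - 1.0) > thresholds["scale"]
                                   or abs(features["Sy"] - 1.0) > thresholds["scale"]),
        "rotated":   abs(features["theta"]) > thresholds["angle"],
        "skewed":    abs(features["phi"]) > thresholds["angle"],
        "shifted":   (abs(features["Tx"]) > thresholds["shift"]
                      or abs(features["Ty"]) > thresholds["shift"]),
        "distorted": features["sigma"] > thresholds["residual"],
    }

# Example call with purely illustrative thresholds:
# judge_optical_state({**affine_features(A),
#                      "sigma": conversion_error_std(A, pts_a, pts_b)},
#                     {"scale": 0.02, "angle": 0.01, "shift": 2.0, "residual": 1.5})
```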
- each component described above may be configured using a general-purpose member or circuit, or may be configured by hardware specialized for the function of each component.
- the CPU or the like may perform all functions of each component. Therefore, it is possible to appropriately change the configuration to be used according to the technical level at the time of carrying out the present embodiment.
- a computer program for realizing each function of the information processing apparatus according to the present embodiment as described above can be produced and installed in a personal computer or the like.
- a computer-readable recording medium storing such a computer program can be provided.
- the recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like.
- the above computer program may be distributed via a network, for example, without using a recording medium.
- FIG. 11 is a flowchart illustrating an example of the flow of the information processing method according to the present embodiment.
- a random pattern image is generated by the random pattern image generation unit 113 by the method described above (step S101). Thereafter, the generated random pattern image is output to the optical device 1 via the data output unit 115 (step S103).
- The imaging unit 101 captures the random pattern image output from the optical device 1 under the control of the imaging control unit 111 (step S105) to generate imaging data.
- the imaging data generated by the imaging unit 101 is transmitted to the corresponding position specifying unit 119 via the data acquisition unit 117.
- The corresponding position specifying unit 119 refers to the transmitted imaging data and, using the optical-flow-based method described above, specifies the corresponding positions of the points of interest between the two captured images (step S107). Thereafter, the corresponding position specifying unit 119 outputs information representing the result of specifying the corresponding positions of the points to the feature amount calculation unit 121.
- The feature amount calculation unit 121 uses the information on the corresponding positions of the points obtained by the corresponding position specifying unit 119 to specify the transformation matrix between the two captured images (step S109). After that, the feature amount calculation unit 121 calculates the various optical system feature amounts shown in FIG. 8 based on the specified transformation matrix (step S111). When the feature amount calculation unit 121 has calculated the optical system feature amounts, it outputs information on the calculated optical system feature amounts to the optical system state determination unit 123.
- the optical system state determination unit 123 determines the state of the focused optical system 3 based on the optical system feature amount calculated by the feature amount calculation unit 121 (step S113). As a result, in the information processing method according to the present embodiment, it is possible to calculate a feature amount representing a difference in the optical system 3 and to determine the state of the optical system 3 based on the feature amount.
- FIGS. 12 and 13 are block diagrams illustrating an example of the configuration of the electro-optical device according to the present embodiment.
- By using the optical system feature amounts calculated by the information processing apparatus 10 according to the present embodiment, the optical system state determination result, and the like, the following electro-optical device can be realized.
- the electro-optical device 20 mainly includes a predetermined optical system 201, an adjustment unit 203, and a storage unit 205.
- the optical system 201 is a mechanism for outputting various images, such as still images and moving images, held or acquired by the electro-optical device 20 to the outside of the electro-optical device 20.
- the optical system 201 is configured using various optical elements such as a lens, a mirror, and a filter.
- the type of the optical system 201 included in the electro-optical device 20 is not particularly limited, and examples thereof include known optical systems such as a zoom optical system, a stereo optical system, and an optical system that generates a stereo image. Can do.
- the electro-optical device 20 functions as, for example, a stereoscopic display device such as a camera, a camcorder, a telescope, a microscope, binoculars, and a head-mounted display.
- Various drive mechanisms (not shown) are provided in order to change the state (optical state) of the optical system 201 by changing the installation position, installation angle, and the like of the various optical elements constituting the optical system 201.
- Such a drive mechanism is not particularly limited, and examples thereof include a voice coil motor and a piezo element.
- the adjustment unit 203 is realized by a CPU, a ROM, a RAM, a communication device, and the like.
- The adjustment unit 203 acquires feature amount information based on the optical system feature amounts that the information processing apparatus 10 according to the present embodiment generates by capturing the random pattern image output from the optical system 201, and adjusts the optical system 201 based on the acquired feature amount information.
- The feature amount information that the adjustment unit 203 acquires from the information processing apparatus 10 includes not only the optical system feature amounts generated by the information processing apparatus 10 but also information indicating the determination result of the state of the optical system 201 obtained based on those feature amounts.
- the adjustment unit 203 adjusts the state of the optical system 201 to an appropriate state by controlling the drive mechanism provided in the optical system 201 based on the feature amount information acquired from the information processing apparatus 10.
- The control method of the drive mechanism is not particularly limited; feedback control according to the acquired feature amount information may be performed, or the drive mechanism may be controlled by referring to a database, stored in the storage unit 205 or the like described later, that associates optical system feature amounts with drive mechanism control methods.
- Since the adjustment unit 203 automatically adjusts the optical system 201 based on the feature amount information acquired from the information processing apparatus 10, the state of the optical system 201 in the electro-optical device 20 can always be maintained in an appropriate state.
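- A sketch of such feedback adjustment, under the assumption of a hypothetical actuator interface `device.move(axis, amount)` and proportional gains; none of these names, gains, or the iteration scheme are specified by the patent.

```python
def adjust_optical_system(device, measure_features, gains, tolerance, max_iter=10):
    """Illustrative proportional feedback loop: measure the optical-system
    feature amounts, drive the actuators against the measured error, and
    repeat until the misalignment falls below the tolerance."""
    for _ in range(max_iter):
        feats = measure_features()       # e.g. Tx, Ty, theta from affine_features()
        if all(abs(feats[k]) < tolerance[k] for k in ("Tx", "Ty", "theta")):
            break                        # optical system is within tolerance
        device.move("x", -gains["shift"] * feats["Tx"])
        device.move("y", -gains["shift"] * feats["Ty"])
        device.move("rot", -gains["angle"] * feats["theta"])
```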
- the storage unit 205 is realized by, for example, a RAM or a storage device provided in the electro-optical device 20 according to the present embodiment.
- The storage unit 205 may appropriately record entity data of the various images output from the optical system 201, various databases and programs used for processing in the adjustment unit 203, various parameters that need to be saved during processing, the progress of processing, and the like.
- the storage unit 205 can be freely accessed by each processing unit such as the optical system 201 and the adjustment unit 203 to write and read data.
- In FIG. 12, the aspect in which the feature amount information is acquired from the information processing apparatus 10 provided outside the electro-optical device 20 and the adjustment unit 203 adjusts the optical system 201 has been described; however, the electro-optical device itself may have the function of the information processing apparatus 10.
- The electro-optical device 20 having the function of the information processing apparatus 10 mainly includes an optical system 201, an adjustment unit 203, a storage unit 205, an imaging unit 207, and an arithmetic processing unit 209.
- The optical system 201 and the adjustment unit 203 have the same functions as the optical system 201 and the adjustment unit 203 included in the electro-optical device 20 described with reference to FIG. 12, so detailed description thereof is omitted below.
- The storage unit 205 has both the function of the storage unit 205 in the electro-optical device 20 described with reference to FIG. 12 and the function of the storage unit 105 in the information processing apparatus 10, so detailed description thereof is likewise omitted below.
- the imaging unit 207 and the arithmetic processing unit 209 have the same functions as the imaging unit 101 and the arithmetic processing unit 103 included in the information processing apparatus 10 according to the present embodiment, respectively, and have the same effects. Detailed description is omitted below.
- The random pattern image generated by the arithmetic processing unit 209 is output from the optical system 201, and the output random pattern image is captured by the imaging unit 207, whereby imaging data corresponding to the random pattern image is generated.
- the arithmetic processing unit 209 performs the arithmetic processing as described above based on the generated imaging data, thereby generating information representing the optical system feature amount and the optical system state determination result.
- the adjustment unit 203 can appropriately maintain the state of the optical system 201 by adjusting the optical system 201 based on the feature amount information generated by the arithmetic processing unit 209.
- each component described above may be configured using a general-purpose member or circuit, or may be configured by hardware specialized for the function of each component.
- the CPU or the like may perform all functions of each component. Therefore, it is possible to appropriately change the configuration to be used according to the technical level at the time of carrying out the present embodiment.
- a computer program for realizing each function of the electro-optical device according to the present embodiment as described above can be produced and mounted on a personal computer or the like.
- a computer-readable recording medium storing such a computer program can be provided.
- the recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like.
- the above computer program may be distributed via a network, for example, without using a recording medium.
- FIG. 14 is a block diagram for explaining a hardware configuration of the information processing apparatus 10 and the electro-optical device 20 according to the embodiment of the present disclosure.
- the imaging unit 101 included in the information processing apparatus 10 and the imaging unit 207 that can be included in the electro-optical device 20 are not illustrated.
- The information processing apparatus 10 and the electro-optical device 20 mainly include a CPU 901, a ROM 903, and a RAM 905.
- The information processing apparatus 10 and the electro-optical device 20 further include a host bus 907, a bridge 909, an external bus 911, an interface 913, an input device 915, an output device 917, a storage device 919, a drive 921, a connection port 923, and a communication device 925.
- The CPU 901 functions as an arithmetic processing device and a control device, and controls the entire operation of the information processing apparatus 10 and the electro-optical device 20, or a part thereof, in accordance with various programs recorded in the ROM 903, the RAM 905, the storage device 919, or a removable recording medium 927.
- the ROM 903 stores programs used by the CPU 901, calculation parameters, and the like.
- the RAM 905 primarily stores programs used by the CPU 901, parameters that change as appropriate during execution of the programs, and the like. These are connected to each other by a host bus 907 constituted by an internal bus such as a CPU bus.
- the host bus 907 is connected to an external bus 911 such as a PCI (Peripheral Component Interconnect / Interface) bus via a bridge 909.
- The input device 915 is an operation means operated by the user, such as a mouse, a keyboard, a touch panel, a button, a switch, or a lever. The input device 915 may also be, for example, a remote control means (a so-called remote controller) using infrared rays or other radio waves, or an external connection device 929 such as a mobile phone or a PDA compatible with the operation of the information processing apparatus 10 and the electro-optical device 20.
- the input device 915 includes an input control circuit that generates an input signal based on information input by a user using the above-described operation means and outputs the input signal to the CPU 901, for example.
- by operating the input device 915, a user of the information processing apparatus 10 and the electro-optical device 20 can input various data and instruct processing operations to the information processing apparatus 10 and the electro-optical device 20.
- the output device 917 is a device that can notify the user of acquired information visually or audibly. Examples of such devices include display devices such as CRT display devices, liquid crystal display devices, plasma display devices, EL display devices, and lamps; audio output devices such as speakers and headphones; printer devices; mobile phones; and facsimile machines.
- the output device 917 outputs results obtained by various processes performed by the information processing device 10 and the electro-optical device 20, for example. Specifically, the display device displays results obtained by various processes performed by the information processing apparatus 10 and the electro-optical device 20 as text or images.
- the audio output device converts an audio signal composed of reproduced audio data, acoustic data, and the like into an analog signal and outputs the analog signal.
- the storage device 919 is a data storage device configured as an example of a storage unit of the information processing device 10 and the electro-optical device 20.
- the storage device 919 includes, for example, a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, or a magneto-optical storage device.
- the storage device 919 stores programs executed by the CPU 901, various data, various data acquired from the outside, and the like.
- the drive 921 is a reader / writer for a recording medium, and is built in or externally attached to the information processing apparatus 10 and the electro-optical device 20.
- the drive 921 reads information recorded on a removable recording medium 927 such as a mounted magnetic disk, optical disk, magneto-optical disk, or semiconductor memory, and outputs the information to the RAM 905.
- the drive 921 can also write records to a removable recording medium 927 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
- the removable recording medium 927 is, for example, a DVD medium, an HD-DVD medium, a Blu-ray (registered trademark) medium, or the like.
- the removable recording medium 927 may be a compact flash (registered trademark) (CompactFlash: CF), a flash memory, or an SD memory card (Secure Digital memory card). Further, the removable recording medium 927 may be, for example, an IC card (Integrated Circuit card) on which a non-contact IC chip is mounted, an electronic device, or the like.
- the connection port 923 is a port for directly connecting a device to the information processing apparatus 10 and the electro-optical device 20.
- Examples of the connection port 923 include a USB (Universal Serial Bus) port, an IEEE 1394 port, a SCSI (Small Computer System Interface) port, and the like.
- other examples of the connection port 923 include an RS-232C port, an optical audio terminal, an HDMI (High-Definition Multimedia Interface) port, and the like.
- the communication device 925 is a communication interface configured with, for example, a communication device for connecting to the communication network 931.
- the communication device 925 is, for example, a communication card for a wired or wireless LAN (Local Area Network), Bluetooth (registered trademark), or WUSB (Wireless USB).
- the communication device 925 may be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), or a modem for various communication.
- the communication device 925 can transmit and receive signals and the like according to a predetermined protocol such as TCP / IP, for example, with the Internet or other communication devices.
- the communication network 931 connected to the communication device 925 is configured by a wired or wireless network, and may be, for example, the Internet, a home LAN, infrared communication, radio wave communication, satellite communication, or the like.
- each component described above may be configured using a general-purpose member, or may be configured by hardware specialized for the function of each component. Therefore, it is possible to change the hardware configuration to be used as appropriate according to the technical level at the time of carrying out this embodiment.
- (1) An information processing apparatus comprising: an imaging unit that captures a plurality of random pattern images output from an optical device having a predetermined optical system in situations where the state of the optical system is different, and generates a plurality of pieces of imaging data respectively corresponding to the states of the optical system; a corresponding position specifying unit that specifies positions corresponding to each other between the plurality of captured images by using the plurality of pieces of imaging data captured by the imaging unit and calculating the optical flow between the plurality of captured images at at least three or more arbitrary points in the captured images corresponding to the imaging data; and a feature amount calculation unit that calculates a feature amount representing a difference in the optical system between the plurality of captured images of interest based on the specification result obtained by the corresponding position specifying unit.
- (2) The information processing apparatus further comprising a random pattern image generation unit that generates the random pattern image used in the optical device based on an M-sequence signal, wherein the random pattern image generation unit combines a plurality of random pattern images in which the size of dots constituting the random
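- Item (2) above is cut off mid-sentence in this text, but it describes generating the random pattern image from an M-sequence signal and, on one plausible reading, combining patterns whose dot sizes differ. The following is a minimal illustrative sketch only, not the claimed implementation: the register length, taps, seed, and dot sizes are assumptions.

```python
import numpy as np

def m_sequence(length, taps=(16, 14, 13, 11), seed=0xACE1):
    """Generate `length` bits of an M-sequence from a 16-bit Fibonacci LFSR."""
    state = seed & 0xFFFF
    bits = np.empty(length, dtype=np.uint8)
    for i in range(length):
        bits[i] = state & 1
        feedback = 0
        for t in taps:                      # XOR of the tapped register bits
            feedback ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (feedback << 15)
    return bits

def random_dot_pattern(height, width, dot_size):
    """Tile M-sequence bits into a binary image, one bit per dot_size x dot_size dot."""
    rows, cols = height // dot_size, width // dot_size
    bits = m_sequence(rows * cols).reshape(rows, cols)
    return np.kron(bits, np.ones((dot_size, dot_size), dtype=np.uint8)) * 255

# Combine two patterns with different dot sizes, as item (2) appears to suggest.
pattern = np.maximum(random_dot_pattern(480, 640, 4), random_dot_pattern(480, 640, 8))
```

- Because the M-sequence is pseudo-random yet fully reproducible from its seed, the same pattern can be regenerated on the display side and on the evaluation side.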
- (3) The information processing apparatus according to (1) or (2), wherein the feature amount calculation unit uses the specification result to calculate a transformation matrix representing a geometrical correspondence between a first image corresponding to first imaging data and a second image corresponding to second imaging data, and uses the calculated transformation matrix to calculate the feature amount representing a deviation of the optical system, a change in image size caused by the optical system, a tilt of the optical system, and a distortion of the optical system.
- (4) The information processing apparatus, wherein the feature amount calculation unit calculates, as the transformation matrix, an affine transformation matrix between the first image and the second image, and calculates, as the feature amount, at least a size change amount, a position shift amount, a rotation angle, and a skew angle between the first image and the second image by using the calculated affine transformation matrix.
- (5) The information processing apparatus according to (3), wherein the feature amount calculation unit calculates, as the transformation matrix, a projective transformation matrix between the first image and the second image, and calculates, as the feature amount, at least a size change amount, a position shift amount, a rotation angle, a skew angle, and a projective transformation coefficient between the first image and the second image by using the calculated projective transformation matrix.
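- A minimal sketch of how items (3) to (5) could be realized, assuming matched point pairs between the two captured images are already available. `estimateAffine2D` and `findHomography` are standard OpenCV functions, but the decomposition convention used here (rotation · skew · scale for the linear part) is an illustrative assumption, not a statement of the method described in the specification.

```python
import numpy as np
import cv2

def affine_feature_amounts(pts1, pts2):
    """pts1, pts2: (N, 2) float32 arrays of corresponding points, N >= 3."""
    A, _ = cv2.estimateAffine2D(pts1, pts2)      # 2x3 affine matrix [M | t]
    M, t = A[:, :2], A[:, 2]
    theta = np.arctan2(M[1, 0], M[0, 0])         # rotation angle
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    U = R.T @ M                                  # upper-triangular: skew * scale
    sx, sy = U[0, 0], U[1, 1]                    # size change amounts
    skew = np.arctan2(U[0, 1], U[1, 1])          # skew angle
    return {"shift": t, "rotation": theta, "scale": (sx, sy), "skew": skew}

def projective_feature_amounts(pts1, pts2):
    """Adds the projective transformation coefficients (needs N >= 4 points)."""
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC)
    feats = affine_feature_amounts(pts1, pts2)
    feats["projection"] = (H[2, 0], H[2, 1])     # perspective terms of the 3x3 matrix
    return feats
```

- In this sketch, the shift, rotation, scale, and skew values play the role of the feature amounts of items (4) and (5); the two bottom-row homography entries stand in for the projective transformation coefficients.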
- (6) The information processing apparatus, wherein the affine transformation matrix is a transformation matrix obtained when an enlargement/reduction operation, a skew operation, a rotation operation, and a translation operation are performed in this order.
- (7) The information processing apparatus, wherein the projective transformation matrix is a transformation matrix obtained when an enlargement/reduction operation, a skew operation, a rotation operation, a translation operation, and a projection operation are performed in this order, or a transformation matrix obtained when an enlargement/reduction operation, a skew operation, a rotation operation, a projection operation, and a translation operation are performed in this order.
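- As a worked illustration of the composition order recited in items (6) and (7), assuming homogeneous coordinates with column vectors (so the operation applied first appears rightmost), the affine transformation matrix can be written as the product below; the symbols s_x, s_y, φ, θ, t_x, t_y are illustrative names for the scale factors, skew angle, rotation angle, and translation components.

```latex
A \;=\;
\underbrace{\begin{pmatrix}1 & 0 & t_x\\ 0 & 1 & t_y\\ 0 & 0 & 1\end{pmatrix}}_{\text{translation}}
\underbrace{\begin{pmatrix}\cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1\end{pmatrix}}_{\text{rotation}}
\underbrace{\begin{pmatrix}1 & \tan\phi & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}}_{\text{skew}}
\underbrace{\begin{pmatrix}s_x & 0 & 0\\ 0 & s_y & 0\\ 0 & 0 & 1\end{pmatrix}}_{\text{scale}}
```

- For the projective case of item (7), under the same convention the product additionally gains a projection matrix whose bottom row is (h31, h32, 1), placed either leftmost (projection performed last) or between the translation and rotation factors (projection performed before the translation).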
- (8) The information processing apparatus according to any one of (1) to (7), further comprising an optical system state determination unit that determines the state of the optical system based on the feature amount calculated by the feature amount calculation unit.
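- A minimal sketch of the optical system state determination in item (8), reusing the feature dictionary from the earlier affine sketch; the tolerance values and the two returned labels are hypothetical, chosen only to illustrate a threshold-based judgment.

```python
import numpy as np

def judge_optical_system(features, max_shift_px=2.0, max_rotation_rad=0.01,
                         max_scale_error=0.02, max_skew_rad=0.01):
    """Compare calculated feature amounts against illustrative tolerances."""
    sx, sy = features["scale"]
    within = (np.linalg.norm(features["shift"]) <= max_shift_px
              and abs(features["rotation"]) <= max_rotation_rad
              and abs(sx - 1.0) <= max_scale_error
              and abs(sy - 1.0) <= max_scale_error
              and abs(features["skew"]) <= max_skew_rad)
    return "within tolerance" if within else "adjustment needed"
```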
- (9) The information processing apparatus, wherein the optical apparatus is an optical apparatus having a zoom optical system, an optical apparatus having a stereo optical system, or an optical apparatus that generates a stereo image.
- (10) The information processing apparatus, wherein the optical device having the zoom optical system is a camera, a camcorder, or a telescope.
- (11) The information processing apparatus, wherein the optical apparatus having the stereo optical system is a microscope or binoculars.
- (12) The information processing apparatus, wherein the optical device that generates the stereo image is a head-mounted display.
- (13) An information processing method including: capturing a plurality of random pattern images output from an optical device having a predetermined optical system in situations where the state of the optical system is different, and generating a plurality of pieces of imaging data respectively corresponding to the states of the optical system; specifying positions corresponding to each other between the plurality of captured images by using the plurality of pieces of imaging data and calculating an optical flow between the plurality of captured images at at least three or more arbitrary points in the captured images corresponding to the imaging data; and calculating a feature amount representing a difference in the optical system between the plurality of captured images of interest based on the results of specifying the positions corresponding to each other.
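- The corresponding-position step of the method in item (13) could be sketched as follows, assuming the two captured images are already available as grayscale arrays. `goodFeaturesToTrack` and `calcOpticalFlowPyrLK` are standard OpenCV calls; using them here, together with the at-least-three-points requirement, is an illustrative choice rather than a description of the actual apparatus.

```python
import cv2

def corresponding_positions(img1, img2, max_points=200):
    """Track sample points from img1 into img2 by sparse optical flow."""
    pts1 = cv2.goodFeaturesToTrack(img1, maxCorners=max_points,
                                   qualityLevel=0.01, minDistance=8)
    pts2, status, _ = cv2.calcOpticalFlowPyrLK(img1, img2, pts1, None)
    ok = status.ravel() == 1
    pts1, pts2 = pts1[ok].reshape(-1, 2), pts2[ok].reshape(-1, 2)
    if len(pts1) < 3:   # at least three correspondences are needed downstream
        raise RuntimeError("too few points tracked between the captured images")
    return pts1, pts2
```

- The returned point pairs can then be fed to the affine or projective estimation sketched earlier to obtain the feature amounts of the method's final step.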
- (14) A program for causing a computer including an imaging unit that captures a plurality of random pattern images output from an optical device having a predetermined optical system in situations where the state of the optical system is different and generates a plurality of pieces of imaging data respectively corresponding to the states of the optical system, to realize: a corresponding position specifying function for specifying positions corresponding to each other between the captured images; and a feature amount calculation function for calculating a feature amount representing a difference in the optical system between the plurality of captured images of interest based on the specification result obtained by the corresponding position specifying function.
- (15) An optical system for displaying a predetermined image on a predetermined display screen; an imaging unit that captures a plurality of random pattern images output from an optical device having a predetermined optical system in situations where the state of the optical system is different, and generates a plurality of pieces of imaging data respectively corresponding to
Abstract
The problem addressed by the invention is to quantitatively and rapidly measure a change in optical state that may occur in an optical system formed in an optical device. The solution according to the invention is an information processing apparatus that comprises: an image capture unit that captures a plurality of random pattern images, output from an optical device having a given optical system, in respective situations in which the optical system is in different states, thereby generating a plurality of pieces of captured-image data corresponding to the respective states of the optical system; a corresponding position determination unit that uses the plurality of pieces of captured-image data generated by the image capture unit to calculate an optical flow among the plurality of captured images with respect to at least three arbitrary points in the captured images corresponding to the plurality of pieces of captured-image data, thereby determining positions corresponding to one another among the plurality of captured images; and a feature quantity calculation unit that calculates, on the basis of a determination result obtained by the corresponding position determination unit, a feature quantity representing the difference in the optical system among the plurality of captured images of interest.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2014166437 | 2014-08-19 | | |
| JP2014-166437 | 2014-08-19 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2016027524A1 (fr) | 2016-02-25 |
Family
ID=55350478
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2015/063387 (WO2016027524A1 (fr), ceased) | | 2014-08-19 | 2015-05-08 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2016027524A1 (fr) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2001194126A (ja) * | 2000-01-14 | 2001-07-19 | Sony Corp | Three-dimensional shape measuring apparatus, three-dimensional shape measuring method, and program providing medium |
| JP2009182879A (ja) * | 2008-01-31 | 2009-08-13 | Konica Minolta Holdings Inc | Calibration apparatus and calibration method |
| WO2009141998A1 (fr) * | 2008-05-19 | 2009-11-26 | Panasonic Corporation | Calibration method, calibration device, and calibration system including the device |
| JP2012058188A (ja) * | 2010-09-13 | 2012-03-22 | Ricoh Co Ltd | Calibration device, distance measurement system, calibration method, and calibration program |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15833221; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | NENP | Non-entry into the national phase | Ref country code: JP |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 15833221; Country of ref document: EP; Kind code of ref document: A1 |