WO2010098160A1 - Object motion estimation device, object motion estimation method, program, and recording medium - Google Patents
Object motion estimation device, object motion estimation method, program, and recording medium
- Publication number
- WO2010098160A1 (PCT/JP2010/050924)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- optical flow
- equation
- image
- captured image
- luminance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/269—Analysis of motion using gradient-based methods
Definitions
- the present invention relates to an object motion estimation device, an object motion estimation method, a program, and a recording medium, and more particularly to an object motion estimation device that estimates the motion of an object by performing image analysis processing on a captured image obtained by imaging the object.
- the optical flow is an apparent velocity distribution on the image generated by the relative motion between the observer and the object.
- the optical flow is usually expressed as an apparent velocity vector at each point on the image.
- This linear equation gives the essential principle of optical flow estimation.
- the luminance Hessian matrix (3) that appears here is closely related to the corners of the object in the image, and its determinant det(H), which governs how easily the linear equation can be solved, is also used as a reliability measure for the estimated optical flow.
- one direction of optical flow research reduces the effect of noise in the spatial differentiation by using a spatial spread (Lucas et al. (see Non-Patent Document 4), etc.), but the estimation breaks down where the regularity of the Hessian matrix is lost.
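The window-pooling idea behind such spatial-spread methods can be sketched as follows; this is a hedged illustration with synthetic gradients (the variable names and data are assumptions, not the patent's notation):

```python
import numpy as np

# Pool the brightness-constancy constraint Ix*u + Iy*v + It = 0 over a small
# window and solve the overdetermined system for [u, v] by least squares,
# in the spirit of the Lucas-type method mentioned above.
rng = np.random.default_rng(0)
true_uv = np.array([1.5, -0.5])                # assumed ground-truth flow

n = 25                                          # e.g. a 5x5 window
Ix = rng.normal(size=n)                         # synthetic spatial gradients
Iy = rng.normal(size=n)
It = -(Ix * true_uv[0] + Iy * true_uv[1])       # noise-free temporal gradient

A = np.column_stack([Ix, Iy])                   # det(A.T @ A) plays the role of
uv, *_ = np.linalg.lstsq(A, -It, rcond=None)    # the solvability measure above
print(uv)                                       # recovers [1.5, -0.5]
```

When every gradient in the window points the same way, `A.T @ A` becomes singular — the loss of regularity the text refers to.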
- a problem of the conventional gradient method is that it assumes the optical flow to be translational (that is, that its spatial gradient is zero).
- this translational assumption is an assumption about the optical flow, which is itself the unknown to be estimated, and it is important to relax it as much as possible.
- an object of the present invention is to overcome the problems in the conventional gradient method and provide an object motion estimation apparatus and the like suitable for optical flow estimation by image analysis processing on captured images.
- the invention according to claim 1 is an object motion estimation apparatus that estimates the motion of an object by performing image analysis processing on a captured image obtained by imaging the object with an imaging means, for the optical flow, which is the velocity distribution of the relative motion between the imaging means and the object.
- time invariance is assumed in the sense that the first-order total derivative with respect to time of the luminance of the object in the captured image is zero, while the translational property that the object moves on a plane perpendicular to the imaging axis of the imaging means is not assumed.
- the apparatus includes image analysis processing means that estimates the optical flow using the luminance I of the pixels of the captured image as the observation amount and simultaneously estimates the spatial gradient of the estimated optical flow.
- the invention according to claim 2 is the object motion estimation device according to claim 1, wherein the captured images are two-dimensional images of at least 3 pixels × 3 pixels, are three images with different imaging times, and are stored in the captured image storage means, and the image analysis processing means includes luminance gradient calculation means for calculating a luminance gradient at a point of interest of the captured image, time gradient calculation means for calculating a time gradient from the luminance gradient, and equation calculation means for solving simultaneous equations having the optical flow and the spatial gradient of the optical flow as variables and the time gradient as coefficients.
- the invention according to claim 3 is the object motion estimation device according to claim 1 or 2, wherein the image analysis processing means estimates the optical flow [u v] T simultaneously with its spatial gradient by solving the linear equation (eq2), with the luminance I at the point of interest P(x 0 , y 0 ) on the captured image, represented by equation (eq1), as the observation amount z(t).
- the invention according to claim 4 is the object motion estimation apparatus according to claim 1, wherein the image analysis processing means includes state estimation processing means that estimates state variables for an estimation model including an observation equation for state variables including the optical flow and the spatial gradient of the optical flow, and the observation equation contains no spatial differentiation because the luminance in the spatial vicinity of the point of interest on the captured image is used as the observation amount.
- the invention according to claim 5 is the object motion estimation apparatus according to claim 4, wherein the observation equation takes system noise and observation noise into account, and the state variables are sequentially estimated using a Kalman filter that approximates the covariance of the estimated value.
- the invention according to claim 6 is an object motion estimation method comprising a step in which the imaging means images the object at least three times at different imaging times and stores the obtained captured images in the captured image storage means, and a step in which the image analysis processing means performs image analysis processing on the captured images to estimate, for the optical flow, which is the velocity distribution of the relative motion between the imaging means and the object, the optical flow and simultaneously its spatial gradient.
- the invention according to claim 7 is a program for causing a computer to operate as the object motion estimation device according to any one of claims 1 to 5.
- the invention according to claim 8 is a computer-readable recording medium on which the program according to claim 7 is recorded.
- the linear equation (eq2) is an overconstrained linear equation.
- the solution of the linear equation (eq2) is obtained by an estimation method such as a least square method.
- the present invention performs processing that makes use of physical properties of the captured image acquired as data, without assuming the optical flow to be translational; it is therefore possible to estimate the optical flow without requiring regularity of the Hessian matrix of the image luminance.
- in conventional methods that handle motions such as simple enlargement or rotation, prior information about the motion must be known in advance (see Non-Patent Document 5); that is, prior information about the optical flow is required in addition to the luminance of each point of the captured image. According to the present invention, the optical flow can be estimated without requiring such prior information.
- according to the present invention, it is possible to estimate the spatial gradient simultaneously with the optical flow.
- the spatial gradient of the optical flow contains information on the rotation and divergence of the vector field and is a very important quantity for restoring the motion of the object in the three-dimensional space.
- the present invention makes it possible to estimate an important quantity called a spatial gradient at the same time as an optical flow without requiring an assumption in advance.
- since the present invention estimates (and corrects for) the spatial gradient at the same time, the accuracy is improved in principle.
- the rotation and enlargement of the image can be detected directly, and their influence on the optical flow is corrected, so the accuracy of the optical flow under rotation and enlargement improves. Furthermore, because rotation and enlargement are handled, the number of images for which the optical flow can be calculated increases.
- according to the present invention, the optical flow can be estimated from information in the vicinity of the point of interest alone, in other words, from only the luminance of the point of interest and its spatio-temporal gradient.
- the conventional gradient method has a problem in implementation.
- the conventional gradient method requires explicit calculation of the spatio-temporal derivatives of the luminance, and difference calculations are unavoidable to obtain them from the image information. These difference calculations are vulnerable to disturbances and greatly affect the reliability of optical flow estimation. Furthermore, regularity of the Hessian matrix is required to solve the linear simultaneous equations built from this difference information, and here too, degradation of the estimate by the inverse matrix calculation is a problem.
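The explicit difference calculation criticized here can be illustrated with central differences on a synthetic luminance stack whose gradients are known (an illustration, not the patent's procedure):

```python
import numpy as np

# Build a toy luminance stack I[t, y, x] with known constant gradients and
# approximate its spatio-temporal derivatives by central differences.
t = np.arange(3, dtype=float)
y = np.arange(8, dtype=float)
x = np.arange(8, dtype=float)
T, Y, X = np.meshgrid(t, y, x, indexing="ij")
I = 2.0 * X + 3.0 * Y - 1.0 * T                 # dI/dx=2, dI/dy=3, dI/dt=-1

It, Iy, Ix = np.gradient(I)                     # differences along t, y, x
print(Ix[1, 4, 4], Iy[1, 4, 4], It[1, 4, 4])    # 2.0 3.0 -1.0
```

Any pixel-level noise enters these differences directly, which is the vulnerability to disturbances noted above.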
- FIG. 1 is a schematic block diagram of the object motion estimation apparatus 1, which is an embodiment of the present invention. FIG. 2 is a flowchart showing an example of the image processing by the object motion estimation apparatus 1 of FIG. 1. FIG. 3 is a diagram showing the result of instantaneous estimation in Example 2. FIG. 4 is a diagram showing the result of estimation using a Kalman filter in Example 2.
- FIG. 1 is a schematic block diagram of an object motion estimation apparatus 1 that is an embodiment of the present invention.
- the object motion estimation apparatus 1 is, for example, a drive recorder.
- the object motion estimation device 1 includes an imaging unit 3 that captures images, a captured image storage unit 5 that stores the captured images captured by the imaging unit 3, a quantization unit 7 that quantizes the time and/or luminance at each point of the captured image to produce a measurement image, a measurement image storage unit 9 that stores the measurement image, a smoothing unit 11 that smooths the measurement image to obtain a smoothed image, a smoothed image storage unit 13 that stores the smoothed image, an image analysis processing unit 15 that estimates the optical flow based on the smoothed image, and an optical flow storage unit 17 that stores the estimated optical flow.
- the imaging unit 3 is an in-vehicle camera, for example. Since the in-vehicle camera is basically directed in the traveling direction, movement along the optical axis is inevitable. Further, in a wide-angle image such as a drive recorder, enlargement due to image distortion is inevitable. Therefore, a large error is included in the captured image.
- the quantization unit 7 performs quantization by rounding off the fractional part of the luminance at points (x, y) with integer x and y at integer times t, and uses the result as the measurement image. The processing of the quantization unit 7 adds a quantization error.
- the smoothing unit 11 performs smoothing using, for example, a Gaussian filter at each time of the measurement image to obtain a smoothed image.
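A minimal sketch of such a smoothing step, assuming a small separable Gaussian kernel (the kernel size and σ are assumptions; the patent does not fix them):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian kernel."""
    u = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-u**2 / (2.0 * sigma**2))
    return k / k.sum()

def smooth_frame(frame, sigma=1.0, radius=2):
    """Separable Gaussian smoothing of one measurement-image frame."""
    k = gaussian_kernel1d(sigma, radius)
    pad = np.pad(frame, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

frame = np.zeros((9, 9))
frame[4, 4] = 1.0                    # unit impulse
sm = smooth_frame(frame)
print(sm.shape, round(sm.sum(), 6))  # (9, 9) 1.0 — kernel is normalized
```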
- the image analysis processing unit 15 estimates the optical flow using the luminance information of the smoothed image, and includes:
- a luminance gradient computing unit 21 that computes a luminance gradient at a point of interest in a captured image
- a time gradient computing unit 23 that computes a time gradient from the luminance gradient
- an equation calculation unit 25 that solves the simultaneous equations
- a state estimation processing unit 27 that sequentially estimates an optical flow and its spatial gradient using a filter with luminance information in the vicinity of the point of interest of the captured image as an observation amount.
- the luminance at the position (x, y) on the image at time t is I(x, y, t), and it is assumed that I(x, y, t) can be differentiated as many times as necessary.
- the optical flow is represented by [u (x, y, t) v (x, y, t)] T , which is also assumed to be differentiable as many times as necessary.
- the luminance invariant constraint equation (1) can be interpreted, as in equation (4), as stating that the gradient of the luminance I(x, y, t) in the [u v 1] T direction in space-time is zero.
- Equation (5) is a condition that approximates the luminance, optical flow, and spatial differentiation thereof at neighboring points with minute amounts ⁇ x, ⁇ y, and ⁇ t.
- the generalized optical flow identity of equation (6) is a condition in which the conditional expression for invariance of luminance in the vicinity of the point of interest on the image is Taylor-expanded and truncated to the first order term.
- the conventional luminance invariant constraint equation (4) corresponds to the 0th-order term of the Taylor expansion.
- the condition of equation (6) can also be derived by partially differentiating the conventional optical flow equation (4) with respect to x, y, and t, and the luminance invariance conditions corresponding to higher-order terms of the Taylor expansion can be derived just as easily.
- Expressions (7a) to (7f) are conditions corresponding to the second-order terms used in this embodiment.
- the constraint condition of the optical flow equation is described as a first-order nonlinear simultaneous differential equation and handled as a dynamic system. Then, using the concept of observability of the state of the dynamic system, the estimation principle and estimable conditions for simultaneously estimating the optical flow and its spatial gradient are clarified.
- the constraint equations describing the system should be selected according to the order of the spatial gradients of the luminance used for estimation and the spatio-temporal gradients of the optical flow to be estimated. Here, the optical flow of equation (9) and its spatial gradient m are therefore estimated using the spatial gradients of the luminance of equation (8) up to second order.
- the state q(t) is chosen as equation (10), using the luminance, the optical flow, and their spatial gradients at a point P(x 0 , y 0 ) on the image. Then, by choosing equations (4), (6a), (6b), (7a), (7b), and (7c) as the constraint equations that describe the time derivatives of these variables, the optical flow estimation model of equations (11a) to (11d) is obtained.
- E m ⁇ n and O m ⁇ n represent an m ⁇ n unit matrix and a zero matrix, respectively.
- the function arguments x 0 , y 0 , and t are omitted as appropriate.
- System observability is the property that the state of a system can be uniquely determined by observing its output over a finite time interval. Specifically, consider the dynamic system represented by the nonlinear differential equation (12), where the state q(t) ∈ R n and the output z(t) ∈ R m . Observability is then defined with reference to Non-Patent Document 3, and Lemma 1 is established.
- Lemma 1 The following propositions (i), (ii) and (iii) are equivalent.
- (i) A given system (12) is observable in state q 0 .
- (ii) At q(t) = q 0 (in the sense of real vectors), equation (13) holds. (iii) At q(t) = q 0 , equation (14) holds.
- Lie differentiation is given by equation (15), and the differential form of Lie differentiation is given by equation (16).
- the specific state q can be determined by solving the nonlinear simultaneous algebraic equation (17) for q.
- the state q (t) of Expression (10) can be uniquely determined from the output z (t).
- since q(t) includes the optical flow [u v] T , observability means that the optical flow can be uniquely determined.
- a partial matrix corresponding to the seventh column and subsequent columns of the matrix A is represented by (A) 2 .
- when equation (16) is calculated for the system of equation (11), equation (19) is obtained.
- N, J, and Jt are formulas (20), (21), and (22), respectively.
- when equation (23) holds, it becomes equation (24); the system is then observable in the general state q(t), and the optical flow and its spatial gradient m can be estimated simultaneously.
- a sufficient condition for satisfying equation (24) and observability of the system of equation (11) is that either of the following two conditions (i) or (ii) is satisfied.
- the luminance Hessian matrix H satisfies the equation (26) and further satisfies the incidental condition of any one of the equations (27a), (27b), and (27c) (where ⁇ is an arbitrary constant).
- the first-order spatial gradient of the luminance and its time derivative satisfy equation (28). Note that even if the three incidental conditions of (i) of Theorem 1 are not satisfied, the optical flow [u v] T can still be estimated.
- the condition of Equation (26) in Theorem 1 (i) is a condition in which the estimation possibility is determined only from the luminance of the image, and is equivalent to the regularity of the Hessian matrix of luminance known conventionally.
- the condition of equation (28) in (ii), in which the possibility of estimation is determined by the temporal change of the luminance gradient of the image, is a new condition that was not previously known.
- Equation (31) corresponds to (eq2) in the claims.
- the elements of the matrix J defined by equation (21) and of J t defined by equation (22) are all spatial gradients of the luminance and can be determined from the observation amount z(t); equation (31) is therefore a linear equation in the variables to be estimated.
- regarding the image data required for implementation: since second-order spatial gradients of the luminance must be calculated, at least 3 pixels × 3 pixels are needed at each time, and since derivatives of the spatial gradient of the luminance up to second order in time are needed, three or more images are required in the time-series direction.
- the image size required to estimate the optical flow and its spatial gradient according to Equation (25) is at least 3 pixels (x direction) ⁇ 3 pixels (y direction) ⁇ 3 (t direction).
- the luminance at time t and coordinates (x, y) is I(x, y, t), and the optical flow [u v] T and its spatial gradient at the point of interest (x 0 , y 0 ) at time t 0 are to be obtained.
- the imaging unit 3 captures three or more images of at least 3 pixels × 3 pixels at different times to form the captured images (step ST1); the quantization unit 7 quantizes time and/or luminance to obtain measurement images (step ST2); and the smoothing unit 11 smooths the measurement images to obtain smoothed images (step ST3).
- the data used are, among I(x, y, t), the luminances of the 27 pixels with x taken from x 0 -1, x 0 , x 0 +1, y from y 0 -1, y 0 , y 0 +1, and t from t 0 -2, t 0 -1, t 0 .
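That 27-pixel window can be sketched as an array slice; the stack layout I[t, y, x] and the toy values are assumptions for illustration:

```python
import numpy as np

# Toy luminance stack indexed as I[t, y, x]; the point of interest is
# (x0, y0) at time t0, and the window spans t0-2..t0, y0-1..y0+1, x0-1..x0+1.
I = np.random.default_rng(1).random((5, 10, 10))
t0, y0, x0 = 4, 5, 5
block = I[t0 - 2:t0 + 1, y0 - 1:y0 + 2, x0 - 1:x0 + 2]
print(block.shape, block.size)       # (3, 3, 3) 27
```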
- the luminance gradient calculation unit 21 obtains a luminance gradient (step ST4).
- to correct the luminance itself to its most probable value from the obtained data, the data obtained by calculation is written as G to distinguish it from the observation data.
- the time gradient calculation unit 23 obtains a time gradient from this data (step ST5).
- This is Equation (33).
- equations (33a), (33b), (33c), (33g), (33h), and (33i) are taken at t = t 0 -1 and t = t 0 .
- the equation calculation unit 25 obtains the optical flow and its spatial gradient by solving the simultaneous equations of the equation (34) (step ST6).
- the image analysis processing unit 15 stores the obtained optical flow and its spatial gradient in the optical flow storage unit 17 (step ST7).
- using the luminance at the point P(x 0 , y 0 ) on the image, the optical flow, and its spatial gradient, the state q(t) is expressed as equation (36), and the optical flow estimation model is put in the form of equation (37).
- the necessary and sufficient condition for the estimation model (37) to be observable is equation (38).
- Equation (39) is obtained.
- the invention that assumes only one of the time-invariance assumption and the translational assumption is specified as follows: an object motion estimation apparatus that estimates the motion of an object by performing image analysis processing on captured images of the object, comprising image analysis processing means that estimates the optical flow using information obtained by second-order differentiation with respect to time of at least three captured images of the object. Note that information obtained by first-order differentiation is used when both time invariance and translation are assumed, and information obtained by third-order differentiation is used when neither time invariance nor translation is assumed.
- the estimation model becomes unobservable regardless of luminance I in the basic situation where the optical flow is time invariant and simple enlargement / reduction.
- the other estimation models impose conditions on the luminance for observability.
- this estimation model differs greatly in that a condition is imposed on the spatio-temporal gradient of the optical flow regardless of the luminance; in effect, the optical flow cannot be estimated with this model.
- the estimation model that does not assume time invariance has stricter conditions for observability than the model that assumes time invariance, and care must be taken when applying it to real images.
- the instantaneous estimation requires explicit calculation of the spatiotemporal derivative of luminance, and the difference calculation is inevitable in order to obtain such information from the image information. This difference calculation is vulnerable to disturbances and greatly affects the reliability of optical flow estimation.
- the Kalman filter sequentially estimates the state of a system from time-series data of the output of a dynamic system to which noise is added. Although estimation by the Kalman filter sacrifices time resolution, no explicit calculation of time derivatives is required, and no inverse matrix calculation is required when solving the linear equation of equation (31).
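The sequential-estimation idea can be illustrated with a minimal scalar linear Kalman filter (this is a generic textbook sketch, not the patent's unscented, nonlinear model; all values are assumptions):

```python
import numpy as np

# Estimate a constant scalar state from noisy observations: no explicit time
# differentiation, just predict/update with an approximated error covariance.
rng = np.random.default_rng(2)
true_x = 3.0                     # assumed hidden state
r, q = 0.5**2, 1e-6              # observation / system noise variances
xhat, p = 0.0, 10.0              # initial estimate and covariance

for _ in range(200):
    z = true_x + rng.normal(scale=0.5)   # noisy observation
    p = p + q                            # predict (state is ~constant)
    k = p / (p + r)                      # Kalman gain
    xhat += k * (z - xhat)               # update
    p = (1.0 - k) * p
print(xhat)                              # converges close to 3.0
```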
- an estimation model that does not use a spatial gradient will be described.
- a feature of the Kalman filter is that it is not necessary to use time differentiation.
- the observation amount of the proposed estimation model includes spatial derivatives of the luminance, so if these estimation models are used as they are, the calculation of spatial derivatives is unavoidable.
- here, the observation equation of the estimation model is corrected so that the calculation of spatial derivatives is not needed in sequential estimation using the Kalman filter. Specifically, instead of the spatial gradients at the point of interest P(x 0 , y 0 ), the luminance I in the spatial vicinity of P(x 0 , y 0 ) is used as the observation amount: as the new observation amount ~z(t) of equation (40), the 3 × 3 neighborhood luminances of the point of interest P(x 0 , y 0 ) are considered.
- n is the order (or the number of elements) of the state q.
- since the matrix ~H of equation (42b) always has rank 6, z(t) can be restored from ~z(t) by equation (43). This shows that the observability of the system is maintained.
- a Kalman filter can be configured by using the luminance information of neighboring points as an observation amount instead of the spatial gradient.
- the optical flow estimation model is a nonlinear system with strong nonlinearity. We therefore propose performing state estimation, and thus optical flow estimation, with an unscented Kalman filter (see Non-Patent Document 2), which approximates the covariance of the estimated value, rather than an extended Kalman filter, which linearly approximates the nonlinearity.
- the estimation accuracy can be improved by performing the estimation with system disturbance included (see Non-Patent Document 6), and for the discretization in the time direction a higher-order numerical integration formula such as the Runge-Kutta method can be used, which also improves the estimation accuracy. Both are trade-offs against the computation time required for estimation; here a model including the system disturbance is adopted, and a second-order or higher integration formula is used for the numerical integration in consideration of the nonlinearity.
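The classical fourth-order Runge-Kutta step mentioned here can be sketched as follows; the circular-motion state equation is an assumed stand-in for the patent's model (11), chosen so the exact answer is known:

```python
import numpy as np

def rk4_step(f, q, t, h):
    """One classical 4th-order Runge-Kutta step for q' = f(q, t)."""
    k1 = f(q, t)
    k2 = f(q + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(q + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(q + h * k3, t + h)
    return q + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Rotation at unit angular velocity: the exact solution is (cos t, sin t).
f = lambda q, t: np.array([-q[1], q[0]])
q, h = np.array([1.0, 0.0]), 0.01
for i in range(100):
    q = rk4_step(f, q, i * h, h)
print(q)                                 # close to [cos(1), sin(1)]
```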
- the possibility of optical flow estimation based only on the assumption of luminance invariance and local luminance information is shown from the standpoint of observability of a dynamic system.
- the derived constraint equations are the conditions obtained by Taylor-expanding the luminance-invariance condition in the vicinity of the point of interest on the image and requiring the coefficient of each term to be zero.
- an estimation model as a dynamic system was proposed from the standpoint not only of relaxing the translational assumption on the optical flow, but also of simultaneously estimating the optical flow and its spatial gradient. The possibility of estimation was then considered from the standpoint of observability of dynamic systems, and it was shown that the optical flow and its spatial gradient may be estimable even when the luminance Hessian is zero.
- the Kalman filter is only an example; other sequential estimation methods may be used.
- an estimation model based on conditional expressions using higher-order terms of the Taylor expansion may also be derived, and time-varying optical flow may be supported.
- This example shows the effectiveness of the proposed estimation model through a specific example.
- the luminance I (x, y, t) of the original image is given as in equation (45).
- This original image is obtained by rotating an image with a flat luminance spatial gradient (the Hessian matrix is zero) at a constant angular velocity ⁇ ( ⁇ 0) around the point (0,0).
- the optical flow and its spatial gradient at the coordinates (x, y) of the original image are uniquely determined under the assumption of time invariance, and are given by Equation (46).
- the spatio-temporal gradient of the luminance is Equation (47), and the luminance Hessian matrix is a zero matrix, so that the optical flow cannot be estimated by the conventional local estimation method.
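This situation can be checked numerically. Below is a hedged sketch (w, the evaluation point, and the step size are assumed values) of a luminance field that is linear in (x, y) at every instant, so the Hessian entries vanish while the time derivative of the spatial gradient, the quantity in condition (ii) of Theorem 1, does not:

```python
import numpy as np

w = 0.1                                      # assumed angular velocity
I = lambda x, y, t: x * np.cos(w * t) + y * np.sin(w * t)

h, x0, y0, t0 = 0.5, 60.0, 80.0, 5.0
# luminance Hessian entries via central differences: numerically zero
Ixx = (I(x0 + h, y0, t0) - 2 * I(x0, y0, t0) + I(x0 - h, y0, t0)) / h**2
Ixy = (I(x0 + h, y0 + h, t0) - I(x0 + h, y0 - h, t0)
       - I(x0 - h, y0 + h, t0) + I(x0 - h, y0 - h, t0)) / (4 * h**2)
# mixed x-t derivative: the gradient's time derivative is -w*sin(w*t) != 0
Ixt = (I(x0 + h, y0, t0 + h) - I(x0 + h, y0, t0 - h)
       - I(x0 - h, y0, t0 + h) + I(x0 - h, y0, t0 - h)) / (4 * h**2)
print(abs(Ixx) < 1e-9, abs(Ixy) < 1e-9, abs(Ixt) > 1e-3)   # True True True
```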
- equation (51) is the minimizing solution.
- although u(x, y, 0) approaches the true value as the smoothness weight λ → 0, it does not match the true value for any λ > 0. From the above, it can be seen that in this numerical example the optical flow cannot be uniquely determined even with the global method of Horn et al. (1981), and even when it is determined, it does not match the true value.
- the instantaneous estimation equation (31) of the proposed estimation model becomes equation (52) when attention is paid to equation (47c).
- here, noting equation (47c), the trivial equations whose both sides are 0 are excluded and the order of the equations is changed.
- the above equation becomes three sets of two-dimensional linear simultaneous algebraic equations, and is equivalent to the three equations of equation (53) by substituting equations (47a) and (47b).
- the solution is always uniquely determined regardless of t and coincides with the theoretical value of equation (46). This fact shows the superiority of the proposed estimation model over the global method.
- FIG. 3 shows an estimated value of the optical flow [u v] T at coordinates (60, 80) for 400 frames (two rotations).
- a large error is added to the estimated value by the quantization error, and the estimation can hardly be called successful. This is presumably because the original image has very few features, so that even a disturbance on the order of the quantization error greatly affects the inverse matrix calculation.
- a model in which the output equation of the proposed estimation model, equation (11), is corrected according to equation (42) is used.
- estimation is performed on this model, to which noise is applied as in equation (44), using an unscented Kalman filter.
- the noise covariance matrices in Equation (44) are Equation (54) and Equation (55), respectively, and the initial values of the estimated values used in the Kalman filter are all zero.
- the numerical integration of the optical flow model uses the classic Runge-Kutta fourth-order formula, and the unscented Kalman filter model is an estimated model that includes system noise in the state (see Non-Patent Document 6). The experiment was conducted.
- FIG. 4 shows the estimated values of the optical flow [u v] T at the coordinates (60, 80) and of the spatial gradients u x , u y , v x , and v y for 400 frames (two rotations). Although the initial value deviates greatly and convergence takes some time, the estimation is accurate, and the spatial gradient of the optical flow is estimated at the same time. This demonstrates the effectiveness of the proposed estimation model and of the estimation method using the Kalman filter.
- Optical flow is a highly versatile technique that can be used to measure the movement of objects using images.
- the optical flow can be calculated with high accuracy even in such a case.
- general detection of moving bodies is also possible: for example, detecting pedestrians darting out and warning of overtaking vehicles with a vehicle-mounted camera, detecting intruders in a building, or correcting the coordinates of a moving camera.
Abstract
Description
(i) The given system (12) is observable in the state q0.
(ii) At q(t) = q0 (in the sense of real vectors), equation (13) holds.
(iii) At q(t) = q0, equation (14) holds.
Here, the Lie derivative is given by equation (15), and the differential form of the Lie derivative by equation (16).
The specific state q can be determined by solving the nonlinear simultaneous algebraic equations (17) for q.
(i) The luminance Hessian matrix H satisfies equation (26) and further satisfies one of the incidental conditions (27a), (27b), and (27c) (where α is an arbitrary constant).
(ii) The first-order spatial gradient of the luminance and its time derivative satisfy equation (28).
Note that even if the three incidental conditions of (i) of Theorem 1 are not satisfied, the optical flow [u v]T can still be estimated.
Claims (8)
- In an object motion estimation device that estimates the motion of an object by performing image analysis processing on a captured image obtained by an imaging means imaging the object, the device comprising image analysis processing means that, for the optical flow, which is the velocity distribution of the relative motion between the imaging means and the object, assumes time invariance in the sense that the first-order total derivative with respect to time of the luminance of the object in the captured image is zero, does not assume the translational property that the object moves on a plane perpendicular to the imaging axis of the imaging means, and estimates the optical flow using the luminance I of the pixels of the captured image as the observation amount while simultaneously estimating the spatial gradient of the estimated optical flow.
- The object motion estimation device according to claim 1, wherein the captured images are two-dimensional images of at least 3 pixels × 3 pixels, are three images with different imaging times, and are stored in captured image storage means, and the image analysis processing means comprises: luminance gradient calculation means for calculating a luminance gradient at a point of interest of the captured image; time gradient calculation means for calculating a time gradient from the luminance gradient; and equation calculation means for solving simultaneous equations having the optical flow and the spatial gradient of the optical flow as variables and the time gradient as coefficients.
- The object motion estimation device according to claim 1, wherein the image analysis processing means has state estimation processing means that estimates state variables for an estimation model including an observation equation for state variables including the optical flow and the spatial gradient of the optical flow, and the observation equation contains no spatial differentiation because the luminance in the spatial vicinity of a point of interest on the captured image is used as the observation amount.
- The object motion estimation device according to claim 4, wherein the observation equation takes into account system noise and observation noise, and the estimation of the state variables sequentially estimates the state variables using a Kalman filter that approximates the covariance of the estimated value.
- An object motion estimation method that estimates the motion of an object by performing image analysis processing on captured images of the object, comprising: a step in which imaging means images the object at least three times at different imaging times and stores the obtained captured images in captured image storage means; and a step in which image analysis processing means performs image analysis processing on the captured images and, for the optical flow, which is the velocity distribution of the relative motion between the imaging means and the object, estimates the optical flow using the luminance of the pixels of the captured images as the observation amount and information differentiated twice with respect to time, while simultaneously estimating the spatial gradient of the estimated optical flow.
- A program for causing a computer to operate as the object motion estimation device according to any one of claims 1 to 5.
- A computer-readable recording medium on which the program according to claim 7 is recorded.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/202,751 US8805023B2 (en) | 2009-02-24 | 2010-01-25 | Object motion estimating device, object motion estimating method, program, and recording medium |
| JP2011501533A JP5504426B2 (ja) | 2009-02-24 | 2010-01-25 | Object motion estimation device, object motion estimation method, program, and recording medium |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2009-041360 | 2009-02-24 | ||
| JP2009041360 | 2009-02-24 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2010098160A1 true WO2010098160A1 (ja) | 2010-09-02 |
Family
ID=42665370
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2010/050924 Ceased WO2010098160A1 (ja) | 2009-02-24 | 2010-01-25 | 物体運動推定装置、物体運動推定方法、プログラム及び記録媒体 |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US8805023B2 (ja) |
| JP (1) | JP5504426B2 (ja) |
| WO (1) | WO2010098160A1 (ja) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105027550B (zh) * | 2012-11-06 | 2018-04-06 | 阿尔卡特朗讯公司 | 用于处理视觉信息以检测事件的系统和方法 |
| CN110415276B (zh) * | 2019-07-30 | 2022-04-05 | 北京字节跳动网络技术有限公司 | 运动信息计算方法、装置及电子设备 |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2000242797A (ja) * | 1999-02-18 | 2000-09-08 | Toyota Motor Corp | 画像の運動検出方法及び物体検出装置 |
| JP2008203538A (ja) * | 2007-02-20 | 2008-09-04 | National Univ Corp Shizuoka Univ | 画像表示システム |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5991459A (en) * | 1992-01-22 | 1999-11-23 | Eastman Kodak Company | Method of modifying a time-varying image sequence by estimation of velocity vector fields |
| US6671324B2 (en) * | 2001-04-16 | 2003-12-30 | Mitsubishi Electric Research Laboratories, Inc. | Estimating total average distortion in a video with variable frameskip |
| US7558402B2 (en) * | 2003-03-07 | 2009-07-07 | Siemens Medical Solutions Usa, Inc. | System and method for tracking a global shape of an object in motion |
| US7689035B2 (en) * | 2005-06-17 | 2010-03-30 | The Regents Of The University Of California | Methods for identifying, separating and editing reflection components in multi-channel images and videos |
| US7711147B2 (en) * | 2006-07-28 | 2010-05-04 | Honda Motor Co., Ltd. | Time-to-contact estimation device and method for estimating time to contact |
| US8264614B2 (en) * | 2008-01-17 | 2012-09-11 | Sharp Laboratories Of America, Inc. | Systems and methods for video processing based on motion-aligned spatio-temporal steering kernel regression |
| WO2010044963A1 (en) * | 2008-10-15 | 2010-04-22 | Innovative Technology Distributors Llc | Digital processing method and system for determination of optical flow |
2010
- 2010-01-25: WO application PCT/JP2010/050924 filed (patent WO2010098160A1, not active: ceased)
- 2010-01-25: US application 13/202,751 filed (patent US8805023B2, not active: expired, fee related)
- 2010-01-25: JP application JP2011501533A filed (patent JP5504426B2, active)
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113870308A (zh) * | 2021-09-07 | 2021-12-31 | 广西科技大学 | 一种螺旋梯度优化估计的弱小目标检测方法 |
| CN113870308B (zh) * | 2021-09-07 | 2024-03-22 | 广西科技大学 | 一种螺旋梯度优化估计的弱小目标检测方法 |
Also Published As
| Publication number | Publication date |
|---|---|
| US8805023B2 (en) | 2014-08-12 |
| US20110299739A1 (en) | 2011-12-08 |
| JP5504426B2 (ja) | 2014-05-28 |
| JPWO2010098160A1 (ja) | 2012-08-30 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 10746042; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 2011501533; Country of ref document: JP |
| | WWE | Wipo information: entry into national phase | Ref document number: 13202751; Country of ref document: US |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 10746042; Country of ref document: EP; Kind code of ref document: A1 |