WO2004003849A1 - Procede et dispositif de poursuite (Tracking method and apparatus) - Google Patents
Tracking method and apparatus
- Publication number
- WO2004003849A1 (PCT/AU2003/000794)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- features
- hypotheses
- feature
- hypothesis
- series
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/29—Graphical models, e.g. Bayesian networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/62—Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
Definitions
- the present invention relates to a method and apparatus for tracking a rigid body within a sequence of video images. It should be understood that the present invention is applicable to the tracking of any rigid body, however the present invention will be described in the context of tracking the face of a person in a sequence of video images.
- One goal of research into automatic monitoring and detection of rigid bodies within a video image is to allow any object to be placed in front of a camera and for a computer to be able to reliably track the position and orientation of the object.
- the goal of research is to allow any person to sit in front of a camera and for a computer to be able to reliably monitor the orientation of the person's head (the head pose), and the gaze direction of their eyes.
- the existing face tracking methods generally operate by testing whether facial features detected in a sequence of images fit those of a predetermined face model based on an "average" face.
- the existing methods generally experience two shortcomings.
- the reliance on some form of "average face" in the generation of a face model results in people with faces significantly different from the "average face" being unable to use the system.
- the need for calibration of such systems before use makes using these systems slow and cumbersome.
- a method of adaptively creating a tracking model from a series of visual images comprising iteratively performing the steps of: (a) locating a series of new tracked objects within a current image and adding them to the set of previously tracked objects to form a current set of tracked objects; (b) determining a new series of relationships between objects in the current set of tracked objects, and adding them to the set of previous series of relationships to form a current series of relationships; and (c) assessing the members of the current series of relationships between successive visual images and deleting a predetermined number of relationships from the current series of relationships having low assessed merit values.
- the step (b) further includes utilising the distance between objects in the determination of a relationship value and the step (c) further includes as part of the assessment, determining for each member of the current series of relationships, a fitness match with the current image.
- the method can also allow the assessment to include a measure of the distance between objects in a relationship with the greater the distance, the greater the value of assessment of the relationship.
- the method can also include modifying existing relationships by adding further tracked objects to the relationship or modifying existing relationships by altering the expected distance between tracked objects in the relationship based on the distance between corresponding objects in the visual image. Ideally the method allows members of a relationship to be occluded for a predetermined number of frames.
- a method of adaptively creating a tracking model from a series of visual images comprising the steps of: tracking objects within the series of visual images to form a series of tracked objects; determining relationships between the tracked objects to form a series of hypotheses; assessing the validity of the hypotheses over the series of visual images; and applying a selective pressure to cull the objects and the hypotheses between members of the series.
- a method of tracking an object including the steps of: (a) capturing a video sequence of the object comprising a plurality of image frames; (b) detecting a plurality of features within at least an initial image frame of the video sequence; (c) generating one or more hypotheses relating to whether two or more detected features are interconnected to one another by comparing the relative positioning of the two or more features in at least the initial image frame; (d) determining the position of a plurality of features located in subsequent image frames and testing the strength of the hypotheses for the subsequent image frames utilising the determined location of the features.
- the method also comprises the step of: (e) displaying the features with a current frame when the hypotheses satisfy a first predetermined condition; and the step of: (f) determining if a second predetermined condition can be satisfied in relation to the hypotheses and not displaying the features when the second predetermined condition can be satisfied, the first and second conditions having an interrelationship such that a hysteresis display condition can be set up for the display of the features.
- the hypotheses can include that the features are preferably rigidly connected to one another.
- the features can include areas of the image having a high contrast texture.
- the high contrast texture can be derived by forming a covariance matrix from derived images which are preferably derived from a current frame.
- the derived images are preferably formed from orthogonal calculations carried out on the current frame.
- the method can also include the step of discarding features which exhibit only a small amount of motion over an extended sequence of images.
- the hypothesis can include accounting for the disappearance of features for a predetermined number of image frames.
- the expected relative positioning of features within a hypothesis can be adapted to change over time from one sequence to a next sequence.
- the hypotheses are preferably assigned a quality value depending on the features in the hypotheses.
- the features are preferably also assigned a feature quality value.
- the feature quality value can be varied in accordance with the feature's proximity to other features in a hypothesis. Also, the feature quality value can be varied in accordance with the amount of strain a feature produces on a hypothesis.
- the step (d) preferably can include the sub-steps of: assigning a quality weighting to each of the two or more detected features of a hypothesis; and calculating a quality value for each hypothesis based on the quality of its respective detected features.
- step (d) preferably can include the sub-step of selecting one of the hypotheses based on a calculated quality value of the hypothesis.
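By way of illustration only (the claims do not prescribe an implementation), the following minimal Python sketch shows the selective-pressure culling of step (c) together with the per-hypothesis quality value described above; the class name, field names and merit figures are assumptions introduced here, not part of the specification.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A belief that a group of tracked features is rigidly connected."""
    feature_ids: tuple       # identifiers of the features in the relationship
    merit: float = 0.0       # quality value aggregated from its features' weightings

def cull(hypotheses, drop=1):
    """Step (c): assess the current series of relationships and delete a
    predetermined number of those with the lowest assessed merit values."""
    survivors = sorted(hypotheses, key=lambda h: h.merit, reverse=True)
    return survivors[:max(len(survivors) - drop, 0)]

# Example: the weakest relationship is deleted on each pass.
hs = [Hypothesis((1, 2), 5.0), Hypothesis((2, 3), 0.7), Hypothesis((1, 3), 2.4)]
print([h.feature_ids for h in cull(hs)])  # [(1, 2), (1, 3)]
```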
- Fig. 1 shows a flow chart representing an overview of an embodiment of the tracking method;
- Fig. 2 shows a step in the generation of subimages used for feature detection in the method of Fig. 1;
- Fig. 3 shows a flow chart depicting how detected features move between feature sets within the method of Fig. 1;
- Fig. 4 shows a series of image frames, each having a plurality of detected features, illustrating the transition of the features between the feature sets of Fig. 3;
- Fig. 5 shows a set of factors which affect the quality of a feature within a hypothesis used in the method of Fig. 1;
- Fig. 6 illustrates the concept of a "gravity" field which is used in the embodiment of Fig. 1 to scale the quality of detected features in a face model;
- Fig. 7 illustrates a graph of gravity strength variation with distance;
- Fig. 8 illustrates the concept of strain, which is used to vary a hypothesis used in the method of Fig. 1; and
- Fig. 9 illustrates a hysteresis effect, which is used to determine which features of a model are displayed in the method of Fig. 1.
- the preferred embodiment of the present invention provides a system and method for automatic generation of 3D models of rigid bodies.
- a flow chart outlining a first embodiment of the method is shown in Fig. 1.
- the method 10 can be broken into four basic steps.
- An initial step 20 is the acquisition of a series of video images of the object to be modelled.
- In step 30, features of the object being modelled are detected within the series of images.
- In step 40, the isolated features, which are independently tracked, are turned into a model of a face.
- the notion of a hypothesis 40 is used, rather than the traditional approach of matching the detected features to a fixed template.
- the hypothesis represents a belief about whether a set of features on the object are rigidly connected to each other. This belief is not something that is definitely true or false, but it can be stronger or weaker.
- In step 40, an example hypothesis, relating to whether or not two or more of the features detected within the image are maintained in a fixed spatial relationship with each other, i.e. are rigidly connected, is tested against the detected features. If the detected features do not match the hypothesis, the hypothesis is refined and the process is restarted at step 20. Those features in step 40 which fit the hypothesis to within a predetermined confidence threshold are displayed in step 50 to the user to provide a model of the object being tracked.
- Each of the substeps 20-50 of the method 10 will now be described in greater detail, beginning with the process of acquiring images.
- Acquisition of images in step 20 is a relatively straightforward procedure and can be performed by positioning the object to be modelled in front of a set of stereo cameras mounted on a tripod. Images captured by the video cameras are transferred to a PC running application software capable of performing steps 30 to 50 of the method.
- the application software can include suitable routines encoded in C++ or the like.
- the computer system is an IBM-compatible PC with a Pentium III processor or above.
- the PC is in communication with the video cameras via a
- The video capture process can be performed at a rate of between 60 and 100 frames per second. It will be evident to those skilled in the art of digital image processing that other computer systems and programming languages could be utilised in the construction of the preferred embodiment. For each image in the sequence of video images captured, steps 30 to 50 of the method 10 are performed.
- In step 30 of the method 10, features of the object being tracked are detected and tracked within the sequence of images from the camera.
- Features which the system can track are generally characterised by a high contrast texture. Additionally, it is preferable that the texture provides contrast in different directions.
- the system calculates a covariance matrix of a pair of subimages.
- Fig. 2 shows an example pair of subimages S1 and S2 extracted from a main image by the system.
- the main image 200 may comprise a frame or partial frame of the image sequence provided by the cameras of the system.
- Each image is broken into a horizontal difference subimage 210 and a vertical difference subimage 220.
- the system calculates the covariance matrix of the pair of subimages.
- the eigenvalues of the covariance matrix will correspond to the amplitude of the texture in the horizontal and vertical directions.
- a covariance matrix $C$ is formed using the products of the subimages $s_1$ and $s_2$:

$$C = \begin{pmatrix} \sum s_1 s_1 & \sum s_1 s_2 \\ \sum s_1 s_2 & \sum s_2 s_2 \end{pmatrix}$$
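As an illustrative sketch of this covariance test (the exact difference operator and the selection rule are not specified above, so the finite differences and the smaller-eigenvalue criterion used here are assumptions), a patch can be scored so that only textures with contrast in more than one direction rate highly:

```python
import numpy as np

def corner_strength(patch):
    """Score a patch for trackability: high contrast in more than one direction.

    s1 and s2 are horizontal and vertical difference subimages; the 2x2
    covariance matrix C is built from sums of their products, and its
    eigenvalues give the texture amplitude in each direction. A patch is a
    good feature when the *smaller* eigenvalue is large (an assumption,
    in the spirit of Shi-Tomasi feature selection).
    """
    s1 = np.diff(patch.astype(float), axis=1)[:-1, :]  # horizontal differences
    s2 = np.diff(patch.astype(float), axis=0)[:, :-1]  # vertical differences
    C = np.array([[np.sum(s1 * s1), np.sum(s1 * s2)],
                  [np.sum(s1 * s2), np.sum(s2 * s2)]])
    return np.linalg.eigvalsh(C).min()

# A textured patch scores far higher than a flat patch.
rng = np.random.default_rng(0)
textured = rng.integers(0, 255, (16, 16))
flat = np.full((16, 16), 128)
print(corner_strength(textured), corner_strength(flat))
```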
- the feature locations are used to generate a hypothesis relating to the expected relative positions of features within the object being tracked. It should be noted, however, that not all features are suitable for tracking or for incorporation into a hypothesis. This may be because the same features appear to move independently of all other features, or do not move at all; other reasons for unsuitability are that a feature may be difficult to track or may have only recently been detected. For this reason, during the substeps 40 (Fig. 1) of generating and refining a hypothesis, a sub-process of feature management is implemented in order to ensure that the most reliable and robustly tracked features are emphasised in generating and maintaining hypotheses. The so-called "feature management" process will now be explained with reference to Figures 3 and 4.
- Fig. 3 shows a schematic view of how features move between sets of a hypothesis.
- the available features list 300 comprises any features visible within the image sequence and is common to all hypotheses currently used by the system.
- a number of feature sets are used in order to speed up computation.
- the feature management allows features to be tracked between frames and allows the model's reliance on each feature to be increased between its initial identification within an image and its incorporation into a hypothesis.
- the following feature sets are used by the preferred embodiment of the present invention: 1. new features (310); 2. background features (330); 3. uncommitted features; and 4. hypothesis features (320), the last comprising visible features (321) and hidden features (322).
- Background features are features which have been determined not to move between frames. The computation speed of the system is increased as these background features, once identified, are no longer tracked.
- new feature searches are no longer performed within a region defined by the background features.
- Uncommitted features are features that have been tracked in enough frames of the image sequence to be part of a hypothesis, but have not yet been entered into a hypothesis. This may be because the features are either tracking poorly or have been occluded during some of the frames of the image sequence.
- As shown in Fig. 3, all of the features identified within an image are initially grouped within the available features set 300. If these features are tracked by the system with a predetermined degree of confidence for a set period of time, they may become part of the new feature set 310. Once the feature has been classified as a new feature 310, its trackability is used by the system to determine whether the feature fits a current hypothesis 311, is part of the background 312, or is not sufficiently trackable 324 and hence not of use to the system. If the feature fits the hypothesis to within a predetermined quality, the feature moves from the new features set 310 into the hypothesis set 320.
- If the feature is visible in a particular frame, it falls within the visible features set 321, and if it is temporarily hidden, it falls within the hidden feature set 322. Between frames, any particular feature within the hypothesis can move between the visible features set 321 and the hidden feature set 322 without being removed from the hypothesis set 320. If a feature within the new feature set is determined by the system not to be moving, it is transferred 312 from the new feature set 310 into the background feature set 330. As described above, the background features are used to define a background region within which no features are tracked, thereby reducing the computational load on the system.
- If the new features 310 are being reliably tracked by the system but neither fall within the background feature set 330 nor accurately fit the current hypothesis 320, they may be transferred to the uncommitted feature set 323. If an uncommitted feature begins to move in accordance with the current hypothesis, it may be transferred into the committed features set 321, or alternatively, if it becomes untrackable for any reason, it may be removed from the uncommitted feature set 340 and returned to the available feature set 300.
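The transitions just described can be summarised as a small state machine. The sketch below is a paraphrase of Fig. 3 in which the boolean tests stand in for the unspecified confidence and quality thresholds; those tests, and the function name, are assumptions.

```python
from enum import Enum, auto

class FeatureSet(Enum):
    AVAILABLE = auto()    # 300: visible in the image but not yet relied upon
    NEW = auto()          # 310: tracked with confidence for a set period
    HYPOTHESIS = auto()   # 320: moving in accordance with a hypothesis
    UNCOMMITTED = auto()  # reliably tracked but fitting neither background nor hypothesis
    BACKGROUND = auto()   # 330: static; excluded from further tracking

def next_set(current, trackable, moving, fits_hypothesis):
    """Apply one frame's worth of the feature-management transitions of Fig. 3."""
    if not trackable:
        return FeatureSet.AVAILABLE        # untrackable features are recycled (324)
    if current is FeatureSet.NEW:
        if fits_hypothesis:
            return FeatureSet.HYPOTHESIS   # transition 311
        if not moving:
            return FeatureSet.BACKGROUND   # transition 312
        return FeatureSet.UNCOMMITTED
    if current is FeatureSet.UNCOMMITTED and fits_hypothesis:
        return FeatureSet.HYPOTHESIS
    return current

# A new feature that stops moving is reclassified as background.
print(next_set(FeatureSet.NEW, trackable=True, moving=False, fits_hypothesis=False))
```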
- A sequence of image frames, each with a plurality of features, is shown in Fig. 4.
- the frames 1 to 4 of Fig. 4 are shown in sequence but should not be seen as a set of consecutive images from the video sequence.
- detected features are represented by a circle.
- Those features which form part of a hypothesis are shaded with diagonal lines, and are linked with bars to denote the fixed physical relationship of each shape to its neighbour within the hypothesis.
- Those features which are determined to be background features are filled with cross hatching and new and uncommitted features are shown as open circles.
- In frame 1 there is shown a frame having 12 available features within it.
- the hypothesis 410 as depicted in frame 1 includes six features, e.g. 415 and 416.
- Frame 1 additionally includes six unfilled circles which represent new or uncommitted features.
- the uncommitted features, e.g. 421, 422, 423, may either later become part of the background, or be newly detected features with only short motion histories which have therefore not yet found their way into the hypothesis.
- Frame 2 (402) shows the features of the hypothesis 410 having been rotated with respect to image frame 401, as would occur, for example, if the object being tracked within the series of frames in Fig. 4 had rotated.
- image frame 2, in addition to the features of the hypothesis 410, also includes the five new or uncommitted features 425-429.
- the new or uncommitted features do not appear to have moved in concert with the features of the hypothesis. For this reason these features may be determined to be background features or remain as uncommitted features.
- feature 425 appears to be the same feature as 423; however, it has moved up and slightly to the right, as would be expected if it were rigidly attached to the features in the hypothesis. If the feature 425 continues to move in accordance with the features of the hypothesis, it may, once it has a sufficiently long motion history, become part of the hypothesis.
- In frame 3 (403), it can be seen that the features comprising the hypothesis 410 have rotated in an anticlockwise direction.
- the uncommitted feature 425 again appears to have moved in concert with the features of the hypothesis, whereas the other features of the frame have not.
- the features 430, 431, 432 appear to correspond to uncommitted features 426, 427, 428 of frame 2 respectively.
- Feature 433, on the other hand, also appears not to have moved a significant amount; however, due to its short motion history it has not yet been transferred into the background feature set. It can be seen from Figs. 3 and 4 that the hypotheses can constantly evolve over time as features are added to or removed from them. As described above, isolated features which are independently tracked are combined into a model of the object using a hypothesis representing a belief about whether a set of features on the object are rigidly connected to each other. This belief is not something that is definitely true or false, but it can be stronger or weaker.
- each feature in a hypothesis is assigned a quality value.
- a hypothesis with high quality features is strong, whereas a hypothesis with lower quality features is weaker.
- the quality of a feature in a hypothesis can be modified based on tracking, strain, visibility and proximity to other features, as shown in Fig. 5.
- Fig. 5 shows a series of factors which affect the quality of a feature within a hypothesis. Good correlation of the feature with the hypothesis increases the quality of that feature, and accordingly increases the confidence in the hypothesis. There are also four factors which decrease the quality of a feature within a hypothesis: poor tracking, strain, occlusion and proximity to other features ("gravity").
- gravity refers to an arrangement whereby features which lie nearby each other exert a so-called gravity force on each other, which decreases the quality of the weaker of the two features.
- In Fig. 6 there are shown two features 600 and 610.
- the quality rating of feature 600 is 35, whereas the quality rating of feature 610 is 3.
- feature 600 exerts a gravity field, illustrated by graph 620 in Fig. 7, on feature 610.
- the graph 620 in Fig. 7 plots the strength of the gravity on the vertical axis against the distance from the strong feature on the horizontal axis.
- the gravity field 620 can be based on a tanh function.
- the gravity function is used essentially as a multiplier to scale the quality of any features falling within it, thereby reducing the quality of the feature 610.
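One possible reading of this gravity field is sketched below; the text fixes only that the field can be based on a tanh function and acts as a multiplier on feature quality, so the radius parameter and the exact shape are assumptions.

```python
import math

def gravity_multiplier(distance, radius=20.0):
    """Scale factor applied to the weaker of two nearby features.

    Modelled on a tanh fall-off as suggested for the field of Fig. 7:
    close to the strong feature the multiplier approaches 0 (quality
    suppressed); beyond the radius it approaches 1 (little effect).
    """
    return math.tanh(distance / radius)

# The weaker feature's quality is scaled by its distance from the stronger one.
weak_quality = 3.0
print(weak_quality * gravity_multiplier(5.0))   # heavily suppressed when close
print(weak_quality * gravity_multiplier(60.0))  # nearly unaffected when far
```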
- Fig. 8 illustrates an example of the concept of strain.
- In Fig. 8 there is shown a set of features detected within an image and their correlation with their expected feature positions according to a hypothesis.
- the measured feature positions are represented by octagons, e.g. 700, 710, and the predicted positions of the features are represented by open circles, e.g. 720 and 730.
- the fixed relationships between the predicted feature positions according to the hypothesis are represented by the lines joining the predicted feature positions.
- strain can also be used to adapt the hypothesis of the system. Instead of picking an original model position for each of the features within the hypothesis and keeping it for the duration of the hypothesis, the model positions can be made to adapt a small amount with each frame. This means that if a feature is placing a strain on the model, i.e. the predicted model position is different from the actual detected feature position, the model adapts slowly towards the measured position. The adaptation can be inversely proportional to the quality of the feature.
- new model position = (1 − adaptFactor) × oldM + adaptFactor × predictedM, in which oldM is the existing model position of the feature and predictedM is the model position predicted from the measured feature position.
- the adaptation speed can be set to be low so that features which cause significant strain do not cause significant distortion of the model. These features can be dropped from the model rather than straining the model to the point where the hypothesis becomes overly weak.
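A minimal sketch of this adaptation rule follows, with the inverse proportionality to feature quality implemented as a simple division; the base rate and the clamp are assumptions.

```python
def adapt_position(old_m, predicted_m, feature_quality, base_rate=0.05):
    """Move a model position a small step toward the predicted one.

    Implements: newM = (1 - adaptFactor) * oldM + adaptFactor * predictedM,
    with the adaptation made inversely proportional to feature quality so
    that high-quality features distort the model least.
    """
    adapt_factor = base_rate / max(feature_quality, 1.0)
    return tuple((1 - adapt_factor) * o + adapt_factor * p
                 for o, p in zip(old_m, predicted_m))

# A quality-35 feature barely shifts the model; a quality-1 feature shifts it more.
print(adapt_position((10.0, 10.0), (12.0, 10.0), feature_quality=35))
print(adapt_position((10.0, 10.0), (12.0, 10.0), feature_quality=1))
```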
- features with good quality remain part of a hypothesis, whilst those with lower quality are removed from the hypothesis.
- new and existing features can be merged into a hypothesis if they fit the existing rigid body motion of the model, as illustrated in Fig. 4.
- Individual hypotheses can also be merged together if they appear to be moving consistently with each other.
- a limit on the number of features in a hypothesis is desirably used. This means that the model becomes more robust after an initialisation time, because the features with good quality keep getting better whilst those with poorer quality are eliminated and replaced, until a hypothesis contains only features with high quality factors.
- the final step 50 in the method 10 of Fig. 1 is that of displaying the generated model to the user of the system.
- a hysteresis effect can be used to prevent features of the model apparently jumping into and out of the model being displayed to the user.
- In Fig. 9, it can be seen from the left-hand graph 810 that features already displayed in the model must suffer a significant degradation in quality before being removed from the displayed model. From the right-hand graph it can be seen that features which are not displayed must show a significant increase in quality in order to be included in the displayed model.
- the requirement to get into the display is higher than the requirement needed to get into a hypothesis, meaning that only the most stable and robust features are displayed to the user.
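Such a hysteresis rule might be implemented with two thresholds, the entry threshold set higher than the exit threshold; the numeric values below are illustrative assumptions.

```python
def update_display(displayed, quality, show_at=0.8, hide_at=0.4):
    """Hysteresis rule for displaying model features (Fig. 9).

    A feature must exceed the higher threshold to enter the display and
    fall below the lower one to leave it, so features do not flicker in
    and out of the displayed model.
    """
    if displayed:
        return quality >= hide_at   # stays until quality degrades significantly
    return quality >= show_at       # needs a significant increase to appear

# A feature hovering between the thresholds keeps its current display state.
state = False
for q in (0.5, 0.85, 0.6, 0.3):
    state = update_display(state, q)
    print(q, state)
```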
- the preferred embodiment thus provides for the automated construction of a 3D feature model builder (i.e. one determining the 3D location of features with respect to one another) and a 3D pose tracker which derives the 3D pose of the object from the measured feature locations.
- there is no separate step to make the model, either by manual selection of features, by using markers, or by searching for predefined features (for example the corners of the eyes).
- the model is refined in three ways: new features consistent with the current object hypothesis are added; existing features which do not conform with the current object hypothesis are removed; and the location of each feature with respect to the other features is refined according to the ongoing object pose estimation.
- this approach to model building and tracking has advantages in that the building of the 3D feature model is fully automatic and does not require prior knowledge of the appearance of the object. Further, the tracking is more robust under changing conditions (e.g. illumination) as features are continually replaced with new ones.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2003238549A AU2003238549A1 (en) | 2002-06-28 | 2003-06-25 | Tracking method and apparatus |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AUPS3281A AUPS328102A0 (en) | 2002-06-28 | 2002-06-28 | Tracking method |
| AUPS3281 | 2002-06-28 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2004003849A1 (fr) | 2004-01-08 |
Family
ID=3836838
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/AU2003/000794 (WO2004003849A1, Ceased) | Tracking method and apparatus (Procede et dispositif de poursuite) | 2002-06-28 | 2003-06-25 |
Country Status (2)
| Country | Link |
|---|---|
| AU (1) | AUPS328102A0 (fr) |
| WO (1) | WO2004003849A1 (fr) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2007110731A1 (fr) * | 2006-03-24 | 2007-10-04 | Toyota Jidosha Kabushiki Kaisha | Image processing method and image processing unit |
| FR2911978A1 (fr) * | 2007-01-30 | 2008-08-01 | Siemens Vdo Automotive Sas | Method for initialising a device for tracking a person's face |
| WO2009126273A1 (fr) * | 2008-04-09 | 2009-10-15 | Cognex Corporation | Method and system for dynamic feature detection |
| US8982046B2 (en) | 2008-12-22 | 2015-03-17 | Seeing Machines Limited | Automatic calibration of a gaze direction algorithm from user behavior |
| WO2018108926A1 (fr) * | 2016-12-14 | 2018-06-21 | Koninklijke Philips N.V. | Tracking a head of a subject |
| CN112528932A (zh) * | 2020-12-22 | 2021-03-19 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method, apparatus, roadside device and cloud control platform for optimising position information |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5802220A (en) * | 1995-12-15 | 1998-09-01 | Xerox Corporation | Apparatus and method for tracking facial motion through a sequence of images |
| EP0984386A2 (fr) * | 1998-09-05 | 2000-03-08 | Sharp Kabushiki Kaisha | Method and device for detecting a human face, and observer tracking display |
- 2002-06-28: AU application AUPS3281A filed (published as AUPS328102A0) — status: Abandoned
- 2003-06-25: PCT application PCT/AU2003/000794 filed (published as WO2004003849A1) — status: Ceased
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5802220A (en) * | 1995-12-15 | 1998-09-01 | Xerox Corporation | Apparatus and method for tracking facial motion through a sequence of images |
| EP0984386A2 (fr) * | 1998-09-05 | 2000-03-08 | Sharp Kabushiki Kaisha | Method and device for detecting a human face, and observer tracking display |
Non-Patent Citations (1)
| Title |
|---|
| "Detecting faces in images: a survey", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (PAMI), vol. 24, no. 1, January 2002 (2002-01-01), pages 34 - 58 * |
Cited By (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2007110731A1 (fr) * | 2006-03-24 | 2007-10-04 | Toyota Jidosha Kabushiki Kaisha | Image processing method and image processing unit |
| FR2911978A1 (fr) * | 2007-01-30 | 2008-08-01 | Siemens Vdo Automotive Sas | Method for initialising a device for tracking a person's face |
| WO2008095624A1 (fr) * | 2007-01-30 | 2008-08-14 | Continental Automotive France | Method for initialising a device for tracking a person's face |
| WO2009126273A1 (fr) * | 2008-04-09 | 2009-10-15 | Cognex Corporation | Method and system for dynamic feature detection |
| US8238639B2 (en) | 2008-04-09 | 2012-08-07 | Cognex Corporation | Method and system for dynamic feature detection |
| US8411929B2 (en) | 2008-04-09 | 2013-04-02 | Cognex Corporation | Method and system for dynamic feature detection |
| US8982046B2 (en) | 2008-12-22 | 2015-03-17 | Seeing Machines Limited | Automatic calibration of a gaze direction algorithm from user behavior |
| WO2018108926A1 (fr) * | 2016-12-14 | 2018-06-21 | Koninklijke Philips N.V. | Tracking a head of a subject |
| CN110073363A (zh) * | 2016-12-14 | 2019-07-30 | Koninklijke Philips N.V. | Tracking a head of a subject |
| US11100360B2 (en) | 2016-12-14 | 2021-08-24 | Koninklijke Philips N.V. | Tracking a head of a subject |
| CN110073363B (zh) * | 2016-12-14 | 2023-11-14 | Koninklijke Philips N.V. | Tracking a head of a subject |
| CN112528932A (zh) * | 2020-12-22 | 2021-03-19 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method, apparatus, roadside device and cloud control platform for optimising position information |
| CN112528932B (zh) * | 2020-12-22 | 2023-12-08 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Method, apparatus, roadside device and cloud control platform for optimising position information |
Also Published As
| Publication number | Publication date |
|---|---|
| AUPS328102A0 (en) | 2002-07-18 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12067173B2 (en) | Information processing device and method, program and recording medium for identifying a gesture of a person from captured image data | |
| Maddalena et al. | Towards benchmarking scene background initialization | |
| JP5001260B2 (ja) | Object tracking method and object tracking device | |
| JP4349367B2 (ja) | Estimation system, estimation method and estimation program for estimating the position and orientation of an object | |
| US6661918B1 (en) | Background estimation and segmentation based on range and color | |
| US9767568B2 (en) | Image processor, image processing method, and computer program | |
| US20170372475A1 (en) | Method and System for Vascular Disease Detection Using Recurrent Neural Networks | |
| KR101618814B1 (ko) | Apparatus and method for monitoring video, estimating the inclination of a single object | |
| KR101035055B1 (ko) | Object tracking system and method using heterogeneous cameras | |
| US20100208038A1 (en) | Method and system for gesture recognition | |
| US20090262989A1 (en) | Image processing apparatus and method | |
| JP2012529691A (ja) | Three-dimensional image generation | |
| Poonsri et al. | Improvement of fall detection using consecutive-frame voting | |
| CN108898108B (zh) | User abnormal behaviour monitoring system and method based on a sweeping robot | |
| WO2004003849A1 (fr) | Tracking method and apparatus | |
| Loutas et al. | Probabilistic multiple face detection and tracking using entropy measures | |
| CN113611387A (zh) | Motion quality assessment method based on human pose estimation, and terminal device | |
| JP2002342762A (ja) | Object tracking method | |
| Kim et al. | A real-time region-based motion segmentation using adaptive thresholding and K-means clustering | |
| CN114332082A (zh) | Sharpness evaluation method and apparatus, electronic device and computer storage medium | |
| Stone et al. | Silhouette classification using pixel and voxel features for improved elder monitoring in dynamic environments | |
| CN111931754A (zh) | Method and system for identifying a target object in a sample, and readable storage medium | |
| JPH08194822A (ja) | Moving object detection device and method | |
| CN109919999B (zh) | Method and device for detecting a target position | |
| CN115546729A (zh) | Deep learning-based method, device and equipment for detecting dog-walking violations | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| 122 | Ep: pct application non-entry in european phase | ||
| NENP | Non-entry into the national phase |
Ref country code: JP |
|
| WWW | Wipo information: withdrawn in national office |
Country of ref document: JP |