
WO2015080498A1 - Method for detecting human body through depth information analysis and apparatus for analyzing depth information for user body detection - Google Patents

Method for detecting human body through depth information analysis and apparatus for analyzing depth information for user body detection Download PDF

Info

Publication number
WO2015080498A1
Authority
WO
WIPO (PCT)
Prior art keywords
head
data
region
user
extracting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2014/011501
Other languages
English (en)
Inventor
Hyun Jin Park
Hyung Sik Yoon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Golfzon Co Ltd
Original Assignee
Golfzon Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Golfzon Co Ltd filed Critical Golfzon Co Ltd
Publication of WO2015080498A1 publication Critical patent/WO2015080498A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G06T7/00 Image analysis
    • G06T7/11 Region-based segmentation
    • A63B69/36 Training appliances or apparatus for special sports, for golf
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251 Analysis of motion using feature-based methods involving models
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G06V20/64 Three-dimensional objects
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/30196 Human being; Person
    • G06T2207/30221 Sports video; Sports image

Definitions

  • the present invention relates to a method for detecting a specific body part of a user, such as the head, torso or lower body of the user, from a depth image with respect to the body of the user, which is acquired using a depth camera, and a depth information analysis apparatus therefor.
  • Methods for analyzing human motions are implemented through a variety of technologies in a variety of fields and used in production of animation based on human motion, motion recognition games, and sports motion analysis such as golf swing position analysis, for example.
  • Inertial sensor technology, reflective marker technology and image analysis technology are representative examples of human motion analysis methods.
  • Although inertial sensor technology and reflective marker technology have the merit of correctly sensing user motion, a plurality of sensors or markers must be attached to the body of the user in order to analyze the motion. Accordingly, these technologies are used only in very limited cases, such as when the user does not mind the inconvenience of having sensors or markers attached to the body for motion analysis.
  • Image analysis using a camera that acquires two-dimensional images is limited in analyzing user motion in three-dimensional space, and thus a method of connecting multiple cameras in a stereoscopic vision system and extracting three-dimensional information from the two-dimensional images obtained by each camera is widely used.
  • However, using many cameras is as inconvenient as attaching sensors to the user's body, and such cameras are sensitive to noise and illumination variation, so incorrect results may be obtained.
  • The depth camera is a device that outputs, for each pixel, information on the distance to the corresponding point, that is, an image including three-dimensional information along the X, Y and Z axes.
  • The depth camera provides intuitive information about the structure of an object in an image, as distinguished from a normal camera that provides color or illumination information, and outputs stable data that is not sensitive to illumination variation.
  • The aforementioned techniques of discriminating a body part using a classifier additionally require a learning process for the subject to be recognized and depend on the learned body shape and motion characteristics of a specific user, and thus there is a limitation in recognizing an arbitrary user and arbitrary motion.
  • The motion tracking technique through modeling of the skeleton shape is useful for motion recognition that discriminates motions with a large posture difference within an appropriate motion speed range.
  • However, depth images lose much data in the case of a fast sports motion with large posture changes, for example, a golf swing motion. Accordingly, the motion tracking technique has difficulty detecting the complete extent of a body part.
  • a method for detecting a human body through depth image analysis including: extracting a user part from an acquired depth image; detecting a head region of the user from a region of interest set from an upper end of the extracted user part; detecting a lower body region of the user from a region set based on a position of a lower end of the extracted user part and a position of a weight center of the user part; detecting a torso region of the user from a region ranging from a lower end of the detected head region to the weight center; and detecting data left after removal of the detected head region, lower body region and torso region from the user part, as an arm region of the user.
  • a method for detecting a human body through depth image analysis including: extracting a user part from an acquired depth image; setting a region of interest in a predetermined size from an upper end of the extracted user part and extracting data within the region of interest; extracting data included in a range predetermined on the basis of a most predominantly distributed depth value from among the extracted data within the region of interest, as effective head data; and detecting a head region of a user by calculating a head center point and a radius of the head from the effective head data.
  • a method for detecting a human body through depth image analysis including: extracting a user part from an acquired depth image; setting a target region of a predetermined size on the basis of a head center point extracted in a previous user head region detection process; extracting head observation data for user head region detection using depth information of data corresponding to the target region; extracting a center coordinate point by applying a weight to the head observation data such that the head observation data has a larger value with decreasing distance to the previous head center point; and detecting a head region of a user by extracting a head center point using the center coordinate point.
  • a method for detecting a human body through depth image analysis including: acquiring a depth image of an initial frame, detecting a head region from a user part in the depth image and calculating a head center point and a head radius of the initial head region; and repeatedly extracting and updating a center coordinate point of data, to which a weight by which the data has a larger value with decreasing distance to the initial head center point has been applied, on the basis of the initial head center point for a depth image of a subsequent frame, and detecting a head region corresponding to the head radius and having the updated center coordinate point as a head center point, as a user head region in the depth image of the subsequent frame when the updated center coordinate point is not varied.
  • a method for detecting a human body through depth image analysis including: acquiring a depth image through a depth image acquisition unit set in front of a user; extracting a user part from the acquired depth image; extracting position information about data within a predetermined section from data on a two-dimensional plane including depth direction components of data corresponding to the extracted user part; and detecting a lower body region of the user through statistical analysis using the mean and covariance of the data within the predetermined section.
  • a method for detecting a human body through depth image analysis including: detecting a head region of a user from a user part extracted from an acquired depth image; calculating a weight center of the extracted user part; extracting an upper center point for user torso detection within a region predetermined on the basis of a lower end of the detected head region and located under the lower end of the detected head region, extracting a lower center point for user torso detection within a region predetermined on the basis of the weight center and located under the weight center, and extracting a torso center line connecting the upper center point and the lower center point; calculating a torso radius from data included in a predetermined region around the lower center point; and detecting a torso region of the user by extracting data within a range of the torso radius based on the torso center line.
  • an apparatus for analyzing depth information for user body detection including: a depth image acquisition unit for acquiring a depth image of a motion of a user; and a depth image analysis unit for extracting a user part from the acquired depth image, the depth image analysis unit comprising: a means for detecting a head region of the user by setting a region of interest from an upper end of the extracted user part; a means for detecting a lower body region of the user from a region ranging from a lower end of the extracted user part to a weight center of the user part; a means for detecting a torso region of the user from a region ranging from a lower end of the detected head region to the weight center; and a means for detecting data left after removal of the detected head region, lower body region and torso region from the user part, as an arm region of the user.
  • The method for detecting a human body through depth information analysis and the apparatus for analyzing depth information for user body detection according to the present invention do not impose restrictions such as requiring prior learning, requiring a predefined body shape, or allowing normal body modeling only when specific data is completely detected. When detecting a body part of the user using a depth image, they enable correct detection of a body part such as the head, lower body or arm even when the user appears in an unpredictable shape in the depth image owing to personal characteristics, when body parts are not completely exposed in some postures, or when the user makes a fast motion such as a golf swing.
  • FIG. 1 is a view schematically showing a configuration of a depth information analysis apparatus for user body detection according to an embodiment of the present invention
  • FIG. 2 is a flowchart illustrating a process of detecting a head region of a user from a depth image through a method for detecting a human body according to an embodiment of the present invention
  • FIG. 3 shows a depth image of a user who takes an address position as an example of a depth image with respect to the user
  • FIG. 4 shows an image obtained by extracting a user part from the depth image shown in FIG. 3;
  • FIG. 5 is a view illustrating a process of determining a depth range for detecting a head region of the user
  • FIG. 6 shows a result obtained by detecting the head region from the user part of the depth image shown in FIG. 4;
  • FIG. 7 shows consecutive depth images of a plurality of frames acquired by a depth image acquisition unit shown in FIG. 1;
  • FIGS. 8, 9 and 10 are flowcharts illustrating a process of detecting a head region of a user from a depth image through the method for detecting a human body according to another embodiment of the present invention
  • FIG. 11 is a view illustrating the process of detecting a head region according to the flowcharts shown in FIGS. 8 and 9;
  • FIG. 12 shows a result obtained by detecting head regions from the consecutive depth images of the plurality of frames, shown in FIG. 7, according to the method for detecting a head region, shown in FIGS. 8 and 9;
  • FIG. 13 is a flowchart illustrating a process of detecting a lower body region of a user through the method for detecting a human body according to another embodiment of the present invention
  • FIG. 14 is a view illustrating a lower body modeling section which is set to detect the lower body region according to the method shown in FIG. 13;
  • FIG. 15 is a view illustrating a statistical analysis method used to detect the lower body region according to the method shown in FIG. 13;
  • FIGS. 16 and 17 are views for illustrating the principle of extracting effective lower body data according to the statistical analysis method shown in FIG. 15;
  • FIG. 18 is a flowchart illustrating a process of detecting a torso region of a user through the method for detecting a human body according to another embodiment of the present invention
  • FIG. 19 is a view illustrating a process of extracting an upper center point and a lower center point in order to detect the torso region of the user according to the method shown in FIG. 18;
  • FIG. 20 is a view illustrating a process of detecting z-direction components of the upper center point and the lower center point shown in FIG. 19, extracting a torso center line connecting the two points and detecting the torso region based on the torso center line;
  • FIG. 21 is a flowchart illustrating a process of detecting an arm region of a user using the method for detecting a human body through depth information analysis according to another embodiment of the present invention.
  • the method for detecting a human body through depth information analysis and the apparatus for analyzing depth information for user body detection according to the present invention can be applied to any field that requires user motion tracking, for example, motion tracking based games, sports motion analysis and the like.
  • the method for detecting a user body according to the present invention is based on a method for detecting a user body from a depth image with respect to a golf swing motion of a user, and thus a description will be given of a method for detecting a specific body part of the user when the user makes a golf swing motion.
  • FIG. 1 is a view schematically showing a configuration of the apparatus for analyzing depth information for user body detection according to an embodiment of the present invention.
  • the apparatus for analyzing depth information for user body detection includes a depth image acquisition unit 100 and a depth image analysis unit 200.
  • the depth information analysis apparatus has a simple configuration, as shown in FIG. 1, and thus the depth information analysis apparatus may be implemented as one camera apparatus, for example.
  • the depth information analysis apparatus is placed around a user U who makes a motion, facing the user U, for operation. Accordingly, the depth information analysis apparatus is very convenient for the user and very efficient in terms of space, compared to other motion analysis apparatuses that require additional complicated equipment.
  • the depth image acquisition unit 100 acquires depth images with respect to the user and is implemented by a widely used depth camera. Specifically, the depth image acquisition unit 100 acquires a two-dimensional image of the user, obtains depth information using ultrasonic waves, infrared rays or the like and acquires a depth image having three-dimensional coordinate information including depth information of each pixel of the two-dimensional image.
  • the depth image acquisition unit 100 acquires consecutive depth images of a plurality of frames and sends the acquired depth images to the depth image analysis unit 200.
  • the depth image analysis unit 200 executes a program for analyzing the depth images sent from the depth image acquisition unit 100 to detect and track a specific body part of the user.
  • the depth image analysis unit 200 may include a head region detection means 210, a lower body region detection means 220, a torso region detection means 230 and an arm region detection means 240 and each means may be implemented by software or hardware.
  • the head region detection means 210 may be configured to include a first head region detection means and a second head region detection means, which are not shown.
  • the apparatus for analyzing depth information for user body detection may be connected to an additional apparatus or equipment, which uses a depth information analysis result obtained by the depth information analysis apparatus.
  • a user motion information analysis and provision apparatus 500 shown in FIG. 1 corresponds to the additional apparatus or equipment.
  • the user motion information analysis and provision apparatus 500 may be implemented as various apparatuses.
  • the user motion information analysis and provision apparatus 500 can be implemented as a golf lesson information providing apparatus which provides golf lesson information to users using information analyzed through the depth image acquisition unit 100 and the depth image analysis unit 200.
  • Detection of a user head region may be divided into two methods. That is, when the depth image acquisition unit 100 (refer to FIG. 1) acquires consecutive depth images of a plurality of frames, detection of the user head region may be divided into an initial head region detection method for detecting the user head region from a depth image of an initial frame and a subsequent head region detection method for detecting the user head region from a subsequent depth image of a frame following the initial frame on the basis of the detected initial head region.
  • When the user makes a golf swing motion, the user must take an address posture and then perform consecutive motions such as take-back, backswing, backswing top, downswing, impact and follow-through.
  • the address motion is a standstill motion and most golfers take very similar address motions.
  • the aforementioned initial head region detection method detects the user head region by analyzing the depth image (which may be a depth image of the first frame from among the depth images of the plurality of frames, acquired by the depth image acquisition unit, or a depth image at an arbitrary time after the first frame) of the initial frame. It is desirable to detect the user head region using the initial head region detection method, for a depth image when the user takes an address posture.
  • Upon completion of detection of the initial head region, the user head region is detected from each of the depth images of the motions following the address posture, on the basis of the detected initial head region, according to the above-described subsequent head region detection method.
  • the first head region detection means of the head region detection means 210 (refer to FIG. 1) performs the initial head region detection and the second head region detection means performs the subsequent head region detection.
  • FIGS. 2 to 6 illustrate the initial head region detection method according to the first head detection means and FIGS. 7 to 12 illustrate the subsequent head region detection method according to the second head detection means.
  • FIG. 2 is a flowchart illustrating the initial head region detection method
  • FIG. 3 shows a depth image when the user takes an address posture as an example of a depth image with respect to the user
  • FIG. 4 shows an image obtained by extracting a user part from the depth image shown in FIG. 3
  • FIG. 5 is a view illustrating a process of determining a depth range for detecting the head region of the user
  • FIG. 6 shows a result obtained by detecting the head region from the user part of the depth image shown in FIG. 4.
  • a depth image is acquired by the depth image acquisition unit set in front of the user (S11).
  • the acquired depth image includes a ground part (the ground) and a background part in addition to the user part.
  • the background part can be easily eliminated since the background part has a much higher depth value than the user part or can be easily patterned, whereas the ground part is difficult to remove since the ground part is connected to the foot part of the user.
  • FIG. 3 shows an exemplary depth image 1 prior to removal of the ground part 2.
  • A representative method for removing the ground part is to use the RANSAC algorithm.
  • The method using the RANSAC algorithm is briefly as follows. Three arbitrary points in the depth image are selected and a ground model is predicted by treating the three points as in-liers. Then, how well the predicted model corresponds to the other data is checked. The predicted model is determined to be the ground model when it is plausible as the ground, whereas the process is repeated for other arbitrary points when the predicted model is not acceptable as the ground.
  • the ground part 2 can be modeled and appropriately removed from the depth image 1.
  • the image processing method using the RANSAC algorithm is widely used and thus detailed description thereof is omitted.
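  • As a rough illustration of the ground-removal step described above, the Python sketch below fits a plane to the depth-image point cloud with a simple RANSAC loop and discards the points near that plane. The point-cloud layout, thresholds, iteration count and the assumption that the Y axis points upward are illustrative choices, not values taken from the patent.

```python
import numpy as np

def remove_ground_ransac(points, dist_thresh=0.02, n_iters=200, rng=None):
    """points: (N, 3) array of (X, Y, Z) coordinates from the depth image.
    Returns the points that do NOT lie on the estimated ground plane."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # pick three arbitrary points and treat them as in-liers of a candidate plane
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample, try again
        normal /= norm
        if abs(normal[1]) < 0.9:
            continue                      # not horizontal enough to be the ground (Y-up assumed)
        # check how well the candidate plane explains the remaining data
        dist = np.abs((points - p1) @ normal)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]          # keep everything that is not part of the ground model
```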
  • the user part 10 is extracted by removing the ground part 2 from the depth image 1, as described above (S12). Then, a region of interest of a predetermined size is set in order to detect the head region from the user part 10 and data (referred to as data of interest) included in the region of interest is extracted (S13).
  • the region of interest ROI having the predetermined size is set on the basis of the upper end of the user part 10.
  • head region detection is performed using only the data of interest, and thus data processing can be rapidly carried out.
  • An effective head data extraction process for head region detection is performed using the data of interest extracted as described above (S14, S15 and S16).
  • a depth value Z nr closest to the depth image acquisition unit 100, from among the data of interest, is extracted (S14).
  • a most predominantly distributed depth value Z MJ from among data included in a predetermined range based on the depth value Z nr is extracted (S15) and data included in an effective depth range defined as [Z MJ , Z MJ +p] is extracted as the effective head data (S16).
  • p is a predetermined constant.
  • FIG. 5 shows the head region of the user part 10, which is extracted from the depth image, more specifically, data 11 included in the region of interest.
  • the head region 11 of the user is shown in FIG. 5 to easily describe the aforementioned steps S14, S15 and S16 and does not represent data of an actual depth image.
  • If the head region were detected by simply setting the center point of the data included in the region of interest to the center of the head and calculating the radius of the head from that center point, a wrong part might be detected as the head region.
  • the head state of the user may be represented in various forms in the depth image, depending on the hair style of the user or whether the user wears a cap, but the various forms are processed as data corresponding to the same head part in the depth image.
  • the method for detecting a human body according to an embodiment of the present invention can correctly detect the head region even in the above-described cases.
  • the principle of the method for detecting a human body will now be described with reference to FIG. 5.
  • L0 denotes a reference depth position based on the depth image acquisition unit 100 and L1 denotes a depth position of data closest to the reference depth position L0.
  • L1 corresponds to the position of the depth value Z nr closest to the reference depth position L0.
  • L3 represents the position of a value Z nr +p, which is obtained by adding the predetermined number p to the depth value Z nr of L1
  • L2 represents a depth position of the depth value Z MJ of data, which is most predominantly distributed in the depth range of Z nr to Z nr +p.
  • L4 represents the position of a value Z MJ +p obtained by adding the predetermined value p to the depth value Z MJ most predominantly distributed in the range of [Z nr , Z nr +p].
  • Step S14 is a process of extracting the depth range from L1 to L3 from among the data included in the region of interest, and step S15 is a process of detecting L2 within the range from L1 to L3. That is, the depth range from L1 to L3 on the face side of the head is determined, and the data of widest distribution within that range is found.
  • Regarding the head approximately as a sphere, the data of widest distribution refers to the portion with the widest cross-section when the sphere is cut.
  • L2, that is, the position corresponding to the depth value Z MJ , is the part with the widest cross-section within the depth range from L1 to L3.
  • a depth range of [Z nr , Z MJ +p] can be set to an effective depth range and data included in the effective depth range can be determined as effective head data in S16.
  • Since the effective head data is determined on the basis of the face side of the head in the aforementioned process, the effective head data is not affected by changes in the user's hair style and thus a correct head region can be detected.
  • the effective head data extracted from the effective depth range is data constituting the head region and the mean position (X 1 , Y 1 , Z 1 ) of three-dimensional positions of the data can be determined as a head center point P H1 (S17).
  • A radius calculated by treating the number of pieces of effective head data as the area of a circle can be set as the radius of the head R H (S18). That is, the radius R H can be calculated according to Math Figure 1, i.e., R H = sqrt(N/π), where N is the number of pieces of effective head data regarded as the area of a circle.
  • the head region HR1 of the user can be detected from the depth image, as shown in FIG. 6, according to the head center point P H1 and the radius of head R H , calculated from the effective head data (S19).
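  • A compact sketch of steps S13 to S19 might look as follows. The ROI height, the constant p, the histogram bin width and the assumption that the user part is an (N, 3) array of (x, y, z) points with y increasing upward are placeholders; the effective depth range follows the [Z nr , Z MJ +p] form described above.

```python
import numpy as np

def detect_initial_head(user_points, roi_height=0.30, p=0.10, bin_width=0.01):
    """user_points: (N, 3) array (x, y, z) of the extracted user part.
    Returns (head_center, head_radius)."""
    # S13: region of interest of a predetermined size from the upper end of the user part
    y_top = user_points[:, 1].max()
    roi = user_points[user_points[:, 1] >= y_top - roi_height]

    z = roi[:, 2]
    z_nr = z.min()                                # S14: depth value closest to the camera
    band = z[(z >= z_nr) & (z <= z_nr + p)]       # search band [Z_nr, Z_nr + p]

    # S15: most predominantly distributed depth value in the band (histogram mode)
    bins = np.arange(z_nr, z_nr + p + bin_width, bin_width)
    hist, edges = np.histogram(band, bins=bins)
    z_mj = edges[np.argmax(hist)]

    # S16: effective head data within the effective depth range
    head_data = roi[(roi[:, 2] >= z_nr) & (roi[:, 2] <= z_mj + p)]

    # S17: head center point = mean 3-D position of the effective head data
    head_center = head_data.mean(axis=0)
    # S18: radius from the data count treated as the area of a circle (Math Figure 1);
    # this is meaningful when the data is counted per image pixel
    head_radius = np.sqrt(len(head_data) / np.pi)
    return head_center, head_radius               # S19: head region = circle (center, radius)
```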
  • FIG. 7 shows consecutive depth images of a plurality of frames obtained by the depth image acquisition unit shown in FIG. 1
  • FIGS. 8, 9 and 10 are flowcharts illustrating the subsequent head region detection method
  • FIG. 11 is a view illustrating the subsequent head region detection process according to the flowcharts shown in FIGS. 8, 9 and 10
  • FIG. 12 shows a result obtained when head regions are detected from the consecutive depth images of the plurality of frames, shown in FIG. 7, according to the head region detection method shown in FIGS. 8, 9 and 10.
  • the method for detecting a human body provides a method for detecting a head region from a depth image of a current frame on the basis of a head region detected from a depth image of a previous frame, for a plurality of consecutive depth images.
  • a head region is detected from a depth image 1a of the first frame according to the method illustrated in FIGS. 2 to 6 and head regions are detected from depth images 1b of other frames on the basis of the head region detected from the depth image 1a of the first frame.
  • (FIGS. 8, 9 and 10 separately show steps of one flowchart; A of FIG. 8 is linked to A of FIG. 9, and B of FIG. 9 is linked to B of FIG. 10.)
  • a depth image is acquired by the depth image acquisition unit set in front of the user (S21) and a user part is extracted from the acquired depth image (S22).
  • a target region of a predetermined size is set on the basis of the center of a head region detected from a depth image of a previous frame (S23).
  • the center of the head region detected from the depth image of the previous frame corresponds to the head center point P H1 (X 1 , Y 1 , Z 1 ) extracted according to the method shown in FIGS. 2 to 6 and the target region corresponds to a circle having the head center point P H1 as the center and having a predetermined radius.
  • the predetermined radius of the target region is preferably set to a value greater than the above-described radius of head R H .
  • An observation depth range [Z 1 - Δz, Z 1 + Δz] for extracting head observation data from among the data included in the target region is set (S24). That is, the observation depth range is set to [Z 1 - Δz, Z 1 + Δz], and data belonging to this observation depth range is defined as head observation data.
  • Δz is a value that satisfies the condition of the following Math Figure 2.
  • N HD1 represents the number of pieces of head observation data and R H represents the radius of the head, which was calculated in the process of extracting the head region from the depth image of the previous frame.
  • Math Figure 2 represents the condition that the number of pieces of head observation data must be greater than the area of the circle with the head radius R H , that is, N HD1 > π R H ².
  • First, Δz is replaced by a predetermined initial value N 0 to determine whether the number of pieces of data included in the depth range [Z 1 -N 0 , Z 1 +N 0 ] is greater than the area of the circle having the head radius R H according to Math Figure 2 (S25).
  • If the condition is not satisfied, step S25 is repeated while increasing Δz by a predetermined unit C (S26).
  • In step S26, the counter a increases each time S26 is repeated (so that Δz = N 0 + aC), and C is a constant indicating the predetermined increment.
  • That is, the range [Z 1 -N 0 , Z 1 +N 0 ] is first applied to check the condition of Math Figure 2, and the value Δz is found by sequentially applying [Z 1 -(N 0 +C), Z 1 +(N 0 +C)], [Z 1 -(N 0 +2C), Z 1 +(N 0 +2C)], [Z 1 -(N 0 +3C), Z 1 +(N 0 +3C)], ... until Math Figure 2 is satisfied, in order to determine the observation depth range [Z 1 - Δz, Z 1 + Δz].
  • In other words, Δz is increased by the predetermined increment C until Math Figure 2 is satisfied, thereby determining the observation depth range [Z 1 - Δz, Z 1 + Δz].
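  • The search for Δz in steps S24 to S26 can be sketched as a loop that widens the observation depth band around Z 1 until the amount of data satisfies the condition of Math Figure 2. The initial value N 0 , the increment C and the iteration cap below are illustrative; R H is assumed to be expressed in the same (pixel) units in which the data is counted.

```python
import numpy as np

def observation_depth_range(target_z, z1, head_radius, n0=0.05, c=0.01, max_iters=100):
    """target_z: 1-D array of depth values of the data inside the target region.
    z1: depth component Z1 of the head center detected in the previous frame.
    Returns (delta_z, head_observation_mask) once Math Figure 2 is satisfied."""
    required = np.pi * head_radius ** 2            # area of the circle with radius R_H
    delta_z = n0                                   # start from the initial value N_0
    mask = np.zeros(len(target_z), dtype=bool)
    for _ in range(max_iters):
        mask = (target_z >= z1 - delta_z) & (target_z <= z1 + delta_z)
        if mask.sum() > required:                  # condition of Math Figure 2
            break
        delta_z += c                               # widen the band by the unit C and retry
    return delta_z, mask
```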
  • A weight W 1i , which has a larger value for data closer to the previous head center point (X 1 , Y 1 ), is calculated and applied to the head observation data (S28). For subsequent new head observation data as well, the weight is calculated on the basis of the previous head center point (X 1 , Y 1 ) and applied.
  • Upon calculation of the weight W 1i based on the previous head center point (X 1 , Y 1 ) for the head observation data as described above, the weight W 1i is applied to the head observation data (S28) and the center coordinates of the weighted data are calculated (S29).
  • the center coordinates (X 2 , Y 2 ) of the data to which the weight W 1i has been applied can be calculated according to Math Figure 4.
  • N denotes N HD1 of Math Figure 2
  • W 1i denotes the weight with respect to i-th head observation data according to Math Figure 3.
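  • The weighted-center calculation of Math Figures 3 and 4 can be sketched as below. The exact weight function of Math Figure 3 is not reproduced in the text, so a Gaussian fall-off with distance from the previous head center is used here purely as a placeholder; the weighted mean follows the description of Math Figure 4.

```python
import numpy as np

def weighted_center(obs_xy, prev_center_xy, sigma=0.10):
    """obs_xy: (N, 2) array of (x, y) positions of the head observation data.
    prev_center_xy: previous head center (X1, Y1).
    Returns the weighted center coordinates, e.g. (X2, Y2)."""
    d2 = np.sum((obs_xy - prev_center_xy) ** 2, axis=1)
    # placeholder for Math Figure 3: larger weight for data closer to (X1, Y1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    # weighted mean of the observation data, in the spirit of Math Figure 4
    return (w[:, None] * obs_xy).sum(axis=0) / w.sum()
```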
  • the target region is updated on the basis of the calculated new center coordinates, the observation depth range is recalculated for data included in the updated target region and the weight is recalculated and applied.
  • Specifically, the target region is updated based on the calculated center coordinates (X 2 , Y 2 ) (S31), and the process of determining a new observation depth range [Z 1 - Δz, Z 1 + Δz] for the data included in the updated target region is performed (S32, S33 and S34).
  • Here, Δz is again replaced by the initial value N 0 in the observation depth range [Z 1 - Δz, Z 1 + Δz] and increased by the predetermined increment C until the number N HD2 of pieces of head observation data exceeds the area of the circle having the head radius R H .
  • N HD2 in step S33 represents the number of pieces of new head observation data extracted according to the update of the observation depth range for the data included in the target region updated based on the center coordinates (X 2 , Y 2 ).
  • Data included in the determined observation depth range, that is, head observation data, is extracted (S35).
  • a weight W 2i which has a larger value for data closer to the previous head center point (X 1 , Y 1 ), is calculated and applied to the head observation data (S36).
  • the weight W 2i is represented by Math Figure 6.
  • the weight W 2i is applied to the head observation data (S36) and center coordinates (X 3 , Y 3 ) of the data to which the weight W 2i has been applied are calculated (S37).
  • The center coordinates (X 3 , Y 3 ) of the data to which the weight W 2i has been applied can be calculated in the same manner as the center coordinates (X 2 , Y 2 ) according to Math Figure 4, and are represented by Math Figure 7.
  • N corresponds to N HD2 in Math Figure 5 and W 2i denotes the weight with respect to i-th head observation data according to Math Figure 6.
  • The center coordinates (X 3 , Y 3 ) are compared with the previously calculated center coordinates (X 2 , Y 2 ) to determine whether the center coordinates have changed (S38). That is, it is determined whether (X 2 + Δx, Y 2 + Δy) equals (X 3 , Y 3 ).
  • Δx and Δy represent fine variations, which may be 0 or values larger than 0; values determined to be desirable through a plurality of tests may be applied as Δx and Δy.
  • When the center coordinates are determined not to have changed, the center coordinates (X 3 , Y 3 ) are set as the center point of the head region of the user in the depth image of the current frame, and the head radius R H is applied to that center point to detect the head region.
  • When step S38 results in (X 2 + Δx, Y 2 + Δy) ≠ (X 3 , Y 3 ), the target region is updated on the basis of the new center coordinates (X 3 , Y 3 ), the observation depth range [Z 1 - Δz, Z 1 + Δz] is redetermined based on the new center coordinates (X 3 , Y 3 ), and a weight W 3i based on the previous head center point (X 1 , Y 1 ) is calculated and applied to the head observation data included in the updated observation depth range so as to calculate new center coordinates (X 4 , Y 4 ) (the repeated steps are omitted and represented by " ... " in FIG. 10).
  • In this manner, the target region is set, the observation depth range is set, and the weight W mi is updated based on the previous head center point (X 1 , Y 1 ) (S46m).
  • When the current center coordinates substantially correspond to the previous center coordinates, the current center coordinates are set as the head center point and the head radius R H is applied to the head center point to detect the head region (S49m).
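  • Putting the above steps together, the subsequent-frame head detection can be sketched as the iterative loop below, which re-uses observation_depth_range() and weighted_center() from the earlier sketches. The target-region size, the convergence tolerance and the iteration cap are assumptions; as in the description, the weights are always computed with respect to the previous head center (X 1 , Y 1 ) and the observation depth range is always centered on Z 1 .

```python
import numpy as np

def track_head(user_points, prev_center, head_radius, tol=1e-3, max_updates=20):
    """Subsequent-frame head detection based on the previous head center
    prev_center = (X1, Y1, Z1) and the previously calculated head radius R_H."""
    x1, y1, z1 = prev_center
    target_radius = 1.5 * head_radius              # target region slightly larger than R_H
    center_xy = np.array([x1, y1])
    for _ in range(max_updates):
        # target region: circle of target_radius around the current centre estimate
        d_xy = np.linalg.norm(user_points[:, :2] - center_xy, axis=1)
        target = user_points[d_xy <= target_radius]
        if len(target) == 0:
            break
        # observation depth range around Z1 of the previous head centre
        _, mask = observation_depth_range(target[:, 2], z1, head_radius)
        obs = target[mask]
        if len(obs) == 0:
            break
        # weight always computed with respect to the previous head centre (X1, Y1)
        new_xy = weighted_center(obs[:, :2], np.array([x1, y1]))
        if np.all(np.abs(new_xy - center_xy) <= tol):
            center_xy = new_xy                     # centre no longer moves: converged
            break
        center_xy = new_xy                         # otherwise update and repeat
    # detected head region: circle of radius R_H around the converged centre
    return center_xy, head_radius
```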
  • FIG. 11 illustrates the process of FIGS. 8, 9 and 10 as a picture.
  • reference numerals 11a and 11b respectively denote a user head part in a depth image of a previous frame and a user head part in a depth image of the current frame
  • HR1 and HR2 respectively denote a head region in the previous frame and a head region in the current frame.
  • an observation depth range is determined using Z 1 of the previous head center point P H1 (X 1 , Y1, Z 1 ) calculated with respect to the previous frame, the weight W 1i is calculated and applied for head observation data included in the observation depth range to calculate a center coordinate point P2 (X 2 , Y 2 ), the target region is updated based on the center coordinate point P2 (X 2 , Y 2 ), and the head observation data and the weight are recalculated based on the head center point P H1 (X 1 , Y 1 , Z 1 ) to calculate a center coordinate point P 3 (X 3 , Y 3 ) of data to which weight W 2i has been applied. Update and recalculation are repeated in this manner.
  • When P m+1 (X m+1 , Y m+1 ) has not changed from the previous center coordinate point, P m+1 (X m+1 , Y m+1 ) is set as the head center coordinates of the head region of the current frame.
  • the head part of the user can be correctly found so as to detect the correct head region.
  • FIG. 13 is a flowchart illustrating the process of detecting the lower body region of the user through the method for detecting a human body according to another embodiment of the present invention
  • FIG. 14 is a view illustrating a lower body modeling section for lower body region detection
  • FIG. 15 shows the mean and covariance of data included in the lower body modeling section
  • FIGS. 16 and 17 are views for illustrating the principle of extracting effective lower body data according to the statistical analysis method shown in FIG. 15.
  • Detection of the lower body region of the user through the method for detecting a human body according to the present invention is performed as follows: a section of the user part in the depth image that is determined to be the lower body of the user is set as a lower body modeling section and the data corresponding to this section is extracted as lower body modeling data; a section obtained by extending the lower body modeling section by a predetermined range is set as a lower body investigation section; data that is statistically close to the lower body modeling data, from among the data included in the lower body investigation section, is extracted as effective lower body data; and the lower body modeling data and the effective lower body data are together detected as the lower body region.
  • a depth image is acquired by the depth image acquisition unit set in front of the user (S51) and a user part is extracted from the acquired depth image (S52).
  • Coordinate information (X G , Y G , Z G ) of a weight center Pwc of the user part is calculated (S53).
  • the weight center can be calculated as the mean of coordinates of all data constituting the extracted user part.
  • S54 in the flowchart of FIG. 13 is a step of extracting the aforementioned lower body modeling data, which is described with reference to FIG. 14.
  • FIG. 14 is a virtual representation of distribution of y components and z components of data constituting the user part extracted from the depth image on the y-z plane and shows a side 10S of the user.
  • The lower body region detection process through the method for detecting a human body according to the present invention uses the data distribution on the y-z plane, according to the y and z components of the three-dimensional coordinates (x, y, z) of the data constituting the user part, because this distribution makes it easy to extract the characteristics of the leg shape and the lower body part is distinctly discriminated from an arm part in the distribution, as shown in FIG. 14.
  • Y min is a y-coordinate of data corresponding to the lower end of the user part
  • Y G is a y-coordinate of the weight center Pwc.
  • A thigh is close to an arm in many cases, whereas the part below the thighs is easily extracted since it is rarely close to other body parts. Accordingly, the key to detection of the lower body region is to detect the thigh part of the user, because once the thigh part is extracted, all data for the parts below it corresponds to the calves.
  • A point (Y G +Y min )/2, which is the middle point between Y G and Y min , is extracted, as shown in FIG. 14.
  • (Y G +Y min )/2 is extracted as a point near the knees of the user.
  • However, (Y G +Y min )/2 does not necessarily correspond exactly to the knees of the user.
  • That is, the y-coordinate range of the lower body region to be detected does not need to correspond exactly to a specific part of the user's lower body; the lower body region is instead detected through a statistical method using the mean and covariance of the data included in a lower body modeling section that contains the lower body region.
  • the lower body modeling section may be preset to a section between the position of the y component of the weight center, that is, Y G , and the mean position of the lower end Y min of the user part and the weight center Y G , that is, (Y G +Y min )/2. That is, the section [Y G , (Y G +Y min )/2] is preset to the lower body modeling section and data included in the lower body modeling section is extracted as lower body modeling data (data corresponding to a region LR of FIG. 14).
  • However, the preset lower body modeling section does not always contain the entire lower body region of the user in the depth images of all frames, and data outside the lower body modeling section may also belong to the lower body region.
  • In consideration of this possibility, the lower body modeling section is set to as narrow a region as possible, and the aforementioned lower body modeling section may therefore be set to a further reduced section.
  • a section which is obtained by extending the lower body modeling section [Y G , (Y G +Y min )/2] n times on the y-z plane, can be preset as the lower body investigation section.
  • n is a positive number and the lower body investigation section is defined as [Y Lx , Y Lm ].
  • the lower body investigation section [Y Lx , Y Lm ] is represented by Math Figure 8.
  • [Y Lx , Y Lm ] = [ Y m - (Y G - Y m )/2 , Y G + (Y G - Y m )/2 ], where Y m denotes (Y G +Y min )/2.
  • FIG. 15 shows the mean M and covariance CV of the lower body modeling data.
  • the covariance CV is shown as a region which represents the pattern of the lower body modeling data distributed around the mean M. That is, it is assumed that the covariance of the lower body modeling data corresponds to a region A and data corresponding to the lower body investigation section is present in a region B in FIG. 15.
  • The Mahalanobis distance is a measure of how close data is to the mean, taking the distribution (covariance) into account. Referring to FIG. 15, the Mahalanobis distance of point a is small and the Mahalanobis distance of point b is large, since point a is located within the mean distribution (region A) while point b is located outside it (in region B), even though points a and b are positioned at the same Euclidean distance from the mean M.
  • Step S57 may therefore be regarded as a process of extracting data within a predetermined range of the mean M, from among the data included in region B shown in FIG. 15, as the effective lower body data.
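  • The Mahalanobis distance used here is the standard definition d(x) = sqrt((x - μ)ᵀ Σ⁻¹ (x - μ)), where μ and Σ are the mean and covariance of the lower body modeling data. A small numpy sketch (with assumed array shapes) follows.

```python
import numpy as np

def mahalanobis_distances(samples, model_data):
    """samples: (N, 2) array of (y, z) points from the lower body investigation section.
    model_data: (M, 2) array of lower body modeling data.
    Returns the Mahalanobis distance of every sample from the modeling data."""
    mean = model_data.mean(axis=0)
    cov = np.cov(model_data, rowvar=False)
    cov_inv = np.linalg.inv(cov)
    diff = samples - mean
    # d(x) = sqrt((x - mean)^T cov^-1 (x - mean)) evaluated per sample
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))
```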
  • FIG. 16 shows an exemplary histogram of Mahalanobis distance values of the data included in the lower body investigation section [Y Lx , Y Lm ], which are calculated on the basis of the mean and covariance of the lower body modeling data. As shown in FIG. 16, data is closer to the lower body modeling section as the Mahalanobis distance value thereof decreases and becomes distant from the lower body modeling section as the Mahalanobis distance value thereof increases.
  • The quantity of data close to the lower body modeling data, that is, data having small Mahalanobis distance values, is large, and the quantity of data at a distance from the lower body modeling data is small.
  • However, data corresponding to another body part, such as an arm or the abdomen, forms its own data group, and thus the quantity of data having large Mahalanobis distance values may increase. (For this reason, the quantity of data increases slightly at the right end of FIG. 16.)
  • a section having a very small quantity of data is present between a section close to the lower body modeling data and a section including data corresponding to an arm or the abdomen, which is spaced apart from the lower body modeling data.
  • curve fitting may be applied to the histogram distribution of the quantity of data with respect to Mahalanobis distance values, as shown in FIG. 17, to extract a lowest point Dm of the fitted curve.
  • Data corresponding to the extracted lowest point Dm may be regarded as data present at a boundary close to the lower body modeling data from among data included in the lower body investigation section.
  • the range from 0 to Dm can be set to the effective distance value range and data belonging to the effective distance value range can be extracted as the effective lower body data.
  • the effective distance value range set in this manner may be equally applied to the lower body region detected from a depth image of a subsequent frame.
  • the effective distance value range may be calculated per frame in the aforementioned manner and used.
  • the lower body modeling data is extracted from the lower body modeling section [Y G , (Y G +Y min )/2] and the effective distance value range is set within the lower body investigation section [Y Lx , Y Lm ] so as to extract the effective lower body data, and the lower body modeling data and the effective lower body data are detected as the lower body region of the user (S59).
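  • Under the assumptions already stated (an (N, 3) user-part array with y increasing upward), the lower body detection just described might be sketched as follows, re-using mahalanobis_distances() from above. The bin count, the polynomial degree of the curve fitting and the way the lowest point Dm is picked are illustrative simplifications.

```python
import numpy as np

def detect_lower_body(user_points):
    """Returns the data of the user part detected as the lower body region."""
    y = user_points[:, 1]
    yz = user_points[:, 1:3]                      # work on the y-z plane (FIG. 14)
    y_g = y.mean()                                # y component of the weight center
    y_min = y.min()                               # y of the lower end of the user part
    y_m = (y_g + y_min) / 2.0

    # lower body modeling section [(Y_G+Y_min)/2, Y_G] and its data
    model_mask = (y >= y_m) & (y <= y_g)
    model = yz[model_mask]

    # lower body investigation section per Math Figure 8 (modeling section extended)
    y_low, y_high = y_m - (y_g - y_m) / 2.0, y_g + (y_g - y_m) / 2.0
    invest_mask = (y >= y_low) & (y <= y_high) & ~model_mask
    invest = yz[invest_mask]

    # Mahalanobis distances of the investigation data from the modeling data
    d = mahalanobis_distances(invest, model)

    # histogram of distances and a simple curve fit; its lowest point approximates Dm
    hist, edges = np.histogram(d, bins=50)
    centers = 0.5 * (edges[:-1] + edges[1:])
    fitted = np.polyval(np.polyfit(centers, hist, deg=4), centers)
    d_m = centers[np.argmin(fitted)]              # a real implementation would look for
                                                  # the interior valley of FIG. 17
    effective_mask = d <= d_m                     # effective lower body data
    return np.vstack([user_points[model_mask],
                      user_points[invest_mask][effective_mask]])
```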
  • FIG. 18 is a flowchart illustrating the process of detecting the torso region of the user through the method for detecting a human body according to the present invention
  • FIG. 19 is a view illustrating a process of extracting an upper center point and a lower center point in order to detect the torso region of the user from a user part extracted from a depth image
  • FIG. 20 is a view illustrating a process of detecting z-direction components of the upper center point and the lower center point shown in FIG. 19, extracting a torso center line connecting the two points and detecting the torso region based on the torso center line.
  • FIG. 20 shows the side 10S of the user on the y-z plane, which is related to the user part in the depth image, for convenience of description.
  • a depth image is acquired by the depth image acquisition unit set in front of the user (S61) and a user part is extracted from the acquired depth image (S62).
  • A head region is detected from the extracted user part (S63). While the head region may be detected according to the methods illustrated in FIGS. 2 to 12, the head region detection method is not limited thereto and can be any method of detecting a user head region from a depth image.
  • Coordinates (X G , Y G , Z G ) of a weight center Pwc of the user part are calculated (S64).
  • the weight center is calculated as the mean of coordinates of all data constituting the extracted user part.
  • a region of a predetermined size is set on the basis of data located at the lower end of the detected head region and the mean position of data included in the set region is extracted (S65).
  • the mean position is a mean position with respect to x and y components and coordinates thereof are referred to as (x a , y a ).
  • A region of a predetermined size, that is, a region for upper center point extraction, is set on the basis of y a of the mean position, and the value Z UC of the depth direction component most predominantly distributed in that region is extracted for the upper center point (S66).
  • Likewise, a region of a predetermined size, that is, a region for lower center point extraction, is set on the basis of Y G of the extracted weight center, and the value Z LC of the depth direction component most predominantly distributed in that region is extracted for the lower center point (S67).
  • A region R U of a predetermined size, which includes the data located at the lower end of the detected head region HR, is set, and the mean position (x a , y a ) of the data included in the region R U is extracted.
  • the mean position (x a , y a ) corresponds to x- and y-coordinates of the upper center point.
  • the value Z UC of the most predominantly distributed depth direction component is extracted and determined as a z-coordinate of the upper center point P TU , as shown in FIG. 20.
  • x and y components of the weight center, X G and Y G are determined as x- and y-coordinates of the lower center point.
  • the value Z LC of the most predominantly distributed depth direction component is extracted and determined as a z-coordinate of the lower center point P TL , as shown in FIG. 20.
  • the upper center point P TU and the lower center point P TL are respectively extracted as described above and a torso center line LC which connects the upper center point P TU and the lower center point P TL is extracted, as shown in FIG. 20 (S68 of FIG. 18).
  • a maximum x value X max and a minimum x value X min are obtained for the data included in the region R z2 for lower center point extraction and the value (X max - X min )/2 is calculated as a radius of the torso, as shown in FIG. 19 (S69 of FIG. 18).
  • the torso region of the user can be detected using the extracted torso center line and radius of the torso.
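  • The torso detection of steps S61 to S69 might be sketched as below. The band sizes around the head bottom and the weight center, the histogram bin width and the assumption that y increases upward are placeholders; the detected head region is assumed to be available already.

```python
import numpy as np

def mode_depth(z, bin_width=0.01):
    """Most predominantly distributed depth value of a set of z samples."""
    bins = np.arange(z.min(), z.max() + bin_width, bin_width)
    if len(bins) < 2:                             # (nearly) constant depth
        return float(z.mean())
    hist, edges = np.histogram(z, bins=bins)
    return float(edges[np.argmax(hist)])

def detect_torso(user_points, head_bottom_y, band=0.10):
    """user_points: (N, 3) array (x, y, z); head_bottom_y: y of the lower end of the
    detected head region. Returns a boolean mask of the torso region."""
    x, y, z = user_points.T
    y_g, x_g = y.mean(), x.mean()                 # weight center components Y_G, X_G

    # upper center point P_TU: band just below the head region
    up = (y <= head_bottom_y) & (y >= head_bottom_y - band)
    p_tu = np.array([x[up].mean(), y[up].mean(), mode_depth(z[up])])   # (x_a, y_a, Z_UC)

    # lower center point P_TL: band around the weight center (region R_Z2)
    low = np.abs(y - y_g) <= band / 2.0
    p_tl = np.array([x_g, y_g, mode_depth(z[low])])                    # (X_G, Y_G, Z_LC)

    torso_radius = (x[low].max() - x[low].min()) / 2.0                 # (X_max - X_min) / 2

    # keep data within torso_radius of the torso center line connecting P_TU and P_TL
    line = p_tl - p_tu
    t = np.clip(((user_points - p_tu) @ line) / (line @ line), 0.0, 1.0)
    nearest = p_tu + t[:, None] * line
    return np.linalg.norm(user_points - nearest, axis=1) <= torso_radius
```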
  • Rather than detecting the arm region directly, the method for detecting a human body according to the present invention detects all body parts other than the arm region and then takes the remaining part as the arm region.
  • a depth image is acquired by the depth image acquisition unit set in front of the user (S71), a user part is extracted from the acquired depth image (S72), a head region of the user is detected from the user part and data corresponding to the detected head region is deleted (S73).
  • A lower body region of the user is detected from the user part and data corresponding to the detected lower body region is deleted (S74).
  • A torso region of the user is detected from the user part and data corresponding to the detected torso region is deleted (S75).
  • the remaining data is detected as the arm region of the user (S76).
  • Noise may be left even after the head region, lower body region and torso region are detected from the user part and data corresponding thereto is deleted.
  • In this case, the arm region may be detected by calculating the mean and covariance of all the remaining data, calculating the Mahalanobis distance value of each piece of data, and extracting the data having distance values close to the mean.
  • the mean, covariance and Mahalanobis distance values of data have been described with reference to FIG. 15 and thus description thereof is omitted.
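  • The arm detection of steps S71 to S76 reduces to removing the already detected regions and pruning noise, and might be sketched as follows. The boolean masks are assumed to come from the head, lower body and torso sketches above, and the noise threshold is an illustrative value.

```python
import numpy as np

def detect_arms(user_points, head_mask, lower_mask, torso_mask, noise_thresh=3.0):
    """Deletes the data already detected as head, lower body and torso (S73-S75)
    and keeps the rest as the arm region (S76), pruning outliers with a
    Mahalanobis-distance test on the remaining data."""
    remaining = user_points[~(head_mask | lower_mask | torso_mask)]
    if len(remaining) < 4:
        return remaining
    mean = remaining.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(remaining, rowvar=False))
    diff = remaining - mean
    d = np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))
    return remaining[d <= noise_thresh]           # discard stray noise far from the mean
```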
  • As described above, the method for detecting a human body according to the present invention can correctly detect body parts of a user, such as the head, lower body, torso and arm, not only when the user appears in an unpredictable shape in the depth image owing to personal characteristics or takes a posture that does not show all body parts, but also when the user makes a fast motion such as a golf swing, and can detect a body part over a plurality of frames so as to track the detected body part.
  • the method for detecting a human body through depth information analysis and the apparatus for analyzing depth information for user body detection according to the present invention are applicable to industry fields related to sports, such as golf swing motion analysis, industry fields related to sports instruction, screen golf industry for allowing users to enjoy virtual golf games through golf simulation based on virtual reality, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Social Psychology (AREA)
  • Databases & Information Systems (AREA)
  • Psychiatry (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Disclosed are a method for detecting a human body through depth information analysis and an apparatus for analyzing depth information for user body detection which, when detecting a body part of a user using a depth image, do not require restrictive conditions such as prior learning, the use of a predefined body shape, or normal body modeling being possible only when specific data is completely detected, and which enable correct detection of a body part of the user, such as the head, lower body, torso or arm, not only when the user appears in an unpredictable shape in a depth image according to the user's personal characteristics or takes a posture that does not show all body parts, but also when the user makes a fast motion such as a golf swing.
PCT/KR2014/011501 2013-11-27 2014-11-27 Procédé de détection de corps humain par l'analyse d'informations de profondeur et appareil d'analyse d'informations de profondeur pour la détection de corps d'utilisateur Ceased WO2015080498A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2013-0145606 2013-11-27
KR1020130145606A KR101394274B1 (ko) 2013-11-27 2013-11-27 Method for detecting body through depth information analysis and apparatus for analyzing depth information for user body detection

Publications (1)

Publication Number Publication Date
WO2015080498A1 true WO2015080498A1 (fr) 2015-06-04

Family

ID=50893903

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2014/011501 Ceased WO2015080498A1 (fr) 2013-11-27 2014-11-27 Procédé de détection de corps humain par l'analyse d'informations de profondeur et appareil d'analyse d'informations de profondeur pour la détection de corps d'utilisateur

Country Status (2)

Country Link
KR (1) KR101394274B1 (fr)
WO (1) WO2015080498A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529399A (zh) * 2016-09-26 2017-03-22 深圳奥比中光科技有限公司 人体信息采集方法、装置及系统
WO2022080678A1 (fr) * 2020-10-15 2022-04-21 주식회사 모아이스 Procédé, système et support d'enregistrement lisible par ordinateur non transitoire permettant l'estimation d'informations sur une posture d'élan de golf

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109887034B (zh) * 2019-03-13 2022-11-22 安徽大学 一种基于深度图像的人体定位方法
KR102353637B1 (ko) 2019-03-17 2022-01-21 이상국 골프 동작 분석 방법 및 장치
CN113496245B (zh) * 2020-06-23 2024-12-20 海信集团控股股份有限公司 智能冰箱及识别存取食材的方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001184491A (ja) * 1999-12-27 2001-07-06 Hitachi Medical Corp 三次元画像表示装置
JP2006503379A (ja) * 2002-10-15 2006-01-26 ユニバーシティ・オブ・サザン・カリフォルニア 拡張仮想環境
JP2006227774A (ja) * 2005-02-16 2006-08-31 Hitachi Medical Corp 画像表示方法
JP2010533338A (ja) * 2007-07-12 2010-10-21 トムソン ライセンシング 2次元画像からの3次元オブジェクト認識システム及び方法

Also Published As

Publication number Publication date
KR101394274B1 (ko) 2014-05-13

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14865141

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14865141

Country of ref document: EP

Kind code of ref document: A1