
WO2022139200A1 - Same object identification device and identification method, based on skeleton analysis of continuous image frames - Google Patents

Same object identification device and identification method, based on skeleton analysis of continuous image frames

Info

Publication number
WO2022139200A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate
skeleton
coordinates
objects
image frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2021/017369
Other languages
French (fr)
Korean (ko)
Inventor
양동석
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neofect Co Ltd
Original Assignee
Neofect Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neofect Co Ltd filed Critical Neofect Co Ltd
Publication of WO2022139200A1 publication Critical patent/WO2022139200A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20036 Morphological image processing
    • G06T2207/20044 Skeletonization; Medial axis transform
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images
    • G06V2201/033 Recognition of patterns in medical or anatomical images of skeletal patterns

Definitions

  • the present invention relates to an apparatus and method for identifying the same object based on skeleton analysis of successive image frames.
  • An object of the present invention is to provide an apparatus and method capable of solving the above-described problems.
  • An object of the present invention is to provide an apparatus and method for identifying the same object based on skeleton analysis for successive image frames, which classifies human objects through analysis using the skeleton coordinates of the neck and head.
  • the same object identification method based on skeleton analysis of consecutive image frames is performed by a computer and, as a method of identifying the same object in consecutive image frames, includes: extracting a skeleton coordinate set of each object included in successive first and second image frames; selecting a preset reference coordinate from the skeleton coordinate set and a relative coordinate connected to the reference coordinate; and determining the identity of objects included in the first and second image frames based on the reference coordinates and the relative coordinates.
  • the extracting may include: extracting, from a plurality of first objects included in a first image frame, a plurality of first skeleton coordinate sets each corresponding to each of the plurality of first objects; and extracting, from a plurality of second objects included in a second image frame successive to the first image frame, a plurality of second skeleton coordinate sets each corresponding to each of the plurality of second objects.
  • the selecting may include: selecting a first reference coordinate and a first relative coordinate connected to the first reference coordinate for each of the plurality of first skeleton coordinate sets; and selecting a second reference coordinate and a second relative coordinate connected to the second reference coordinate for each of the plurality of second skeleton coordinate sets.
  • the determining may include: selecting any one of the plurality of first objects as a first comparison object; selecting any one of the plurality of second objects as a second comparison object; comparing a first length between the first reference coordinate of the first comparison object and the second reference coordinate of the second comparison object with a second length between the second reference coordinate and the second relative coordinate of the second comparison object; determining that the first comparison object and the second comparison object are the same object when the first length is smaller than the second length; and determining that the first comparison object and the second comparison object are different objects when the first length is not smaller than the second length.
  • the reference coordinate may be a coordinate corresponding to the neck part of the skeleton coordinate set, and the relative coordinate may be a coordinate corresponding to the head part connected to the neck part of the skeleton coordinate set.
  • the human object identification process can be made lightweight while maintaining identification accuracy.
  • human object identification is performed taking into account that the distance between skeleton coordinates increases or decreases according to the distance between the image sensing device and the human object, so human objects can be accurately classified regardless of the distance between the image sensing device and the human object.
  • FIG. 1 is a conceptual diagram illustrating an object identification apparatus according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating the skeleton coordinate sets obtained as the frame changes.
  • FIG. 3 is a diagram illustrating the skeleton coordinate sets of successive frames overlapped on each other.
  • FIGS. 4 and 5 are diagrams illustrating a process of identifying the identity between objects using reference coordinates and relative coordinates.
  • FIG. 6 is a flowchart illustrating a process of an object identification method according to an embodiment of the present invention.
  • the object identification apparatus 10 determines the identity of at least one human object 1 detected through the image sensor 11 .
  • the image sensor 11 detects an image file including the human object 1 at a preset time interval.
  • the image sensor 11 continuously detects a plurality of image frames at preset time intervals.
  • the image sensor 11 is communicatively connected to the object identification apparatus 10, and transmits the plurality of detected image frames to the object identification apparatus 10.
  • the image sensor 11 may be included as a component of the object identification device 10 .
  • the object identification apparatus 10 extracts a skeleton coordinate set for each object included in the plurality of received image frames.
  • a skeleton coordinate set extracted for each object is illustrated as an example.
  • for illustration, only two objects are included in each image frame, but the present invention is not limited thereto, and two or more objects may be included.
  • the skeleton coordinate sets of the first objects are extracted from the first image frame (Frame 1), and the skeleton coordinate sets of the second objects are likewise extracted from the second image frame (Frame 2), which is consecutive to the first image frame.
  • the skeleton coordinate set consists of coordinates corresponding to the head, neck, shoulder, elbow, wrist, waist, pelvis, knee, and ankle, but is not limited thereto.
  • first objects: Object 1-a, Object 1-b
  • second objects: Object 2-a, Object 2-b
  • an additional identity identification process is required to distinguish which objects are objects having the same identity in the first image frame and the second image frame.
  • the object identification device 10 selects a reference coordinate and a relative coordinate connected to the reference coordinate from the skeleton coordinate sets of the first object and the second object.
  • the coordinates with the smallest distance change according to the change of the frame among the coordinates of the skeleton coordinate set may be selected as the reference coordinates.
  • coordinates corresponding to the neck may be selected as the reference coordinates.
  • a coordinate connected to the reference coordinate may be selected as the relative coordinate; for example, the coordinate corresponding to the head, which is connected to the neck, may be selected as the relative coordinate.
  • the first image frame and the second image frame are shown overlapping each other; the coordinates corresponding to the neck are selected as the reference coordinates N1 and N2, and the coordinates corresponding to the head are selected as the relative coordinates H1 and H2.
  • the object identification apparatus 10 compares the objects included in the first image frame with the objects included in the second image frame, and determines their identity based on the reference coordinates (N1, N2) and the relative coordinates (H1, H2).
  • the object identification apparatus 10 selects, from among the first objects, the object whose identity is to be determined as the first comparison object.
  • the first object (Object 1-a) disposed on the left side of the drawing is selected as the first comparison object.
  • the object identification apparatus 10 selects, from among the second objects, the object whose identity is to be determined as the second comparison object.
  • the object identification apparatus 10 compares the distance (first length, L1) between the reference coordinate N1 of the first comparison object and the reference coordinate N2 of the second comparison object with the distance (second length, L2) between the reference coordinate N2 and the relative coordinate H2 of the second comparison object.
  • the object identification apparatus 10 determines the two comparison objects to be the same object when the first length L1 is smaller than the second length L2, and to be different objects when the first length L1 is not smaller than the second length L2.
  • the object identification apparatus 10 determines the first comparison object and the second comparison object as objects having the same identity. That is, the first object (Object 1-a) and the second object (Object 2-a) disposed on the left side of the drawing are determined to be the same object.
  • the object identification apparatus 10 compares the distance (first length, L1) between the reference coordinate N1 of the first comparison object and the reference coordinate N2 of the second comparison object with the distance (second length, L2) between the reference coordinate N2 and the relative coordinate H2 of the second comparison object.
  • the object identification apparatus 10 determines the first comparison object and the second comparison object as different objects. That is, the first object (Object 1-a) disposed on the left side and the second object (Object 2-b) disposed on the right side of the drawing are determined not to be the same object.
  • the object identification apparatus 10 selects a first object (Object 1-b) disposed on the right side of the drawing as a first comparison object.
  • the object identification apparatus 10 compares the distance (first length, L1) between the reference coordinate N1 of the first comparison object and the reference coordinate N2 of the second comparison object with the distance (second length, L2) between the reference coordinate N2 and the relative coordinate H2 of the second comparison object.
  • the object identification apparatus 10 determines the first comparison object and the second comparison object as different objects. That is, the first object (Object 1-b) disposed on the right side and the second object (Object 2-a) disposed on the left side of the drawing are determined to be different objects.
  • the object identification apparatus 10 compares the distance (first length, L1) between the reference coordinate N1 of the first comparison object and the reference coordinate N2 of the second comparison object with the distance (second length, L2) between the reference coordinate N2 and the relative coordinate H2 of the second comparison object.
  • the object identification apparatus 10 determines the first comparison object and the second comparison object as objects having the same identity. That is, the first object (Object 1-b) and the second object (Object 2-b), both disposed on the right side of the drawing, are determined to be the same object.
  • when the image frame changes, the part with the least movement is the skeleton coordinate corresponding to the neck, and by using this as the reference coordinate, the identity between objects can be inferred.
  • since the movement distance of the reference coordinate between image frames varies according to the distance between the object and the image sensor 11, a misjudgment may occur if identity is determined by considering only the reference coordinate.
  • when the object is close to the image sensor 11, the movement distance of the reference coordinate between image frames increases, and when the object is far from the image sensor 11, the movement distance of the reference coordinate between image frames decreases.
  • the object identification apparatus 10 additionally considers the distance between the reference coordinates and the relative coordinates.
  • the distance between the reference coordinate and the relative coordinate is a fixed value regardless of frame changes, and the movement distance of the reference coordinate between successive image frames cannot be greater than the distance between the reference coordinate and the relative coordinate; using this fact, the identity of objects can be accurately determined regardless of their distance to the image sensor 11.
  • when the reference coordinate is designated as the coordinate corresponding to the neck and the relative coordinate as the coordinate corresponding to the head, the movement distance of the reference coordinate between successive image frames is shorter than the distance between the reference coordinate and the relative coordinate.
  • if the time interval between image frames is excessively long, however, the movement distance of the reference coordinate between successive image frames can become greater than the distance between the reference coordinate and the relative coordinate.
  • the relative coordinates are preferably selected in consideration of the time interval between image frames.
  • the reference coordinates may be selected as coordinates corresponding to the neck, and the relative coordinates may be selected as coordinates corresponding to the waist.
  • the object identification process can be made lightweight.
  • the object identification apparatus 10 may include a communication unit for transmitting and receiving information, a control unit for computing information, and a memory (or database) for storing information.
  • in hardware, the control unit may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and other electrical units for performing functions.
  • the software code may be implemented as a software application written in a suitable programming language.
  • the software code may be stored in the memory and executed by the control unit.
  • the communication unit may be implemented through at least one of a wired communication module, a wireless communication module, and a short-range communication module.
  • the wireless Internet module refers to a module for wireless Internet access, and may be built-in or external to each device.
  • wireless Internet technologies such as WLAN (Wireless LAN, Wi-Fi), Wibro (Wireless Broadband), Wimax (World Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), LTE (Long Term Evolution), and LTE-A (Long Term Evolution-Advanced) may be used.
  • the memory may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, and an optical disk.
  • the object identification method (S1) includes: extracting a skeleton coordinate set for each object included in successive first and second image frames (S10); selecting preset reference coordinates and relative coordinates connected to the reference coordinates (S20); and determining the identity of the objects included in the first and second image frames based on the reference coordinates and the relative coordinates (S30).
  • the object identification method S1 according to the present embodiment is performed by the above-described object identification apparatus 10, and the object identification apparatus 10 may be implemented by a computer.
  • the computer extracts a skeleton coordinate set for each object included in successive first and second image frames (S10).
  • a plurality of first skeleton coordinate sets, each corresponding to one of the plurality of first objects, are extracted from the plurality of first objects included in the first image frame, and a plurality of second skeleton coordinate sets, each corresponding to one of the plurality of second objects, are extracted from the plurality of second objects included in a second image frame consecutive to the first image frame.
  • the computer selects a preset reference coordinate from the skeleton coordinate set and a relative coordinate connected to the reference coordinate (S20).
  • a first reference coordinate and a first relative coordinate connected to the first reference coordinate are selected for each of the plurality of first skeleton coordinate sets, and a second reference coordinate and a second relative coordinate connected to the second reference coordinate are selected for each of the plurality of second skeleton coordinate sets.
  • the identity of the objects included in the first and second image frames is determined based on the reference coordinates and the relative coordinates (S30).
  • Any one of the plurality of first objects is selected as the first comparison object, and any one of the plurality of second objects is selected as the second comparison object.
  • the first length between the first reference coordinate of the first comparison object and the second reference coordinate of the second comparison object is compared with the second length between the second reference coordinate and the second relative coordinate of the second comparison object.
  • when the first length is smaller than the second length, the first comparison object and the second comparison object are determined to be the same object, and when the first length is not smaller than the second length, the first comparison object and the second comparison object are determined to be different objects.
  • the above-described identity determination process may be repeatedly performed until the comparison between all of the plurality of first objects and the plurality of second objects is completed, as illustrated in the sketch following this list.
  • the method according to an embodiment of the present invention described above may be implemented as a program (or application) to be executed in combination with a server, which is hardware, and stored in a medium.
  • in order for the computer to read the program and execute the methods implemented as a program, the above-described program may include code written in a computer language, such as C, C++, JAVA, or machine language, that a processor (CPU) of the computer can read through a device interface of the computer. Such code may include functional code related to functions defining the operations necessary for executing the methods, and control code related to the execution procedure needed for the processor of the computer to execute those functions in a predetermined order.
  • such code may further include additional information necessary for the processor of the computer to execute the functions, or memory-reference code indicating which location (address) in the internal or external memory of the computer should be referenced.
  • when the processor of the computer needs to communicate with another remote computer or server in order to execute the functions, the code may further include communication-related code specifying, using the communication module of the computer, how to communicate with the other computer or server and what information or media to transmit and receive during communication.
  • the storage medium is not a medium that stores data for a short moment, such as a register, a cache, a memory, etc., but a medium that stores data semi-permanently and can be read by a device.
  • examples of the storage medium include, but are not limited to, ROM, RAM, CD-ROM, magnetic tape, floppy disk, and an optical data storage device.
  • the program may be stored in various recording media on various servers accessible by the computer or in various recording media on the computer of the user.
  • the medium may be distributed in a computer system connected to a network, and a computer-readable code may be stored in a distributed manner.
  • a software module may reside in random access memory (RAM), read only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other type of computer-readable recording medium well known in the art to which the present invention pertains.
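
For illustration, the overall flow summarized above (extract the skeleton coordinate sets, select the neck as the reference coordinate and the head as the relative coordinate, then compare each pair of objects) can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions, not the disclosed implementation: the pose estimator that produces the coordinate sets is assumed to exist elsewhere, and the joint names and function names (distance, same_object, match_objects) are introduced here for clarity only.

    import math

    # Assumed input: one skeleton coordinate set per object, as a dict
    # mapping joint names to (x, y) image coordinates produced by some
    # pose estimator (not shown here).

    def distance(p, q):
        # Euclidean distance between two (x, y) coordinates.
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def same_object(first_set, second_set, ref="neck", rel="head"):
        # Decision rule of the disclosure: the first length L1 (reference
        # coordinate of the first object to reference coordinate of the
        # second object) must be smaller than the second length L2
        # (reference coordinate to relative coordinate of the second object).
        l1 = distance(first_set[ref], second_set[ref])
        l2 = distance(second_set[ref], second_set[rel])
        return l1 < l2

    def match_objects(first_sets, second_sets):
        # Repeat the comparison until every first object has been compared
        # with the second objects (steps S10 to S30 plus the repetition step).
        matches = {}
        for i, first_set in enumerate(first_sets):
            for j, second_set in enumerate(second_sets):
                if same_object(first_set, second_set):
                    matches[i] = j  # first object i and second object j match
                    break
        return matches

The disclosure does not state what should happen when no second object satisfies the rule for a given first object, so that case is simply left unmatched in this sketch.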

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A same object identification method based on a skeleton analysis of continuous image frames, according to one embodiment of the present invention, is a method which is carried out by a computer and is for identifying the same object in continuous image frames, the method comprising the steps of: extracting respective skeleton coordinate sets for objects which are respectively included in first and second continuous image frames; selecting, from the skeleton coordinate sets, preset reference coordinates, and relative coordinates which are connected to the reference coordinates; and, on the basis of the reference coordinates and the relative coordinates, determining the sameness of the objects respectively included in the first and second image frames.

Description

μ—°μ†λ˜λŠ” 이미지 ν”„λ ˆμž„μ— λŒ€ν•œ μŠ€μΌˆλ ˆν†€ 뢄석에 κΈ°μ΄ˆν•œ 동일객체 식별μž₯치 및 식별방법Same object identification device and identification method based on skeleton analysis for consecutive image frames

λ³Έ 발λͺ…은 μ—°μ†λ˜λŠ” 이미지 ν”„λ ˆμž„μ— λŒ€ν•œ μŠ€μΌˆλ ˆν†€ 뢄석에 κΈ°μ΄ˆν•œ 동일객체 식별μž₯치 및 식별방법에 κ΄€ν•œ 것이닀. The present invention relates to an apparatus and method for identifying the same object based on skeleton analysis of successive image frames.

μ˜μƒμœΌλ‘œλΆ€ν„° 객체λ₯Ό μ‹λ³„ν•˜λŠ” κΈ°μˆ μ€ μž¬ν™œμΉ˜λ£Œ, λ°©λ²” λ“± λ‹€μ–‘ν•œ λΆ„μ•Όμ—μ„œ μ‚¬μš©λ˜κ³  개발되고 μžˆλ‹€. Technology for identifying objects from images is being used and developed in various fields such as rehabilitation treatment and crime prevention.

특히, 인곡지λŠ₯의 λ°œμ „μœΌλ‘œ 인해 μ˜μƒμ—μ„œ 인적 객체λ₯Ό μ‹λ³„ν•˜λŠ” 방법에 λ§Žμ€ λ°œμ „μ΄ μžˆμ—ˆμœΌλ©°, 인적 객체의 외관상 νŠΉμ§•μ„ μ΄μš©ν•œ ν•™μŠ΅μ„ 톡해 객체λ₯Ό μ‹λ³„ν•˜λŠ” 방법, 인적 객체의 μŠ€μΌˆλ ˆν†€ μ’Œν‘œλ₯Ό μΆ”μΆœν•˜μ—¬ 객체λ₯Ό μ‹λ³„ν•˜λŠ” 방법 등이 μ‚¬μš©λ˜κ³  μžˆλ‹€. In particular, due to the development of artificial intelligence, there have been many advances in the method of identifying human objects in images, the method of identifying objects through learning using the appearance features of human objects, and the method of identifying objects by extracting skeleton coordinates of human objects. methods are being used.

λ‹€λ§Œ, μ •ν™•ν•œ 인적 객체의 ꡬ별을 μœ„ν•΄ κ³Όλ„ν•˜κ²Œ μ—°μ‚°λŸ‰μ΄ 증가함에 따라, 인적 객체의 ꡬ뢄에 κ³ μˆ˜μ€€μ˜ μ»΄ν“¨νŒ… μž₯μΉ˜κ°€ μš”κ΅¬λ˜λŠ” λ¬Έμ œκ°€ λ°œμƒλ˜κ³  μžˆλ‹€. However, as the amount of computation is excessively increased for accurate identification of human objects, there is a problem that a high-level computing device is required for identification of human objects.

λ³Έ 발λͺ…은 μƒμˆ ν•œ λ¬Έμ œμ μ„ ν•΄κ²°ν•  수 μžˆλŠ” μž₯치 및 방법을 μ œκ³΅ν•˜λŠ” 것을 λͺ©μ μœΌλ‘œ ν•œλ‹€.An object of the present invention is to provide an apparatus and method capable of solving the above-described problems.

λ³Έ 발λͺ…은, λͺ© λΆ€λΆ„κ³Ό 머리 λΆ€λΆ„μ˜ μŠ€μΌˆλ ˆν†€ μ’Œν‘œλ₯Ό μ΄μš©ν•œ 뢄석을 톡해 인적객체λ₯Ό λΆ„λ₯˜ν•˜λŠ”, μ—°μ†λ˜λŠ” 이미지 ν”„λ ˆμž„μ— λŒ€ν•œ μŠ€μΌˆλ ˆν†€ 뢄석에 κΈ°μ΄ˆν•œ 동일객체 식별μž₯치 및 식별방법을 μ œκ³΅ν•˜λŠ” 것을 일 λͺ©μ μœΌλ‘œ ν•œλ‹€.An object of the present invention is to provide an apparatus and method for identifying the same object based on skeleton analysis for successive image frames, which classifies human objects through analysis using the skeleton coordinates of the neck and head.

λ³Έ 발λͺ…이 ν•΄κ²°ν•˜κ³ μž ν•˜λŠ” κ³Όμ œλ“€μ€ μ΄μƒμ—μ„œ μ–ΈκΈ‰λœ 과제둜 μ œν•œλ˜μ§€ μ•ŠμœΌλ©°, μ–ΈκΈ‰λ˜μ§€ μ•Šμ€ 또 λ‹€λ₯Έ κ³Όμ œλ“€μ€ μ•„λž˜μ˜ κΈ°μž¬λ‘œλΆ€ν„° ν†΅μƒμ˜ κΈ°μˆ μžμ—κ²Œ λͺ…ν™•ν•˜κ²Œ 이해될 수 μžˆμ„ 것이닀.The problems to be solved by the present invention are not limited to the problems mentioned above, and other problems not mentioned will be clearly understood by those skilled in the art from the following description.

μƒμˆ ν•œ 과제λ₯Ό ν•΄κ²°ν•˜κΈ° μœ„ν•œ λ³Έ 발λͺ…μ˜ 일 μ‹€μ‹œ μ˜ˆμ— λ”°λ₯Έ μ—°μ†λ˜λŠ” 이미지 ν”„λ ˆμž„μ— λŒ€ν•œ μŠ€μΌˆλ ˆν†€ 뢄석에 κΈ°μ΄ˆν•œ 동일 객체 식별 방법은, 컴퓨터에 μ˜ν•΄ μˆ˜ν–‰λ˜λ©°, μ—°μ„λ˜λŠ” 이미지 ν”„λ ˆμž„μ—μ„œ λ™μΌν•œ 객체λ₯Ό μ‹λ³„ν•˜λŠ” λ°©λ²•μœΌλ‘œμ„œ, μ—°μ†λ˜λŠ” 제1 및 제2 이미지 ν”„λ ˆμž„μ— ν¬ν•¨λœ 객체의 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ„ μΆ”μΆœν•˜λŠ” 단계; 상기 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ—μ„œ κΈ° μ„€μ •λœ κΈ°μ€€μ’Œν‘œ 및 상기 κΈ°μ€€μ’Œν‘œμ™€ μ—°κ²°λœ μƒλŒ€μ’Œν‘œλ₯Ό μ„ μ •ν•˜λŠ” 단계; 상기 κΈ°μ€€μ’Œν‘œ 및 상기 μƒλŒ€μ’Œν‘œμ— κΈ°μ΄ˆν•˜μ—¬, 제1 및 제2 이미지 ν”„λ ˆμž„μ— ν¬ν•¨λœ 객체의 동일성을 νŒλ‹¨ν•˜λŠ” 단계λ₯Ό ν¬ν•¨ν•œλ‹€.The same object identification method based on skeleton analysis for consecutive image frames according to an embodiment of the present invention for solving the above problems is performed by a computer, and as a method of identifying the same object in the image frame to be curbed, extracting a skeleton coordinate set of an object included in successive first and second image frames; selecting a preset reference coordinate from the skeleton coordinate set and a relative coordinate connected to the reference coordinate; and determining the identity of objects included in the first and second image frames based on the reference coordinates and the relative coordinates.

λ˜ν•œ, 상기 μΆ”μΆœν•˜λŠ” λ‹¨κ³„λŠ”, 제1 이미지 ν”„λ ˆμž„μ— ν¬ν•¨λœ 볡수의 제1 κ°μ²΄λ‘œλΆ€ν„°, 각각이 상기 볡수의 제1 객체 각각에 λŒ€μ‘λ˜λŠ” 볡수의 제1 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ„ μΆ”μΆœν•˜λŠ” 단계; 및 상기 제1 이미지 ν”„λ ˆμž„κ³Ό μ—°μ†λ˜λŠ” 제2 이미지 ν”„λ ˆμž„μ— ν¬ν•¨λœ 볡수의 제2 κ°μ²΄λ‘œλΆ€ν„°, 각각이 상기 볡수의 제2 객체 각각에 λŒ€μ‘λ˜λŠ” 볡수의 제2 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ„ μΆ”μΆœν•˜λŠ” 단계λ₯Ό ν¬ν•¨ν•œλ‹€.In addition, the extracting may include: extracting, from a plurality of first objects included in a first image frame, a plurality of first skeleton coordinate sets each corresponding to each of the plurality of first objects; and extracting, from a plurality of second objects included in a second image frame successive to the first image frame, a plurality of second skeleton coordinate sets each corresponding to each of the plurality of second objects.

λ˜ν•œ, 상기 μ„ μ •ν•˜λŠ” λ‹¨κ³„λŠ”, 상기 볡수의 제1 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹ 각각에 λŒ€ν•˜μ—¬ 제1 κΈ°μ€€μ’Œν‘œ 및 상기 제1 κΈ°μ€€μ’Œν‘œμ™€ μ—°κ²°λœ 제1 μƒλŒ€μ’Œν‘œλ₯Ό μ„ μ •ν•˜λŠ” 단계; 및 상기 볡수의 제2 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹ 각각에 λŒ€ν•˜μ—¬ 제2 κΈ°μ€€μ’Œν‘œ 및 상기 제2 κΈ°μ€€μ’Œν‘œμ™€ μ—°κ²°λœ 제2 μƒλŒ€μ’Œν‘œλ₯Ό μ„ μ •ν•˜λŠ” 단계λ₯Ό ν¬ν•¨ν•œλ‹€.In addition, the selecting may include: selecting a first reference coordinate and a first relative coordinate connected to the first reference coordinate for each of the plurality of first skeleton coordinate sets; and selecting a second reference coordinate and a second relative coordinate connected to the second reference coordinate for each of the plurality of second skeleton coordinate sets.

λ˜ν•œ, 상기 νŒλ‹¨ν•˜λŠ” λ‹¨κ³„λŠ”, 상기 볡수의 제1 객체 쀑 μ–΄λŠ ν•˜λ‚˜λ₯Ό 제1 λΉ„κ΅κ°μ²΄λ‘œ μ„ μ •ν•˜λŠ” 단계; 상기 볡수의 제2 객체 쀑 μ–΄λŠ ν•˜λ‚˜λ₯Ό 제2 λΉ„κ΅κ°μ²΄λ‘œ μ„ μ •ν•˜λŠ” 단계; 상기 제1 λΉ„κ΅κ°μ²΄μ˜ 제1 κΈ°μ€€μ’Œν‘œμ™€ 상기 제2 λΉ„κ΅κ°μ²΄μ˜ 제2 κΈ°μ€€μ’Œν‘œ μ‚¬μ΄μ˜ 제1 길이와 상기 제2 λΉ„κ΅κ°μ²΄μ˜ 제2 κΈ°μ€€μ’Œν‘œ 및 제2 μƒλŒ€μ’Œν‘œ μ‚¬μ΄μ˜ 제2 길이λ₯Ό λΉ„κ΅ν•˜λŠ” 단계; 제1 길이가 제2 길이보닀 μž‘μ€ 경우, 상기 제1 비ꡐ객체와 상기 제2 비ꡐ객체λ₯Ό λ™μΌν•œ 객체둜 νŒλ‹¨ν•˜λŠ” 단계; 및 제1 길이가 제2 길이보닀 μž‘μ§€ μ•Šμ€ 경우, 상기 제1 비ꡐ객체와 상기 제2 비ꡐ객체λ₯Ό μ„œλ‘œ λ‹€λ₯Έ 객체둜 νŒλ‹¨ν•˜λŠ” 단계λ₯Ό ν¬ν•¨ν•œλ‹€.In addition, the determining may include: selecting any one of the plurality of first objects as a first comparison object; selecting any one of the plurality of second objects as a second comparison object; Comparing a first length between the first reference coordinates of the first comparison object and the second reference coordinates of the second comparison object and a second length between the second reference coordinates and the second relative coordinates of the second comparison object step; determining that the first comparison object and the second comparison object are the same object when the first length is smaller than the second length; and when the first length is not smaller than the second length, determining the first comparison object and the second comparison object as different objects.

λ˜ν•œ, 상기 κΈ°μ€€μ’Œν‘œλŠ” 상기 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹ 쀑 λͺ© 뢀뢄에 λŒ€μ‘λ˜λŠ” μ’Œν‘œμ΄κ³ , 상기 μƒλŒ€μ’Œν‘œλŠ” 상기 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹ 쀑 상기 λͺ© λΆ€λΆ„κ³Ό μ—°κ²°λœ 머리 뢀뢄에 λŒ€μ‘λ˜λŠ” μ’Œν‘œμΌ 수 μžˆλ‹€.In addition, the reference coordinate may be a coordinate corresponding to a neck part of the skeleton coordinate set, and the relative coordinate may be a coordinate corresponding to a head part connected to the neck part of the skeleton coordinate set.

이 외에도, λ³Έ 발λͺ…을 κ΅¬ν˜„ν•˜κΈ° μœ„ν•œ λ‹€λ₯Έ 방법, λ‹€λ₯Έ μ‹œμŠ€ν…œ 및 상기 방법을 μ‹€ν–‰ν•˜κΈ° μœ„ν•œ 컴퓨터 ν”„λ‘œκ·Έλž¨μ„ κΈ°λ‘ν•˜λŠ” 컴퓨터 νŒλ… κ°€λŠ₯ν•œ 기둝 맀체가 더 제곡될 수 μžˆλ‹€.In addition to this, another method for implementing the present invention, another system, and a computer-readable recording medium for recording a computer program for executing the method may be further provided.

λ³Έ 발λͺ…μ˜ μ‹€μ‹œ μ˜ˆμ— λ”°λ₯΄λ©΄, 전체 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ€‘ μΌλΆ€λΆ„λ§Œμ„ μ΄μš©ν•˜μ—¬ 인적 객체의 동일성 식별이 κ°€λŠ₯ν•˜λ―€λ‘œ μ—°μ‚°λŸ‰μ΄ κ°μ†Œλ¨κ³Ό λ™μ‹œμ— μ •ν™•ν•œ 인적 객체의 λΆ„λ₯˜κ°€ μˆ˜ν–‰λ  수 μžˆλ‹€. 즉, 식별정확도λ₯Ό μœ μ§€ν•¨κ³Ό λ™μ‹œμ— 인적 객체 식별 ν”„λ‘œμ„ΈμŠ€κ°€ κ²½λŸ‰ν™”λ  수 μžˆλ‹€. According to an embodiment of the present invention, since it is possible to identify the identity of a human object by using only a part of the coordinates of the entire skeleton, the amount of computation can be reduced and accurate classification of the human object can be performed. That is, the human object identification process can be lightweight while maintaining identification accuracy.

λ˜ν•œ, λ³Έ 발λͺ…μ˜ μ‹€μ‹œ μ˜ˆμ— λ”°λ₯΄λ©΄, μ˜μƒκ°μ§€μž₯μΉ˜μ™€ 인적객체 μ‚¬μ΄μ˜ 거리에 λ”°λΌμ„œ μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ‚¬μ΄μ˜ 거리가 μ¦κ°λ˜λŠ” 점을 κ³ λ €ν•˜μ—¬ 인적 객체 식별이 μˆ˜ν–‰λ˜λ―€λ‘œ, μ˜μƒκ°μ§€μž₯μΉ˜μ™€ 인적객체 μ‚¬μ΄μ˜ 거리에 λ¬΄κ΄€ν•˜κ²Œ 인적 객체가 μ •ν™•νžˆ λΆ„λ₯˜λ  수 μžˆλ‹€. In addition, according to an embodiment of the present invention, human object identification is performed taking into account that the distance between the skeleton coordinates increases or decreases according to the distance between the image sensing device and the human object, so the distance between the image sensing device and the human object is Regardless, human objects can be accurately classified.

λ³Έ 발λͺ…μ˜ νš¨κ³Όλ“€μ€ μ΄μƒμ—μ„œ μ–ΈκΈ‰λœ 효과둜 μ œν•œλ˜μ§€ μ•ŠμœΌλ©°, μ–ΈκΈ‰λ˜μ§€ μ•Šμ€ 또 λ‹€λ₯Έ νš¨κ³Όλ“€μ€ μ•„λž˜μ˜ κΈ°μž¬λ‘œλΆ€ν„° ν†΅μƒμ˜ κΈ°μˆ μžμ—κ²Œ λͺ…ν™•ν•˜κ²Œ 이해될 수 μžˆμ„ 것이닀.Effects of the present invention are not limited to the effects mentioned above, and other effects not mentioned will be clearly understood by those skilled in the art from the following description.

도 1은 λ³Έ 발λͺ…μ˜ μ‹€μ‹œ μ˜ˆμ— λ”°λ₯Έ 객체식별μž₯치λ₯Ό λ„μ‹œν•˜λŠ” κ°œλ…λ„μ΄λ‹€.1 is a conceptual diagram illustrating an object identification apparatus according to an embodiment of the present invention.

도 2λŠ” ν”„λ ˆμž„μ˜ 이동에 λ‹€λ₯Έ μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ„ λ„μ‹œν•˜λŠ” 도면이닀.2 is a diagram illustrating a different skeleton coordinate set for moving a frame.

도 3은 μ—°μ†λœ ν”„λ ˆμž„μ˜ μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ„ μ€‘μ²©ν•˜μ—¬ λ„μ‹œν•˜λŠ” 도면이닀. 3 is a diagram illustrating a skeleton coordinate set of successive frames by overlapping each other.

도 4 및 도 5λŠ” κΈ°μ€€μ’Œν‘œ 및 μƒλŒ€μ’Œν‘œλ₯Ό μ΄μš©ν•˜μ—¬ 객체 μ‚¬μ΄μ˜ 동일성을 μ‹λ³„ν•˜λŠ” 과정을 λ„μ‹œν•˜λŠ” 도면이닀. 4 and 5 are diagrams illustrating a process of identifying the identity between objects using reference coordinates and relative coordinates.

도 6은 λ³Έ 발λͺ…μ˜ μ‹€μ‹œ μ˜ˆμ— λ”°λ₯Έ κ°μ²΄μ‹λ³„λ°©λ²•μ˜ 과정을 λ„μ‹œν•˜λŠ” 흐름도이닀. 6 is a flowchart illustrating a process of an object identification method according to an embodiment of the present invention.

λ³Έ 발λͺ…μ˜ 이점 및 νŠΉμ§•, 그리고 그것듀을 λ‹¬μ„±ν•˜λŠ” 방법은 μ²¨λΆ€λ˜λŠ” 도면과 ν•¨κ»˜ μƒμ„Έν•˜κ²Œ ν›„μˆ λ˜μ–΄ μžˆλŠ” μ‹€μ‹œμ˜ˆλ“€μ„ μ°Έμ‘°ν•˜λ©΄ λͺ…ν™•ν•΄μ§ˆ 것이닀. κ·ΈλŸ¬λ‚˜, λ³Έ 발λͺ…은 μ΄ν•˜μ—μ„œ κ°œμ‹œλ˜λŠ” μ‹€μ‹œμ˜ˆλ“€μ— μ œν•œλ˜λŠ” 것이 μ•„λ‹ˆλΌ μ„œλ‘œ λ‹€λ₯Έ λ‹€μ–‘ν•œ ν˜•νƒœλ‘œ κ΅¬ν˜„λ  수 있으며, 단지 λ³Έ μ‹€μ‹œμ˜ˆλ“€μ€ λ³Έ 발λͺ…μ˜ κ°œμ‹œκ°€ μ™„μ „ν•˜λ„λ‘ ν•˜κ³ , λ³Έ 발λͺ…이 μ†ν•˜λŠ” 기술 λΆ„μ•Όμ˜ ν†΅μƒμ˜ κΈ°μˆ μžμ—κ²Œ λ³Έ 발λͺ…μ˜ λ²”μ£Όλ₯Ό μ™„μ „ν•˜κ²Œ μ•Œλ €μ£ΌκΈ° μœ„ν•΄ μ œκ³΅λ˜λŠ” 것이며, λ³Έ 발λͺ…은 μ²­κ΅¬ν•­μ˜ 범주에 μ˜ν•΄ μ •μ˜λ  뿐이닀. Advantages and features of the present invention and methods of achieving them will become apparent with reference to the embodiments described below in detail in conjunction with the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below, but may be implemented in various different forms, and only the present embodiments allow the disclosure of the present invention to be complete, and those of ordinary skill in the art to which the present invention pertains. It is provided to fully understand the scope of the present invention to those skilled in the art, and the present invention is only defined by the scope of the claims.

λ³Έ λͺ…μ„Έμ„œμ—μ„œ μ‚¬μš©λœ μš©μ–΄λŠ” μ‹€μ‹œμ˜ˆλ“€μ„ μ„€λͺ…ν•˜κΈ° μœ„ν•œ 것이며 λ³Έ 발λͺ…을 μ œν•œν•˜κ³ μž ν•˜λŠ” 것은 μ•„λ‹ˆλ‹€. λ³Έ λͺ…μ„Έμ„œμ—μ„œ, λ‹¨μˆ˜ν˜•μ€ λ¬Έκ΅¬μ—μ„œ νŠΉλ³„νžˆ μ–ΈκΈ‰ν•˜μ§€ μ•ŠλŠ” ν•œ λ³΅μˆ˜ν˜•λ„ ν¬ν•¨ν•œλ‹€. λͺ…μ„Έμ„œμ—μ„œ μ‚¬μš©λ˜λŠ” "ν¬ν•¨ν•œλ‹€(comprises)" 및/λ˜λŠ” "ν¬ν•¨ν•˜λŠ”(comprising)"은 μ–ΈκΈ‰λœ κ΅¬μ„±μš”μ†Œ 외에 ν•˜λ‚˜ μ΄μƒμ˜ λ‹€λ₯Έ κ΅¬μ„±μš”μ†Œμ˜ 쑴재 λ˜λŠ” μΆ”κ°€λ₯Ό λ°°μ œν•˜μ§€ μ•ŠλŠ”λ‹€. λͺ…μ„Έμ„œ 전체에 걸쳐 λ™μΌν•œ 도면 λΆ€ν˜ΈλŠ” λ™μΌν•œ ꡬ성 μš”μ†Œλ₯Ό μ§€μΉ­ν•˜λ©°, "및/λ˜λŠ”"은 μ–ΈκΈ‰λœ κ΅¬μ„±μš”μ†Œλ“€μ˜ 각각 및 ν•˜λ‚˜ μ΄μƒμ˜ λͺ¨λ“  쑰합을 ν¬ν•¨ν•œλ‹€. 비둝 "제1", "제2" 등이 λ‹€μ–‘ν•œ κ΅¬μ„±μš”μ†Œλ“€μ„ μ„œμˆ ν•˜κΈ° μœ„ν•΄μ„œ μ‚¬μš©λ˜λ‚˜, 이듀 κ΅¬μ„±μš”μ†Œλ“€μ€ 이듀 μš©μ–΄μ— μ˜ν•΄ μ œν•œλ˜μ§€ μ•ŠμŒμ€ 물둠이닀. 이듀 μš©μ–΄λ“€μ€ 단지 ν•˜λ‚˜μ˜ κ΅¬μ„±μš”μ†Œλ₯Ό λ‹€λ₯Έ κ΅¬μ„±μš”μ†Œμ™€ κ΅¬λ³„ν•˜κΈ° μœ„ν•˜μ—¬ μ‚¬μš©ν•˜λŠ” 것이닀. λ”°λΌμ„œ, μ΄ν•˜μ—μ„œ μ–ΈκΈ‰λ˜λŠ” 제1 κ΅¬μ„±μš”μ†ŒλŠ” λ³Έ 발λͺ…μ˜ 기술적 사상 λ‚΄μ—μ„œ 제2 κ΅¬μ„±μš”μ†ŒμΌ μˆ˜λ„ μžˆμŒμ€ 물둠이닀.The terminology used herein is for the purpose of describing the embodiments and is not intended to limit the present invention. In this specification, the singular also includes the plural unless specifically stated otherwise in the phrase. As used herein, β€œcomprises” and/or β€œcomprising” does not exclude the presence or addition of one or more other components in addition to the stated components. Like reference numerals refer to like elements throughout, and "and/or" includes each and every combination of one or more of the recited elements. Although "first", "second", etc. are used to describe various elements, these elements are not limited by these terms, of course. These terms are only used to distinguish one component from another. Accordingly, it goes without saying that the first component mentioned below may be the second component within the spirit of the present invention.

λ‹€λ₯Έ μ •μ˜κ°€ μ—†λ‹€λ©΄, λ³Έ λͺ…μ„Έμ„œμ—μ„œ μ‚¬μš©λ˜λŠ” λͺ¨λ“  μš©μ–΄(기술 및 과학적 μš©μ–΄λ₯Ό 포함)λŠ” λ³Έ 발λͺ…이 μ†ν•˜λŠ” κΈ°μˆ λΆ„μ•Όμ˜ ν†΅μƒμ˜ κΈ°μˆ μžμ—κ²Œ κ³΅ν†΅μ μœΌλ‘œ 이해될 수 μžˆλŠ” 의미둜 μ‚¬μš©λ  수 μžˆμ„ 것이닀. λ˜ν•œ, 일반적으둜 μ‚¬μš©λ˜λŠ” 사전에 μ •μ˜λ˜μ–΄ μžˆλŠ” μš©μ–΄λ“€μ€ λͺ…λ°±ν•˜κ²Œ νŠΉλ³„νžˆ μ •μ˜λ˜μ–΄ μžˆμ§€ μ•ŠλŠ” ν•œ μ΄μƒμ μœΌλ‘œ λ˜λŠ” κ³Όλ„ν•˜κ²Œ ν•΄μ„λ˜μ§€ μ•ŠλŠ”λ‹€.Unless otherwise defined, all terms (including technical and scientific terms) used herein will have the meaning commonly understood by those of ordinary skill in the art to which this invention belongs. In addition, terms defined in a commonly used dictionary are not to be interpreted ideally or excessively unless specifically defined explicitly.

μ΄ν•˜, μ²¨λΆ€λœ 도면을 μ°Έμ‘°ν•˜μ—¬ λ³Έ 발λͺ…μ˜ μ‹€μ‹œμ˜ˆλ₯Ό μƒμ„Έν•˜κ²Œ μ„€λͺ…ν•œλ‹€. Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.

1. λ³Έ 발λͺ…μ˜ μ‹€μ‹œ μ˜ˆμ— λ”°λ₯Έ 객체식별μž₯치(10)의 μ„€λͺ…1. Description of the object identification device 10 according to an embodiment of the present invention

도 1을 μ°Έμ‘°ν•˜λ©΄, λ³Έ μ‹€μ‹œ μ˜ˆμ— λ”°λ₯Έ 객체식별μž₯치(10)λŠ” μ˜μƒκ°μ§€μ„Όμ„œ(11)λ₯Ό 톡해 κ°μ§€λ˜λŠ” 적어도 ν•˜λ‚˜μ˜ 인적 객체(1)에 λŒ€ν•œ 동일성을 νŒλ‹¨ν•œλ‹€.Referring to FIG. 1 , the object identification apparatus 10 according to the present embodiment determines the identity of at least one human object 1 detected through the image sensor 11 .

일 μ‹€μ‹œ μ˜ˆμ—μ„œ, μ˜μƒκ°μ§€μ„Όμ„œ(11)λŠ” 인적 객체(1)κ°€ ν¬ν•¨λœ 이미지 ν˜•νƒœμ˜ νŒŒμΌμ„ κΈ° μ„€μ •λœ μ‹œκ°„ κ°„κ²©μœΌλ‘œ κ°μ§€ν•œλ‹€. μ˜μƒκ°μ§€μ„Όμ„œ(11)λŠ” κΈ° μ„€μ •λœ μ‹œκ°„ κ°„κ²©μœΌλ‘œ 볡수의 이미지 ν”„λ ˆμž„μ„ μ—°μ†μ μœΌλ‘œ κ°μ§€ν•œλ‹€. In an embodiment, the image sensor 11 detects an image file including the human object 1 at a preset time interval. The image sensor 11 continuously detects a plurality of image frames at preset time intervals.

일 μ‹€μ‹œ μ˜ˆμ—μ„œ, μ˜μƒκ°μ§€μ„Όμ„œ(11)λŠ” 객체식별μž₯치(10)와 톡신 κ°€λŠ₯ν•˜κ²Œ μ—°κ²°λ˜λ©°, κ°μ§€λœ 볡수의 이미지 ν”„λ ˆμž„μ„ 객체식별μž₯치(10)둜 μ „μ†‘ν•œλ‹€.In an embodiment, the image detection sensor 11 is communicatively connected to the object identification apparatus 10 , and transmits a plurality of detected image frames to the object identification apparatus 10 .

λ„μ‹œλ˜μ§€ μ•Šμ€ μ‹€μ‹œ μ˜ˆμ—μ„œ, μ˜μƒκ°μ§€μ„Όμ„œ(11)λŠ” 객체식별μž₯치(10)의 κ΅¬μ„±μš”μ†Œλ‘œ 포함될 수 μžˆλ‹€. In an embodiment not shown, the image sensor 11 may be included as a component of the object identification device 10 .

볡수의 이미지 ν”„λ ˆμž„μ΄ μˆ˜μ‹ λ˜λ©΄, 객체식별μž₯치(10)λŠ” μˆ˜μ‹ λœ 볡수의 이미지 ν”„λ ˆμž„μ— ν¬ν•¨λœ κ°μ²΄λ³„λ‘œ μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ„ μΆ”μΆœν•œλ‹€. When a plurality of image frames are received, the object identification apparatus 10 extracts a skeleton coordinate set for each object included in the plurality of received image frames.

Referring to FIG. 2, a skeleton coordinate set extracted for each object is illustrated as an example. For illustration, only two objects are included in each image frame, but the present invention is not limited thereto, and two or more objects may be included.

The skeleton coordinate sets of the first objects (Object 1-a, Object 1-b) are extracted from the first image frame (Frame 1), and the skeleton coordinate sets of the second objects (Object 2-a, Object 2-b) are likewise extracted from the second image frame (Frame 2), which is consecutive to the first image frame. In the illustrated embodiment, each skeleton coordinate set consists of coordinates corresponding to the head, neck, shoulders, elbows, wrists, waist, pelvis, knees, and ankles, but is not limited thereto.

제1 ν”„λ ˆμž„μ—μ„œ 두 개의 제1 객체(Object 1-a, Object 1-b)κ°€ μ„œλ‘œ κ΅¬λΆ„ν•˜κ³ , 제2 ν”„λ ˆμž„μ—μ„œ 두 개의 제2 객체(Object 2-a, Object 2-b)κ°€ μ„œλ‘œ ꡬ뢄할 수 μžˆμœΌλ‚˜, μ–΄λ– ν•œ 객체듀이 제1 이미지 ν”„λ ˆμž„κ³Ό 제2 이미지 ν”„λ ˆμž„μ—μ„œ 동일성을 κ°–λŠ” 객체인지 κ΅¬λΆ„ν•˜κΈ° μœ„ν•΄μ„œλŠ” 좔가적인 동일성 식별 ν”„λ‘œμ„ΈμŠ€κ°€ ν•„μš”ν•˜λ‹€. In the first frame, two first objects (Object 1-a, Object 1-b) can be distinguished from each other, and in the second frame, two second objects (Object 2-a, Object 2-b) can be distinguished from each other, but , an additional identity identification process is required to distinguish which objects are objects having the same identity in the first image frame and the second image frame.

동일성을 μ‹λ³„ν•˜κΈ° μœ„ν•΄μ„œ, 객체식별μž₯치(10)λŠ” 제1 객체 및 제2 객체의 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μœΌλ‘œλΆ€ν„° κΈ°μ€€μ’Œν‘œ 및 κΈ°μ€€μ’Œν‘œμ™€ μ—°κ²°λ˜λŠ” μƒλŒ€μ’Œν‘œλ₯Ό μ„ μ •ν•œλ‹€. In order to identify the identity, the object identification device 10 selects a reference coordinate and a relative coordinate connected to the reference coordinate from the skeleton coordinate sets of the first object and the second object.

일 μ‹€μ‹œ μ˜ˆμ—μ„œ, μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ˜ μ’Œν‘œ 쀑 ν”„λ ˆμž„μ˜ 변화됨에 따라 κ°€μž₯ 거리변화가 적은 μ’Œν‘œλ₯Ό κΈ°μ€€μ’Œν‘œλ‘œ μ„ μ •ν•  수 μžˆλ‹€. 예λ₯Ό λ“€μ–΄, λͺ© 뢀뢄에 λŒ€μ‘λ˜λŠ” μ’Œν‘œλ₯Ό κΈ°μ€€μ’Œν‘œλ‘œ μ„ μ •ν•  수 μžˆλ‹€.In an embodiment, the coordinates with the smallest distance change according to the change of the frame among the coordinates of the skeleton coordinate set may be selected as the reference coordinates. For example, coordinates corresponding to the neck may be selected as the reference coordinates.

일 μ‹€μ‹œ μ˜ˆμ—μ„œ, κΈ°μ€€μžν‘œμ™€ μ—°κ²°λ˜λŠ” μ’Œν‘œλŠ” μƒλŒ€μ’Œν‘œλ‘œ μ„ μ •ν•˜λ©°, λͺ© λΆ€λΆ„κ³Ό μ—°κ²°λ˜λŠ” 머리 λΆ€λΆ„κ³Ό λŒ€μ‘λ˜λŠ” μ’Œν‘œλ₯Ό κΈ°μ€€μ’Œν‘œλ‘œ μ„ μ •ν•  수 μžˆλ‹€. In an embodiment, the coordinates connected to the reference coordinates may be selected as relative coordinates, and coordinates corresponding to the head connected to the neck may be selected as the reference coordinates.

Referring to FIG. 3, the first image frame and the second image frame are shown overlapped on each other; the coordinates corresponding to the neck are selected as the reference coordinates N1 and N2, and the coordinates corresponding to the head are selected as the relative coordinates H1 and H2.

When the reference coordinates N1 and N2 and the relative coordinates H1 and H2 have been selected, the object identification apparatus 10 compares the objects included in the first image frame with the objects included in the second image frame and determines their identity based on the reference coordinates (N1, N2) and the relative coordinates (H1, H2).

Referring to FIG. 4, the object identification apparatus 10 selects, from among the first objects, the object whose identity is to be determined as the first comparison object. In the illustrated embodiment, the first object (Object 1-a) disposed on the left side of the drawing is selected as the first comparison object.

In addition, the object identification apparatus 10 selects, from among the second objects, the object whose identity is to be determined as the second comparison object.

First, when the second object disposed on the left side of the drawing is selected as the second comparison object, the object identification apparatus 10 compares the distance (first length, L1) between the reference coordinate N1 of the first comparison object and the reference coordinate N2 of the second comparison object with the distance (second length, L2) between the reference coordinate N2 and the relative coordinate H2 of the second comparison object.

The object identification apparatus 10 determines the two comparison objects to be the same object when the first length L1 is smaller than the second length L2, and to be different objects when the first length L1 is not smaller than the second length L2.
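
As a worked example with assumed numbers (the coordinate values below are invented for illustration and do not come from the figures): if N1 = (120, 80), N2 = (124, 82), and H2 = (124, 48), then L1 is about 4.5 pixels while L2 is 34 pixels, so the two comparison objects are judged to be the same.

    import math

    n1 = (120, 80)   # reference coordinate (neck) of the first comparison object, assumed
    n2 = (124, 82)   # reference coordinate (neck) of the second comparison object, assumed
    h2 = (124, 48)   # relative coordinate (head) of the second comparison object, assumed

    l1 = math.hypot(n2[0] - n1[0], n2[1] - n1[1])  # first length, about 4.47
    l2 = math.hypot(h2[0] - n2[0], h2[1] - n2[1])  # second length, 34.0
    print(l1 < l2)  # True: the comparison objects are determined to be the same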

Since the first length L1 is smaller than the second length L2, the object identification apparatus 10 determines the first comparison object and the second comparison object to be objects having the same identity. That is, the first object (Object 1-a) and the second object (Object 2-a) disposed on the left side of the drawing are determined to be the same object.

In addition, when the second object disposed on the right side of the drawing is selected as the second comparison object, the object identification apparatus 10 compares the distance (first length, L1) between the reference coordinate N1 of the first comparison object and the reference coordinate N2 of the second comparison object with the distance (second length, L2) between the reference coordinate N2 and the relative coordinate H2 of the second comparison object.

Since the first length L1 is not smaller than the second length L2, the object identification apparatus 10 determines the first comparison object and the second comparison object to be different objects. That is, the first object (Object 1-a) disposed on the left side and the second object (Object 2-b) disposed on the right side of the drawing are determined not to be the same object.

Referring to FIG. 5, the object identification apparatus 10 selects the first object (Object 1-b) disposed on the right side of the drawing as the first comparison object.

First, when the second object disposed on the left side of the drawing is selected as the second comparison object, the object identification apparatus 10 compares the distance (first length, L1) between the reference coordinate N1 of the first comparison object and the reference coordinate N2 of the second comparison object with the distance (second length, L2) between the reference coordinate N2 and the relative coordinate H2 of the second comparison object.

Since the first length L1 is not smaller than the second length L2, the object identification apparatus 10 determines the first comparison object and the second comparison object to be different objects. That is, the first object (Object 1-b) disposed on the right side and the second object (Object 2-a) disposed on the left side of the drawing are determined to be different objects.

In addition, when the second object disposed on the right side of the drawing is selected as the second comparison object, the object identification apparatus 10 compares the distance (first length, L1) between the reference coordinate N1 of the first comparison object and the reference coordinate N2 of the second comparison object with the distance (second length, L2) between the reference coordinate N2 and the relative coordinate H2 of the second comparison object.

Since the first length L1 is smaller than the second length L2, the object identification apparatus 10 determines the first comparison object and the second comparison object to be objects having the same identity. That is, the first object (Object 1-b) and the second object (Object 2-b), both disposed on the right side of the drawing, are determined to be the same object.

이미지 ν”„λ ˆμž„μ˜ 변동 μ‹œ κ°€μž₯ 이동이 적은 뢀뢄은 λͺ© 뢀뢄에 λŒ€μ‘λ˜λŠ” μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ΄λ©°, 이λ₯Ό κΈ°μ€€μ’Œν‘œλ‘œ ν•˜μ—¬ κ°μ²΄κ°„μ˜ 동일성을 μœ μΆ”ν•΄λ³Ό 수 μžˆλ‹€. When the image frame changes, the part with the least movement is the skeleton coordinates corresponding to the neck, and by using this as a reference coordinate, the sameness between objects can be inferred.

λ‹€λ§Œ, 객체와 μ˜μƒκ°μ§€μ„Όμ„œ(11) μ‚¬μ΄μ˜ 거리에 따라 이미지 ν”„λ ˆμž„μ˜ 변동 μ‹œ κΈ°μ€€μ’Œν‘œμ˜ μ΄λ™κ±°λ¦¬λŠ” λ‹¬λΌμ§ˆ 수 μžˆμœΌλ―€λ‘œ, κΈ°μ€€μ’Œν‘œλ§Œμ„ κ³ λ €ν•˜μ—¬ 동일성을 νŒλ‹¨ν•˜λŠ” 경우 였판이 λ°œμƒλ  수 μžˆλ‹€. However, since the movement distance of the reference coordinates may vary when the image frame is changed according to the distance between the object and the image sensor 11, a miscalculation may occur when determining the sameness in consideration of only the reference coordinates.

예λ₯Ό λ“€μ–΄, 객체가 μ˜μƒκ°μ§€μ„Όμ„œ(11)에 μΈμ ‘λ˜λŠ” 경우 이미지 ν”„λ ˆμž„μ˜ 변동에 λ”°λ₯Έ κΈ°μ€€μ’Œν‘œμ˜ 이동거리가 μ¦κ°€λ˜λ©°, 객체가 μ˜μƒκ°μ§€μ„Όμ„œ(11)μ—μ„œ μ΄κ²©λ˜λŠ” 경우 이미지 ν”„λ ˆμž„μ˜ 변동에 λ”°λ₯Έ κΈ°μ€€μ’Œν‘œμ˜ 이동거리가 κ°μ†Œλœλ‹€. For example, when the object is adjacent to the image sensor 11, the movement distance of the reference coordinate increases according to the change of the image frame, and when the object is separated from the image sensor 11, the reference according to the change of the image frame The moving distance of the coordinates is reduced.

μ΄λŸ¬ν•œ λ¬Έμ œμ μ„ κ³ λ €ν•˜μ—¬, λ³Έ μ‹€μ‹œ μ˜ˆμ— λ”°λ₯Έ 객체식별μž₯치(10)λŠ” κΈ°μ€€μ’Œν‘œμ™€ μƒλŒ€μ’Œν‘œ μ‚¬μ΄μ˜ 거리λ₯Ό μΆ”κ°€λ‘œ κ³ λ €ν•œλ‹€. In consideration of this problem, the object identification apparatus 10 according to the present embodiment additionally considers the distance between the reference coordinates and the relative coordinates.

κΈ°μ€€μ’Œν‘œμ™€ μƒλŒ€μ’Œν‘œ μ‚¬μ΄μ˜ κ±°λ¦¬λŠ” 이미지 ν”„λ ˆμž„μ˜ 변동과 λ¬΄κ΄€ν•˜κ²Œ κ³ μ •λ˜λŠ” 값이며, μ—°μ†λ˜λŠ” 이미지 ν”„λ ˆμž„μ—μ„œ κΈ°μ€€μ’Œν‘œμ˜ 이동 κ±°λ¦¬λŠ” κΈ°μ€€μ’Œν‘œμ™€ μƒλŒ€μ’Œν‘œ μ‚¬μ΄μ˜ 거리보닀 컀질 수 μ—†λŠ” 점을 μ΄μš©ν•˜λ©΄, μ˜μƒκ°μ§€μ„Όμ„œ(11)μ™€μ˜ 거리와 λ¬΄κ΄€ν•˜κ²Œ κ°μ²΄λ“€μ˜ 동일성을 μ •ν™•ν•˜κ²Œ νŒλ‹¨ν•  수 μžˆλ‹€. The distance between the reference coordinate and the relative coordinate is a fixed value regardless of the fluctuation of the image frame, and the moving distance of the reference coordinate in successive image frames cannot be greater than the distance between the reference coordinate and the relative coordinate. It is possible to accurately determine the identity of objects regardless of the distance to (11).

일반적으둜, μ˜μƒκ°μ§€μ„Όμ„œ(11)의 이미지 ν”„λ ˆμž„ μ‚¬μ΄μ˜ μ‹œκ°„ 간격이 극히 짧은 점을 κ³ λ €ν•  λ•Œ, κΈ°μ€€μ’Œν‘œλ₯Ό λͺ© 뢀뢄에 λŒ€μ‘λ˜λŠ” μ’Œν‘œλ‘œ, μƒλŒ€μ’Œν‘œλ₯Ό 머리 뢀뢄에 λŒ€μ‘λ˜λŠ” μ’Œν‘œλ‘œ μ§€μ •ν•˜λ©΄ μ—°μ†λ˜λŠ” 이미지 ν”„λ ˆμž„μ—μ„œ κΈ°μ€€μ’Œν‘œμ˜ 이동 κ±°λ¦¬λŠ” κΈ°μ€€μ’Œν‘œμ™€ μƒλŒ€μ’Œν‘œ μ‚¬μ΄μ˜ 거리보닀 μ§§λ‹€. In general, considering that the time interval between image frames of the image detection sensor 11 is extremely short, if the reference coordinates are designated as the coordinates corresponding to the neck and the relative coordinates are designated as the coordinates corresponding to the head, continuous images The moving distance of the reference coordinates in the frame is shorter than the distance between the reference coordinates and the relative coordinates.

λ‹€λ§Œ, 이미지 ν”„λ ˆμž„ μ‚¬μ΄μ˜ μ‹œκ°„ 간격이 κ³Όλ„ν•˜κ²Œ μ¦κ°€λ˜λŠ” 경우, κΈ°μ€€μ’Œν‘œλ₯Ό λͺ© 뢀뢄에 λŒ€μ‘λ˜λŠ” μ’Œν‘œλ‘œ, μƒλŒ€μ’Œν‘œλ₯Ό 머리 뢀뢄에 λŒ€μ‘λ˜λŠ” μ’Œν‘œλ‘œ μ§€μ •ν•˜λ©΄ μ—°μ†λ˜λŠ” 이미지 ν”„λ ˆμž„μ—μ„œ κΈ°μ€€μ’Œν‘œμ˜ 이동 κ±°λ¦¬λŠ” κΈ°μ€€μ’Œν‘œμ™€ μƒλŒ€μ’Œν‘œ μ‚¬μ΄μ˜ 거리보닀 컀질 수 μžˆλ‹€. However, if the time interval between image frames is excessively increased, if the reference coordinates are designated as the coordinates corresponding to the neck and the relative coordinates are designated as the coordinates corresponding to the head, the movement distance of the reference coordinates in successive image frames is the reference coordinate can be greater than the distance between the and relative coordinates.

λ”°λΌμ„œ, μƒλŒ€μ’Œν‘œλŠ” 이미지 ν”„λ ˆμž„ μ‚¬μ΄μ˜ μ‹œκ°„ 간격을 κ³ λ €ν•˜μ—¬ μ„ μ •λ˜λŠ” 것이 λ°”λžŒμ§ν•˜λ‹€.Therefore, the relative coordinates are preferably selected in consideration of the time interval between image frames.

예λ₯Ό λ“€μ–΄, 이미지 ν”„λ ˆμž„ μ‚¬μ΄μ˜ μ‹œκ°„ 간격이 κ³Όλ„ν•˜κ²Œ μ¦κ°€λ˜λŠ” 경우, κΈ°μ€€μ’Œν‘œλ₯Ό λͺ© 뢀뢄에 λŒ€μ‘λ˜λŠ” μ’Œν‘œλ‘œ, μƒλŒ€μ’Œν‘œλ₯Ό ν—ˆλ¦¬ 뢀뢄에 λŒ€μ‘λ˜λŠ” μ’Œν‘œλ‘œ μ„ μ •ν•  수 μžˆλ‹€. For example, when the time interval between image frames is excessively increased, the reference coordinates may be selected as coordinates corresponding to the neck, and the relative coordinates may be selected as coordinates corresponding to the waist.
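
A minimal sketch of this selection rule is given below; the 0.1-second threshold and the joint labels are illustrative assumptions rather than values taken from the specification.

    def select_relative_joint(frame_interval_sec, threshold_sec=0.1):
        # Short frame intervals: the head, adjacent to the neck, gives a large enough margin.
        # Long frame intervals: a farther joint such as the waist keeps the reference-to-relative
        # distance larger than the expected movement of the neck between frames.
        return "head" if frame_interval_sec <= threshold_sec else "waist"

    print(select_relative_joint(1.0 / 30.0))  # 'head'  (about 0.033 s per frame)
    print(select_relative_joint(0.5))         # 'waist' (long gap between frames)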

κΈ°μ€€μ’Œν‘œ 및 μƒλŒ€μ’Œν‘œλ₯Ό λΉ„κ΅ν•˜λŠ” κ³Όμ •λ§ŒμœΌλ‘œ 이미지 ν”„λ ˆμž„μ— ν¬ν•¨λœ 객체듀 μ‚¬μ΄μ˜ 동일성이 νŒλ³„λ˜λ―€λ‘œ, κ°μ²΄μ‹λ³„κ³Όμ •μ˜ μ—°μ‚°λŸ‰μ΄ μƒλŒ€μ μœΌλ‘œ 절감될 수 μžˆλ‹€. λ”°λΌμ„œ, μ •ν™•ν•œ 객체식별을 μˆ˜ν–‰ν•¨κ³Ό λ™μ‹œμ— 객체식별 ν”„λ‘œμ„ΈμŠ€κ°€ κ²½λŸ‰ν™”λ  수 μžˆλ‹€. Since the sameness between the objects included in the image frame is determined only by the process of comparing the reference coordinates and the relative coordinates, the amount of computation of the object identification process can be relatively reduced. Accordingly, while performing accurate object identification, the object identification process can be lightened.

λ˜ν•œ, 객체식별μž₯치(10)λŠ” 각각 정보λ₯Ό μ „μ†‘ν•˜κ³  μˆ˜μ‹ ν•˜κΈ° μœ„ν•œ 톡신뢀, 정보λ₯Ό μ—°μ‚°ν•˜κΈ° μœ„ν•œ μ œμ–΄λΆ€ 및 정보λ₯Ό μ €μž₯ν•˜κΈ° μœ„ν•œ λ©”λͺ¨λ¦¬(λ˜λŠ” λ°μ΄ν„°λ² μ΄μŠ€)λ₯Ό 포함할 수 μžˆλ‹€.Also, the object identification apparatus 10 may include a communication unit for transmitting and receiving information, a control unit for calculating information, and a memory (or database) for storing information, respectively.

μ œμ–΄λΆ€λŠ”, ν•˜λ“œμ›¨μ–΄μ μœΌλ‘œ, ASICs(applicationspecific integrated circuits), DSPs(digital signal processors), DSPDs(digital signal processing devices), PLDs(programmable logic devices), FPGAs(field programmable gate arrays), ν”„λ‘œμ„Έμ„œ(processors), μ œμ–΄κΈ°(controllers), 마이크둜컨트둀러(micro-controllers), 마이크둜 ν”„λ‘œμ„Έμ„œ(microprocessors), 기타 κΈ°λŠ₯ μˆ˜ν–‰μ„ μœ„ν•œ 전기적인 μœ λ‹› 쀑 적어도 ν•˜λ‚˜λ₯Ό μ΄μš©ν•˜μ—¬ κ΅¬ν˜„λ  수 μžˆλ‹€.The control unit, in hardware, ASICs (application specific integrated circuits), DSPs (digital signal processors), DSPDs (digital signal processing devices), PLDs (programmable logic devices), FPGAs (field programmable gate arrays), processors (processors), controller It may be implemented using at least one of (controllers), micro-controllers, microprocessors, and other electrical units for performing functions.

λ˜ν•œ, μ†Œν”„νŠΈμ›¨μ–΄μ μœΌλ‘œ, λ³Έ λͺ…μ„Έμ„œμ—μ„œ μ„€λͺ…λ˜λŠ” 절차 및 κΈ°λŠ₯κ³Ό 같은 μ‹€μ‹œ μ˜ˆλ“€μ€ λ³„λ„μ˜ μ†Œν”„νŠΈμ›¨μ–΄ λͺ¨λ“ˆλ“€λ‘œ κ΅¬ν˜„λ  수 μžˆλ‹€. 상기 μ†Œν”„νŠΈμ›¨μ–΄ λͺ¨λ“ˆλ“€ 각각은 λ³Έ λͺ…μ„Έμ„œμ—μ„œ μ„€λͺ…λ˜λŠ” ν•˜λ‚˜ μ΄μƒμ˜ κΈ°λŠ₯ 및 μž‘λ™μ„ μˆ˜ν–‰ν•  수In addition, in terms of software, embodiments such as procedures and functions described in this specification may be implemented as separate software modules. Each of the software modules may perform one or more functions and operations described herein.

μžˆλ‹€. μ†Œν”„νŠΈμ›¨μ–΄ μ½”λ“œλŠ” μ μ ˆν•œ ν”„λ‘œκ·Έλž¨ μ–Έμ–΄λ‘œ μ“°μ—¬μ§„ μ†Œν”„νŠΈμ›¨μ–΄ μ• ν”Œλ¦¬μΌ€μ΄μ…˜μœΌλ‘œ μ†Œν”„νŠΈμ›¨μ–΄ μ½”λ“œκ°€ κ΅¬ν˜„λ  수 μžˆλ‹€. 상기 μ†Œν”„νŠΈμ›¨μ–΄ μ½”λ“œλŠ” λ©”λͺ¨λ¦¬μ— μ €μž₯되고, μ œμ–΄λΆ€μ— μ˜ν•΄ 싀행될 수 μžˆλ‹€.have. The software code is a software application written in a suitable programming language, and the software code may be implemented. The software code may be stored in the memory and executed by the control unit.

ν†΅μ‹ λΆ€λŠ” μœ μ„ ν†΅μ‹ λͺ¨λ“ˆ, 무선톡신λͺ¨λ“ˆ 및 근거리톡신λͺ¨λ“ˆ 쀑 적어도 ν•˜λ‚˜λ₯Ό 톡해 κ΅¬ν˜„λ  수 μžˆλ‹€. 무선 인터넷 λͺ¨λ“ˆμ€ 무선 인터넷 접속을 μœ„ν•œ λͺ¨λ“ˆμ„ λ§ν•˜λŠ” κ²ƒμœΌλ‘œ 각 μž₯μΉ˜μ— λ‚΄μž₯λ˜κ±°λ‚˜ μ™Έμž₯될 수 μžˆλ‹€. 무선 인터넷 κΈ°μˆ λ‘œλŠ” WLAN(Wireless LAN)(Wi-Fi), Wibro(Wireless broadband), Wimax(World Interoperability for Microwave Access), HSDPA(High Speed Downlink Packet Access), LTE(long term evolution), LTE-A(Long Term Evolution-Advanced) 등이 이용될 수 μžˆλ‹€. The communication unit may be implemented through at least one of a wired communication module, a wireless communication module, and a short-range communication module. The wireless Internet module refers to a module for wireless Internet access and may be built into or external to each device. As wireless Internet technologies, WLAN (Wireless LAN, Wi-Fi), WiBro (Wireless Broadband), WiMAX (World Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), LTE (Long Term Evolution), LTE-A (Long Term Evolution-Advanced), and the like may be used.

λ©”λͺ¨λ¦¬λŠ” ν”Œλž˜μ‹œ λ©”λͺ¨λ¦¬ νƒ€μž…(flash memory type), ν•˜λ“œλ””μŠ€ν¬ νƒ€μž…(hard disk type), λ©€ν‹°λ―Έλ””μ–΄ μΉ΄λ“œ 마이크둜 νƒ€μž…(multimedia card micro type), μΉ΄λ“œ νƒ€μž…μ˜ λ©”λͺ¨λ¦¬(예λ₯Ό λ“€μ–΄ SD λ˜λŠ” XD λ©”λͺ¨λ¦¬ λ“±), 램(random access memory; RAM), SRAM(static random access memory), 롬(read-only memory; ROM), EEPROM(electrically erasable programmable read-only memory), PROM(programmable read-only memory), 자기 λ©”λͺ¨λ¦¬, 자기 λ””μŠ€ν¬, κ΄‘λ””μŠ€ν¬ 쀑 적어도 ν•˜λ‚˜μ˜ νƒ€μž…μ˜ μ €μž₯맀체λ₯Ό 포함할 수 μžˆλ‹€. The memory may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, and an optical disk.

2. λ³Έ 발λͺ…μ˜ μ‹€μ‹œ μ˜ˆμ— λ”°λ₯Έ 객체식별방법(S1)의 μ„€λͺ…2. Description of the object identification method (S1) according to an embodiment of the present invention

도 6을 μ°Έμ‘°ν•˜λ©΄, λ³Έ 발λͺ…μ˜ μ‹€μ‹œ μ˜ˆμ— λ”°λ₯Έ 객체식별방법(S1)은 μ—°μ†λ˜λŠ” 제1 및 제2 이미지 ν”„λ ˆμž„μ— ν¬ν•¨λœ κ°μ²΄λ³„λ‘œ μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ„ μΆ”μΆœν•˜λŠ” 단계(S10), μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ—μ„œ κΈ° μ„€μ •λœ κΈ°μ€€μ’Œν‘œ 및 κΈ°μ€€μ’Œν‘œμ™€ μ—°κ²°λœ μƒλŒ€μ’Œν‘œλ₯Ό μ„ μ •ν•˜λŠ” 단계(S20) 및 κΈ°μ€€μ’Œν‘œ 및 μƒλŒ€μ’Œν‘œμ— κΈ°μ΄ˆν•˜μ—¬, 제1 및 제2 이미지 ν”„λ ˆμž„μ— ν¬ν•¨λœ 객체의 동일성을 νŒλ‹¨ν•˜λŠ” 단계(S30)λ₯Ό ν¬ν•¨ν•œλ‹€. Referring to FIG. 6, the object identification method (S1) according to an embodiment of the present invention includes extracting a skeleton coordinate set for each object included in successive first and second image frames (S10), selecting a preset reference coordinate and a relative coordinate connected to the reference coordinate from the skeleton coordinate set (S20), and determining the identity of the objects included in the first and second image frames based on the reference coordinate and the relative coordinate (S30).

λ³Έ μ‹€μ‹œ μ˜ˆμ— λ”°λ₯Έ 객체식별방법(S1)은 μƒμˆ ν•œ 객체식별μž₯치(10)에 μ˜ν•΄ μˆ˜ν–‰λ˜λ©°, 객체식별μž₯치(10)λŠ” μ»΄ν“¨ν„°λ‘œ κ΅¬ν˜„λ  수 μžˆλ‹€. The object identification method S1 according to the present embodiment is performed by the above-described object identification apparatus 10, and the object identification apparatus 10 may be implemented by a computer.

λ¨Όμ €, 컴퓨터가 μ—°μ†λ˜λŠ” 제1 및 제2 이미지 ν”„λ ˆμž„μ— ν¬ν•¨λœ κ°μ²΄λ³„λ‘œ μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ„ μΆ”μΆœν•œλ‹€(S10).First, the computer extracts a skeleton coordinate set for each object included in successive first and second image frames (S10).

제1 이미지 ν”„λ ˆμž„μ— ν¬ν•¨λœ 볡수의 제1 κ°μ²΄λ‘œλΆ€ν„°, 각각이 상기 볡수의 제1 객체 각각에 λŒ€μ‘λ˜λŠ” 볡수의 제1 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ„ μΆ”μΆœν•˜κ³ , 제1 이미지 ν”„λ ˆμž„κ³Ό μ—°μ†λ˜λŠ” 제2 이미지 ν”„λ ˆμž„μ— ν¬ν•¨λœ 볡수의 제2 κ°μ²΄λ‘œλΆ€ν„°, 각각이 볡수의 제2 객체 각각에 λŒ€μ‘λ˜λŠ” 볡수의 제2 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ„ μΆ”μΆœν•œλ‹€. From a plurality of first objects included in the first image frame, a plurality of first skeleton coordinate sets, each corresponding to one of the plurality of first objects, are extracted; and from a plurality of second objects included in a second image frame consecutive to the first image frame, a plurality of second skeleton coordinate sets, each corresponding to one of the plurality of second objects, are extracted.
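
One possible in-memory representation of the per-object skeleton coordinate sets produced in step S10 is sketched below; the joint labels and the dictionary-based layout are assumptions chosen for illustration, since the specification does not prescribe a data structure.

    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    Coordinate = Tuple[float, float]  # (x, y) position within the image frame

    @dataclass
    class SkeletonCoordinateSet:
        # Skeleton coordinates extracted for one detected object in one frame.
        object_id: int
        joints: Dict[str, Coordinate] = field(default_factory=dict)

    # Two objects detected in the first frame and two in the consecutive second frame.
    first_frame = [
        SkeletonCoordinateSet(0, {"neck": (120.0, 80.0), "head": (120.0, 52.0)}),
        SkeletonCoordinateSet(1, {"neck": (310.0, 85.0), "head": (312.0, 56.0)}),
    ]
    second_frame = [
        SkeletonCoordinateSet(0, {"neck": (123.0, 81.0), "head": (123.0, 53.0)}),
        SkeletonCoordinateSet(1, {"neck": (305.0, 86.0), "head": (307.0, 57.0)}),
    ]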

객체별 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ˜ μΆ”μΆœμ΄ μ™„λ£Œλ˜λ©΄, 컴퓨터가 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ—μ„œ κΈ° μ„€μ •λœ κΈ°μ€€μ’Œν‘œ 및 κΈ°μ€€μ’Œν‘œμ™€ μ—°κ²°λœ μƒλŒ€μ’Œν‘œλ₯Ό μ„ μ •ν•œλ‹€(S20). When the extraction of the skeleton coordinate set for each object is completed, the computer selects a preset reference coordinate from the skeleton coordinate set and a relative coordinate connected to the reference coordinate (S20).

볡수의 제1 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹ 각각에 λŒ€ν•˜μ—¬ 제1 κΈ°μ€€μ’Œν‘œ 및 제1 κΈ°μ€€μ’Œν‘œμ™€ μ—°κ²°λœ 제1 μƒλŒ€μ’Œν‘œλ₯Ό μ„ μ •ν•˜λ©°, 볡수의 제2 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹ 각각에 λŒ€ν•˜μ—¬ 제2 κΈ°μ€€μ’Œν‘œ 및 제2 κΈ°μ€€μ’Œν‘œμ™€ μ—°κ²°λœ 제2 μƒλŒ€μ’Œν‘œλ₯Ό μ„ μ •ν•œλ‹€. A first reference coordinate and a first relative coordinate connected to the first reference coordinate are selected for each of the plurality of first skeleton coordinate sets, and a second reference coordinate and a second relative coordinate connected to the second reference coordinate are selected for each of the plurality of second skeleton coordinate sets.
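
Under the same assumed representation, step S20 can be sketched as a simple lookup of the preset reference joint and its connected relative joint; the default labels "neck" and "head" are assumptions carried over from the example above.

    from typing import Dict, Tuple

    Coordinate = Tuple[float, float]

    def select_reference_and_relative(joints: Dict[str, Coordinate],
                                      reference_joint: str = "neck",
                                      relative_joint: str = "head"):
        # Step S20: read the preset reference coordinate and the relative coordinate
        # connected to it out of one skeleton coordinate set.
        # Raises KeyError if either joint was not detected for this object.
        return joints[reference_joint], joints[relative_joint]

    reference, relative = select_reference_and_relative(
        {"neck": (120.0, 80.0), "head": (120.0, 52.0)})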

κΈ°μ€€μ’Œν‘œ 및 μƒλŒ€μ’Œν‘œ 선정이 μ™„λ£Œλ˜λ©΄, κΈ°μ€€μ’Œν‘œ 및 μƒλŒ€μ’Œν‘œμ— κΈ°μ΄ˆν•˜μ—¬, 제1 및 제2 이미지 ν”„λ ˆμž„μ— ν¬ν•¨λœ 객체의 동일성을 νŒλ‹¨ν•œλ‹€(S30). When the selection of the reference coordinates and the relative coordinates is completed, the identity of the objects included in the first and second image frames is determined based on the reference coordinates and the relative coordinates (S30).

볡수의 제1 객체 쀑 μ–΄λŠ ν•˜λ‚˜λ₯Ό 제1 λΉ„κ΅κ°μ²΄λ‘œ μ„ μ •ν•˜κ³ , 볡수의 제2 객체 쀑 μ–΄λŠ ν•˜λ‚˜λ₯Ό 제2 λΉ„κ΅κ°μ²΄λ‘œ μ„ μ •ν•œλ‹€.Any one of the plurality of first objects is selected as the first comparison object, and any one of the plurality of second objects is selected as the second comparison object.

κ·Έ λ‹€μŒ, 제1 λΉ„κ΅κ°μ²΄μ˜ 제1 κΈ°μ€€μ’Œν‘œμ™€ 제2 λΉ„κ΅κ°μ²΄μ˜ 제2 κΈ°μ€€μ’Œν‘œ μ‚¬μ΄μ˜ 제1 길이와 제2 λΉ„κ΅κ°μ²΄μ˜ 제2 κΈ°μ€€μ’Œν‘œ 및 제2 μƒλŒ€μ’Œν‘œ μ‚¬μ΄μ˜ 제2 길이λ₯Ό λΉ„κ΅ν•œλ‹€. Then, the first length between the first reference coordinates of the first comparison object and the second reference coordinates of the second comparison object are compared with the second length between the second reference coordinates and the second relative coordinates of the second comparison object. .

제1 길이가 제2 길이보닀 μž‘μ€ 경우, 제1 비ꡐ객체와 제2 비ꡐ객체λ₯Ό λ™μΌν•œ 객체둜 νŒλ‹¨ν•˜κ³ , 제1 길이가 제2 길이보닀 μž‘μ§€ μ•Šμ€ 경우, 제1 비ꡐ객체와 제2 비ꡐ객체λ₯Ό μ„œλ‘œ λ‹€λ₯Έ 객체둜 νŒλ‹¨ν•œλ‹€. When the first length is smaller than the second length, the first comparison object and the second comparison object are determined to be the same object; when the first length is not smaller than the second length, the first comparison object and the second comparison object are determined to be different objects.

μƒμˆ ν•œ 동일성 νŒλ‹¨κ³Όμ •μ€ λͺ¨λ“  볡수의 제1 객체 및 볡수의 제2 객체 μ‚¬μ΄μ˜ 비ꡐ가 μ™„λ£Œλ  λ•ŒκΉŒμ§€ 반볡적으둜 μˆ˜ν–‰λ  수 μžˆλ‹€. The above-described identity determination process may be repeatedly performed until the comparison between all the plurality of first objects and the plurality of second objects is completed.

μ΄μƒμ—μ„œ μ „μˆ ν•œ λ³Έ 발λͺ…μ˜ 일 μ‹€μ‹œμ˜ˆμ— λ”°λ₯Έ 방법은, ν•˜λ“œμ›¨μ–΄μΈ μ„œλ²„μ™€ κ²°ν•©λ˜μ–΄ μ‹€ν–‰λ˜κΈ° μœ„ν•΄ ν”„λ‘œκ·Έλž¨(λ˜λŠ” μ–΄ν”Œλ¦¬μΌ€μ΄μ…˜)으둜 κ΅¬ν˜„λ˜μ–΄ 맀체에 μ €μž₯될 수 μžˆλ‹€.The method according to an embodiment of the present invention described above may be implemented as a program (or application) to be executed in combination with a server, which is hardware, and stored in a medium.

상기 μ „μˆ ν•œ ν”„λ‘œκ·Έλž¨μ€, 상기 컴퓨터가 ν”„λ‘œκ·Έλž¨μ„ 읽어 λ“€μ—¬ ν”„λ‘œκ·Έλž¨μœΌλ‘œ κ΅¬ν˜„λœ 상기 방법듀을 μ‹€ν–‰μ‹œν‚€κΈ° μœ„ν•˜μ—¬, 상기 μ»΄ν“¨ν„°μ˜ ν”„λ‘œμ„Έμ„œ(CPU)κ°€ 상기 μ»΄ν“¨ν„°μ˜ μž₯치 μΈν„°νŽ˜μ΄μŠ€λ₯Ό 톡해 읽힐 수 μžˆλŠ” C, C++, JAVA, 기계어 λ“±μ˜ 컴퓨터 μ–Έμ–΄λ‘œ μ½”λ“œν™”λœ μ½”λ“œ(Code)λ₯Ό 포함할 수 μžˆλ‹€. μ΄λŸ¬ν•œ μ½”λ“œλŠ” 상기 방법듀을 μ‹€ν–‰ν•˜λŠ” ν•„μš”ν•œ κΈ°λŠ₯듀을 μ •μ˜ν•œ ν•¨μˆ˜ λ“±κ³Ό κ΄€λ ¨λœ κΈ°λŠ₯적인 μ½”λ“œ(Functional Code)λ₯Ό 포함할 수 있고, 상기 κΈ°λŠ₯듀을 상기 μ»΄ν“¨ν„°μ˜ ν”„λ‘œμ„Έμ„œκ°€ μ†Œμ •μ˜ μ ˆμ°¨λŒ€λ‘œ μ‹€ν–‰μ‹œν‚€λŠ”λ° ν•„μš”ν•œ μ‹€ν–‰ 절차 κ΄€λ ¨ μ œμ–΄ μ½”λ“œλ₯Ό 포함할 수 μžˆλ‹€. λ˜ν•œ, μ΄λŸ¬ν•œ μ½”λ“œλŠ” 상기 κΈ°λŠ₯듀을 상기 μ»΄ν“¨ν„°μ˜ ν”„λ‘œμ„Έμ„œκ°€ μ‹€ν–‰μ‹œν‚€λŠ”λ° ν•„μš”ν•œ μΆ”κ°€ μ •λ³΄λ‚˜ λ―Έλ””μ–΄κ°€ 상기 μ»΄ν“¨ν„°μ˜ λ‚΄λΆ€ λ˜λŠ” μ™ΈλΆ€ λ©”λͺ¨λ¦¬μ˜ μ–΄λŠ μœ„μΉ˜(μ£Όμ†Œ λ²ˆμ§€)μ—μ„œ μ°Έμ‘°λ˜μ–΄μ•Ό ν•˜λŠ”μ§€μ— λŒ€ν•œ λ©”λͺ¨λ¦¬ μ°Έμ‘°κ΄€λ ¨ μ½”λ“œλ₯Ό 더 포함할 수 μžˆλ‹€. λ˜ν•œ, 상기 μ»΄ν“¨ν„°μ˜ ν”„λ‘œμ„Έμ„œκ°€ 상기 κΈ°λŠ₯듀을 μ‹€ν–‰μ‹œν‚€κΈ° μœ„ν•˜μ—¬ 원격(Remote)에 μžˆλŠ” μ–΄λ– ν•œ λ‹€λ₯Έ μ»΄ν“¨ν„°λ‚˜ μ„œλ²„ λ“±κ³Ό 톡신이 ν•„μš”ν•œ 경우, μ½”λ“œλŠ” 상기 μ»΄ν“¨ν„°μ˜ 톡신 λͺ¨λ“ˆμ„ μ΄μš©ν•˜μ—¬ 원격에 μžˆλŠ” μ–΄λ– ν•œ λ‹€λ₯Έ μ»΄ν“¨ν„°λ‚˜ μ„œλ²„ λ“±κ³Ό μ–΄λ–»κ²Œ 톡신해야 ν•˜λŠ”μ§€, 톡신 μ‹œ μ–΄λ– ν•œ μ •λ³΄λ‚˜ λ―Έλ””μ–΄λ₯Ό μ†‘μˆ˜μ‹ ν•΄μ•Ό ν•˜λŠ”μ§€ 등에 λŒ€ν•œ 톡신 κ΄€λ ¨ μ½”λ“œλ₯Ό 더 포함할 수 μžˆλ‹€.The above-described program is C, C++, JAVA, machine language, etc. that a processor (CPU) of the computer can read through a device interface of the computer in order for the computer to read the program and execute the methods implemented as a program It may include code (Code) coded in the computer language of Such code may include functional code related to a function defining functions necessary for executing the methods, etc., and includes an execution procedure related control code necessary for the processor of the computer to execute the functions according to a predetermined procedure. can do. In addition, this code may further include additional information necessary for the processor of the computer to execute the functions or code related to memory reference for which location (address address) in the internal or external memory of the computer should be referenced. have. In addition, when the processor of the computer needs to communicate with any other computer or server located remotely in order to execute the functions, the code uses the communication module of the computer to determine how to communicate with any other computer or server remotely. It may further include a communication-related code for whether to communicate and what information or media to transmit and receive during communication.

상기 μ €μž₯λ˜λŠ” λ§€μ²΄λŠ”, λ ˆμ§€μŠ€ν„°, 캐쉬, λ©”λͺ¨λ¦¬ λ“±κ³Ό 같이 짧은 μˆœκ°„ λ™μ•ˆ 데이터λ₯Ό μ €μž₯ν•˜λŠ” 맀체가 μ•„λ‹ˆλΌ 반영ꡬ적으둜 데이터λ₯Ό μ €μž₯ν•˜λ©°, 기기에 μ˜ν•΄ νŒλ…(reading)이 κ°€λŠ₯ν•œ 맀체λ₯Ό μ˜λ―Έν•œλ‹€. κ΅¬μ²΄μ μœΌλ‘œλŠ”, 상기 μ €μž₯λ˜λŠ” 맀체의 μ˜ˆλ‘œλŠ” ROM, RAM, CD-ROM, 자기 ν…Œμ΄ν”„, ν”Œλ‘œν”Όλ””μŠ€ν¬, κ΄‘ 데이터 μ €μž₯μž₯치 등이 μžˆμ§€λ§Œ, 이에 μ œν•œλ˜μ§€ μ•ŠλŠ”λ‹€. 즉, 상기 ν”„λ‘œκ·Έλž¨μ€ 상기 컴퓨터가 접속할 수 μžˆλŠ” λ‹€μ–‘ν•œ μ„œλ²„ μƒμ˜ λ‹€μ–‘ν•œ 기둝맀체 λ˜λŠ” μ‚¬μš©μžμ˜ 상기 μ»΄ν“¨ν„°μƒμ˜ λ‹€μ–‘ν•œ 기둝맀체에 μ €μž₯될 수 μžˆλ‹€. λ˜ν•œ, 상기 λ§€μ²΄λŠ” λ„€νŠΈμ›Œν¬λ‘œ μ—°κ²°λœ 컴퓨터 μ‹œμŠ€ν…œμ— λΆ„μ‚°λ˜μ–΄, λΆ„μ‚°λ°©μ‹μœΌλ‘œ 컴퓨터가 읽을 수 μžˆλŠ” μ½”λ“œκ°€ μ €μž₯될 수 μžˆλ‹€.The storage medium is not a medium that stores data for a short moment, such as a register, a cache, a memory, etc., but a medium that stores data semi-permanently and can be read by a device. Specifically, examples of the storage medium include, but are not limited to, ROM, RAM, CD-ROM, magnetic tape, floppy disk, and an optical data storage device. That is, the program may be stored in various recording media on various servers accessible by the computer or in various recording media on the computer of the user. In addition, the medium may be distributed in a computer system connected to a network, and a computer-readable code may be stored in a distributed manner.

λ³Έ 발λͺ…μ˜ μ‹€μ‹œμ˜ˆμ™€ κ΄€λ ¨ν•˜μ—¬ μ„€λͺ…λœ 방법 λ˜λŠ” μ•Œκ³ λ¦¬μ¦˜μ˜ 단계듀은 ν•˜λ“œμ›¨μ–΄λ‘œ 직접 κ΅¬ν˜„λ˜κ±°λ‚˜, ν•˜λ“œμ›¨μ–΄μ— μ˜ν•΄ μ‹€ν–‰λ˜λŠ” μ†Œν”„νŠΈμ›¨μ–΄ λͺ¨λ“ˆλ‘œ κ΅¬ν˜„λ˜κ±°λ‚˜, λ˜λŠ” μ΄λ“€μ˜ 결합에 μ˜ν•΄ κ΅¬ν˜„λ  수 μžˆλ‹€. μ†Œν”„νŠΈμ›¨μ–΄ λͺ¨λ“ˆμ€ RAM(Random Access Memory), ROM(Read Only Memory), EPROM(Erasable Programmable ROM), EEPROM(Electrically Erasable Programmable ROM), ν”Œλž˜μ‹œ λ©”λͺ¨λ¦¬(Flash Memory), ν•˜λ“œ λ””μŠ€ν¬, μ°©νƒˆν˜• λ””μŠ€ν¬, CD-ROM, λ˜λŠ” λ³Έ 발λͺ…이 μ†ν•˜λŠ” 기술 λΆ„μ•Όμ—μ„œ 잘 μ•Œλ €μ§„ μž„μ˜μ˜ ν˜•νƒœμ˜ 컴퓨터 νŒλ…κ°€λŠ₯ 기둝맀체에 상주할 μˆ˜λ„ μžˆλ‹€.The steps of a method or algorithm described in connection with an embodiment of the present invention may be implemented directly in hardware, as a software module executed by hardware, or by a combination thereof. A software module may contain random access memory (RAM), read only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, hard disk, removable disk, CD-ROM, or It may reside in any type of computer-readable recording medium well known in the art to which the present invention pertains.

이상, μ²¨λΆ€λœ 도면을 참쑰둜 ν•˜μ—¬ λ³Έ 발λͺ…μ˜ μ‹€μ‹œμ˜ˆλ₯Ό μ„€λͺ…ν•˜μ˜€μ§€λ§Œ, λ³Έ 발λͺ…이 μ†ν•˜λŠ” κΈ°μˆ λΆ„μ•Όμ˜ ν†΅μƒμ˜ κΈ°μˆ μžλŠ” λ³Έ 발λͺ…이 κ·Έ 기술적 μ‚¬μƒμ΄λ‚˜ ν•„μˆ˜μ μΈ νŠΉμ§•μ„ λ³€κ²½ν•˜μ§€ μ•Šκ³ μ„œ λ‹€λ₯Έ ꡬ체적인 ν˜•νƒœλ‘œ μ‹€μ‹œλ  수 μžˆλ‹€λŠ” 것을 이해할 수 μžˆμ„ 것이닀. κ·ΈλŸ¬λ―€λ‘œ, μ΄μƒμ—μ„œ κΈ°μˆ ν•œ μ‹€μ‹œμ˜ˆλ“€μ€ λͺ¨λ“  λ©΄μ—μ„œ μ˜ˆμ‹œμ μΈ 것이며, μ œν•œμ μ΄ μ•„λ‹Œ κ²ƒμœΌλ‘œ μ΄ν•΄ν•΄μ•Όλ§Œ ν•œλ‹€.In the above, embodiments of the present invention have been described with reference to the accompanying drawings, but those of ordinary skill in the art to which the present invention pertains can realize that the present invention can be embodied in other specific forms without changing the technical spirit or essential features thereof. you will be able to understand Therefore, it should be understood that the embodiments described above are illustrative in all respects and not restrictive.

Claims (6)

컴퓨터에 μ˜ν•΄ μˆ˜ν–‰λ˜λ©°, μ—°μ„λ˜λŠ” 이미지 ν”„λ ˆμž„μ—μ„œ λ™μΌν•œ 객체λ₯Ό μ‹λ³„ν•˜λŠ” 방법에 μžˆμ–΄μ„œ,A method performed by a computer for identifying identical objects in an image frame to be curbed, comprising: μ—°μ†λ˜λŠ” 제1 및 제2 이미지 ν”„λ ˆμž„μ— ν¬ν•¨λœ 객체의 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ„ μΆ”μΆœν•˜λŠ” 단계; extracting a skeleton coordinate set of an object included in successive first and second image frames; 상기 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ—μ„œ κΈ° μ„€μ •λœ κΈ°μ€€μ’Œν‘œ 및 상기 κΈ°μ€€μ’Œν‘œμ™€ μ—°κ²°λœ μƒλŒ€μ’Œν‘œλ₯Ό μ„ μ •ν•˜λŠ” 단계; selecting a preset reference coordinate from the skeleton coordinate set and a relative coordinate connected to the reference coordinate; 상기 κΈ°μ€€μ’Œν‘œ 및 상기 μƒλŒ€μ’Œν‘œμ— κΈ°μ΄ˆν•˜μ—¬, 제1 및 제2 이미지 ν”„λ ˆμž„μ— ν¬ν•¨λœ 객체의 동일성을 νŒλ‹¨ν•˜λŠ” 단계λ₯Ό ν¬ν•¨ν•˜λŠ”, Based on the reference coordinates and the relative coordinates, comprising the step of determining the identity of the objects included in the first and second image frames, μ—°μ†λ˜λŠ” 이미지 ν”„λ ˆμž„μ— λŒ€ν•œ μŠ€μΌˆλ ˆν†€ 뢄석에 κΈ°μ΄ˆν•œ 동일 객체 식별 방법. A method of identifying identical objects based on skeleton analysis of successive image frames. 제1항에 μžˆμ–΄μ„œ, According to claim 1, 상기 μΆ”μΆœν•˜λŠ” λ‹¨κ³„λŠ”, The extracting step is 제1 이미지 ν”„λ ˆμž„μ— ν¬ν•¨λœ 볡수의 제1 κ°μ²΄λ‘œλΆ€ν„°, 각각이 상기 볡수의 제1 객체 각각에 λŒ€μ‘λ˜λŠ” 볡수의 제1 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ„ μΆ”μΆœν•˜λŠ” 단계; 및extracting, from a plurality of first objects included in a first image frame, a plurality of first skeleton coordinate sets each corresponding to each of the plurality of first objects; and 상기 제1 이미지 ν”„λ ˆμž„κ³Ό μ—°μ†λ˜λŠ” 제2 이미지 ν”„λ ˆμž„μ— ν¬ν•¨λœ 볡수의 제2 κ°μ²΄λ‘œλΆ€ν„°, 각각이 상기 볡수의 제2 객체 각각에 λŒ€μ‘λ˜λŠ” 볡수의 제2 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹μ„ μΆ”μΆœν•˜λŠ” 단계λ₯Ό ν¬ν•¨ν•˜λŠ”, Including the step of extracting a plurality of second skeleton coordinate sets each corresponding to each of the plurality of second objects from a plurality of second objects included in a second image frame continuous with the first image frame, μ—°μ†λ˜λŠ” 이미지 ν”„λ ˆμž„μ— λŒ€ν•œ μŠ€μΌˆλ ˆν†€ 뢄석에 κΈ°μ΄ˆν•œ 동일 객체 식별 방법. A method of identifying identical objects based on skeleton analysis of successive image frames. 제2항에 μžˆμ–΄μ„œ, 3. The method of claim 2, 상기 μ„ μ •ν•˜λŠ” λ‹¨κ³„λŠ”, The selecting step is 상기 볡수의 제1 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹ 각각에 λŒ€ν•˜μ—¬ 제1 κΈ°μ€€μ’Œν‘œ 및 상기 제1 κΈ°μ€€μ’Œν‘œμ™€ μ—°κ²°λœ 제1 μƒλŒ€μ’Œν‘œλ₯Ό μ„ μ •ν•˜λŠ” 단계; 및selecting a first reference coordinate and a first relative coordinate connected to the first reference coordinate for each of the plurality of first skeleton coordinate sets; and 상기 볡수의 제2 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹ 각각에 λŒ€ν•˜μ—¬ 제2 κΈ°μ€€μ’Œν‘œ 및 상기 제2 κΈ°μ€€μ’Œν‘œμ™€ μ—°κ²°λœ 제2 μƒλŒ€μ’Œν‘œλ₯Ό μ„ μ •ν•˜λŠ” 단계λ₯Ό ν¬ν•¨ν•˜λŠ”,Selecting a second reference coordinate and a second relative coordinate connected to the second reference coordinate for each of the plurality of second skeleton coordinate sets, μ—°μ†λ˜λŠ” 이미지 ν”„λ ˆμž„μ— λŒ€ν•œ μŠ€μΌˆλ ˆν†€ 뢄석에 κΈ°μ΄ˆν•œ 동일 객체 식별 방법. A method of identifying identical objects based on skeleton analysis of successive image frames. 제3항에 μžˆμ–΄μ„œ, 4. 
The method of claim 3, 상기 νŒλ‹¨ν•˜λŠ” λ‹¨κ³„λŠ”, The determining step is 상기 볡수의 제1 객체 쀑 μ–΄λŠ ν•˜λ‚˜λ₯Ό 제1 λΉ„κ΅κ°μ²΄λ‘œ μ„ μ •ν•˜λŠ” 단계; selecting any one of the plurality of first objects as a first comparison object; 상기 볡수의 제2 객체 쀑 μ–΄λŠ ν•˜λ‚˜λ₯Ό 제2 λΉ„κ΅κ°μ²΄λ‘œ μ„ μ •ν•˜λŠ” 단계; selecting any one of the plurality of second objects as a second comparison object; 상기 제1 λΉ„κ΅κ°μ²΄μ˜ 제1 κΈ°μ€€μ’Œν‘œμ™€ 상기 제2 λΉ„κ΅κ°μ²΄μ˜ 제2 κΈ°μ€€μ’Œν‘œ μ‚¬μ΄μ˜ 제1 길이와 상기 제2 λΉ„κ΅κ°μ²΄μ˜ 제2 κΈ°μ€€μ’Œν‘œ 및 제2 μƒλŒ€μ’Œν‘œ μ‚¬μ΄μ˜ 제2 길이λ₯Ό λΉ„κ΅ν•˜λŠ” 단계; Comparing a first length between the first reference coordinates of the first comparison object and the second reference coordinates of the second comparison object and a second length between the second reference coordinates and the second relative coordinates of the second comparison object step; 제1 길이가 제2 길이보닀 μž‘μ€ 경우, 상기 제1 비ꡐ객체와 상기 제2 비ꡐ객체λ₯Ό λ™μΌν•œ 객체둜 νŒλ‹¨ν•˜λŠ” 단계; 및determining that the first comparison object and the second comparison object are the same object when the first length is smaller than the second length; and 제1 길이가 제2 길이보닀 μž‘μ§€ μ•Šμ€ 경우, 상기 제1 비ꡐ객체와 상기 제2 비ꡐ객체λ₯Ό μ„œλ‘œ λ‹€λ₯Έ 객체둜 νŒλ‹¨ν•˜λŠ” 단계λ₯Ό ν¬ν•¨ν•˜λŠ”,Comprising the step of determining the first comparison object and the second comparison object as different objects when the first length is not smaller than the second length, μ—°μ†λ˜λŠ” 이미지 ν”„λ ˆμž„μ— λŒ€ν•œ μŠ€μΌˆλ ˆν†€ 뢄석에 κΈ°μ΄ˆν•œ 동일 객체 식별 방법. A method of identifying identical objects based on skeleton analysis of successive image frames. 제1항에 μžˆμ–΄μ„œ, According to claim 1, 상기 κΈ°μ€€μ’Œν‘œλŠ” 상기 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹ 쀑 λͺ© 뢀뢄에 λŒ€μ‘λ˜λŠ” μ’Œν‘œμ΄κ³ , The reference coordinate is a coordinate corresponding to the neck part of the skeleton coordinate set, 상기 μƒλŒ€μ’Œν‘œλŠ” 상기 μŠ€μΌˆλ ˆν†€ μ’Œν‘œμ…‹ 쀑 상기 λͺ© λΆ€λΆ„κ³Ό μ—°κ²°λœ 머리 뢀뢄에 λŒ€μ‘λ˜λŠ” μ’Œν‘œμΈ, The relative coordinates are coordinates corresponding to the head part connected to the neck part of the skeleton coordinate set, μ—°μ†λ˜λŠ” 이미지 ν”„λ ˆμž„μ— λŒ€ν•œ μŠ€μΌˆλ ˆν†€ 뢄석에 κΈ°μ΄ˆν•œ 동일 객체 식별 방법. A method of identifying identical objects based on skeleton analysis of successive image frames. ν•˜λ“œμ›¨μ–΄μΈ 컴퓨터와 κ²°ν•©λ˜μ–΄, 제1ν•­ λ‚΄μ§€ 제5ν•­ 쀑 μ–΄λŠ ν•œ ν•­μ˜ 방법을 μ‹€ν–‰μ‹œν‚€κΈ° μœ„ν•˜μ—¬ 맀체에 μ €μž₯된, ν”„λ‘œκ·Έλž¨.A program that is combined with a computer that is hardware and stored in a medium to execute the method of any one of claims 1 to 5.
PCT/KR2021/017369 2020-12-22 2021-11-24 Same object identification device and identification method, based on skeleton analysis of continuous image frames Ceased WO2022139200A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2020-0181252 2020-12-22
KR1020200181252A KR20220090248A (en) 2020-12-22 2020-12-22 Identical object identification device and identification method based on skeleton analysis for consecutive image frames

Publications (1)

Publication Number Publication Date
WO2022139200A1 true WO2022139200A1 (en) 2022-06-30

Family

ID=82158148

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/017369 Ceased WO2022139200A1 (en) 2020-12-22 2021-11-24 Same object identification device and identification method, based on skeleton analysis of continuous image frames

Country Status (2)

Country Link
KR (1) KR20220090248A (en)
WO (1) WO2022139200A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102826071B1 (en) * 2024-08-29 2025-06-27 차인성 Method of identifying an object, and computer program stored on a recording medium to execute the method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4994525B1 (en) * 2011-01-26 2012-08-08 γƒ‘γƒŠγ‚½γƒ‹γƒƒγ‚―ζ ͺ式会瀾 Joint region display device, joint region detection device, joint region belonging degree calculating device, joint-like region belonging degree calculating device, and joint region displaying method
KR101956275B1 (en) * 2012-09-26 2019-06-24 μ‚Όμ„±μ „μžμ£Όμ‹νšŒμ‚¬ Method and apparatus for detecting information of body skeleton and body region from image
US20190251341A1 (en) * 2017-12-08 2019-08-15 Huawei Technologies Co., Ltd. Skeleton Posture Determining Method and Apparatus, and Computer Readable Storage Medium
JP2019145025A (en) * 2018-02-23 2019-08-29 η”°δΈ­γ€€ζˆε…Έ Positional relationship determination device
KR102165128B1 (en) * 2018-03-06 2020-10-13 μ†Œλ‹ˆ μ£Όμ‹νšŒμ‚¬ Automated tracking and retaining of an articulated object in a sequence of image frames

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101762010B1 (en) 2015-08-28 2017-07-28 κ²½ν¬λŒ€ν•™κ΅ μ‚°ν•™ν˜‘λ ₯단 Method of modeling a video-based interactive activity using the skeleton posture datset

Also Published As

Publication number Publication date
KR20220090248A (en) 2022-06-29

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21911267

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21911267

Country of ref document: EP

Kind code of ref document: A1