
WO2024117457A1 - Method and system for preventing theft in an unmanned store environment - Google Patents


Info

Publication number
WO2024117457A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
theft
payment
unmanned store
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2023/011921
Other languages
English (en)
Korean (ko)
Inventor
김동칠
양창모
서경은
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Korea Electronics Technology Institute
Original Assignee
Korea Electronics Technology Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Korea Electronics Technology Institute filed Critical Korea Electronics Technology Institute
Publication of WO2024117457A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00: Burglar, theft or intruder alarms
    • G08B13/18: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602: Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19613: Recognition of a predetermined image pattern or behaviour pattern indicating theft or intrusion
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00: Burglar, theft or intruder alarms
    • G08B13/18: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00: Burglar, theft or intruder alarms
    • G08B13/18: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602: Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608: Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20036: Morphological image processing
    • G06T2207/20044: Skeletonization; Medial axis transform

Definitions

  • the present invention relates to a method and system for preventing theft in an unmanned store environment.
  • the representative abnormal behavior to be detected in an unmanned store environment is theft.
  • The theft-recognition technology currently being developed detects the act of stealing items from a store only as a simple form of theft, not the sophisticated and carefully concealed theft that actually occurs.
  • An embodiment of the present invention provides a theft prevention method and system for an unmanned store environment that does not simply utilize computer vision, but determines human behavior through multiple camera views and simultaneously utilizes images of payment information on the POS system screen to recognize theft more accurately.
  • the method for preventing theft in an unmanned store environment uses a plurality of first images captured by a plurality of CCTV cameras installed in the indoor space of the unmanned store.
  • The step of extracting object information from the plurality of first images includes extracting, as the object information, product object information and person object information located in the indoor space, where the person object information may include pose information determined based on skeleton information.
  • The step of deriving the behavior information of the object based on the object information may include deriving object behavior information, related to the act of purchasing a product, based on the movement change amount of the product object information and the pose information of the person object information.
  • In the step of determining whether an act of theft has occurred based on the object's behavior information and the payment status information, when behavior information regarding the act of purchasing a product is derived but it is confirmed from the second image that payment has not been made, the act can be judged to be theft.
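The decision rule in this step can be sketched in a few lines of Python. This is an illustrative simplification, not the claimed implementation: the `Observation` type, the behavior label strings, and the boolean payment flag are assumptions standing in for the derived behavior information and the payment status extracted from the POS-screen image.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    behavior: str            # derived behavior label, e.g. "purchase" (illustrative)
    payment_confirmed: bool  # payment status read from the POS-screen (second) image

def is_theft(obs: Observation) -> bool:
    # A purchase-like action with no confirmed payment is judged to be theft.
    return obs.behavior == "purchase" and not obs.payment_confirmed
```

For example, `is_theft(Observation("purchase", payment_confirmed=False))` flags a person who takes a product while the POS screen never shows a completed payment.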
  • Some embodiments of the present invention may further include the steps of: composing, from among the second images, payment-completed images and payment-incomplete images that match the behavior information of the object as training data for an artificial intelligence algorithm; and performing training with the training data set as the input of the artificial intelligence algorithm and the payment completion result information set as its output.
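The construction of that training set can be sketched as follows. This is a hedged illustration: the event/frame dictionaries, the field names, and the matching time window are invented for the example; the text only specifies that matching POS-screen images form the input and the payment completion result forms the output label.

```python
def build_training_set(behavior_events, pos_frames, window=5.0):
    """Pair POS-screen frames with behavior events occurring within
    `window` seconds, labelling each frame 1 (payment completed) or 0."""
    samples = []
    for event in behavior_events:        # e.g. {"t": 12.0, "action": "purchase"}
        for frame in pos_frames:         # e.g. {"t": 13.5, "paid": True, "pixels": ...}
            if abs(frame["t"] - event["t"]) <= window:
                samples.append((frame["pixels"], 1 if frame["paid"] else 0))
    return samples
```

The resulting `(pixels, label)` pairs would then be fed to whatever classifier (e.g. a CNN) the system trains.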
  • The theft prevention system for an unmanned store environment includes: a plurality of CCTV cameras that obtain a first image capturing the indoor space of the unmanned store and a second image capturing the screen of a POS device; a memory storing a program for detecting theft in the unmanned store based on the first and second images; and a processor that, by executing the program stored in the memory, extracts object information from the first image, derives behavior information of the object based on the object information, extracts payment status information from the second image, and then determines whether theft has occurred based on the object's behavior information and the payment status information.
  • The processor extracts, as the object information, product object information and person object information located in the indoor space, and the person object information may include pose information determined based on skeleton information.
  • The processor may derive object behavior information related to the act of purchasing a product based on the movement change amount of the product object information and the pose information of the person object information.
  • The processor may derive behavior information regarding the act of purchasing the product, and determine that theft has occurred when it is confirmed from the second image that payment has not been made.
  • The processor composes, from among the second images, the payment-completed images and payment-incomplete images that match the behavior information of the object as training data for an artificial intelligence algorithm, and may perform training with the training data set as the input of the algorithm and the payment completion result information set as its output.
  • Because a purchasing action detected through computer vision on multiple images is linked with the POS system, theft is detected accurately; this has the advantage of reducing false positives and, as a result, reducing material damage to store owners.
  • FIG. 1 is a block diagram showing the configuration of an anti-theft system according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of a theft prevention method according to an embodiment of the present invention.
  • Figure 3 is a diagram for explaining analysis of object pose information from a first image.
  • Figure 4 is a diagram for explaining the content of determining whether an act of theft is committed using a second image in one embodiment of the present invention.
  • the present invention relates to a theft prevention method and system (100) in an unmanned store environment.
  • An embodiment of the present invention aims to recognize theft more accurately by not simply utilizing computer vision, but by determining human behavior through multiple camera views while simultaneously utilizing images of payment information on the POS system screen.
  • Figure 1 is a block diagram showing the configuration of an anti-theft system 100 according to an embodiment of the present invention.
  • the anti-theft system 100 includes an input unit 110, a communication unit 120, a display unit 130, a memory 140, and a processor 150.
  • the input unit 110 generates input data in response to user input of the theft prevention system 100.
  • the user input may be a control input such as selecting and confirming a CCTV image.
  • the input unit 110 includes at least one input means.
  • The input unit 110 may include a keyboard, keypad, dome switch, touch panel, touch keys, a mouse, menu buttons, and the like.
  • the communication unit 120 transmits and receives data to and from a plurality of CCTVs, and also communicates with external devices such as servers and data collection devices to transmit and receive data.
  • This communication unit 120 may include both a wired communication module and a wireless communication module.
  • The wired communication module can be implemented as a power line communication device, a telephone line communication device, home cable (MoCA), Ethernet, IEEE 1394, an integrated wired home network, or an RS-485 control device.
  • The wireless communication module may be composed of modules implementing functions such as WLAN (wireless LAN), Bluetooth, HDR WPAN, UWB, ZigBee, Impulse Radio, 60 GHz WPAN, Binary-CDMA, wireless USB, and wireless HDMI technology, as well as 5G (5th generation communication), LTE-A (long term evolution-advanced), LTE (long term evolution), and Wi-Fi (wireless fidelity).
  • the display unit 130 displays display data according to the operation of the theft prevention system 100.
  • the display unit 130 may display information about a plurality of CCTVs, a list of images corresponding to each CCTV, and recognition results for theft and non-theft acts, etc. on the screen.
  • The display unit 130 may include a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a micro electro mechanical systems (MEMS) display, or an electronic paper display.
  • the display unit 130 may be combined with the input unit 110 and implemented as a touch screen.
  • the memory 140 stores programs for detecting theft within an unmanned store based on a first image captured of the indoor space of the unmanned store and a second image captured of the screen of the POS device.
  • The memory 140 is a general term covering non-volatile storage devices, which retain stored information even when power is not supplied, and volatile storage devices.
  • For example, the memory 140 may be a compact flash (CF) card, a secure digital (SD) card, a memory stick, a solid-state drive (SSD), or a micro SD card.
  • the processor 150 may control at least one other component (eg, hardware or software component) of the anti-theft system 100 by executing software such as a program, and may perform various data processing or calculations.
  • The processor 150 derives behavior information by extracting object information from the images captured by the plurality of CCTVs, checks payment status information by extracting the screen of the POS device from the captured images, and determines theft based on both.
  • As the artificial intelligence algorithm, the processor 150 may use at least one of machine learning, neural network, or deep learning algorithms.
  • Examples of neural networks include the Convolutional Neural Network (CNN), the Deep Neural Network (DNN), and the Recurrent Neural Network (RNN).
  • Figure 2 is a flowchart of a theft prevention method according to an embodiment of the present invention.
  • Figure 3 is a diagram for explaining analysis of object pose information from a first image.
  • a plurality of first images captured by a plurality of CCTV cameras installed in the indoor space of the unmanned store are received (S110), and object information is extracted from the plurality of first images (S120).
  • Product object information and person object information can each be extracted as object information from the first image.
  • The present invention extracts skeleton information from the plurality of first images and analyzes the pose information of the object based on the skeleton information to obtain the person object information.
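As a rough illustration of turning skeleton keypoints into pose information, the sketch below labels an arm pose as "reaching" when the arm is nearly fully extended. The keypoint names, the extension ratio, and the 0.95 threshold are assumptions made for this example, not the pose model the invention actually uses.

```python
import math

def arm_extension(shoulder, elbow, wrist):
    """Rough reach measure: ratio of shoulder-to-wrist distance to total arm length."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    arm_len = dist(shoulder, elbow) + dist(elbow, wrist)
    return dist(shoulder, wrist) / arm_len if arm_len else 0.0

def classify_pose(keypoints, threshold=0.95):
    """Label the pose 'reaching' when the arm is nearly fully extended."""
    ext = arm_extension(keypoints["shoulder"], keypoints["elbow"], keypoints["wrist"])
    return "reaching" if ext >= threshold else "neutral"
```

A real system would classify many such geometric features (or feed the keypoints to a learned model) to recognize actions like taking a product from a shelf.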
  • the object's behavior information is derived based on the object information (S130).
  • The present invention can derive object behavior information related to the act of purchasing a product based on the movement change amount of the product object information and the pose information of the person object information.
  • Payment status information is extracted from the second image, which includes the screen of the POS device, among the plurality of first images (S140), and theft is determined based on the object's behavior information and the payment status information (S150).
  • a plurality of CCTVs installed in the indoor space of the unmanned store may capture the first image corresponding to each area of the indoor space, and the second image may be obtained from the first image.
  • the second image may be an image obtained from a CCTV installed separately to include the screen of the POS device.
  • Figure 4 is a diagram for explaining the content of determining whether an act of theft is committed using a second image in one embodiment of the present invention.
  • behavioral information regarding the act of purchasing a product is derived, and when it is confirmed from the second image that payment has not been made, it can be determined to be an act of theft.
  • one embodiment of the present invention is characterized by using an artificial intelligence algorithm in analyzing whether payment has been completed using the second image, which is the screen of the POS device.
  • The payment-completed images and payment-incomplete images among the second images that match the object's behavior information are composed as training data for the artificial intelligence algorithm, and training can be performed with the training data set as the input of the artificial intelligence algorithm and the payment completion result information set as its output.
  • An embodiment of the present invention can also determine whether theft has occurred by further considering the movement change amount of the product object information from the first image. That is, when it is confirmed, based on the person object information, that a product is moving on the shelf, the movement change amount of the product object information is calculated. The movement change amount may be calculated from the product's position in each frame of the first image over time.
  • It is then determined whether the movement change amount of the product object information exceeds a threshold; if it does, it is determined that a person has picked up the product and is moving with it, and the product object information can be tracked in real time or at regular intervals.
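The movement-change computation and threshold test described above can be sketched as follows, assuming per-frame (x, y) positions for a tracked product; the threshold value is arbitrary and chosen only for illustration.

```python
def movement_change(positions):
    """Total displacement of a product across consecutive frames,
    computed from its (x, y) position in each frame."""
    return sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(positions, positions[1:])
    )

def picked_up(positions, threshold=30.0):
    # Movement beyond the threshold suggests a person has picked the product up.
    return movement_change(positions) > threshold
```

In practice the positions would come from per-frame product detections, and a product judged "picked up" would then be tracked continuously.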
  • Meanwhile, the present invention may further consider not only the movement change amount of first product object information but also the movement change amount of second product object information adjacent to the first product. In other words, the movement change amount makes it possible to check whether the second product, adjacent to the first, moved while a person was picking up the first product. If movement of the second product occurs and the movement change amount of the first product continues to occur thereafter, it may be determined that a person is moving through the indoor space with the first product.
  • An embodiment of the present invention thus allows continuous tracking of a specific person selecting a specific product and moving while holding it, even in a situation where a plurality of people and a plurality of products are present.
  • First person object information and first product object information are detected in a first frame of an image captured by a single CCTV camera, and second person object information and second product object information are detected in a second frame consecutive to the first frame.
  • After detection, if the first and second person object information overlap each other and the first and second product object information overlap each other, they are determined to be the same person and the same product, respectively, and recognition and tracking can be performed.
  • Likewise, first person object information and first product object information may be detected in a frame of a first CCTV camera, and second person object information and second product object information may be detected in a frame of a second CCTV camera at the same point in time.
  • In this case as well, recognition and tracking can be performed by determining that they are the same person and the same product, respectively.
  • The person object information and product object information in the embodiments described above may be expressed as a bounding box of predetermined size or as a vector of predetermined size; when the overlapping portion of the bounding boxes exceeds a threshold, or when the degree of overlap in the vectors' direction or magnitude exceeds a threshold, the two may be judged to overlap.
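The bounding-box overlap criterion can be realized with the familiar intersection-over-union (IoU) measure. The sketch below is one common implementation, with the 0.5 threshold chosen for illustration rather than taken from the text.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) bounding boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)   # intersection corners
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

def same_object(box_a, box_b, threshold=0.5):
    # Treat two detections as the same person/product when overlap is large enough.
    return iou(box_a, box_b) > threshold
```

Matching detections this way across consecutive frames, or across cameras at the same time instant, yields the identity assignment needed for tracking.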
  • Depending on the implementation of the present invention, steps S110 to S150 may be further divided into additional steps or combined into fewer steps. Additionally, some steps may be omitted, or the order between steps may be changed, as needed. Even where description is omitted, the content described with reference to FIG. 1 and the content described with reference to FIGS. 2 to 4 apply to each other.
  • the embodiments of the present invention described above may be implemented as a program (or application) and stored in a medium in order to be executed in conjunction with a server, which is hardware.
  • The above-mentioned program may include code written in a computer language such as C, C++, Java, or machine language that the computer's processor (CPU) can read through the computer's device interface, so that the computer can read the program and execute the methods implemented in it. This code may include functional code related to the functions that define what is necessary to execute the methods, and control code related to the execution procedures necessary for the computer's processor to execute those functions in a predetermined order.
  • This code may further include memory-reference code indicating at which location (address) in the computer's internal or external memory the additional information or media required for the processor to execute the functions should be referenced.
  • In addition, when the computer's processor needs to communicate with a remote computer or server to execute the functions, the code may further include communication-related code specifying how the computer's communication module should communicate with that remote computer or server and what information or media should be transmitted and received during communication.
  • the storage medium refers to a medium that stores data semi-permanently and can be read by a device, rather than a medium that stores data for a short period of time, such as a register, cache, or memory.
  • examples of the storage medium include ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage device, etc., but are not limited thereto. That is, the program may be stored in various recording media on various servers that the computer can access or on various recording media on the user's computer. Additionally, the medium may be distributed to computer systems connected to a network, and computer-readable code may be stored in a distributed manner.
  • the steps of the method or algorithm described in connection with embodiments of the present invention may be implemented directly in hardware, implemented as a software module executed by hardware, or a combination thereof.
  • The software module may reside in RAM (Random Access Memory), ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium well known in the art to which the present invention pertains.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Alarm Systems (AREA)
  • Burglar Alarm Systems (AREA)

Abstract

The present invention relates to a method for preventing theft in an unmanned store environment. The method comprises the steps of: receiving a plurality of first images captured by a plurality of CCTV cameras installed in the indoor space of an unmanned store; extracting object information from the plurality of first images; deriving behavior information of an object on the basis of the object information; extracting payment status information from a second image, among the plurality of first images, that includes the screen of a POS device; and determining whether an act of theft is being committed on the basis of the behavior information of the object and the payment status information.
PCT/KR2023/011921 2022-11-29 2023-08-11 Procédé et système pour empêcher le vol dans un environnement de magasin en libre-service Ceased WO2024117457A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020220163503A KR102809905B1 (ko) 2022-11-29 2022-11-29 무인점포 환경에서의 도난 방지 방법 및 시스템
KR10-2022-0163503 2022-11-29

Publications (1)

Publication Number Publication Date
WO2024117457A1 (fr) 2024-06-06

Family

ID=91324327

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/011921 Ceased WO2024117457A1 (fr) 2022-11-29 2023-08-11 Procédé et système pour empêcher le vol dans un environnement de magasin en libre-service

Country Status (2)

Country Link
KR (1) KR102809905B1 (fr)
WO (1) WO2024117457A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102780035B1 (ko) * 2024-07-11 2025-03-12 (주)제이케이인 지능형 카메라와 연계된 연막 발생 장치 및 방법

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101868112B1 (ko) * 2017-10-30 2018-06-15 주식회사 에이씨큐 매장 관리를 위한 포스 영상 검색 방법 및 시스템
CN109345360A (zh) * 2018-11-23 2019-02-15 青岛海信智能商用系统股份有限公司 无人商店防盗方法和装置
JP2021009488A (ja) * 2019-06-28 2021-01-28 株式会社野村総合研究所 盗難抑止装置
KR20220084762A (ko) * 2020-12-14 2022-06-21 주식회사 에스원 무인 매장용 블랙리스트 등록 방법 및 이를 이용한 블랙 리스트 등록 시스템
KR102419220B1 (ko) * 2021-12-21 2022-07-08 주식회사 인피닉 소량 구매를 위한 자동 결제 방법 및 이를 위한 시스템

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102495296B1 (ko) 2020-05-22 2023-02-02 주식회사 코리아세븐 무인 점포 시스템 및 이의 비상 상황 판단 방법


Also Published As

Publication number Publication date
KR20240080013A (ko) 2024-06-05
KR102809905B1 (ko) 2025-05-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23897993

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 23897993

Country of ref document: EP

Kind code of ref document: A1