
CN119399706A - Method, device and system for identifying abnormal operation in the kitchen

Info

Publication number
CN119399706A
CN119399706A (application number CN202411998426.0A)
Authority
CN
China
Prior art keywords
detection area
food
intersect
target image
article
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411998426.0A
Other languages
Chinese (zh)
Other versions
CN119399706B (en)
Inventor
来焕明
徐伟栋
王超
郭俊虎
何洋
石夏锋
刘帆
宋杰伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision System Technology Co Ltd
Original Assignee
Hangzhou Hikvision System Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision System Technology Co Ltd filed Critical Hangzhou Hikvision System Technology Co Ltd
Priority to CN202411998426.0A priority Critical patent/CN119399706B/en
Publication of CN119399706A publication Critical patent/CN119399706A/en
Application granted granted Critical
Publication of CN119399706B publication Critical patent/CN119399706B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/421Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation by analysing segments intersecting the pattern
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/18Status alarms
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Emergency Management (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The present application discloses a method, device and system for identifying abnormal operation in the kitchen. The method comprises: obtaining a target image of a target environment area in the kitchen, the target environment area comprising at least one detection area; identifying a first article and a second article in the detection area, and determining whether the first article and the second article intersect according to their position information, wherein the first article is a tool for processing food material, the second article is a movable article, and the second article is prohibited from being placed on the first article; and if the first article intersects the second article, outputting an alarm instruction. The present application can improve the efficiency of identifying illegal operations by kitchen staff.

Description

Method, device and system for identifying abnormal operation in the kitchen
Technical Field
The application relates to the technical field of computer vision, and in particular to a method, a device and a system for identifying abnormal operation in the kitchen.
Background
In a restaurant kitchen, strict sanitary requirements are imposed on kitchen staff; one of them is that floor cleaning tools such as brooms, mops and dustpans must not be washed in a sink used for food. At present, such illegal operations can only be prevented by enterprise management measures, which increases cost and management difficulty and is inefficient.
Disclosure of Invention
Based on the above, it is necessary to provide a method, a device and a system for identifying abnormal kitchen operations, which can solve the problem of difficult management of illegal kitchen operations, improve the efficiency of identifying illegal kitchen operations, and reduce the management cost and management difficulty.
In a first aspect, the present application provides a method of identifying abnormal kitchen operation, the method comprising:
acquiring a target image of a target environment area in the kitchen, wherein the target environment area at least comprises a detection area;
Identifying a first article and a second article in the detection area, and determining whether the first article and the second article are intersected according to the position information of the first article and the second article, wherein the first article is a tool for processing food materials, the second article is a movable article, and the second article is forbidden to be placed on the first article;
and if the first article is intersected with the second article, outputting an alarm instruction.
In one embodiment, a first image acquisition device is configured to be able to identify the first article and the second article, a second image acquisition device is configured to be able to identify the first article and the second article, and the two devices differ in installation position and shooting angle, the method further comprising:
The first image acquisition device captures a first target image of the target environment area, identifies a first article and a second article in a detection area of the first target image, and determines whether the first article and the second article intersect;
The second image acquisition device captures a second target image of the target environment area, identifies a first article and a second article in a detection area of the second target image, and determines whether the first article and the second article intersect;
If the first image acquisition device determines that the first article intersects the second article, and the second image acquisition device determines that the first article intersects the second article, an alarm instruction is output.
In one embodiment, the first image capturing device is a first camera and the second image capturing device is a second camera, the method further comprising:
The second camera captures the second target image at a first time interval in the event that the first camera does not determine that the first item and the second item intersect;
in the event that the first camera determines that the first item and the second item intersect, the second camera captures a second target image at a second time interval, wherein the second time interval is less than the first time interval.
In one embodiment, if the first image capturing device determines that the first article and the second article intersect, and the second image capturing device determines that the first article and the second article intersect, outputting an alarm instruction, comprising:
The first image acquisition equipment determines that the first article and the second article are intersected, and outputs a first identification result to the server;
the second image acquisition equipment determines that the first article is intersected with the second article and outputs a second identification result to the server;
and the server responds to the first identification result and the second identification result and outputs an alarm instruction.
In one embodiment, the method further comprises:
Obtaining false alarm information fed back by users for alarm instructions within a preset time period, determining a movable article whose false alarm rate is higher than a preset false alarm rate threshold, and dividing the movable article into at least a first part and a second part, wherein, in the false alarm information fed back by the users, the number of times the first part is determined to intersect the first article is greater than a first threshold, or the proportion of images in which the first part intersects the first article, among the false alarm images fed back by the users in which the first article intersects the second article, is greater than a second threshold;
if it is determined that the first item intersects the first portion of the movable item, no alert instruction is output.
In one embodiment, the method further comprises:
acquiring a plurality of specific false positive images, wherein the specific false positive images are false positive images of which the first part is determined to intersect with the first article;
marking the intersection positions of the first part and the first object in a plurality of specific false positive images respectively;
Dividing the first part into at least a third sub-part and a fourth sub-part according to the marked intersection positions of the first part and the first article, wherein the number of times the third sub-part is determined to intersect the first article is greater than a third threshold, or the proportion of images in which the third sub-part intersects the first article, among the specific false positive images, is greater than a fourth threshold;
If the third subsection is determined to be intersected with the first object in the acquired target image, an alarm instruction is not output;
and if the fourth subsection is determined to be intersected with the first object in the acquired target image, outputting an alarm instruction.
In one embodiment, the first article is a first cleaning container for cleaning food material and the second article is a floor cleaning tool, the method further comprising:
Determining a first detection area in the target image, and determining whether the first detection area simultaneously contains a first cleaning container and a floor cleaning tool, wherein the first detection area comprises at least one first cleaning container;
if the first detection area contains both the first cleaning container and the floor cleaning tool, determining whether the first cleaning container and the floor cleaning tool intersect, or whether the first cleaning container contains the floor cleaning tool, and if so, outputting a first alarm instruction;
and/or,
The first article includes a second cleaning container for cleaning tableware and a third cleaning container for cleaning food material, and the second article includes tableware and food material, the method further comprising:
Determining a second detection area in the target image, wherein the second detection area comprises a second cleaning container and a third cleaning container;
if the second cleaning container in the second detection area is identified to be intersected with the food materials or the third cleaning container is identified to be intersected with the tableware, outputting a second warning instruction;
and/or,
The first article comprises a first placing plate, the second article comprises a first type of food material and a second type of food material, the first placing plate is used for processing the first type of food material, and the method further comprises:
determining a third detection area in the target image, wherein the third detection area at least comprises a first placing plate;
If the first placing plate and the second food materials are simultaneously identified in the third detection area and the first placing plate and the second food materials are intersected, outputting a third warning instruction;
and/or,
The first article comprises a second placing plate, the second article comprises a first type of food material and a second type of food material, the second placing plate is used for processing the second type of food material, and the method further comprises:
Determining a fourth detection area in the target image, wherein the fourth detection area at least comprises a second placing plate;
If the second placing plate and the first food materials are simultaneously identified in the fourth detection area and the second placing plate is intersected with the first food materials, outputting a fourth warning instruction;
and/or,
The first article is a food material processing plate and the second article is a cleaning cloth, the method further comprising:
determining a fifth detection area in the target image, wherein the fifth detection area comprises a food processing plate;
and if the food material processing plate and the cleaning cloth are simultaneously identified in the fifth detection area and the food material processing plate and the cleaning cloth are intersected, outputting a fifth warning instruction.
In one embodiment, determining whether the first cleaning container and the floor cleaning tool intersect, or the first cleaning container contains the floor cleaning tool, comprises:
acquiring an identification frame of the first cleaning container in the first detection area;
acquiring an identification frame of the floor cleaning tool in the first detection area;
determining whether an overlapping area exists between the identification frame of the first cleaning container and the identification frame of the floor cleaning tool according to the position coordinates of the identification frame of the first cleaning container in the target image and the position coordinates of the identification frame of the floor cleaning tool in the target image;
if the ratio of the area of the overlapping area to the area of the identification frame of the first cleaning container is greater than or equal to a preset first ratio threshold, determining that the first cleaning container contains the floor cleaning tool; otherwise, determining that the first cleaning container and the floor cleaning tool intersect;
and/or,
Identifying the first placing plate and the second type of food material simultaneously in the third detection area, and determining that the first placing plate and the second type of food material intersect, comprises:
acquiring an identification frame of the first placing plate in the third detection area;
acquiring an identification frame of the second food material in the third detection area;
Determining whether an overlapping area exists between the identification frame of the first placing plate and the identification frame of the second food material according to the position coordinates of the identification frame of the first placing plate in the target image and the position coordinates of the identification frame of the second food material in the target image;
if the ratio of the area of the overlapping area to the area of the identification frame of the first placing plate is greater than or equal to a preset second ratio threshold, determining that the first placing plate intersects the second type of food material;
and/or,
If the second placing plate and the first food material are simultaneously identified in the fourth detection area, and the second placing plate and the first food material intersect, the method comprises the following steps:
Acquiring an identification frame of the second placing plate in the fourth detection area;
Acquiring an identification frame of the first food material in the fourth detection area;
Determining whether an overlapping area exists between the identification frame of the second placing plate and the identification frame of the first food material according to the position coordinates of the identification frame of the second placing plate in the target image and the position coordinates of the identification frame of the first food material in the target image;
if the ratio of the area of the overlapping area to the area of the identification frame of the second placing plate is greater than or equal to a preset third ratio threshold, determining that the second placing plate intersects the first type of food material;
and/or,
The food material processing plate is configured in a first color and the cleaning cloth is configured in a second color, the first color being different from the second color, wherein, if the food material processing plate and the cleaning cloth are simultaneously identified in the fifth detection area and the food material processing plate and the cleaning cloth intersect, outputting a fifth warning instruction comprises:
Performing color recognition on the target object in the fifth detection area, wherein the target object with the first color is recognized as a food processing plate, and the target object with the second color is recognized as cleaning cloth;
And if the food material processing plate is intersected with the cleaning cloth, outputting a fifth warning instruction.
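As an illustrative sketch only, the color-based identification described above could be performed with HSV range masks as below; the specific color ranges, the pixel-fraction threshold and the use of OpenCV are assumptions for illustration rather than part of the application.

```python
# Illustrative only: identify the board and the cloth in the fifth detection area by
# color, using assumed HSV ranges; values and the 0.4 fraction are placeholders.
import cv2
import numpy as np

FIRST_COLOR_RANGE = ((35, 60, 60), (85, 255, 255))     # e.g. a green food material processing plate (assumed)
SECOND_COLOR_RANGE = ((100, 60, 60), (130, 255, 255))  # e.g. a blue cleaning cloth (assumed)

def color_fraction(region_bgr: np.ndarray, hsv_range) -> float:
    """Fraction of pixels in a detected object's region that fall inside the HSV range."""
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_range[0]), np.array(hsv_range[1]))
    return float(np.count_nonzero(mask)) / mask.size

def is_first_color(region_bgr: np.ndarray, min_fraction: float = 0.4) -> bool:
    """Treat a detected object region as the first color if enough pixels fall in range."""
    return color_fraction(region_bgr, FIRST_COLOR_RANGE) >= min_fraction
```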
In a second aspect, the present application provides an apparatus for identifying abnormal operation of a kitchen, the apparatus comprising:
the acquisition module is used for acquiring a target image of a target environment area in the kitchen, wherein the target environment area at least comprises a detection area;
the identification module is used for identifying a first article and a second article in the detection area, and determining whether the first article and the second article intersect according to the position information of the first article and the second article;
and the alarm module is used for outputting an alarm instruction if the first article is intersected with the second article.
In a third aspect, the present application provides a system for identifying abnormal operation of a kitchen, the system comprising:
The image acquisition equipment is used for acquiring a target image of a target environment area in the kitchen, wherein the target environment area at least comprises a detection area, a first object and a second object in the detection area are identified, whether the first object and the second object are intersected or not is determined according to the position information of the first object and the second object, and if the first object and the second object are intersected, an identification result is output;
And the server is used for responding to the identification result and outputting an alarm instruction.
In a fourth aspect, the application provides a computer device comprising a memory storing a computer program and a processor implementing the steps of the method of identifying abnormal kitchen operation as described above when the computer program is executed by the processor.
In a fifth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a method of identifying kitchen abnormal operation as described above.
According to the application, a target image of the target environment area in the kitchen is acquired, the detection area in the target image is identified, and the first article and the second article in the detection area are identified, where the first article is a tool for processing food material, the second article is a movable article, and the second article is prohibited from being placed on the first article. Whether the first article and the second article intersect is then judged; if they intersect, it is determined that a kitchen staff member has performed an illegal operation, for example washing a mop in a sink used for cleaning food material. This solves the problem that illegal operations by kitchen staff are difficult to manage, allows such behavior to be identified automatically and effectively and discovered in time, improves store management efficiency, and reduces the difficulty and cost of managing staff.
Drawings
FIG. 1 is a first flowchart of a method for identifying abnormal operation in the kitchen according to an embodiment of the present application;
FIG. 2 is a second flowchart of a method for identifying abnormal operation in the kitchen according to an embodiment of the present application;
FIG. 3 is a third flowchart of a method for identifying abnormal operation in the kitchen according to an embodiment of the present application;
FIG. 4 is a fourth flowchart of a method for identifying abnormal operation in the kitchen according to an embodiment of the present application;
FIG. 5 is a fifth flowchart of a method for identifying abnormal operation in the kitchen according to an embodiment of the present application;
FIG. 6 is a sixth flowchart of a method for identifying abnormal operation in the kitchen according to an embodiment of the present application;
FIG. 7 is a seventh flowchart of a method for identifying abnormal operation in the kitchen according to an embodiment of the present application;
FIG. 8 is an eighth flowchart of a method for identifying abnormal operation in the kitchen according to an embodiment of the present application;
FIG. 9 is a ninth flowchart of a method for identifying abnormal operation in the kitchen according to an embodiment of the present application;
FIG. 10 is a tenth flowchart of a method for identifying abnormal operation in the kitchen according to an embodiment of the present application;
FIG. 11 is an eleventh flowchart of a method for identifying abnormal operation in the kitchen according to an embodiment of the present application;
FIG. 12 is a twelfth flowchart of a method for identifying abnormal operation in the kitchen according to an embodiment of the present application;
FIG. 13 is a block diagram of an apparatus for identifying abnormal operation in the kitchen according to an embodiment of the present application;
FIG. 14 is a block diagram of a system for identifying abnormal operation in the kitchen according to an embodiment of the present application;
FIG. 15 is a block diagram of a computer device according to an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the specific embodiments shown in the drawings, but the present application is not limited to these embodiments; structural, method, or functional modifications made by those skilled in the art based on these embodiments fall within the scope of the present application.
The embodiment of the application provides a method for identifying abnormal kitchen operation, wherein an execution subject of the method can be a video camera or a camera, and the video camera or the camera can be connected with a server. The video camera or the camera may be provided with a processor, a transceiver, and an image capturing device, where the image capturing device may be configured to capture an image, that is, may acquire a target image, and the processor may identify the acquired target image to identify whether the first article and the second article intersect, and the transceiver may be configured to perform data transmission with the server, that is, when it is determined that the first article and the second article intersect, output an identification result to the server, so that the server outputs an alarm instruction.
The execution subject of the method for identifying abnormal kitchen operation in the embodiment of the application can be electronic equipment. The camera is connected with the electronic device. The camera is used for shooting a target image. The electronic equipment acquires a target image shot by the camera to perform image processing so as to identify whether the first article and the second article are intersected, and when the first article and the second article are determined to be intersected, an alarm instruction is output.
By way of example, the electronic device may be a terminal or the like, including but not limited to a mobile terminal including but not limited to a smart phone, a smart watch, a tablet, a notebook, a smart vehicle, a smart car, etc., and a fixed terminal including but not limited to a desktop computer, a smart television, etc.
The electronic device may be a server, which may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms, but is not limited thereto.
The following describes in detail the implementation manner of the method for identifying abnormal operation of the kitchen according to the embodiment of the present application.
Referring to fig. 1, an embodiment of the present application provides a method for identifying abnormal kitchen operations, which includes steps S101-S103.
Step S101, obtaining a target image of a target environment area in the kitchen, wherein the target environment area at least comprises a detection area.
The target environmental area may be, for example, an environmental area within a kitchen of a restaurant.
The detection area may be, for example, an area where an offending operation behavior occurs. For example, the detection area includes an area of a water tank for washing the food material, and for another example, the detection area includes an area of a food material processing plate.
The target image of the target environment area is captured by a video camera or camera installed in the kitchen, and may be an image captured in real time, an image frame extracted from a video recording, or a periodically captured image. The installation position of the camera can be determined according to the conditions of the actual site so that the behavior of staff in the kitchen can be fully captured, which is not limited by the present application.
Based on the acquired target image, the detection area in the target image is determined. For example, the target environment area may be divided into zones based on the layout of the kitchen to obtain the desired detection areas. Because the installation position of the camera is generally fixed, the detection area can be determined directly in the acquired target image from the pre-divided zones. The detection area under the camera's viewing angle can be determined by image segmentation, edge detection and the like, or the detection area can be delineated in the camera by manual annotation so as to generate its coordinates; with the coordinates of each detection area generated in advance, the detection area in the target image can be located simply and efficiently, which helps to identify the target objects in the detection area effectively and accurately.
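For illustration only (not part of the claimed method), the following Python sketch shows how pre-annotated detection-area coordinates for a fixed camera might be stored and used to locate a detection area in the target image; all names and coordinate values (DETECTION_AREAS, crop_detection_area, the box values) are hypothetical.

```python
# Illustrative sketch only: locating a pre-annotated detection area in the target image.
from typing import Dict, Tuple

import numpy as np

# Detection areas annotated once per fixed camera, as axis-aligned boxes
# in image coordinates: (x_min, y_min, x_max, y_max). Values are placeholders.
DETECTION_AREAS: Dict[str, Tuple[int, int, int, int]] = {
    "sink_area": (120, 340, 560, 700),           # e.g. a detection area around a food-washing sink
    "cutting_board_area": (600, 300, 980, 620),  # e.g. a detection area around a processing board
}

def crop_detection_area(target_image: np.ndarray, area_name: str) -> np.ndarray:
    """Return the sub-image covered by a pre-annotated detection area."""
    x_min, y_min, x_max, y_max = DETECTION_AREAS[area_name]
    return target_image[y_min:y_max, x_min:x_max]
```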
Step S102, a first article and a second article in the detection area are identified, whether the first article and the second article are intersected is determined according to the position information of the first article and the second article, wherein the first article is a tool for processing food materials, the second article is a movable article, and the second article is forbidden to be placed on the first article.
And carrying out target object identification on the detection area, and identifying a first article and a second article in the detection area, wherein the first article is a tool for processing food materials, and the second article is a movable article. For example, the detection region may be image identified by a computer vision algorithm or a deep learning algorithm to identify the first item and the second item in the image. And determining whether the first article and the second article are intersected according to the position coordinates of the first article in the target image and the position coordinates of the second article in the target image.
The first article is a tool for processing food materials, the second article is a movable article, and the second article is forbidden to be placed on the first article. Illustratively, the first item is a sink for washing food materials and the second item is a floor cleaning tool, such as a broom, mop, dustpan, etc., which is prohibited from being placed in the sink in the sanitary operating specifications of the kitchen. Illustratively, the first item is a meat food material processing board and the second item is a fruit or vegetable food material that is prohibited from being processed with the meat food material processing board.
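As a minimal sketch of the intersection judgment in step S102, assuming each identified article is represented by an axis-aligned bounding box (x_min, y_min, x_max, y_max); the function name and example coordinates are illustrative, not from the patent.

```python
# Minimal sketch: deciding whether two identified articles intersect from their
# bounding boxes (x_min, y_min, x_max, y_max) in the target image.
def boxes_intersect(box_a, box_b) -> bool:
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Two axis-aligned boxes overlap only if they overlap on both axes.
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

# Hypothetical example: a sink box and a mop box that overlap near the sink edge.
sink_box = (150, 400, 520, 680)
mop_box = (480, 300, 560, 650)
print(boxes_intersect(sink_box, mop_box))  # True -> candidate violation
```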
Step S103, if the first article is intersected with the second article, an alarm instruction is output.
If the first article intersects the second article, this indicates that a staff member has performed an illegal operation, and an alarm instruction is output; the alarm instruction is used to indicate that a staff member in the kitchen has performed an illegal operation.
In this embodiment, by acquiring the target image of the target environment area in the kitchen, identifying the detection area in the target image, identifying the first article and the second article in the detection area, and judging whether the first article and the second article intersect, it can be determined that a staff member has performed an illegal operation, for example washing a mop in a sink used for cleaning food material or placing fruit or vegetables on a meat processing board. This solves the problem that illegal operations by kitchen staff are difficult to manage, allows such behavior to be identified automatically and effectively and discovered in time, improves store management efficiency, reduces the difficulty and cost of managing staff, and improves the effect of supervision.
In some embodiments, as shown in fig. 2, the first image capturing device is configured to be able to identify a first item and a second item, the second image capturing device is configured to be able to identify the first item and the second item, the mounting positions and shooting angles of the first image capturing device and the second image capturing device are different, and the method further includes steps S201-S203.
In step S201, the first image capturing device captures and acquires a first target image of the target environment area, identifies a first article and a second article in a detection area of the first target image, and determines whether the first article and the second article intersect.
In order to improve accuracy of identifying illegal operation behaviors in a kitchen, a first image acquisition device and a second image acquisition device are installed in the kitchen, the installation positions and shooting angles of the first image acquisition device and the second image acquisition device are different, so that target environment areas are shot from different angles, and target images of different shooting angles are acquired. The first image capturing device may be a video camera or a still camera and the second image capturing device may be a video camera or a still camera. The camera can be an edge camera and has an image recognition function, and the recognition output result is sent to the server, so that the data flow transmitted to the server by the camera can be effectively reduced.
The first image acquisition device captures a first target image of the target environment area, determines the detection area in the first target image, performs image recognition on the detection area, recognizes the first article and the second article in the detection area, and determines whether the first article and the second article intersect based on the position coordinates of the first article in the first target image and the position coordinates of the second article in the first target image.
In step S202, the second image capturing device captures and acquires a second target image of the target environment area, identifies a first article and a second article in a detection area of the second target image, and determines whether the first article and the second article intersect.
The second image acquisition device captures a second target image of the target environment area, determines the detection area in the second target image, performs image recognition on the detection area, recognizes the first article and the second article in the detection area, and determines whether the first article and the second article intersect based on the position coordinates of the first article in the second target image and the position coordinates of the second article in the second target image. The detection area in the first target image is the same as the detection area in the second target image.
In step S203, the first image capturing device determines that the first article and the second article intersect, and the second image capturing device determines that the first article and the second article intersect, and outputs an alarm instruction.
When the first image acquisition device determines that the first article intersects the second article, and the second image acquisition device also determines that the first article intersects the second article, this indicates that a staff member has performed an illegal operation, and an alarm instruction is output.
In this embodiment, two image acquisition devices are installed to respectively shoot a target environment area from different shooting angles, and the same detection area in the respectively acquired target image is subjected to article identification, so that when the first article and the second article are both identified to be intersected, the illegal operation behavior of the staff is judged to occur, thereby improving the accuracy of the illegal operation behavior identification, reducing the false alarm rate of identification errors, and further improving the store management efficiency.
In some embodiments, the first image capture device is a first camera and the second image capture device is a second camera, the method further comprising: the second camera capturing the second target image at a first time interval if the first camera does not determine that the first article and the second article intersect, and capturing the second target image at a second time interval if the first camera determines that the first article and the second article intersect, wherein the second time interval is less than the first time interval.
When the first camera has not determined that the first article and the second article intersect, this indicates that no illegal operation is currently suspected, so the second camera captures the second target image at the longer first time interval to reduce device consumption. When the first camera determines that the first article and the second article intersect, this indicates that an illegal operation may currently be occurring (including being in a peak period in which such behavior occurs), so the second camera captures the second target image at the shorter second time interval, which ensures that the complete course of the behavior can be monitored accurately and improves recognition accuracy. The first time interval and the second time interval may be set according to empirical values.
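A hedged sketch of how the second camera's capture interval could be switched based on the first camera's latest result; the interval values, function names and polling structure are assumptions for illustration only.

```python
# Illustrative sketch: the second camera shortens its capture interval once
# the first camera reports an intersection. Interval values are assumed.
import time

FIRST_INTERVAL_S = 10.0   # relaxed interval while no intersection has been detected
SECOND_INTERVAL_S = 1.0   # dense interval after the first camera reports an intersection

def second_camera_loop(first_camera_reports_intersection, capture_frame):
    """first_camera_reports_intersection() -> bool; capture_frame() grabs one image."""
    while True:
        if first_camera_reports_intersection():
            interval = SECOND_INTERVAL_S
        else:
            interval = FIRST_INTERVAL_S
        capture_frame()
        time.sleep(interval)
```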
In some embodiments, outputting an alarm instruction if both the first image acquisition device and the second image acquisition device determine that the first article and the second article intersect comprises: the first image acquisition device, upon determining that the first article and the second article intersect, outputting a first identification result to a server; the second image acquisition device, upon determining that the first article and the second article intersect, outputting a second identification result to the server; and the server outputting an alarm instruction in response to the first identification result and the second identification result. The first identification result and the second identification result each indicate that the first article intersects the second article. In this embodiment, when the server receives the first identification result sent by the first image acquisition device and also receives the second identification result sent by the second image acquisition device, this indicates that both image acquisition devices have observed the illegal operation, and an alarm instruction is output to remind the staff member that the operation is illegal, which reduces the false alarm rate of the alarm information.
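The following is an illustrative server-side sketch of this dual-confirmation logic, assuming each camera reports its recognition result together with a camera identifier and a detection-area name; the class and identifiers are hypothetical, and a real implementation would likely also bound the confirmation within a time window.

```python
# Hypothetical server-side aggregation: the alarm instruction is issued only when
# both cameras have reported an intersection for the same detection area.
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class AlarmAggregator:
    required_cameras: Set[str] = field(default_factory=lambda: {"camera_1", "camera_2"})
    pending: Dict[str, Set[str]] = field(default_factory=dict)  # detection area -> cameras that reported

    def on_recognition_result(self, camera_id: str, detection_area: str) -> bool:
        """Record one recognition result; return True when the alarm should be output."""
        reporters = self.pending.setdefault(detection_area, set())
        reporters.add(camera_id)
        if self.required_cameras <= reporters:
            reporters.clear()
            return True
        return False
```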
In some embodiments, as shown in fig. 3, the method further comprises the steps of:
Step S301, obtaining false alarm information fed back by users for alarm instructions within a preset time period, determining a movable article whose false alarm rate is higher than a preset false alarm rate threshold, and dividing the movable article into at least a first part and a second part, wherein, in the false alarm information fed back by the users, the number of times the first part is determined to intersect the first article is greater than a first threshold, or the proportion of images in which the first part intersects the first article, among the false alarm images fed back by the users in which the first article intersects the second article, is greater than a second threshold;
in step S302, if it is determined that the first article intersects the first portion of the movable article, no warning command is output.
In one application scenario of this embodiment, the second article is a movable article; when a staff member moves the article during use, it may accidentally touch the first article and cause a false alarm. For example, the movable article is a mop: when a staff member mops the floor, the top end of the mop rod may frequently touch the sink and thus be judged to intersect the sink, which easily produces false alarms. Identifying such articles by parts therefore reduces the false alarm rate.
Collecting false alarm information which is fed back by a user and aims at an alarm instruction, acquiring false alarm information in a preset time period, analyzing the acquired false alarm information, and calculating the false alarm rate of each movable article so as to determine the movable article with the false alarm rate higher than a preset false alarm rate threshold. The preset time period can be set according to actual conditions. The false positive rate threshold may be set based on empirical values.
The movable article is divided into at least a first part and a second part based on the false alarm information fed back by the users. The first part is the part for which the number of times it is determined to intersect the first article is greater than the first threshold, or for which the proportion of images in which it intersects the first article, among the false alarm images in which the user feedback indicates the first article intersected the second article, is greater than the second threshold.
In the false alarm information fed back, if the number of times that the first part of the movable article is determined to intersect the first article is greater than the first threshold, this indicates that the first part of the movable article is prone to false alarms; in subsequent processing, when it is determined that the first article intersects the first part of the movable article, no alarm instruction is output. The first threshold may be set according to an empirical value.
Illustratively, the false alarm images in which the first article and the second article intersect are obtained; if the proportion of images in which the first part of the movable article intersects the first article, among these false alarm images, is greater than the second threshold, this indicates that the first part of the movable article is prone to false alarms. The second threshold may be set based on empirical values.
In this embodiment, the movable article that easily produces false alarms is identified by parts, and the part that easily produces false alarms is determined; when that part intersects the first article, no alarm is generated. For example, for the mop described above, the top end of the mop rod easily produces false alarms, so the mop is divided into the mop rod and the mop head: no alarm is generated when the mop rod intersects the sink, while an alarm is generated when the mop head intersects the sink. This improves the accuracy of identifying illegal operations, reduces the false alarm rate, and further improves store management efficiency.
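A small sketch, under assumed data structures, of how the per-article false alarm rate could be computed from user feedback so that articles above the preset threshold are marked for part-level identification; the threshold value and names are illustrative.

```python
# Sketch under assumed inputs: per-article alarm counts and user-confirmed false alarms,
# used to find movable articles whose false alarm rate exceeds the preset threshold.
from collections import Counter
from typing import Iterable, List

FALSE_ALARM_RATE_THRESHOLD = 0.3  # assumed value, tuned empirically in practice

def articles_needing_part_level_check(alarmed: Iterable[str], false_alarmed: Iterable[str]) -> List[str]:
    """alarmed / false_alarmed: movable-article names, one entry per alarm / per false alarm."""
    alarm_counts = Counter(alarmed)
    false_counts = Counter(false_alarmed)
    return [
        article for article, total in alarm_counts.items()
        if total and false_counts[article] / total > FALSE_ALARM_RATE_THRESHOLD
    ]
```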
In some embodiments, as shown in fig. 4, the method further comprises the steps of:
step S401, a plurality of specific false positive images are acquired, wherein the specific false positive images are false positive images of which the first part is determined to intersect with the first article;
Step S402, marking the intersection positions of the first part and the first object in a plurality of specific false positive images respectively;
Step S403, dividing the first part into at least a third sub-part and a fourth sub-part according to the marked intersection positions of the first part and the first article, wherein the number of times the third sub-part is determined to intersect the first article is greater than a third threshold, or the proportion of images in which the third sub-part intersects the first article, among the specific false positive images, is greater than a fourth threshold;
Step S404, if the third subsection is determined to intersect with the first object in the acquired target image, no warning instruction is output;
Step S405, if it is determined that the fourth sub-portion intersects the first object in the obtained target image, an alarm instruction is output.
A plurality of specific false positive images are acquired, the specific false positive images being false positive images in which the first part is determined to intersect the first article, and image analysis is performed on them to identify more accurately the positions at which the first part of the movable article intersects the first article. The intersection positions are annotated in the specific false positive images, by manual or automatic annotation. According to the annotated intersection positions, the first part is divided into at least a third sub-part and a fourth sub-part, either manually or automatically. Image recognition is then performed on the acquired target image: if the third sub-part in the detection area is recognized to intersect the first article, no alarm instruction is output; if the fourth sub-part in the detection area is recognized to intersect the first article, an alarm instruction is output.
Illustratively, if, in the annotated position information, the number of times the third sub-part of the movable article intersects the first article is greater than the third threshold, this indicates that the third sub-part of the movable article is prone to false alarms; in subsequent processing, when it is determined that the first article intersects the third sub-part of the movable article, no alarm instruction is output. The third threshold may be set based on empirical values.
Illustratively, a number of specific false positive images are acquired; if the proportion of images in which the third sub-part of the movable article intersects the first article, among the specific false positive images, is greater than the fourth threshold, this indicates that the third sub-part of the movable article is prone to false alarms, and in subsequent processing no alarm instruction is output when the first article is determined to intersect the third sub-part of the movable article. The fourth threshold may be set based on empirical values.
In this embodiment, by annotating the intersection positions of the movable article and the first article in the false alarm images, the first part of the movable article that is prone to false alarms is further divided and identified by sub-parts: no alarm is generated when the third sub-part intersects the first article, and an alarm is generated when the fourth sub-part intersects the first article. For example, the mop rod is divided into two parts: the portion from the top end of the rod down to one third of its length is the third sub-part, and the remaining portion of the rod is the fourth sub-part. This allows illegal operations to be identified more accurately and reduces both false alarms and missed alarms.
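An illustrative sketch of the resulting alarm decision once the mop has been split into parts as in the example above; the part labels are hypothetical.

```python
# Hypothetical part labels following the mop example: the top third of the rod is
# suppressed, while the rest of the rod and the mop head still trigger the alarm.
SUPPRESSED_PARTS = {"mop_rod_top_third"}         # third sub-part: frequent false alarms
ALARMING_PARTS = {"mop_rod_lower", "mop_head"}   # fourth sub-part and the mop head

def should_alarm(detected_part: str, intersects_first_article: bool) -> bool:
    """Alarm only when an alarm-worthy sub-part intersects the first article."""
    return intersects_first_article and detected_part in ALARMING_PARTS
```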
In some embodiments, as shown in fig. 5, the first article is a first cleaning container for cleaning food material, and the second article is a floor cleaning tool, the method further comprising the steps of:
step S501, determining a first detection area in the target image, and determining whether the first detection area simultaneously contains a first cleaning container and a floor cleaning tool, wherein the first detection area comprises at least one first cleaning container;
Step S502, if the first detection area has the first cleaning container and the ground cleaning tool at the same time, determining whether the first cleaning container and the ground cleaning tool intersect or whether the first cleaning container contains the ground cleaning tool, and if so, outputting a first warning command.
In one application scenario of the embodiment, the first object is a first cleaning container for cleaning food, for example, the first cleaning container may be a sink or a pool, the second object is a floor cleaning tool, for example, the floor cleaning tool may be a mop or a broom, and the floor cleaning tool is prohibited from being placed in the first cleaning container.
A target image of the kitchen's target environment area is acquired, the first detection area in the target image is determined, image recognition is performed on the first detection area, and whether the first cleaning container and the floor cleaning tool are both present in the first detection area is identified, the first detection area including at least one first cleaning container. A convolutional neural network for object detection may be used to identify the target objects in the image. If the first cleaning container and the floor cleaning tool are both identified in the first detection area, whether they intersect, or whether the first cleaning container contains the floor cleaning tool, is judged based on the position coordinates of the first cleaning container and of the floor cleaning tool in the target image; if so, this indicates that the floor cleaning tool is partly or wholly placed in the first cleaning container, an illegal operation is determined to exist, and an alarm instruction is output to remind the staff.
In some embodiments, as shown in fig. 6, determining whether the first cleaning container and the floor cleaning tool intersect, or the first cleaning container contains the floor cleaning tool, comprises the steps of:
step S601, acquiring an identification frame of a first cleaning container in a first detection area;
step S602, acquiring an identification frame of the floor cleaning tool in the first detection area;
step S603, determining whether an overlapping area exists between the identification frame of the first cleaning container and the identification frame of the floor cleaning tool according to the position coordinates of the identification frame of the first cleaning container in the target image and the position coordinates of the identification frame of the floor cleaning tool in the target image;
Step S604, if the ratio of the area of the overlapping area to the area of the identification frame of the first cleaning container is greater than or equal to the preset first ratio threshold, determining that the first cleaning container contains the floor cleaning tool, otherwise determining that the first cleaning container and the floor cleaning tool intersect.
For example, a convolutional neural network for object detection, such as a YOLO network, may be used to perform object detection on the first detection area to obtain an object rectangle for each object; the corresponding target object is determined from these rectangles, that is, the identification frame of the first cleaning container and the identification frame of the floor cleaning tool in the first detection area are determined. Other object detection algorithms may also be used to determine the target objects in the image; the specific detection method is not limited. Whether an overlapping area exists between the identification frame of the first cleaning container and the identification frame of the floor cleaning tool is determined from the position coordinates of the two identification frames in the target image. If the ratio of the area of the overlapping area to the area of the identification frame of the first cleaning container is greater than or equal to the first ratio threshold, it is determined that the first cleaning container contains the floor cleaning tool; otherwise, it is determined that the first cleaning container and the floor cleaning tool intersect. This embodiment thus determines whether the floor cleaning tool has been placed in the first cleaning container, providing a basis for subsequent processing. The first ratio threshold may be set based on empirical values.
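For clarity, a minimal sketch of the overlap-ratio test described above, using axis-aligned identification frames; the threshold value used below is an assumption.

```python
# Minimal sketch of the overlap-ratio test: boxes are (x_min, y_min, x_max, y_max),
# and the 0.5 default stands in for the preset first ratio threshold (an assumption).
def classify_overlap(container_box, tool_box, first_ratio_threshold=0.5):
    cx1, cy1, cx2, cy2 = container_box
    tx1, ty1, tx2, ty2 = tool_box
    ox1, oy1 = max(cx1, tx1), max(cy1, ty1)
    ox2, oy2 = min(cx2, tx2), min(cy2, ty2)
    overlap = max(0, ox2 - ox1) * max(0, oy2 - oy1)
    if overlap == 0:
        return "no_overlap"
    container_area = (cx2 - cx1) * (cy2 - cy1)
    # The overlap is compared with the area of the first cleaning container's frame.
    if overlap / container_area >= first_ratio_threshold:
        return "contains"     # floor cleaning tool judged to be inside the container
    return "intersects"
```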
In some embodiments, as shown in fig. 7, the first article comprises a second cleaning container and a third cleaning container, and the second article comprises tableware and food material; the second cleaning container is used for cleaning tableware, and the third cleaning container is used for cleaning food material. The method further comprises the steps of:
step S701, determining a second detection area in the target image, wherein the second detection area includes a second cleaning container and a third cleaning container;
step S702, if it is identified that the second cleaning container in the second detection area intersects the food material or the third cleaning container intersects the tableware, a second warning command is output.
In one application scenario of this embodiment, the first article comprises the second cleaning container and the third cleaning container, and the second article includes tableware and food material; the second cleaning container is used for cleaning tableware, the third cleaning container is used for cleaning food material, the second cleaning container must not be used to clean food material, and the third cleaning container must not be used to clean tableware, so that the containers for cleaning tableware and for cleaning food material are kept separate. Based on this embodiment, when the container for cleaning tableware is used to clean food material, or the container for cleaning food material is used to clean tableware, an illegal operation is recognized and warning information is output. A convolutional neural network for object detection can be used to identify the second cleaning container and the third cleaning container, and the food material and tableware, in the second detection area, which is not described in detail here.
In some embodiments, as shown in fig. 8, the first article comprises a first placement plate, the second article comprises a first type of food material and a second type of food material, the first placement plate is for processing the first type of food material, the method further comprising the steps of:
Step S801, determining a third detection area in the target image, wherein the third detection area at least comprises a first placing plate;
Step S802, if the first placing plate and the second food materials are identified in the third detection area at the same time, and the first placing plate and the second food materials are intersected, outputting a third warning instruction.
In one application scenario of this embodiment, the first article is the first placing plate, and the second article includes the first type of food material and the second type of food material; the first placing plate is used for processing the first type of food material. For example, the first placing plate is a meat processing board, the first type of food material is meat or aquatic products, and the second type of food material is fruit or vegetables; fruit and vegetables must not be processed on the meat processing board, and the boards used for processing different food materials need to be kept separate. Based on this embodiment, when the second type of food material is processed on the first placing plate intended for the first type of food material, an illegal operation is recognized and warning information is output.
In some embodiments, as shown in fig. 9, identifying the first placement plate and the second type of food material simultaneously in the third detection area, with the first placement plate and the second type of food material intersecting, comprises the following steps:
Step S901, acquiring the identification frame of the first placement plate in the third detection area;
Step S902, acquiring the identification frame of the second type of food material in the third detection area;
Step S903, determining whether an overlapping area exists between the identification frame of the first placement plate and the identification frame of the second type of food material according to the position coordinates of the identification frame of the first placement plate in the target image and the position coordinates of the identification frame of the second type of food material in the target image;
Step S904, if the ratio of the overlapping area to the area of the identification frame of the first placement plate is greater than or equal to a preset second ratio threshold, determining that the first placement plate and the second type of food material intersect.
For example, a convolutional neural network for target detection may be used to detect items in the third detection area and determine the identification frame of the first placement plate and the identification frame of the second type of food material. Whether an overlapping area exists between the two identification frames is determined according to their position coordinates in the target image. If the ratio of the overlapping area to the area of the identification frame of the first placement plate is greater than or equal to the second ratio threshold, it is determined that the first placement plate and the second type of food material intersect. Based on this embodiment, whether the second type of food material is placed on the first placement plate can be determined, providing a judgment basis for subsequent processing. The second ratio threshold may be set based on empirical values.
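As a brief illustrative sketch of this check (the coordinates and the threshold of 0.2 are assumed values, not taken from the patent):

```python
SECOND_RATIO_THRESHOLD = 0.2  # assumed empirical value

plate = (300, 150, 700, 450)  # identification frame of the first placement plate
food = (380, 200, 560, 360)   # identification frame of the second type of food material

# Overlap of the two frames, as a fraction of the plate frame's area.
ox = max(0, min(plate[2], food[2]) - max(plate[0], food[0]))
oy = max(0, min(plate[3], food[3]) - max(plate[1], food[1]))
ratio = (ox * oy) / ((plate[2] - plate[0]) * (plate[3] - plate[1]))

if ratio >= SECOND_RATIO_THRESHOLD:
    print("third warning instruction: second type of food material on the first placement plate")
```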
In some embodiments, as shown in fig. 10, the first article comprises a second placement plate, and the second article comprises a first type of food material and a second type of food material, where the second placement plate is used for processing the second type of food material. The method further comprises the following steps:
Step S1001, determining a fourth detection area in the target image, where the fourth detection area includes at least the second placement plate;
Step S1002, if the second placement plate and the first type of food material are identified in the fourth detection area at the same time, and the second placement plate and the first type of food material intersect, outputting a fourth warning instruction.
In one application scenario of this embodiment, the first article is a second placement plate, and the second article includes a first type of food material and a second type of food material. The second placement plate is used for processing the second type of food material; for example, the second placement plate is a vegetable processing plate, the first type of food material is meat or aquatic products, and the second type of food material is fruits or vegetables. Meat and aquatic products are prohibited from being processed on the vegetable processing plate, and the plates used for processing different types of food materials must be kept separate. Based on this embodiment, when the second placement plate used for processing the second type of food material is used to process the first type of food material, the current illegal operation behavior is identified and warning information is output.
In some embodiments, as shown in fig. 11, identifying the second placement plate and the first type of food material simultaneously in the fourth detection area, with the second placement plate and the first type of food material intersecting, comprises the following steps:
Step S1101, acquiring the identification frame of the second placement plate in the fourth detection area;
Step S1102, acquiring the identification frame of the first type of food material in the fourth detection area;
Step S1103, determining whether an overlapping area exists between the identification frame of the second placement plate and the identification frame of the first type of food material according to the position coordinates of the identification frame of the second placement plate in the target image and the position coordinates of the identification frame of the first type of food material in the target image;
Step S1104, if the ratio of the overlapping area to the area of the identification frame of the second placement plate is greater than or equal to a preset third ratio threshold, determining that the second placement plate and the first type of food material intersect.
For example, a convolutional neural network for target detection may be used to detect items in the fourth detection area and determine the identification frame of the second placement plate and the identification frame of the first type of food material. Whether an overlapping area exists between the two identification frames is determined according to their position coordinates in the target image. If the ratio of the overlapping area to the area of the identification frame of the second placement plate is greater than or equal to the third ratio threshold, it is determined that the second placement plate and the first type of food material intersect. Based on this embodiment, whether the first type of food material is placed on the second placement plate can be determined, providing a judgment basis for subsequent processing. The third ratio threshold may be set based on empirical values. The third detection area and the fourth detection area may be the same or different.
In some embodiments, as shown in fig. 12, the first article is a food processing plate and the second article is a cleaning cloth. The method further comprises the following steps:
Step S1201, determining a fifth detection area in the target image, wherein the fifth detection area includes the food processing plate;
Step S1202, if the food processing plate and the cleaning cloth are identified in the fifth detection area at the same time, and the food processing plate and the cleaning cloth intersect, outputting a fifth warning instruction.
In one application scenario of this embodiment, the first article is a food processing plate, the second article includes a cleaning cloth, and the cleaning cloth is prohibited from being placed on the food processing plate. Based on this embodiment, when the cleaning cloth is identified as being placed on the food processing plate, the current illegal operation behavior is determined and warning information is output.
In some embodiments, the food processing plate is configured in a first color, the cleaning cloth is configured in a second color, and the first color is different from the second color. In this case, if the food processing plate and the cleaning cloth are identified in the fifth detection area at the same time and the food processing plate and the cleaning cloth intersect, outputting the fifth warning instruction includes the following steps: performing color recognition on the target objects in the fifth detection area, recognizing the target object of the first color as the food processing plate and the target object of the second color as the cleaning cloth; and, if the food processing plate and the cleaning cloth intersect, outputting the fifth warning instruction.
In this embodiment, the food processing plate and the cleaning cloth are configured in different colors, so that whether the cleaning cloth is placed on the food processing plate can be identified, providing a judgment basis for subsequent processing.
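As an illustrative sketch only, color recognition inside the fifth detection area could be done with simple HSV thresholding in OpenCV; the specific color ranges (a red plate and a blue cloth), the minimum pixel count and the function names are assumptions chosen for this example, not values taken from the patent.

```python
import cv2
import numpy as np

# Assumed colors: first color (food processing plate) = red, second color (cleaning cloth) = blue.
PLATE_HSV_RANGE = ((0, 120, 70), (10, 255, 255))
CLOTH_HSV_RANGE = ((100, 120, 70), (130, 255, 255))
MIN_PIXELS = 500  # minimum mask size to accept a color region

def find_color_box(region_bgr, hsv_range):
    """Return the bounding box (x, y, w, h) of the largest blob of the given color, or None."""
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_range[0]), np.array(hsv_range[1]))
    if cv2.countNonZero(mask) < MIN_PIXELS:
        return None
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)

def check_fifth_detection_area(region_bgr):
    plate = find_color_box(region_bgr, PLATE_HSV_RANGE)  # first color -> food processing plate
    cloth = find_color_box(region_bgr, CLOTH_HSV_RANGE)  # second color -> cleaning cloth
    if plate and cloth:
        px, py, pw, ph = plate
        cx, cy, cw, ch = cloth
        # Axis-aligned rectangles intersect when they overlap on both axes.
        if px < cx + cw and cx < px + pw and py < cy + ch and cy < py + ph:
            return "fifth warning instruction"
    return None
```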
Based on the same inventive concept, an embodiment of the present application also provides a device for identifying abnormal kitchen operation. Since the implementation scheme by which the device solves the problem is similar to that described for the method, for the specific limitations of the device embodiments provided below, reference may be made to the limitations of the method for identifying abnormal kitchen operation described above, which are not repeated here.
Referring to fig. 13, the present application provides a device for identifying abnormal operation of a kitchen, the device comprising:
The acquisition module 1301 is configured to acquire a target image of a target environment area in the kitchen, where the target environment area includes at least one detection area;
The identification module 1302 is configured to identify a first article and a second article in the detection area, and determine whether the first article and the second article intersect according to position information of the first article and the second article, where the first article is a tool for processing food, the second article is a movable article, and the second article is prohibited from being placed on the first article;
The alarm module 1303 is configured to output an alarm instruction if the first article intersects with the second article.
In this embodiment, the target image of the target environment area in the kitchen is acquired, the detection area in the target image is identified, the first article and the second article in the detection area are recognized, and whether the first article intersects with the second article is judged in order to determine whether a staff member has performed an illegal operation. This addresses the difficulty of managing illegal operation behaviors of kitchen staff: such behaviors can be identified automatically and effectively and discovered in time, which improves store management efficiency, reduces the difficulty and cost of staff management, and improves the supervision effect.
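As a rough illustration of how the three modules might be wired together (the class name, function signatures and labels are assumptions made for this sketch, not the patented device):

```python
class KitchenAbnormalOperationDetector:
    """Illustrative wiring of acquisition, identification and alarm modules."""

    def __init__(self, acquire_fn, detect_fn, intersects_fn, alarm_fn):
        self.acquire = acquire_fn        # acquisition module: returns target image + detection areas
        self.detect = detect_fn          # identification module: returns (label, box) pairs per area
        self.intersects = intersects_fn  # geometry test on two identification frames
        self.alarm = alarm_fn            # alarm module: outputs the alarm instruction

    def run_once(self):
        image, detection_areas = self.acquire()
        for area in detection_areas:
            detections = self.detect(image, area)
            first = [box for label, box in detections if label == "first_article"]
            second = [box for label, box in detections if label == "second_article"]
            for box_a in first:
                for box_b in second:
                    if self.intersects(box_a, box_b):
                        self.alarm(area, box_a, box_b)
```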
Referring to fig. 14, the present application provides a system for identifying abnormal operation of a kitchen, the system comprising:
The image acquisition device 1401 is configured to acquire a target image of a target environment area in the kitchen, where the target environment area includes at least one detection area, to identify a first article and a second article in the detection area, to determine whether the first article and the second article intersect according to position information of the first article and the second article, and, if they intersect, to output a recognition result; the first article is a tool for processing food materials, the second article is a movable article, and the second article is prohibited from being placed on the first article.
The server 1402 is configured to output an alarm instruction in response to the recognition result.
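Purely as an illustration of this division of work (device-side recognition, server-side alarm), the reporting path could look like the following sketch; the endpoint path, the JSON fields and the use of Flask are assumptions for this example, not part of the patent.

```python
# Server-side sketch: receive recognition results and issue an alarm instruction.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/recognition-result", methods=["POST"])
def recognition_result():
    result = request.get_json(force=True)
    # Expected (assumed) fields: camera_id, first_article, second_article, intersect.
    if result.get("intersect"):
        # In a real deployment the alarm instruction might be pushed to a
        # management client or a message queue; here it is just echoed back.
        return jsonify({"alarm": True, "detail": result}), 200
    return jsonify({"alarm": False}), 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```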
The embodiment of the application also provides a computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and when the computer program is executed by a processor, any one of the methods for identifying abnormal kitchen operations is realized.
Fig. 15 is a schematic hardware structure of a computer device according to an embodiment of the present application. The computer device shown in fig. 15 includes a processor 1501, a communication interface 1502, a memory 1503, and a communication bus 1504, and the processor 1501, the communication interface 1502, and the memory 1503 perform communication with each other through the communication bus 1504. The connection between the processor 1501, the communication interface 1502 and the memory 1503 shown in fig. 15 is merely exemplary, and in the implementation, the processor 1501, the communication interface 1502 and the memory 1503 may be communicatively connected to each other by other connection manners besides the communication bus 1504.
The memory 1503 may be used to store a computer program, which may include instructions and data for implementing the steps of any of the methods for identifying abnormal kitchen operation described above. In the embodiments of the present application, the memory 1503 may be various types of storage media, such as random access memory (RAM), read-only memory (ROM), non-volatile RAM (NVRAM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), flash memory, optical memory, registers, and the like. The memory 1503 may include a hard disk and/or memory.
The processor 1501 may be a general-purpose processor, which performs certain steps and/or operations by reading and executing a computer program (for example, one stored in the memory 1503) and may use data stored in the memory 1503 in the course of performing those steps and/or operations; such a processor is, for example but not limited to, a central processing unit (CPU). In addition, the processor 1501 may also be a special-purpose processor designed to perform specific steps and/or operations, such as, but not limited to, an ASIC or an FPGA.
The communication interface 1502 may include input/output (I/O) interfaces, physical interfaces, logical interfaces, and the like for interconnecting components within the device, as well as interfaces for interconnecting the device with other devices (e.g., network devices). The communication network may be an Ethernet, a radio access network (RAN), a wireless local area network (WLAN), or the like. The communication interface 1502 may be a module, a circuit, a transceiver, or any device capable of communicating.
In implementation, the steps of the above method may be performed by integrated logic circuits in hardware in the processor 1501 or by instructions in the form of software. The method for identifying abnormal kitchen operation disclosed in the embodiments of the present application may be carried out directly by a hardware processor, or by a combination of hardware and software modules in the processor. The software modules may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory 1503; the processor 1501 reads the information in the memory 1503 and completes the steps of the above method in combination with its hardware. To avoid repetition, a detailed description is not provided here.
Although the preferred embodiments of the present application have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the application as disclosed in the accompanying claims.

Claims (12)

1. A method for identifying abnormal kitchen operation, characterized in that the method comprises:
acquiring a target image of a target environment area in the kitchen, wherein the target environment area includes at least one detection area;
identifying a first article and a second article in the detection area, and determining whether the first article and the second article intersect according to position information of the first article and the second article, wherein the first article is a tool for processing food materials, the second article is a movable article, and the second article is prohibited from being placed on the first article;
if the first article intersects with the second article, outputting an alarm instruction.

2. The method for identifying abnormal kitchen operation according to claim 1, characterized in that a first image acquisition device is configured to be able to identify the first article and the second article, a second image acquisition device is configured to be able to identify the first article and the second article, and the first image acquisition device and the second image acquisition device differ in installation position and shooting angle, the method further comprising:
the first image acquisition device acquiring a first target image of the target environment area, identifying the first article and the second article in the detection area of the first target image, and determining whether the first article and the second article intersect;
the second image acquisition device acquiring a second target image of the target environment area, identifying the first article and the second article in the detection area of the second target image, and determining whether the first article and the second article intersect;
if the first image acquisition device determines that the first article and the second article intersect, and the second image acquisition device determines that the first article and the second article intersect, outputting the alarm instruction.

3. The method for identifying abnormal kitchen operation according to claim 2, characterized in that the first image acquisition device is a first camera and the second image acquisition device is a second camera, the method further comprising:
when the first camera has not determined that the first article and the second article intersect, the second camera capturing the second target image at a first time interval;
when the first camera determines that the first article and the second article intersect, the second camera capturing the second target image at a second time interval, wherein the second time interval is smaller than the first time interval.

4. The method for identifying abnormal kitchen operation according to claim 2, characterized in that, if the first image acquisition device determines that the first article and the second article intersect and the second image acquisition device determines that the first article and the second article intersect, outputting the alarm instruction comprises:
the first image acquisition device determining that the first article and the second article intersect, and outputting a first recognition result to a server;
the second image acquisition device determining that the first article and the second article intersect, and outputting a second recognition result to the server;
the server outputting the alarm instruction in response to the first recognition result and the second recognition result.

5. The method for identifying abnormal kitchen operation according to claim 1, characterized in that the method further comprises:
acquiring false alarm information fed back by a user in response to the alarm instruction within a preset time period, determining movable articles whose false alarm rate is higher than a preset false alarm rate threshold, and dividing the movable articles into at least a first part and a second part, wherein, in the false alarm information fed back by the user, the number of times the first part is determined to intersect with the first article is greater than a first threshold, or the proportion of images in which the first part is determined to intersect with the first article, among the false alarm images fed back by the user in which the first article intersects with the second article, is greater than a second threshold;
if the first article is determined to intersect with the first part of the movable article, not outputting the alarm instruction.

6. The method for identifying abnormal kitchen operation according to claim 5, characterized in that the method further comprises:
acquiring a plurality of specific false alarm images, wherein the specific false alarm images are false alarm images in which the first part is determined to intersect with the first article;
marking, in the plurality of specific false alarm images, the intersection positions of the first part and the first article respectively;
dividing the first part into at least a third sub-part and a fourth sub-part according to the marked intersection positions of the first part and the first article, wherein the number of times the third sub-part is determined to intersect with the first article is greater than a third threshold, or the proportion of images in which the third sub-part is determined to intersect with the first article, among the specific false alarm images, is greater than a fourth threshold;
if, in the acquired target image, the third sub-part is determined to intersect with the first article, not outputting the alarm instruction;
if, in the acquired target image, the fourth sub-part is determined to intersect with the first article, outputting the alarm instruction.

7. The method for identifying abnormal kitchen operation according to claim 1, characterized in that the first article is a first cleaning container for cleaning food materials and the second article is a floor cleaning tool, the method further comprising:
determining a first detection area in the target image, and determining whether the first cleaning container and the floor cleaning tool exist in the first detection area at the same time, wherein the first detection area includes at least one first cleaning container;
if the first cleaning container and the floor cleaning tool exist in the first detection area at the same time, determining whether the first cleaning container and the floor cleaning tool intersect, or whether the first cleaning container contains the floor cleaning tool, and if they intersect or the first cleaning container contains the floor cleaning tool, outputting a first alarm instruction;
and/or,
the first article includes a second cleaning container and a third cleaning container, the second article includes tableware and food materials, the second cleaning container is used for cleaning tableware, and the third cleaning container is used for cleaning food materials, the method further comprising:
determining a second detection area in the target image, wherein the second detection area includes the second cleaning container and the third cleaning container;
if it is identified that the second cleaning container intersects with the food materials in the second detection area, or that the third cleaning container intersects with the tableware, outputting a second alarm instruction;
and/or,
the first article includes a first placement plate, the second article includes a first type of food material and a second type of food material, and the first placement plate is used for processing the first type of food material, the method further comprising:
determining a third detection area in the target image, wherein the third detection area includes at least the first placement plate;
if the first placement plate and the second type of food material are simultaneously identified in the third detection area, and the first placement plate and the second type of food material intersect, outputting a third alarm instruction;
and/or,
the first article includes a second placement plate, the second article includes a first type of food material and a second type of food material, and the second placement plate is used for processing the second type of food material, the method further comprising:
determining a fourth detection area in the target image, wherein the fourth detection area includes at least the second placement plate;
if the second placement plate and the first type of food material are simultaneously identified in the fourth detection area, and the second placement plate and the first type of food material intersect, outputting a fourth alarm instruction;
and/or,
the first article is a food processing plate and the second article is a cleaning cloth, the method further comprising:
determining a fifth detection area in the target image, wherein the fifth detection area includes the food processing plate;
if the food processing plate and the cleaning cloth are simultaneously identified in the fifth detection area, and the food processing plate and the cleaning cloth intersect, outputting a fifth alarm instruction.

8. The method for identifying abnormal kitchen operation according to claim 7, characterized in that determining whether the first cleaning container and the floor cleaning tool intersect, or whether the first cleaning container contains the floor cleaning tool, comprises:
acquiring an identification frame of the first cleaning container in the first detection area;
acquiring an identification frame of the floor cleaning tool in the first detection area;
determining whether an overlapping area exists between the identification frame of the first cleaning container and the identification frame of the floor cleaning tool according to the position coordinates of the identification frame of the first cleaning container in the target image and the position coordinates of the identification frame of the floor cleaning tool in the target image;
if the ratio of the overlapping area to the area of the identification frame of the first cleaning container is greater than or equal to a preset first ratio threshold, determining that the first cleaning container contains the floor cleaning tool; otherwise, determining that the first cleaning container and the floor cleaning tool intersect;
and/or,
identifying the first placement plate and the second type of food material simultaneously in the third detection area, with the first placement plate and the second type of food material intersecting, comprises:
acquiring an identification frame of the first placement plate in the third detection area;
acquiring an identification frame of the second type of food material in the third detection area;
determining whether an overlapping area exists between the identification frame of the first placement plate and the identification frame of the second type of food material according to the position coordinates of the identification frame of the first placement plate in the target image and the position coordinates of the identification frame of the second type of food material in the target image;
if the ratio of the overlapping area to the area of the identification frame of the first placement plate is greater than or equal to a preset second ratio threshold, determining that the first placement plate and the second type of food material intersect;
and/or,
identifying the second placement plate and the first type of food material simultaneously in the fourth detection area, with the second placement plate and the first type of food material intersecting, comprises:
acquiring an identification frame of the second placement plate in the fourth detection area;
acquiring an identification frame of the first type of food material in the fourth detection area;
determining whether an overlapping area exists between the identification frame of the second placement plate and the identification frame of the first type of food material according to the position coordinates of the identification frame of the second placement plate in the target image and the position coordinates of the identification frame of the first type of food material in the target image;
if the ratio of the overlapping area to the area of the identification frame of the second placement plate is greater than or equal to a preset second ratio threshold, determining that the second placement plate and the first type of food material intersect;
and/or,
the food processing plate is configured in a first color and the cleaning cloth is configured in a second color, the first color being different from the second color, wherein, if the food processing plate and the cleaning cloth are simultaneously identified in the fifth detection area and the food processing plate and the cleaning cloth intersect, outputting the fifth alarm instruction comprises:
performing color recognition on target objects in the fifth detection area, recognizing the target object of the first color as the food processing plate, and recognizing the target object of the second color as the cleaning cloth;
if the food processing plate and the cleaning cloth intersect, outputting the fifth alarm instruction.

9. A device for identifying abnormal kitchen operation, characterized in that the device comprises:
an acquisition module, configured to acquire a target image of a target environment area in the kitchen, wherein the target environment area includes at least one detection area;
an identification module, configured to identify a first article and a second article in the detection area, and determine whether the first article and the second article intersect according to position information of the first article and the second article, wherein the first article is a tool for processing food materials, the second article is a movable article, and the second article is prohibited from being placed on the first article;
an alarm module, configured to output an alarm instruction if the first article intersects with the second article.

10. A system for identifying abnormal kitchen operation, characterized in that the system comprises:
an image acquisition device, configured to acquire a target image of a target environment area in the kitchen, wherein the target environment area includes at least one detection area, identify a first article and a second article in the detection area, determine whether the first article and the second article intersect according to position information of the first article and the second article, and output a recognition result if they intersect, wherein the first article is a tool for processing food materials, the second article is a movable article, and the second article is prohibited from being placed on the first article;
a server, configured to output an alarm instruction in response to the recognition result.

11. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method for identifying abnormal kitchen operation according to any one of claims 1 to 8.

12. A computer-readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the steps of the method for identifying abnormal kitchen operation according to any one of claims 1 to 8 are implemented.
CN202411998426.0A 2024-12-31 2024-12-31 Method, device and system for identifying abnormal operation in the kitchen Active CN119399706B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411998426.0A CN119399706B (en) 2024-12-31 2024-12-31 Method, device and system for identifying abnormal operation in the kitchen

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411998426.0A CN119399706B (en) 2024-12-31 2024-12-31 Method, device and system for identifying abnormal operation in the kitchen

Publications (2)

Publication Number Publication Date
CN119399706A true CN119399706A (en) 2025-02-07
CN119399706B CN119399706B (en) 2025-07-18

Family

ID=94429956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411998426.0A Active CN119399706B (en) 2024-12-31 2024-12-31 Method, device and system for identifying abnormal operation in the kitchen

Country Status (1)

Country Link
CN (1) CN119399706B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7741961B1 (en) * 2006-09-29 2010-06-22 Canesta, Inc. Enhanced obstacle detection and tracking for three-dimensional imaging systems used in motor vehicles
CN113191611A (en) * 2021-04-22 2021-07-30 上海仙豆智能机器人有限公司 Vehicle aggregation monitoring method, device, equipment and medium
CN115223073A (en) * 2022-05-27 2022-10-21 浙江大华技术股份有限公司 Method for detecting placement state of target object and related equipment
US11620597B1 (en) * 2022-04-29 2023-04-04 Vita Inclinata Technologies, Inc. Machine learning real property object detection and analysis apparatus, system, and method
CN118072234A (en) * 2024-04-16 2024-05-24 成都前宏科技股份有限公司 Method, device and equipment for monitoring sanitary environment of canteen based on AI vision
CN118274905A (en) * 2024-04-11 2024-07-02 泉州七星电气有限公司 Rapid early warning method for ring main unit faults


Also Published As

Publication number Publication date
CN119399706B (en) 2025-07-18

Similar Documents

Publication Publication Date Title
WO2018025831A1 (en) People flow estimation device, display control device, people flow estimation method, and recording medium
CN114596243A (en) Defect detection method, apparatus, device, and computer-readable storage medium
US20160307143A1 (en) Employee task verification to video system
CN111582032A (en) Pedestrian detection method and device, terminal equipment and storage medium
CN105654531B (en) Method and device for drawing image contour
CN111382638A (en) Image detection method, device, equipment and storage medium
CN111666915A (en) Monitoring method, device, equipment and storage medium
CN112115745A (en) Method, device and system for identifying code missing scanning behaviors of commodities
CN112380971B (en) Behavior detection method, device and equipment
CN119399706B (en) Method, device and system for identifying abnormal operation in the kitchen
CN117994719A (en) Method, device and computer readable storage medium for identifying crowd gathering
CN117037059A (en) Equipment management method and device based on inspection monitoring and electronic equipment
CN112307944A (en) Dish inventory information processing method, dish delivery method and related device
CN112257604A (en) Image detection method, image detection device, electronic equipment and storage medium
CN111191499A (en) Fall detection method and device based on minimum center line
CN113470013B (en) Method and device for detecting moving object
CN102110295A (en) Real-time region detection method
CN108664912B (en) Information processing method and device, computer storage medium and terminal
CN119399708B (en) Method, device and system for identifying illegal picking of food
CN116912945A (en) Gesture recognition methods, devices, equipment and computer program products
CN117137382A (en) Ground identification method and device, storage medium and electronic device
CN114611967A (en) Object detection method, object detection apparatus, and computer-readable storage medium
CN116012743A (en) Fire disaster early warning method and device
CN110738828A (en) state monitoring method, device, equipment and storage medium
CN111091535A (en) Factory management method and system based on deep learning image semantic segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant