
CN116402895A - Safety verification method, unmanned forklift and storage medium - Google Patents

Safety verification method, unmanned forklift and storage medium

Info

Publication number
CN116402895A
Authority
CN
China
Prior art keywords
target
stored
verification
unmanned forklift
goods
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310651885.0A
Other languages
Chinese (zh)
Inventor
杨秉川
方牧
鲁豫杰
李陆洋
张帆
方晓曼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Visionnav Robotics Shenzhen Co Ltd
Original Assignee
Visionnav Robotics Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Visionnav Robotics Shenzhen Co Ltd filed Critical Visionnav Robotics Shenzhen Co Ltd
Priority to CN202310651885.0A
Publication of CN116402895A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66F HOISTING, LIFTING, HAULING OR PUSHING, NOT OTHERWISE PROVIDED FOR, e.g. DEVICES WHICH APPLY A LIFTING OR PUSHING FORCE DIRECTLY TO THE SURFACE OF A LOAD
    • B66F17/00 Safety devices, e.g. for limiting or indicating lifting force
    • B66F17/003 Safety devices, e.g. for limiting or indicating lifting force, for fork-lift trucks
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66F HOISTING, LIFTING, HAULING OR PUSHING, NOT OTHERWISE PROVIDED FOR, e.g. DEVICES WHICH APPLY A LIFTING OR PUSHING FORCE DIRECTLY TO THE SURFACE OF A LOAD
    • B66F9/00 Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes
    • B66F9/06 Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
    • B66F9/063 Automatically guided
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66F HOISTING, LIFTING, HAULING OR PUSHING, NOT OTHERWISE PROVIDED FOR, e.g. DEVICES WHICH APPLY A LIFTING OR PUSHING FORCE DIRECTLY TO THE SURFACE OF A LOAD
    • B66F9/00 Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes
    • B66F9/06 Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
    • B66F9/075 Constructional features or details
    • B66F9/0755 Position control; Position detectors
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Structural Engineering (AREA)
  • Transportation (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Geology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Civil Engineering (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)

Abstract

The embodiment of the invention discloses a safety verification method, an unmanned forklift and a storage medium, which are applied to the technical field of unmanned forklifts and can solve the problem of how to quickly and accurately perform safety verification on the relative position between an object to be stored and a storage location. The unmanned forklift is provided with a camera module. When the unmanned forklift executes a goods placing task, it can acquire a target image through the camera module, where the target image includes an object to be stored and a target library position; semantic segmentation is performed on the target image through a target semantic segmentation network to obtain a segmentation result for the object to be stored and the target library position; safety verification is performed on the relative position between the object to be stored and the target library position according to the segmentation result to obtain a verification result; and if the verification result is that the verification is passed, the object to be stored is placed in the target library position.

Description

Safety verification method, unmanned forklift and storage medium
Technical Field
The embodiment of the invention relates to the technical field of unmanned forklifts, and in particular to a safety verification method, an unmanned forklift and a storage medium.
Background
In an intelligent warehouse, goods are usually carried by an unmanned forklift and placed in corresponding storage locations. However, because an object to be stored shifts while being transferred by handling equipment (such as an unmanned forklift), a deviation arises between the object to be stored and the storage location, so the object can easily slide off and cause losses. Manual intervention is therefore needed to confirm whether the object to be stored is accurately placed on the storage location, which reduces goods storage efficiency, and manual judgment is itself error-prone. Therefore, how to quickly and accurately verify the relative position between an object to be stored and a storage location has become an urgent problem to be solved.
Disclosure of Invention
The embodiment of the invention provides a safety verification method, an unmanned forklift and a storage medium, which are used for solving the problem of how to quickly and accurately carry out safety verification on the relative position between an object to be stored and a library position in the prior art.
In a first aspect, a safety verification method is provided, applied to an unmanned forklift on which a camera module is arranged. The safety verification method includes: when the unmanned forklift executes a goods placing task, acquiring a target image through the camera module, wherein the target image includes an object to be stored and a target library position;
performing semantic segmentation on the target image through a target semantic segmentation network to obtain segmentation results aiming at the object to be stored and the target library position;
according to the segmentation result, carrying out safety verification on the relative position between the object to be stored and the target library position to obtain a verification result;
and if the verification result is that the verification is passed, placing the object to be stored in the target library position.
In an optional implementation manner, in the first aspect of the embodiment of the present invention, the object to be stored includes a target cargo or a goods placing platform, the goods placing platform carries the target cargo, and placing the object to be stored in the target library position if the verification result is that the verification is passed includes:
when the object to be stored includes the target cargo, if the verification result is that the verification is passed, placing the target cargo in the target library position; or
when the object to be stored includes the goods placing platform, if the verification result is that the verification is passed, placing the carried target cargo in the target library position through the goods placing platform.
In an optional implementation manner, in the first aspect of the embodiment of the present invention, performing safety verification on the relative position between the object to be stored and the target library position according to the segmentation result to obtain a verification result includes:
determining a first boundary of the object to be stored and a second boundary of the target library position according to the segmentation result;
determining deviation information between the first boundary and the second boundary, wherein the deviation information at least includes angle deviation information and distance deviation information;
if each item of the deviation information is detected to be smaller than or equal to its preset threshold, determining that the verification result is that the verification is passed.
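The boundary-deviation check described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the threshold values, the two-point boundary representation, and the use of boundary midpoints for the distance deviation are all assumptions made for the sketch.

```python
import math

# Hypothetical thresholds; the patent only speaks of a "preset difference value".
MAX_ANGLE_DEG = 3.0   # allowed angular deviation between the two boundaries
MAX_DIST = 50.0       # allowed offset between the two boundaries (e.g. mm)

def boundary_angle(p0, p1):
    """Orientation in degrees of the line through two boundary points."""
    return math.degrees(math.atan2(p1[1] - p0[1], p1[0] - p0[0]))

def verify_relative_position(first_boundary, second_boundary):
    """Return True (verification passed) only when both the angle deviation
    and the distance deviation are within their preset thresholds."""
    angle_dev = abs(boundary_angle(*first_boundary) - boundary_angle(*second_boundary))
    # Distance deviation approximated as the offset between boundary midpoints.
    mid1 = [(first_boundary[0][i] + first_boundary[1][i]) / 2 for i in (0, 1)]
    mid2 = [(second_boundary[0][i] + second_boundary[1][i]) / 2 for i in (0, 1)]
    dist_dev = math.hypot(mid1[0] - mid2[0], mid1[1] - mid2[1])
    return angle_dev <= MAX_ANGLE_DEG and dist_dev <= MAX_DIST
```

A well-aligned load (small tilt, small offset) passes; a tilted one fails, and the forklift would then not place the object.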
In an optional implementation manner, in the first aspect of the embodiment of the present invention, acquiring the target image through the camera module when the unmanned forklift executes a goods placing task includes:
acquiring the goods placing task, wherein the goods placing task includes storing the object to be stored in the target library position;
and acquiring the target image through the camera module when the unmanned forklift, carrying the object to be stored, moves to the goods placing position corresponding to the target library position.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before the target image is acquired through the camera module, the method further includes:
acquiring a plurality of test images, and labeling each test image;
and performing model training on a preset deep learning network according to the plurality of labeled test images to obtain the target semantic segmentation network, wherein the preset deep learning network is obtained by fusing a lightweight real-time semantic segmentation task model with an object contextual representation (OCR) feature extraction module.
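A per-image annotation for such training data could be structured as below. This is purely illustrative; the patent does not specify a label format, and the field names, class names and polygon representation are invented for the sketch.

```python
def make_sample(image_path, object_polygon, slot_polygon):
    """Pair one test image with polygon labels for the two classes the
    segmentation network must learn: the object to be stored and the
    target library position (both class names are hypothetical)."""
    return {
        "image": image_path,
        "labels": [
            {"class": "object_to_store", "polygon": list(object_polygon)},
            {"class": "target_library_position", "polygon": list(slot_polygon)},
        ],
    }
```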
In an optional implementation manner, in the first aspect of the embodiment of the present invention, performing model training on the preset deep learning network according to the plurality of labeled test images to obtain the target semantic segmentation network includes:
constructing a region mutual information (RMI) loss function from a shallow-feature detail loss, an intermediate-layer auxiliary loss and an output-layer loss, wherein the shallow-feature detail loss is determined according to edge feature information extracted by the preset deep learning network;
and training the preset deep learning network based on the RMI loss function to obtain the target semantic segmentation network.
In an optional implementation manner, in the first aspect of the embodiment of the present invention, training the preset deep learning network based on the RMI loss function to obtain the target semantic segmentation network includes:
performing parameter optimization by using a cosine learning-rate decay mechanism and an optimizer to adjust a first weight of the shallow-feature detail loss, a second weight of the intermediate-layer auxiliary loss and a third weight of the output-layer loss, so as to obtain target weight values;
obtaining the target semantic segmentation network according to the target weight values;
wherein the first weight is smaller than the second weight, and the second weight is smaller than the third weight.
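A minimal sketch of the weighted three-term loss and the cosine learning-rate decay is given below. The concrete weight values and learning-rate bounds are assumptions that merely respect the ordering stated above (first weight < second weight < third weight); the patent describes the weights as themselves tuned during optimization rather than fixed.

```python
import math

# Hypothetical weights obeying the stated ordering:
# shallow-detail weight < auxiliary weight < output-layer weight.
W_DETAIL, W_AUX, W_OUT = 0.4, 0.6, 1.0

def total_loss(detail_loss, aux_loss, output_loss):
    """Composite training loss: weighted sum of the three terms."""
    return W_DETAIL * detail_loss + W_AUX * aux_loss + W_OUT * output_loss

def cosine_lr(step, total_steps, lr_max=1e-2, lr_min=1e-4):
    """Cosine-annealed learning rate, decaying from lr_max to lr_min."""
    cos = 0.5 * (1 + math.cos(math.pi * step / total_steps))
    return lr_min + (lr_max - lr_min) * cos
```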
In a second aspect, an unmanned forklift is provided, where a camera module is arranged on the unmanned forklift, and the unmanned forklift includes: an acquisition module, configured to acquire a target image through the camera module when the unmanned forklift executes a goods placing task, wherein the target image includes an object to be stored and a target library position;
a processing module, configured to perform semantic segmentation on the target image through a target semantic segmentation network to obtain a segmentation result for the object to be stored and the target library position;
the processing module is further configured to perform safety verification on the relative position between the object to be stored and the target library position according to the segmentation result to obtain a verification result;
and the processing module is further configured to place the object to be stored in the target library position if the verification result is that the verification is passed.
In a third aspect, an unmanned forklift is provided, where a camera module is arranged on the unmanned forklift, and the unmanned forklift includes:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform the safety verification method in the first aspect of the embodiment of the present invention.
In a fourth aspect, a computer-readable storage medium is provided, which stores a computer program that causes a computer to execute the safety verification method in the first aspect of the embodiment of the present invention. The computer-readable storage medium includes a ROM/RAM, a magnetic disk, an optical disk, or the like.
In a fifth aspect, there is provided a computer program product for causing a computer to carry out some or all of the steps of any one of the methods of the first aspect when the computer program product is run on the computer.
In a sixth aspect, an application publishing platform is provided for publishing a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform part or all of the steps of any one of the methods of the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, the unmanned forklift is provided with a camera module, and when executing a goods placing task, it can acquire, through the camera module, a target image that includes an object to be stored and a target library position; semantic segmentation is performed on the target image through a target semantic segmentation network to obtain a segmentation result for the object to be stored and the target library position; safety verification is performed on the relative position between the object to be stored and the target library position according to the segmentation result to obtain a verification result; and if the verification result is that the verification is passed, the object to be stored is placed in the target library position. With this scheme, the unmanned forklift can perform semantic segmentation on the target image that includes the object to be stored and the target library position, and thereby judge whether the relative position between them is safe, without manual intervention for confirmation. This greatly reduces manual workload and improves the efficiency of both safety verification and goods access, and performing semantic segmentation through the trained target semantic segmentation network can effectively improve the accuracy of safety verification.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a first schematic scenario diagram of a safety verification method according to an embodiment of the present invention;
Fig. 2 is a second schematic scenario diagram of a safety verification method according to an embodiment of the present invention;
Fig. 3 is a first schematic flow chart of a safety verification method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a target image according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of semantic segmentation for a target image according to an embodiment of the present invention;
Fig. 6 is a second schematic flow chart of a safety verification method according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of a semantic segmentation result for a target image according to an embodiment of the present invention;
Fig. 8 is a third schematic flow chart of a safety verification method according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a target semantic segmentation network according to an embodiment of the present invention;
Fig. 10 is a first schematic structural diagram of an unmanned forklift according to an embodiment of the present invention;
Fig. 11 is a second schematic structural diagram of an unmanned forklift according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first" and "second" and the like in the description and in the claims are used for distinguishing between different objects, not for describing a particular sequence of the objects. For example, a first boundary and a second boundary are used to distinguish different boundaries, rather than to describe a particular order of boundaries.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, system, article, or apparatus.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described as "exemplary" or "for example" in the embodiments should not be construed as more preferred or advantageous than other embodiments or designs. Rather, the use of such words is intended to present related concepts in a concrete fashion.
When a large number of shelves are arranged in a warehouse, a forklift aisle is usually kept between adjacent shelves so that an unmanned forklift can move without obstruction and perform goods storage and retrieval operations; the unmanned forklift can then carry goods and store them in the corresponding storage locations. However, such a low-density arrangement of shelves leads to a low goods storage capacity for the whole warehouse. Therefore, to increase storage capacity, as shown in fig. 1, the distance between adjacent shelves 11 can be continually reduced, even to the point of placing the shelves 11 right next to each other, which greatly increases the goods storage capacity.
It should be noted that a common unmanned forklift can basically meet the goods storage and retrieval requirements of floor-level and medium-to-low-level storage. But as shown in fig. 1, when the storage density of the warehouse is high and the roadway 12 is made narrow, the unmanned forklift 13 cannot travel freely in the narrow roadway 12 to store and retrieve goods, so the storage and retrieval speed is slow; if instead the unmanned forklift 13 is to travel freely, the width of the roadway 12 in the warehouse must match the travel width of the unmanned forklift 13, which reduces the number of shelves 11 in the warehouse and hence the goods storage capacity of the warehouse.
Therefore, in order to improve the goods storage capacity of the warehouse while still allowing the unmanned forklift to store goods on the shelves smoothly, shuttle shelves are introduced. A shuttle shelf can consist of shelves and goods placing platforms: the unmanned forklift places the goods on a goods placing platform, and the goods placing platform then carries the goods to the corresponding storage location.
Optionally, in order to increase the storage capacity of the warehouse, the shelves are usually built with more levels. Because the height an unmanned forklift can reach is limited, when goods need to be placed on a higher storage location they can be stored and retrieved via shuttle rails arranged on the cross beams of the shelves. When goods need to be stored, the unmanned forklift first carries the goods to the position of the goods placing platform and places them on it, and the goods placing platform then carries the goods along the shuttle rails. When goods need to be retrieved, the goods placing platform carries the goods placed deep in the shelf along the shuttle rails to the position of the unmanned forklift, and the unmanned forklift then takes the goods off the goods placing platform. Shuttle rails are arranged on the cross beams of each shelf and are connected to one another, so a goods placing platform can move between different shelves and several shelves can share one goods placing platform; the number of goods placing platforms can be determined by factors such as the depth of the shelves, the total quantity of goods, and the outbound batch size and frequency.
Fig. 2 is a schematic diagram of an application scenario of the safety verification method disclosed in the embodiment of the present invention. In this scenario, a central control system is installed on a central control device 20, and the central control device 20 is connected to an unmanned forklift 21 and a goods placing platform 22 respectively and can send corresponding instructions to each of them. The central control device 20 may include, but is not limited to, a mobile phone, a tablet computer, a wearable device, a notebook computer, a personal computer (PC), etc.
Optionally, the unmanned forklift 21 is a transport vehicle equipped with an electromagnetic or optical automatic guidance device and capable of traveling along a prescribed guide path. The types of the unmanned forklift 21 may include, but are not limited to, fork-lift, under-ride, backpack-type and counterbalanced unmanned forklifts, etc. The unmanned forklift 21 may include a working attachment, which may include forks, clamping arms, a manipulator, etc., used to fork, clamp or grab material so as to transport it. In the embodiment of the invention, the working attachment of the unmanned forklift 21 is a pair of forks, that is, the unmanned forklift 21 can extend the forks under the bottom of a pallet so that the forks can lift the pallet and the goods it carries.
Optionally, a plurality of unmanned forklifts 21 may be disposed in a warehouse, and the central control device 20 may send a corresponding cargo handling instruction to each unmanned forklift 21, so that the unmanned forklifts 21 automatically perform corresponding cargo handling tasks respectively.
Optionally, as shown in fig. 2, the goods placing platform 22 can travel back and forth, or in a loop, on shuttle rails 24 arranged on shuttle shelves 23, thereby transporting the goods to a designated destination location or docking device. An intelligent sensing system is arranged on the goods placing platform 22, which can automatically memorize its original position and automatically decelerate; the goods placing platform 22 is smaller than the unmanned forklift 21 and occupies less space when transporting goods.
It should be noted that multiple goods placing platforms 22 may run on the same shuttle rail 24 at the same time, and the same goods placing platform 22 may also run on different shuttle rails 24, which is not limited in the embodiment of the present application.
Optionally, taking fig. 2 as an example, the central control device 20 may send various control instructions to one or more of the unmanned forklift 21 and the goods placing platform 22 to control those devices to work, so that, through the interaction between the unmanned forklift 21 and the goods placing platform 22, goods can be stored and retrieved rapidly and the goods storage capacity of the warehouse is improved.
It should be noted that the safety verification method provided by the embodiment of the present invention may be used to detect whether the relative position between an object to be stored and a target library position is safe. In a storage environment, the object to be stored may be cargo that needs to be stored in the target library position, that is, the unmanned forklift may place the object to be stored directly in the target library position; the safety verification method is then suitable for detecting whether the relative position between that object and the target library position is safe. The object to be stored may also be a goods placing platform on which cargo is loaded, that is, the unmanned forklift may place the cargo on the goods placing platform, the goods placing platform then moves the cargo to the target library position, and the cargo is placed in the target library position through the goods placing platform; the safety verification method is then suitable for detecting whether the relative position between the goods placing platform and the target library position is safe. Of course, the safety verification method provided by the embodiment of the present invention may also be applied to whether the relative positions between other objects to be stored and library positions are safe, and the embodiment of the present invention is not particularly limited in this respect.
The execution subject of the safety verification method provided by the embodiment of the present invention may be the unmanned forklift itself, or a functional module and/or functional entity in the unmanned forklift capable of implementing the method, which may be determined according to actual use requirements. The following takes the unmanned forklift as an example to illustrate the safety verification method provided by the embodiment of the present invention.
As shown in fig. 3, an embodiment of the present invention provides a security verification method, which may include the following steps:
301. Under the condition that the unmanned forklift executes the goods placing task, a target image is acquired through the camera module.
In the embodiment of the invention, the unmanned forklift may be provided with a camera module used for acquiring images. When the unmanned forklift is executing a goods placing task, it can acquire a target image through the camera module, and the target image may include an object to be stored and a target storage position.
Optionally, the camera module can be arranged at the root of the forks of the unmanned forklift, facing the forklift's direction of travel, so that when the unmanned forklift executes a goods placing task and moves to the position of the target library position, it can capture a target image that includes the object to be stored and the target library position through the camera module.
For example, the target image acquired by the unmanned forklift may be as shown in fig. 4, which includes the object 41 to be stored and the target library position 42.
It should be noted that the camera module may include at least one camera, and the camera module's field of view needs to cover as large an area as possible.
302. And carrying out semantic segmentation on the target image through a target semantic segmentation network to obtain a segmentation result aiming at the object to be stored and the target library position.
In the embodiment of the invention, after the unmanned forklift acquires the target image, it needs to perform safety verification on the object to be stored and the target library position, that is, it needs to determine whether the relative position between them meets the requirements. To do so, the unmanned forklift needs to determine the boundaries of the object to be stored and of the target library position, so it can process the target image by means of semantic segmentation to obtain a segmentation result for the object to be stored and the target library position.
Illustratively, as shown in fig. 5, after the semantic segmentation of the target image as shown in part a in fig. 5, a segmentation result is obtained as shown in part b in fig. 5, in which the contours of the object 41 to be stored and the target bin 42 are determined.
Optionally, the unmanned forklift may perform the semantic segmentation through a target semantic segmentation network, which may be a deep learning network trained in advance. The unmanned forklift inputs the target image into the target semantic segmentation network, and the network outputs the segmentation result for the object to be stored and the target storage location.
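As a rough illustration of how a segmentation result can be read out of such a network, the sketch below turns per-pixel class logits into binary masks for the object to be stored and the storage location; the class indices and toy logits are assumptions for illustration, not values from the invention.

```python
import numpy as np

# Hypothetical class indices; the invention does not specify its label map.
BACKGROUND, OBJECT, SLOT = 0, 1, 2

def masks_from_logits(logits):
    """Turn per-pixel class logits of shape (H, W, C) into binary masks
    for the object to be stored and the target storage location."""
    labels = np.argmax(logits, axis=-1)          # per-pixel class decision
    return labels == OBJECT, labels == SLOT

# Toy 2x2 "image": left column scores highest for OBJECT, right for SLOT.
logits = np.array([[[0.1, 0.8, 0.1], [0.1, 0.1, 0.8]],
                   [[0.1, 0.9, 0.0], [0.0, 0.2, 0.8]]])
obj_mask, slot_mask = masks_from_logits(logits)
```

The two masks are what the later boundary-extraction and deviation-check steps would operate on.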
303. And carrying out safety verification on the relative position between the object to be stored and the target library according to the segmentation result to obtain a verification result.
In the embodiment of the invention, after obtaining the segmentation result, the unmanned forklift can determine the relative position between the object to be stored and the target storage location from it and perform a safety check on that relative position to obtain a verification result. The verification result can be either that the verification passes or that it fails: a pass indicates that the current relative position between the object to be stored and the target storage location meets the requirement and is safe, while a failure indicates that it does not meet the requirement and is dangerous.
304. And if the verification result is that the verification is passed, placing the object to be stored in the target library position.
In the embodiment of the invention, after the safety check of the relative position between the object to be stored and the target storage location, if the check passes, the relative position between the current object to be stored and the target storage location is safe, so the unmanned forklift can place the object to be stored in the target storage location.
Optionally, in different warehouse environments the object to be stored may be different. For a common goods shelf, the object to be stored can be the target goods themselves, i.e., the unmanned forklift can directly place the target goods in the target storage location. For a dense goods shelf, which the unmanned forklift cannot enter, the object to be stored can be a goods placing platform that carries the target goods: the unmanned forklift places the target goods on the goods placing platform, and the goods placing platform then moves to the target storage location and places the target goods in it.
That is, placing the object to be stored in the target storage location when the verification result is a pass includes, but is not limited to, the following implementations:

Implementation one: when the object to be stored includes the target goods, if the verification result is that the verification is passed, the target goods are placed in the target storage location.

It should be noted that if the verification is passed, the relative position between the target goods and the target storage location is safe, so the unmanned forklift can directly place the target goods in the target storage location.

Implementation two: when the object to be stored includes the goods placing platform, if the verification result is that the verification is passed, the carried target goods are placed in the target storage location via the goods placing platform.

It should be noted that if the verification is passed, the relative position between the goods placing platform carrying the target goods and the target storage location is safe, so the unmanned forklift can place the target goods in the target storage location through the goods placing platform.
When the relative position between the object to be stored and the target storage location is safety-checked in this case, it is actually the relative position between the goods placing platform and the target storage location that is verified, since the unmanned forklift has already placed the target goods on the goods placing platform in advance; the goods placing platform effectively acts as a transfer carrier.
The embodiment of the invention provides a safety verification method. The unmanned forklift performs semantic segmentation on a target image that includes the object to be stored and the target storage location, and judges from the result whether the relative position between the two is safe. No manual intervention is needed for confirmation, which greatly reduces the manual workload and improves the efficiency of both the safety check and goods access; performing the segmentation with a trained target semantic segmentation network also effectively improves the accuracy of the safety check.
As shown in fig. 6, an embodiment of the present invention provides a security verification method, which may further include the following steps:
601. Acquiring a goods placing task.
In the embodiment of the invention, the unmanned forklift can acquire a goods placing task, and the goods placing task can include storing the object to be stored in the target storage location.
Optionally, the goods placing task may be sent by a central control device; the central control device may determine an unmanned forklift in an idle state from among a plurality of unmanned forklifts deployed in the warehouse and send the goods placing task to that unmanned forklift.
602. When the unmanned forklift carries the object to be stored and moves to the goods placing position corresponding to the target warehouse position, the target image is obtained through the camera module.
In the embodiment of the invention, after acquiring the goods placing task, the unmanned forklift can first fork the object to be stored, then move along a planned route to the goods placing position corresponding to the target storage location, and acquire the target image through the camera module.
It should be noted that, the target storage location is a storage location on the shuttle-type goods shelf, where goods can be stored, when the unmanned forklift can enter the goods shelf, the unmanned forklift can directly place the target goods in the target storage location, and the goods placing location corresponding to the target storage location is a forklift channel beside the target storage location; when the unmanned forklift cannot enter the goods shelf, the unmanned forklift can be matched with the goods placing platform, and target goods can be carried to a target warehouse position by the goods placing platform capable of moving on the goods shelf, and the goods placing position corresponding to the target warehouse position is the goods delivery point of the unmanned forklift and the goods placing platform.
Optionally, the unmanned forklift is used for carrying the goods to move on the ground of the warehouse, and can be responsible for carrying the goods from the warehouse-in position to the side of the goods shelf and carrying the goods from the side of the goods shelf to the warehouse-out position; the goods placing platform is used for carrying goods to move in the goods shelf, and can be responsible for carrying the goods to the storage position from the side of the goods shelf and carrying the goods to the side of the goods shelf from the storage position.
It should be noted that, the goods placing position corresponding to the target storage position may be a port of the goods shelf closest to the target storage position, or may be a position closest to the moving route of the unmanned forklift, so that the unmanned forklift may move to the goods placing position, place the target goods on the goods placing platform, and then move the goods placing platform to the target storage position.
603. And carrying out semantic segmentation on the target image through a target semantic segmentation network to obtain a segmentation result aiming at the object to be stored and the target library position.
In the embodiment of the present invention, for the description of step 603, please refer to the detailed description of step 302 in the above embodiment, and the description of the embodiment of the present invention is omitted.
604. And determining a first boundary of the object to be stored and a second boundary of the target library bit according to the segmentation result.
In the embodiment of the invention, after the unmanned forklift obtains the segmentation result, the first boundary of the object to be stored and the second boundary of the target library position can be determined from the segmentation result.
It should be noted that, as shown in fig. 5, the segmentation result may indicate the outlines of the object 41 to be stored and the target bin 42, and then the outlines of the object 41 to be stored and the target bin 42 are subjected to boundary processing, that is, the first boundary 411 of the object 41 to be stored and the second boundary 421 of the target bin 42 as shown in fig. 7 may be obtained.
Optionally, the unmanned forklift can conduct boundary processing on the segmentation result through the region prior information, the contour searching algorithm and the straight line fitting algorithm, so that a first boundary of the object to be stored and a second boundary of the target library position are obtained.
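The straight-line-fitting step mentioned above can be sketched with a simple least-squares fit; a real implementation would more likely use a contour-searching routine such as OpenCV's findContours followed by fitLine, and the edge points below are synthetic.

```python
import numpy as np

def fit_boundary_line(points):
    """Least-squares fit of y = m*x + b to edge pixels taken from a
    segmentation mask boundary. Returns (slope, intercept)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    m, b = np.polyfit(x, y, 1)                   # degree-1 polynomial fit
    return m, b

# Synthetic bottom edge of an object mask: a horizontal line at y = 5.
edge = [(x, 5.0) for x in range(10)]
m, b = fit_boundary_line(edge)
```

Fitting both the first boundary and the second boundary this way yields two lines whose angle and offset can then be compared in step 605.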
605. Deviation information between the first boundary and the second boundary is determined.
In the embodiment of the invention, after the first boundary of the object to be stored and the second boundary of the target bin are determined by the unmanned forklift, the deviation information between the first boundary and the second boundary can be determined according to the first boundary and the second boundary, and the deviation information at least comprises angle deviation information and distance deviation information.
It should be noted that the object to be stored may have slipped sideways, rotated, or shifted in other ways during transport, so there may be both a distance deviation and an angle deviation between it and the target bin. The unmanned forklift therefore computes the relative position of the first boundary and the second boundary to obtain the deviation information, namely the angle deviation information (an angle difference) and the distance deviation information (a distance difference).
606. If the deviation information is detected to be smaller than or equal to the preset difference value, the verification result is determined to be verification passing.
In the embodiment of the invention, after the unmanned forklift determines the deviation information, the deviation information can be compared with the preset difference value, so that the verification result is determined according to the comparison result.
Optionally, to prevent an accident when the object to be stored is placed, the boundary of the object to be stored should generally align with the boundary of the target bin; that is, the smaller the deviation information between the first boundary and the second boundary, the better. The unmanned forklift can therefore determine a preset difference value, which is a safety threshold, by combining the sizes of the object to be stored and the target bin with data recorded from previous incidents. If the deviation information is less than or equal to the preset difference value, the relative position between the current object to be stored and the target bin is safe, and the verification result is that the verification passes; if the deviation information is greater than the preset difference value, the relative position is dangerous, and the verification result is that the verification fails.
Optionally, because the deviation information at least includes angle deviation information and distance deviation information, the preset difference value also correspondingly includes a preset angle difference value and a preset distance difference value, and the unmanned forklift can compare the angle deviation information with the preset angle difference value, and compare the distance deviation information with the preset distance difference value to obtain comparison results respectively.
It should be noted that the comparison result may specifically include the following cases:
case one: and when the angle deviation information is smaller than or equal to a preset angle difference value and the distance deviation information is smaller than or equal to a preset distance difference value, determining that the verification result is verification passing.
And a second case: the angle deviation information is larger than a preset angle difference value, but the distance deviation information is smaller than or equal to the preset distance difference value, and the verification result is determined to be that verification fails.
And a third case: the angle deviation information is smaller than or equal to a preset angle difference value, but the distance deviation information is larger than a preset distance difference value, and the verification result is determined to be that verification fails.
Case four: the angle deviation information is larger than a preset angle difference value, the distance deviation information is larger than a preset distance difference value, and the verification result is determined to be that verification fails.
It can be seen from the four cases above that the verification result is determined to be a pass only when the deviation information is less than or equal to the preset difference value; that is, however many parameters the deviation information includes, every parameter must be less than or equal to its corresponding preset difference value for the verification to pass.
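A minimal sketch of this all-parameters-must-pass rule, with illustrative thresholds that are not taken from the text:

```python
def safety_check(angle_dev, dist_dev, max_angle, max_dist):
    """Verification passes only if every deviation is within its preset
    difference value; the four cases above collapse to a single AND."""
    return bool(angle_dev <= max_angle and dist_dev <= max_dist)

# Hypothetical thresholds: 2 degrees of angle, 30 mm of distance.
assert safety_check(1.5, 20.0, 2.0, 30.0) is True      # case one: pass
assert safety_check(3.0, 20.0, 2.0, 30.0) is False     # case two: fail
assert safety_check(1.5, 40.0, 2.0, 30.0) is False     # case three: fail
assert safety_check(3.0, 40.0, 2.0, 30.0) is False     # case four: fail
```

Adding further deviation parameters would simply extend the AND with one comparison per parameter.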
607. And if the verification result is that the verification is passed, placing the object to be stored in the target library position.
In the embodiment of the present invention, for the description of step 607, please refer to the detailed description of step 304 in the above embodiment, and the description of the embodiment of the present invention is omitted.
608. If the verification result is that the verification is not passed, outputting prompt information.
In the embodiment of the invention, if the verification result is that the verification is not passed, the relative position between the current object to be stored and the target bin is dangerous, for example because the object to be stored has shifted during transport. The unmanned forklift can then output prompt information indicating that the position between the target bin and the object to be stored is in an unsafe state, so that warehouse staff can inspect and adjust the object to be stored and the target bin according to the prompt information.
It should be noted that steps 607 and 608 are mutually exclusive: the unmanned forklift performs either step 607 or step 608, never both.
The embodiment of the invention provides a safety verification method. After acquiring a goods placing task, the unmanned forklift first moves to the goods placing position corresponding to the target storage location, then performs semantic segmentation on a target image that includes the object to be stored and the target storage location to determine the first boundary of the object to be stored and the second boundary of the target storage location, and judges from the deviation information between the two boundaries whether their relative position is safe. No manual intervention is needed for confirmation, which greatly reduces the manual workload and improves the efficiency of both the safety check and goods access; performing the segmentation with a trained target semantic segmentation network and judging safety from concrete data effectively improve the accuracy of the safety check.
As shown in fig. 8, an embodiment of the present invention provides a security verification method, which may further include the following steps:
801. Acquiring a plurality of test images, and labeling each test image.
In the embodiment of the invention, since the unmanned forklift performs semantic segmentation when carrying out the safety check between the object to be stored and the target bin, the semantic segmentation network needs to be trained in advance, and a large number of training samples are needed for the training process. The unmanned forklift can therefore acquire a plurality of test images, each of which may include an object to be stored and a target bin, and then label each test image, i.e., mark the contours or boundaries of the object to be stored and the target bin in each test image.
It should be noted that a neural network is trained by feeding it paired input items and output items, so that it learns to produce the output item by processing the input item. In the embodiment of the invention, the plurality of test images are the input items, and the test images labeled with the object to be stored and the target bin are the output items.
Optionally, distortion correction can be performed on each test image before labeling: image distortion caused by problems such as the shooting angle is unfavorable for subsequent labeling, so the unmanned forklift can correct each test image first and label it after correction.
802. And performing model training on a preset deep learning network according to the plurality of marked test images to obtain a target semantic segmentation network.
In the embodiment of the invention, the unmanned forklift can train a preset deep learning network to obtain a target semantic segmentation network suited to the safety verification method of this scheme; the training is performed using the plurality of test images and their labeled counterparts.
The preset deep learning network is obtained by fusing a lightweight real-time semantic segmentation task model and an object context feature identification OCR extraction module.
It should be noted that a commonly used model for semantic segmentation of images is the lightweight real-time semantic segmentation task model PP-LiteSeg. Although deep learning has contributed significant advances in semantic segmentation, many lightweight models do not achieve a satisfactory balance of accuracy and speed. PP-LiteSeg proposes a Flexible and Lightweight Decoder (FLD) to reduce the computational overhead of previous decoders and, to enhance feature representation, adds a Unified Attention Fusion Module (UAFM) that uses spatial attention and channel attention to generate attention weights and then fuses the input features with those weights. A Simple Pyramid Pooling Module (SPPM) is also added to aggregate global context information at low computational cost. A large number of experiments have shown that, compared with other methods, PP-LiteSeg achieves a superior trade-off between accuracy and speed.
Optionally, an object context feature (Object-Contextual Representations, OCR) extraction module may compute a set of feature representations of object regions and then propagate the object-region feature representations to each pixel based on the similarity between the object-region representations and the pixel representations. The main idea is to explicitly translate the pixel-classification problem into an object-region-classification problem, which is consistent with the original definition of semantic segmentation: the class of each pixel is the class of the object to which the pixel belongs.
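A rough numpy sketch of this region-aggregation-and-propagation idea is given below; the shapes, the softmax weighting and the random inputs are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ocr_augment(pixel_feats, region_probs):
    """pixel_feats: (N, C) per-pixel features; region_probs: (N, K) soft
    assignment of pixels to K object regions.
    1) aggregate pixel features into K object-region representations;
    2) propagate the region representations back to each pixel,
       weighted by pixel-region similarity."""
    w = region_probs / (region_probs.sum(axis=0, keepdims=True) + 1e-8)
    regions = w.T @ pixel_feats                  # (K, C) region representations
    sim = softmax(pixel_feats @ regions.T)       # (N, K) pixel-region similarity
    return sim @ regions                         # (N, C) augmented features

rng = np.random.default_rng(0)
feats = rng.random((6, 4))                       # 6 pixels, 4 channels
probs = softmax(rng.random((6, 2)))              # 2 object regions
aug = ocr_augment(feats, probs)
```

Each augmented pixel feature is thus a mixture of region-level representations, which is the sense in which the pixel's class becomes the class of the region it belongs to.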
Optionally, compared with the original PP-LiteSeg algorithm, the preset deep learning network also introduces a Transformer attention mechanism to obtain enhanced features.
Optionally, performing model training on the preset deep learning network according to the plurality of labeled test images to obtain the target semantic segmentation network specifically includes: constructing a region mutual information (RMI) loss function from the shallow feature detail loss, the intermediate-layer auxiliary loss and the output-layer loss, where the shallow feature detail loss is determined from edge feature information extracted by the preset deep learning network; and training the preset deep learning network based on the RMI loss function to obtain the target semantic segmentation network.
In this implementation, the original PP-LiteSeg uses OHEM (online hard example mining) cross entropy as its loss function; in the embodiment of the invention, however, an RMI loss function may be introduced when improving the original PP-LiteSeg. The region mutual information (RMI) loss function is constructed from the shallow feature detail loss, the intermediate-layer auxiliary loss and the output-layer loss, and the preset deep learning network is trained with this RMI loss function to obtain the target semantic segmentation network.
The shallow detail loss is determined according to edge feature information extracted by a preset deep learning network.
Optionally, the RMI loss function specifically models the relationship between pixels. For example, given a pixel, representing it together with its eight neighbors yields a 9-dimensional point; using this method, the image is projected into a multi-dimensional distribution of many high-dimensional points. After the ground truth and the model prediction are both projected into this multi-dimensional distribution, the RMI loss function maximizes the similarity of the two distributions. Here, the ground truth is the set of accurate, objective labels for the training set that supervises training.
Further, training a preset deep learning network based on the RMI loss function to obtain a target semantic segmentation network may specifically include: adopting a learning rate cosine attenuation mechanism and an optimizer to adjust a first weight of shallow detail loss, a second weight of intermediate layer auxiliary loss and a third weight of output layer loss to perform parameter optimization, so as to obtain a target weight value; and obtaining a target semantic segmentation network according to the target weight value.
In this implementation, processing each input item in the deep learning network incurs loss, and to improve accuracy the loss should be reduced as much as possible; therefore the loss weights need to be optimized during training. Since the preset deep learning network includes the shallow detail loss, the intermediate-layer auxiliary loss and the output-layer loss, parameter optimization is performed on the first weight of the shallow detail loss, the second weight of the intermediate-layer auxiliary loss and the third weight of the output-layer loss respectively to obtain the target weight value; the target weight value is then substituted into the preset deep learning network to obtain the target semantic segmentation network.
Wherein the first weight of the shallow detail loss is less than the second weight of the intermediate layer auxiliary loss, and the second weight of the intermediate layer auxiliary loss is less than the third weight of the output layer loss.
It should be noted that the first weight, the second weight and the third weight are all greater than 0 and less than 1. In the parameter-optimization process, first, second and third initial weights may be determined from empirical values (for example, a first weight of 0.1 and a second weight of 0.4), and the loss values corresponding to these initial weights are recorded. The initial weights are then adjusted repeatedly and the loss value corresponding to each weight combination is recorded; finally all loss values are compared, and the weight combination with the minimum loss value is the target weight value.
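The trial-and-record weight search described above might be sketched as an exhaustive search under the first-weight < second-weight < third-weight constraint; the candidate grid and the stand-in loss function are illustrative, not from the invention.

```python
import itertools

def pick_target_weights(loss_of, candidates):
    """Try every (w1, w2, w3) combination that respects 0 < w1 < w2 < w3 < 1,
    record its loss, and return the combination with the minimum loss."""
    best, best_loss = None, float("inf")
    for w1, w2, w3 in itertools.product(candidates, repeat=3):
        if not (0 < w1 < w2 < w3 < 1):
            continue                              # skip invalid orderings
        loss = loss_of(w1, w2, w3)
        if loss < best_loss:
            best, best_loss = (w1, w2, w3), loss
    return best, best_loss

# Stand-in "validation loss": minimised at (0.1, 0.4, 0.8) by construction.
loss_of = lambda w1, w2, w3: (w1 - 0.1)**2 + (w2 - 0.4)**2 + (w3 - 0.8)**2
best, _ = pick_target_weights(loss_of, [0.1, 0.2, 0.4, 0.6, 0.8])
```

In practice each loss evaluation would be a training-and-validation run, so a coarser grid or a smarter search would be used.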
Optionally, in the training process, a learning-rate cosine decay mechanism can be adopted, and a Ranger optimizer can be used for parameter optimization.
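A learning-rate cosine decay schedule of the kind mentioned here can be sketched as follows; the base rate and step count are illustrative.

```python
import math

def cosine_decay_lr(step, total_steps, base_lr, min_lr=0.0):
    """Cosine-decay schedule: the learning rate starts at base_lr and
    decays to min_lr following half a cosine period."""
    t = min(step, total_steps) / total_steps
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t))

# Rate at the start, midpoint, and end of a 100-step schedule.
lrs = [cosine_decay_lr(s, 100, 0.01) for s in (0, 50, 100)]
```

The schedule keeps early updates large and shrinks them smoothly toward the end of training, which tends to stabilize convergence.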
Optionally, after model training is performed on the preset deep learning network, the obtained target semantic segmentation network may be as shown in fig. 9, which shows the feature losses, the algorithm modules, the convolution weights and so on. The image processing flow runs from left to right: the target image is input at the far left, passes in turn through convolution blocks (ConvX), the real-time semantic segmentation backbone (STDC) and the pooling module (SPP), and the processed segmentation result is output at the far right. Fig. 9 also marks the positions of the losses corresponding to the three segmentation heads and the convolution weight value of each convolution layer.
803. Under the condition that the unmanned forklift executes the goods placing task, a target image is obtained through the camera module.
804. And carrying out semantic segmentation on the target image through a target semantic segmentation network to obtain a segmentation result aiming at the object to be stored and the target library position.
805. And carrying out safety verification on the relative position between the object to be stored and the target library according to the segmentation result to obtain a verification result.
806. And if the verification result is that the verification is passed, placing the object to be stored in the target library position.
In the embodiment of the present invention, for the description of steps 803 to 806, please refer to the detailed description of steps 301 to 304 in the above embodiment, and the description of the embodiment of the present invention is omitted.
The embodiment of the invention provides a safety verification method. The unmanned forklift can perform model training in advance on a preset deep learning network, obtained by fusing a lightweight real-time semantic segmentation task model with an object context feature (OCR) extraction module, to obtain the target semantic segmentation network, and then use that network to perform semantic segmentation on a target image that includes the object to be stored and the target storage location, so as to judge whether the relative position between them is safe. No manual intervention is needed for confirmation, which greatly reduces the manual workload and improves the efficiency of both the safety check and goods access; performing the segmentation with a trained target semantic segmentation network also effectively improves the accuracy of the safety check.
As shown in fig. 10, an embodiment of the present invention provides an unmanned forklift, on which a camera module is disposed, the unmanned forklift includes:
the acquiring module 1001 is configured to acquire, through the camera module, a target image when the unmanned forklift performs a goods placing task, where the target image includes an object to be stored and a target storage location;
the processing module 1002 is configured to perform semantic segmentation on a target image through a target semantic segmentation network, to obtain a segmentation result for an object to be stored and a target library;
the processing module 1002 is further configured to perform security verification on a relative position between the object to be stored and the target library according to the segmentation result, to obtain a verification result;
the processing module 1002 is further configured to place the object to be stored in the target library if the verification result is that the verification is passed.
Optionally, the object to be stored comprises a target cargo or a goods placing platform, the goods placing platform carrying the target cargo;
the processing module 1002 is specifically configured to, when the object to be stored includes the target cargo, place the target cargo in the target storage location if the verification result is that the verification is passed; or,
the processing module 1002 is specifically configured to, when the object to be stored includes the goods placing platform, place the carried target cargo in the target storage location through the goods placing platform if the verification result is that the verification is passed.
Optionally, the processing module 1002 is specifically configured to determine, according to the segmentation result, a first boundary of the object to be stored and a second boundary of the target bin;
the processing module 1002 is specifically configured to determine deviation information between the first boundary and the second boundary, where the deviation information includes at least angle deviation information and distance deviation information;
the processing module 1002 is specifically configured to determine that the verification result is verification pass if the deviation information is detected to be less than or equal to the preset difference value.
Optionally, the acquiring module 1001 is specifically configured to acquire a put task, where the put task includes storing an object to be stored in a target library;
the acquiring module 1001 is specifically configured to acquire, through the camera module, a target image when the unmanned forklift carries the object to be stored and moves to a stocking position corresponding to the target warehouse location.
Optionally, the acquiring module 1001 is further configured to acquire a plurality of test images, and label each test image;
the processing module 1002 is further configured to perform model training on a preset deep learning network according to the plurality of labeled test images, so as to obtain a target semantic segmentation network, where the preset deep learning network is obtained by fusing a lightweight real-time semantic segmentation task model and an object context feature identifier OCR extraction module.
Optionally, the processing module 1002 is specifically configured to construct a region mutual information (RMI) loss function from a shallow feature detail loss, an intermediate-layer auxiliary loss and an output-layer loss, where the shallow feature detail loss is determined from edge feature information extracted by the preset deep learning network;
the processing module 1002 is specifically configured to train the preset deep learning network based on the RMI loss function to obtain the target semantic segmentation network.
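The three-part loss just described can be illustrated with a deliberately simplified sketch. The true RMI term models statistical dependencies between neighbouring pixels and is not reproduced here; each component below is stood in for by a plain pixel-wise cross-entropy so that only the three-part structure of the loss is shown, and every function name and weight value is an assumption, not from the patent.

```python
import math

def pixel_ce(probs, targets):
    """Mean cross-entropy; probs is a list of per-class probability rows."""
    eps = 1e-9
    return -sum(math.log(row[t] + eps) for row, t in zip(probs, targets)) / len(targets)

def composite_loss(shallow, aux, output, targets,
                   w_detail=0.4, w_aux=0.7, w_out=1.0):
    """Weighted sum of shallow-detail, intermediate auxiliary and output losses
    (w_detail < w_aux < w_out, matching the ordering stated later in the text)."""
    return (w_detail * pixel_ce(shallow, targets)   # edge/detail supervision
            + w_aux * pixel_ce(aux, targets)        # intermediate-layer auxiliary loss
            + w_out * pixel_ce(output, targets))    # final output-layer loss

# Confident predictions give a lower composite loss than uniform ones:
targets = [0, 1]
good = [[0.9, 0.1], [0.1, 0.9]]
uniform = [[0.5, 0.5], [0.5, 0.5]]
print(composite_loss(good, good, good, targets)
      < composite_loss(uniform, uniform, uniform, targets))  # True
```

In a real training loop each of the three probability maps would come from a different depth of the network, with the RMI formulation replacing the per-pixel cross-entropy.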
Optionally, the processing module 1002 is specifically configured to perform parameter optimization by adjusting, through a cosine learning-rate decay mechanism and an optimizer, a first weight of the shallow detail loss, a second weight of the intermediate-layer auxiliary loss and a third weight of the output-layer loss, so as to obtain target weight values;
the processing module 1002 is specifically configured to obtain the target semantic segmentation network according to the target weight values;
wherein the first weight is less than the second weight, and the second weight is less than the third weight.
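The cosine decay mentioned above is the standard cosine-annealing schedule; the sketch below shows that schedule together with a weight triple obeying the required ordering. The concrete lr_max/lr_min and weight values are illustrative assumptions, not values from the patent.

```python
import math

def cosine_lr(step, total_steps, lr_max=1e-3, lr_min=1e-6):
    """Learning rate annealed from lr_max (step 0) down to lr_min (final step)."""
    cos_term = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    return lr_min + (lr_max - lr_min) * cos_term

# Loss weights keeping the ordering required by the text: first < second < third.
w_detail, w_aux, w_out = 0.4, 0.7, 1.0
assert w_detail < w_aux < w_out

print(cosine_lr(0, 100))    # 0.001  (starts at lr_max)
print(cosine_lr(100, 100))  # 1e-06  (ends at lr_min)
```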
In this embodiment of the present invention, each of the above modules may implement the safety verification method provided in the foregoing method embodiments and achieve the same technical effects; to avoid repetition, details are not described here again.
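Taken together, the acquiring and processing modules implement a capture-segment-verify-place loop. The sketch below is a hypothetical illustration with stub components; none of the names or the stub segmentation/verification logic come from the patent.

```python
def run_safety_check(capture_image, segment, verify, place):
    """Return True if the goods were placed, False if placement was refused."""
    image = capture_image()            # acquiring module: image at the goods placing position
    masks = segment(image)             # processing module: semantic segmentation
    if verify(masks["object"], masks["slot"]):   # safety check on relative position
        place()                        # verification passed: place goods in the slot
        return True
    return False                       # verification failed: do not place

# Minimal stubs exercising the flow:
placed = []
ok = run_safety_check(
    capture_image=lambda: "frame",
    segment=lambda img: {"object": [(0, 0)], "slot": [(0, 0)]},
    verify=lambda obj, slot: obj == slot,
    place=lambda: placed.append(True),
)
print(ok)  # True
```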
As shown in fig. 11, the embodiment of the present invention further provides an unmanned forklift, where the unmanned forklift is provided with a camera module, and the unmanned forklift may include:
A memory 1101 storing executable program code;
a processor 1102 coupled to the memory 1101;
the processor 1102 invokes the executable program code stored in the memory 1101 to execute the safety verification method performed by the unmanned forklift in the foregoing method embodiments.
The present invention provides a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute some or all of the steps of the method as in the above method embodiments.
Embodiments of the present invention also provide a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of the method as in the method embodiments above.
The embodiment of the invention also provides an application publishing platform, wherein the application publishing platform is used for publishing a computer program product, and the computer program product, when running on a computer, causes the computer to execute part or all of the steps of the method as in the above method embodiments.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art will also appreciate that the embodiments described in the specification are alternative embodiments and that the acts and modules referred to are not necessarily required for the present invention. The above embodiments are not necessarily independent embodiments, and the separation into the embodiments is merely used to highlight different technical features in different embodiments, and those skilled in the art should appreciate that the above embodiments may be combined arbitrarily.
In the various embodiments of the present invention, it should be understood that the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not be construed as limiting the implementation of the embodiments of the present invention.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on this understanding, the technical solution of the present invention, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc., and in particular may be a processor in a computer device) to execute some or all of the steps of the methods in the various embodiments of the present invention.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing associated hardware, and the program may be stored in a computer-readable storage medium, including a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disk memory, magnetic disk memory, tape memory, or any other computer-readable medium that can be used to carry or store data.

Claims (10)

1. A safety verification method, applied to an unmanned forklift, wherein a camera module is arranged on the unmanned forklift, and the method comprises:
acquiring a target image through the camera module when the unmanned forklift executes a goods placing task, wherein the target image comprises an object to be stored and a target storage location;
performing semantic segmentation on the target image through a target semantic segmentation network to obtain segmentation results for the object to be stored and the target storage location;
performing a safety check on the relative position between the object to be stored and the target storage location according to the segmentation results to obtain a verification result; and
placing the object to be stored in the target storage location if the verification result is that the verification is passed.
2. The method of claim 1, wherein the object to be stored comprises a target cargo or a goods placing platform carrying the target cargo, and wherein placing the object to be stored in the target storage location if the verification result is that the verification is passed comprises:
when the object to be stored comprises the target cargo, placing the target cargo in the target storage location if the verification result is that the verification is passed; or
when the object to be stored comprises the goods placing platform, placing the loaded target cargo in the target storage location through the goods placing platform if the verification result is that the verification is passed.
3. The method of claim 1, wherein performing the safety check on the relative position between the object to be stored and the target storage location according to the segmentation results to obtain the verification result comprises:
determining a first boundary of the object to be stored and a second boundary of the target storage location according to the segmentation results;
determining deviation information between the first boundary and the second boundary, wherein the deviation information at least comprises angle deviation information and distance deviation information; and
determining that the verification result is that the verification is passed if the deviation information is detected to be less than or equal to a preset threshold.
4. The method of claim 1, wherein acquiring the target image through the camera module when the unmanned forklift executes the goods placing task comprises:
acquiring the goods placing task, wherein the goods placing task comprises storing the object to be stored in the target storage location; and
acquiring the target image through the camera module when the unmanned forklift carries the object to be stored and moves to a goods placing position corresponding to the target storage location.
5. The method of claim 1, wherein prior to the capturing the target image by the camera module, the method further comprises:
acquiring a plurality of test images, and labeling each test image;
and performing model training on a preset deep learning network according to the plurality of labeled test images to obtain the target semantic segmentation network, wherein the preset deep learning network is obtained by fusing a lightweight real-time semantic segmentation task model with an object contextual representation (OCR) extraction module.
6. The method of claim 5, wherein performing model training on the preset deep learning network according to the plurality of labeled test images to obtain the target semantic segmentation network comprises:
constructing a region mutual information (RMI) loss function from a shallow feature detail loss, an intermediate-layer auxiliary loss and an output-layer loss, wherein the shallow feature detail loss is determined from edge feature information extracted by the preset deep learning network; and
training the preset deep learning network based on the RMI loss function to obtain the target semantic segmentation network.
7. The method of claim 6, wherein training the preset deep learning network based on the region mutual information (RMI) loss function to obtain the target semantic segmentation network comprises:
performing parameter optimization by adjusting, through a cosine learning-rate decay mechanism and an optimizer, a first weight of the shallow detail loss, a second weight of the intermediate-layer auxiliary loss and a third weight of the output-layer loss, so as to obtain target weight values; and
obtaining the target semantic segmentation network according to the target weight values;
wherein the first weight is less than the second weight, and the second weight is less than the third weight.
8. An unmanned forklift, wherein a camera module is arranged on the unmanned forklift, and the unmanned forklift comprises:
an acquiring module, configured to acquire a target image through the camera module when the unmanned forklift executes a goods placing task, wherein the target image comprises an object to be stored and a target storage location;
a processing module, configured to perform semantic segmentation on the target image through a target semantic segmentation network to obtain segmentation results for the object to be stored and the target storage location;
the processing module being further configured to perform a safety check on the relative position between the object to be stored and the target storage location according to the segmentation results to obtain a verification result; and
the processing module being further configured to place the object to be stored in the target storage location if the verification result is that the verification is passed.
9. An unmanned forklift, wherein a camera module is arranged on the unmanned forklift, and the unmanned forklift comprises:
a memory storing executable program code; and
a processor coupled to the memory;
wherein the processor invokes the executable program code stored in the memory to perform the safety verification method of any one of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions, wherein the computer instructions, when executed by a processor, implement the safety verification method of any one of claims 1 to 7.
CN202310651885.0A 2023-06-05 2023-06-05 Safety verification method, unmanned forklift and storage medium Pending CN116402895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310651885.0A CN116402895A (en) 2023-06-05 2023-06-05 Safety verification method, unmanned forklift and storage medium


Publications (1)

Publication Number Publication Date
CN116402895A true CN116402895A (en) 2023-07-07

Family

ID=87016356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310651885.0A Pending CN116402895A (en) 2023-06-05 2023-06-05 Safety verification method, unmanned forklift and storage medium

Country Status (1)

Country Link
CN (1) CN116402895A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112733703A (en) * 2021-01-07 2021-04-30 深圳市捷顺科技实业股份有限公司 Vehicle parking state detection method and system
CN113538441A (en) * 2021-01-06 2021-10-22 腾讯科技(深圳)有限公司 Image segmentation model processing method, image processing method and device
CN114044298A (en) * 2020-06-12 2022-02-15 深圳市海柔创新科技有限公司 Control method, device, device and readable storage medium for warehouse robot
CN114972753A (en) * 2022-05-20 2022-08-30 暨南大学 A lightweight semantic segmentation method and system based on contextual information aggregation and assisted learning
CN115424012A (en) * 2022-05-30 2022-12-02 湘潭大学 Lightweight image semantic segmentation method based on context information
CN116000931A (en) * 2022-03-24 2023-04-25 深圳市海柔创新科技有限公司 Robot control method, device and equipment


Similar Documents

Publication Publication Date Title
CN108357848B (en) Modeling optimization method based on Multilayer shuttle car automated storage and retrieval system
US10518973B2 (en) Inventory management
CN114348522B (en) Material box inventory method, device, dispatching equipment, robot and storage system
CN111232590B (en) Automatic control method and device for storage robot
CN112536794A (en) Machine learning method, forklift control method and machine learning device
CN215477503U (en) Sorting units and storage systems
US20240174499A1 (en) Method and System for Load Detection in an Industrial Truck
CN114819821A (en) Goods warehouse-out checking method and device, computer equipment and storage medium
JP2024520533A (en) A system for tracking inventory
CN118616338B (en) Single-piece separation method and system for express delivery sorting
CN117699380A (en) Automatic luggage sorting and stacking system and method
US20240165828A1 (en) Apparatus and method for horizontal unloading with automated articulated arm, multi-array gripping, and computer vision based control
CN119090397A (en) A warehouse management method and system combining weighing platform and image recognition
CN109571408B (en) Robot, angle calibration method of inventory container and storage medium
CN116402895A (en) Safety verification method, unmanned forklift and storage medium
JP7511598B2 (en) Information processing method, information processing device, and program
JP7660029B2 (en) Edge position detection method, transfer position determination method, and article transfer system
CN114348516A (en) Container inventory method, device, scheduling equipment, storage system and storage medium
KR102657029B1 (en) Logistics Transport Robot for Automated Process Linked Operation
CN118701567A (en) An intelligent control system and method for intelligent warehousing
Poss et al. Perceptionbased intelligent materialhandling in industrial logistics environments
JP7631913B2 (en) Transport control system, transport control method, and program
CN118579455B (en) Working method of finished product ex-warehouse automation system with AGV path planning
US12487587B1 (en) Visual perception and techniques for placing inventory into pods with a robotic workcell
CN120573435B (en) Luggage transferring and stacking method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20230707)