CN118887180B - A method for assembly inspection of aerospace products based on visual recognition - Google Patents
A method for assembly inspection of aerospace products based on visual recognition
Info
- Publication number
- CN118887180B CN202410929106.3A CN202410929106A
- Authority
- CN
- China
- Prior art keywords
- assembly
- pose
- cable
- coordinate system
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/146—Aligning or centring of the image pick-up or image-field
- G06V30/147—Determination of region of interest
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/146—Aligning or centring of the image pick-up or image-field
- G06V30/1475—Inclination or skew detection or correction of characters or of image to be recognised
- G06V30/1478—Inclination or skew detection or correction of characters or of image to be recognised of characters or characters lines
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/19007—Matching; Proximity measures
- G06V30/19093—Proximity measures, i.e. similarity or distance measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an aerospace product assembly inspection method based on visual recognition. Tilt correction, text detection and text recognition are performed on images of flexible cables to obtain the code content, and the code content is matched with the parts in a database to obtain the flexible cable assembly inspection result. The pose of the rigid structural member in the camera coordinate system is obtained from the rigid structural member image, converted into the pose in the world coordinate system, and compared with the true pose data of the member to obtain the rigid structural member assembly inspection result. The inspection results for the flexible cables and the rigid structural members are transmitted to the human-machine interface of a software system, which guides operators to complete the assembly state inspection of each workpiece of the product step by step. By inspecting the assembly results of flexible cables and rigid structural members in aerospace products through machine vision, the invention reduces the expertise required of operators, saves labor cost, and reduces inspection errors caused by human factors.
Description
Technical Field
The invention belongs to the technical field of aerospace assembly inspection, and in particular relates to an aerospace product assembly inspection method based on visual recognition.
Background
The assembly of aerospace products is an important link in modern aerospace engineering. Aerospace products such as aircraft, aero-engines and launch vehicles contain a large number of parts to be assembled, the assembly process is complex, and more than half of the production cycle is spent on assembly work; in complex working environments, process documents are easily misread and assembly operations easily go wrong, so assembly takes a long time and quality is difficult to guarantee. The assembly results therefore need to be inspected and verified, yet inspection during the assembly of aerospace products still relies mainly on manual checks by workers, which is inefficient and depends largely on the subjective judgment of the operators.
With the development of visual inspection technology, machine vision provides effective technical support for the quality inspection of aerospace product assembly. Character recognition based on deep learning can recognize irregular or partially occluded text, techniques such as keypoint matching and model matching can accurately extract and track the three-dimensional pose of a target object from an image, and multi-view geometric relationships can resolve positioning problems that are difficult to judge from a single view.
However, the parts used to assemble aerospace products are of many types, have complex structures and are updated quickly, which makes developing visual recognition algorithms for them challenging; the traditional approach of training a separate machine-learning recognition model for each part cannot keep pace with the rapid iteration of aerospace products. In addition, most model development requires CAD models and real-time depth information of the parts, which hinders the wider adoption of visual recognition in assembly inspection. How to inspect the assembly results of aerospace products efficiently, accurately and conveniently is therefore a problem to be solved.
Disclosure of Invention
Aiming at the shortcomings of the prior art, the invention provides an aerospace product assembly inspection method based on visual recognition, which inspects the assembly of the parts installed in aerospace products. The parts are divided into flexible cable parts and rigid structural members. The flexible cable parts, which have irregular shapes, are inspected through coded-character recognition: computer vision recognizes the coded characters on the cable label, matches them to the assembly task, and checks the assembly result against the process file. The rigid structural members are inspected through a visual pose recognition algorithm: the recognized pose of the rigid part is compared with the part's coordinate pose in the real world coordinate system to complete the inspection of the part assembly result.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
A method for inspecting the assembly of aerospace products based on visual recognition, the method comprising:
S1, inspecting the assembly condition of the flexible cables of the aerospace product: performing cable code tilt correction, text detection and text recognition on the flexible cable images acquired by a camera with a text recognition algorithm based on visual recognition, to obtain the cable code content in the images;
S2, matching the code content recognized in S1 with the parts in an assembly process database to obtain the flexible cable assembly inspection result;
S3, inspecting the assembly condition of the rigid structural members of the aerospace product: obtaining the pose of the rigid structural member in the camera coordinate system from the rigid structural member image acquired by the camera, using a visual pose estimation algorithm and a least-squares regression algorithm;
S4, converting the pose of the rigid structural member in the camera coordinate system into its pose in the world coordinate system, and comparing the converted pose data with the true pose data of the member to obtain the rigid structural member assembly inspection result;
S5, transmitting the inspection results for the flexible cables and the rigid structural members to the human-machine interface of a software system, marking the workpieces whose assembly has been verified as correct in the process library, and guiding the operator to complete the assembly state inspection of each workpiece of the product step by step.
In order to optimize the technical scheme, the specific measures adopted further comprise:
In S1, performing cable code tilt correction, text detection and text recognition on the flexible cable image acquired by the camera with a machine-vision text recognition algorithm to obtain the cable code content in the image comprises:
S1.1, extracting the straight-line information in the cable image with a Hough transform, averaging the inclination angles of the extracted lines, solving for the rotation matrix M, and applying an affine transformation about the image center to complete the tilt correction of the cable code in the image (a code sketch is given after the rotation matrix formula below);
S1.2, extracting features from the tilt-corrected cable image with a convolutional network, detecting variable-length text boxes with a recurrent neural network, splitting the whole code region in the image into small text candidate boxes for cable code text detection, and finally stitching the detected small boxes together into the complete cable code text box, i.e. the cable code region;
S1.3, cropping the detected cable code region and feeding it into a CRNN network, which recognizes the cable code region as a character sequence to obtain the cable code content.
The rotation matrix M is solved as follows:

$$M = \begin{bmatrix} \alpha & \beta & (1-\alpha)\cdot \mathrm{center}.x - \beta\cdot \mathrm{center}.y \\ -\beta & \alpha & \beta\cdot \mathrm{center}.x + (1-\alpha)\cdot \mathrm{center}.y \end{bmatrix}$$

where α = cos(angle), β = sin(angle), angle is the measured tilt angle, and (center.x, center.y) are the center coordinates of the image.
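By way of illustration, a minimal Python sketch of this tilt-correction step is given below. It relies on OpenCV (cv2.HoughLines for line extraction and cv2.getRotationMatrix2D for the matrix M above); the Canny and Hough threshold values are illustrative assumptions, not values specified by the patent.

```python
import cv2
import numpy as np

def correct_tilt(image):
    """Estimate the tilt of the cable code text with a Hough transform and
    rotate the image about its center so that the code reads horizontally."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
    if lines is None:
        return image                      # no lines found, keep the original
    # Average the detected line orientations to get the tilt angle in degrees
    angles = [np.degrees(theta) - 90.0 for rho, theta in lines[:, 0]]
    angle = float(np.mean(angles))
    center = (image.shape[1] / 2.0, image.shape[0] / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)   # the 2x3 matrix M above
    return cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))
```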
In S2, the edit distance between the cable code content string and every part name string in the assembly process database is computed, and the part name string with the smallest edit distance is selected as the matching result.
The edit distance is computed as:

$$\mathrm{lev}_{a,b}(i,j) = \begin{cases} \max(i,j), & \min(i,j)=0 \\ \min\left\{ \begin{array}{l} \mathrm{lev}_{a,b}(i-1,j)+1 \\ \mathrm{lev}_{a,b}(i,j-1)+1 \\ \mathrm{lev}_{a,b}(i-1,j-1)+1_{(a_i\neq b_j)} \end{array} \right., & \text{otherwise} \end{cases}$$

where lev_{a,b}(i, j) is the edit distance between the first i characters of the cable code content string a and the first j characters of the part name string b, 1_{(a_i≠b_j)} is an indicator function equal to 1 when a_i ≠ b_j and 0 otherwise, and the three terms inside the min operation correspond to deleting, inserting and replacing a character in a so as to reach b.
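A minimal Python sketch of this matching step follows: a dynamic-programming form of the recurrence above, plus a helper that picks the closest part name. The function names are illustrative.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between strings a and b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                                   # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j                                   # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1   # indicator 1(a_i != b_j)
            d[i][j] = min(d[i - 1][j] + 1,            # delete a character
                          d[i][j - 1] + 1,            # insert a character
                          d[i - 1][j - 1] + cost)     # replace (or keep)
    return d[m][n]

def match_part(code: str, part_names: list[str]) -> str:
    """Return the part name in the process database with the smallest edit
    distance to the recognized cable code."""
    return min(part_names, key=lambda name: edit_distance(code, name))
```

For example, match_part("CR20815", ["CR20815", "CR20816"]) returns "CR20815".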
In S3, the pose of the rigid structural member in the camera coordinate system is estimated from the RGB data of the rigid structural member image with a visual pose estimation algorithm, and the experimentally measured data are fitted linearly with a least-squares regression algorithm, yielding the pose of the rigid structural member at true scale in the camera coordinate system:

f(x_i) = w·x_i + b

where f(x_i) is the converted coordinate data, i.e. the pose of the rigid structural member at true scale in the camera coordinate system, x_i is the pose coordinate data of the rigid structural member in the camera coordinate system output by the recognition algorithm before conversion, w is the conversion weight, and b is the recognition offset.
The values of w and b are obtained by minimizing the sum of the squared distances from the experimental samples to the fitted line:

$$E_{(w,b)} = \sum_{i=1}^{m}\left(y_i - w x_i - b\right)^2$$

where E_{(w,b)} is the sum of the squared distances from the samples to the fitted line, y_i is the true coordinate data of the structural member, and m is the number of experimental data samples;

Taking the partial derivatives of E_{(w,b)} with respect to w and b and setting them to zero gives:

$$w = \frac{\sum_{i=1}^{m} y_i\,(x_i - \bar{x})}{\sum_{i=1}^{m} x_i^2 - \frac{1}{m}\left(\sum_{i=1}^{m} x_i\right)^2}, \qquad b = \frac{1}{m}\sum_{i=1}^{m}\left(y_i - w x_i\right)$$

where the mean value $\bar{x} = \frac{1}{m}\sum_{i=1}^{m} x_i$.
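The closed-form solution can be evaluated directly. The following Python sketch computes w and b for a single coordinate axis from paired samples of algorithm output and measured ground truth; the sample values shown are placeholders, not experimental data from the patent.

```python
import numpy as np

def fit_scale(x, y):
    """Least-squares fit y ≈ w*x + b used to restore the real scale:
    x holds the algorithm's output coordinates and y the measured true
    coordinates along one axis, for m calibration samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    m = len(x)
    x_bar = x.mean()
    w = np.sum(y * (x - x_bar)) / (np.sum(x ** 2) - np.sum(x) ** 2 / m)
    b = np.mean(y - w * x)
    return w, b

# One fit per axis (X, Y, Z); the values below are illustrative placeholders.
w_x, b_x = fit_scale([0.12, 0.25, 0.40], [0.30, 0.62, 1.01])
```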
The pose conversion in S4 is performed as follows:

$$T_1 = {}^{w}T_{c}\,T_2$$

where T_1 is the pose matrix of the rigid structural member in the world coordinate system after conversion, T_2 is the pose matrix of the rigid structural member in the camera coordinate system before conversion, and {}^{w}T_{c} is the transformation matrix from the camera coordinate system to the world coordinate system.
In S4, if the deviation between the converted pose data and the true pose data of the structural member does not exceed the error threshold, the rigid structural member is judged to be assembled correctly.
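By way of illustration, a minimal Python sketch of this conversion and comparison step is given below, assuming 4x4 homogeneous pose matrices. The position and rotation tolerances are illustrative assumptions; the patent does not specify the error threshold.

```python
import numpy as np

def check_rigid_part(T_wc, T2, T_true, pos_tol=0.005, rot_tol_deg=2.0):
    """Convert the estimated pose from the camera frame to the world frame
    (T1 = T_wc @ T2) and compare it with the reference pose from the process
    file. Returns True when the deviation stays within the tolerances."""
    T1 = T_wc @ T2                                    # pose in the world frame
    pos_err = np.linalg.norm(T1[:3, 3] - T_true[:3, 3])
    # Rotation error taken as the angle of the relative rotation matrix
    R_rel = T1[:3, :3] @ T_true[:3, :3].T
    cos_a = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_a))
    return pos_err <= pos_tol and rot_err_deg <= rot_tol_deg
```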
The invention has the following beneficial effects:
For aerospace product assembly inspection, the invention provides two inspection modes, flexible cable part inspection and rigid structural member inspection, which together cover most aerospace product assembly scenarios.
For cable part recognition, an inspection scheme based on coded-character recognition is provided: cable codes are recognized by combining a CTPN text detection algorithm with a CRNN text recognition algorithm, and an image-angle-adjustment preprocessing step strengthens the robustness of the text recognition algorithm against tilted text. A text similarity matching mechanism links the text recognition result to the corresponding assembly-line task, and assembly inspection is carried out together with a software system that holds the process files. The preprocessing improvement of the text recognition algorithm allows operators to capture the cable from a wide range of angles, enhancing the robustness of the existing recognition algorithm.
For the inspection of rigid part assembly results, a rigid structural member inspection method based on pose estimation is provided. A visual pose estimation algorithm based on RGB images estimates the pose of the rigid structural member, and a least-squares regression fit of the pose estimation results corrects the scale distortion that arises from estimating pose without depth data; the scale-converted pose is compared with the true pose to judge whether the rigid structural member is assembled correctly. No neural network training for specific rigid parts is required, and neither CAD models nor depth information is needed, so the method adapts well to aerospace assembly environments with many part types, complex structures and frequent part updates. The scale-restoration improvement of the pose recognition model effectively compensates for the recognition deviation caused by the missing depth data and offers a useful reference for rigid structural member assembly inspection based on RGB data alone.
By inspecting the assembly results of flexible cables and rigid structural members in aerospace products through machine vision, the invention reduces the expertise required of operators, saves labor cost and reduces inspection errors caused by human factors.
The assembly inspection method provided by the invention can be used for assembly inspection in fields such as aerospace, automotive and machine tools; the computer-vision-based inspection can run on a PC or a mobile terminal, can be integrated into an industrial robot control system, and can guide a robot through complex operation flows, so it has broad application prospects.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention;
FIG. 2 is an assembly test platform of the present invention;
FIG. 3 is a flexible cable identification process according to the present invention;
FIG. 4 is a schematic diagram of each coordinate system estimated based on the visual recognition pose of RGB data in the present invention;
FIG. 5 is a scale reduction fit curve of pose recognition data in the present invention;
FIG. 6 is a human-computer interface of the assembly detection software of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Although the steps of the present invention are arranged by reference numerals, the order of the steps is not limited, and the relative order of the steps may be adjusted unless the order of the steps is explicitly stated or the execution of a step requires other steps as a basis. It is to be understood that the term "and/or" as used herein relates to and encompasses any and all possible combinations of one or more of the associated listed items.
The aerospace product assembly inspection method based on visual recognition disclosed by the invention uses machine vision to inspect the assembly state of flexible cables and rigid structural parts in aerospace products, which reduces the workload of operators, shortens the training period and cost of assembly workers, and avoids assembly errors caused by subjective human factors. An example of the invention is further described with reference to the steps in FIG. 1; the verification platform of the invention, shown in FIG. 2, consists of an assembly wall panel, a camera, flexible cables and rigid parts. The specific implementation steps are as follows:
S1, inspecting the assembly condition of the flexible cables of the aerospace product: performing cable code tilt correction, text detection and text recognition on the flexible cable images acquired by the camera with a machine-vision text recognition algorithm to obtain the cable code content in the images;
In the embodiment, to inspect the assembly condition of a flexible cable, the camera in FIG. 2 first acquires an image of the code region of the flexible cable; as shown in FIG. 3, the acquired image must contain the whole code region. The acquired cable image is then preprocessed, pictures with a large text tilt are corrected, and a text recognition algorithm recognizes the cable code content in the preprocessed picture. S1 comprises:
S1.1, image preprocessing: the code is tilt-corrected by detecting straight lines in the image. Straight-line information is extracted from the acquired image with a Hough transform, the inclination angles of the detected lines are averaged, the rotation matrix is obtained from the solving formula, and an affine transformation about the image center completes the rotation correction. The rotation matrix is solved as follows:

$$M = \begin{bmatrix} \alpha & \beta & (1-\alpha)\cdot \mathrm{center}.x - \beta\cdot \mathrm{center}.y \\ -\beta & \alpha & \beta\cdot \mathrm{center}.x + (1-\alpha)\cdot \mathrm{center}.y \end{bmatrix}$$

where α = cos(angle), β = sin(angle), and (center.x, center.y) are the center coordinates of the image;
After image preprocessing, an image with the coded characters in a horizontal orientation, as shown in FIG. 3, is obtained, and the next step, code region recognition, is carried out.
S1.2, cable code text detection is carried out with the CTPN text detection algorithm: features are extracted from the rotation-corrected, preprocessed image with a convolutional network, variable-length text boxes are detected with a recurrent neural network, the whole code region is split into small text candidate boxes for detection, and the detected small boxes are finally stitched together into a complete text box, giving the code-region detection result shown in FIG. 3.
S1.3, the detected cable code region is cropped and fed into a CRNN network for character recognition; the CRNN network recognizes the cable code region as a character sequence, yielding the content of the coded characters in the text box and thereby recognizing code regions of indefinite length;
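By way of illustration, a minimal sketch of this detection-and-recognition step is given below. The patent uses CTPN for text detection and CRNN for recognition; the sketch substitutes the off-the-shelf EasyOCR library (whose recognizer is likewise CRNN-based) purely as a stand-in, so the library and its output format are assumptions rather than the patent's own implementation.

```python
import easyocr

# Build the reader once; English letters and digits cover codes such as "CR20815"
reader = easyocr.Reader(['en'])

def recognise_cable_code(image_path: str) -> str:
    """Detect text regions in the tilt-corrected cable image and return the
    recognized code string with the highest confidence."""
    results = reader.readtext(image_path)   # list of (bounding_box, text, confidence)
    if not results:
        return ""
    _, text, _ = max(results, key=lambda r: r[2])
    return text.replace(" ", "")
```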
S2, matching the code content recognized in S1 with the parts in the assembly process database to obtain the flexible cable assembly inspection result;
In the embodiment, based on text similarity, the code content recognized in S1 is compared and matched with the corresponding task number in the assembly process database, completing the automatic mapping from the code image to the process task and providing a reference for checking the cable assembly condition;
The edit distance algorithm (Levenshtein distance) is used as the similarity measure between the cable code recognition result (e.g. CR20815 in FIG. 3) and the cable names in the software system database. One of the two strings is transformed into the other through character insertions, deletions and replacements; the minimum number of such operations is the edit distance. Let a denote the code recognition result string (i.e. CR20815), b a part name string in the database (e.g. CR20816), and i and j indices into the characters of a and b. The edit distance lev_{a,b}(i, j) between the first i characters of a and the first j characters of b is:

$$\mathrm{lev}_{a,b}(i,j) = \begin{cases} \max(i,j), & \min(i,j)=0 \\ \min\left\{ \begin{array}{l} \mathrm{lev}_{a,b}(i-1,j)+1 \\ \mathrm{lev}_{a,b}(i,j-1)+1 \\ \mathrm{lev}_{a,b}(i-1,j-1)+1_{(a_i\neq b_j)} \end{array} \right., & \text{otherwise} \end{cases}$$

where 1_{(a_i≠b_j)} is an indicator function equal to 1 when a_i ≠ b_j and 0 otherwise, and the three terms inside the min operation correspond to deleting, inserting and replacing a character in a so as to reach b.
After the edit distance between the code recognition result string and every part name string in the database has been computed, the part name with the smallest edit distance is selected as the matching result. The software system then calls up the assembly process file of the corresponding cable according to the matching result and displays the assembly information from the process file on the software's human-machine interface, guiding the operator to verify the key information and so assisting the completion of the assembly result inspection;
S3, inspecting the assembly condition of the rigid structural members of the aerospace product: obtaining the pose of the rigid structural member in the camera coordinate system from the rigid structural member image acquired by the camera, using an RGB-data-based visual pose estimation algorithm and a least-squares regression algorithm;
In the embodiment, to inspect the assembly condition of a rigid structural member, the camera first captures a picture of the assembled state of the member, the visual pose estimation algorithm based on RGB data computes the pose of the member relative to the camera coordinate system, i.e. the pose of the rigid structural member in the camera coordinate system in FIG. 4, and a least-squares regression restores the coordinate scale distorted by the lack of depth data;
In S3, the visual pose estimation algorithm based on RGB data takes only RGB images as input; the rigid parts can be recognized by collecting pictures of the rigid parts from different angles in advance and feeding them to the algorithm. Because neither depth data nor a CAD model is input, the pose data output by the algorithm suffers from scale distortion. To solve this, the experimentally measured data are fitted linearly with a least-squares regression, giving the fitted line:

f(x_i) = w·x_i + b

where f(x_i) is the converted coordinate data, x_i is the coordinate data output by the estimation algorithm before conversion, w is the conversion weight and b is the recognition offset. The values of w and b are obtained by minimizing the sum of the squared distances from the experimental samples to the line:

$$E_{(w,b)} = \sum_{i=1}^{m}\left(y_i - w x_i - b\right)^2$$

where E_{(w,b)} is the sum of the squared distances from the samples to the fitted line, y_i is the true coordinate data of the structural member, and m is the number of experimental data samples. Taking the partial derivatives of E_{(w,b)} with respect to w and b and setting them to zero yields:

$$w = \frac{\sum_{i=1}^{m} y_i\,(x_i - \bar{x})}{\sum_{i=1}^{m} x_i^2 - \frac{1}{m}\left(\sum_{i=1}^{m} x_i\right)^2}, \qquad b = \frac{1}{m}\sum_{i=1}^{m}\left(y_i - w x_i\right)$$

where $\bar{x} = \frac{1}{m}\sum_{i=1}^{m} x_i$.
Least-squares fits are performed separately in the X, Y and Z directions. After the computation, the three fitted lines shown in FIG. 5 are obtained, where the abscissa is the output data of the pose recognition algorithm and the ordinate is the actual coordinate data of the rigid part relative to the camera coordinate system. Feeding the coordinate data of the rigid part recognized in the image by the pose recognition algorithm into the fitted line formula outputs the coordinate data of the rigid part at true scale.
S4, converting the pose of the rigid structural member in the camera coordinate system into its pose in the world coordinate system, and comparing the converted pose data with the true pose data of the member to obtain the rigid structural member assembly inspection result;
In the embodiment, the pose of the structural member in the camera coordinate system obtained in S3 is converted into the pose in the world coordinate system through a coordinate transformation matrix, the converted pose data are compared with the true pose data of the member, and it is judged whether their deviation exceeds the error threshold. The pose conversion formula is:

$$T_1 = {}^{w}T_{c}\,T_2$$

where T_1 is the pose matrix of the rigid structural member in the world coordinate system of FIG. 4 after conversion, and T_2 is the pose matrix of the rigid structural member in the camera coordinate system of FIG. 4 before conversion, obtained by scale restoration of the pose output by the RGB-data-based visual pose estimation algorithm. The transformation matrix {}^{w}T_{c} from the camera coordinate system to the world coordinate system is obtained with a camera calibration method based on the PnP algorithm: the camera calibrates its own coordinates by recognizing the two-dimensional code on the assembly wall panel in FIG. 4, with the center of the code taken as the origin of the world coordinate system, the panel plane as the XOY plane of the world coordinates and the panel normal as the positive Z axis of the world coordinate system. Calibration gives the rotation matrix R and the translation t of the camera relative to the world coordinates, which are combined into the transformation matrix {}^{w}T_{c}, as sketched below.
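By way of illustration, a minimal Python sketch of this calibration step using OpenCV's solvePnP is given below. It assumes the four corners of the calibration code on the panel have already been detected in the image and that the camera intrinsic matrix K and distortion coefficients are known; the corner ordering and marker size are illustrative assumptions.

```python
import cv2
import numpy as np

def camera_to_world_transform(corners_2d, marker_size, K, dist):
    """Estimate the camera-to-world transform from the detected corners of
    the calibration code. World frame: origin at the code center, panel
    plane = XOY, panel normal = +Z."""
    s = marker_size / 2.0
    corners_3d = np.array([[-s,  s, 0.0],          # top-left
                           [ s,  s, 0.0],          # top-right
                           [ s, -s, 0.0],          # bottom-right
                           [-s, -s, 0.0]],         # bottom-left
                          dtype=np.float64)
    img_pts = np.asarray(corners_2d, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(corners_3d, img_pts, K, dist)
    if not ok:
        raise RuntimeError("PnP pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)                     # world -> camera rotation
    T_cw = np.eye(4)                               # world -> camera transform
    T_cw[:3, :3], T_cw[:3, 3] = R, tvec.ravel()
    return np.linalg.inv(T_cw)                     # camera -> world transform
```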
The estimated pose of the rigid part in the world coordinate system obtained from the conversion is compared with the pose data recorded in the process file of the rigid part; if the deviation is within the allowed range, the assembly result is judged correct.
S5, transmitting the inspection results for the flexible cables and the rigid structural members to the human-machine interface of the software system, marking the workpieces whose assembly has been verified as correct in the process library, and guiding the operator to complete the assembly state inspection of each workpiece of the product step by step.
In the embodiment, the inspection results for the flexible cables and the rigid structural members are transmitted to the human-machine interface of the software system shown in FIG. 6, the workpieces whose assembly has been verified as correct are marked in the process library, and the operator is guided to complete the assembly state inspection of each workpiece of the product step by step.
The software system is assembly inspection software built on the assembly process files of the aerospace product. It can run on a PC, a tablet or a mobile phone; its human-machine interface automatically retrieves the assembly process information according to the detection and recognition results, marks the assembly steps that have already been inspected, and guides the operator through the remaining assembly steps.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted for clarity only, the specification should be taken as a whole, and the technical solutions of the various embodiments may be combined as appropriate to form other implementations that will be apparent to those skilled in the art.
Claims (9)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410929106.3A CN118887180B (en) | 2024-07-11 | 2024-07-11 | A method for assembly inspection of aerospace products based on visual recognition |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410929106.3A CN118887180B (en) | 2024-07-11 | 2024-07-11 | A method for assembly inspection of aerospace products based on visual recognition |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN118887180A CN118887180A (en) | 2024-11-01 |
| CN118887180B (en) | 2025-09-05 |
Family
ID=93234231
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410929106.3A Active CN118887180B (en) | 2024-07-11 | 2024-07-11 | A method for assembly inspection of aerospace products based on visual recognition |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118887180B (en) |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115292804A (en) * | 2022-06-30 | 2022-11-04 | 北京科技大学 | Flexible assembly tool structure layout and movement stroke optimization method and system |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH0631556B2 (en) * | 1989-06-20 | 1994-04-27 | 日立建機株式会社 | Segment assembly method and device |
| JP6410861B2 (en) * | 2017-03-09 | 2018-10-24 | キヤノン株式会社 | Measuring device, processing device, and article manufacturing method |
| CN114347013A (en) * | 2021-11-05 | 2022-04-15 | 深港产学研基地(北京大学香港科技大学深圳研修院) | Method for assembling printed circuit board and FPC flexible cable and related equipment |
| CN116091812A (en) * | 2022-09-20 | 2023-05-09 | 上海飞机制造有限公司 | Detection method, device, system and medium for component assembly |
- 2024-07-11: CN CN202410929106.3A, patent CN118887180B (en), active
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115292804A (en) * | 2022-06-30 | 2022-11-04 | 北京科技大学 | Flexible assembly tool structure layout and movement stroke optimization method and system |
Also Published As
| Publication number | Publication date |
|---|---|
| CN118887180A (en) | 2024-11-01 |
Similar Documents
| Publication | Title |
|---|---|
| CN114473309B (en) | Welding position identification method for automatic welding systems and automatic welding systems |
| KR102198028B1 (en) | Position Verification Method for Equipment Layout at 3D Design of Smart Factory |
| CN116465335B (en) | Automatic thickness measurement method and system based on point cloud matching |
| CN118788992A (en) | A CNC lathe machining auxiliary system based on machine vision |
| CN110044259A (en) | A kind of gathering pipe flexible measurement system and measurement method |
| WO2021103558A1 (en) | RGB-D data fusion-based robot vision guiding method and apparatus |
| CN118691551B (en) | New energy ship parts detection method and system based on deep learning |
| CN118081767B (en) | Automatic programming system and method for post-processing machining of casting robot |
| CN118887180B (en) | A method for assembly inspection of aerospace products based on visual recognition |
| CN111008607B (en) | A visual servo aircraft door gap automatic laser scanning method and system |
| CN110222370B (en) | Nuclear power station three-dimensional model repair control unit, system and method |
| CN120493593B (en) | Underground pipe network construction diagram generation and verification method and system based on image recognition |
| CN113034418A (en) | Circuit board identification and pad/chip rapid positioning method for electronic industry |
| CN119180820B (en) | Detection method and terminal for printing defects of surfaces and trademarks of automobile glass |
| CN119649053A (en) | A method and system for weld seam recognition and path planning of three-dimensional steel structure workpiece |
| CN110245582A (en) | A method for identifying single-class components in bitmaps based on deep learning |
| CN118134437B (en) | Digital lofting and welding management method for pipeline based on BIM technology |
| CN119992011A (en) | Inspection and maintenance method, system, equipment and storage medium based on AR glasses |
| CN120125656A (en) | A method and system for visual matching of free edges of parts based on model matching |
| CN113780310A (en) | An instrument reading method based on key point detection |
| CN119131488A (en) | Intelligent identification device and method for prefabricated ship pipes based on visual detection |
| CN116339512A (en) | An augmented reality auxiliary assembly system and method based on desktop projection |
| CN117974043A (en) | A welding task process compilation system and method based on machine vision technology |
| CN114202589B (en) | Whole vehicle wood scale system and method based on ArUco codes |
| CN110334818A (en) | A kind of method and system of pipeline automatic identification |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |