
EP3158543B1 - Method for the detection of a viewing-angle-dependent feature - Google Patents

Method for the detection of a viewing-angle-dependent feature

Info

Publication number
EP3158543B1
EP3158543B1 (application EP15728835.8A)
Authority
EP
European Patent Office
Prior art keywords
image
document
segment
document image
spatial position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP15728835.8A
Other languages
German (de)
English (en)
Other versions
EP3158543A1 (fr)
Inventor
Andreas Hartl
Dieter Schmalstieg
Olaf Dressel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bundesdruckerei GmbH
Original Assignee
Bundesdruckerei GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bundesdruckerei GmbH
Publication of EP3158543A1
Application granted
Publication of EP3158543B1
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07D: HANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
    • G07D7/00: Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
    • G07D7/20: Testing patterns thereon
    • G07D7/2008: Testing patterns thereon using pre-processing, e.g. de-blurring, averaging, normalisation or rotation
    • G07D7/003: Testing specially adapted to determine the identity or genuineness of valuable papers, using security elements
    • G07D7/0032: Testing using security elements, using holograms

Definitions

  • the present invention relates to the field of the detection of viewing angle-dependent features, in particular of holograms, on documents.
  • the disclosure document US 2009/154813 A1 shows a method and a device for validating a hologram which is comprised by a document.
  • US 2012/163666 A1 shows a method to verify the authenticity of a driver's license.
  • WO 92/01975 A1 shows a method for identifying a hologram.
  • WO 2010/116279 A1 is directed to an apparatus and method for automatic verification of polarization dependent images.
  • holograms are used in a large number of applications in order to enable the genuineness or authenticity of documents to be checked.
  • viewing angle-dependent features are applied to identification documents or bank notes in order to make copying of the documents more difficult.
  • Features that are dependent on the viewing angle can have representations that are dependent on the viewing angle.
  • a verification of a viewing angle-dependent feature for checking the genuineness or authenticity of a document can be carried out manually by a person.
  • the detection of the viewing angle-dependent feature of the document takes place visually by the person.
  • the viewing angle-dependent feature can then be verified visually by the person, for example by a visual comparison of the representations of the viewing angle-dependent feature with previously known reference representations. Detection and verification of a viewing angle-dependent feature by a person is usually very time-consuming.
  • the invention is based on the knowledge that the above object can be achieved by capturing images of the document in different spatial positions relative to the document and by determining an image difference between the captured images.
  • the viewing angle-dependent feature has different viewing angle-dependent representations in the images of the document.
  • on the basis of the image difference between the captured images, image areas with strong optical changes can be efficiently assigned to the viewing angle-dependent feature.
  • the invention relates to a method for detecting a viewing angle-dependent feature of a document using an image camera, the viewing angle-dependent feature having viewing angle-dependent representations. The image camera captures a first image of the document in a first spatial position of the document relative to the image camera in order to obtain a first document image, captures a second image of the document in a second spatial position of the document relative to the image camera in order to obtain a second document image, and an image difference between the first document image and the second document image is detected in order to detect the viewing angle-dependent feature of the document.
  • the viewing angle dependent feature can have viewing angle dependent representations and / or illumination angle dependent representations.
  • the document can be one of the following documents: an identity document, such as an identity card, a passport, an access control badge, an authorization card, a company ID, a tax code, a ticket, a birth certificate, a driver's license, a vehicle ID card, or a means of payment, for example a bank card or a credit card.
  • the document can furthermore comprise an electronically readable circuit, for example an RFID chip.
  • the document can have one or more layers and be paper and / or plastic-based.
  • the document can be constructed from plastic-based foils, which are joined together to form a card body by means of gluing and / or lamination, the foils preferably having similar material properties.
  • the first spatial position of the document relative to the image camera can include an arrangement and / or inclination of the document relative to the image camera.
  • the first spatial position can comprise a pose with six degrees of freedom, three degrees of freedom being associated with the arrangement, and three degrees of freedom being associated with the inclination, for example including a translation and a rotation.
  • the second spatial position of the document relative to the image camera can include an arrangement and / or inclination of the document, for example comprising a translation and a rotation, relative to the image camera.
  • the second spatial position can comprise a pose with six degrees of freedom, three degrees of freedom being associated with the arrangement, and three degrees of freedom being associated with the inclination, for example including a translation and a rotation.
  • the first document image can be a color image or a grayscale image.
  • the first document image can include a plurality of pixels.
  • the second document image can be a color image or a grayscale image.
  • the second document image can include a plurality of pixels.
  • the first document image and the second document image can form an image stack.
  • the image difference between the first document image and the second document image can be detected on the basis of the plurality of pixels of the first document image and the plurality of pixels of the second document image.
  • the method comprises capturing a plurality of images of the document by the image camera in different spatial positions of the document relative to the image camera, wherein capturing the first image of the document comprises selecting the first image from the plurality of images in the first spatial position, and wherein capturing the second image of the document comprises selecting the second image from the plurality of images in the second spatial position.
  • the acquisition of the plurality of images of the document can include determining a respective spatial position on the basis of a respective image.
  • the respective spatial positions can be compared with the first spatial position in order to select the first image from the plurality of images.
  • the respective spatial positions can be compared with the second spatial position in order to select the second image from the plurality of images.
  • the first spatial position and the second spatial position can be predetermined.
  • capturing the first image of the document further includes a perspective rectification of the first document image on the basis of the first spatial position.
  • capturing the second image of the document further comprises a perspective rectification of the second document image on the basis of the second spatial position.
  • by rectifying the perspective of the first document image, a rectangular first document image can be provided. By rectifying the perspective of the second document image, a rectangular second document image can be provided.
  • the perspective rectification of the first document image can include scaling of the first document image.
  • the perspective rectification of the second document image can include scaling of the second document image.
  • the method further comprises determining the first spatial position of the document relative to the image camera on the basis of the first document image and / or determining the second spatial position of the document relative to the image camera based on the second document image.
  • the determination of the first spatial position and the determination of the second spatial position can include determining a respective homography.
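The homography determination mentioned above can be sketched with a direct linear transform (DLT) from point correspondences. The following Python/NumPy example is illustrative only; the patent does not specify the estimation method, and the function names are invented for this sketch.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src from >= 4
    point correspondences, using the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map 2D points through H (homogeneous normalisation included)."""
    pts = np.asarray(pts, dtype=float)
    hom = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return hom[:, :2] / hom[:, 2:3]
```

In a rectification step, the four detected document corners would serve as `src` and the corners of the target rectangle as `dst`.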
  • the respective spatial position is determined by means of an edge detection. This has the advantage that the respective spatial position can be efficiently determined relative to rectangular documents.
  • the edge detection can include the detection of lines, rectangles, parallelograms or trapezoids in the first document image and / or in the second document image. Edge detection can be performed using a Hough transform.
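As an illustration of line detection via a Hough transform, the following NumPy sketch votes each edge pixel into a (rho, theta) accumulator and returns the strongest line. It is a minimal didactic version with invented names, not the implementation used in the method.

```python
import numpy as np

def hough_lines(edges, n_theta=180, peak_count=1):
    """Vote each edge pixel of a binary image into a (rho, theta)
    accumulator and return the strongest line parameters."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for t, theta in enumerate(thetas):
        # rho = x*cos(theta) + y*sin(theta), shifted to a positive index
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc, (rhos, t), 1)
    peaks = np.argsort(acc.ravel())[::-1][:peak_count]
    return [(r - diag, thetas[t])
            for r, t in zip(*np.unravel_index(peaks, acc.shape))]
```

Grouping the returned lines by their rough direction, as described above, would then yield the line bundles for the rectangle hypotheses.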
  • the respective document image is low-pass filtered for noise reduction. This has the advantage that the image difference can be detected efficiently.
  • the low-pass filtering can be carried out using a windowed mean value filter or a windowed Gaussian filter.
  • the low-pass filtering can further include determining a respective integral image of the respective document image, wherein the low-pass filtering can be carried out using the respective integral image.
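A windowed mean value filter based on an integral image can be sketched as follows in Python/NumPy. Clipping the window at the image border is an assumption made for this example; the patent leaves the border handling open.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero first row/column for easy windows."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def windowed_mean(img, radius=1):
    """Box (mean) filter of size (2*radius+1)^2 using an integral image,
    so the cost per window is constant regardless of the window size.
    Windows are clipped at the image border."""
    h, w = img.shape
    ii = integral_image(img)
    out = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        for x in range(w):
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            total = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
            out[y, x] = total / ((y1 - y0) * (x1 - x0))
    return out
```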
  • the first document image is compared with the second document image in order to determine an alignment of the first document image in relation to the second document image, the first document image and the second document image being aligned in relation to one another on the basis of the determined alignment.
  • Comparing the first document image with the second document image can include extracting and comparing image features of the first document image and the second document image.
  • the image features can be, for example, BRISK image features or SURF image features.
  • determining the image difference between the first document image and the second document image comprises determining a difference image on the basis of the first document image and the second document image, the difference image indicating an image difference between the first document image and the second document image.
  • the difference image can be a grayscale image.
  • the difference image may include a plurality of pixels.
  • the difference image can also be assigned to an image stack.
  • a mean value is determined from a first pixel value of a pixel of the first document image and a second pixel value of a pixel of the second document image, a first deviation of the first pixel value from the mean value being determined, a second deviation of the second pixel value from the mean value being determined and wherein the image difference is detected on the basis of the first deviation and the second deviation.
  • the first pixel value and / or the second pixel value can be grayscale values.
  • the mean value can be an arithmetic mean or a median.
  • the deviation can be a quadratic deviation or an absolute deviation.
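For two aligned document images, the deviation-based image difference described above reduces to a simple per-pixel computation. The sketch below assumes grayscale images and the arithmetic mean; both the quadratic and the absolute deviation variants are shown.

```python
import numpy as np

def difference_image(img_a, img_b, squared=True):
    """Per-pixel change measure for two aligned grayscale images:
    deviation of each pixel value from the per-pixel mean, summed
    over both images (quadratic or absolute deviation)."""
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    mean = 0.5 * (a + b)
    if squared:
        return (a - mean) ** 2 + (b - mean) ** 2
    return np.abs(a - mean) + np.abs(b - mean)
```

Pixels that change strongly between the two viewing angles, for example over a hologram, produce large values in the returned difference image.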
  • a first document image mask is determined on the basis of the first document image, a second document image mask being determined on the basis of the second document image, and the image difference being detected on the basis of the first document image mask and the second document image mask.
  • the first document image mask can have pixels with binary-valued pixel values in order to display valid and invalid pixels of the first document image.
  • the second document image mask can have pixels with binary-valued pixel values in order to display valid and invalid pixels of the second document image.
  • the respective document image mask displays pixels of the respective document image which can be used to detect the image difference. This has the advantage that only valid pixels of the respective document image are used to detect the image difference.
  • a pixel of a respective document image can be invalid, for example, if the pixel is assigned to an area of the document which was incompletely captured.
  • the image difference is segmented into a plurality of image segments, the viewing angle-dependent feature of the document being detected on the basis of at least one image segment of the plurality of image segments.
  • image segments can be used to detect the viewing angle-dependent feature.
  • the image difference can be indicated by a difference image, the difference image being segmented into the plurality of image segments.
  • the segmentation can be carried out by means of a pixel-oriented image segmentation method, an edge-oriented image segmentation method, an area-oriented image segmentation method, a model-oriented image segmentation method, or a texture-oriented image segmentation method.
  • the image segmentation method can include, for example, a maximally stable extremal regions (MSER) method or a mean shift method.
  • the image segments can be contiguous image segments.
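As a minimal stand-in for the segmentation step, the sketch below thresholds a difference image and collects 4-connected components. The patent mentions more elaborate methods such as MSER or mean shift, which are not reproduced here; names and the global threshold are assumptions of this example.

```python
import numpy as np
from collections import deque

def segment_threshold(diff, threshold):
    """Threshold a difference image and group the above-threshold
    pixels into 4-connected components; returns one pixel list per
    contiguous image segment."""
    mask = np.asarray(diff) > threshold
    seen = np.zeros_like(mask, dtype=bool)
    segments = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                queue, pixels = deque([(sy, sx)]), []
                seen[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                segments.append(pixels)
    return segments
```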
  • an image segment dimension is determined for an image segment of the plurality of image segments, the determined image segment dimension being compared with a predetermined image segment dimension in order to qualify the image segment for the detection of the viewing angle-dependent feature.
  • the image segment dimension can be an area of the image segment, an aspect ratio of the image segment, a compactness of the image segment, a pixel value of a pixel of the image segment, or a homogeneity dimension of the image segment.
  • an image segment of the plurality of image segments is assigned a first document image segment of the first document image and a second document image segment of the second document image, the first document image segment being compared with the second document image segment in order to qualify the image segment for the detection of the viewing angle-dependent feature.
  • the comparison of the first document image segment with the second document image segment is carried out by means of a normalized cross-correlation.
  • the image segment is qualified for the detection of the viewing angle-dependent feature if the first document image segment and the second document image segment are different.
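The normalized cross-correlation comparison of the two document image segments can be sketched as follows; the qualification threshold is an illustrative choice, not a value from the patent.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized patches,
    in [-1, 1]; values near 1 mean the patches look the same."""
    a = np.asarray(patch_a, dtype=np.float64).ravel()
    b = np.asarray(patch_b, dtype=np.float64).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return float((a @ b) / denom) if denom > 0 else 0.0

def segment_differs(patch_a, patch_b, ncc_threshold=0.8):
    """Qualify the image segment when the two document image segments
    are dissimilar under NCC (threshold is an illustrative choice)."""
    return ncc(patch_a, patch_b) < ncc_threshold
```

A hologram segment should yield a low NCC between the two viewing angles, so `segment_differs` returning `True` qualifies it for detection.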
  • the viewing angle-dependent feature comprises a hologram or a printing ink with viewing angle-dependent reflection properties or absorption properties. This has the advantage that the viewing angle-dependent feature can be easily implemented with viewing angle-dependent representations.
  • the invention relates to a mobile device according to claim 10 for detecting a viewing angle-dependent feature of a document, the viewing-angle-dependent feature having viewing-angle-dependent representations, comprising an image camera which is designed to capture a first image of the document in a first spatial position of the document relative to the image camera in order to obtain a first document image, and to capture a second image of the document in a second spatial position of the document relative to the image camera in order to obtain a second document image, and a processor which is designed to detect an image difference between the first document image and the second document image in order to detect the viewing angle-dependent feature of the document.
  • This has the advantage that an efficient concept for detecting a viewing angle-dependent feature of a document can be implemented.
  • the mobile device can be a mobile phone or a smartphone.
  • the imaging camera can be a digital imaging camera.
  • the processor can execute a computer program.
  • the mobile device can also have an illumination device for illuminating the document.
  • the lighting device can be an LED lighting device.
  • the method can be carried out by means of the mobile device. Further features of the mobile device result directly from the functionality of the method.
  • the invention relates to a computer program according to claim 11 with a program code for carrying out the method when the computer program is executed on a computer. This has the advantage that the method can be carried out in an automated and repeatable manner.
  • the computer program can be in machine-readable form.
  • the program code can comprise a sequence of instructions for a processor.
  • the computer program can be executed by the processor of the mobile device.
  • the invention can be implemented in hardware and / or software.
  • Fig. 1 shows a diagram of a method 100 for detecting a viewing angle-dependent feature of a document according to an embodiment.
  • the method 100 is performed using an image camera.
  • the viewing angle dependent feature has viewing angle dependent representations.
  • the method 100 comprises capturing 101 a first image of the document by the image camera in a first spatial position of the document relative to the image camera in order to obtain a first document image, capturing 103 a second image of the document by the image camera in a second spatial position of the document relative to the image camera in order to obtain a second document image, and detecting 105 an image difference between the first document image and the second document image in order to detect the feature of the document which is dependent on the viewing angle.
  • the viewing angle dependent feature can have viewing angle dependent representations and / or illumination angle dependent representations.
  • the document can be one of the following documents: an identity document, such as an identity card, a passport, an access control card, an authorization card, a company ID, a tax code, a ticket, a birth certificate, a driver's license, a vehicle ID card, or a means of payment, for example a bank card or a credit card.
  • the document can furthermore comprise an electronically readable circuit, for example an RFID chip.
  • the document can have one or more layers and be paper and / or plastic-based.
  • the document can be constructed from plastic-based foils, which are joined together to form a card body by means of gluing and/or lamination, the foils preferably having similar material properties.
  • the first spatial position of the document relative to the image camera can include an arrangement and / or inclination of the document, for example comprising a translation and a rotation, relative to the image camera.
  • the first spatial position can comprise a pose with six degrees of freedom, three degrees of freedom being associated with the arrangement, and three degrees of freedom being associated with the inclination.
  • the second spatial position of the document relative to the image camera can include an arrangement and / or inclination of the document relative to the image camera.
  • the second spatial position can comprise a pose with six degrees of freedom, three degrees of freedom being associated with the arrangement, and three degrees of freedom being associated with the inclination.
  • the first document image can be a color image or a grayscale image.
  • the first document image can include a plurality of pixels.
  • the second document image can be a color image or a grayscale image.
  • the second document image can include a plurality of pixels.
  • the first document image and the second document image can form an image stack.
  • the image difference between the first document image and the second document image can be detected on the basis of the plurality of pixels of the first document image and the plurality of pixels of the second document image.
  • Fig. 2 shows a diagram of a mobile device 200 for detecting a viewing angle-dependent feature of a document according to an embodiment.
  • the viewing angle dependent feature has viewing angle dependent representations.
  • the mobile device 200 comprises an image camera 201 which is designed to capture a first image of the document in a first spatial position of the document relative to the image camera in order to obtain a first document image, and a second image of the document in a second spatial position of the document relative to the image camera in order to obtain a second document image, and a processor 203 which is configured to detect an image difference between the first document image and the second document image in order to detect the viewing angle-dependent feature of the document.
  • the mobile device 200 can be a cell phone or a smartphone.
  • the image camera 201 can be a digital image camera.
  • the processor 203 can execute a computer program.
  • the image camera 201 can be connected to the processor 203.
  • the mobile device 200 can furthermore have an illumination device for illuminating the document.
  • the lighting device can be an LED lighting device.
  • the method 100 can be carried out by means of the mobile device 200. Further features of the mobile device 200 result directly from the functionality of the method 100.
  • Fig. 3 shows a diagram of a method 100 for detecting a viewing angle-dependent feature of a document according to an embodiment.
  • the method 100 comprises a sequence of steps 301 and a sequence of steps 303.
  • the sequence of steps 301 is carried out for each captured image.
  • the sequence of steps 303 is carried out once per document.
  • the step sequence 301 comprises a step 305 of image selection, a step 307 of registration of an image, and a step 309 of spatial filtering of the image.
  • a plurality of captured images and a plurality of specific spatial positions are processed by the sequence of steps 301 in order to provide an image stack.
  • the step sequence 303 comprises a step 311 of generating a difference image and a step 313 of segmenting and filtering.
  • the image stack is processed by the sequence of steps 303 to provide the location of the features.
  • the diagram consequently shows step sequences 301, 303 which can be carried out for a detection of a viewing angle-dependent feature, for example a hologram, per image and per document, as well as an evaluation of the image stack.
  • the method can be used to detect a viewing angle-dependent feature, for example a hologram, using a mobile device, for example a standard smartphone.
  • an image stack with images of the document can be built up and evaluated in order to automatically determine the position and size of the viewing angle-dependent features of the document.
  • An automatic detection of both the existence and the position of viewing angle-dependent features on a document can be carried out using a mobile augmented reality (AR) arrangement.
  • Documents are usually made of paper or cardboard and are rectangular in shape. For reasons of robustness and efficiency, flat areas of documents are mainly considered. Capturing such documents with a mobile device can be a challenging task due to varying personal data on the document, due to changes in the viewing angle, due to the lighting, due to unexpected user behavior, and / or due to limitations of the image camera. Consequently, several captured images should be evaluated with regard to robustness, which can be achieved using a mobile augmented reality (AR) arrangement.
  • a suitable document template can be generated, which can be used for image-to-image tracking or for a dedicated registration step. This can be based on an algorithm for the detection of perspectively distorted rectangles, executed in real time on a mobile device, and thus serve as a basic building block.
  • the user can be requested to arrange an image camera of the mobile device in front of a document or object and to trigger the detection.
  • an edge image can be calculated, for example using a Canny edge detector with an automatic threshold value selection.
  • Image areas with text-like structures can be filtered in order to remove noise, followed by the detection of lines, for example using a Hough transform.
  • the detected lines can be grouped according to their rough direction.
  • An initial hypothesis for a rectangular area can be formed by considering pairs of line bundles which, for example, can comprise a total of four lines.
  • a final ordered list of rectangle hypotheses can be generated by calculating a support function on an extended edge image.
  • the top candidate of the list can be selected and a homography can be calculated to produce a corrected representation.
  • the dimensions of the rectified image can be determined by averaging the pixel width and / or height of the selected hypothesis.
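Determining the dimensions of the rectified image by averaging the pixel widths and heights of the selected rectangle hypothesis can be sketched as follows; the corner ordering (top-left, top-right, bottom-right, bottom-left) is an assumption of this example.

```python
import numpy as np

def rectified_size(corners):
    """Target width/height for rectification: average the pixel lengths
    of the opposite sides of the detected quadrilateral. `corners` are
    the four corner points in order (tl, tr, br, bl)."""
    tl, tr, br, bl = [np.asarray(c, dtype=np.float64) for c in corners]
    width = 0.5 * (np.linalg.norm(tr - tl) + np.linalg.norm(br - bl))
    height = 0.5 * (np.linalg.norm(bl - tl) + np.linalg.norm(br - tr))
    return int(round(width)), int(round(height))
```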
  • the rectified image can be used to generate a planar tracking template which can be represented as an image pyramid at runtime and which can be tracked using natural image features.
  • a Harris corner detector and normalized cross-correlation (NCC) can be used to match image areas across subsequent images and to establish a homography between the current image and the rectified image or tracking template.
  • a movement model can be used to estimate and predict the movement of the image camera, and thus to save computing resources.
  • the algorithm can be executed in real time on mobile devices, for example standard smartphones, and can provide a complete spatial position or pose with six degrees of freedom (6DOF) for each captured image.
  • the arrangement has the advantage that it allows interaction with previously unknown documents with any personal data.
  • knowledge of a current viewing angle can be advantageous, as it can allow working with rectified images and controlling image acquisition.
  • the algorithm comprises three main parts to generate an image stack: step 305 of image selection, step 307 of rectification and / or registration, and step 309 of spatial filtering.
  • the image selection in step 305 can be carried out as follows.
  • the image stack should include a plurality of images with spatial positions which use the variability of the viewing angle-dependent feature in the best possible way. For inexperienced users, this task can be challenging. Therefore, the task of image selection, in favor of repeatability and a reduced cognitive load, should not be carried out by the user.
  • the specific poses can be used to automatically select images based on a 2D orientation map.
  • the visibility and the similarity to the template can be taken into account in order to select suitable images.
  • the rectification or registration in step 307 can be carried out as follows. For each image that passes the selection step, a homography estimated from the tracked spatial position can be used to generate a rectified image. A complete set of images can thus form a stack of images of the same size. Basically, the document tracking algorithm can be robust and can successfully track the document over a wide range of angles. However, parts of the document can move out of the current camera image and the images can have perspective distortions. The rectified images can therefore be incomplete and/or not ideally aligned.
  • the alignment can be adapted using image feature extraction, windowed matching and / or homography estimation. However, this can reduce the frame rate, which may not be desirable. Since images are continuously captured and provided by the image camera, unsuitable rectified or registered images can be discarded using an NCC rating, which can be more computationally efficient. Because of the real-time tracking, this can be an efficient way to automatically select images.
  • the spatial filtering in step 309 can be performed as follows. Each new layer that is placed on the stack of rectified images can be spatially filtered in order to better deal with noise and remaining inaccuracies in the registration.
  • a windowed mean value filter, which can be based on an integral image calculation, can be used for this task. Incomplete image information, for example undefined and/or black areas produced during rectification, can be taken into account by recording the valid image areas in a second mask that is used during filtering.
  • the spatial filtering in step 309 can be performed using a predetermined window size, for example 3x3 pixels.
  • the algorithm for processing the image stack comprises two main parts: the step 311 of generating a difference image by statistical-based evaluation and the step 313 of segmenting and searching for a mode to generate a final detection result.
  • an optional verification step can be carried out which can use NCC calculations on the estimated position of the viewing angle-dependent feature between rectified or registered images of the image stack in order to discard false-positive detections.
  • the generation of the difference image in step 311 can be carried out as follows.
  • the image stack can be interpreted as a time sequence for each position (x, y).
  • the degree of change can be assessed by calculating a suitable deviation measure with respect to a model m at the position (x, y) over the entire image stack, whereby document image masks, which can be determined in the previous step, can be taken into account.
  • an intermediate display for signs of viewing angle dependency can be provided, which can also be referred to as a difference image.
  • the viewing angle-dependent feature is a hologram and the difference image is a hologram map.
  • L(x, y) can denote the number of image stack layers which contain valid pixel values for the position (x, y) according to the document image masks, and v_l(x, y) can denote the pixel value at (x, y) in layer l.
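Using this notation, a difference image over the whole image stack can be computed per position (x, y) from the valid layers only. The sketch below uses the mean absolute deviation from the per-pixel mean as the deviation measure; the patent leaves the concrete measure open, so this choice is an assumption.

```python
import numpy as np

def stack_difference(stack, masks):
    """Difference image over an image stack: per-pixel mean absolute
    deviation from the per-pixel mean, using only layers whose mask
    marks the pixel as valid. Positions with no valid layer get 0."""
    stack = np.asarray(stack, dtype=np.float64)   # shape (layers, h, w)
    masks = np.asarray(masks, dtype=bool)         # shape (layers, h, w)
    valid = masks.sum(axis=0)                     # L(x, y)
    safe = np.maximum(valid, 1)                   # avoid division by zero
    mean = np.where(masks, stack, 0.0).sum(axis=0) / safe
    dev = np.where(masks, np.abs(stack - mean), 0.0).sum(axis=0) / safe
    return np.where(valid > 0, dev, 0.0)
```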
  • the segmentation and filtering in step 313 can be performed as follows. Dominant spatial peaks within the image difference or the difference image, and adjacent image areas with large changes of comparable value or magnitude, should be localized. Hence, this constitutes an image segmentation task, whereby the choice of the image segmentation method can influence both the quality and the running time.
  • the use of a global threshold value may not be sufficient in certain cases. Then locally calculated threshold values can be used, which can additionally be adapted using global information. In order to save running time, integral images can be used for the filtering.
  • the calculated areas can then be filtered to reduce the number of false positives. Criteria relating to a minimum area, an aspect ratio and a compactness can be used, together with a minimum pixel value and/or a homogeneity of the resulting area.
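The segmentation with locally calculated thresholds (using an integral image, as suggested above) and the subsequent filtering might be sketched as follows; all parameter names and values are illustrative, and the compactness and homogeneity criteria are omitted for brevity:

```python
import numpy as np

def segment_and_filter(diff, win=15, bias=1.05, min_area=20,
                       max_aspect=4.0, min_peak=0.1):
    """Segment a difference image with a locally adaptive threshold
    (windowed mean computed in O(1) per pixel via an integral image),
    then filter connected regions by minimum area, aspect ratio and
    minimum peak value. Returns bounding boxes (x, y, w, h)."""
    h, w = diff.shape
    ii = np.pad(diff, ((1, 0), (1, 0))).cumsum(0).cumsum(1)  # integral image
    half = win // 2
    y0 = np.clip(np.arange(h) - half, 0, h); y1 = np.clip(np.arange(h) + half + 1, 0, h)
    x0 = np.clip(np.arange(w) - half, 0, w); x1 = np.clip(np.arange(w) + half + 1, 0, w)
    Y0, X0 = np.meshgrid(y0, x0, indexing="ij")
    Y1, X1 = np.meshgrid(y1, x1, indexing="ij")
    window = (Y1 - Y0) * (X1 - X0)
    local_mean = (ii[Y1, X1] - ii[Y0, X1] - ii[Y1, X0] + ii[Y0, X0]) / window
    binary = diff > bias * local_mean  # locally adaptive threshold
    labels = np.zeros((h, w), int)
    segments, cur = [], 0
    for sy in range(h):
        for sx in range(w):
            if not binary[sy, sx] or labels[sy, sx]:
                continue
            cur += 1
            labels[sy, sx] = cur
            todo, pix = [(sy, sx)], []
            while todo:  # flood fill, 4-neighbourhood
                y, x = todo.pop()
                pix.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not labels[ny, nx]:
                        labels[ny, nx] = cur
                        todo.append((ny, nx))
            ys = [p[0] for p in pix]; xs = [p[1] for p in pix]
            bh, bw = max(ys) - min(ys) + 1, max(xs) - min(xs) + 1
            aspect = max(bh, bw) / min(bh, bw)
            peak = max(diff[p] for p in pix)
            if len(pix) >= min_area and aspect <= max_aspect and peak >= min_peak:
                segments.append((min(xs), min(ys), bw, bh))
    return segments
```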
  • the process of detecting a viewing angle-dependent feature can include detecting the document and moving a mobile device with an image camera or moving the document and capturing images of the document and associated spatial positions. This data can then be processed and analyzed by the algorithm.
  • a lighting device of the mobile device can be switched on or off. The lighting device can be advantageous in order to capture all relevant representations of the viewing angle-dependent feature.
  • the image stack can be generated and updated for each image.
  • the generation and evaluation of the difference image with an optional validation step can then be carried out.
  • After the viewing angle-dependent feature of the document has been successfully detected, it can be highlighted in an image using a surrounding frame or box at the corresponding position.
  • Real-time tracking of the document can be used to obtain registered images from a variety of angles. In this case, only the difference image can be segmented in order to obtain possible image areas which can then be validated. This enables a process to be implemented which can be easily integrated into existing applications for verification of documents.
  • Fig. 4 shows a diagram of a detection scenario for detecting a viewing angle-dependent feature 402 of a document 401 according to an embodiment.
  • the diagram shows the acquisition of a plurality of images of the document 401 from different spatial positions.
  • a first document image 403 is captured in a first spatial position, a second document image 405 in a second spatial position, a third document image 407 in a third spatial position, and an Nth document image 409 in an Nth spatial position.
  • Fig. 5 shows a diagram of a plurality of captured images of the document according to an embodiment.
  • a first document image 403 from a first spatial position, a second document image 405 from a second spatial position, a third document image 407 from a third spatial position, and an Nth document image 409 from an Nth spatial position are shown one above the other in the form of an image stack.
  • the first document image 403, the second document image 405, the third document image 407, and the N-th document image 409 are displayed together with respective document image masks.
  • the document can be tracked, with a plurality of images of the document being able to be captured from different spatial positions. On the basis of an estimated spatial position or homography, each document image can be rectified and placed on the image stack. The captured document images can be rectified and can have a predetermined resolution.
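The rectification step can be illustrated as below, assuming the tracker yields a homography H that maps canonical document coordinates to image coordinates; the returned mask corresponds to the document image mask mentioned above. This is a minimal nearest-neighbour sketch, not the patent's implementation:

```python
import numpy as np

def rectify(image, H, out_shape):
    """Warp a captured frame into the canonical document plane.

    H: 3x3 homography mapping document coordinates (x, y) to image
    coordinates, as estimated by the tracker. Uses inverse mapping with
    nearest-neighbour lookup and returns the rectified image together
    with a mask of output pixels actually covered by the frame."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # 3 x N homogeneous
    src = H @ pts
    sx = np.round(src[0] / src[2]).astype(int).reshape(h, w)
    sy = np.round(src[1] / src[2]).astype(int).reshape(h, w)
    mask = (sx >= 0) & (sx < image.shape[1]) & (sy >= 0) & (sy < image.shape[0])
    out = np.zeros(out_shape, dtype=image.dtype)
    out[mask] = image[sy[mask], sx[mask]]
    return out, mask
```

Stacking the rectified outputs of successive frames, together with their masks, yields the registered image stack used above.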
  • Fig. 6 shows a surface diagram 600 of a difference image according to an embodiment.
  • the surface diagram 600 shows pixel values of the difference image as a function of a position (x, y) for a document.
  • Fig. 7 shows a diagram 701 of a difference image and a contour diagram 703 of a segmented difference image according to an embodiment.
  • the diagram 701 shows pixel values of the difference image as a function of a position (x, y) for a document.
  • the diagram 701 corresponds to a scaled intensity image.
  • Fig. 8 shows contour diagrams 801, 803, 805, 807 with image segments for a plurality of captured images of a document according to one embodiment.
  • the plurality of captured images include a first document image 403, a second document image 405, a third document image 407, and an N-th document image 409.
  • the contour diagrams 801 are assigned to the first document image 403.
  • the contour diagrams 803 are assigned to the second document image 405.
  • the contour diagrams 805 are assigned to the third document image 407.
  • the contour diagrams 807 are assigned to the Nth document image 409.
  • MSER: Maximally Stable Extremal Regions.
  • mean-shift: a method that can be used to reduce the influence of reflections on the document.
  • a highlight detector can also be used, and inpainting can also be carried out.
  • the plurality of captured images, or the image stack, can thereby be analyzed in more detail.
  • the contour diagrams 801, 803, 805, 807 show a segmentation of an image stack, for example one slice at a time.
  • the document images 403, 405, 407, 409 are shown in the top row.
  • the middle row shows image segments, which for example can be determined using an MSER method.
  • the bottom row shows image segments that are determined, for example, by means of the MSER method, based on modified images of the document, for example using highlight detection and/or inpainting.
  • An approach for the automatic detection of a viewing angle-dependent feature of a document can thus be implemented using a mobile device.
  • Previously unknown documents can be detected and tracked.
  • the position of one or more viewing angle-dependent features, if any, can be determined automatically.
  • the detection of viewing angle-dependent features can represent a first step towards automated testing and verification of viewing-angle-dependent features.
  • the viewing angle-dependent features can be embedded features.
  • Fig. 9 shows a diagram 900 of a plurality of spatial positions for capturing a plurality of images of the document according to an embodiment.
  • the diagram 900 comprises a 2D orientation map for capturing the images of the document and / or for monitoring the capturing of the images of the document from different angles.
  • Predetermined spatial positions for capturing images of the document are highlighted by dots.
  • the predetermined spatial positions can correspond to an azimuth and an elevation of the document relative to an image camera.
  • the predetermined spatial positions can be defined in a quantized and / or discretized manner.
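A quantized orientation map of the kind described might look like the sketch below; the grid step sizes and the coverage helper are illustrative assumptions, not taken from the patent:

```python
def pose_to_cell(azimuth_deg, elevation_deg, az_step=30, el_step=15):
    """Quantize a camera pose (azimuth/elevation of the document relative
    to the image camera) to a cell of the 2D orientation map."""
    return (int(azimuth_deg // az_step) % (360 // az_step),
            int(elevation_deg // el_step))

class OrientationMap:
    """Tracks which predetermined viewing directions have been covered,
    so the capture of images from different angles can be monitored."""
    def __init__(self):
        self.covered = set()

    def update(self, azimuth_deg, elevation_deg):
        self.covered.add(pose_to_cell(azimuth_deg, elevation_deg))

    def coverage(self, required_cells):
        """Fraction of the predetermined cells already visited."""
        return len(self.covered & required_cells) / len(required_cells)
```

During acquisition, each tracked pose updates the map; once coverage is high enough, the image stack spans a sufficient range of viewing angles.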
  • Features that are dependent on the viewing angle can change their representations depending on the viewing direction and lighting direction of existing light sources in the environment.
  • Features that are dependent on the viewing angle can be delimited from the environment in the document and / or have a limited extent in the document.
  • a local change in the appearance with regard to the viewing angle can be used to detect features that are dependent on the viewing angle.
  • the document should be recorded from different angles. Therefore a mobile augmented reality (AR) arrangement can be used for image acquisition.
  • AR: augmented reality.
  • the area of the document should first be detected.
  • a document image or a rectified document image can then be transferred to a tracking algorithm, so that information on the spatial position can be made available for each individual document image. Neglecting a rotation around the line of sight, the acquisition of the images can be controlled with an orientation map, which can indicate an angle to the x-axis and to the y-axis. This map can be filled depending on the current spatial position or pose and ensures that the document is viewed from different angles.
  • the extraction of the document can then be carried out by rectification using the spatial position determined by the tracker. An image stack with rectified and/or registered images can thus be formed.
  • a check can also be carried out using a normalized cross-correlation.
  • a model can be formed from the image stack (m_0, m_1).
  • the deviations of each layer of the image stack from the model can be fused by means of a deviation measure (e_0, e_1) to form a difference image, for example in the form of a hologram map.
  • This difference image characterizes the document with regard to the position and extent of viewing angle-dependent features. Segmentation can then be carried out in order to obtain a set of image segments.
  • the filtered and validated image segments can represent the result of the detection.
  • the verification and / or validation of the image segments can reduce the number of false-positive detected viewing angle-dependent features.
  • a respective image segment can be extracted from each slice of the image stack.
  • each image segment or patch is then compared with the other image segments or patches using a normalized cross-correlation (NCC) and classified as a match or a deviation using a threshold value th_ncc. If the relative proportion of deviations is above a threshold value th_validation, it can be assumed that the current image segment exhibits sufficient visual changes when the viewing angle changes.
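The pairwise NCC validation with thresholds th_ncc and th_validation could be sketched as follows; the threshold values are illustrative:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 1.0

def validate_segment(patches, th_ncc=0.8, th_validation=0.5):
    """Pairwise-compare the patches extracted from one image segment
    across all layers of the image stack. A pair counts as a deviation
    if its NCC falls below th_ncc; the segment is accepted as viewing
    angle-dependent if the relative proportion of deviating pairs
    exceeds th_validation."""
    n = len(patches)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    deviating = sum(1 for i, j in pairs if ncc(patches[i], patches[j]) < th_ncc)
    return deviating / len(pairs) > th_validation
```

A static print region yields near-identical patches (NCC close to 1) and is rejected; a hologram's patches decorrelate as the angle changes and are accepted.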
  • NCC: normalized cross-correlation.
  • each slice of the image stack can be segmented individually, for example using the Maximally Stable Extremal Regions (MSER) method. Sequences of image segments which are approximately spatially constant can be extracted from the image segments obtained. Each sequence can then be viewed, segmented, filtered and validated as an individual difference image, for example in the form of a hologram map.
  • a segmentation of the difference image using local adaptive thresholding with automatic selection of a suitable window size can be used to improve the scaling invariance.
  • the determined image segment can be used in the filtering instead of a respective delimiting rectangle.
  • the peaks in the difference image determined in the previous step can be characterized by a comparison with the immediate surroundings in the difference image. This means that the verification or validation step using a normalized cross-correlation (NCC) can be omitted, depending on the application.
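Such a comparison of a peak with its immediate surroundings might be sketched as follows; the ring margin and the ratio criterion are illustrative assumptions:

```python
import numpy as np

def peak_contrast(diff, box, margin=5):
    """Characterize a detected peak by comparing the segment's mean
    difference value with its immediate surroundings (a ring of
    `margin` pixels around the bounding box). A high ratio indicates a
    genuine, locally confined change rather than global noise."""
    x, y, w, h = box
    H, W = diff.shape
    inner = diff[y:y + h, x:x + w]
    y0, y1 = max(0, y - margin), min(H, y + h + margin)
    x0, x1 = max(0, x - margin), min(W, x + w + margin)
    outer = diff[y0:y1, x0:x1].astype(float).copy()
    outer[y - y0:y - y0 + h, x - x0:x - x0 + w] = np.nan  # exclude the segment
    ring_mean = np.nanmean(outer)
    return float(inner.mean() / ring_mean) if ring_mean > 0 else float("inf")
```

Segments whose contrast ratio exceeds a chosen threshold could be accepted directly, allowing the NCC-based validation step to be omitted as stated above.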
  • holograms can thus be detected on unknown documents by means of a mobile device without reference information being available. A detection of a viewing angle-dependent feature can therefore also be carried out without knowledge of the document type or the document layout.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Image Analysis (AREA)

Claims (11)

  1. Method (100) for detecting a viewing angle-dependent feature (402) of a document (401) using an image camera (201), the viewing angle-dependent feature (402) having viewing angle-dependent representations, comprising:
    capturing (101) a first image of the document (401) by the image camera (201) in a first spatial position of the document (401) relative to the image camera (201) in order to obtain a first document image (403);
    capturing (103) a second image of the document (401) by the image camera (201) in a second spatial position of the document (401) relative to the image camera (201) in order to obtain a second document image (405);
    acquiring (105) an image difference between the first document image (403) and the second document image (405) in order to detect the viewing angle-dependent feature (402) of the document (401),
    the image difference being segmented into a plurality of image segments and the viewing angle-dependent feature (402) of the document (401) being detected on the basis of at least one image segment of the plurality of image segments,
    an image segment dimension being determined for an image segment of the plurality of image segments, and the determined image segment dimension being compared with a predetermined image segment dimension in order to qualify the image segment for the detection of the viewing angle-dependent feature (402);
    a first document image segment of the first document image being associated with an image segment of the plurality of image segments and a second document image segment of the second document image being associated with an image segment of the plurality of image segments, the first document image segment being compared with the second document image segment in order to qualify the image segment for the detection of the viewing angle-dependent feature;
    the comparison of the first document image segment with the second document image segment being carried out by means of a normalized cross-correlation, and the image segment being qualified for the detection of the viewing angle-dependent feature if the first document image segment and the second document image segment differ;
    the capturing (101) of the first image of the document (401) further comprising a perspective correction of the first document image (403) on the basis of the first spatial position, and the capturing (103) of the second image of the document (401) further comprising a perspective correction of the second document image (405) on the basis of the second spatial position;
    the method further comprising determining the first spatial position of the document (401) relative to the image camera (201) on the basis of the first document image (403) and/or determining the second spatial position of the document (401) relative to the image camera (201) on the basis of the second document image (405).
  2. Method (100) according to claim 1, the method (100) comprising capturing a plurality of images of the document (401) by the image camera (201) in different spatial positions of the document (401) relative to the image camera (201), the capturing (101) of the first image of the document (401) comprising selecting the first image from the plurality of images in the first spatial position, and the capturing (103) of the second image of the document (401) comprising selecting the second image from the plurality of images in the second spatial position, the capturing of the plurality of images of the document (401) comprising determining a respective spatial position on the basis of a respective image, the respective spatial positions being compared with the first spatial position in order to select the first image from the plurality of images, and the respective spatial positions being compared with the second spatial position in order to select the second image from the plurality of images.
  3. Method (100) according to claim 1 or 2, the respective spatial position being determined by means of edge detection.
  4. Method (100) according to any one of the preceding claims, the respective document image (403, 405) being subjected to low-pass filtering in order to reduce noise.
  5. Method (100) according to any one of the preceding claims, the first document image (403) being compared with the second document image (405) in order to determine an orientation of the first document image (403) relative to the second document image (405), and the first document image (403) and the second document image (405) being aligned with one another on the basis of the determined orientation.
  6. Method (100) according to any one of the preceding claims, a mean value being determined from a first pixel value of a pixel of the first document image (403) and a second pixel value of a pixel of the second document image (405), a first deviation between the first pixel value and the mean value being determined, a second deviation between the second pixel value and the mean value being determined, and the image difference being acquired on the basis of the first deviation and the second deviation.
  7. Method (100) according to any one of the preceding claims, a first document image mask being determined on the basis of the first document image (403), a second document image mask being determined on the basis of the second document image (405), and the image difference being acquired on the basis of the first document image mask and the second document image mask.
  8. Method (100) according to claim 7, the respective document image mask indicating pixels of the respective document image (403, 405) which can be used for the acquisition (105) of the image difference.
  9. Method (100) according to any one of the preceding claims, the viewing angle-dependent feature (402) comprising a hologram or a printing ink having viewing angle-dependent reflection properties or absorption properties.
  10. Mobile device (200) for detecting a viewing angle-dependent feature (402) of a document (401), the viewing angle-dependent feature (402) having viewing angle-dependent representations, comprising:
    an image camera (201) configured to capture a first image of the document (401) in a first spatial position of the document (401) relative to the image camera (201) in order to obtain a first document image (403), and to capture a second image of the document (401) in a second spatial position of the document (401) relative to the image camera (201) in order to obtain a second document image (405); and
    a processor (203) configured to acquire an image difference between the first document image (403) and the second document image (405) in order to detect the viewing angle-dependent feature (402) of the document (401),
    the processor being configured to segment the image difference into a plurality of image segments and to detect the viewing angle-dependent feature (402) of the document (401) on the basis of at least one image segment of the plurality of image segments,
    the processor being configured to determine an image segment dimension for an image segment of the plurality of image segments and to compare the determined image segment dimension with a predetermined image segment dimension in order to qualify the image segment for the detection of the viewing angle-dependent feature (402);
    the processor being configured to associate a first document image segment of the first document image with an image segment of the plurality of image segments and a second document image segment of the second document image with an image segment of the plurality of image segments, and to compare the first document image segment with the second document image segment in order to qualify the image segment for the detection of the viewing angle-dependent feature;
    the processor being configured to carry out the comparison of the first document image segment with the second document image segment by means of a normalized cross-correlation and to qualify the image segment for the detection of the viewing angle-dependent feature if the first document image segment and the second document image segment differ;
    the capturing (101) of the first image of the document (401) further comprising a perspective correction of the first document image (403) on the basis of the first spatial position, and the capturing (103) of the second image of the document (401) further comprising a perspective correction of the second document image (405) on the basis of the second spatial position;
    the mobile device (200) being configured to determine the first spatial position of the document (401) relative to the image camera (201) on the basis of the first document image (403) and/or to determine the second spatial position of the document (401) relative to the image camera (201) on the basis of the second document image.
  11. Computer program comprising program code whose instructions cause the mobile device according to claim 10 to carry out a method according to any one of claims 1 to 9.
EP15728835.8A 2014-06-17 2015-06-10 Procédé pour la détection d'une caractéristique dépendant de l'angle d'observation Active EP3158543B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102014108492.6A DE102014108492A1 (de) 2014-06-17 2014-06-17 Verfahren zum Detektieren eines blickwinkelabhängigen Merkmals eines Dokumentes
PCT/EP2015/062943 WO2015193152A1 (fr) 2014-06-17 2015-06-10 Procédé de détection d'une caractéristique d'un document, dépendant de l'angle de vue

Publications (2)

Publication Number Publication Date
EP3158543A1 EP3158543A1 (fr) 2017-04-26
EP3158543B1 true EP3158543B1 (fr) 2021-10-13

Family

ID=53396483

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15728835.8A Active EP3158543B1 (fr) 2014-06-17 2015-06-10 Procédé pour la détection d'une caractéristique dépendant de l'angle d'observation

Country Status (3)

Country Link
EP (1) EP3158543B1 (fr)
DE (1) DE102014108492A1 (fr)
WO (1) WO2015193152A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201602198D0 (en) * 2016-02-08 2016-03-23 Idscan Biometrics Ltd Method computer program and system for hologram extraction
RU2644513C1 (ru) 2017-02-27 2018-02-12 Общество с ограниченной ответственностью "СМАРТ ЭНДЖИНС СЕРВИС" Способ детектирования голографических элементов в видеопотоке
TWI844619B (zh) 2019-02-28 2024-06-11 瑞士商西克帕控股有限公司 利用可攜式裝置來認證磁感應標記之方法
AR123354A1 (es) 2020-09-02 2022-11-23 Sicpa Holding Sa Marca de seguridad, método y dispositivo para leer la marca de seguridad, documento de seguridad marcado con la marca de seguridad y método y sistema para verificar dicho documento de seguridad

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL9001616A (nl) * 1990-07-16 1992-02-17 Stichting Daht Foundation Werkwijze voor het identificeren van een hologram en inrichting voor het uitvoeren van deze werkwijze.
US6473165B1 (en) * 2000-01-21 2002-10-29 Flex Products, Inc. Automated verification systems and methods for use with optical interference devices
US7672475B2 (en) * 2003-12-11 2010-03-02 Fraudhalt Limited Method and apparatus for verifying a hologram and a credit card
DE202005018964U1 (de) * 2005-12-02 2006-03-16 Basler Ag Vorrichtung zum Prüfen der Echtheit von Dokumenten
JP5394071B2 (ja) * 2006-01-23 2014-01-22 ディジマーク コーポレイション 物理的な物品で有用な方法
US7925096B2 (en) * 2007-12-12 2011-04-12 Xerox Corporation Method and apparatus for validating holograms
US20100253782A1 (en) * 2009-04-07 2010-10-07 Latent Image Technology Ltd. Device and method for automated verification of polarization-variant images
US8953037B2 (en) * 2011-10-14 2015-02-10 Microsoft Corporation Obtaining spatially varying bidirectional reflectance distribution function
DE102013101587A1 (de) * 2013-02-18 2014-08-21 Bundesdruckerei Gmbh Verfahren zum überprüfen der echtheit eines identifikationsdokumentes

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
DE102014108492A1 (de) 2015-12-17
EP3158543A1 (fr) 2017-04-26
WO2015193152A1 (fr) 2015-12-23

Similar Documents

Publication Publication Date Title
Bu et al. Crack detection using a texture analysis-based technique for visual bridge inspection
DE102011106050B4 (de) Schattenentfernung in einem durch eine fahrzeugbasierte Kamera erfassten Bild zur Detektion eines freien Pfads
Abolghasemi et al. An edge-based color-aided method for license plate detection
DE102008000758B4 (de) Vorrichtung zum Erkennen eines Objekts in einer Abbildung
DE112020005932T5 (de) Systeme und verfahren zur segmentierung von transparenten objekten mittels polarisationsmerkmalen
DE102017220752B4 (de) Bildverarbeitungsvorrichtung, Bildbverarbeitungsverfahren und Bildverarbeitungsprogramm
DE102017220307B4 (de) Vorrichtung und Verfahren zum Erkennen von Verkehrszeichen
Gao et al. A novel target detection method for SAR images based on shadow proposal and saliency analysis
DE102014117102B4 (de) Spurwechselwarnsystem und Verfahren zum Steuern des Spurwechselwarnsystems
EP0780002A1 (fr) Procede de reconstruction de structures constituees de lignes sous forme d'une trame
EP3158543B1 (fr) Procédé pour la détection d'une caractéristique dépendant de l'angle d'observation
DE102015122116A1 (de) System und Verfahren zur Ermittlung von Clutter in einem aufgenommenen Bild
DE102015207903A1 (de) Vorrichtung und Verfahren zum Erfassen eines Verkehrszeichens vom Balkentyp in einem Verkehrszeichen-Erkennungssystem
Gao et al. From quaternion to octonion: Feature-based image saliency detection
DE102013110785A1 (de) Positionierverfahren für die positionierung eines mobilgerätes relativ zu einem sicherheitsmerkmal eines dokumentes
EP2549408A1 (fr) Procédé de détection et de classification d'objets
Hoang et al. Scalable histogram of oriented gradients for multi-size car detection
Haselhoff et al. On visual crosswalk detection for driver assistance systems
EP3259703B1 (fr) Appareil mobile pour détecter une zone de texte sur un document d'identification
Khaliluzzaman et al. Zebra-crossing detection based on geometric feature and vertical vanishing point
EP2394250B1 (fr) Procédé et dispositif pour vérifier des documents par utilisation d'une transformation en ondelettes
EP2551788A1 (fr) Procédé pour la reconnaissance de visage
WO2013144136A1 (fr) Procédé pour détecter une structure polygonale déformée en perspective dans une image d'un document d'identification
Joshi et al. A computational model for boundary detection
Gaur et al. Comparison of edge detection techniques for segmenting car license plates

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20170109

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20181009

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 502015015293

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: G07D0007120000

Ipc: G07D0007000000

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: G07D 7/20 20160101ALI20210617BHEP

Ipc: G07D 7/00 20160101AFI20210617BHEP

INTG Intention to grant announced

Effective date: 20210713

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 502015015293

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: GERMAN

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1438744

Country of ref document: AT

Kind code of ref document: T

Effective date: 20211115

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20211013

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211013

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211013

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211013

The remaining legal events are tabulated below, one row per event, in original order.

Event codes:
  PG25 = Lapsed in a contracting state [announced via postgrant information from national office to epo]
  REG  = Reference to a national code
  PLBE = No opposition filed within time limit
  STAA = Information on the status of an ep patent application or granted ep patent
  26N  = No opposition filed
  P01  = Opt-out of the competence of the unified patent court (UPC) registered
  PGFP = Annual fee paid to national office [announced via postgrant information from national office to epo]

Free format text:
  [T] = Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit
  [F] = Lapse because of non-payment of due fees

Event | Country | Details                                                          | Date
PG25  | BG      | [T]                                                              | 20220113
PG25  | IS      | [T]                                                              | 20220213
PG25  | SE      | [T]                                                              | 20211013
PG25  | PT      | [T]                                                              | 20220214
PG25  | PL      | [T]                                                              | 20211013
PG25  | NO      | [T]                                                              | 20220113
PG25  | NL      | [T]                                                              | 20211013
PG25  | LV      | [T]                                                              | 20211013
PG25  | HR      | [T]                                                              | 20211013
PG25  | GR      | [T]                                                              | 20220114
PG25  | ES      | [T]                                                              | 20211013
REG   | DE      | Legal event code: R097; ref document number: 502015015293        | —
PG25  | SM      | [T]                                                              | 20211013
PG25  | SK      | [T]                                                              | 20211013
PG25  | RO      | [T]                                                              | 20211013
PG25  | EE      | [T]                                                              | 20211013
PG25  | DK      | [T]                                                              | 20211013
PG25  | CZ      | [T]                                                              | 20211013
PLBE  | —       | Original code: 0009261                                           | —
STAA  | —       | Status: no opposition filed within time limit                    | —
26N   | —       | No opposition filed                                              | 20220714
PG25  | AL      | [T]                                                              | 20211013
PG25  | SI      | [T]                                                              | 20211013
PG25  | MC      | [T]                                                              | 20211013
REG   | CH      | Legal event code: PL                                             | —
REG   | BE      | Legal event code: MM                                             | 20220630
PG25  | LU      | [F]                                                              | 20220610
PG25  | LI      | [F]                                                              | 20220630
PG25  | IE      | [F]                                                              | 20220610
PG25  | CH      | [F]                                                              | 20220630
PG25  | IT      | [T]                                                              | 20211013
PG25  | BE      | [F]                                                              | 20220630
P01   | —       | Opt-out of the competence of the unified patent court registered | 20230526
REG   | AT      | Legal event code: MM01; ref document number: 1438744; kind code: T | 20220610
PG25  | AT      | [F]                                                              | 20220610
PG25  | HU      | [T]; invalid ab initio                                           | 20150610
PG25  | MK      | [T]                                                              | 20211013
PG25  | CY      | [T]                                                              | 20211013
PG25  | MT      | [T]                                                              | 20211013
PGFP  | DE      | Year of fee payment: 11                                          | 20250618
PGFP  | GB      | Year of fee payment: 11                                          | 20250620
PGFP  | FR      | Year of fee payment: 11                                          | 20250626
PG25  | TR      | [T]                                                              | 20211013