WO2025077567A1 - Three-dimensional model output method, apparatus and device, and computer-readable storage medium - Google Patents
Three-dimensional model output method, apparatus and device, and computer-readable storage medium
- Publication number: WO2025077567A1
- Application number: PCT/CN2024/120783
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- dimensional model
- voxel grid
- target image
- initial
- model output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
Definitions
- The embodiments of the present application relate to the field of three-dimensional reconstruction technology, and specifically to a three-dimensional model output method, apparatus, device and computer-readable storage medium.
- 3D models are used in industry to print parts and in architecture to design and plan buildings.
- Common 3D models are often constructed manually by technicians using modeling software, based on pre-designed drawings.
- When a constructed 3D model needs to be viewed on other devices, the complete 3D model file must first be sent to those devices through hardware copying or wireless transmission before the model can be viewed there.
- The existing approach of manually constructing a 3D model and directly outputting its data when needed can therefore be inconvenient.
- For example, pedestrians may want to use AR glasses or a mobile phone to view the building shapes and road contours of the area ahead of them to determine whether their destination is ahead. If technicians were to model all buildings and roads manually in advance, it would take a great deal of time and effort, and the volume of 3D model data would also be large.
- When such a 3D model is transmitted to the terminal carried by the user for viewing, the transfer is limited by the data transmission speed, so the user has to spend a long time waiting for the 3D model data to arrive at the terminal.
- If the user walks in an area where 3D modeling has not been completed in advance, or where the buildings and roads have changed, the user will be unable to view the model or will receive incorrect 3D model information.
- The embodiments of the present application provide a 3D model output method, apparatus, device and computer-readable storage medium to solve the problem of inconvenient generation and output of 3D model images.
- A three-dimensional model output method is provided, comprising: acquiring a plurality of first initial images taken by cameras with different camera poses; performing three-dimensional reconstruction based on the plurality of first initial images to obtain a three-dimensional model; projecting the three-dimensional model to obtain a target image on a virtual screen; and outputting the target image.
- Projecting the three-dimensional model to obtain a target image on a virtual screen includes: calculating a surface normal vector of a point cloud set; obtaining a randomly sampled point cloud by applying a random algorithm to the point cloud set; generating a target space voxel grid according to the surface normal vector and the randomly sampled point cloud; performing differentiable rendering on the target space voxel grid to obtain a color value of each pixel in the target space voxel grid; and projecting the color value of each pixel in the target space voxel grid onto the virtual screen through a projection matrix to obtain the target image.
- A target space voxel grid is generated based on the surface normal vector and the randomly sampled point cloud, including: combining the point cloud set with the surface normal vector to generate a surface mesh; constructing an empty space voxel grid; projecting the surface mesh into the empty space voxel grid to obtain an initial space voxel grid; calculating a depth value of each voxel grid in the initial space voxel grid; and applying the depth values to the initial space voxel grid to obtain the target space voxel grid.
- The depth value of each voxel grid in the initial spatial voxel grid is calculated by a formula of the form

  $$d_{x,y,z} = \begin{cases} \dfrac{1}{n}\sum_{j=1}^{n} d_j, & n > 0 \\ f_{x,y,z}, & n = 0 \end{cases} \qquad (x,y,z) \in T$$

  where $d_{x,y,z}$ is the depth value of the voxel grid with coordinates $(x, y, z)$ in the initial spatial voxel grid, $n$ is the number of triangles projected from the surface mesh onto that voxel grid, $f_{x,y,z}$ is the default depth value calculated from the position of the voxel grid in space, $d_j$ is the depth value of the $j$-th triangle projected from the surface mesh onto the voxel grid, and $T$ is the set of coordinates of all voxel grids in the initial spatial voxel grid.
- Before outputting the target image, the three-dimensional model output method further includes: converting the coordinates of each pixel on the target image into a two-dimensional coordinate system.
- The three-dimensional model output method further includes: acquiring a second initial image; superimposing the second initial image with the target image to obtain a new target image; and outputting the new target image.
- A three-dimensional model output device is provided, including: a first acquisition module for acquiring multiple first initial images taken by cameras with different camera poses; a construction module for performing three-dimensional reconstruction based on the multiple first initial images to obtain a three-dimensional model; a conversion module for projecting the three-dimensional model to obtain a target image on a virtual screen; and a first output module for outputting the target image.
- A three-dimensional model output device is provided, including: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with each other through the communication bus; the memory is used to store at least one program, and the program enables the processor to execute any one of the three-dimensional model output methods described above.
- A computer-readable storage medium is provided, in which executable instructions are stored; the executable instructions enable a three-dimensional model output device to perform any one of the three-dimensional model output methods described above.
- FIG. 1 is a schematic flow chart of a three-dimensional model output method provided in an embodiment of the present application;
- FIG. 2 is a flow chart of the sub-steps included in step 120 and step 130 of the present application;
- FIG. 4 is a schematic flow chart of the steps before step 140 of the present application;
- FIG. 6 is a functional block diagram of a three-dimensional model output device provided in an embodiment of the present application;
- FIG. 7 is a schematic structural diagram of a three-dimensional model output device provided in an embodiment of the present application.
- 3D models can be applied to many aspects of people's daily lives. People can view pre-built 3D models on various mobile devices anytime and anywhere, and the rich information these models contain brings convenience to daily activities such as map navigation, furniture arrangement and online shopping.
- Step 120: Perform three-dimensional reconstruction based on the multiple first initial images to obtain a three-dimensional model.
- Step 130: Project the three-dimensional model to obtain a target image on a virtual screen.
- A plurality of first initial images taken by cameras with different camera poses may be obtained by aerial photography from a drone or by shooting with a mobile phone lens. Since the plurality of first initial images will be used for 3D reconstruction, when multiple target images are needed to generate a 3D model of the same target, the first initial images taken should contain at least part of the same target. In some application scenarios, a single image may be used to generate a separate 3D model; in that case, the multiple first initial images can be images of completely different targets whose boundaries are connected or approximately connected, and the multiple three-dimensional models generated in step 120 are then spliced together to obtain a complete three-dimensional model.
- The multiple first initial images should reflect information about the target object or area from multiple angles. When the camera poses of the cameras that shoot the first initial images differ, the shooting angles differ, which helps ensure that the acquired first initial images better reflect the target object or area from multiple angles.
- The multiple first initial images captured by cameras with different camera poses can be multiple images captured in sequence by the same camera device at different poses, or multiple images captured simultaneously by multiple camera devices deployed with different poses.
- Those skilled in the art should be able to adjust the shooting strategy according to actual needs to obtain multiple first initial images that are convenient for three-dimensional reconstruction.
- the embodiment of the present application does not specifically limit the acquisition order and specific shooting method of the first initial images.
- The method of performing three-dimensional reconstruction based on the multiple first initial images can be, for example, to generate a point cloud from the multiple first initial images and then process the point cloud to finally generate a three-dimensional model.
- Other similar methods or algorithms can also be used for three-dimensional reconstruction; the purpose is to restore the depth information in the multiple first initial images as accurately as possible, so that the reconstructed three-dimensional model matches the real scene or the required target shape as closely as possible.
- The output target of the target image may be, for example, a display screen, a mobile phone, or AR glasses.
- The virtual screen in step 130 may be made larger, for example covering a wide angular area surrounding the three-dimensional model, so that the target image projected from the three-dimensional model can reflect more information about the model.
- Step 121: Generate a point cloud set according to the plurality of first initial images.
- Step 122: Construct a three-dimensional model based on the point cloud set.
- Step 130 includes:
- Step 131: Calculate the surface normal vector of the point cloud set;
- Step 132: Use a random algorithm on the point cloud set to obtain a randomly sampled point cloud;
- Step 133: Generate a target space voxel grid according to the surface normal vector and the randomly sampled point cloud;
- Step 134: Perform differentiable rendering on the target space voxel grid to obtain the color value of each pixel in the target space voxel grid;
- Step 135: Project the color value of each pixel in the target space voxel grid onto the virtual screen through a projection matrix to obtain the target image.
- The point cloud set refers to a set composed of multiple three-dimensional point clouds.
- The point cloud set can be generated from the multiple first initial images by, for example, calculating features for each pixel of the first initial images, matching pixels across the multiple first initial images based on the poses at which the cameras shot them, and constructing the matched pixels in the same coordinate system to generate multiple three-dimensional point clouds that form the point cloud set.
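As an illustration only (the embodiments do not prescribe a specific reconstruction algorithm), a matched pixel pair can be lifted to a three-dimensional point by linear (DLT) triangulation, assuming the two cameras' 3×4 projection matrices are known. The Python/NumPy sketch below is hypothetical, not the patent's implementation:

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Triangulate one 3D point from a pixel match (uv1, uv2) observed by
    two cameras with known 3x4 projection matrices P1 and P2 (DLT method)."""
    u1, v1 = uv1
    u2, v2 = uv2
    # Each observation contributes two linear constraints on the homogeneous point X.
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize to (x, y, z)
```

Running this over all matched pixel pairs would yield the three-dimensional points that make up the point cloud set.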
- In steps 131 to 133, the surface normal vector of the point cloud set is calculated, a random search is performed around the point cloud set to obtain a randomly sampled point cloud, and the randomly sampled point cloud is then combined with the surface normal vector to obtain the target space voxel grid.
- The purpose of calculating the surface normal vector of the point cloud set is to enable the target space voxel grid generated in step 133 to contain lighting and shadow data, improving the quality of the final three-dimensional model.
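The embodiments do not fix how the surface normal vector is computed; one common choice, assumed here purely as a sketch, is local principal component analysis, where the eigenvector of each point's neighborhood covariance with the smallest eigenvalue approximates the surface normal:

```python
import numpy as np

def estimate_normals(points, k=16):
    """Estimate a unit normal per point via PCA over its k nearest neighbors.
    points: (N, 3) array; returns (N, 3) unit normals. Brute-force neighbor
    search, suitable only for small clouds."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    normals = np.empty_like(points)
    for i in range(len(points)):
        nbrs = points[np.argsort(dists[i])[:k]]   # k nearest points (incl. self)
        cov = np.cov(nbrs.T)                      # 3x3 local covariance
        eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
        normals[i] = eigvecs[:, 0]                # flattest direction ~ normal
    return normals
```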
- The purpose of computing the randomly sampled point cloud is to reduce the density of the point cloud while keeping its overall geometric features unchanged, thereby reducing the amount of data to be processed and increasing calculation speed.
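A minimal sketch of such density reduction, assuming plain uniform sampling (the patent only says "a random algorithm"):

```python
import numpy as np

def random_downsample(points, keep_ratio=0.25, seed=0):
    """Uniformly subsample a point cloud; coarse geometry is preserved
    because every region is thinned at the same expected rate."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(len(points) * keep_ratio))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]
```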
- The target space voxel grid is then differentiably rendered to obtain the color value of each pixel in the target space voxel grid, where differentiable rendering refers to a rendering process whose output can be differentiated with respect to its inputs.
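The patent does not specify the renderer. As a sketch of the kind of computation a differentiable renderer performs, the emission-absorption compositing below accumulates a pixel color from density and color samples along one ray; every operation is smooth, so the result can be differentiated with respect to the per-voxel quantities. The names and the sampling scheme are assumptions:

```python
import numpy as np

def composite_ray(densities, colors, step=1.0):
    """Emission-absorption compositing of one ray through a voxel grid.
    densities: (S,) non-negative samples along the ray; colors: (S, 3)."""
    alpha = 1.0 - np.exp(-densities * step)                        # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]  # transmittance so far
    weights = alpha * trans                                        # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                 # final pixel color
```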
- Step 1331: Combine the point cloud set with the surface normal vector to generate a surface mesh;
- Step 1332: Construct an empty space voxel grid;
- Step 1333: Project the surface mesh into the empty space voxel grid to obtain an initial space voxel grid;
- Step 1334: Calculate the depth value of each voxel grid in the initial spatial voxel grid;
- Step 1335: Apply the depth values to the initial spatial voxel grid to obtain the target spatial voxel grid.
- In steps 1331 to 1332, the point cloud set is first combined with the surface normal vector to generate a surface mesh without depth value data, and an empty spatial voxel grid is constructed.
- In steps 1333 to 1334, each triangular face in the surface mesh is projected into the empty spatial voxel grid, and the voxel grids of the resulting initial spatial voxel grid that are covered by projected faces are marked as covered.
- For the triangular faces projected onto the marked voxel grids, the depth values of their center points are calculated, and each calculated depth value is used as the depth value of the corresponding triangular face projected onto the marked voxel grid.
- For the unmarked voxel grids, their depth values can be calculated based on their positions in space.
- In this way, a method for generating a target space voxel grid is provided: a surface mesh generated by combining the point cloud set with the surface normal vector is projected into an empty space voxel grid to obtain an initial space voxel grid, the depth value of each voxel grid in the initial space voxel grid is calculated, and the depth values are applied to the initial space voxel grid to obtain a target space voxel grid for generating the three-dimensional model.
- In step 1334, the depth value of each voxel grid in the initial spatial voxel grid is calculated by a formula of the form

  $$d_{x,y,z} = \begin{cases} \dfrac{1}{n}\sum_{j=1}^{n} d_j, & n > 0 \\ f_{x,y,z}, & n = 0 \end{cases} \qquad (x,y,z) \in T$$

  where $d_{x,y,z}$ is the depth value of the voxel grid with coordinates $(x, y, z)$ in the initial spatial voxel grid, $n$ is the number of triangles projected from the surface mesh onto that voxel grid, $f_{x,y,z}$ is the default depth value calculated from the position of the voxel grid in space, $d_j$ is the depth value of the $j$-th triangle projected from the surface mesh onto the voxel grid, and $T$ is the set of coordinates of all voxel grids in the initial spatial voxel grid.
- The depth value of each voxel grid in the initial space voxel grid is calculated by the above formula and applied to the grid to obtain the target space voxel grid containing per-voxel depth information. This facilitates the subsequent reconstruction and rendering of the three-dimensional model, so that the generated model reflects the distance between each pixel and the projection viewpoint, making it more refined and easier for users to observe and view.
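A direct transcription of the formula above, assuming the projected-triangle depths have already been collected per voxel (the dictionary layout is illustrative, not from the patent):

```python
import numpy as np

def voxel_depths(tri_depths, default_depth):
    """Compute d[x, y, z] for every voxel: the mean depth of the n triangles
    projected onto the voxel when n > 0, otherwise the position-based
    default f[x, y, z].
    tri_depths: dict mapping (x, y, z) -> list of projected triangle depths.
    default_depth: array f of shape (X, Y, Z)."""
    d = np.array(default_depth, dtype=float, copy=True)  # start from f everywhere
    for (x, y, z), depths in tri_depths.items():
        if depths:                                       # n > 0: average triangle depths
            d[x, y, z] = sum(depths) / len(depths)
    return d
```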
- The color value of each pixel in the target space voxel grid is projected onto the virtual screen by a projection of the standard pinhole form

  $$\begin{pmatrix} u \\ v \\ w \end{pmatrix} = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}$$

  where $u$, $v$ and $w$ are the coordinates on the virtual screen of the pixel with coordinates $(X, Y, Z)$ in the space voxel grid, $f_x$ and $f_y$ are the focal lengths, and $c_x$ and $c_y$ are the coordinates of the center point of the image on the virtual screen.
- The projection coordinates on the virtual screen of the color value of each pixel in the target space voxel grid are calculated by the above formula, so that the projection positions of all pixels in the target space voxel grid are unified on the virtual screen and coordinate-axis confusion is less likely to occur.
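A sketch of this projection in NumPy, using the intrinsic matrix written out above with perspective division included (all names are illustrative):

```python
import numpy as np

def project_to_screen(points, fx, fy, cx, cy):
    """Project (N, 3) voxel-space points (X, Y, Z) onto the virtual screen.
    Returns (N, 2) screen coordinates (u / w, v / w)."""
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    uvw = points @ K.T                 # homogeneous screen coords (u, v, w)
    return uvw[:, :2] / uvw[:, 2:3]    # perspective division
```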
- For example, suppose the coordinates of several pixels are (3, 15, 2), (3, 16, 2), ..., (5, 3, 2). It is easy to see that the virtual screen is the plane Z = 2. Taking this plane as a plane coordinate system, the coordinates of these pixels after projection become (3, 15), (3, 16), ..., (5, 3).
- In this way, the pixel coordinate data of the target image obtained after projection onto the virtual screen is more compact, which facilitates outputting the target image.
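A small sketch of this conversion, assuming the projected coordinates share one constant axis as in the example above:

```python
import numpy as np

def to_plane_coords(coords):
    """Drop the first axis on which every point has the same value,
    e.g. [(3, 15, 2), (3, 16, 2)] -> [(3, 15), (3, 16)] for the Z = 2 plane."""
    coords = np.asarray(coords)
    for axis in range(coords.shape[1]):
        if np.all(coords[:, axis] == coords[0, axis]):  # constant axis found
            return np.delete(coords, axis, axis=1)
    return coords                                       # no constant axis: unchanged
```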
- FIG. 5 is a flow chart of the steps after step 140 of the present application. In some embodiments of the present application, the following steps are also included after step 140:
- Step 1410: Acquire a second initial image;
- Step 1420: Superimpose the second initial image with the target image to obtain a new target image;
- Step 1430: Output the new target image.
- In steps 1420 to 1430, the second initial image is superimposed on the target image to obtain a new target image.
- Specifically, the pixels in the second initial image are superimposed with the pixels in the target image.
- The superposition can use image processing techniques from computer vision, such as weighted averaging or transparency (alpha) blending of the two images' pixels, to obtain a new target image, which is then output to the display device.
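A minimal sketch of both superposition options mentioned above, assuming same-sized float RGB images:

```python
import numpy as np

def blend(second_image, target_image, alpha=0.5):
    """Superimpose the second initial image onto the target image.
    alpha = 0.5 gives the weighted average; other values give
    transparency (alpha) mixing. Both inputs: (H, W, 3) float arrays."""
    return alpha * second_image + (1.0 - alpha) * target_image
```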
- The construction module 602 further includes a first generation unit and a second generation unit.
- The first generation unit is used to generate a point cloud set according to the plurality of first initial images.
- The second generation unit is used to construct a three-dimensional model according to the point cloud set.
- The first conversion module 603 further includes a first calculation unit, a second calculation unit, a third generation unit, a rendering unit, and a projection unit.
- The first calculation unit is used to calculate the surface normal vector of the point cloud set.
- The second calculation unit is used to obtain a randomly sampled point cloud by applying a random algorithm to the point cloud set.
- The third generation unit is used to generate a target space voxel grid based on the surface normal vector and the randomly sampled point cloud.
- The rendering unit is used to perform differentiable rendering on the target space voxel grid to obtain the color value of each pixel in the target space voxel grid.
- The projection unit is used to project the color value of each pixel in the target space voxel grid onto a virtual screen through a projection matrix to obtain a target image.
- The third generation unit further includes a generation element, a construction element, a projection element, a calculation element, and a processing element.
- The generation element is used to combine the point cloud set with the surface normal vector to generate a surface mesh.
- The construction element is used to construct an empty space voxel grid.
- The projection element is used to project the surface mesh into the empty space voxel grid to obtain an initial space voxel grid.
- The calculation element is used to calculate the depth value of each voxel grid in the initial space voxel grid.
- The processing element is used to apply the depth values to the initial space voxel grid to obtain a target space voxel grid.
- The 3D model output device 600 further includes a second conversion module.
- The second conversion module is used to convert the coordinates of each pixel on the target image into a two-dimensional coordinate system.
- The three-dimensional model output device may include: a processor 702, a memory 706, a communication interface 704 and a communication bus 708.
- The program 710 may include program code comprising computer-executable instructions.
- The embodiment of the present application further provides a computer-readable storage medium in which executable instructions are stored.
- When the executable instructions are executed on a three-dimensional model output device, the three-dimensional model output device executes the three-dimensional model output method of any of the above embodiments.
- The executable instructions can specifically be used to enable the three-dimensional model output device to perform the following operations: obtain multiple first initial images taken by cameras with different camera poses; perform three-dimensional reconstruction based on the multiple first initial images to obtain a three-dimensional model; project the three-dimensional model to obtain a target image on a virtual screen; and output the target image.
- The modules in the devices in the embodiments can be adaptively changed and arranged in one or more devices different from those of the embodiments.
- The modules, units or components in the embodiments can be combined into one module, unit or component, or divided into multiple submodules, subunits or subassemblies. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying abstract and drawings) and all processes or units of any method or device so disclosed can be combined in any combination. Unless otherwise explicitly stated, each feature disclosed in this specification (including the accompanying abstract and drawings) can be replaced by an alternative feature serving the same, equivalent or similar purpose.
Abstract
Embodiments of the present application relate to the technical field of three-dimensional reconstruction, and provide a three-dimensional model output method, apparatus and device, and a computer-readable storage medium. The method comprises: acquiring a plurality of initial images of a target object or target area for three-dimensional reconstruction; building a three-dimensional model of the target object or target area; projecting the generated three-dimensional model onto a virtual screen; and outputting the image obtained by projection onto the virtual screen. In this way, the three-dimensional model is generated faster, technicians do not need to construct the model manually, the volume of output image data is smaller, and the output speed is higher.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311290411.4 | 2023-10-08 | ||
| CN202311290411.4A CN117036444A (zh) | 2023-10-08 | 2023-10-08 | 三维模型输出方法、装置、设备及计算机可读存储介质 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025077567A1 (fr) | 2025-04-17 |
Family
ID=88632217
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2024/120783 (WO2025077567A1, pending) | Three-dimensional model output method, apparatus and device, and computer-readable storage medium | 2023-10-08 | 2024-09-24 |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN117036444A (fr) |
| WO (1) | WO2025077567A1 (fr) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120318462A (zh) * | 2025-06-13 | 2025-07-15 | 柏意慧心(杭州)网络科技有限公司 | 基于聚类算法的夹层腔体分离方法、装置、设备和介质 |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117036444A (zh) * | 2023-10-08 | 2023-11-10 | 深圳市其域创新科技有限公司 | 三维模型输出方法、装置、设备及计算机可读存储介质 |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20060066611A1 (en) * | 2004-09-24 | 2006-03-30 | Konica Minolta Medical And Graphic, Inc. | Image processing device and program |
| CN110998669A (zh) * | 2017-08-08 | 2020-04-10 | 索尼公司 | 图像处理装置和方法 |
| CN112991458A (zh) * | 2021-03-09 | 2021-06-18 | 武汉大学 | 一种基于体素的快速三维建模方法及系统 |
| CN116018619A (zh) * | 2020-07-01 | 2023-04-25 | 索尼集团公司 | 信息处理装置、信息处理方法和程序 |
| CN116310224A (zh) * | 2023-05-09 | 2023-06-23 | 小视科技(江苏)股份有限公司 | 一种快速目标三维重建方法及装置 |
| CN117036444A (zh) * | 2023-10-08 | 2023-11-10 | 深圳市其域创新科技有限公司 | 三维模型输出方法、装置、设备及计算机可读存储介质 |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106228594B (zh) * | 2016-07-18 | 2018-11-09 | 中国人民解放军理工大学 | 基于曲面细分的台风模式云动画显示方法 |
| CN107564089B (zh) * | 2017-08-10 | 2022-03-01 | 腾讯科技(深圳)有限公司 | 三维图像处理方法、装置、存储介质和计算机设备 |
| CN108600607A (zh) * | 2018-03-13 | 2018-09-28 | 上海网罗电子科技有限公司 | 一种基于无人机的消防全景信息展示方法 |
| CN112365673B (zh) * | 2020-11-12 | 2022-08-02 | 光谷技术有限公司 | 一种林区火势监控系统和方法 |
| CN112435427B (zh) * | 2020-11-12 | 2022-05-13 | 光谷技术有限公司 | 一种森林火灾监测系统和方法 |
| CN113808277B (zh) * | 2021-11-05 | 2023-07-18 | 腾讯科技(深圳)有限公司 | 一种图像处理方法及相关装置 |
| CN114047823B (zh) * | 2021-11-26 | 2024-06-11 | 贝壳找房(北京)科技有限公司 | 三维模型展示方法、计算机可读存储介质及电子设备 |
- 2023-10-08: CN application CN202311290411.4 filed; patent CN117036444A pending.
- 2024-09-24: PCT application PCT/CN2024/120783 filed (published as WO2025077567A1); pending.
Also Published As
| Publication number | Publication date |
|---|---|
| CN117036444A (zh) | 2023-11-10 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24876385; Country of ref document: EP; Kind code of ref document: A1 |