WO2025126343A1 - Information processing device and information processing method - Google Patents
Information processing device and information processing method
- Publication number
- WO2025126343A1 WO2025126343A1 PCT/JP2023/044521 JP2023044521W WO2025126343A1 WO 2025126343 A1 WO2025126343 A1 WO 2025126343A1 JP 2023044521 W JP2023044521 W JP 2023044521W WO 2025126343 A1 WO2025126343 A1 WO 2025126343A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- medical image
- inference
- subject
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/05—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
- A61B5/055—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
Definitions
- medical images refer to images of internal parts of the human body (e.g., organs such as the brain, lungs, heart, stomach, intestines, and kidneys), obtained either by actual imaging or by inference, for the purpose of diagnosing and treating human illnesses.
- Patent Document 1 proposes a technology that predicts future brain images by applying specified image processing (erosion processing) to the patient's current brain images.
- the present disclosure therefore aims to accurately predict medical images of human parts.
- the information processing device includes a model generation unit that generates a model by deep learning based on medical image information of a part of a person's body and background information on the person including age information at the time the part was imaged, an inference unit that infers a medical image of the part of the subject at a target age based on the model generated by the model generation unit, medical images of the part of the subject to be inferred, and background information on the subject including age information at the time the part was imaged and target age information for inference, and an output unit that outputs the medical image of the part of the subject obtained by inference by the inference unit.
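The three functional units described above can be sketched as follows. This is an illustrative pure-Python sketch of the data flow only; the class name, function signatures, and toy stand-in implementations are assumptions for clarity, not the patent's actual implementation.

```python
class InformationProcessingDevice:
    """Sketch of the device: a model generation unit, an inference unit,
    and an output unit, wired together as in the disclosure."""

    def __init__(self, train_model, infer, output):
        self.train_model = train_model  # model generation unit (11)
        self.infer = infer              # inference unit (13)
        self.output = output            # output unit (14)

    def run(self, training_images, training_background,
            subject_image, subject_background):
        # generate the model from learning images + background information
        model = self.train_model(training_images, training_background)
        # infer the subject's image at the target age
        inferred = self.infer(model, subject_image, subject_background)
        # output (display, print, or transmit) the inferred image
        return self.output(inferred)

# Toy stand-ins: the "model" is just an offset learned from the ages.
device = InformationProcessingDevice(
    train_model=lambda imgs, bg: sum(bg) / len(bg),
    infer=lambda model, img, bg: [p + model for p in img],
    output=lambda img: img,
)
result = device.run([[1, 2]], [40, 60], [10, 20], [55, 70])
```

In a real system each lambda would be replaced by the deep-learning components described later (U-Net or ResNet generators trained with the CycleGAN-style objective).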
- This disclosure makes it possible to accurately predict medical images of human body parts.
- the medical image to be inferred is a magnetic resonance image (i.e., a Magnetic Resonance Imaging image (hereinafter referred to as an "MRI image")) of a human brain.
- the model generation unit 11 is a functional unit that acquires information on medical images of various human body parts (here, MRI images of the brain (hereinafter referred to as "brain MRI images")) and background information including age information at the time the MRI images were taken, and generates a model M by deep learning based on the acquired brain MRI image information and background information.
- the "background information" refers to various information on the person whose brain MRI image was taken, including age information at the time the brain MRI image was taken, and may also include other attribute information of the person (for example, current age information, gender information, information on physique, information on medical history, information on lifestyle habits, etc.).
- the model generation unit 11 acquires image information of the brain MRI image as learning image information and the background information as learning background information, and in deep learning generates (constructs) a model M so as to internally solve a regression problem of determining the age of the brain shown in the inferred image, using the learning image information and learning background information as inputs.
- the model generation unit 11 of this embodiment has the first feature described below of incorporating the regression loss into the overall loss.
- the above learning image information and learning background information may be acquired from a learning information storage unit provided within the information processing device 10, in which this information is stored in advance, or, instead of providing the learning information storage unit, from an external server in which the above information is stored in advance.
- the model generation unit 11 of this embodiment has a second feature described later in which background information including age information is reflected in the generation of a model at a timing after the learning image information is compressed by the encoder and before the decoder starts restoring the learning image information (a timing also called the bottleneck part of U-Net) during the processing process in U-Net.
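The second feature, injecting background information at the U-Net bottleneck, can be sketched as follows. This is a toy pure-Python illustration in which lists stand in for tensors and simple lambdas stand in for the convolutional encoder and decoder; the function names are assumptions for illustration, not the patent's implementation.

```python
def unet_with_bottleneck_conditioning(image_features, background_info, encoder, decoder):
    """Sketch of the second feature: background information (e.g. age at
    imaging, target age) is combined with the compressed representation
    at the U-Net bottleneck, after the encoder compresses the learning
    image and before the decoder starts restoring it."""
    latent = encoder(image_features)        # compress the learning image
    conditioned = latent + background_info  # list concatenation stands in for channel-wise concat
    return decoder(conditioned)             # restore an image from the conditioned latent

# Toy encoder/decoder on plain lists, standing in for convolutional stages.
encoder = lambda x: [sum(x) / len(x)]       # "compress" to one feature
decoder = lambda z: [v * 2 for v in z]      # "restore" by expanding

out = unet_with_bottleneck_conditioning([1.0, 3.0], [45.0, 70.0], encoder, decoder)
```

The point illustrated is only the timing: the conditioning happens at the bottleneck, so the decoder's entire restoration is influenced by the age and other background information.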
- the generator in deep learning is not limited to the above U-Net, and for example, ResNet (Residual Neural Networks) may also be used.
- the trained model storage unit 12 is a functional unit that stores the model M generated by the model generation unit 11.
- the inference unit 13 is a functional unit that infers a medical image (here, a brain MRI image) of a part of the subject at a target age based on the model M generated by the model generation unit 11, a medical image (here, a brain MRI image) of a part of the subject to be inferred, and background information about the subject including age information at the time the medical image was captured and target age information for inference, and further has a function of generating an image (difference image) showing the difference between the medical image at the target age obtained by inference and the medical image at the age at the time of capture (i.e., the medical image used as the basis for inference).
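The difference image described above can be illustrated as a per-pixel subtraction between the inferred target-age image and the image taken at the age of capture. This sketch uses nested lists in place of actual MRI image data; the function name is an assumption for illustration.

```python
def difference_image(inferred, original):
    """Pixel-wise difference between the inferred target-age image and
    the image taken at the age of capture (the basis of the inference).
    Nonzero entries mark where the two images diverge, e.g. atrophy."""
    return [[a - b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(inferred, original)]

# Two tiny 2x2 "images": the inferred future image vs. the current one.
diff = difference_image([[10, 12], [8, 9]], [[11, 12], [9, 7]])
```

In practice the difference would typically be visualized (e.g. as a heat map) so the subject can see the predicted change at a glance.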
- the output unit 14 is a functional unit that outputs medical images (brain MRI images in this case) and difference images of the subject's body parts obtained by inference and generation by the inference unit 13. Note that "output" can take various forms, such as display output, print output, and data transmission outside the information processing device 10.
- the model generation unit 11 uses the acquired learning image information and learning background information as inputs to generate (construct) a model M that internally solves the regression problem of determining the age of the brain shown in the inferred image. At this time, the model generation unit 11 incorporates the regression loss into the overall loss (first feature).
- the algorithm used in the deep learning here is CycleGAN, shown in Figure 4, which is a type of GAN (Generative Adversarial Network) and an unsupervised learning method. CycleGAN generates an image and then converts the generated image back into the original image; in this case, instead of only predicting whether the generated image is real, as in ordinary GANs, it learns to make the converted image match the original input image.
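The cycle-consistency idea described above can be sketched in a few lines. This is a toy illustration on plain lists of numbers (the real method operates on brain MRI images); the generator pair and function names are assumptions for illustration.

```python
def cycle_consistency_loss(x, g_forward, g_backward):
    """CycleGAN-style cycle loss: translate x to the other domain,
    translate it back, and penalize the L1 distance between the
    reconstruction and the original input."""
    reconstructed = g_backward(g_forward(x))
    return sum(abs(a - b) for a, b in zip(x, reconstructed))

# Toy generator pair: an imperfect inverse, so the loss is nonzero.
g_fwd = lambda xs: [v + 1.0 for v in xs]  # "age" each value
g_bwd = lambda xs: [v - 0.9 for v in xs]  # imperfectly undo the aging

loss = cycle_consistency_loss([0.0, 2.0], g_fwd, g_bwd)
```

Training drives this loss toward zero, which is what replaces the plain "real or fake" objective of an ordinary GAN.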
- the first feature, that is, incorporating the regression loss into the overall loss, is expressed by the following equation (1), which is also shown at the top of Figure 4.
- L = L_GAN + λ_cycle · L_cycle + λ_idt · L_idt + λ_reg · L_reg (1)
- that is, the overall loss L is the sum of the normal GAN loss L_GAN, the cycle loss L_cycle for making the re-transformed image match the original input image, the identity loss L_idt for returning an identity map when an image of the same domain is input, and, as shown in equation (1), the additionally incorporated regression loss term (λ_reg · L_reg).
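Equation (1) can be sketched numerically as follows. The weight values and loss values below are arbitrary placeholders for illustration, not the values used in the patent.

```python
def total_loss(l_gan, l_cycle, l_idt, l_reg,
               lam_cycle=10.0, lam_idt=5.0, lam_reg=1.0):
    """Equation (1): the regression loss term lam_reg * l_reg is added
    on top of the usual CycleGAN objective (the first feature)."""
    return l_gan + lam_cycle * l_cycle + lam_idt * l_idt + lam_reg * l_reg

# Placeholder per-term loss values for one training step.
L = total_loss(l_gan=0.5, l_cycle=0.02, l_idt=0.01, l_reg=0.3)
```

Because the regression term shares the same gradient signal as the image losses, minimizing L simultaneously improves image realism and the model's ability to judge the age of a brain image.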
- the first feature above (incorporating regression loss into the overall loss) allows the generated model M to more accurately determine the age of a given brain MRI image as learning progresses.
- the generated model M can acquire knowledge for solving the problem.
- the accuracy of inferring brain MRI images of any target age can be improved.
- the subject can visually recognize the brain MRI image of the subject at the target age obtained by inference.
- the subject can also visually recognize a difference image showing the difference between the brain MRI image at the age at the time of imaging that was the basis of the inference and the brain MRI image at the target age. This allows the subject to clearly recognize, for example, the state of brain atrophy, which can be linked to behavioral changes such as improving lifestyle habits.
- [4] The information processing device according to any one of [1] to [3], wherein, in the deep learning, the model generation unit generates the model so as to solve a regression problem of determining the age of the person in the medical image of the part of the person obtained by inference, and incorporates a loss of the regression into an overall loss.
- the medical image is a magnetic resonance image of a brain of the person or the subject.
- a step of inferring, by the information processing device, a medical image of the part of the subject at the target age based on the generated model, a medical image of the part of the subject to be inferred, and background information on the subject including age information at the time of imaging of the part and target age information for inference; a step of outputting, by the information processing device, the medical image of the part of the subject obtained by the inference;
- An information processing method comprising:
- each functional block may be realized using one device that is physically or logically coupled, or may be realized using two or more devices that are physically or logically separated and directly or indirectly connected (for example, using wires, wirelessly, etc.).
- the functional blocks may be realized by combining the one device or the multiple devices with software.
- an information processing device in an embodiment of the present disclosure may function as a computer that executes the processing of the present disclosure.
- FIG. 5 is a diagram showing an example of the hardware configuration of an information processing device 10 according to an embodiment of the present disclosure.
- the information processing device 10 described above may be physically configured as a computer device including a processor 1001, memory 1002, storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007, etc.
- the word “apparatus” can be interpreted as a circuit, device, unit, etc.
- the hardware configuration of the information processing device 10 may be configured to include one or more of the devices shown in the figure, or may be configured to exclude some of the devices.
- Each function of the information processing device 10 is realized by loading a specific software (program) onto hardware such as the processor 1001 and memory 1002, causing the processor 1001 to perform calculations, control communications via the communication device 1004, and control at least one of the reading and writing of data in the memory 1002 and storage 1003.
- the processor 1001 for example, runs an operating system to control the entire computer.
- the processor 1001 may be configured as a central processing unit (CPU) that includes an interface with peripheral devices, a control device, an arithmetic unit, registers, etc.
- the processor 1001 also reads out programs (program codes), software modules, data, etc. from at least one of the storage 1003 and the communication device 1004 into the memory 1002, and executes various processes according to these.
- the programs used are those that cause a computer to execute at least some of the operations described in the above-mentioned embodiments. Although it has been described that the various processes are executed by one processor 1001, they may be executed simultaneously or sequentially by two or more processors 1001.
- the processor 1001 may be implemented by one or more chips.
- the programs may be transmitted from a network via a telecommunications line.
- Memory 1002 is a computer-readable recording medium, and may be composed of at least one of, for example, ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), RAM (Random Access Memory), etc. Memory 1002 may also be called a register, cache, main memory (primary storage device), etc. Memory 1002 can store executable programs (program codes), software modules, etc. for implementing a wireless communication method according to one embodiment of the present disclosure.
- Storage 1003 is a computer-readable recording medium, and may be, for example, at least one of an optical disk such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (e.g., a compact disk, a digital versatile disk, a Blu-ray (registered trademark) disk), a smart card, a flash memory (e.g., a card, a stick, a key drive), a floppy (registered trademark) disk, a magnetic strip, etc.
- Storage 1003 may also be referred to as an auxiliary storage device.
- the above-mentioned storage medium may be, for example, a database, a server, or other suitable medium including at least one of memory 1002 and storage 1003.
- the communication device 1004 is hardware (transmitting/receiving device) for communicating between computers via at least one of a wired network and a wireless network, and is also referred to as, for example, a network device, a network controller, a network card, a communication module, etc.
- the communication device 1004 may be configured to include a high-frequency switch, a duplexer, a filter, a frequency synthesizer, etc., to realize, for example, at least one of Frequency Division Duplex (FDD) and Time Division Duplex (TDD).
- the input device 1005 is an input device (e.g., a keyboard, a mouse, a microphone, a switch, a button, a sensor, etc.) that accepts input from the outside.
- the output device 1006 is an output device (e.g., a display, a speaker, an LED lamp, etc.) that performs output to the outside. Note that the input device 1005 and the output device 1006 may be integrated into one structure (e.g., a touch panel).
- each device such as the processor 1001 and memory 1002 is connected by a bus 1007 for communicating information.
- the bus 1007 may be configured using a single bus, or may be configured using different buses between each device.
- the information processing device 10 may be configured to include hardware such as a microprocessor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a programmable logic device (PLD), or a field programmable gate array (FPGA), and some or all of the functional blocks may be realized by the hardware.
- the processor 1001 may be implemented using at least one of these pieces of hardware.
- the notification of information is not limited to the aspects/embodiments described in this disclosure, and may be performed using other methods.
- the notification of information may be performed by physical layer signaling (e.g., DCI (Downlink Control Information), UCI (Uplink Control Information)), higher layer signaling (e.g., RRC (Radio Resource Control) signaling, MAC (Medium Access Control) signaling, broadcast information (MIB (Master Information Block), SIB (System Information Block))), other signals, or a combination of these.
- RRC signaling may be referred to as an RRC message, and may be, for example, an RRC Connection Setup message, an RRC Connection Reconfiguration message, etc.
- Each aspect/embodiment described in this disclosure may be applied to a mobile communication system for mobile communications over a wide range of networks, including LTE (Long Term Evolution), LTE-A (LTE-Advanced), SUPER 3G, IMT-Advanced, 4G (4th generation mobile communication system), 5G (5th generation mobile communication system), 6G (6th generation mobile communication system), xG (xth generation mobile communication system, where x is, for example, an integer or a decimal number), and FRA (Future Radio Access).
- the present invention may also be applied to at least one of IEEE 802.11 (Wi-Fi (registered trademark)), IEEE 802.16 (WiMAX (registered trademark)), IEEE 802.20, UWB (Ultra-Wide Band), Bluetooth (registered trademark), NR (New Radio), NX (New radio access), FX (Future generation radio access), W-CDMA (registered trademark), GSM (registered trademark), CDMA2000, UMB (Ultra Mobile Broadband), and next-generation systems that are expanded, modified, created, or defined based on these.
- the present invention may be applied to a combination of multiple systems (for example, a combination of at least one of LTE and LTE-A with 5G, etc.).
- the input and output information may be stored in a specific location (e.g., memory) or may be managed using a management table.
- the input and output information may be overwritten, updated, or added to.
- the output information may be deleted.
- the input information may be sent to another device.
- the determination may be based on a value represented by one bit (0 or 1), a Boolean value (true or false), or a numerical comparison (e.g., a comparison with a predetermined value).
- notification of specific information is not limited to being done explicitly, but may be done implicitly (e.g., not notifying the specific information).
- Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
- Software, instructions, information, etc. may also be transmitted and received via a transmission medium.
- For example, if the software is transmitted from a website, server, or other remote source using at least one of wired technologies (such as coaxial cable, fiber optic cable, twisted pair, or Digital Subscriber Line (DSL)) and wireless technologies (such as infrared or microwave), then at least one of these wired and wireless technologies is included within the definition of a transmission medium.
- the information, signals, etc. described in this disclosure may be represented using any of a variety of different technologies.
- the data, instructions, commands, information, signals, bits, symbols, chips, etc. that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
- At least one of the channel and the symbol may be a signal (signaling).
- the signal may be a message.
- a component carrier (CC) may be called a carrier frequency, a cell, a frequency carrier, etc.
- "system" and "network" are used interchangeably.
- radio resources may be indicated by an index.
- the names used for the above-mentioned parameters are not limiting in any respect. Furthermore, the formulas etc. using these parameters may differ from those explicitly disclosed in this disclosure.
- the various channels (e.g., PUCCH, PDCCH, etc.) and information elements may be identified by any suitable names, and therefore the various names assigned to these various channels and information elements are not limiting in any respect.
- "determining" may encompass a wide variety of actions.
- "determining" may include, for example, considering judging, calculating, computing, processing, deriving, investigating, looking up, searching, or inquiring (e.g., searching in a table, database, or other data structure), and ascertaining to be "determining."
- "determining" may also include considering receiving (e.g., receiving information), transmitting (e.g., sending information), input, output, and accessing (e.g., accessing data in memory) to be "determining."
- "determining" may further include considering resolving, selecting, choosing, establishing, comparing, etc., to be "determined." In other words, "determining" may include considering some action to be "determined." Additionally, "determining (deciding)" may be interpreted as "assuming," "expecting," "considering," etc.
- the phrase “based on” does not mean “based only on,” unless expressly stated otherwise. In other words, the phrase “based on” means both “based only on” and “based at least on.”
- any reference to an element using a designation such as "first,” “second,” etc., used in this disclosure does not generally limit the quantity or order of those elements. These designations may be used in this disclosure as a convenient method of distinguishing between two or more elements. Thus, a reference to a first and a second element does not imply that only two elements may be employed or that the first element must precede the second element in some way.
- "A and B are different" may mean "A and B are different from each other."
- the term may also mean “A and B are each different from C.”
- Terms such as “separate” and “combined” may also be interpreted in the same way as “different.”
- 10: Information processing device, 11: Model generation unit, 12: Trained model storage unit, 13: Inference unit, 14: Output unit, M: Model, 1001: Processor, 1002: Memory, 1003: Storage, 1004: Communication device, 1005: Input device, 1006: Output device, 1007: Bus.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Engineering & Computer Science (AREA)
- Medical Informatics (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- High Energy & Nuclear Physics (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Radiology & Medical Imaging (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to an information processing device (10) comprising: a model generation unit (11) that generates a model by deep learning based on medical image information obtained by imaging a part of a person's body and background information on the person, including age information at the time the part was imaged; an inference unit (13) that infers a medical image of the part of a subject at a target age based on the generated model, a medical image obtained by imaging the aforementioned part of the subject to be inferred, and background information on the subject, including age information at the time the part was imaged and target age information for the inference; and an output unit (14) that outputs the medical image of the part of the subject obtained by the inference.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2023/044521 WO2025126343A1 (fr) | 2023-12-12 | 2023-12-12 | Information processing device and information processing method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025126343A1 true WO2025126343A1 (fr) | 2025-06-19 |
Family
ID=96056706
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2023/044521 Pending WO2025126343A1 (fr) | 2023-12-12 | 2023-12-12 | Information processing device and information processing method |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025126343A1 (fr) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2017023457A (ja) * | 2015-07-23 | 2017-02-02 | 東芝メディカルシステムズ株式会社 | 医用画像処理装置 |
| US20190130565A1 (en) * | 2017-10-26 | 2019-05-02 | Samsung Electronics Co., Ltd. | Method of processing medical image, and medical image processing apparatus performing the method |
| JP2019211307A (ja) * | 2018-06-04 | 2019-12-12 | 浜松ホトニクス株式会社 | 断層画像予測装置および断層画像予測方法 |
| JP2020168200A (ja) * | 2019-04-03 | 2020-10-15 | キヤノンメディカルシステムズ株式会社 | 医用画像処理装置 |
| CN113171075A (zh) * | 2021-04-16 | 2021-07-27 | 东北大学 | 基于深度生成模型的神经退行性疾病脑影像生成预测方法 |
- 2023-12-12: WO PCT/JP2023/044521, patent WO2025126343A1 (fr), active, Pending
Non-Patent Citations (1)
| Title |
|---|
| GADEWAR, SHRUTI ET AL.: "Predicting individual brain MRIs at any age using style encoding generative adversarial networks", PROCEEDINGS OF SPIE, vol. 12567, 6 March 2023 (2023-03-06), XP060174071, DOI: 10.1117/12.2669741 * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23961410 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2025563100 Country of ref document: JP Kind code of ref document: A |