WO2025197661A1 - Periodontal disease detection system, periodontal disease detection method, and program - Google Patents
- Publication number: WO2025197661A1 (PCT/JP2025/008921)
- Authority: WIPO (PCT)
- Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- This disclosure relates to a periodontal disease detection system, a periodontal disease detection method, and a program.
- Patent Document 1 discloses a system that, when a photo of the oral cavity region is taken with the camera on a mobile device, extracts image data of an oral cavity target region to be estimated from the image data of the oral cavity region, and estimates the condition of the oral cavity target region based on the image data of the extracted oral cavity target region.
- However, there is room for improvement in the accuracy of detecting oral diseases such as periodontal disease using images.
- the present disclosure therefore provides a periodontal disease detection system, periodontal disease detection method, and program that can improve the accuracy of periodontal disease detection using images.
- a periodontal disease detection system is a system for detecting periodontal disease in a user based on an image captured inside the oral cavity, and includes: a first acquisition unit that irradiates light into the user's oral cavity to acquire a first RGB image of an image capture area including a specific tooth and the periodontal region of the specific tooth including the gums; a second acquisition unit that acquires reference color data that indicates the color standard of natural teeth included in the image capture area and is user-specific; an image processing unit that generates a second RGB image in which the gains of at least two of the red, green, and blue color components that make up the natural tooth region in the first RGB image are adjusted based on the reference color data; and a periodontal disease detection unit that detects periodontal disease in the user based on image data derived from the second RGB image.
- a periodontal disease detection method is a periodontal disease detection method executed by a periodontal disease detection system that detects periodontal disease in a user based on an image captured inside the oral cavity, the method including irradiating light into the user's oral cavity to obtain a first RGB image capturing an image of an imaged region including a specific tooth and the periodontal region of the specific tooth including the gums, obtaining reference color data that indicates the color standard of natural teeth included in the imaged region and that is specific to the user, generating a second RGB image in which the gains of at least two of the red, green, and blue color components that make up the natural tooth region in the first RGB image are adjusted based on the reference color data, and detecting periodontal disease in the user based on image data derived from the second RGB image.
- a periodontal disease detection method is a periodontal disease detection method executed by one or more processors, wherein the one or more processors irradiate light into the oral cavity of a user, output a first RGB image capturing a capture area including a specific tooth and the periodontal region of the specific tooth including the gums, obtain a detection result of the user's periodontal disease based on the first RGB image, and perform predetermined processing on the obtained detection result, wherein the obtained detection result includes a detection result of the user's periodontal disease detected based on image data derived from a second RGB image, the second RGB image being generated by adjusting, based on reference color data corresponding to the user, the gains of at least two of the red, green, and blue color components constituting the natural tooth region in the first RGB image, the reference color data indicating the standard color of the natural tooth included in the capture area.
- a program according to one aspect of the present disclosure is a program for causing a computer to execute the periodontal disease detection method described above.
- FIG. 1 is a perspective view of an intraoral camera in a periodontal disease detection system according to a first embodiment.
- FIG. 2 is a schematic diagram of the periodontal disease detection system according to the first embodiment.
- FIG. 3 is a block diagram showing a functional configuration of the mobile terminal according to the first embodiment.
- FIG. 4 is a flowchart showing the operation of the periodontal disease detection system according to the first embodiment.
- FIG. 5 is a first diagram for explaining the periodontal region in the oral cavity according to the first embodiment.
- FIG. 6 is a second diagram for explaining the periodontal region in the oral cavity according to the first embodiment.
- FIG. 7A is a diagram for explaining image data extracted by an image data extraction unit according to the first embodiment.
- FIG. 7B is a diagram showing a specific example of a method for forming a rectangular region according to the first embodiment.
- FIG. 8A is a diagram illustrating an example of a first group rule base according to the first modification of the first embodiment.
- FIG. 8B is a diagram illustrating an example of a second group rule base according to the first modification of the first embodiment.
- FIG. 8C is a diagram illustrating an example of a third group rule base according to the first modification of the first embodiment.
- FIG. 9 is a diagram illustrating learning data according to the second modification of the first embodiment.
- FIG. 10 is a block diagram showing a functional configuration of a mobile terminal according to the second embodiment.
- FIG. 11 is a diagram for explaining a plurality of regions in the oral cavity according to the second embodiment.
- FIG. 12 is a diagram showing an example of an image obtained by capturing images of a plurality of regions according to the second embodiment.
- FIG. 13 is a diagram showing an example of an image used for learning the learning model according to the second embodiment.
- FIG. 14 is a flowchart showing the operation of the periodontal disease detection system according to the second embodiment.
- FIG. 15 is a flowchart showing the operation of the periodontal disease detection system according to the modified example of the second embodiment.
- FIG. 16 is a diagram showing the angle of the tip of the interdental papilla gingiva according to a modification of the second embodiment.
- FIG. 17A is a first diagram illustrating a method for calculating the angle of the tip of the interdental papilla gingiva according to a modified example of the second embodiment.
- FIG. 17B is a second diagram illustrating a method for calculating the angle of the tip of the interdental papilla gingiva according to a modified example of the second embodiment.
- FIG. 17C is a third diagram illustrating a method for calculating the angle of the tip of the interdental papilla gingiva according to a modified example of the second embodiment.
- FIG. 18A is a first diagram showing the radius of the tip of the interdental papilla gingiva according to a modified example of the second embodiment.
- FIG. 18B is a second diagram showing the radius of the tip of the interdental papilla gingiva according to a modified example of the second embodiment.
- a process may be performed to render the teeth as a white region by adjusting the white balance gains so that the average values of R, G, and B within the tooth region become equal. This process effectively highlights plaque.
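As a sketch of how such a white-balance step could be implemented (the function name and the use of NumPy are illustrative assumptions, not details from the disclosure):

```python
import numpy as np

def equalize_tooth_white_balance(rgb, tooth_mask):
    """Adjust per-channel gains so that the mean R, G, and B values inside
    the tooth region become equal, rendering natural teeth as a white region
    and making reddish plaque fluorescence stand out."""
    img = rgb.astype(np.float64)
    means = img[tooth_mask].mean(axis=0)   # mean (R, G, B) over tooth pixels
    target = means.mean()                  # common level all channels move to
    gains = target / means                 # per-channel white balance gain
    balanced = np.clip(img * gains, 0, 255)
    return balanced.astype(np.uint8), gains
```

Because only the gains change, gum-region pixels keep their relative color differences, which is what makes plaque and inflammation easier to distinguish after correction.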
- The inventors have studied periodontal disease detection systems that can improve the accuracy of periodontal disease detection using images, and in particular systems that can generate images more suitable for accurately detecting periodontal disease, and have devised the periodontal disease detection system described below.
- a periodontal disease detection system is a system for detecting periodontal disease in a user based on an image captured inside the oral cavity, and includes: a first acquisition unit that irradiates light into the user's oral cavity to acquire a first RGB image of an image capture area including a specific tooth and the periodontal region of the specific tooth including the gums; a second acquisition unit that acquires reference color data that indicates the color standard of natural teeth included in the image capture area and is user-specific; an image processing unit that generates a second RGB image in which the gains of at least two of the red, green, and blue color components that make up the natural tooth region in the first RGB image are adjusted based on the reference color data; and a periodontal disease detection unit that detects periodontal disease in the user based on image data derived from the second RGB image.
- the color of the teeth and gums in the corrected image can be closer to the color of the user's teeth and gums, making it possible to more accurately detect periodontal disease from the corrected image. In other words, it is possible to generate an image that is more suitable for accurately detecting periodontal disease. This can improve the accuracy of periodontal disease detection using images.
- the periodontal disease detection system according to the second aspect may be the periodontal disease detection system according to the first aspect, and may further include an image data extraction unit that extracts, from the second RGB image, periodontal region image data including the free gingiva, which is the gingiva that surrounds the cervical region including the interdental papilla and marginal gingiva in a band shape, and a portion of the attached gingiva that is continuous with the free gingiva and extends from the bottom of the gingival sulcus to the gingival-alveolar junction, as the image data based on the second RGB image.
- the image data based on the second RGB image is an image in which part of the tooth region in the second RGB image has been deleted.
- the tooth region is an area that is not necessary for detecting periodontal disease, and can be a factor in reducing detection accuracy. Therefore, periodontal disease can be detected more accurately by detecting periodontal disease using an image in which part of the tooth region in the second RGB image has been deleted.
- the periodontal disease detection system according to the third aspect may be the periodontal disease detection system according to the second aspect, and the periodontal region image data may further include the left and right interdental papillae and gingiva.
- the periodontal disease detection system according to the fourth aspect may be a periodontal disease detection system according to any one of the first to third aspects, and the reference color data may be set based on the color of the user's natural teeth.
- a periodontal disease detection system is a periodontal disease detection system according to any one of the first to fourth aspects, wherein the image processing unit identifies a specific pixel area including pixels whose values based on the pixel values of the second RGB image are within a predetermined range, and generates a third RGB image by adjusting the gain of at least two of the red, green, and blue color components that make up the area of the natural tooth in the first RGB image excluding the specific pixel area based on the reference color data, and the image data based on the second RGB image may be image data based on the third RGB image.
- the third RGB image is an image corrected based on the color components of the tooth portion excluding the specific pixel area, and is therefore an image with more accurate gain adjustment. Detecting periodontal disease using image data based on such a third RGB image allows for more accurate detection of periodontal disease.
- a periodontal disease detection system may be the periodontal disease detection system according to the fifth aspect, wherein the image processing unit generates an HSV image by converting the color space of the second RGB image into an HSV space, and identifies as the specific pixel area a pixel area in which one or more pixels of the HSV image that satisfy at least one of a first predetermined range for saturation, a second predetermined range for hue, and a third predetermined range for brightness are located.
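One way such an HSV-based identification could look in code (the threshold ranges and names below are illustrative assumptions; the disclosure does not specify concrete values):

```python
import colorsys

import numpy as np

def specific_pixel_mask(rgb, h_range=(0.90, 0.99), s_range=(0.6, 1.0),
                        v_range=(0.97, 1.0)):
    """Mark pixels whose HSV values fall within at least one of the given
    hue, saturation, and brightness ranges (e.g. strongly saturated
    pinkish pixels such as fluorescing plaque)."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            r, g, b = (rgb[y, x] / 255.0).tolist()
            hh, ss, vv = colorsys.rgb_to_hsv(r, g, b)
            # "at least one of" the first/second/third predetermined ranges
            if (h_range[0] <= hh <= h_range[1]
                    or s_range[0] <= ss <= s_range[1]
                    or v_range[0] <= vv <= v_range[1]):
                mask[y, x] = True
    return mask
```

The resulting mask is then excluded from the tooth region before the gains are recomputed, so that plaque-like pixels do not skew the color correction.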
- a periodontal disease detection system may be the periodontal disease detection system according to the fifth aspect, wherein the image processing unit identifies as the specific pixel area a pixel area in which one or more pixels are located, where the red, green, and blue pixel values of the multiple pixels constituting the natural tooth area in the second RGB image satisfy at least one of the following conditions: the multiple red pixel values are within a first range, the multiple green pixel values are within a second range, and the multiple blue pixel values are within a third range.
- the periodontal disease detection system according to the eighth aspect may be a periodontal disease detection system according to any one of the third to seventh aspects, and the specific pixel region may include a plaque region.
- the periodontal disease detection system according to the ninth aspect may be a periodontal disease detection system according to any one of the second to eighth aspects, and the image data extraction unit may extract the periodontal region from the tip of the interdental papilla gingiva of the specific tooth to the free gingival sulcus from the second RGB image as the periodontal region image data.
- the periodontal region image data includes the periodontal region from the tip of the interdental papilla gingiva of the tooth to the free gingival sulcus, which is prone to changes when periodontal disease develops, allowing for more accurate detection of periodontal disease.
- a periodontal disease detection system may be a periodontal disease detection system according to any one of the first to tenth aspects, wherein the periodontal disease detection unit detects the periodontal disease of the user by inputting image data based on the second RGB image into a learning model that is trained to receive image data including a gingival region as input and output information related to periodontal disease in the image data.
- the periodontal disease detection system according to the twelfth aspect may be the periodontal disease detection system according to the eleventh aspect, and the learning model may output an estimated result of at least one of periodontal pocket depth, BOP (Bleeding On Probing), and GI (Gingival Index) value as information related to the periodontal disease.
- BOP: Bleeding On Probing
- GI: Gingival Index
- a periodontal disease detection system is a periodontal disease detection system according to any one of the first to twelfth aspects, and the periodontal disease detection unit may input the R, G, and B values of the gum color acquired from image data based on the second RGB image and detect the user's periodontal disease in that area based on a periodontal disease detection tool created from the range of R, G, and B values in the RGB color space of normal gum color and the range of R, G, and B values of gum color for multiple stages of periodontal disease.
- a periodontal disease detection system may be a periodontal disease detection system according to any one of the first to twelfth aspects, wherein the periodontal disease detection unit may input the hue, saturation, and brightness values of the gum color acquired from image data based on the second RGB image and detect the user's periodontal disease in that area based on a periodontal disease detection tool created from at least one range among the hue information, saturation information, and brightness information of the color of normal gums in the HSV color space and at least one range among the hue information, saturation information, and brightness information of the color of gums for each of multiple stages of periodontal disease.
- a periodontal disease detection system is a periodontal disease detection system according to any one of the first to twelfth aspects, wherein the periodontal disease detection unit may input the hue, saturation, and luminance values of the gum color acquired from image data based on the second RGB image and detect the user's periodontal disease in the relevant area based on a periodontal disease detection tool created from at least one range among the hue information, saturation information, and luminance information of the color of normal gums in the HSL color space and at least one range among the hue information, saturation information, and luminance information of the color of gums for each of multiple stages of periodontal disease.
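A minimal sketch of such a rule-base detection tool in the RGB case could look as follows (the stage labels and value ranges are invented placeholders; an actual rule base would be built from clinical gum-color data):

```python
# Each rule: (stage label, R range, G range, B range) in the RGB color space.
# These ranges are illustrative placeholders, not values from the disclosure.
RULE_BASE = [
    ("healthy",  (200, 255), (120, 180), (120, 180)),
    ("moderate", (150, 220), (60, 130),  (60, 130)),
    ("severe",   (80, 160),  (0, 70),    (0, 80)),
]

def classify_gum_color(rgb, rule_base=RULE_BASE):
    """Match a gum-color (R, G, B) sample against per-stage value ranges
    and return the first matching stage, or None if no rule applies."""
    r, g, b = rgb
    for stage, (rl, rh), (gl, gh), (bl, bh) in rule_base:
        if rl <= r <= rh and gl <= g <= gh and bl <= b <= bh:
            return stage
    return None
```

The same structure carries over to the HSV and HSL variants by swapping the R, G, B ranges for hue, saturation, and brightness or luminance ranges.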
- the periodontal disease detection system according to the sixteenth aspect may be a periodontal disease detection system according to any one of the first to fifteenth aspects, and may further include a tooth type identification unit that identifies the type of the specific tooth from the first RGB image or the second RGB image.
- a periodontal disease detection system is a periodontal disease detection system according to any one of the first to sixteenth aspects, and further includes an output unit that outputs information indicating the periodontal disease detected by the periodontal disease detection unit to a first information terminal of a dentist other than the user, and the output unit may further output, to a second information terminal of the user, examination necessity information regarding the need for the user to undergo an examination, the examination necessity information having been entered into the first information terminal.
- the periodontal disease detection system according to the eighteenth aspect may be a periodontal disease detection system according to any one of the first to seventeenth aspects, and may further include an output unit that outputs information indicating the periodontal disease detected by the periodontal disease detection unit to the user's second information terminal.
- the periodontal disease detection system according to the nineteenth aspect may be a periodontal disease detection system according to any one of the first to eighteenth aspects, in which the second acquisition unit acquires the reference color data before acquiring the first RGB image, and may further include a storage unit that stores the reference color data.
- a periodontal disease detection system may be a periodontal disease detection system according to any one of the first to nineteenth aspects, in which the reference color data is color data of the natural tooth obtained when a first light is irradiated onto the natural tooth, and the first RGB image is an image obtained when a second light different from the first light is irradiated onto the imaging area.
- a periodontal disease detection system is a periodontal disease detection system according to any one of the first to twentieth aspects, wherein the at least two color components include at least two of a first red pixel average value of multiple red pixel values possessed by multiple pixels constituting the natural tooth region in the first RGB image, a first green pixel average value of multiple green pixel values possessed by the multiple pixels, and a first blue pixel average value of multiple blue pixel values possessed by the multiple pixels, and the image processing unit may generate the second RGB image by adjusting the gain so that the difference between the at least two color components and the reference color data falls within a predetermined range.
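The gain adjustment described in this aspect could be sketched as follows (NumPy-based; the function name and the one-step correction are assumptions for illustration):

```python
import numpy as np

def adjust_to_reference(rgb, tooth_mask, ref_rgb):
    """Scale each channel so the mean R, G, B of the natural-tooth region
    matches the user's reference color data; the same gains are then
    applied to the whole image, correcting the gum colors as well."""
    img = rgb.astype(np.float64)
    means = img[tooth_mask].mean(axis=0)              # per-channel pixel averages
    gains = np.asarray(ref_rgb, dtype=np.float64) / means
    adjusted = np.clip(img * gains, 0, 255)
    return adjusted.astype(np.uint8), gains
```

After this step the difference between each channel average and the reference color is zero up to clipping and rounding, i.e. within any reasonable predetermined range.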
- a periodontal disease detection method is a periodontal disease detection method executed by a periodontal disease detection system that detects periodontal disease in a user based on an image taken inside the oral cavity, and includes irradiating light into the user's oral cavity to obtain a first RGB image of an imaged area including a specific tooth and the periodontal area of the specific tooth including the gums, obtaining reference color data that indicates the color standard of natural teeth included in the imaged area and that corresponds to the user, generating a second RGB image in which the gains of at least two of the red, green, and blue color components that constitute the area of the natural tooth in the first RGB image are adjusted based on the reference color data, and detecting periodontal disease in the user based on image data based on the second RGB image.
- a periodontal disease detection method is a periodontal disease detection method executed by one or more processors, wherein the one or more processors irradiate light into the oral cavity of a user, output a first RGB image capturing an imaging area including a specific tooth and the periodontal region of the specific tooth including the gums, obtain a detection result of the user's periodontal disease based on the first RGB image, and perform predetermined processing on the obtained detection result, wherein the obtained detection result includes a detection result of the user's periodontal disease detected based on image data derived from a second RGB image, the second RGB image being generated by adjusting, based on reference color data corresponding to the user, the gains of at least two of the red, green, and blue color components constituting the natural tooth region in the first RGB image, the reference color data indicating the reference color of the natural tooth included in the imaging area.
- a program according to a 24th aspect of the present disclosure is a program for causing a computer to execute the periodontal disease detection method according to the 22nd or 23rd aspect.
- these general or specific aspects may be realized as a system, method, integrated circuit, computer program, or non-transitory recording medium such as a computer-readable CD-ROM, or as any combination of a system, method, integrated circuit, computer program, or recording medium.
- the program may be pre-stored on the recording medium, or may be supplied to the recording medium via a wide area communication network, including the Internet.
- each figure is a schematic diagram and is not necessarily an exact illustration. Therefore, for example, the scales of the figures do not necessarily match. Furthermore, in each figure, substantially identical components are assigned the same reference numerals, and duplicate explanations are omitted or simplified.
- ordinal numbers such as “first” and “second” do not refer to the number or order of components, unless otherwise specified, but are used to avoid confusion and distinguish between components of the same type.
- Figure 1 is a perspective view of an intraoral camera 10 in the periodontal disease detection system according to this embodiment.
- the intraoral camera 10 has a toothbrush-shaped housing that can be handled with one hand.
- the housing includes a head portion 10a that is placed in the user's oral cavity when taking a photograph, a handle portion 10b that is held by the user, and a neck portion 10c that connects the head portion 10a and the handle portion 10b.
- the imaging unit 21 images the surface of the dentition and the periodontal region in the oral cavity, which are illuminated with light including the wavelength range of blue light.
- the surface of the dentition includes at least one of the buccal (outer) side surface of the dentition and the lingual (inner) side surface of the dentition.
- the dentition may also include, for example, one or more teeth. Blue light is an example of the second light.
- the photographing unit 21 is incorporated into the head unit 10a and neck unit 10c.
- the photographing unit 21 has an image sensor (not shown) and a lens (not shown) arranged on its optical axis LA.
- the imaging element is an image sensor such as a CMOS (Complementary Metal Oxide Semiconductor) sensor or a CCD (Charge Coupled Device) element, and an image of the teeth is formed on it by the lens.
- the imaging element outputs a signal (image data) corresponding to the formed image to the outside.
- the image captured by the imaging element is an example of a first RGB image or a first image.
- the captured image is obtained by irradiating the dentition with blue light, and therefore has a blue tint.
- the captured image is, for example, an image of the side of the dentition and the periodontal region.
- the side of the dentition may be the side on the lingual side or the side on the buccal side.
- the side of the dentition may be the side on the maxillary side or the side on the mandibular side.
- the captured image is, for example, an image of a capture region including one or more teeth (an example of a specific tooth) and the periodontal region of the one or more teeth.
- the periodontal region is a region including the gingiva of the tooth.
- the gingiva is the periodontal tissue surrounding the root of a tooth and is also called the gum.
- the specific tooth may be any tooth, for example, a tooth in an area where a user or a medical professional such as a dentist wants to detect periodontal disease, or any tooth.
- the imaging unit 21 may further include an optical filter that blocks light of a color emitted from the illumination unit (illumination device) and transmits fluorescence emitted by plaque in response to that light.
- the imaging unit 21 may include, as an optical filter, a blue light cut-off filter that cuts out the blue wavelength light component contained in the light incident on the imaging element.
- the blue light cut-off filter cuts out a portion of the light including the blue wavelength range from the light before it enters the imaging element. Note that the imaging unit 21 does not necessarily have to include a blue light cut-off filter.
- the intraoral camera 10 is also equipped with first to fourth LEDs 23A to 23D as an illumination unit that irradiates light onto the teeth to be photographed during photography.
- the first to fourth LEDs 23A to 23D irradiate dental plaque with light of a color that causes the plaque to fluoresce (for example, single-color light).
- the first to fourth LEDs 23A to 23D are, for example, blue LEDs that emit blue light including a wavelength with a peak at 405 nm (an example of a predetermined wavelength). Note that the first to fourth LEDs 23A to 23D are not limited to blue LEDs, and may be any light source that emits light including the blue wavelength range.
- the first to fourth LEDs 23A to 23D may be configured to emit light for capturing reference color data that serves as a reference for the color of the user's teeth.
- the light for capturing the reference color data includes light of a color different from blue light, such as white light.
- White light is an example of the first light.
- FIG. 2 is a schematic diagram of a periodontal disease detection system according to this embodiment.
- the periodontal disease detection system according to this embodiment is, generally speaking, an information processing system capable of detecting periodontal disease with greater accuracy.
- the information processing system is configured such that the imaging unit 21 captures fluorescence emitted by plaque in response to light from the illumination unit 23, and from one or more captured images, an image more suitable for detecting periodontal disease is obtained, and periodontal disease is detected based on the acquired image.
- the periodontal disease detection system includes an intraoral camera 10 and a mobile terminal 50.
- the intraoral camera 10 comprises a hardware unit 20, a signal processing unit 30, and a communication unit 40.
- the hardware unit 20 is a physical element of the intraoral camera 10 and includes a photographing unit 21, a sensor unit 22, an illumination unit 23, and an operation unit 24.
- the imaging unit 21 generates image data by photographing the side of the dentition and the periodontal region in the oral cavity, which are illuminated with light of a specific wavelength that excites fluorescent substances contained in plaque.
- the imaging unit 21 receives control signals from the camera control unit 31, performs operations such as photographing in accordance with the received control signals, and outputs video or still image data obtained by photographing to the image processing unit 32.
- the imaging unit 21 has the above-mentioned image sensor, optical filter, and lens.
- the image data is generated, for example, based on light that has passed through the optical filter.
- the image data is an image that shows multiple teeth, but it is sufficient that the image shows at least one tooth. Note that plaque may adhere to the side of the dentition.
- the photographing unit 21 may also perform photographing to obtain reference color data.
- the photographing unit 21 may generate image data for obtaining reference color data by photographing, for example, the side of the dentition and the periodontal region in the oral cavity illuminated with reference light such as white light.
- the reference light may be light from the illumination unit 23, or light from a light source external to the intraoral camera 10 (external light).
- the sensor unit 22 detects external light entering the capture area of the captured image. For example, the sensor unit 22 detects whether external light is entering the oral cavity.
- the sensor unit 22 is, for example, arranged near the capture unit 21.
- the sensor unit 22 may, for example, be arranged in the head unit 10a of the intraoral camera 10, similar to the capture unit 21. In other words, the sensor unit 22 is located inside the user's oral cavity when the capture unit 21 captures images.
- the sensor unit 22 may detect whether white light within a predetermined chromaticity range or a predetermined color temperature is being irradiated into the oral cavity when capturing images to obtain reference color data.
- the lighting unit 23 irradiates light onto the area of the multiple areas in the oral cavity that will be photographed by the imaging unit 21.
- QLF: Quantitative Light-induced Fluorescence
- bacteria in dental plaque are known to fluoresce a reddish-pink color (excited fluorescence) when irradiated with blue light; in this embodiment, the lighting unit 23 irradiates blue light onto the area that will be photographed by the imaging unit 21.
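As an illustration of why this fluorescence matters for the image processing (the ratio test and threshold below are assumptions for the sketch, not part of the disclosure): under blue illumination, plaque pixels show a high red-to-blue ratio, while the blue-tinted tooth surface does not.

```python
import numpy as np

def plaque_candidate_mask(rgb, ratio_thresh=1.5):
    """Flag pixels whose red component dominates the blue component,
    a rough proxy for the reddish-pink fluorescence of plaque under
    blue (405 nm) excitation light."""
    img = rgb.astype(np.float64)
    red, blue = img[..., 0], img[..., 2]
    return red > ratio_thresh * np.maximum(blue, 1.0)  # guard against zeros
```

Such a mask could serve, for example, as one way to realize the specific pixel area containing plaque described in the aspects above.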
- the lighting unit 23 has the above-mentioned first to fourth LEDs 23A to 23D.
- the first to fourth LEDs 23A to 23D, for example, emit light from different directions toward the shooting area. This makes it possible to prevent shadows from appearing in the shooting area.
- Each of the first to fourth LEDs 23A to 23D is configured so that at least the dimming can be controlled.
- Each of the first to fourth LEDs 23A to 23D may also be configured so that the dimming and color adjustment can be controlled.
- the first to fourth LEDs 23A to 23D are arranged to surround the imaging unit 21.
- the illumination unit 23 controls the illumination intensity (light emission intensity) according to the shooting area.
- the illumination intensity of each of the first to fourth LEDs 23A to 23D may be controlled uniformly, or may be controlled to differ from one another.
- the number of LEDs in the illumination unit 23 is not particularly limited, and may be one, or five or more.
- the illumination unit 23 is not limited to having an LED as a light source, and may also have other light sources.
- the illumination unit 23 may also be configured to emit reference light for obtaining reference color data of the user's teeth.
- the reference light is emitted at a different timing than the blue light.
- the operation unit 24 accepts operations from the user.
- the operation unit 24 is configured, for example, with push buttons, but may also be configured to accept operations via voice, etc.
- the operation unit 24 accepts operations from the user, for example, as to whether to take an image for detecting periodontal disease or to take an image for obtaining reference color data.
- the hardware unit 20 may also include a battery (e.g., a secondary battery) that supplies power to each component of the intraoral camera 10, a coil for wireless charging by an external charger connected to a commercial power source, and an actuator required for at least one of composition adjustment and focus adjustment.
- the signal processing unit 30 has functional components implemented by a CPU (Central Processing Unit) or MPU (Micro Processor Unit) that execute various processes described below, and a memory unit 35 such as a ROM (Read Only Memory) or RAM (Random Access Memory) that stores programs for causing each functional component to execute various processes.
- the signal processing unit 30 has a camera control unit 31, an image processing unit 32, a control unit 33, a lighting control unit 34, and a memory unit 35.
- the camera control unit 31 is mounted, for example, on the handle portion 10b of the intraoral camera 10 and controls the imaging unit 21.
- the camera control unit 31 controls at least one of the aperture and shutter speed of the imaging unit 21 in response to a control signal from the image processing unit 32, for example.
- the image processing unit 32 is mounted, for example, on the handle portion 10b of the intraoral camera 10; it acquires the captured image captured by the imaging unit 21, performs image processing on the acquired captured image, and outputs the processed captured image to the camera control unit 31 and the control unit 33.
- the image processing unit 32 may also output the processed captured image to the memory unit 35, and store the processed captured image in the memory unit 35.
- the image processing unit 32 is composed of, for example, a circuit, and performs image processing such as noise removal and edge enhancement on the captured image. Note that noise removal and edge enhancement processing may also be performed by the mobile terminal 50.
- the captured image output from the image processing unit 32 may be transmitted to the mobile terminal 50 via the communication unit 40, and an image based on the transmitted captured image may be displayed on the display unit 56 of the mobile terminal 50. This allows an image based on the captured image to be presented to the user.
- the control unit 33 is a control device that controls the signal processing unit 30.
- the control unit 33 controls each component of the signal processing unit 30 based on, for example, the detection results of external light, etc. by the sensor unit 22.
- the lighting control unit 34 is mounted, for example, on the handle portion 10b of the intraoral camera 10 and controls the turning on and off of the first to fourth LEDs 23A to 23D.
- the lighting control unit 34 is composed of, for example, a circuit. For example, when a user operates the display unit 56 of the mobile terminal 50 to start the intraoral camera 10, a corresponding signal is sent from the mobile terminal 50 to the signal processing unit 30 via the communication unit 40.
- the lighting control unit 34 of the signal processing unit 30 turns on the first to fourth LEDs 23A to 23D based on the received signal.
- the lighting control unit 34 may cause the lighting unit 23 to emit blue light when the operation unit 24 receives an operation indicating that imaging for periodontal disease detection will be performed, or cause the lighting unit 23 to emit white light when the operation unit 24 receives an operation indicating that imaging for reference color data acquisition will be performed.
- the memory unit 35 also stores images captured by the imaging unit 21.
- the memory unit 35 is realized by, for example, semiconductor memory such as ROM or RAM, but is not limited to this.
- the communication unit 40 is a wireless communication module for wireless communication with the mobile terminal 50.
- the communication unit 40 is mounted, for example, on the handle portion 10b of the intraoral camera 10, and communicates wirelessly with the mobile terminal 50 based on control signals from the signal processing unit 30.
- the communication unit 40 performs wireless communication with the mobile terminal 50 in accordance with existing communication standards such as Wi-Fi (registered trademark) and Bluetooth (registered trademark). Captured images are sent from the intraoral camera 10 to the mobile terminal 50, and operation signals are sent from the mobile terminal 50 to the intraoral camera 10, via the communication unit 40.
- the mobile terminal 50 performs processing to obtain images more suitable for detecting periodontal disease and to detect periodontal disease, for example, by irradiating the teeth with light that includes the blue wavelength range and using captured images of the fluorescing surface of the dentition and the periodontal region.
- the mobile terminal 50 functions as a user interface for the periodontal disease detection system.
- the mobile terminal 50 is an example of a second information terminal.
- FIG. 3 is a block diagram showing the functional configuration of the mobile terminal 50 according to this embodiment.
- the acquisition unit 51 acquires the captured image from the intraoral camera 10, and also acquires reference color data that indicates the reference color of the natural teeth included in the imaging area and is user-specific.
- the acquisition unit 51 may acquire, as reference color data, an image (color image) obtained by illuminating the oral cavity with white light from the intraoral camera 10, or may acquire reference color information for the natural teeth.
- the color information may be, for example, the chromaticity of the natural teeth in the image obtained by illuminating the oral cavity with white light.
- the color information for the natural teeth may be color information for any one location on the natural teeth, color information for a specific natural tooth, or a statistical value of color information for multiple locations on the natural teeth (e.g., multiple natural teeth).
- the statistical value of the color information is the average value of the chromaticity, but it may also be the maximum value, minimum value, mode, median, etc.
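As an illustration of the statistical reduction described above, the following minimal Python sketch reduces chromaticity samples taken from several natural-tooth locations to a single reference value. The sample values and the function name are illustrative assumptions, not part of this disclosure:

```python
# Hedged sketch: summarizing per-location chromaticity samples into one
# user-specific reference value, as described above. The average value is
# the default; max, min, mode, and median are also selectable.
from statistics import mean, median, mode

def summarize_chromaticity(samples, method="mean"):
    """Reduce chromaticity samples from multiple natural-tooth locations
    to a single reference value."""
    reducers = {
        "mean": mean,
        "max": max,
        "min": min,
        "mode": mode,
        "median": median,
    }
    return reducers[method](samples)

# e.g. hypothetical x-chromaticity samples from several natural teeth
samples = [0.33, 0.34, 0.35, 0.34]
print(summarize_chromaticity(samples))            # average value
print(summarize_chromaticity(samples, "median"))
```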
- the acquisition unit 51 functions as a second acquisition unit that acquires reference color data.
- the acquisition unit 51 may acquire information (e.g., medical examination necessity information, described below) from a first information terminal carried by a person other than the user (e.g., a medical professional such as a dentist).
- the first information terminal may be, for example, a portable terminal such as a mobile terminal, or a stationary terminal such as a PC.
- the first information terminal is an information terminal different from the portable terminal 50.
- the acquisition unit 51 may also include a wired communication module that performs wired communication with the intraoral camera 10.
- the tooth type identification unit 52 identifies the type of tooth (e.g., a specific tooth) contained in the captured image (i.e., the first image) from the captured image. Identifying the type of tooth may mean identifying whether the tooth is an incisor, canine, or molar, or whether the tooth is a central incisor, lateral incisor, canine, first premolar, second premolar, first molar, second molar, or third molar (wisdom tooth). The tooth type identification unit 52 may also identify the region of the oral cavity (upper jaw, lower jaw, left or right) in which the tooth is located.
- to identify the type of tooth, the tooth type identification unit 52 may use, for example, a machine learning model, a pattern matching method, or any other known method.
- the machine learning model is a learning model that is trained to output the type of tooth appearing in an image containing teeth when that image is input.
- a machine learning model is an example of a periodontal disease detection tool.
- a periodontal disease detection tool may include multiple learning models, each trained to receive image data containing an area as input and output information related to periodontal disease in that area.
- a periodontal disease detection tool may be represented by a machine learning model such as a neural network for estimating periodontal disease detection results from input information (here, an image).
- the tooth type identification unit 52 may use a corrected image (described below) instead of a captured image to identify the type of tooth contained in the corrected image.
- the image processing unit 53 generates a corrected image in which the gain of at least two of the red, green, and blue color components that make up the natural tooth area in the captured image is adjusted based on the reference color data.
- because the captured image is a bluish image, the image processing unit 53 performs processing to correct the color of the natural teeth in the captured image to the actual color of the user's natural teeth based on the reference color data of the user's natural teeth. In this way, the reference color data is set based on, for example, the color of the user's natural teeth.
- the image processing unit 53 does not simply perform a process to make the teeth achromatic (so-called white balance processing), but rather uses reference color data stored in the memory unit 58 to determine the amount of gain adjustment required to correct the color of the natural teeth in the captured image to the reference color data, and uniformly corrects the entire captured image (i.e., the entire image including the teeth and periodontal region) using the determined gain adjustment amount. "Uniformly" here means that the teeth and periodontal region are corrected using the same gain adjustment amount.
- the corrected image is an example of the second RGB image or second image.
- in the corrected image, the color of the natural teeth matches the actual color of the user's natural teeth. Furthermore, because the teeth and periodontal region are corrected with the same gain, the color of the periodontal region is closer to the color of the actual user's periodontal region than when white balance processing is performed.
- the image processing unit 53 may generate a corrected image from the captured image by adjusting the gain of at least two color components so that the difference from the reference color data falls within a predetermined range, using a first red pixel average value of the red pixel values of the pixels (first pixels) constituting the natural tooth region in the captured image, a first green pixel average value of the green pixel values of those pixels, and a first blue pixel average value of the blue pixel values of those pixels.
- the image processing unit 53 is not limited to using the first red pixel average value, the first blue pixel average value, and the first green pixel average value, and may generate a corrected image from the captured image using a statistical value of the first red pixels, a statistical value of the first blue pixels, and a statistical value of the first green pixels.
- Statistical values include maximum values, minimum values, modes, and medians, but are not limited to these.
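The gain-adjustment process described above can be sketched as follows. This is a minimal illustration assuming a flat pixel list and a hypothetical user-specific reference triple, not the actual implementation:

```python
# Hedged sketch of the per-channel gain adjustment described above: compute
# one gain per channel so that the average R, G, B values of the natural-
# tooth pixels match user-specific reference color data, then apply the
# same gains uniformly to the entire image (teeth and periodontal region).
# Pixel values and the reference triple are illustrative assumptions.

def channel_gains(tooth_pixels, reference_rgb):
    """Compute one gain per channel from the tooth-region averages."""
    n = len(tooth_pixels)
    avgs = [sum(p[c] for p in tooth_pixels) / n for c in range(3)]
    return [reference_rgb[c] / avgs[c] for c in range(3)]

def apply_gains(image_pixels, gains):
    """Apply the same gains uniformly to every pixel (teeth and gums)."""
    return [
        [min(255, round(p[c] * gains[c])) for c in range(3)]
        for p in image_pixels
    ]

# Bluish capture: tooth pixels average (120, 130, 200); the assumed
# reference color of this user's natural teeth is (200, 195, 185).
tooth = [[120, 130, 200], [120, 130, 200]]
gains = channel_gains(tooth, (200, 195, 185))
corrected = apply_gains(tooth, gains)
print(corrected[0])  # [200, 195, 185]: tooth average matches the reference
```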
- the image processing unit 53 may also detect plaque regions on the teeth from the corrected image. Because teeth and plaque emit different fluorescence, the image processing unit 53 can detect teeth and plaque from the color of the fluorescent (excited fluorescence) parts in the corrected image. Plaque (plaque region) fluoresces a reddish pink color (excited fluorescence) when illuminated with blue light. Note that any other known method for detecting tooth regions and plaque regions may be used. Plaque regions are an example of specific pixel regions.
- the image data extraction unit 54 extracts periodontal region image data from the corrected image, including the gingival region near the boundary between the tooth and the gingiva. It can also be said that the image data extraction unit 54 extracts periodontal region image data from the corrected image, including the free gingiva, which is the gingiva that surrounds the cervical region, including the interdental papilla and marginal gingiva, in a band shape, and a portion of the attached gingiva that is continuous with the free gingiva and extends from the bottom of the gingival sulcus to the gingival-alveolar junction.
- the periodontal region image data may include the interdental papilla and gingiva on both sides of the tooth.
- the periodontal region image data is an image extracted from the corrected image that includes the periodontal region where periodontal disease develops, but that minimizes the inclusion of tooth regions that are unnecessary for detecting periodontal disease.
- the image data extraction unit 54 may extract the periodontal region image data by removing part of the tooth region from the corrected image.
- the periodontal region image data is an example of image data based on the corrected image.
- the periodontal disease detection unit 55 detects the user's periodontal disease based on image data derived from the corrected image.
- the periodontal disease here may be the current progress of periodontal disease, or a sign of periodontal disease.
- a sign includes the presence of a precursory phenomenon that is not currently recognized as periodontal disease but indicates a high likelihood that periodontal disease will occur. Examples of precursory phenomena include, but are not limited to, the occurrence of gingivitis.
- the periodontal disease detection unit 55 detects the user's periodontal disease by inputting image data based on the corrected image (in this embodiment, periodontal region image data) into a learning model that has been trained to input image data including the gingival region and output information related to periodontal disease in the image data.
- the information related to periodontal disease includes the presence or absence of periodontal disease, the progress of periodontal disease, signs of periodontal disease, etc.
- the information related to periodontal disease may also include an estimated result of at least one of the periodontal pocket depth, BOP, and GI value.
- the learning model is trained in advance through supervised learning using a learning dataset in which image data including the gingival region (in this embodiment, periodontal region image data) is used as input data and information regarding periodontal disease is used as correct answer data.
- the learning model may be, for example, a model that can determine the occurrence of each of a plurality of diseases, including cases in which multiple diseases have occurred, obtained in advance by training using training data including a plurality of sample data having features for identifying the presence or absence of diseases related to the periodontal region of a plurality of teeth.
- the image data used for learning is a plurality of images showing different stages of periodontal disease progression, such as image data of a normal periodontal region, image data of a periodontal region suspected of having periodontal disease, and image data of periodontal regions with mild, moderate, or severe periodontitis. Furthermore, images for learning may be prepared for each type of tooth, region in which the tooth is located, and shooting direction (such as whether the image is taken from the lingual side or the buccal side).
- the image data may be an image including the periodontal region extracted by the image data extraction unit 54 (i.e., an edited image), or it may be the corrected image itself.
- the processing unit that generates the learning model may be provided by the periodontal disease detection system, or by a device outside the periodontal disease detection system.
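As a toy stand-in for the supervised learning described above, the following sketch trains a nearest-centroid classifier on a single synthetic hand-picked feature. The real learning model takes image data as input; the feature, labels, and values here are illustrative assumptions only:

```python
# Hedged sketch: supervised training and prediction reduced to a toy
# nearest-centroid classifier over one synthetic feature (mean redness of
# the gingival region). Not the patent's actual learning model.

def train(dataset):
    """dataset: list of (feature, label). Returns one centroid per label."""
    sums, counts = {}, {}
    for x, y in dataset:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Assign the label whose centroid is closest to the feature."""
    return min(centroids, key=lambda y: abs(centroids[y] - x))

# Synthetic mean-redness features from labelled periodontal-region images
data = [(0.40, "normal"), (0.45, "normal"),
        (0.70, "gingivitis"), (0.75, "gingivitis")]
model = train(data)
print(predict(model, 0.42))  # normal
print(predict(model, 0.72))  # gingivitis
```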
- the display unit 56 is a display device provided in the mobile terminal 50, and displays information indicating the periodontal disease detected by the periodontal disease detection unit 55.
- the display unit 56 is realized, for example, by a liquid crystal display panel, but is not limited to this.
- the output unit 57 is a wireless communication module for wireless communication with a first information terminal of a person other than the user (for example, a medical professional such as a dentist).
- the output unit 57 outputs information indicating periodontal disease detected by the periodontal disease detection unit 55 to the first information terminal of the person other than the user.
- the output unit 57 also outputs to the user's second information terminal the medical examination necessity information acquired by the acquisition unit 51, that is, information regarding whether the user needs to receive a medical examination, entered by the person into the first information terminal.
- the second information terminal may be, for example, an information terminal different from the mobile terminal 50.
- the dentist may be, for example, the user's doctor.
- the output unit 57 may associate information indicating periodontal disease detected by the periodontal disease detection unit 55 with the type of tooth identified by the tooth type identification unit 52 and output the associated information to at least one of the first information terminal and the second information terminal.
- the output unit 57 may display information indicating periodontal disease associated with the type of tooth on at least one of the first information terminal and the second information terminal. This makes it possible to notify at least one of the user and a medical professional of whether the periodontal region of each type of tooth has periodontal disease.
- the output unit 57 may also include a wired communication module that performs wired communication with the first information terminal and the second information terminal.
- the memory unit 58 is a storage device that stores various information for detecting periodontal disease in a user.
- the memory unit 58 may store, for example, a learning model used by the periodontal disease detection unit 55, or may store reference color data.
- the memory unit 58 may be realized, for example, by a semiconductor memory or the like, but is not limited to this.
- the mobile terminal 50 has a configuration for generating an image that is more suitable for detecting periodontal disease.
- the mobile terminal 50 does not have to have a configuration for detecting periodontal disease, such as the periodontal disease detection unit 55.
- the dentist makes a diagnosis while checking an image (e.g., a corrected image) that is more suitable for detecting periodontal disease generated by the mobile terminal 50.
- the mobile terminal 50 may function as a support device that generates images that allow the dentist to make a more accurate diagnosis.
- FIG. 4 is a flowchart showing the operation of the periodontal disease detection system (periodontal disease detection method) according to this embodiment.
- the acquisition unit 51 acquires reference color data of the user's natural teeth and stores it in the memory unit 58 (S11).
- the acquisition unit 51 may acquire the reference color data before acquiring the captured image. Note that the reference color data only needs to be acquired before step S14 is executed; it is not limited to being acquired before the captured image is acquired.
- the acquisition unit 51 may store the reference color data in the memory unit 58 in association with information indicating the user. Furthermore, the processing of step S11 may be performed, for example, once for each user. For example, if reference color data is already stored for the user, step S11 may be omitted.
- the acquisition unit 51 acquires a captured image of the imaging area inside the user's oral cavity (S12).
- the captured image may be an image taken by the user themselves, or an image of the inside of the user's oral cavity taken by a medical professional.
- the acquisition unit 51 may store the captured image in the memory unit 58.
- the tooth type identification unit 52 identifies the tooth area in the captured image (S13).
- the tooth type identification unit 52 may identify the tooth area by inputting the captured image into a trained machine learning model and using the tooth area output by the model, and may thereby identify in which region of the oral cavity the tooth in the captured image is located.
- the image processing unit 53 generates a corrected image in which the color of the teeth in the captured image has been corrected based on the reference color data (S14).
- the image processing unit 53 determines the amount of gain adjustment so that the color data of the teeth in the captured image matches the reference color data or falls within a predetermined range, and uniformly corrects the entire captured image (i.e., the entire image including the teeth and periodontal region) using the determined amount of gain adjustment.
- the image processing unit 53 may perform the following processing to generate a corrected image.
- the image processing unit 53 may identify a specific pixel area consisting of pixels whose values, based on the pixel values of a first corrected image obtained by correcting the color of the teeth in the captured image, fall within a predetermined range, and may generate, as the above corrected image, a third corrected image by adjusting the gain of at least two of the red, green, and blue color components that make up the natural tooth area in the captured image excluding the specific pixel area, based on the reference color data.
- Image data based on the third corrected image is an example of image data based on a corrected image.
- the specific pixel area is an area of the natural tooth where the natural tooth is not exposed, for example, an area where plaque is attached.
- the image processing unit 53 may generate an HSV image by converting the color space of a first corrected image obtained by correcting the color of the teeth in the captured image into an HSV space, and identify as the specific pixel area a pixel area in which one or more pixels of the HSV image that satisfy at least one of the following conditions are located: saturation within a first predetermined range, hue within a second predetermined range, and brightness within a third predetermined range.
- the image processing unit 53 may identify as the specific pixel area a pixel area in which one or more pixels are located that, among the pixels constituting the natural tooth area in the first corrected image, satisfy at least one of the following conditions: the red pixel values are within a first range, the green pixel values are within a second range, and the blue pixel values are within a third range.
- the image processing unit 53 may identify the specific pixel area from the HSV image converted from the first corrected image, or may identify the specific pixel area from the first corrected image itself.
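The HSV-based identification of the specific pixel area described above can be sketched as follows. The threshold ranges below are illustrative assumptions: plaque fluoresces a reddish-pink color under blue light, so a hue window around red/pink is assumed, not prescribed by this disclosure:

```python
# Hedged sketch: identify the "specific pixel area" (e.g. plaque) by
# converting each RGB pixel of the first corrected image to HSV and keeping
# pixels whose hue, saturation, and value all fall in predetermined ranges.
import colorsys

def specific_pixel_mask(rgb_pixels,
                        hue_range=(0.90, 1.0),   # assumed reddish-pink hues
                        sat_range=(0.3, 1.0),
                        val_range=(0.3, 1.0)):
    """Return True for pixels meeting the HSV conditions."""
    mask = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        mask.append(hue_range[0] <= h <= hue_range[1]
                    and sat_range[0] <= s <= sat_range[1]
                    and val_range[0] <= v <= val_range[1])
    return mask

pixels = [(230, 120, 160),   # reddish-pink: candidate plaque
          (210, 205, 195)]   # near-achromatic: exposed natural tooth
print(specific_pixel_mask(pixels))  # [True, False]
```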
- the image data extraction unit 54 extracts the gingival region based on the corrected image (S15).
- the image data extraction unit 54 may, for example, generate an HSV image by converting the color space of the first image (RGB image) into HSV space, and detect the region whose hue (H) falls within a predetermined range as the gingival region.
- FIG. 5 is a first diagram for explaining the periodontal region within the oral cavity in this embodiment.
- Figure 5 is an image captured of the oral cavity including the dental region and the periodontal region.
- Figure 6 is a second diagram for explaining the periodontal region within the oral cavity in this embodiment.
- Figure 6 shows a cross-sectional view of the periodontal region. Note that although Figure 6 is a cross-sectional view, hatching has been omitted for convenience.
- the gingiva comprises the free gingiva, attached gingiva, and alveolar mucosa.
- the free gingiva is the part of the gingiva that is closest to the head of the tooth and moves when the cheek or other parts of the body move.
- the attached gingiva is the part of the gingiva that is attached to the bone and does not move when the cheek or other parts of the body move.
- the alveolar mucosa is the part of the gingiva located on the opposite side of the attached gingiva from the free gingiva, and it moves when the cheek or other parts of the body move.
- the interdental papilla gingiva is the thin gingiva found between the teeth; it is weak and prone to inflammation and swelling, and is a place where dirt and plaque tend to accumulate. Periodontal disease is one of the causes of loss of the interdental papilla gingiva.
- the free gingiva is the part of the gingiva that forms the gingival sulcus.
- the gingival sulcus is a groove between the tooth and gingiva, and is a place where plaque that leads to periodontal disease tends to accumulate.
- the gingival margin is the top of the free gingiva and is located opposite the attached gingiva.
- the free gingival sulcus is located at the border between the free gingiva and attached gingiva, and the gingival-alveolar junction is located at the border between the attached gingiva and alveolar mucosa.
- the attached gingiva is the part of the gingiva from the free gingival sulcus to the gingival-alveolar junction.
- FIG. 7A is a diagram illustrating image data extracted by the image data extraction unit 54 according to this embodiment.
- (a) of FIG. 7A is a diagram illustrating the area extracted from the corrected image by the image data extraction unit 54
- (b) of FIG. 7A is a diagram illustrating the extracted image data.
- the image shown in (a) of FIG. 7A is an example of a corrected image
- (b) of FIG. 7A is an image obtained by extracting the rectangular area surrounded by the dashed line in (a) of FIG. 7A (periodontal region image data extracted by the image data extraction unit 54).
- the image data extraction unit 54 draws dashed-dotted lines (the dashed-dotted lines extending vertically on the page) that circumscribe the left and right sides of the tooth contour and are perpendicular to the tooth alignment direction. The left and right sides are the parts of the tooth that protrude most toward the adjacent teeth.
- the image data extraction unit 54 also draws dashed-dotted lines (the upper dashed-dotted lines extending horizontally on the paper) parallel to the tooth alignment direction so as to include the apex (tip) of the interdental papilla gingiva, and draws dashed-dotted lines (the lower dashed-dotted lines extending horizontally on the paper) parallel to the tooth alignment direction so as to include the peripheral tooth edge.
- the image data extraction unit 54 generates periodontal region image data by extracting the rectangular region enclosed by the dashed line shown in (a) of Figure 7A from the corrected image.
- the image data extraction unit 54 extracts from the corrected image, as periodontal region image data, a rectangular region of the periodontal region of the tooth in question that includes from the tip of the interdental papilla gingiva (e.g., the periodontal margin) to the free gingival sulcus.
- the periodontal region image data is an image that includes only a portion of the tooth.
- the tooth region is an area that is unnecessary for detecting periodontal disease (e.g., an area that may reduce detection accuracy), and as shown in (b) of Figure 7A, it is desirable that the periodontal region image data include as little tooth region as possible.
- because the periodontal region image data includes the area from the periodontal margin to the free gingival sulcus, it is possible to estimate the depth of the periodontal pocket based on that image data. In addition, when the interdental papilla gingiva becomes diseased, it recedes, appearing to create gaps between the teeth, making it possible to determine the progression of the periodontal disease.
- the image data extraction unit 54 is not limited to extracting the region shown in (a) of Figure 7A, and may, for example, extract from the corrected image as periodontal region image data a rectangular region that is long in the vertical direction and is surrounded by a dashed line extending in the vertical direction (for example, a region that includes the periodontal region as well as most of the tooth). Note that, from the perspective of detecting periodontal disease with greater accuracy, it is preferable to extract the region surrounded by the dashed line shown in (a) of Figure 7A. Note also that the extracted image data is not limited to being a rectangular image. Note that the image data extraction unit 54 may also extract from the captured image as periodontal region image data a region in the captured image that corresponds to the dashed line shown in (a) of Figure 7A.
- Figure 7B is a diagram showing a specific example of a method for forming a rectangular area according to this embodiment.
- the frame that shows a rectangular area will also be referred to as a rectangular frame.
- Figure 7B shows an example of a rectangular frame for a maxillary central incisor.
- the image data extraction unit 54 forms a rectangular frame F1 that surrounds a specific tooth in accordance with the dentition.
- the image data extraction unit 54 identifies the tooth based on, for example, the color of the tooth and gums, and forms a rectangular frame F1 that surrounds one tooth based on the shape of the tooth.
- the four sides of the rectangular frame F1 are formed so as to be in contact with the outer shape of one tooth.
- the image data extraction unit 54 then forms a rectangular frame F2 by moving the bottom edge of the rectangular frame F1 upward within a range that includes the left and right interdental papilla gingiva (for example, the tips of the interdental papilla gingiva). At this time, it is advisable to move the edge upward as much as possible within that range in order to remove as much tooth information as possible.
- the image data extraction unit 54 then forms a rectangular frame F3 by moving the upper end of the rectangular frame F2 upward from the gingival margin by a predetermined distance D.
- the predetermined distance D is, for example, 2 mm, but is not limited to this, and may be 1 mm or 3 mm or more. This forms the dashed-dotted frame shown in (b) of Figure 7A.
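The three-step frame formation described above (F1 → F2 → F3) can be sketched as follows, using image coordinates with y increasing downward for this maxillary example. The coordinate convention, the pixels-per-millimeter calibration value, and all numeric inputs are illustrative assumptions:

```python
# Hedged sketch of the frame formation described above: F1 circumscribes
# one tooth; F2 raises the bottom edge to just include the interdental
# papilla tips; F3 raises the top edge a predetermined distance D above
# the gingival margin. px_per_mm is a hypothetical calibration value.

def form_frames(tooth_box, papilla_tip_y, gingival_margin_y,
                d_mm=2.0, px_per_mm=10.0):
    """tooth_box: (left, top, right, bottom) circumscribing one tooth."""
    left, top, right, bottom = tooth_box           # F1
    f2_bottom = papilla_tip_y                      # F2: raise bottom edge
    f3_top = gingival_margin_y - d_mm * px_per_mm  # F3: D above the margin
    return (left, f3_top, right, f2_bottom)

# Maxillary central incisor: tooth box circumscribed at (100, 40, 160, 200),
# papilla tips at y=150, gingival margin at y=60 (all hypothetical).
frame = form_frames((100, 40, 160, 200), papilla_tip_y=150,
                    gingival_margin_y=60)
print(frame)  # (100, 40.0, 160, 150)
```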
- the periodontal disease detection unit 55 performs detection related to periodontal disease based on the image of the gingival region (S16).
- the periodontal disease detection unit 55 inputs the periodontal region image data extracted by the image data extraction unit 54 (for example, the image shown in (b) of FIG. 7A) into the learning model, and obtains an estimated result of periodontal disease, which is the output of the learning model.
- the periodontal disease detection unit 55 obtains, for example, the estimated result as the detection result of periodontal disease.
- the output unit 57 outputs the detection results of the periodontal disease detection unit 55 (S17).
- the output unit 57 transmits the detection results via communication to the information terminal of at least one of the user and medical professional. This allows the periodontal disease detection results to be notified to at least one of the user and medical professional.
- the periodontal disease detection system corrects color casts on specific teeth and gums caused by the light irradiating the oral cavity when capturing an image based on pre-stored reference color data for specific teeth that is specific to the user, without using color calibration color patches or the like. This allows the colors of the teeth and gums in the image to be closer to the colors of the teeth and gums of the user. By detecting periodontal disease using such an image (corrected image), it is possible to prevent a decrease in the accuracy of periodontal disease detection due to the image.
- a method is adopted in which risk is assessed using a judgment rule for aspects for which a person can construct such a rule (for example, a judgment rule based on the shape of the tip of the interdental papilla gingiva).
- the judgment rule is an example of a periodontal disease detection tool.
- the judgment rule is generated in order to output information related to periodontal disease from image data.
- the configuration of the periodontal disease detection system of this modified example may be the same as that of the periodontal disease detection system 1 of embodiment 1, and will be described below using the reference numerals of the periodontal disease detection system 1.
- the periodontal disease detection unit 55 of this modified example automatically detects the user's periodontal disease using preset determination rules.
- Figures 8A to 8C are diagrams showing an example of each group rule base (judgment rule) for this modified example.
- Figure 8A shows the judgment rule when the normal gingival R value is A1 to A2 for free gingiva and B1 to B2 for attached gingiva.
- Figure 8B shows the judgment rule when the normal gingiva R value is A3 to A4 for free gingiva and B3 to B4 for attached gingiva.
- Figure 8C shows the judgment rule when the normal gingiva R value is A5 to A6 for free gingiva and B5 to B6 for attached gingiva.
- the nth group rule base stores the periodontal disease determination criteria when the normal range for free gingiva is A(2n-1) ≤ R value ≤ A(2n), and when the normal range for attached gingiva is B(2n-1) ≤ R value ≤ B(2n).
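- The grouped rule base above can be represented as a small lookup structure. The sketch below is illustrative only: the numeric bounds stand in for A1 to A6 and B1 to B6, which the disclosure leaves unspecified, and a real system would use clinically derived values.

```python
# Each group holds the normal R-value ranges for free and attached gingiva
# (placeholder values for A1..A6 / B1..B6).
RULE_BASE_GROUPS = [
    {"free": (150, 170), "attached": (160, 180)},  # group 1: A1-A2, B1-B2
    {"free": (170, 190), "attached": (180, 200)},  # group 2: A3-A4, B3-B4
    {"free": (190, 210), "attached": (200, 220)},  # group 3: A5-A6, B5-B6
]

def select_group(normal_free_r, normal_attached_r):
    """Return the index of the rule-base group whose normal ranges contain
    the user's normal free/attached gingiva R values, or None."""
    for i, g in enumerate(RULE_BASE_GROUPS):
        lo_f, hi_f = g["free"]
        lo_a, hi_a = g["attached"]
        if lo_f <= normal_free_r <= hi_f and lo_a <= normal_attached_r <= hi_a:
            return i
    return None
```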
- the rule base groups are classified only by R value, but the determination rules may also be classified based on, for example, G value, B value, or a combination of two of R value, G value, and B value, or a combination of three of R value, G value, and B value.
- the rule base (each judgment rule) for each group may set a rule for determining the progression level of periodontal disease based on the amount of change in the R value, G value, and B value from normal gingiva obtained from the user.
- the rule base (each judgment rule) for each group may divide gingiva into free gingiva and attached gingiva, and set the progression level of periodontal disease based on the amount of change in each of the R value, G value, and B value from normal gingiva values.
- the criteria for determining periodontal disease may be set for each user.
- the progression level is an example of the degree of progression.
- the progression level may be, for example, the stage of progression of periodontal disease.
- progression level 1 may be gingivitis
- progression level 2 may be mild periodontal disease
- progression level 3 may be moderate periodontal disease
- progression level 4 may be severe periodontal disease.
- the periodontal disease detection unit 55 may output the progression level of periodontal disease by prioritizing an evaluation based on free gingiva. Alternatively, the periodontal disease detection unit 55 may output the progression level of periodontal disease for both free gingiva and attached gingiva.
- the judgment rules shown in Figures 8A to 8C are set in advance and stored in the memory unit 58. There is no particular limit to the number of judgment rules stored in the memory unit 58, as long as it is two or more.
- the judgment rules may also be rules for detecting the presence or absence of periodontal disease.
- each judgment rule may be set for each color space, or may be set individually for each region in the oral cavity, or, if color information of the user's healthy gums (e.g., absolute color values and color variation) has been acquired in advance, may be set according to that color information.
- judgment rules may be set for each user. For example, color information (absolute color values, e.g., R value, G value, B value) at multiple points on the user's gums may be acquired, the variation in R value may be calculated, and the normal range of the R value may be determined based on the statistical value of the R value (e.g., average value) and the variation in the R value.
- the G value and B value may also be determined in the same way as the R value.
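- The per-user rule setting described above (a statistical value of the R value plus its variation) might be computed as follows; the width factor k is an assumption, not a value given in the text.

```python
import statistics

def normal_r_range(healthy_r_samples, k=2.0):
    """Return (low, high) bounds for the user's normal gingival R value,
    from the mean and variation of R values sampled at multiple points
    on the user's healthy gums."""
    mean = statistics.mean(healthy_r_samples)
    spread = statistics.pstdev(healthy_r_samples)  # variation across points
    return mean - k * spread, mean + k * spread
```

The same computation can be repeated for the G and B values, as noted above.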
- the periodontal disease detection unit 55 detects periodontal disease using the above-mentioned judgment rules.
- the periodontal disease detection unit 55 is configured to compare the input image data with the judgment rules and detect periodontal disease depending on the comparison results. Processing the input image data using the judgment rules is also referred to as inputting the image data into the judgment rules.
- the periodontal disease detection unit 55 detects the user's periodontal disease based on the R, G, and B values of the gum color obtained from image data based on the second RGB image and the judgment rule.
- the periodontal disease detection unit 55 extracts a judgment rule from multiple judgment rules that corresponds to the region to which the target tooth belongs or the user, and uses the extracted judgment rule to judge periodontal disease.
- the periodontal disease detection unit 55 extracts color information of the gums from image data based on the second RGB image, and determines whether the gums are normal or not based on the difference (amount of change) between the color value indicated by the extracted color information and a preset normal range, and if not normal, determines the progression level of the periodontal disease for both the free gums and the attached gums.
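- The deviation-based grading described above can be sketched as follows. The change thresholds separating the progression levels are illustrative assumptions, not clinical values from this disclosure.

```python
def progression_level(measured_r, normal_range):
    """Map the amount of change from the normal R range to a progression
    level: 0 = normal, 1 = gingivitis, 2 = mild, 3 = moderate, 4 = severe.
    Thresholds 10/25/40 are hypothetical."""
    lo, hi = normal_range
    if lo <= measured_r <= hi:
        return 0
    change = lo - measured_r if measured_r < lo else measured_r - hi
    if change < 10:
        return 1
    if change < 25:
        return 2
    if change < 40:
        return 3
    return 4

def detect_both(free_r, attached_r, free_range, attached_range):
    """Evaluate free and attached gingiva separately, as in the text."""
    return {"free": progression_level(free_r, free_range),
            "attached": progression_level(attached_r, attached_range)}
```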
- the color values may be R, G, and B values; hue, saturation, and lightness information; or hue, saturation, and brightness information.
- for example, the periodontal disease detection unit 55 may input the R, G, and B values of the gum color obtained from image data based on the second RGB image into a determination rule created from the range of R, G, and B values in the RGB color space for normal gum color and the ranges of R, G, and B values for gum color at multiple stages of periodontal disease, and thereby detect the user's periodontal disease in that area.
- similarly, the periodontal disease detection unit 55 may use a determination rule created from at least one of the ranges of hue, saturation, and brightness information in the HSV color space for the color of normal gums and for the color of gums at each of multiple stages of periodontal disease, input the corresponding values obtained from the image data, and detect the user's periodontal disease in the relevant area.
- likewise, the periodontal disease detection unit 55 may use a determination rule created from at least one of the ranges of hue, saturation, and luminance information in the HSL color space for the color of normal gums and for the color of gums at each of multiple stages of periodontal disease, and detect the user's periodontal disease in that area.
- the difference in gum color is the difference in color between the darkest and lightest parts of the gums
- the determination rule may be a rule that associates this difference with normal and advanced levels. Since each user's gum color (e.g., normal gum color) is different, by using the difference in color between the darkest and lightest parts of the gums, periodontal disease can be detected without relying on each user's own gum color.
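- The darkest-versus-lightest rule above can be sketched as below; the normal spread bound is a hypothetical threshold, used only to show how the rule is user-independent.

```python
def gum_color_spread(gum_r_values):
    """Difference in R value between the lightest and darkest sampled
    parts of the gums."""
    return max(gum_r_values) - min(gum_r_values)

def spread_rule(gum_r_values, normal_max_spread=30):
    """Return 'normal' when the spread stays within the normal bound,
    'advanced' otherwise, regardless of the user's baseline gum color."""
    if gum_color_spread(gum_r_values) <= normal_max_spread:
        return "normal"
    return "advanced"
```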
- gum shape may be used instead of or in addition to color.
- An example in which gum shape is used will be described in a modified version of embodiment 2.
- periodontal disease detection system of this modified example may be the same as that of the periodontal disease detection system 1 according to embodiment 1, and the following description will use the reference numerals of the periodontal disease detection system 1.
- Figure 9 is a diagram explaining the learning data for this modified example.
- numbers are assigned to each type of tooth in the upper and lower jaws. For example, "1" is a central incisor and "2" is a lateral incisor.
- multiple points are set for each tooth to obtain correct answer data for the training data.
- three points (1 (left), 2 (center), 3 (right)) are set on the outside of the maxillary central incisors, and three points (4 to 6) are set on the buccal side of the maxillary central incisors.
- three points (11 to 13) are set on the outside of the mandibular central incisors, and three points (14 to 16) are set on the buccal side of the mandibular central incisors.
- the measurement points are set in the gingival area at the boundary between the tooth and the gingiva.
- Measurements include at least one of periodontal pockets and BOP values, and the measurement items in this modified example include both periodontal pockets and BOP values.
- periodontal pockets and BOP values are measured at six locations for each tooth (for example, locations 1 to 6 for the maxillary central incisors).
- alternatively, periodontal pockets and BOP values may be measured at three locations for each tooth (for example, locations 1 to 3 or 4 to 6 for the maxillary central incisors).
- the periodontal pocket values may be set into nine classes, one for each of 1 to 9 mm, or two classes, 3 mm or less and 4 mm or more, or three classes, 2 mm or less, 3 to 5 mm, and 6 mm or more, or five classes, 2 mm or less, 3 mm, 4 mm, 5 mm, and 6 mm or more.
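- The alternative class divisions for periodontal pocket values listed above can be written as simple binning functions (the class labels are illustrative):

```python
def bin_two(depth_mm):
    """Two classes: 3 mm or less, and 4 mm or more."""
    return "<=3mm" if depth_mm <= 3 else ">=4mm"

def bin_three(depth_mm):
    """Three classes: 2 mm or less, 3 to 5 mm, and 6 mm or more."""
    if depth_mm <= 2:
        return "<=2mm"
    return "3-5mm" if depth_mm <= 5 else ">=6mm"

def bin_five(depth_mm):
    """Five classes: 2 mm or less, 3 mm, 4 mm, 5 mm, and 6 mm or more."""
    if depth_mm <= 2:
        return "<=2mm"
    if depth_mm >= 6:
        return ">=6mm"
    return f"{depth_mm}mm"  # 3, 4, or 5 mm as individual classes
```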
- BOP scores are assigned a value of 0 or 1 depending on whether or not there is bleeding.
- It includes (1) information indicating whether the image was taken of the upper or lower jaw, (2) information indicating whether the image was taken of the right or left side, with the front teeth (central incisors) at the center, (3) information indicating the tooth number (1-8), (4) information indicating whether the image is of the lingual side or the buccal side, (5) information indicating the periodontal pocket value, and (6) information indicating the BOP value.
- (5) the information indicating the periodontal pocket value and (6) the information indicating the BOP value include the measurement results for each measurement point. For example, the following additional information is added: "upper jaw, left, number 5, lingual side, pocket values 3, 2, 5, BOP values 0, 0, 1."
- (5) the periodontal pocket value and (6) the BOP value are used as correct labels during learning. Note that tooth numbers are set to exclude primary teeth, but this is not a limitation.
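- The additional information (1) to (6) attached to each training image can be modeled as a small record; the field names below are illustrative assumptions, not names used in the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ImageAnnotation:
    jaw: str              # (1) "upper" or "lower"
    side: str             # (2) "right" or "left", with central incisors at the center
    tooth_number: int     # (3) tooth number, 1-8
    surface: str          # (4) "lingual" or "buccal"
    pocket_mm: List[int]  # (5) pocket value per measurement point (correct label)
    bop: List[int]        # (6) BOP value (0/1) per measurement point (correct label)

# The worked example from the text:
example = ImageAnnotation("upper", "left", 5, "lingual", [3, 2, 5], [0, 0, 1])
```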
- by adding such information to the training data, the accuracy of the generated machine learning model can be improved. Furthermore, based on the estimated periodontal pocket depth and the BOP value, the machine learning model can determine at least one of the degree of progression of periodontal disease and the need for periodontal surgical treatment.
- the periodontal disease detection system is an information processing system for detecting periodontal disease in a user based on an image of a predetermined area in the oral cavity, and differs from the periodontal disease detection system according to embodiment 1 in that it includes a mobile terminal 50a instead of the mobile terminal 50.
- the following description will focus on differences from embodiment 1, such as the configuration of the mobile terminal 50a, and descriptions of identical or similar configurations will be omitted or simplified.
- FIG. 10 is a block diagram showing the functional configuration of the mobile terminal 50a according to this embodiment.
- the mobile terminal 50a does not include the tooth type identification unit 52 of the mobile terminal 50 according to embodiment 1, but does include an area detection unit 151 and a rule selection unit 152.
- the image processing unit 53 generates a corrected image in which the colors have been corrected from the captured image.
- the image processing unit 53 may generate the corrected image from the captured image using reference color data as in embodiment 1, or may generate the corrected image from the captured image by performing white balance processing.
- the area detection unit 151 detects which of the multiple areas of the oral cavity defined by dividing the tooth row the tooth in the corrected image generated by the image processing unit 53 belongs to.
- the multiple areas are areas obtained by dividing the oral cavity into groups of teeth with similar characteristics. Similar characteristics include similar tooth shapes. Note that if the shape of the teeth changes, the shape of the gums may also change similarly. Therefore, in this case, similar tooth shapes mean similar gum shapes.
- FIG. 11 is a diagram illustrating multiple regions within the oral cavity according to this embodiment.
- (a) of FIG. 11 is a diagram in which regions with similar shapes of teeth and gums are separated by dashed lines.
- (b) of FIG. 11 is a diagram showing an example of multiple regions set in this embodiment.
- the upper jaw includes three regions: the right upper jaw, which includes the right molars; the front upper jaw, which includes the central incisors, lateral incisors, and canines; and the left upper jaw, which includes the left molars.
- the lower jaw includes three regions: the right lower jaw, which includes the right molars; the front lower jaw, which includes the central incisors, lateral incisors, and canines; and the left lower jaw, which includes the left molars.
- the multiple regions include a first region R1 to a fourth region R4.
- the first region R1 indicates the tongue-side region of the molars of the upper and lower jaws
- the second region R2 indicates the buccal-side region of the molars of the upper and lower jaws
- the third region R3 indicates the tongue-side region of the anterior teeth of the upper and lower jaws
- the fourth region R4 indicates the buccal-side region of the anterior teeth of the upper and lower jaws.
- the number of regions may be two or more, and may be two or three, or five or more.
- FIG. 12 shows an example of an image obtained by photographing multiple regions in this embodiment.
- (a) of FIG. 12 shows an image of a mandibular anterior tooth photographed from the lingual side.
- the image shown in (a) of FIG. 12 is an example of an image photographed in the third region R3 shown in (b) of FIG. 11.
- (b) of FIG. 12 shows an image of a mandibular anterior tooth photographed from the buccal side.
- the image shown in (b) of FIG. 12 is an example of an image photographed in the fourth region R4 shown in (b) of FIG. 11.
- (c) of FIG. 12 shows an image of a mandibular molar photographed from the lingual side.
- the image shown in (c) of FIG. 12 is an example of an image photographed in the first region R1 shown in (b) of FIG. 11.
- (d) of FIG. 12 shows an image of a mandibular molar photographed from the buccal side.
- the image shown in (d) of FIG. 12 is an example of an image photographed in the second region R2 shown in (b) of FIG. 11.
- the region detection unit 151 detects which of the first region R1 to fourth region R4 the corrected image is an image of. It can also be said that the region detection unit 151 detects from which of the first region R1 to fourth region R4 the teeth and periodontal region shown in the corrected image were photographed.
- the method by which the region detection unit 151 detects regions is not particularly limited, and may be, for example, a method using a machine learning model, a method using pattern matching, or any other known method.
- the machine learning model is a learning model that is trained to output, when an image including teeth and periodontal regions is input, which of the first region R1 to fourth region R4 the teeth and periodontal region shown in the image belongs to.
- the shapes of teeth and gums vary depending on at least one of the tooth type (molar, canine, anterior tooth) and imaging direction (cheek side, lingual side). It is also believed that the type of periodontal disease and the degree of gum deformation associated with its progression also vary depending on at least one of the tooth type and imaging direction. Therefore, if all teeth and gums are trained using a single learning model, there is a risk that the unique shapes of each tooth and gum may result in incorrect detection of the degree of progression of periodontal disease.
- the memory unit 58 pre-stores, for each of the multiple regions, multiple learning models that have been trained using images corresponding to that region.
- the multiple learning models correspond to each of the multiple regions, and each is a learning model that has been trained to input image data of the teeth and gums in that region and output information related to periodontal disease in that region.
- for each region, a learning model that corresponds one-to-one to that region is generated.
- Information related to periodontal disease includes the presence or absence of periodontal disease, the progress of periodontal disease, signs of periodontal disease, etc.
- the information related to periodontal disease may also include estimated results for at least one of periodontal pocket depth, BOP, and GI value.
- Each of the multiple learning models is associated with information indicating which of the multiple regions the learning model corresponds to.
- each of the multiple learning models is associated with information indicating one of the first region R1 to fourth region R4.
- that is, there are learning models corresponding to the corrected image based on the captured image of the first region R1, the corrected image based on the captured image of the second region R2, the corrected image based on the captured image of the third region R3, and the corrected image based on the captured image of the fourth region R4.
- four learning models corresponding one-to-one to each of the four regions are created in advance and stored in the memory unit 58.
- the rule selection unit 152 selects a learning model that corresponds to the area detected by the area detection unit 151 from among multiple learning models stored in the memory unit 58.
- the rule selection unit 152 is an example of a tool selection unit.
- the periodontal disease detection unit 55 detects the user's periodontal disease using the learning model selected by the rule selection unit 152.
- the periodontal disease detection unit 55 detects the user's periodontal disease by inputting image data based on the corrected image (e.g., periodontal region image data) into the selected learning model.
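- The region-to-model dispatch performed by the rule selection unit 152 and the periodontal disease detection unit 55 can be sketched as a simple registry. The region keys and model stand-ins are hypothetical; a real system would store trained models in the memory unit 58.

```python
# Four regions, each paired one-to-one with its own learning model.
REGIONS = ("R1_molar_lingual", "R2_molar_buccal",
           "R3_anterior_lingual", "R4_anterior_buccal")

def make_registry(models):
    """Associate each region with its one-to-one learning model."""
    return dict(zip(REGIONS, models))

def select_model(registry, detected_region):
    """Rule selection unit: pick the model for the detected region."""
    return registry[detected_region]

def detect_periodontal_disease(registry, detected_region, image_data):
    """Periodontal disease detection unit: run the selected model on the
    image data based on the corrected image."""
    model = select_model(registry, detected_region)
    return model(image_data)
```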
- Figure 13 is a diagram showing an example of an image used for training a learning model according to this embodiment.
- the area enclosed by the dashed-dotted rectangular frame may be extracted by the image data extraction unit 54 or the like, and the extracted image data may be used as image data for training.
- the dashed-dotted rectangular frame may be formed by the method shown in Figure 7A (a).
- the image shown in Figure 13 (a) is an image of a lower jaw front tooth photographed from the lingual side, and is used for training a learning model to which an image of the third region R3 is input.
- the image shown in Figure 13 (b) is an image of a lower jaw front tooth photographed from the buccal side, and is used for training a learning model to which an image of the fourth region R4 is input.
- the image shown in Figure 13 (c) is an image of a lower jaw back tooth photographed from the lingual side, and is used for training a learning model to which an image of the first region R1 is input.
- the image shown in Figure 13 (d) is an image of a lower jaw back tooth photographed from the buccal side, and is used for training a learning model to which an image of the second region R2 is input.
- the images shown in (a), (b), and (d) of FIG. 13 are not used in the learning model into which an image of the first region R1 is input; the images shown in (a) to (c) of FIG. 13 are not used in the learning model into which an image of the second region R2 is input; the images shown in (b) to (d) of FIG. 13 are not used in the learning model into which an image of the third region R3 is input; and the images shown in (a), (c), and (d) of FIG. 13 are not used in the learning model into which an image of the fourth region R4 is input.
- Each learning model is trained using multiple images showing different stages of periodontal disease progression.
- the training method for each learning model is the same as in embodiment 1, so further explanation will be omitted.
- the processing unit that generates the learning model may be provided by the periodontal disease detection system, or by a device outside the periodontal disease detection system.
- Fig. 14 is a flowchart showing the operation of the periodontal disease detection system (periodontal disease detection method) according to this embodiment.
- Step S101 corresponds to step S12 shown in FIG. 4.
- Step S102 corresponds to steps S14 and S15 shown in Figure 4.
- the region detection unit 151 detects an intraoral region from the image of the periodontal region or the corrected image based on the type of tooth shown in the image and the imaging direction (S103).
- the region detection unit 151 detects which of the first region R1 to fourth region R4 the image corresponds to by using a machine learning model, pattern matching, or the like.
- the rule selection unit 152 selects a learning model that performs periodontal disease detection from among the multiple learning models stored in the memory unit 58, based on the intraoral region detected by the region detection unit 151 (S104).
- the rule selection unit 152 selects, from the multiple learning models, the learning model that corresponds to the region detected by the region detection unit 151 as the learning model that performs periodontal disease detection in the image.
- the learning model corresponding to the region detected by the region detection unit 151 is selected by the rule selection unit 152.
- the periodontal disease detection unit 55 performs periodontal disease detection based on the image of the gingival region and the selected learning model (S105).
- the periodontal disease detection unit 55 inputs the image of the gingival region into the selected learning model, and obtains the detection results for periodontal disease as the output of the learning model.
- Step S105 corresponds to step S16 shown in Figure 4.
- Step S106 corresponds to step S17 shown in Figure 4. Note that information indicating which learning model was used to detect periodontal disease may be included in the detection result and output.
- periodontal disease in the periodontal region of a tooth can be detected based on a learning model trained using periodontal region images with similar tooth shapes, thereby improving the accuracy of periodontal disease detection compared to, for example, detecting periodontal disease using a single learning model. Furthermore, evaluation is performed for each region using a learning model corresponding to that region, thereby improving the accuracy of periodontal disease detection.
- when periodontal disease is detected using a learning model for each region as in this embodiment, the types of classification performed by each learning model (deformation, discoloration) are the same, and the number of classes (e.g., progression levels of periodontal disease) is not very large, so a large network is not required. Therefore, the learning model according to this embodiment can reduce the amount of processing.
- furthermore, the learning efficiency of each learning model is high, so the detection accuracy of the learning model can be effectively improved.
- the periodontal disease detection system according to this modification will be described below with reference to FIGS. 15 to 18B. The following description will focus on differences from the second embodiment, and descriptions of content that is the same as or similar to the second embodiment will be omitted or simplified.
- This modification differs from the periodontal disease detection system according to embodiment 2 in that the multiple periodontal disease detection tools selected by the rule selection unit (an example of a tool selection unit) include at least one or more determination rules.
- the multiple periodontal disease detection tools selected by the rule selection unit may include one or more machine learning models, and may include, for example, one or more determination rules and one or more machine learning models.
- the machine learning model is generated to output information related to periodontal disease from image data.
- the determination rule and the machine learning model are examples of tools.
- periodontal disease detection system of this modified example may be the same as that of the periodontal disease detection system 2 according to embodiment 2, and the following description will use the reference numerals of the periodontal disease detection system 2.
- the rule selection unit 152 selects two periodontal disease detection tools for detecting periodontal disease from among the multiple periodontal disease detection tools stored in the memory unit 58, based on the area within the oral cavity detected by the area detection unit 151 (S204).
- each of the multiple periodontal disease detection tools includes at least one or more judgment rules that take as input an image of the gingiva between adjacent teeth in image data including the relevant region and output information related to periodontal disease in the relevant region.
- the one or more judgment rules are created, for example, from gingival shape data at multiple stages of periodontal disease for each tooth belonging to the multiple regions.
- the multiple periodontal disease detection tools may include multiple judgment rules, or may include one or more machine learning models and one or more judgment rules.
- the judgment rules may be rules using gingival color, as shown in the modified example of embodiment 1, or may be rules using gingival shape.
- the gingival color and gingival shape (for example, the shape of the gingival papilla) are evaluation parameters in the rule base.
- periodontal disease that is difficult to detect by one of the machine learning model and the judgment rule may be detected by the other of the machine learning model and the judgment rule. Since the detection of periodontal disease can complement each other, it is possible to prevent periodontal disease from being overlooked.
- the rule selection unit 152 may select two periodontal disease detection tools using a table that associates regions within the oral cavity with the periodontal disease detection tools to be used, or may select two periodontal disease detection tools based on the history of periodontal disease detection tools used to detect periodontal disease in that region, or may select two periodontal disease detection tools using some other method.
- the rule selection unit 152 may select three or more periodontal disease detection tools in step S204. When three or more periodontal disease detection tools are selected, there is no particular limit to the number of machine learning models and judgment rules selected. The number of periodontal disease detection tools selected by the rule selection unit 152 may be set by the user, for example.
- the periodontal disease detection unit 55 performs periodontal disease detection based on the image and the two selected periodontal disease detection tools (S205).
- when a machine learning model and a judgment rule are selected as the two periodontal disease detection tools,
- the periodontal disease detection unit 55 performs periodontal disease detection using the machine learning model, and performs periodontal disease detection using the judgment rule.
- when two judgment rules are selected as the two periodontal disease detection tools,
- the periodontal disease detection unit 55 performs periodontal disease detection using each of the two judgment rules.
- Figure 16 is a diagram showing the angle θ of the tip of the interdental papilla gingiva in this modified example.
- As periodontal disease develops, the gums between the teeth become inflamed and swollen, causing the shape of the interdental papilla gingiva to change. Therefore, by comparing the angle θ based on the tip of the interdental papilla gingiva as the shape of the interdental papilla gingiva with the angle based on the tip of the interdental papilla gingiva in healthy gums, it is possible to determine whether gingivitis or periodontal disease is present from the angle of the tip.
- Figure 16 shows the angle θ when normal for each region.
- the angle θ when normal can differ for each region. Therefore, one or more judgment rules that associate the angle θ with normal and advanced levels are used in each region.
- Figure 16 only shows the angle θ of one of the interdental papilla gingiva on the left and right sides of the tooth.
- the image data extraction unit 54 may extract from the second RGB image an area including at least a portion of a tooth (e.g., a specific tooth) and at least one of the left and right interdental papilla gingiva of that tooth as image data based on the second RGB image.
- This extraction process may be performed in step S102 or in step S205.
- Figures 17A to 17C are diagrams explaining the method for calculating the angle of the tip of the interdental papilla gingiva in this modified example.
- the periodontal disease detection unit 55 uses image processing to identify the tip (e.g., apex) of the interdental papilla gingiva between teeth (e.g., a specific tooth) from the image.
- the periodontal disease detection unit 55 may, for example, determine the position of the gingiva located highest between the teeth as the tip of the interdental papilla gingiva, or may determine a position having a predetermined shape as the tip of the interdental papilla gingiva.
- Figure 17A shows an example in which the marked portion at the end of the arrow has been identified as the tip of the interdental papilla gingiva.
- The periodontal disease detection unit 55 then draws a line segment downward from the tip of the interdental papilla gingiva, rotates it clockwise and counterclockwise about the tip, and fixes it at the position where it first contacts the tooth. The two fixed line segments form an angle θ at the tip.
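Once the tip and the two tooth-contact points where the rotated segments stop are known, the angle θ between the two segments follows from the standard dot-product formula. A minimal sketch (the coordinates and the function name are illustrative, not from the disclosure):

```python
import math

def tip_angle(tip, left_contact, right_contact):
    """Angle in degrees at `tip` between the segments tip->left_contact
    and tip->right_contact, computed via the dot-product formula."""
    v1 = (left_contact[0] - tip[0], left_contact[1] - tip[1])
    v2 = (right_contact[0] - tip[0], right_contact[1] - tip[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))
```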
- In this case, the gingival shape includes the angle θ, and a determination rule is used that associates the angle θ with normal and advanced levels, or with normal and abnormal.
- Figures 18A and 18B are diagrams showing the radius r of the tip of the interdental papilla gingiva in this modified example.
- When periodontal disease develops, the gums between the teeth swell due to inflammation, causing the tips of the triangles formed by the interdental papilla gingiva to become rounded. Therefore, by comparing the radius of the rounded tip in healthy gums with that in gums affected by periodontal disease, it is possible to determine from the tip radius whether gingivitis or periodontal disease is present.
- Figure 18A shows the radius r when periodontal disease is present, and Figure 18B shows the radius r when it is not.
- The radius r may be, for example, the radius of curvature of the tip of the interdental papilla gingiva. When periodontal disease is present, the radius r tends to be larger.
- In this case, the gingival shape includes the radius r, and a determination rule is used that associates the radius r with normal and advanced levels, or with normal and abnormal.
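One common way to obtain such a radius of curvature is an algebraic least-squares circle fit over contour points sampled near the tip. The sketch below uses the Kåsa fit; this is one possible implementation under that assumption, not necessarily the method used in the disclosure.

```python
import numpy as np

def tip_radius(points):
    """Kåsa least-squares circle fit to contour points near the tip of
    the interdental papilla gingiva; returns the fitted radius r."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Solve x^2 + y^2 = 2*a*x + 2*b*y + c for the center (a, b) and c.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    # Radius follows from r^2 = a^2 + b^2 + c.
    return float(np.sqrt(a * a + b * b + c))
```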
- The periodontal disease detection unit 55 detects the user's periodontal disease based on the detection results of the two periodontal disease detection tools. For example, it may compare the two detection results and adopt the one indicating the more severe periodontal disease symptoms as the detection result for the user's periodontal disease.
- the periodontal disease detection unit 55 detects the user's periodontal disease by taking into consideration both the first detection result, which is the periodontal disease detection result according to the judgment rule, and the second detection result, which is the periodontal disease detection result according to the machine learning model. For example, if periodontal disease is not detected in the first detection result and periodontal disease is detected in the second detection result, the periodontal disease detection unit 55 may prioritize the first detection result and determine that periodontal disease has not been detected as the detection result for the user's periodontal disease.
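The two combination strategies described above (adopting the worse result, or prioritizing the rule-based first result) can be sketched as follows; the severity labels and their ordering are hypothetical placeholders.

```python
# Hypothetical ordered severity labels for detection results.
SEVERITY = {"none": 0, "gingivitis": 1, "periodontitis": 2}

def combine_take_worse(first: str, second: str) -> str:
    """Adopt whichever of the two results indicates worse symptoms."""
    return first if SEVERITY[first] >= SEVERITY[second] else second

def combine_prefer_rule(first: str, second: str) -> str:
    """Prioritize the rule-based (first) result; e.g. if the rule finds
    no disease, report none even when the model detects something,
    since the model may over-detect due to factors such as gum patterns."""
    return first
```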
- A detection result that indicates worse periodontal disease symptoms can equivalently be described as a detection result for a person with more advanced periodontal disease.
- the output unit 57 outputs the detection result of the periodontal disease detection unit 55 (S106). If the periodontal disease detection unit 55 determines that the detection result showing worse periodontal disease symptoms is the detection result of the user's periodontal disease, the output unit 57 notifies the user of the worse detection result. Furthermore, when a machine learning model is used, the symptoms of periodontal disease may be detected as worse than they actually are due to the influence of factors such as the pattern of the gums. Therefore, if the periodontal disease detection unit 55 prioritizes the first detection result, the output unit 57 may also notify the user of the second detection result as additional information.
- an intraoral camera 10 is used primarily for photographing teeth, but the intraoral camera 10 may also be an oral care device equipped with a camera.
- the intraoral camera 10 may also be an oral irrigator equipped with a camera.
- mobile terminals 50 and 50a are given as examples of the second information terminal, but the second information terminal may also be a stationary information terminal.
- a display device capable of communicating with the mobile terminals 50, 50a may also be provided as a separate device from the mobile terminals 50, 50a.
- the machine learning model has one or more calculation parameters that can be adjusted by machine learning.
- the machine learning model may be configured, for example, using a neural network, a regression model, a decision tree model, a support vector machine, or other functional formulas (calculation models).
- the machine learning method is selected appropriately depending on the machine learning model employed, and examples include, but are not limited to, backpropagation.
- the reference color data in the above-mentioned first embodiment and the modified example of the first embodiment is not limited to measured data, as long as it is based on the color of the user's teeth, and may be, for example, the color indicated by a shade guide.
- the periodontal disease detection unit 55 may, for example, determine for each intraoral region whether to adopt the detection results of the machine learning model or the detection results of the judgment rule, or may determine for all intraoral regions whether to adopt the detection results of the machine learning model or the judgment rule.
- the intraoral regions may, for example, include two or more of the first region R1 to the fourth region R4 described above, but are not limited to this and may be set arbitrarily.
- the rule selection unit 152 may select only two or more machine learning models, i.e., may not select a judgment rule, or may select only two or more judgment rules, i.e., may not select a machine learning model.
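The per-region tool selection described above can be pictured as a simple lookup table. The region names reuse R1 to R3 from the text, while the placeholder tools and their assignment to regions are purely illustrative.

```python
# Hypothetical tool-selection table mapping each intraoral region to
# its detection tool(s). Per the text, a region may use a judgment
# rule only, a machine learning model only, or both.
def rule_tool(image_data):
    return "rule-result"    # placeholder for a judgment rule

def model_tool(image_data):
    return "model-result"   # placeholder for a machine learning model

TOOL_TABLE = {
    "R1": [rule_tool],               # judgment rule only
    "R2": [model_tool],              # machine learning model only
    "R3": [rule_tool, model_tool],   # both, combined downstream
}

def select_tools(region: str):
    """Return the detection tool(s) assigned to the given region."""
    return TOOL_TABLE[region]
```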
- the mobile terminal 50 in the above-mentioned first embodiment and the modified example of the first embodiment only needs to include at least an output unit 57, an acquisition unit 51, and a processing unit.
- the processing unit performs predetermined processing, such as storing the detection results in the memory unit 58, displaying the detection results on the display unit 56, and transmitting the detection results to another device.
- The periodontal disease detection method (information processing method) executed by such a mobile terminal 50 is a periodontal disease detection method executed by one or more processors, and includes: irradiating the user's oral cavity with light and outputting a first RGB image capturing an image capture area including a specific tooth and the periodontal region of the specific tooth, including the gums; acquiring a periodontal disease detection result for the user based on the first RGB image; and performing predetermined processing on the acquired detection result.
- The detection result may include the user's periodontal disease detection result detected based on image data derived from a second RGB image in which the gains of at least two of the red, green, and blue color components constituting the natural tooth area in the first RGB image have been adjusted based on reference color data that indicates the color standard of the natural teeth included in the image capture area and is specific to the user.
- the functions of the tooth type identification unit 52, image processing unit 53, image data extraction unit 54, and periodontal disease detection unit 55 are executed by a device external to the mobile device, such as a server device.
- the mobile terminal 50a in the above-mentioned second embodiment and the modified example of the second embodiment only needs to include at least an output unit 57, an acquisition unit 51, and a processing unit.
- the processing unit performs predetermined processing, such as storing the detection results in the memory unit 58, displaying the detection results on the display unit 56, and transmitting the detection results to another device.
- A periodontal disease detection method (information processing method) executed by such a mobile terminal 50a is a periodontal disease detection method executed by one or more processors, and includes: outputting a first image of a specific tooth in the user's oral cavity and the periodontal region of the specific tooth including the gums; obtaining a detection result of the user's periodontal disease based on the first image; and performing predetermined processing on the obtained detection result. The detection result may include the user's periodontal disease detection result detected based on image data derived from a second image, using a periodontal disease detection tool selected from a plurality of periodontal disease detection tools, each of which corresponds to one of a plurality of regions defined by dividing the tooth row and is trained to take image data including that region as input and to output information about periodontal disease in that region. The tool is selected according to which of the plurality of regions is depicted in the second image, which is generated from the first image and includes the periodontal region of the specific tooth.
- The functions of the image processing unit 53, area detection unit 151, rule selection unit 152, and periodontal disease detection unit 55 are executed by a device external to the mobile terminal, such as a server device.
- each processing unit included in the periodontal disease detection systems according to the first and second embodiments is typically realized as an LSI, which is an integrated circuit. These may be individually implemented as single chips, or some or all of them may be integrated into a single chip.
- integrated circuits are not limited to LSIs, but may be realized using dedicated circuits or general-purpose processors.
- FPGAs (Field Programmable Gate Arrays) or reconfigurable processors, which allow the connections and settings of circuit cells within the LSI to be reconfigured, may also be used.
- each component may be configured with dedicated hardware, or may be realized by executing a software program appropriate for each component.
- Each component may also be realized by a program execution unit such as a CPU or processor reading and executing a software program recorded on a recording medium such as a hard disk or semiconductor memory.
- the division of functional blocks in the block diagram is one example; multiple functional blocks may be realized as a single functional block, one functional block may be divided into multiple blocks, or some functions may be moved to other functional blocks. Furthermore, the functions of multiple functional blocks with similar functions may be processed in parallel or time-shared by a single piece of hardware or software.
- the periodontal disease detection systems according to the first and second embodiments may be realized as a single device or may be realized by multiple devices.
- the components of the periodontal disease detection system may be distributed in any manner among the multiple devices.
- at least some of the functions of the periodontal disease detection system may be realized by the intraoral camera 10 (e.g., signal processing unit 30).
- the communication method between the multiple devices is not particularly limited, and may be wireless communication or wired communication. Furthermore, wireless communication and wired communication may be combined between the devices.
- the present disclosure may also be realized as a periodontal disease detection method executed by a periodontal disease detection system.
- the present disclosure may also be realized as an intraoral camera, mobile terminal, or cloud server included in the periodontal disease detection system.
- one aspect of the present disclosure may be a computer program that causes a computer to execute each of the characteristic steps included in the periodontal disease detection method shown in any of Figures 4, 14, and 15.
- the program may be a program to be executed by a computer.
- one aspect of the present disclosure may be a computer-readable non-transitory recording medium on which such a program is recorded.
- such a program may be recorded on a recording medium and distributed or circulated. For example, by installing the distributed program in a device having another processor and having that processor execute the program, it becomes possible to cause that device to perform each of the above processes.
- This disclosure is useful for periodontal disease detection systems that detect periodontal disease.
- 10 Intraoral camera 10a Head unit 10b Handle unit 20 Hardware unit 21 Photography unit 22 Sensor unit 23 Illumination unit 23A First LED 23B Second LED 23C Third LED 23D Fourth LED 24 Operation unit 30 Signal processing unit 31 Camera control unit 32, 53 Image processing unit 33 Control unit 34 Lighting control unit 35 Memory unit 40 Communication unit 50, 50a Portable terminal (second information terminal) 51 Acquisition unit (first acquisition unit, second acquisition unit) 52 Tooth type identification unit 54 Image data extraction unit 55 Periodontal disease detection unit 56 Display unit 57 Output unit 58 Storage unit 151 Area detection unit 152 Rule selection unit (tool selection unit) F1, F2, F3 Rectangular frame D Predetermined distance r Radius R1 First area R2 Second area R3 Third area R4 Fourth area θ Angle
Abstract
Description
This disclosure relates to a periodontal disease detection system, a periodontal disease detection method, and a program.
In recent years, with the widespread use of mobile devices equipped with cameras, systems have been proposed that allow users to take a photo of the area around their oral cavity with the camera on their mobile device before undergoing a dental examination, and then easily make an initial assessment of the condition of the oral cavity from the resulting image data. For example, Patent Document 1 discloses a system that, when a photo of the oral cavity region is taken with the camera on a mobile device, extracts image data of an oral cavity target region to be estimated from the image data of the oral cavity region, and estimates the condition of the oral cavity target region based on the extracted image data.
When detecting periodontal diseases such as periodontitis using images, it is desirable to improve the detection accuracy.
The present disclosure therefore provides a periodontal disease detection system, periodontal disease detection method, and program that can improve the accuracy of periodontal disease detection using images.
A periodontal disease detection system according to one aspect of the present disclosure is a system for detecting periodontal disease in a user based on an image captured inside the oral cavity, and includes: a first acquisition unit that irradiates light into the user's oral cavity to acquire a first RGB image of an image capture area including a specific tooth and the periodontal region of the specific tooth including the gums; a second acquisition unit that acquires reference color data that indicates the color standard of natural teeth included in the image capture area and is user-specific; an image processing unit that generates a second RGB image in which the gains of at least two of the red, green, and blue color components that make up the natural tooth region in the first RGB image are adjusted based on the reference color data; and a periodontal disease detection unit that detects periodontal disease in the user based on image data derived from the second RGB image.
A periodontal disease detection method according to one aspect of the present disclosure is a periodontal disease detection method executed by a periodontal disease detection system that detects periodontal disease in a user based on an image captured inside the oral cavity, the method including: irradiating light into the user's oral cavity to obtain a first RGB image capturing an image capture area including a specific tooth and the periodontal region of the specific tooth including the gums; obtaining reference color data that indicates the color standard of natural teeth included in the image capture area and is specific to the user; generating a second RGB image in which the gains of at least two of the red, green, and blue color components that make up the natural tooth region in the first RGB image are adjusted based on the reference color data; and detecting periodontal disease in the user based on image data derived from the second RGB image.
A periodontal disease detection method according to one aspect of the present disclosure is a periodontal disease detection method executed by one or more processors, wherein the one or more processors irradiate light into the oral cavity of a user, output a first RGB image capturing an image capture area including a specific tooth and the periodontal region of the specific tooth including the gums, obtain a detection result of the user's periodontal disease based on the first RGB image, and perform predetermined processing on the obtained detection result. The obtained detection result includes the detection result of the user's periodontal disease detected based on image data derived from a second RGB image in which the gains of at least two of the red, green, and blue color components constituting the natural tooth region in the first RGB image have been adjusted based on reference color data that indicates the color standard of the natural teeth included in the image capture area and corresponds to the user.
A program according to one aspect of the present disclosure is a program for causing a computer to execute the periodontal disease detection method described above.
According to one aspect of the present disclosure, it is possible to realize a periodontal disease detection system that can improve the accuracy of detecting periodontal disease using images.
(Background to this disclosure)
When highlighting plaque in an image of the inside of a user's (e.g., a subject's) mouth, a process (achromatization process) may be performed to make the teeth a white region by adjusting the white balance gain so that the average values of R, G, and B are equal based on the tooth region. This process allows for effective highlighting of plaque.
When detecting periodontal disease using images, it is desirable to use images that more faithfully reproduce the color of the user's gums. However, color adjustment using the above-mentioned white balance involves uniform correction so that the average values of R, G, and B are equal, making it difficult to achieve the natural color tone of the gums, which can vary from user to user. In other words, white balancing makes it difficult to obtain an image suitable for detecting periodontal disease.
Furthermore, when assessing periodontal disease using image color data, it is necessary to correct color differences under light of different wavelengths or outdoors in order to improve the accuracy of the assessment. For example, in Patent Document 1, color calibration patches are captured in the image along with the inside of the oral cavity, and the color data of the periodontal image data is corrected using the color data of the color calibration patches in the captured image. However, it is difficult to use color calibration patches when photographing the back tooth area, where the cheeks and tongue must be avoided, so there is room for improvement.
As described above, it has traditionally been difficult to generate images that are well suited to accurately detecting periodontal disease, which can reduce the accuracy of periodontal disease detection.
The inventors of the present application therefore conducted extensive research into periodontal disease detection systems that can improve the accuracy of periodontal disease detection using images, and in particular into periodontal disease detection systems that can generate images better suited to accurately detecting periodontal disease, and devised the periodontal disease detection system described below.
A periodontal disease detection system according to a first aspect of the present disclosure is a system for detecting periodontal disease in a user based on an image captured inside the oral cavity, and includes: a first acquisition unit that irradiates light into the user's oral cavity to acquire a first RGB image of an image capture area including a specific tooth and the periodontal region of the specific tooth including the gums; a second acquisition unit that acquires reference color data that indicates the color standard of natural teeth included in the image capture area and is user-specific; an image processing unit that generates a second RGB image in which the gains of at least two of the red, green, and blue color components that make up the natural tooth region in the first RGB image are adjusted based on the reference color data; and a periodontal disease detection unit that detects periodontal disease in the user based on image data derived from the second RGB image.
This allows the color of the teeth and gums to be corrected based on reference color data tailored to the user. The color of the teeth and gums in the corrected image can be closer to the color of the user's actual teeth and gums, making it possible to more accurately detect periodontal disease from the corrected image. In other words, it is possible to generate an image that is more suitable for accurately detecting periodontal disease. This can improve the accuracy of periodontal disease detection using images.
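The gain adjustment described in this aspect can be sketched as scaling each selected color channel so that the mean color of the natural-tooth region matches the user-specific reference color. This is a minimal NumPy illustration under that assumption; the actual adjustment in the disclosure may differ (for example, it may adjust only two of the three channels).

```python
import numpy as np

def adjust_gains(rgb, tooth_mask, ref_rgb, channels=(0, 1, 2)):
    """Return a copy of `rgb` (H x W x 3) in which the selected color
    channels are scaled so that the mean color inside `tooth_mask`
    (a boolean H x W mask of the natural-tooth region) matches the
    user-specific reference color `ref_rgb`."""
    out = rgb.astype(float).copy()
    for c in channels:
        mean_c = out[..., c][tooth_mask].mean()
        if mean_c > 0:
            # Per-channel gain that maps the tooth-region mean to the
            # reference value for that channel.
            out[..., c] *= ref_rgb[c] / mean_c
    return out
```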
Furthermore, for example, the periodontal disease detection system according to the second aspect may be the periodontal disease detection system according to the first aspect, and may further include an image data extraction unit that extracts, from the second RGB image, periodontal region image data including the free gingiva, which is the gingiva that surrounds the cervical region including the interdental papilla and marginal gingiva in a band shape, and a portion of the attached gingiva that is continuous with the free gingiva and extends from the bottom of the gingival sulcus to the mucogingival junction, as the image data based on the second RGB image.
As a result, the image data based on the second RGB image is an image in which part of the tooth region in the second RGB image has been removed. The tooth region is not needed for detecting periodontal disease and can be a factor that reduces detection accuracy. Therefore, detecting periodontal disease using an image from which part of the tooth region has been removed allows periodontal disease to be detected more accurately.
Furthermore, for example, the periodontal disease detection system according to the third aspect may be the periodontal disease detection system according to the second aspect, and the periodontal region image data may further include the left and right interdental papilla gingiva.
Because the left and right interdental papilla gingiva are included, missed detections of periodontal disease can be effectively suppressed.
Furthermore, for example, the periodontal disease detection system according to the fourth aspect may be a periodontal disease detection system according to any one of the first to third aspects, and the reference color data may be set based on the color of the user's natural teeth.
This can improve the accuracy of detecting periodontal disease for the user.
Furthermore, for example, a periodontal disease detection system according to a fifth aspect is a periodontal disease detection system according to any one of the first to fourth aspects, wherein the image processing unit identifies a specific pixel area including pixels whose values based on the pixel values of the second RGB image are within a predetermined range, and generates a third RGB image by adjusting, based on the reference color data, the gains of at least two of the red, green, and blue color components that make up the natural tooth area in the first RGB image excluding the specific pixel area; the image data based on the second RGB image may be image data based on the third RGB image.
As a result, the third RGB image is corrected based on the color components of the tooth portion excluding the specific pixel area, and is therefore an image whose gain adjustment is more accurate. Detecting periodontal disease using image data based on such a third RGB image allows for more accurate detection of periodontal disease.
Furthermore, for example, a periodontal disease detection system according to a sixth aspect may be the periodontal disease detection system according to the fifth aspect, wherein the image processing unit generates an HSV image by converting the color space of the second RGB image into HSV space, and identifies as the specific pixel area a pixel area in which one or more pixels of the HSV image satisfy at least one of the following: saturation within a first predetermined range, hue within a second predetermined range, and brightness within a third predetermined range.
Because an HSV image is used, the specific pixel area can be identified more accurately.
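The HSV-based identification of the specific pixel area in the sixth aspect can be sketched as follows; the threshold ranges are hypothetical placeholders, and real values would be tuned to isolate, for example, plaque pixels.

```python
import colorsys

def specific_pixel_mask(rgb_pixels, h_range=(0.05, 0.20),
                        s_range=(0.0, 0.15), v_range=(0.9, 1.0)):
    """Flag pixels whose hue, saturation, or value falls inside the
    given ranges (ranges are hypothetical illustrations). `rgb_pixels`
    is a list of (r, g, b) tuples with components in [0, 1]."""
    mask = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        in_h = h_range[0] <= h <= h_range[1]
        in_s = s_range[0] <= s <= s_range[1]
        in_v = v_range[0] <= v <= v_range[1]
        # A pixel belongs to the specific pixel area if it satisfies
        # at least one of the three conditions, as in the sixth aspect.
        mask.append(in_h or in_s or in_v)
    return mask
```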
Furthermore, for example, a periodontal disease detection system according to a seventh aspect may be the periodontal disease detection system according to the fifth aspect, wherein the image processing unit identifies as the specific pixel area a pixel area in which one or more pixels among those constituting the natural tooth area in the second RGB image satisfy at least one of the following: red pixel values within a first range, green pixel values within a second range, and blue pixel values within a third range.
This allows the specific pixel area to be identified directly from the second RGB image, reducing the processing load of the periodontal disease detection system.
Furthermore, for example, the periodontal disease detection system according to the eighth aspect may be a periodontal disease detection system according to any one of the third to seventh aspects, and the specific pixel area may include a plaque area.
This reduces the effect of plaque areas on the amount of gain adjustment, allowing for more accurate gain adjustment.
また、例えば、第9態様に係る歯周疾患検出システムは、第2態様~第8態様のいずれかに係る歯周疾患検出システムであって、前記画像データ抽出部は、前記特定の歯牙の歯間乳頭歯肉の先端部から遊離歯肉溝までの歯周領域を、前記歯周領域画像データとして前記第2RGB画像から抽出してもよい。 Furthermore, for example, the periodontal disease detection system according to the ninth aspect may be a periodontal disease detection system according to any one of the second to eighth aspects, and the image data extraction unit may extract the periodontal region from the tip of the interdental papilla gingiva of the specific tooth to the free gingival sulcus from the second RGB image as the periodontal region image data.
これにより、歯周疾患を発症している場合に変化が起こりやすい歯牙の歯間乳頭歯肉の先端部から遊離歯肉溝までの歯周領域を歯周領域画像データが含むので、より正確に歯周疾患の検出を行うことができる。 As a result, the periodontal region image data includes the periodontal region from the tip of the interdental papilla gingiva of the tooth to the free gingival sulcus, which is prone to changes when periodontal disease develops, allowing for more accurate detection of periodontal disease.
また、例えば、第10態様に係る歯周疾患検出システムは、第9態様に係る歯周疾患検出システムであって、前記画像データ抽出部は、前記特定の歯牙の輪郭の左右側部に外接し、かつ、前記特定の歯牙の前記先端部から前記遊離歯肉溝までを含む矩形枠の領域を、前記歯周領域画像データとして前記第2RGB画像から抽出してもよい。 Furthermore, for example, a periodontal disease detection system according to a tenth aspect may be the periodontal disease detection system according to the ninth aspect, wherein the image data extraction unit extracts, from the second RGB image, a rectangular area that circumscribes the left and right sides of the contour of the specific tooth and includes from the tip of the specific tooth to the free gingival sulcus as the periodontal region image data.
This makes it possible to easily and reliably extract periodontal region image data that enables more accurate detection of periodontal disease.
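A minimal sketch of this rectangular-frame extraction, treating the image as a nested list and the frame coordinates (left/right edges of the tooth contour, papilla tip row, sulcus row) as already known; all names and values are illustrative:

```python
def extract_periodontal_rect(image, left_x, right_x, tip_y, sulcus_y):
    """Crop the rectangle that circumscribes the tooth contour on the left
    and right and spans from the papilla-tip row to the sulcus row."""
    return [row[left_x:right_x + 1] for row in image[tip_y:sulcus_y + 1]]

# Toy 6x6 "image" whose pixels encode their own (row, col) position.
image = [[(r, c) for c in range(6)] for r in range(6)]
patch = extract_periodontal_rect(image, left_x=1, right_x=3, tip_y=2, sulcus_y=4)
```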
Furthermore, for example, the periodontal disease detection system according to the eleventh aspect may be the periodontal disease detection system according to any one of the first to tenth aspects, wherein the periodontal disease detection unit may detect the user's periodontal disease by inputting image data based on the second RGB image into a learning model trained to receive image data including a gingival region as input and to output information related to periodontal disease in that image data.
This makes it possible to detect periodontal disease using a learning model.
Furthermore, for example, the periodontal disease detection system according to the twelfth aspect may be the periodontal disease detection system according to the eleventh aspect, and the learning model may output, as the information related to the periodontal disease, an estimation result of at least one of periodontal pocket depth, BOP (Bleeding On Probing), and GI (Gingival Index) value.
This makes it possible to obtain, as information related to periodontal disease, an estimation result of at least one of periodontal pocket depth, BOP, and GI value. Such information is useful for understanding the state of periodontal disease.
Furthermore, for example, the periodontal disease detection system according to the thirteenth aspect may be the periodontal disease detection system according to any one of the first to twelfth aspects, wherein the periodontal disease detection unit may detect the user's periodontal disease in the relevant region by inputting the R, G, and B values of the gum color obtained from image data based on the second RGB image into a periodontal disease detection tool created from the ranges of R, G, and B values in the RGB color space for normal gum color and the ranges of R, G, and B values of gum color for each of multiple stages of progression of periodontal disease.
This allows periodontal disease to be detected effectively using R, G, and B values.
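For illustration, such a detection tool could be as simple as a lookup of per-channel ranges. The numeric ranges, stage names, and function names below are placeholders, not clinical values from the disclosure:

```python
# Hypothetical per-channel (min, max) ranges for normal gum color and for
# two illustrative stages of progression.
NORMAL_RGB = {"r": (180, 255), "g": (90, 150), "b": (90, 150)}
STAGE_RGB = {
    "gingivitis":    {"r": (150, 200), "g": (60, 100), "b": (80, 130)},
    "periodontitis": {"r": (100, 160), "g": (30, 70),  "b": (60, 110)},
}

def in_ranges(rgb, ranges):
    r, g, b = rgb
    return (ranges["r"][0] <= r <= ranges["r"][1]
            and ranges["g"][0] <= g <= ranges["g"][1]
            and ranges["b"][0] <= b <= ranges["b"][1])

def classify_gum_rgb(rgb):
    """Classify a gum-color sample against the range-based detection tool."""
    if in_ranges(rgb, NORMAL_RGB):
        return "normal"
    for stage, ranges in STAGE_RGB.items():
        if in_ranges(rgb, ranges):
            return stage
    return "unknown"
```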
Furthermore, for example, the periodontal disease detection system according to the fourteenth aspect may be the periodontal disease detection system according to any one of the first to twelfth aspects, wherein the periodontal disease detection unit may detect the user's periodontal disease in the relevant region by inputting at least one of the hue information, saturation information, and brightness information of the gum color obtained from image data based on the second RGB image into a periodontal disease detection tool created from at least one of the hue information, saturation information, and brightness information of normal gum color in the HSV color space and at least one of the ranges of hue information, saturation information, and brightness information of gum color for each of multiple stages of progression of periodontal disease.
This makes it possible to detect periodontal disease effectively using at least one of hue information, saturation information, and brightness information.
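The standard library's `colorsys` module provides the RGB-to-HSV conversion directly. The saturation threshold below is a made-up stand-in for the ranges such a detection tool would hold:

```python
import colorsys

def gum_hsv(rgb):
    """8-bit (R, G, B) -> (hue, saturation, value), each in [0, 1]."""
    r, g, b = (v / 255.0 for v in rgb)
    return colorsys.rgb_to_hsv(r, g, b)

def looks_inflamed(rgb, max_healthy_saturation=0.6):
    """Toy rule: a strongly saturated red is treated as a possible sign
    of gingival inflammation. The threshold is illustrative only."""
    h, s, v = gum_hsv(rgb)
    return s > max_healthy_saturation

healthy = looks_inflamed((230, 130, 140))   # moderately saturated pink
inflamed = looks_inflamed((150, 40, 60))    # deep, saturated red
```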
Furthermore, for example, the periodontal disease detection system according to the fifteenth aspect may be the periodontal disease detection system according to any one of the first to twelfth aspects, wherein the periodontal disease detection unit may detect the user's periodontal disease in the relevant region by inputting at least one of the hue information, saturation information, and luminance information of the gum color obtained from image data based on the second RGB image into a periodontal disease detection tool created from at least one of the hue information, saturation information, and luminance information of normal gum color in the HSL color space and at least one of the ranges of hue information, saturation information, and luminance information of gum color for each of multiple stages of progression of periodontal disease.
This makes it possible to detect periodontal disease effectively using at least one of hue information, saturation information, and luminance information.
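The HSL variant can likewise be sketched with `colorsys`; note that `colorsys.rgb_to_hls` returns its components in (hue, lightness, saturation) order, which is reordered here. The lightness threshold is an illustrative assumption:

```python
import colorsys

def gum_hsl(rgb):
    """8-bit (R, G, B) -> (hue, saturation, lightness), each in [0, 1].
    colorsys returns (h, l, s), so the values are reordered."""
    r, g, b = (v / 255.0 for v in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return h, s, l

def looks_darkened(rgb, min_healthy_lightness=0.5):
    """Toy rule: unusually dark gum tissue is flagged for attention."""
    _, _, l = gum_hsl(rgb)
    return l < min_healthy_lightness
```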
Furthermore, for example, the periodontal disease detection system according to the sixteenth aspect may be the periodontal disease detection system according to any one of the first to fifteenth aspects, and may further include a tooth type identification unit that identifies the type of the specific tooth from the first RGB image or the second RGB image.
This makes it possible to obtain the type of the tooth for which periodontal disease detection was performed. For example, by outputting information indicating the tooth type together with the detection result, information useful for confirming the periodontal disease can be provided.
Furthermore, for example, the periodontal disease detection system according to the seventeenth aspect may be the periodontal disease detection system according to any one of the first to sixteenth aspects, further including an output unit that outputs information indicating the periodontal disease detected by the periodontal disease detection unit to a first information terminal of a dentist other than the user, and the output unit may further output, to a second information terminal of the user, examination necessity information regarding whether the user needs an examination, the information having been input to the first information terminal and acquired via the first acquisition unit.
This allows a dentist to grasp the periodontal disease status of a remote user (that is, a user the dentist is not actually examining), thereby supporting the treatment of periodontal disease.
Furthermore, for example, the periodontal disease detection system according to the eighteenth aspect may be the periodontal disease detection system according to any one of the first to seventeenth aspects, and may further include an output unit that outputs information indicating the periodontal disease detected by the periodontal disease detection unit to the user's second information terminal.
This allows the user to be notified of the status of his or her periodontal disease.
Furthermore, for example, the periodontal disease detection system according to the nineteenth aspect may be the periodontal disease detection system according to any one of the first to eighteenth aspects, wherein the second acquisition unit acquires the reference color data before the first RGB image is acquired, and the system may further include a storage unit that stores the reference color data.
This allows the reference color data to be stored in the storage unit. Acquiring the user's reference color data in advance and storing it in the storage unit improves the convenience of the periodontal disease detection system.
Furthermore, for example, the periodontal disease detection system according to the twentieth aspect may be the periodontal disease detection system according to any one of the first to nineteenth aspects, wherein the reference color data is color data of the natural tooth obtained when the natural tooth is irradiated with a first light, and the first RGB image may be an image obtained when the imaging area is irradiated with a second light different from the first light.
This makes it possible to correct the first RGB image, captured while the second light is irradiated, into a second RGB image close to the image that would be obtained if the first light were irradiated.
Furthermore, for example, the periodontal disease detection system according to the 21st aspect may be the periodontal disease detection system according to any one of the first to 20th aspects, wherein the at least two color components include at least two of a first red pixel average value of the red pixel values of the pixels constituting the natural tooth region in the first RGB image, a first green pixel average value of the green pixel values of those pixels, and a first blue pixel average value of the blue pixel values of those pixels, and the image processing unit may generate the second RGB image by adjusting the gains so that the difference between the at least two color components and the reference color data falls within a predetermined range.
This allows the gains to be adjusted to generate a second RGB image corrected using the reference color data corresponding to the user.
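A sketch of this gain adjustment, under the assumption that the reference color data is simply a per-channel (R, G, B) target for natural-tooth pixels; all pixel values and function names are illustrative:

```python
def channel_means(pixels):
    """Per-channel mean of a list of (R, G, B) pixels."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def white_balance_gains(tooth_pixels, reference_rgb):
    """Gain per channel so the tooth-region average matches the reference."""
    means = channel_means(tooth_pixels)
    return tuple(ref / m for ref, m in zip(reference_rgb, means))

def apply_gains(image_pixels, gains):
    """Scale every pixel by the per-channel gains, clipping to 8 bits."""
    return [tuple(min(255, round(v * g)) for v, g in zip(p, gains))
            for p in image_pixels]

# Blue-tinted first RGB image: the tooth region reads bluish under 405 nm light.
tooth_region = [(150, 160, 220), (170, 180, 240)]
reference = (240, 230, 210)          # user-specific natural-tooth reference

gains = white_balance_gains(tooth_region, reference)
second_image = apply_gains(tooth_region, gains)
```

After the adjustment, the per-channel average of the tooth region matches the reference color data, which is the condition the 21st aspect describes.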
A periodontal disease detection method according to a 22nd aspect of the present disclosure is a periodontal disease detection method executed by a periodontal disease detection system that detects a user's periodontal disease based on an image captured inside the oral cavity, the method including: irradiating light into the user's oral cavity to acquire a first RGB image of an imaging area including a specific tooth and the periodontal region of the specific tooth including the gums; acquiring reference color data that indicates the color standard of natural teeth included in the imaging area and that corresponds to the user; generating a second RGB image in which the gains of at least two of the red, green, and blue color components constituting the natural tooth region in the first RGB image are adjusted based on the reference color data; and detecting the user's periodontal disease based on image data based on the second RGB image.
Furthermore, a periodontal disease detection method according to a 23rd aspect of the present disclosure is a periodontal disease detection method executed by one or more processors, wherein the one or more processors irradiate light into the oral cavity of a user, output a first RGB image capturing an imaging area including a specific tooth and the periodontal region of the specific tooth including the gums, obtain a detection result of the user's periodontal disease based on the first RGB image, and perform predetermined processing on the obtained detection result. The obtained detection result includes a detection result of the user's periodontal disease detected based on image data based on a second RGB image in which the gains of at least two of the red, green, and blue color components constituting the natural tooth region in the first RGB image have been adjusted based on reference color data that indicates the color standard of natural teeth included in the imaging area and that corresponds to the user.
This achieves the same effects as the periodontal disease detection system described above.
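The steps of the method above can be sketched end to end. The pixel indices standing in for region masks, the reference color, and the toy redness rule standing in for the detection step are all hypothetical:

```python
def detect(first_image, tooth_idx, gum_idx, reference_rgb, classify):
    """first_image: flat list of (R, G, B); tooth_idx / gum_idx: pixel indices."""
    # 1. Per-channel mean over the natural-tooth region of the first RGB image.
    tooth = [first_image[i] for i in tooth_idx]
    mean = [sum(p[c] for p in tooth) / len(tooth) for c in range(3)]
    # 2. Gains that pull the tooth-region average toward the reference color data.
    gains = [reference_rgb[c] / mean[c] for c in range(3)]
    # 3. Second RGB image with the adjusted gains, clipped to 8 bits.
    second = [tuple(min(255, round(v * g)) for v, g in zip(p, gains))
              for p in first_image]
    # 4. Detect periodontal disease from the gum pixels of the second image.
    return classify([second[i] for i in gum_idx])

# Toy data: a bluish capture and a simple redness rule as the "detector".
image = [(150, 160, 220), (170, 180, 240), (140, 60, 90)]
result = detect(image, tooth_idx=[0, 1], gum_idx=[2],
                reference_rgb=(240, 230, 210),
                classify=lambda gums: "suspect" if gums[0][0] > 180 else "normal")
```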
Furthermore, a program according to a 24th aspect of the present disclosure is a program for causing a computer to execute the periodontal disease detection method according to the 22nd or 23rd aspect.
This achieves the same effects as the periodontal disease detection system described above.
Note that these general or specific aspects may be realized as a system, a method, an integrated circuit, a computer program, or a non-transitory recording medium such as a computer-readable CD-ROM, or as any combination of a system, a method, an integrated circuit, a computer program, and a recording medium. The program may be stored in advance on the recording medium, or may be supplied to the recording medium via a wide-area communication network such as the Internet.
Each figure is a schematic diagram and is not necessarily drawn exactly; therefore, for example, the scales in the figures do not necessarily match. In each figure, substantially identical components are given the same reference numerals, and duplicate explanations are omitted or simplified.
In this specification, terms indicating relationships between elements, such as "parallel," as well as numerical values and numerical ranges, are not expressions conveying only their strict meanings; they also encompass substantially equivalent ranges, for example differences of about a few percent (or about 10%).
In this specification, ordinal terms such as "first" and "second" do not indicate the number or order of components unless otherwise specified; they are used to distinguish components of the same type and to avoid confusion.
(Embodiment 1)
Hereinafter, a periodontal disease detection system according to this embodiment will be described with reference to FIGS. 1 to 7B.
[1-1. Configuration of the periodontal disease detection system]
First, the configuration of the periodontal disease detection system according to this embodiment will be described with reference to FIGS. 1 to 3. FIG. 1 is a perspective view of an intraoral camera 10 in the periodontal disease detection system according to this embodiment.
As shown in FIG. 1, the intraoral camera 10 has a toothbrush-like housing that can be handled with one hand. The housing includes a head portion 10a placed in the user's oral cavity during imaging, a handle portion 10b gripped by the user, and a neck portion 10c connecting the head portion 10a and the handle portion 10b.
An imaging unit 21 images the surfaces of the dentition and the periodontal region in the oral cavity while they are illuminated with light including the blue wavelength range. The surfaces of the dentition include at least one of the buccal (outer) side surface and the lingual (inner) side surface of the dentition. The dentition includes, for example, one or more teeth. The blue light is an example of the second light.
The imaging unit 21 is incorporated in the head portion 10a and the neck portion 10c. The imaging unit 21 has an image sensor (not shown) and a lens (not shown) arranged on its optical axis LA.
The image sensor is an imaging device such as a CMOS (Complementary Metal Oxide Semiconductor) sensor or a CCD (Charge Coupled Device) element, and an image of the teeth is formed on it by the lens. The image sensor outputs a signal (image data) corresponding to the formed image to the outside. An image captured by the image sensor is an example of the first RGB image or the first image.
Since the captured image is obtained by irradiating the dentition with blue light, it has a blue color cast. The captured image is, for example, an image of a side surface of the dentition and the periodontal region. The side surface of the dentition may be the lingual side surface or the buccal side surface, and may be on the maxillary side or the mandibular side. The captured image is, for example, an image of an imaging area including one or more teeth (an example of the specific tooth) and the periodontal region of those teeth. The periodontal region is a region including the gingiva of the tooth. The gingiva is the periodontal tissue surrounding the root of the tooth and is also called the gum. Note that the specific tooth may be any tooth; for example, it may be a tooth in a region where the user or a medical professional such as a dentist wants to detect periodontal disease, or it may be an arbitrary tooth.
The imaging unit 21 may further include an optical filter that blocks light of the color emitted from the illumination unit (illumination device) while transmitting the fluorescence that plaque emits in response to that light. In this embodiment, the imaging unit 21 may include, as the optical filter, a blue light cut filter that removes the blue-wavelength component of the light incident on the image sensor. When light including the blue wavelength range is irradiated onto the teeth to detect plaque, strengthening that light to intensify the excited fluorescence of the plaque makes the blue pixel values dominant over the red and green pixel values, so the entire captured image takes on a blue tint. To counteract this, the blue light cut filter removes part of the light in the blue wavelength range before it enters the image sensor. Note that the imaging unit 21 does not have to include a blue light cut filter.
The intraoral camera 10 is also equipped with a plurality of first to fourth LEDs 23A to 23D as an illumination unit that irradiates light onto the tooth to be imaged during imaging. The first to fourth LEDs 23A to 23D emit light of a color that causes plaque to fluoresce when irradiated (for example, light of a single color). The first to fourth LEDs 23A to 23D are, for example, blue LEDs that emit blue light including a wavelength peaking at 405 nm (an example of the predetermined wavelength). Note that the first to fourth LEDs 23A to 23D are not limited to blue LEDs and may be any light source that emits light including the blue wavelength range.
The first to fourth LEDs 23A to 23D may also be configured to emit light for capturing reference color data that serves as a reference for the color of the user's teeth. The light for capturing the reference color data includes light of a color different from blue light, for example white light. The white light is an example of the first light.
FIG. 2 is a schematic configuration diagram of the periodontal disease detection system according to this embodiment. Broadly, the periodontal disease detection system according to this embodiment is an information processing system capable of detecting periodontal disease with greater accuracy. Specifically, it is configured so that the imaging unit 21 captures the fluorescence emitted by plaque in response to light from an illumination unit 23, an image better suited to detecting periodontal disease is obtained from the one or more captured images, and periodontal disease is detected based on the obtained image.
As shown in FIG. 2, the periodontal disease detection system includes the intraoral camera 10 and a mobile terminal 50.
The intraoral camera 10 includes a hardware unit 20, a signal processing unit 30, and a communication unit 40.
The hardware unit 20 comprises the physical elements of the intraoral camera 10 and includes the imaging unit 21, a sensor unit 22, the illumination unit 23, and an operation unit 24.
The imaging unit 21 generates image data by imaging the side surfaces of the dentition and the periodontal region in the oral cavity while they are illuminated with light of a predetermined wavelength that excites the fluorescent substances contained in plaque. The imaging unit 21 receives control signals from a camera control unit 31, performs operations such as imaging in accordance with the received control signals, and outputs the video or still image data obtained by imaging to an image processing unit 32. The imaging unit 21 has the image sensor, optical filter, and lens described above. The image data is generated, for example, based on light that has passed through the optical filter. The image data is typically an image showing a plurality of teeth, but it suffices that at least one tooth is shown. Note that plaque may adhere to the side surfaces of the dentition.
The imaging unit 21 may also perform imaging for acquiring the reference color data. For example, the imaging unit 21 may generate image data for acquiring the reference color data by imaging the side surfaces of the dentition and the periodontal region in the oral cavity while they are illuminated with reference light such as white light. The reference light may be light from the illumination unit 23 or light from a light source external to the intraoral camera 10 (external light).
The sensor unit 22 detects external light entering the imaging area of the captured image. For example, the sensor unit 22 detects whether external light is entering the oral cavity. The sensor unit 22 is arranged, for example, near the imaging unit 21, and may be arranged in the head portion 10a of the intraoral camera 10, like the imaging unit 21. In other words, the sensor unit 22 is located in the user's oral cavity while the imaging unit 21 is imaging. When an image for acquiring the reference color data is captured, the sensor unit 22 may also detect whether white light within a predetermined chromaticity range or of a predetermined color temperature is being irradiated into the oral cavity.
The illumination unit 23 irradiates light onto the area, among the plurality of areas in the oral cavity, that the imaging unit 21 images. As is known from the quantitative light-induced fluorescence (QLF) method, bacteria in dental plaque fluoresce a reddish-pink color (excited fluorescence) when irradiated with blue light; in this embodiment, the illumination unit 23 therefore irradiates blue light onto the area imaged by the imaging unit 21.
The illumination unit 23 has the plurality of first to fourth LEDs 23A to 23D described above. The first to fourth LEDs 23A to 23D irradiate the imaging area with light from mutually different directions, for example. This suppresses the occurrence of shadows in the imaging area.
Each of the first to fourth LEDs 23A to 23D is configured so that at least its dimming can be controlled, and may be configured so that both its dimming and its color can be controlled. The first to fourth LEDs 23A to 23D are arranged so as to surround the imaging unit 21.
The irradiation intensity (emission intensity) of the illumination unit 23 is controlled according to the imaging area. The irradiation intensities of the first to fourth LEDs 23A to 23D may be controlled uniformly or may be controlled to differ from one another. The number of LEDs in the illumination unit 23 is not particularly limited and may be one, or five or more. Furthermore, the illumination unit 23 is not limited to having LEDs as light sources and may have other light sources.
The illumination unit 23 may also be configured to emit reference light for acquiring the reference color data of the user's teeth. The reference light is emitted at a different timing from the blue light.
The operation unit 24 receives operations from the user. The operation unit 24 is configured with, for example, push buttons, but may also be configured to receive operations by voice or the like. The operation unit 24 receives from the user, for example, an operation selecting whether to perform imaging for periodontal disease detection or imaging for reference color data acquisition.
The hardware unit 20 may further include a battery (for example, a secondary battery) that supplies power to each component of the intraoral camera 10, a coil for wireless charging by an external charger connected to a commercial power source, and an actuator required for at least one of composition adjustment and focus adjustment.
The signal processing unit 30 has functional components, implemented by a CPU (Central Processing Unit) or MPU (Micro Processor Unit), that execute the various processes described later, and a memory unit 35 such as a ROM (Read Only Memory) and RAM (Random Access Memory) that stores programs for causing the functional components to execute those processes. Specifically, the signal processing unit 30 has the camera control unit 31, the image processing unit 32, a control unit 33, a lighting control unit 34, and the memory unit 35.
The camera control unit 31 is mounted, for example, in the handle portion 10b of the intraoral camera 10 and controls the imaging unit 21. The camera control unit 31 controls at least one of the aperture and the shutter speed of the imaging unit 21 in response to, for example, a control signal from the image processing unit 32.
The image processing unit 32 is mounted, for example, in the handle portion 10b of the intraoral camera 10; it acquires the image captured by the imaging unit 21, performs image processing on the acquired image, and outputs the processed image to the camera control unit 31 and the control unit 33. The image processing unit 32 may also output the processed image to the memory unit 35 and store it there.
画像処理部32は、例えば、回路で構成され、例えば撮影画像に対してノイズ除去、輪郭強調処理などの画像処理を実行する。なお、ノイズ除去及び輪郭強調処理などは、携帯端末50により実行されてもよい。 The image processing unit 32 is composed of, for example, a circuit, and performs image processing such as noise removal and edge enhancement on the captured image. Note that noise removal and edge enhancement processing may also be performed by the mobile terminal 50.
なお、画像処理部32から出力された撮影画像(画像処理後の撮影画像)は、通信部40を介して携帯端末50に送信され、送信された撮影画像に基づく画像が携帯端末50の表示部56に表示されてもよい。これにより、撮影画像に基づく画像をユーザに提示することができる。 The captured image output from the image processing unit 32 (the captured image after image processing) may be transmitted to the mobile terminal 50 via the communication unit 40, and an image based on the transmitted captured image may be displayed on the display unit 56 of the mobile terminal 50. This allows an image based on the captured image to be presented to the user.
制御部33は、信号処理部30を制御する制御装置である。制御部33は、例えば、センサ部22による外光等の検出結果に基づいて、信号処理部30の各構成要素を制御する。 The control unit 33 is a control device that controls the signal processing unit 30. The control unit 33 controls each component of the signal processing unit 30 based on, for example, the detection results of external light, etc. by the sensor unit 22.
照明制御部34は、例えば、口腔内カメラ10のハンドル部10bに搭載され、第1~第4のLED23A~23Dの点灯及び消灯を制御する。照明制御部34は、例えば回路で構成される。例えば、ユーザが携帯端末50の表示部56に対して口腔内カメラ10を起動させる操作を実行すると、携帯端末50から対応する信号が通信部40を介して信号処理部30に送信される。信号処理部30の照明制御部34は、受信した信号に基づいて、第1~第4のLED23A~23Dを点灯させる。照明制御部34は、例えば、歯周疾患検出用の撮影を行うことを示す操作を操作部24が受け付けると、照明部23に青色光を照射させ、基準色データ取得用の撮影を行うことを示す操作を操作部24が受け付けると、照明部23に白色光を照射させてもよい。 The lighting control unit 34 is mounted, for example, on the handle portion 10b of the intraoral camera 10 and controls the turning on and off of the first to fourth LEDs 23A to 23D. The lighting control unit 34 is composed of, for example, a circuit. For example, when a user operates the display unit 56 of the mobile terminal 50 to start the intraoral camera 10, a corresponding signal is sent from the mobile terminal 50 to the signal processing unit 30 via the communication unit 40. The lighting control unit 34 of the signal processing unit 30 turns on the first to fourth LEDs 23A to 23D based on the received signal. For example, the lighting control unit 34 may cause the lighting unit 23 to emit blue light when the operation unit 24 receives an operation indicating that imaging for periodontal disease detection will be performed, or cause the lighting unit 23 to emit white light when the operation unit 24 receives an operation indicating that imaging for reference color data acquisition will be performed.
メモリ部35は、上記のプログラム以外に、撮影部21によって撮影された撮影画像などを記憶する。メモリ部35は、例えば、ROM、RAMなどの半導体メモリにより実現されるがこれに限定されない。 In addition to the above programs, the memory unit 35 also stores images captured by the imaging unit 21. The memory unit 35 is realized by, for example, semiconductor memory such as ROM or RAM, but is not limited to this.
通信部40は、携帯端末50と無線通信を行うための無線通信モジュールである。通信部40は、例えば、口腔内カメラ10のハンドル部10bに搭載され、信号処理部30からの制御信号に基づいて、携帯端末50と無線通信を行う。通信部40は、例えばWiFi(登録商標)、Bluetooth(登録商標)などの既存の通信規格に準拠した無線通信を携帯端末50との間で実行する。通信部40を介して、口腔内カメラ10から携帯端末50に撮影画像が送信され、且つ、携帯端末50から口腔内カメラ10に操作信号が送信される。 The communication unit 40 is a wireless communication module for wireless communication with the mobile terminal 50. The communication unit 40 is mounted, for example, on the handle portion 10b of the intraoral camera 10, and communicates wirelessly with the mobile terminal 50 based on control signals from the signal processing unit 30. The communication unit 40 performs wireless communication with the mobile terminal 50 in accordance with existing communication standards such as Wi-Fi (registered trademark) and Bluetooth (registered trademark). Captured images are sent from the intraoral camera 10 to the mobile terminal 50, and operation signals are sent from the mobile terminal 50 to the intraoral camera 10, via the communication unit 40.
携帯端末50は、例えば、青色光の波長域を含む光を歯牙に照射することで蛍光反応している歯列の面、及び、歯周領域を撮影した撮影画像を用いて、歯周疾患を検出するためのより適した画像の取得、及び、歯周疾患の検出を行うための処理を実行する。携帯端末50は、歯周疾患検出システムのユーザインタフェースとして機能する。携帯端末50は、第2情報端末の一例である。 Using captured images of the dentition surfaces and the periodontal region fluorescing under light that includes the blue wavelength range, the mobile terminal 50, for example, performs processing for acquiring images better suited to detecting periodontal disease and for detecting periodontal disease. The mobile terminal 50 functions as the user interface of the periodontal disease detection system. The mobile terminal 50 is an example of a second information terminal.
図3は、本実施の形態に係る携帯端末50の機能構成を示すブロック図である。 Figure 3 is a block diagram showing the functional configuration of the mobile terminal 50 according to this embodiment.
図3に示すように、携帯端末50は、取得部51と、歯牙種類識別部52と、画像処理部53と、画像データ抽出部54と、歯周疾患検出部55と、表示部56と、出力部57と、記憶部58とを備える。携帯端末50は、プロセッサ及びメモリなどを備える。メモリは、ROM及びRAMなどであり、プロセッサにより実行されるプログラムを記憶することができる。取得部51と、歯牙種類識別部52と、画像処理部53と、画像データ抽出部54と、歯周疾患検出部55と、表示部56と、出力部57とは、メモリに格納されたプログラムを実行するプロセッサなどによって実現される。携帯端末50は、例えば、無線通信可能なスマートフォン又はタブレット端末等により実現されてもよい。 As shown in FIG. 3, the mobile terminal 50 includes an acquisition unit 51, a tooth type identification unit 52, an image processing unit 53, an image data extraction unit 54, a periodontal disease detection unit 55, a display unit 56, an output unit 57, and a storage unit 58. The mobile terminal 50 also includes a processor and memory. The memory may be a ROM or RAM, and can store programs executed by the processor. The acquisition unit 51, the tooth type identification unit 52, the image processing unit 53, the image data extraction unit 54, the periodontal disease detection unit 55, the display unit 56, and the output unit 57 are realized by a processor that executes programs stored in the memory. The mobile terminal 50 may be realized, for example, by a smartphone or tablet terminal capable of wireless communication.
取得部51は、口腔内カメラ10と無線通信を行うための無線通信モジュールである。取得部51は、口腔内カメラ10から撮影画像を取得する。具体的には、取得部51は、撮影部21で生成された1以上の歯牙及び歯周領域を含む撮影画像を取得する。撮影画像は、口腔内カメラ10が青色光の波長域を含む光を歯牙に照射することで蛍光反応している歯牙を撮影することで得られた画像である。このように、取得部51は、撮影画像を取得する第1取得部として機能する。 The acquisition unit 51 is a wireless communication module for wireless communication with the intraoral camera 10. The acquisition unit 51 acquires captured images from the intraoral camera 10. Specifically, the acquisition unit 51 acquires captured images including one or more teeth and periodontal regions generated by the imaging unit 21. The captured images are images obtained by the intraoral camera 10 photographing teeth that are undergoing a fluorescent reaction by irradiating the teeth with light including the wavelength range of blue light. In this way, the acquisition unit 51 functions as a first acquisition unit that acquires captured images.
また、取得部51は、撮影領域に含まれる天然歯牙の色の基準を示す基準色データであって、ユーザに応じた基準色データを取得する。取得部51は、例えば、基準色データとして、口腔内に白色光を照射して得られた画像(カラー画像)を口腔内カメラ10から取得してもよいし、天然歯牙の基準となる色情報を取得してもよい。色情報は、例えば、口腔内に白色光を照射して得られた画像内の天然歯牙の色度等である。天然歯牙の色情報は、天然歯牙の任意の1か所の色情報であってもよいし、所定の天然歯牙の色情報であってもよいし、天然歯牙の複数の箇所(例えば、複数の天然歯牙)の色情報の統計値であってもよい。例えば、色度を例に説明すると、色情報の統計値は、色度の平均値であるが、最大値、最小値、最頻値、中央値等であってもよい。このように、取得部51は、基準色データを取得する第2取得部として機能する。 The acquisition unit 51 also acquires reference color data that indicates the reference color of the natural teeth included in the imaging area and is user-specific. For example, the acquisition unit 51 may acquire, as reference color data, an image (color image) obtained by illuminating the oral cavity with white light from the intraoral camera 10, or may acquire reference color information for the natural teeth. The color information may be, for example, the chromaticity of the natural teeth in the image obtained by illuminating the oral cavity with white light. The color information for the natural teeth may be color information for any one location on the natural teeth, color information for a specific natural tooth, or a statistical value of color information for multiple locations on the natural teeth (e.g., multiple natural teeth). For example, taking chromaticity as an example, the statistical value of the color information is the average value of the chromaticity, but it may also be the maximum value, minimum value, mode, median, etc. In this way, the acquisition unit 51 functions as a second acquisition unit that acquires reference color data.
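As a rough illustration of how such user-specific reference color data might be derived from a white-light image, the sketch below computes the mean chromaticity of the natural-tooth pixels. The function name, the boolean tooth mask input, and the choice of chromaticity as the color information are assumptions made for this example, not details taken from the present disclosure.

```python
import numpy as np

def reference_color_data(white_light_image, tooth_mask):
    """Compute user-specific reference color data as the mean
    chromaticity (r, g) of the natural-tooth pixels.

    white_light_image: H x W x 3 float RGB image taken under white light.
    tooth_mask: H x W boolean mask of natural-tooth pixels (hypothetical input).
    """
    pixels = white_light_image[tooth_mask].astype(np.float64)  # N x 3
    mean_rgb = pixels.mean(axis=0)
    total = mean_rgb.sum()
    # Chromaticity removes overall brightness, leaving only the color balance.
    return mean_rgb[0] / total, mean_rgb[1] / total

# Example: a uniform yellowish "tooth" region in an otherwise black image.
img = np.zeros((4, 4, 3))
img[1:3, 1:3] = [0.8, 0.7, 0.5]
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
r, g = reference_color_data(img, mask)
```

The statistical value here is the mean, but as the text notes, a maximum, minimum, mode, or median over the tooth pixels could be substituted.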
また、取得部51は、ユーザとは異なる人(例えば、歯科医師等の医療従事者)が所持する第1情報端末からの情報(例えば、後述する受診要否情報)を取得してもよい。第1情報端末は、例えば、携帯端末等の携帯型の端末であってもよいし、PC等の据え置き型の端末であってもよい。また、第1情報端末は、携帯端末50とは異なる情報端末である。 Furthermore, the acquisition unit 51 may acquire information (e.g., medical examination necessity information, described below) from a first information terminal carried by a person other than the user (e.g., a medical professional such as a dentist). The first information terminal may be, for example, a portable terminal such as a mobile terminal, or a stationary terminal such as a PC. Furthermore, the first information terminal is an information terminal different from the portable terminal 50.
なお、取得部51は、口腔内カメラ10と有線通信を行う有線通信モジュールを含んでいてもよい。 The acquisition unit 51 may also include a wired communication module that performs wired communication with the intraoral camera 10.
歯牙種類識別部52は、撮影画像(つまり、第1画像)から、撮影画像に含まれる歯牙(例えば、特定の歯牙)の種類を識別する。歯牙の種類を識別するとは、当該歯牙が切歯、犬歯、臼歯のいずれであるかを識別することであってもよいし、中切歯、側切歯、犬歯、第一小臼歯・第二小臼歯・第一大臼歯・第二大臼歯・第三大臼歯(親知らず)のいずれであるかを識別することであってもよい。また、歯牙種類識別部52は、歯牙が口腔内のどの領域(上顎、下顎、左右)に位置するかを識別してもよい。なお、歯牙種類識別部52が歯牙の種類を識別する方法は特に限定されず、例えば、機械学習モデルを用いる方法であってもよいし、パターンマッチングを用いる方法であってもよいし、その他の公知のいかなる方法であってもよい。機械学習モデルは、歯牙を含む画像が入力されると、当該画像に映る歯牙の種類を出力するように学習された学習モデルである。 The tooth type identification unit 52 identifies the type of tooth (e.g., a specific tooth) contained in the captured image (i.e., the first image) from the captured image. Identifying the type of tooth may mean identifying whether the tooth is an incisor, canine, or molar, or whether the tooth is a central incisor, lateral incisor, canine, first premolar, second premolar, first molar, second molar, or third molar (wisdom tooth). The tooth type identification unit 52 may also identify the region of the oral cavity (upper jaw, lower jaw, left or right) in which the tooth is located. Note that the method by which the tooth type identification unit 52 identifies the type of tooth is not particularly limited; for example, it may use a machine learning model, pattern matching, or any other known method. The machine learning model is a learning model trained to output, when an image containing teeth is input, the type of each tooth appearing in that image.
機械学習モデル(学習モデル)は、歯周疾患検出ツールの一例である。つまり、歯周疾患検出ツールは、それぞれが領域を含む画像データを入力とし当該領域における歯周疾患に関する情報を出力するように学習された複数の学習モデルを含んでいてもよい。歯周疾患検出ツールは、入力情報(ここでは、画像)から歯周疾患の検出結果を推定するためのニューラルネットワークなどの機械学習モデルで表されてもよい。 A machine learning model (learning model) is an example of a periodontal disease detection tool. In other words, a periodontal disease detection tool may include multiple learning models, each trained to receive image data containing an area as input and output information related to periodontal disease in that area. A periodontal disease detection tool may be represented by a machine learning model such as a neural network for estimating periodontal disease detection results from input information (here, an image).
なお、歯牙種類識別部52は、撮影画像に替えて、後述する補正画像を用いて、当該補正画像に含まれる歯牙の種類を識別してもよい。 In addition, the tooth type identification unit 52 may use a corrected image (described below) instead of a captured image to identify the type of tooth contained in the corrected image.
画像処理部53は、撮影画像内の天然歯牙の領域を構成する赤色、緑色、青色の各色成分のうち少なくとも2つの色成分のゲインを基準色データに基づいて調整した補正画像を生成する。本実施の形態では、撮影画像が青みを帯びた画像であるので、画像処理部53は、ユーザの天然歯牙の基準色データに基づいて、撮影画像内の天然歯牙の色味を実際のユーザの天然歯牙の色味に補正する処理を実行する。このように、基準色データは、例えば、ユーザの天然歯牙の色に基づいて設定されている。 The image processing unit 53 generates a corrected image in which the gain of at least two of the red, green, and blue color components that make up the natural tooth area in the captured image is adjusted based on the reference color data. In this embodiment, since the captured image is a bluish image, the image processing unit 53 performs processing to correct the color of the natural teeth in the captured image to the color of the actual user's natural teeth based on the reference color data of the user's natural teeth. In this way, the reference color data is set based on, for example, the color of the user's natural teeth.
画像処理部53は、単に歯牙を無彩色する処理(いわゆる、ホワイトバランス処理)を行うわけではなく、記憶部58に記憶されている基準色データを用いて、撮影画像の天然歯牙の色を基準色データに補正するためのゲインの調整量を決定し、撮影画像の全体(つまり、歯牙及び歯周領域を含む画像全体)を決定したゲインの調整量を用いて、一律に補正する。ここでの一律とは、歯牙と歯周領域とを同一のゲインの調整量を用いて補正することを意味する。補正画像は、第2RGB画像又は第2画像の一例である。 The image processing unit 53 does not simply perform a process to make the teeth achromatic (so-called white balance processing), but rather uses reference color data stored in the memory unit 58 to determine the amount of gain adjustment required to correct the color of the natural teeth in the captured image to the reference color data, and uniformly corrects the entire captured image (i.e., the entire image including the teeth and periodontal region) using the determined gain adjustment amount. "Uniformly" here means that the teeth and periodontal region are corrected using the same gain adjustment amount. The corrected image is an example of the second RGB image or second image.
このように生成された補正画像は、天然歯牙の色味が実際のユーザの天然歯牙の色味となる。また、歯牙及び歯周領域が同じゲインで補正されているので、歯周領域の色味も、ホワイトバランス処理を行う場合に比べて、実際のユーザの歯周領域の色味に近くなる。 In the corrected image generated in this way, the color of the natural teeth matches the color of the actual user's natural teeth. Furthermore, because the teeth and periodontal region are corrected with the same gain, the color of the periodontal region is closer to the color of the actual user's periodontal region than when white balance processing is performed.
画像処理部53は、例えば、撮影画像内の天然歯牙の領域を構成する複数の画素(第1画素)が有する複数の赤画素値の第1赤画素平均値と、複数の画素(第1画素)が有する複数の緑画素値の第1緑画素平均値と、複数の画素(第1画素)が有する複数の青画素値の第1青画素平均値のうちの少なくとも2つの色成分のゲインを調整して、基準色データとの差が所定範囲内となるように調整することで、撮影画像から補正画像を生成してもよい。なお、画像処理部53は、第1赤画素平均値、第1青画素平均値及び第1緑画素平均値を用いることに限定されず、第1赤画素の統計値、第1青画素の統計値及び第1緑画素の統計値を用いて撮影画像から補正画像を生成してもよい。統計値は、最大値、最小値、最頻値、中央値等であるが、これに限定されない。 The image processing unit 53 may generate a corrected image from the captured image by adjusting the gain of at least two color components, for example, a first red pixel average value of multiple red pixel values of multiple pixels (first pixels) constituting the natural tooth region in the captured image, a first green pixel average value of multiple green pixel values of multiple pixels (first pixels), and a first blue pixel average value of multiple blue pixel values of multiple pixels (first pixels), so that the difference from the reference color data falls within a predetermined range. Note that the image processing unit 53 is not limited to using the first red pixel average value, the first blue pixel average value, and the first green pixel average value, and may generate a corrected image from the captured image using a statistical value of the first red pixels, a statistical value of the first blue pixels, and a statistical value of the first green pixels. Statistical values include, but are not limited to, maximum values, minimum values, modes, medians, etc.
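A minimal sketch of this gain adjustment, assuming the reference color data is an RGB triple and the natural-tooth region is given as a boolean mask (both hypothetical interfaces, not defined by this disclosure), could look as follows. Per-channel gains are chosen so that the tooth-region channel averages match the reference, and the same gains are then applied uniformly to the whole image:

```python
import numpy as np

def generate_corrected_image(captured, tooth_mask, reference_rgb):
    """Adjust the R, G, B gains so that the per-channel averages of the
    natural-tooth pixels match the reference color data, then apply the
    same gains uniformly to the entire image (teeth and periodontal
    region alike). Note this is not a white-balance to gray: the target
    is the user's own natural tooth color.
    """
    tooth_mean = captured[tooth_mask].mean(axis=0)              # per-channel averages
    gains = np.asarray(reference_rgb, dtype=np.float64) / tooth_mean
    return np.clip(captured * gains, 0.0, 1.0)

# Bluish captured image whose tooth pixels average (0.4, 0.5, 0.8),
# corrected toward an illustrative reference tooth color (0.8, 0.7, 0.5).
captured = np.zeros((2, 2, 3))
captured[:] = [0.4, 0.5, 0.8]
tooth_mask = np.ones((2, 2), dtype=bool)
corrected = generate_corrected_image(captured, tooth_mask, [0.8, 0.7, 0.5])
```

Because a single gain triple is applied everywhere, the periodontal region receives exactly the same correction as the teeth, which is the "uniform" behavior described above.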
なお、天然歯牙は、励起光を照射すると象牙質から励起蛍光が発せられ、エナメル質(後述する図6を参照)を透過して緑色に蛍光することが知られている。また、齲歯治療痕の詰め物(例えば、メタルインレー)は、青色LED光の下では励起蛍光せず、カメラでは暗く(低輝度で)撮像されることが知られている。これらのことから、画像処理部53は、撮影画像から齲歯治療痕を除く天然歯牙を検出可能である。齲歯治療痕の領域は、特定画素領域の一例である。 It is known that when natural teeth are irradiated with excitation light, excitation fluorescence is emitted from the dentin, which passes through the enamel (see Figure 6 described below) and fluoresces green. It is also known that fillings in caries treatment marks (e.g., metal inlays) do not emit excitation fluorescence under blue LED light and therefore appear dark (low brightness) in the camera image. For these reasons, the image processing unit 53 can detect the natural teeth, excluding the caries treatment marks, from the captured image. The caries treatment mark area is an example of a specific pixel area.
また、画像処理部53は、さらに補正画像から歯牙の歯垢領域を検出してもよい。歯牙及び歯垢が互いに異なる蛍光を発するので、画像処理部53は、補正画像における蛍光(励起蛍光)している部分の色により、歯牙及び歯垢を検出可能である。歯垢(歯垢領域)は、青色光が照射されると赤みを帯びたピンク色に蛍光(励起蛍光)する。なお、歯牙領域及び歯垢領域の検出方法として、上記以外の公知のいかなる方法が用いられてもよい。歯垢領域は、特定画素領域の一例である。 The image processing unit 53 may also detect plaque regions on the teeth from the corrected image. Because teeth and plaque emit different fluorescence, the image processing unit 53 can detect teeth and plaque from the color of the fluorescent (excited fluorescence) parts in the corrected image. Plaque (plaque region) fluoresces a reddish pink color (excited fluorescence) when illuminated with blue light. Note that any other known method for detecting tooth regions and plaque regions may be used. Plaque regions are an example of specific pixel regions.
画像データ抽出部54は、補正画像から、歯牙と歯肉との境界近傍における歯肉の領域を含む歯周領域画像データを抽出する。画像データ抽出部54は、補正画像から、歯間乳頭部と辺縁歯肉部とを含む歯頸部を帯状に取り囲む歯肉である遊離歯肉と、当該遊離歯肉に連続し、歯肉溝底から歯肉歯槽粘膜境までの付着歯肉の一部とを含む歯周領域画像データを抽出するともいえる。例えば、歯周領域画像データは、歯牙の左右の歯間乳頭歯肉を含んでいてもよい。 The image data extraction unit 54 extracts periodontal region image data from the corrected image, including the gingival region near the boundary between the tooth and the gingiva. It can also be said that the image data extraction unit 54 extracts periodontal region image data from the corrected image, including the free gingiva, which is the gingiva that surrounds the cervical region, including the interdental papilla and marginal gingiva, in a band shape, and a portion of the attached gingiva that is continuous with the free gingiva and extends from the bottom of the gingival sulcus to the gingival-alveolar junction. For example, the periodontal region image data may include the interdental papilla and gingiva on both sides of the tooth.
詳細は図7A及び図7Bを用いて後述するが、歯周領域画像データは、歯周疾患が発症する歯周領域を含み、かつ、歯周疾患の検出に対して不要である歯牙領域を極力含まない部分を、補正画像から抽出した画像である。画像データ抽出部54は、補正画像から歯牙の領域の一部を取り除くことで、歯周領域画像データを抽出してもよい。歯周領域画像データは、補正画像に基づく画像データの一例である。 Details will be described later using Figures 7A and 7B, but the periodontal region image data is an image extracted from the corrected image that includes the periodontal region where periodontal disease develops, but that minimizes the inclusion of tooth regions that are unnecessary for detecting periodontal disease. The image data extraction unit 54 may extract the periodontal region image data by removing part of the tooth region from the corrected image. The periodontal region image data is an example of image data based on the corrected image.
歯周疾患検出部55は、補正画像に基づく画像データに基づいて、ユーザの歯周疾患を検出する。ここでの歯周疾患は、現在の歯周疾患の進行状況であってもよいし、歯周疾患の予兆であってもよい。予兆とは、現時点では歯周疾患とは認められないが、歯周疾患が発生する可能性の高い前兆現象があることを含む。前兆現象は、例えば、歯肉炎が発生していること等が例示されるが、これに限定されない。 The periodontal disease detection unit 55 detects the user's periodontal disease based on image data derived from the corrected image. The periodontal disease here may be the current progress of periodontal disease, or a sign of periodontal disease. A sign includes the presence of a precursory phenomenon that is not currently recognized as periodontal disease, but which makes it highly likely that periodontal disease will occur. Examples of precursory phenomena include, but are not limited to, the occurrence of gingivitis.
本実施の形態では、歯周疾患検出部55は、歯肉の領域を含む画像データを入力とし当該画像データにおける歯周疾患に関する情報を出力するように学習された学習モデルに補正画像に基づく画像データ(本実施の形態では、歯周領域画像データ)を入力することで、ユーザの歯周疾患を検出する。歯周疾患に関する情報は、歯周疾患の有無、歯周疾患の進行状況、歯周疾患の予兆等を含む。また、歯周疾患に関する情報は、歯周ポケットの深さ、BOP及びGI値の少なくとも1つの推定結果を含んでいてもよい。 In this embodiment, the periodontal disease detection unit 55 detects the user's periodontal disease by inputting image data based on the corrected image (in this embodiment, periodontal region image data) into a learning model that has been trained to input image data including the gingival region and output information related to periodontal disease in the image data. The information related to periodontal disease includes the presence or absence of periodontal disease, the progress of periodontal disease, signs of periodontal disease, etc. The information related to periodontal disease may also include an estimated result of at least one of the periodontal pocket depth, BOP, and GI value.
学習モデルは、歯肉の領域を含む画像データ(本実施の形態では、歯周領域画像データ)を入力データとし、歯周疾患に関する情報を正解データとした学習用のデータセットを用いて、教師あり学習により予め学習されている。学習モデルは、例えば、複数の歯牙の歯周領域に関する疾患についての疾患発症の有無を識別するための特徴を有する複数のサンプルデータを含む教師データを用いて学習を行って予め得た複数の疾患について、それぞれの疾患を複数の疾患を発症している場合を含めて判定可能なモデルであってもよい。 The learning model is trained in advance through supervised learning using a training dataset in which image data including the gingival region (in this embodiment, periodontal region image data) is the input data and information regarding periodontal disease is the correct-answer data. The learning model may be, for example, a model trained in advance on teacher data containing a plurality of sample data having features for identifying whether diseases of the periodontal regions of a plurality of teeth have developed, and thus capable of determining each of a plurality of diseases, including cases in which multiple diseases have developed at the same time.
学習に用いられる画像データは、例えば、正常な歯周領域の画像データ、歯周病の疑いがある歯周領域の画像データ、軽度の歯周炎、中度の歯周炎、重度の歯周炎が発症している歯周領域の画像データ等の歯周病の進行状態が異なる複数の画像を用いて行われる。また、学習用の画像は、歯牙の種類、歯牙が位置する領域、及び、撮影方向(舌側から撮影された画像であるか、頬側から撮影された画像であるかなど)のそれぞれで準備されてもよい。画像データは、画像データ抽出部54により抽出される歯周領域を含む画像(つまり、編集された画像)であってもよいし、補正画像そのものであってもよい。 Training is performed using a plurality of images at different stages of periodontal disease progression, for example image data of a normal periodontal region, image data of a periodontal region suspected of periodontal disease, and image data of periodontal regions with mild, moderate, or severe periodontitis. Training images may also be prepared for each tooth type, each region in which a tooth is located, and each imaging direction (for example, whether the image was taken from the lingual side or the buccal side). The image data may be an image including the periodontal region extracted by the image data extraction unit 54 (that is, an edited image), or the corrected image itself.
なお、学習モデルを生成する処理部は、歯周疾患検出システムが備えていてもよいし、歯周疾患検出システム外の装置が備えていてもよい。 The processing unit that generates the learning model may be provided by the periodontal disease detection system, or by a device outside the periodontal disease detection system.
表示部56は、携帯端末50が備える表示デバイスであり、歯周疾患検出部55により検出された歯周疾患を示す情報等を表示する。表示部56は、例えば、液晶ディスプレイパネルなどにより実現されるが、これに限定されない。 The display unit 56 is a display device provided in the mobile terminal 50, and displays information indicating the periodontal disease detected by the periodontal disease detection unit 55. The display unit 56 is realized, for example, by a liquid crystal display panel, but is not limited to this.
出力部57は、ユーザとは異なる人(例えば、歯科医師等の医療従事者)の第1情報端末と無線通信を行うための無線通信モジュールである。出力部57は、歯周疾患検出部55により検出された歯周疾患を示す情報を、ユーザとは異なる人の第1情報端末に出力する。また、出力部57は、さらに、取得部51が取得した受診要否情報であって、第1情報端末に対して当該人が入力したユーザの受診の要否に関する受診要否情報をユーザの第2情報端末に出力する。第2情報端末は、例えば、携帯端末50とは異なる情報端末であってもよい。なお、歯科医師は、例えば、ユーザの担当医であってもよい。 The output unit 57 is a wireless communication module for wireless communication with a first information terminal of a person other than the user (for example, a medical professional such as a dentist). The output unit 57 outputs information indicating periodontal disease detected by the periodontal disease detection unit 55 to the first information terminal of the person other than the user. The output unit 57 also outputs to the user's second information terminal the medical examination necessity information acquired by the acquisition unit 51, which information is regarding the need for the user to receive medical examination and entered by the person into the first information terminal. The second information terminal may be, for example, an information terminal different from the mobile terminal 50. The dentist may be, for example, the user's doctor.
出力部57は、歯周疾患検出部55により検出された歯周疾患を示す情報と、歯牙種類識別部52により識別された歯牙の種類とを対応付けて、第1情報端末及び第2情報端末の少なくとも一方に出力してもよい。言い換えると、出力部57は、第1情報端末及び第2情報端末の少なくとも一方に、歯周疾患を示す情報と、歯牙の種類とを対応付けて表示させてもよい。これにより、どの歯牙の歯周領域がどのような歯周疾患であるか(又は歯周疾患がないか)をユーザ及び医療従事者の少なくとも一方に通知することができる。 The output unit 57 may associate information indicating periodontal disease detected by the periodontal disease detection unit 55 with the type of tooth identified by the tooth type identification unit 52 and output the associated information to at least one of the first information terminal and the second information terminal. In other words, the output unit 57 may cause at least one of the first information terminal and the second information terminal to display the information indicating periodontal disease in association with the type of tooth. This makes it possible to notify at least one of the user and the medical professional of which tooth's periodontal region has what kind of periodontal disease (or no periodontal disease).
なお、出力部57は、第1情報端末及び第2情報端末と有線通信を行う有線通信モジュールを含んでいてもよい。 The output unit 57 may also include a wired communication module that performs wired communication with the first information terminal and the second information terminal.
記憶部58は、ユーザの歯周疾患を検出するための各種情報を記憶する記憶装置である。記憶部58は、例えば、歯周疾患検出部55が用いる学習モデルを記憶していてもよいし、基準色データを記憶していてもよい。記憶部58は、例えば、半導体メモリ等により実現されるが、これに限定されない。 The memory unit 58 is a storage device that stores various information for detecting periodontal disease in a user. The memory unit 58 may store, for example, a learning model used by the periodontal disease detection unit 55, or may store reference color data. The memory unit 58 may be realized, for example, by a semiconductor memory or the like, but is not limited to this.
なお、携帯端末50は、歯周疾患を検出するためのより適した画像を生成するための構成を備えていればよい。例えば、携帯端末50は、歯周疾患検出部55などの歯周疾患の検出を行う構成を備えていなくてもよい。この場合、歯科医師は、携帯端末50が生成した歯周疾患を検出するためのより適した画像(例えば、補正画像)を確認しながら診断を行う。つまり、携帯端末50は、歯科医師が診断をより正確に行うための画像を生成する支援装置として機能してもよい。 It is sufficient that the mobile terminal 50 has a configuration for generating an image that is more suitable for detecting periodontal disease. For example, the mobile terminal 50 does not have to have a configuration for detecting periodontal disease, such as the periodontal disease detection unit 55. In this case, the dentist makes a diagnosis while checking an image (e.g., a corrected image) that is more suitable for detecting periodontal disease generated by the mobile terminal 50. In other words, the mobile terminal 50 may function as a support device that generates images that allow the dentist to make a more accurate diagnosis.
[1-2.歯周疾患検出システムの動作] [1-2. Operation of the Periodontal Disease Detection System]
続いて、上記のように構成される歯周疾患検出システムの動作について、図4~図7Bを参照しながら説明する。図4は、本実施の形態に係る歯周疾患検出システムの動作(歯周疾患検出方法)を示すフローチャートである。 Next, the operation of the periodontal disease detection system configured as described above will be described with reference to Figures 4 to 7B. Figure 4 is a flowchart showing the operation of the periodontal disease detection system (periodontal disease detection method) according to this embodiment.
図4に示すように、まず、取得部51は、ユーザの天然歯牙の基準色データを取得し、記憶部58に記憶する(S11)。取得部51は、撮影画像を取得するよりも前に基準色データを取得してもよい。なお、基準色データは、ステップS14を実行する前に取得されていればよく、撮影画像を取得するよりも前に取得されていることに限定されない。 As shown in FIG. 4, first, the acquisition unit 51 acquires reference color data of the user's natural teeth and stores it in the storage unit 58 (S11). The acquisition unit 51 may acquire the reference color data before acquiring the captured image. Note that the reference color data only needs to be acquired before executing step S14, and is not limited to being acquired before acquiring the captured image.
また、当該基準色データは、ユーザごとに異なるので、取得部51は、ユーザを示す情報と対応付けて基準色データを記憶部58に記憶してもよい。また、ステップS11の処理は、例えば、1人のユーザに対して1回行われればよい。例えば、既に基準色データが記憶されているユーザであれば、ステップS11は省略されてもよい。 Furthermore, since the reference color data differs for each user, the acquisition unit 51 may store the reference color data in the storage unit 58 in association with information indicating the user. Furthermore, the processing of step S11 may be performed, for example, once for each user. For example, if reference color data is already stored for the user, step S11 may be omitted.
次に、取得部51は、ユーザの口腔内の撮影領域を撮影した撮影画像を取得する(S12)。撮影画像は、ユーザが自身で撮影した画像であってもよいし、医療従事者がユーザの口腔内を撮影した画像であってもよい。取得部51は、撮影画像を記憶部58に記憶してもよい。 Next, the acquisition unit 51 acquires a captured image of the imaging area inside the user's oral cavity (S12). The captured image may be an image taken by the user themselves, or an image of the inside of the user's oral cavity taken by a medical professional. The acquisition unit 51 may store the captured image in the storage unit 58.
次に、歯牙種類識別部52は、撮影画像内の歯牙の領域を識別する(S13)。歯牙種類識別部52は、例えば、学習済みの機械学習モデルに撮影画像を入力して得られる当該機械学習モデルの出力である歯牙の領域を用いて、撮影画像内の歯牙が口腔内のどの領域に位置する歯牙であるかを識別してもよい。 Next, the tooth type identification unit 52 identifies the tooth area in the captured image (S13). For example, the tooth type identification unit 52 may use the tooth area that is the output of a trained machine learning model obtained by inputting the captured image into the machine learning model to identify in which region of the oral cavity the tooth in the captured image is located.
次に、画像処理部53は、基準色データに基づいて、撮影画像の歯牙の色彩を補正した補正画像を生成する(S14)。画像処理部53は、撮影画像の歯牙の色データが基準色データと一致する又は所定範囲内となるようにゲインの調整量を決定し、撮影画像の全体(つまり、歯牙及び歯周領域を含む画像全体)を決定したゲインの調整量を用いて、一律に補正する。 Next, the image processing unit 53 generates a corrected image in which the color of the teeth in the captured image has been corrected based on the reference color data (S14). The image processing unit 53 determines the amount of gain adjustment so that the color data of the teeth in the captured image matches the reference color data or falls within a predetermined range, and uniformly corrects the entire captured image (i.e., the entire image including the teeth and periodontal region) using the determined amount of gain adjustment.
ここで、画像処理部53は、以下の処理を行って補正画像を生成してもよい。例えば、画像処理部53は、撮影画像の歯牙の色彩を補正した第1補正画像の画素値に基づく値が所定範囲内である画素を含む特定画素領域を特定し、撮影画像内の天然歯牙の領域のうち特定画素領域を除く領域を構成する赤色、緑色、青色の各色成分のうち少なくとも2つの色成分のゲインを基準色データに基づいて調整することで、上記補正画像として第3補正画像を生成してもよい。第3補正画像に基づく画像データは、補正画像に基づく画像データの一例である。特定画素領域は、天然歯牙において当該天然歯牙が露出していない領域であり、例えば、歯垢が付着している領域である。 Here, the image processing unit 53 may perform the following processing to generate a corrected image. For example, the image processing unit 53 may identify a specific pixel area including pixels whose values based on the pixel values of the first corrected image obtained by correcting the color of the tooth in the captured image are within a predetermined range, and may generate a third corrected image as the above corrected image by adjusting the gain of at least two of the red, green, and blue color components that make up the area of the natural tooth in the captured image excluding the specific pixel area based on the reference color data. Image data based on the third corrected image is an example of image data based on a corrected image. The specific pixel area is an area of the natural tooth where the natural tooth is not exposed, for example, an area where plaque is attached.
また、例えば、画像処理部53は、撮影画像の歯牙の色彩を補正した第1補正画像の色空間をHSV空間に変換することでHSV画像を生成し、HSV画像が有する複数の画素のうち彩度が第1の所定範囲内、色相が第2の所定範囲内、及び、明度が第3の所定範囲内の少なくとも1つを満たす1以上の画素が位置する画素領域を特定画素領域として特定してもよい。 Furthermore, for example, the image processing unit 53 may generate an HSV image by converting the color space of a first corrected image obtained by correcting the color of the teeth in the captured image into an HSV space, and identify as the specific pixel area a pixel area in which one or more pixels of the HSV image that satisfy at least one of the following conditions are located: saturation within a first predetermined range, hue within a second predetermined range, and brightness within a third predetermined range.
また、例えば、画像処理部53は、第1補正画像内の天然歯牙の領域を構成する複数の画素が有する複数の赤画素値が第1の範囲内、複数の緑画素値が第2の範囲内、複数の青画素値が第3の範囲内の少なくとも1つを満たす1以上の画素が位置する画素領域を特定画素領域として特定してもよい。 Furthermore, for example, the image processing unit 53 may identify, as the specific pixel area, a pixel area in which one or more pixels are located among the multiple pixels constituting the natural tooth area in the first corrected image, those pixels satisfying at least one of the following: their red pixel values are within a first range, their green pixel values are within a second range, and their blue pixel values are within a third range.
In this way, the image processing unit 53 may identify the specific pixel area from the HSV image converted from the first corrected image, or from the first corrected image itself.
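As an illustration of the RGB-range variant above, the specific pixel area can be sketched as a boolean mask over the natural tooth region. This is a minimal sketch only: the threshold ranges, the function name, and the image layout are assumptions, since the predetermined ranges are not specified in the disclosure.

```python
import numpy as np

# Hypothetical threshold ranges; the actual predetermined ranges are not
# given in the disclosure.
R_RANGE = (150, 255)  # first range (red)
G_RANGE = (140, 255)  # second range (green)
B_RANGE = (60, 140)   # third range (blue)

def specific_pixel_mask(first_corrected, tooth_mask):
    """Return a boolean mask of the specific pixel area (e.g., plaque)
    inside the natural tooth region of the first corrected image.

    first_corrected: H x W x 3 uint8 RGB image
    tooth_mask:      H x W boolean mask of the natural tooth region
    """
    r = first_corrected[..., 0].astype(int)
    g = first_corrected[..., 1].astype(int)
    b = first_corrected[..., 2].astype(int)
    in_r = (R_RANGE[0] <= r) & (r <= R_RANGE[1])
    in_g = (G_RANGE[0] <= g) & (g <= G_RANGE[1])
    in_b = (B_RANGE[0] <= b) & (b <= B_RANGE[1])
    # The text requires at least one of the three range conditions to hold,
    # and the pixel must lie inside the natural tooth region.
    return tooth_mask & (in_r | in_g | in_b)
```

The gain adjustment for the third corrected image would then be applied to the tooth pixels where this mask is False.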
Next, the image data extraction unit 54 extracts the gingival region based on the corrected image (S15). Note that the image data extraction unit 54 may, for example, generate an HSV image by converting the color space of the first image (RGB image) into HSV space, and detect as the gingival region the region whose hue (H) falls within a predetermined range.
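The hue-based detection of the gingival region described above can be sketched as follows, using the standard library's `colorsys` for the HSV conversion. The hue window is a hypothetical value, since the predetermined range is not given here.

```python
import colorsys
import numpy as np

# Hypothetical hue window for gingiva (reddish pink), expressed as a
# fraction of the color circle; the predetermined range is not specified.
HUE_MIN, HUE_MAX = 0.90, 1.00

def gingiva_mask(rgb):
    """Convert an RGB image to HSV pixel by pixel and mark pixels whose
    hue (H) falls within the predetermined gingiva range."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            hue, _, _ = colorsys.rgb_to_hsv(*(rgb[y, x] / 255.0))
            mask[y, x] = HUE_MIN <= hue <= HUE_MAX
    return mask
```

The per-pixel loop is written for clarity; a production implementation would vectorize the conversion.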
Here, the extracted image of the gingival region (periodontal region image) will be further described with reference to Figures 5 to 7B. First, the periodontal region will be described with reference to Figures 5 and 6. Figure 5 is a first diagram for explaining the periodontal region in the oral cavity according to this embodiment, and is an image of the oral cavity that includes the tooth region and the periodontal region. Figure 6 is a second diagram for explaining the periodontal region in the oral cavity according to this embodiment, and shows a cross-sectional view of the periodontal region. Note that although Figure 6 is a cross-sectional view, hatching is omitted for convenience.
As shown in (a) and (b) of Figure 5, the gingiva comprises the free gingiva, the attached gingiva, and the alveolar mucosa. The free gingiva is the part of the gingiva near the crown of the tooth, and moves together with the cheek and other tissues. The attached gingiva is the part of the gingiva attached to the bone, and does not move when the cheek or other tissues move. The alveolar mucosa is the part located on the side of the attached gingiva opposite the free gingiva, and moves together with the cheek and other tissues. The interdental papilla gingiva is the thin gingiva in the gaps between teeth; it is weak, prone to inflammation and swelling, and a place where debris and plaque tend to accumulate. Periodontal disease is one of the causes of loss of the interdental papilla gingiva.
As shown in Figure 6, the free gingiva is the part of the gingiva that forms the gingival sulcus. The gingival sulcus is a groove between the tooth and the gingiva, and a place where plaque that leads to periodontal disease tends to accumulate. The gingival margin is the top of the free gingiva, located on the side opposite the attached gingiva. The free gingival groove lies at the border between the free gingiva and the attached gingiva, and the mucogingival junction lies at the border between the attached gingiva and the alveolar mucosa. The attached gingiva is the part of the gingiva from the free gingival groove to the mucogingival junction.
Figure 7A is a diagram illustrating image data extracted by the image data extraction unit 54 according to this embodiment. (a) of Figure 7A illustrates the area that the image data extraction unit 54 extracts from the corrected image, and (b) of Figure 7A shows the extracted image data. The image shown in (a) of Figure 7A is an example of a corrected image, and the image shown in (b) of Figure 7A is the extracted rectangular area enclosed by the dash-dot lines in (a) of Figure 7A (the periodontal region image data extracted by the image data extraction unit 54).
As shown in (a) of Figure 7A, the image data extraction unit 54 draws dash-dot lines (the dash-dot lines extending vertically on the page) that circumscribe the left and right sides of the tooth contour and are perpendicular to the direction in which the teeth are aligned. The left and right sides are the parts of the tooth that protrude most toward the adjacent teeth. The image data extraction unit 54 also draws a dash-dot line parallel to the tooth alignment direction (the upper of the dash-dot lines extending horizontally on the page) so as to include the apexes (tips) of the interdental papilla gingiva, and draws a dash-dot line parallel to the tooth alignment direction (the lower of the dash-dot lines extending horizontally on the page) so as to include the periodontal margin.
Then, as shown in (b) of Figure 7A, the image data extraction unit 54 generates the periodontal region image data by extracting from the corrected image the rectangular area enclosed by the dash-dot lines shown in (a) of Figure 7A. In this way, the image data extraction unit 54 extracts from the corrected image, as the periodontal region image data, a rectangular-frame area of the periodontal region of the tooth that spans from the tips of the interdental papilla gingiva (e.g., the periodontal margin) to the free gingival groove. In this case, the periodontal region image data is an image that includes only a portion of the tooth. The tooth region is unnecessary for detecting periodontal disease (e.g., it may reduce detection accuracy), so, as shown in (b) of Figure 7A, the periodontal region image data should contain as little of the tooth region as possible.
When the periodontal region image data includes the area from the periodontal margin to the free gingival groove, the depth of the periodontal pocket can be estimated from that image data. In addition, when the interdental papilla gingiva is affected by periodontal disease it recedes, which makes gaps appear between the teeth, so the progression of the periodontal disease can also be determined.
Note that the image data extraction unit 54 is not limited to extracting the area shown in (a) of Figure 7A; for example, it may extract from the corrected image, as the periodontal region image data, a vertically elongated rectangular area bounded by the vertically extending dash-dot lines (e.g., an area that includes most of the tooth in addition to the periodontal region). From the perspective of detecting periodontal disease more accurately, however, the area enclosed by the dash-dot lines shown in (a) of Figure 7A is preferably extracted. The extracted image data is also not limited to a rectangular image. Furthermore, the image data extraction unit 54 may extract, as the periodontal region image data, the area of the captured image corresponding to the dash-dot lines shown in (a) of Figure 7A.
Here, a method for forming the rectangular area will be described with reference to Figure 7B. Figure 7B is a diagram showing a specific example of the method for forming the rectangular area according to this embodiment. The frame indicating the rectangular area is also referred to as a rectangular frame. Figure 7B shows an example of a rectangular frame for a maxillary central incisor.
As shown in (a) of Figure 7B, the image data extraction unit 54 forms a rectangular frame F1 surrounding a specific tooth in accordance with the dentition. The image data extraction unit 54 identifies the tooth based on, for example, the colors of the tooth and gingiva, and forms the rectangular frame F1 surrounding one tooth based on, for example, the shape of the tooth. The four sides of the rectangular frame F1 are formed so as to contact the outline of the one tooth.
As shown in (b) of Figure 7B, the image data extraction unit 54 then forms a rectangular frame F2 by moving the bottom edge of the rectangular frame F1 upward within a range that still includes the left and right interdental papilla gingiva (e.g., the tips of the interdental papilla gingiva). At this time, to remove as much tooth information as possible, the edge should be moved upward as far as possible while still including the left and right interdental papilla gingiva.
As shown in (c) of Figure 7B, the image data extraction unit 54 then forms a rectangular frame F3 by moving the upper edge of the rectangular frame F2 upward to a predetermined distance D above the gingival margin. The predetermined distance D is, for example, 2 mm, but is not limited to this; it may be 1 mm, or 3 mm or more. This forms the dash-dot frame shown in (b) of Figure 7A.
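The F1 → F2 → F3 procedure above can be sketched in image coordinates (for a maxillary tooth, where the gingiva lies above the crown and "upward" means decreasing y). The pixels-per-millimeter constant, the function name, and the input representation are assumptions for illustration only.

```python
from dataclasses import dataclass

PX_PER_MM = 20  # hypothetical image resolution; not specified in the text

@dataclass
class Rect:
    top: int      # image coordinates: y increases downward
    bottom: int
    left: int
    right: int

def periodontal_rect(tooth_box, papilla_tip_left_y, papilla_tip_right_y,
                     gingival_margin_y, d_mm=2.0):
    """Form F1 -> F2 -> F3 for a maxillary tooth (gingiva above the crown).

    F1: the bounding box circumscribing the tooth (tooth_box).
    F2: raise the bottom edge as far as possible while keeping both
        interdental papilla tips inside the frame.
    F3: move the top edge to the predetermined distance D above the
        gingival margin.
    """
    f2_bottom = max(papilla_tip_left_y, papilla_tip_right_y)
    f3_top = gingival_margin_y - int(d_mm * PX_PER_MM)
    return Rect(top=f3_top, bottom=f2_bottom,
                left=tooth_box.left, right=tooth_box.right)
```

For a mandibular tooth the vertical directions would be mirrored.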
In feature engineering, it is more important to find meaningful "features" that are closely related to the target variable (the degree of disease progression) than to find complex mathematical or statistical transformations to improve prediction accuracy. However, finding such features requires knowledge, experience, and intuition related to dentistry, knowledge of the data (the meaning of data items and the relationships between tables), and knowledge of statistics and machine learning (statistical stability and predictive power). Feature engineering is thus one of the most important and most difficult steps in the process of developing a machine learning model.
Through extensive study, the inventors of the present application found that, as described above, reducing the area in which the tooth appears (reducing the influence of the tooth on periodontal disease detection) generates training data from which gingival features are easier to obtain.
Referring again to Figure 4, the periodontal disease detection unit 55 performs detection related to periodontal disease based on the image of the gingival region (S16). The periodontal disease detection unit 55 inputs the periodontal region image data extracted by the image data extraction unit 54 (for example, the image shown in (b) of Figure 7A) into the learning model, and acquires the estimation result for periodontal disease output by the learning model. The periodontal disease detection unit 55 acquires, for example, this estimation result as the periodontal disease detection result.
Next, the output unit 57 outputs the detection result of the periodontal disease detection unit 55 (S17). The output unit 57 transmits the detection result via communication to an information terminal of at least one of the user and a medical professional. This allows the periodontal disease detection result to be notified to at least one of the user and the medical professional.
As described above, the periodontal disease detection system according to this embodiment corrects the color cast on the specific tooth and gingiva caused by the light irradiating the oral cavity when the image is captured, based on pre-stored reference color data for the specific tooth that is specific to the user, without using color calibration patches or the like. This brings the colors of the tooth and gingiva in the image closer to the colors of the user's own teeth and gingiva. By performing periodontal disease detection using such an image (the corrected image), a decrease in periodontal disease detection accuracy attributable to the image can be suppressed.
(Modification 1 of Embodiment 1)
The periodontal disease detection system according to this modification will be described below with reference to Figures 8A to 8C. The following description focuses on the differences from Embodiment 1, and descriptions of content that is the same as or similar to Embodiment 1 are omitted or simplified. This modification describes an example in which the periodontal disease detection unit detects periodontal disease using a rule base.
For example, in this modification, for aspects of periodontal disease assessment (such as periodontal disease risk) for which a person can construct a judgment rule (for example, a rule based on the shape of the tips of the interdental papilla gingiva), a method of assessing risk using judgment rules is adopted. A judgment rule is an example of a periodontal disease detection tool, and is generated to output information related to periodontal disease from image data.
The configuration of the periodontal disease detection system of this modification may be the same as that of the periodontal disease detection system 1 according to Embodiment 1, and the reference numerals of the periodontal disease detection system 1 are used in the following description. In step S16 shown in Figure 4, the periodontal disease detection unit 55 according to this modification automatically detects the user's periodontal disease using preset judgment rules.
For all of the R, G, and B values in RGB images of the gingiva, the color distributions of the free gingiva and attached gingiva vary widely across users, for both the normal gingiva group and the diseased gingiva group, and the color distributions of normal gingiva and gingiva suspected of periodontal disease overlap. Therefore, if a judgment is made without taking into account the R, G, and B values of the user's own normal gingiva, gingiva with a light color may nevertheless be suspected of periodontal disease, and gingiva with a dark color may nevertheless be normal.
Therefore, in this modification, multiple judgment rules are set based on the ranges of the R, G, and B values of the normal gingiva group. A specific setting method is described below. Figures 8A to 8C are diagrams each showing an example of a group rule base (judgment rule) according to this modification. Figure 8A shows the judgment rule when the normal-gingiva R value is A1 to A2 for the free gingiva and B1 to B2 for the attached gingiva. Figure 8B shows the judgment rule when the normal-gingiva R value is A3 to A4 for the free gingiva and B3 to B4 for the attached gingiva. Figure 8C shows the judgment rule when the normal-gingiva R value is A5 to A6 for the free gingiva and B5 to B6 for the attached gingiva.
As shown in Figures 8A to 8C, for example, the n-th group rule base stores the periodontal disease judgment criteria for the case where the normal range of the free gingiva is A2n−1 ≤ R value ≤ A2n and the normal range of the attached gingiva is B2n−1 ≤ R value ≤ B2n. Here, the rule-base groups are classified only by R value, but the judgment rules may also be classified based on, for example, the G value or the B value, or by a combination of two of the R, G, and B values, or by a combination of all three.
For example, the rule base of each group (each judgment rule) may set rules for judging the progression level of periodontal disease according to the amount of change in the R, G, and B values from the user's normal gingiva. The rule base of each group (each judgment rule) may divide the gingiva into free gingiva and attached gingiva and set the progression level of periodontal disease based on the amount of change of each of the R, G, and B values from the normal gingiva values. In other words, the periodontal disease judgment criteria may be set for each user. The progression level is an example of the degree of progression.
The progression level may be, for example, a stage of progression of periodontal disease. For example, progression level 1 may be gingivitis, progression level 2 mild periodontal disease, progression level 3 moderate periodontal disease, and progression level 4 severe periodontal disease.
In general, for both the free gingiva and the attached gingiva, as the progression level of periodontal disease advances from normal gingiva, all of the R, G, and B brightness indices decrease, and the amount of change tends to be larger for the free gingiva than for the attached gingiva. Based on this knowledge, the periodontal disease detection unit 55 may output the progression level of periodontal disease with priority given to the evaluation based on the free gingiva. Alternatively, the periodontal disease detection unit 55 may output the progression level of periodontal disease for each of the free gingiva and the attached gingiva.
The judgment rules shown in Figures 8A to 8C are set in advance and stored in the memory unit 58. The number of judgment rules stored in the memory unit 58 is not particularly limited as long as it is two or more. The judgment rules may also be rules for detecting the presence or absence of periodontal disease.
Note that each judgment rule may be set for each color space, may be set individually for each region in the oral cavity, or, if color information of the user's healthy gingiva (e.g., absolute color values and color variation) has been acquired in advance, may be set according to that color information. In other words, the judgment rules may be set for each user. For example, color information (absolute color values, e.g., R, G, and B values) at multiple points on the user's gingiva may be acquired, the variation of the R values may be calculated, and the normal range of the R value may be determined based on a statistic of the R values (e.g., the mean) and the variation of the R values. The normal ranges of the G value and the B value may be determined in the same way as for the R value.
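A sketch of deriving a user-specific normal range from multiple measurement points, and of selecting the matching rule-base group by the free-gingiva R value, is given below. The spread factor k and the group boundaries are hypothetical; the text only states that the range is based on a statistic (e.g., the mean) and the variation.

```python
import statistics

def normal_range(values, k=2.0):
    """User-specific normal range for one channel, from measurements at
    multiple points on healthy gingiva: mean +/- k * standard deviation.
    k is a hypothetical spread factor."""
    mean = statistics.mean(values)
    spread = statistics.stdev(values)
    return (mean - k * spread, mean + k * spread)

def select_rule_group(groups, free_gingiva_r):
    """Pick the n-th rule group whose free-gingiva normal range
    A(2n-1) <= R <= A(2n) contains the user's normal free-gingiva R value.
    groups: list of (lower bound, upper bound, rule base) tuples."""
    for lo, hi, rules in groups:
        if lo <= free_gingiva_r <= hi:
            return rules
    return None
```

The selected rule base would then map the amount of change from the normal range to a progression level, as described above.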
In step S16 shown in Figure 4, the periodontal disease detection unit 55 detects periodontal disease using judgment rules such as those described above. The periodontal disease detection unit 55 is configured to compare the input image data with a judgment rule and detect periodontal disease according to the comparison result. Processing input image data using a judgment rule is also described as inputting the image data into the judgment rule.
For example, the periodontal disease detection unit 55 detects the user's periodontal disease based on the R, G, and B values of the gingival color obtained from image data based on the second RGB image and on the judgment rules. The periodontal disease detection unit 55 extracts, from the multiple judgment rules, the judgment rule corresponding to the region to which the target tooth belongs or to the user, and judges periodontal disease using the extracted judgment rule.
The periodontal disease detection unit 55, for example, extracts gingival color information from image data based on the second RGB image and, from the difference (amount of change) between the color values indicated by the extracted color information and a preset normal range, judges, for each of the free gingiva and the attached gingiva, whether the gingiva is normal and, if not, the progression level of the periodontal disease. The color values may be R, G, and B values; hue, saturation, and lightness information; or hue, saturation, and luminance information.
For example, the periodontal disease detection unit 55 may detect the user's periodontal disease in the relevant region by using a judgment rule created from the ranges of the R, G, and B values of normal gingival color in the RGB color space and the ranges of the R, G, and B values of gingival color for each of multiple stages of periodontal disease, taking as input the R, G, and B values of the gingival color obtained from the image data based on the second RGB image.
Also, for example, the periodontal disease detection unit 55 may detect the user's periodontal disease in the relevant region by using a judgment rule created from at least one of the hue, saturation, and lightness information of normal gingival color in the HSV color space and at least one of the ranges of the hue, saturation, and lightness information of gingival color for each of multiple stages of periodontal disease, taking as input at least one of the hue, saturation, and lightness information of the gingival color obtained from the image data based on the second RGB image.
Also, for example, the periodontal disease detection unit 55 may detect the user's periodontal disease in the relevant region by using a judgment rule created from at least one of the hue, saturation, and luminance information of normal gingival color in the HSL color space and at least one of the ranges of the hue, saturation, and luminance information of gingival color for each of multiple stages of periodontal disease, taking as input at least one of the hue, saturation, and luminance information of the gingival color obtained from the image data based on the second RGB image.
Note that the gingival color difference is the difference in color between the darkest and lightest parts of the gingiva, and the judgment rule may be a rule that associates this difference with normality and progression levels. Since gingival color (e.g., the color of normal gingiva) differs from user to user, using the color difference between the darkest and lightest parts of the gingiva allows periodontal disease to be detected without depending on each user's own gingival color.
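A judgment rule based on this darkest-to-lightest span, rather than on absolute values, might be sketched as follows; the thresholds and level names are hypothetical, since the text does not specify them.

```python
def classify_by_color_span(gray_values):
    """Judge progression from the span between the darkest and lightest
    gingival pixel values instead of absolute values, so the rule does not
    depend on each user's baseline gingival color.
    The thresholds and level names below are hypothetical."""
    span = max(gray_values) - min(gray_values)
    if span < 30:
        return "normal"
    if span < 60:
        return "progression level 1-2"
    return "progression level 3-4"
```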
Note that, while an example using color as the evaluation parameter for the judgment rules has been described, gingival shape may be used instead of, or together with, color. An example using gingival shape is described in a modification of Embodiment 2.
(Modification 2 of Embodiment 1)
The periodontal disease detection system according to this modification will be described below with reference to Figure 9. The following description focuses on the differences from Embodiment 1, and descriptions of content that is the same as or similar to Embodiment 1 are omitted or simplified. This modification describes training data for training a machine learning model.
Note that the configuration of the periodontal disease detection system of this modification may be the same as that of the periodontal disease detection system 1 according to Embodiment 1, and the reference numerals of the periodontal disease detection system 1 are used in the following description.
Figure 9 is a diagram for explaining the training data according to this modification. In Figure 9, numbers are assigned to each type of tooth in the upper and lower jaws. For example, "1" is a central incisor and "2" is a lateral incisor.
As shown in Figure 9, multiple points are set for each tooth in order to obtain the ground-truth data of the training data. In the example of Figure 9, three points (1 (left), 2 (center), 3 (right)) are set on the outer side of a maxillary central incisor, and three points (4 to 6) are set on the buccal side of the maxillary central incisor. Also, in the example of Figure 9, three points (11 to 13) are set on the outer side of a mandibular central incisor, and three points (14 to 16) are set on the buccal side of the mandibular central incisor. The measurement points are set in the gingival area at the boundary between the tooth and the gingiva.
The measurement items include at least one of the periodontal pocket depth and the BOP value; in this modification they include both. That is, in this modification, the periodontal pocket depth and BOP value are measured at six locations per tooth (e.g., points 1 to 6 for a maxillary central incisor). Alternatively, the periodontal pocket depth and BOP value may be measured at three locations per tooth (e.g., points 1 to 3 or 4 to 6 for a maxillary central incisor).
The periodontal pocket depth values may be set into nine classes, one for each of 1 to 9 mm; into two classes, 3 mm or less and 4 mm or more; into three classes, 2 mm or less, 3 to 5 mm, and 6 mm or more; or into five classes, 2 mm or less, 3 mm, 4 mm, 5 mm, and 6 mm or more.
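As a minimal sketch, the three-class scheme mentioned above (2 mm or less / 3 to 5 mm / 6 mm or more) could be encoded as follows; the class labels are illustrative.

```python
def pocket_class_three(depth_mm):
    """Three-class scheme from the text: 2 mm or less, 3-5 mm, 6 mm or more."""
    if depth_mm <= 2:
        return "2mm or less"
    if depth_mm <= 5:
        return "3-5mm"
    return "6mm or more"
```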
The BOP value is assigned 0 or 1 depending on whether or not there is bleeding.
Here, the following supplementary data are attached to each image as training data for the machine learning model.
The supplementary data include (1) information indicating whether the image was taken of the upper or lower jaw, (2) information indicating whether the image was taken of the right or left side with respect to the front teeth (central incisors), (3) information indicating the tooth number (1 to 8), (4) information indicating whether the image is of the lingual side or the buccal side, (5) information indicating the periodontal pocket values, and (6) information indicating the BOP values. The information (5) indicating the periodontal pocket values and the information (6) indicating the BOP values include the measurement results for each measurement point. For example, the supplementary information "upper jaw, left, number 5, lingual side, pocket values 3, 2, 5, BOP values 0, 0, 1" is attached.
The periodontal pocket values (5) and the BOP values (6) are used as ground-truth labels during training. Note that tooth numbers are set with primary teeth excluded, but this is not a limitation.
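The supplementary data record described in items (1) to (6) might be represented as follows; the field names are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Annotation:
    jaw: str              # (1) "upper" or "lower"
    side: str             # (2) "left" or "right" of the central incisors
    tooth_number: int     # (3) 1 to 8
    surface: str          # (4) "lingual" or "buccal"
    pocket_mm: List[int]  # (5) pocket value per measurement point (label)
    bop: List[int]        # (6) BOP value (0/1) per measurement point (label)

# The example supplementary information given in the text:
example = Annotation("upper", "left", 5, "lingual", [3, 2, 5], [0, 0, 1])
```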
各歯周領域における歯周ポケットの深さの計測値(正解ラベル)を多数含んでなるセットを学習データとして与え所定のアルゴリズムを用いて機械学習を行わせることで、生成される機械学習モデルの判定精度を向上させることができる。また、歯周ポケットの深さの推定結果、及び、BOP値を基に、歯周病の進行度合い、及び、歯周外科処置の要否の少なくとも一方を機械学習モデルに判定させることができる。 By providing a set containing a large number of periodontal pocket depth measurements (correct labels) in each periodontal region as training data and performing machine learning using a specified algorithm, the accuracy of the generated machine learning model can be improved. Furthermore, based on the estimated periodontal pocket depth and the BOP value, the machine learning model can determine at least one of the degree of progression of periodontal disease and the need for periodontal surgical treatment.
(実施の形態2)
以下、本実施の形態に係る歯周疾患検出システムについて、図10~図14を参照しながら説明する。
(Embodiment 2)
Hereinafter, the periodontal disease detection system according to this embodiment will be described with reference to FIGS. 10 to 14.
[2-1.歯周疾患検出システムの構成]
本実施の形態に係る歯周疾患検出システムは、口腔内の所定の領域を撮影した画像に基づいてユーザの歯周疾患を検出するための情報処理システムであり、携帯端末50に替えて携帯端末50aを備える点において、実施の形態1に係る歯周疾患検出システムと相違する。以降では、携帯端末50aの構成等の実施の形態1との相違点を中心に説明し、同一又は類似の構成は説明を省略又は簡略化する。
[2-1. Configuration of periodontal disease detection system]
The periodontal disease detection system according to this embodiment is an information processing system for detecting periodontal disease in a user based on an image of a predetermined area in the oral cavity, and differs from the periodontal disease detection system according to embodiment 1 in that it includes a mobile terminal 50a instead of the mobile terminal 50. The following description will focus on differences from embodiment 1, such as the configuration of the mobile terminal 50a, and descriptions of identical or similar configurations will be omitted or simplified.
図10は、本実施の形態に係る携帯端末50aの機能構成を示すブロック図である。 Figure 10 is a block diagram showing the functional configuration of the mobile terminal 50a according to this embodiment.
図10に示すように、携帯端末50aは、実施の形態1に係る携帯端末50の歯牙種類識別部52を備えておらず、かつ、領域検出部151及びルール選択部152を備える。 As shown in FIG. 10, the mobile terminal 50a does not include the tooth type identification unit 52 of the mobile terminal 50 according to embodiment 1, but does include an area detection unit 151 and a rule selection unit 152.
画像処理部53は、撮影画像から色彩が補正された補正画像を生成する。画像処理部53は、実施の形態1に示すように基準色データを用いて撮影画像から補正画像を生成してもよいし、ホワイトバランス処理を実行することで撮影画像から補正画像を生成してもよい。 The image processing unit 53 generates a corrected image in which the colors have been corrected from the captured image. The image processing unit 53 may generate the corrected image from the captured image using reference color data as in embodiment 1, or may generate the corrected image from the captured image by performing white balance processing.
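The reference-color correction described here (and in Embodiment 1) can be sketched as a per-channel gain adjustment: each of the R, G, B channels is scaled so that the mean color of the natural-tooth region matches the user-specific reference color data. This is a minimal sketch under that assumption; the function name and the use of a simple channel-wise ratio are illustrative, not the disclosed implementation:

```python
import numpy as np

def correct_colors(image_rgb, tooth_mean_rgb, reference_rgb):
    """Scale each color channel by reference / measured gain so that the
    mean color of the natural-tooth region matches the reference color."""
    gains = np.asarray(reference_rgb, float) / np.asarray(tooth_mean_rgb, float)
    corrected = image_rgb.astype(float) * gains  # broadcasts over H x W x 3
    return np.clip(corrected, 0, 255).astype(np.uint8)
```

A white-balance pass would work the same way, with the reference taken from a neutral (gray) target instead of the user's tooth color.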
領域検出部151は、画像処理部53により生成された補正画像内の歯牙が、歯列を区分することで定義される口腔内の複数の領域のうちのいずれの領域の歯牙であるかを検出する。複数の領域は、口腔内を類似した特徴を有する歯牙群ごとに区分した領域である。類似した特徴とは、歯牙の形状が類似していることを含む。なお、歯牙の形状が変化すると歯肉の形状も同様に変化し得る。よって、ここでの歯牙の形状が類似しているとは、歯肉の形状が類似していることを意味する。 The area detection unit 151 detects which of the multiple areas of the oral cavity defined by dividing the tooth row the tooth in the corrected image generated by the image processing unit 53 belongs to. The multiple areas are areas obtained by dividing the oral cavity into groups of teeth with similar characteristics. Similar characteristics include similar tooth shapes. Note that if the shape of the teeth changes, the shape of the gums may also change similarly. Therefore, in this case, similar tooth shapes mean similar gum shapes.
図11は、本実施の形態に係る口腔内の複数の領域を説明するための図である。図11の(a)は、歯牙及び歯肉の形状が似ている領域を破線で区分した図である。図11の(b)は、本実施の形態において設定される複数の領域の一例を示す図である。 FIG. 11 is a diagram illustrating multiple regions within the oral cavity according to this embodiment. (a) of FIG. 11 is a diagram in which regions with similar shapes of teeth and gums are separated by dashed lines. (b) of FIG. 11 is a diagram showing an example of multiple regions set in this embodiment.
図11の(a)では、上顎、下顎それぞれで3個、計6個の領域が示されている。具体的には、上顎側では、右側の臼歯を含む上顎右と、中切歯、側切歯及び犬歯を含む上顎前と、左側の臼歯を含む上顎左との3つの領域を含み、下顎側では、右側の臼歯を含む下顎右と、中切歯、側切歯及び犬歯を含む下顎前と、左側の臼歯を含む下顎左との3つの領域を含む。 In Figure 11 (a), six regions are shown, three on each of the upper and lower jaws. Specifically, the upper jaw includes three regions: the right upper jaw, which includes the right molars; the front upper jaw, which includes the central incisors, lateral incisors, and canines; and the left upper jaw, which includes the left molars. The lower jaw includes three regions: the right lower jaw, which includes the right molars; the front lower jaw, which includes the central incisors, lateral incisors, and canines; and the left lower jaw, which includes the left molars.
図11の(b)に示すように、複数の領域は、第1領域R1~第4領域R4を含む。具体的には、第1領域R1は、上顎及び下顎の奥歯のうち舌側の領域を示し、第2領域R2は、上顎及び下顎の奥歯のうち頬側の領域を示し、第3領域R3は、上顎及び下顎の前歯のうち舌側の領域を示し、第4領域R4は、上顎及び下顎の前歯のうち頬側の領域を示す。 As shown in (b) of FIG. 11, the multiple regions include a first region R1 to a fourth region R4. Specifically, the first region R1 indicates the lingual-side region of the molars of the upper and lower jaws, the second region R2 indicates the buccal-side region of the molars of the upper and lower jaws, the third region R3 indicates the lingual-side region of the anterior teeth of the upper and lower jaws, and the fourth region R4 indicates the buccal-side region of the anterior teeth of the upper and lower jaws.
なお、複数の領域は、2以上の領域であればよく、2個又は3個の領域であってもよいし、5個以上の領域であってもよい。 Note that the number of regions may be two or more, and may be two or three, or five or more.
図12は、本実施の形態に係る複数の領域を撮影した画像の一例を示す図である。図12の(a)は、下顎前歯を舌側から撮影した画像を示す。図12の(a)に示す画像は、図11の(b)に示す第3領域R3において撮影される画像の一例である。図12の(b)は、下顎前歯を頬側から撮影した画像を示す。図12の(b)に示す画像は、図11の(b)に示す第4領域R4において撮影される画像の一例である。図12の(c)は、下顎奥歯を舌側から撮影した画像を示す。図12の(c)に示す画像は、図11の(b)に示す第1領域R1において撮影される画像の一例である。図12の(d)は、下顎奥歯を頬側から撮影した画像を示す。図12の(d)に示す画像は、図11の(b)に示す第2領域R2において撮影される画像の一例である。 FIG. 12 shows an example of an image obtained by photographing multiple regions in this embodiment. (a) of FIG. 12 shows an image of a mandibular anterior tooth photographed from the lingual side. The image shown in (a) of FIG. 12 is an example of an image photographed in the third region R3 shown in (b) of FIG. 11. (b) of FIG. 12 shows an image of a mandibular anterior tooth photographed from the buccal side. The image shown in (b) of FIG. 12 is an example of an image photographed in the fourth region R4 shown in (b) of FIG. 11. (c) of FIG. 12 shows an image of a mandibular molar photographed from the lingual side. The image shown in (c) of FIG. 12 is an example of an image photographed in the first region R1 shown in (b) of FIG. 11. (d) of FIG. 12 shows an image of a mandibular molar photographed from the buccal side. The image shown in (d) of FIG. 12 is an example of an image photographed in the second region R2 shown in (b) of FIG. 11.
本実施の形態では、領域検出部151は、補正画像が第1領域R1~第4領域R4のいずれの領域の画像であるかを検出する。領域検出部151は、補正画像に映る歯牙及び歯周領域が第1領域R1~第4領域R4のいずれの領域から撮影されたかを検出するとも言える。 In this embodiment, the region detection unit 151 detects which of the first region R1 to fourth region R4 the corrected image is an image of. It can also be said that the region detection unit 151 detects from which of the first region R1 to fourth region R4 the teeth and periodontal region shown in the corrected image were photographed.
なお、領域検出部151が領域を検出する方法は特に限定されず、例えば、機械学習モデルを用いる方法であってもよいし、パターンマッチングを用いる方法であってもよいし、その他の公知のいかなる方法であってもよい。機械学習モデルは、歯牙及び歯周領域を含む画像が入力されると、当該画像に映る歯牙及び歯周領域が第1領域R1~第4領域R4のいずれの領域の画像であるかを出力するように学習された学習モデルである。 The method by which the region detection unit 151 detects regions is not particularly limited, and may be, for example, a method using a machine learning model, a method using pattern matching, or any other known method. The machine learning model is a learning model that is trained to output, when an image including teeth and periodontal regions is input, which of the first region R1 to fourth region R4 the teeth and periodontal region shown in the image belongs to.
図12の(a)~(d)に示すように、歯牙の種類(臼歯、犬歯、前歯)及び撮影方向(頬側、舌側)の少なくとも一方によって、歯牙及び歯肉の形状が異なっている。また、歯周疾患の種類及び進行に伴う歯肉の変形の度合も、歯牙の種類及び撮影方向の少なくとも一方により異なると考えられる。そのため、全ての歯牙及び歯肉を1つの学習モデルで学習した場合、歯牙、歯肉ごとの固有の形状が原因で、歯周病の進行度合いを誤検出する虞がある。 As shown in (a) to (d) of FIG. 12, the shapes of the teeth and gums vary depending on at least one of the tooth type (molar, canine, anterior tooth) and the imaging direction (buccal side, lingual side). The degree of gum deformation associated with the type and progression of periodontal disease is also thought to vary depending on at least one of the tooth type and imaging direction. Therefore, if all teeth and gums are learned with a single learning model, the shapes unique to each tooth and gum may cause the degree of progression of periodontal disease to be detected incorrectly.
そこで、本実施の形態では、記憶部58は、複数の領域それぞれにおいて、当該領域に応じた画像を用いて学習された複数の学習モデルを予め記憶している。複数の学習モデルは、複数の領域のそれぞれに対応する複数の学習モデルであって、それぞれが当該領域の歯牙及び歯肉を撮影した画像データを入力とし当該領域における歯周疾患に関する情報を出力するように学習された学習モデルである。つまり、複数の領域のそれぞれにおいて、当該領域に一対一で対応する学習モデルが生成されている。歯周疾患に関する情報は、歯周疾患の有無、歯周疾患の進行状況、歯周疾患の予兆等を含む。また、歯周疾患に関する情報は、歯周ポケットの深さ、BOP及びGI値の少なくとも1つの推定結果を含んでいてもよい。 In this embodiment, the memory unit 58 pre-stores, for each of the multiple regions, multiple learning models that have been trained using images corresponding to that region. The multiple learning models correspond to each of the multiple regions, and each is a learning model that has been trained to input image data of the teeth and gums in that region and output information related to periodontal disease in that region. In other words, for each of the multiple regions, a learning model that corresponds one-to-one to that region is generated. Information related to periodontal disease includes the presence or absence of periodontal disease, the progress of periodontal disease, signs of periodontal disease, etc. The information related to periodontal disease may also include estimated results for at least one of periodontal pocket depth, BOP, and GI value.
複数の学習モデルのそれぞれには、複数の領域のうちのどの領域に対応する学習モデルであるかを示す情報が対応付けられている。本実施の形態では、複数の学習モデルのそれぞれに、第1領域R1~第4領域R4のいずれかを示す情報が対応付けられている。 Each of the multiple learning models is associated with information indicating which of the multiple regions the learning model corresponds to. In this embodiment, each of the multiple learning models is associated with information indicating one of the first region R1 to fourth region R4.
本実施の形態では、第1領域R1の撮影画像に基づく補正画像、第2領域R2の撮影画像に基づく補正画像、第3領域R3の撮影画像に基づく補正画像、及び、第4領域R4の撮影画像に基づく補正画像のそれぞれに対応する学習モデルが存在する。つまり、本実施の形態では、4つの領域のそれぞれに一対一に対応する4つの学習モデルが予め作成され記憶部58に記憶されている。 In this embodiment, there are learning models corresponding to the corrected image based on the captured image of the first region R1, the corrected image based on the captured image of the second region R2, the corrected image based on the captured image of the third region R3, and the corrected image based on the captured image of the fourth region R4. In other words, in this embodiment, four learning models corresponding one-to-one to each of the four regions are created in advance and stored in the memory unit 58.
ルール選択部152は、記憶部58に記憶されている複数の学習モデルの中から、領域検出部151が検出した領域に対応する学習モデルを選択する。ルール選択部152は、ツール選択部の一例である。 The rule selection unit 152 selects a learning model that corresponds to the area detected by the area detection unit 151 from among multiple learning models stored in the memory unit 58. The rule selection unit 152 is an example of a tool selection unit.
歯周疾患検出部55は、ルール選択部152により選択された学習モデルを用いて、ユーザの歯周疾患を検出する。歯周疾患検出部55は、選択された学習モデルに補正画像に基づく画像データ(例えば、歯周領域画像データ)を入力することでユーザの歯周疾患を検出する。 The periodontal disease detection unit 55 detects the user's periodontal disease using the learning model selected by the rule selection unit 152. The periodontal disease detection unit 55 detects the user's periodontal disease by inputting image data based on the corrected image (e.g., periodontal region image data) into the selected learning model.
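The one-to-one mapping between the four regions and their learning models, and the selection performed by the rule selection unit 152, can be sketched as a registry lookup. The region keys follow R1 to R4 from FIG. 11 (b); the model names are placeholders (assumptions), since the stored models are opaque here:

```python
# Hypothetical registry: one pre-trained model per intraoral region R1-R4.
REGION_MODELS = {
    "R1": "model_molar_lingual",     # molars, lingual side
    "R2": "model_molar_buccal",      # molars, buccal side
    "R3": "model_anterior_lingual",  # anterior teeth, lingual side
    "R4": "model_anterior_buccal",   # anterior teeth, buccal side
}

def select_model(detected_region: str):
    """Pick the learning model mapped one-to-one to the region reported
    by the region detection unit."""
    return REGION_MODELS[detected_region]
```

The detection unit then feeds the periodontal-region image data only to the model selected for the detected region.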
ここで、複数の学習モデルの学習に用いられる画像について、図13を参照しながら説明する。図13は、本実施の形態に係る学習モデルの学習に用いられる画像の一例を示す図である。図13に示す各画像において、一点鎖線の矩形枠の領域が画像データ抽出部54等により抽出され、抽出された画像データが学習用の画像データとして用いられてもよい。なお、一点鎖線の矩形枠は、図7Aの(a)に示す方法で形成されてもよい。 Here, images used for training multiple learning models will be described with reference to Figure 13. Figure 13 is a diagram showing an example of an image used for training a learning model according to this embodiment. In each image shown in Figure 13, the area enclosed by the dashed-dotted rectangular frame may be extracted by the image data extraction unit 54 or the like, and the extracted image data may be used as image data for training. Note that the dashed-dotted rectangular frame may be formed by the method shown in Figure 7A (a).
図13の(a)に示す画像は、下顎前歯を舌側から撮影した画像であり、第3領域R3を撮影した画像が入力される学習モデルの学習に用いられる。図13の(b)に示す画像は、下顎前歯を頬側から撮影した画像であり、第4領域R4を撮影した画像が入力される学習モデルの学習に用いられる。図13の(c)に示す画像は、下顎奥歯を舌側から撮影した画像であり、第1領域R1を撮影した画像が入力される学習モデルの学習に用いられる。図13の(d)に示す画像は、下顎奥歯を頬側から撮影した画像であり、第2領域R2を撮影した画像が入力される学習モデルの学習に用いられる。 The image shown in Figure 13 (a) is an image of a lower jaw front tooth photographed from the lingual side, and is used for training a learning model to which an image of the third region R3 is input. The image shown in Figure 13 (b) is an image of a lower jaw front tooth photographed from the buccal side, and is used for training a learning model to which an image of the fourth region R4 is input. The image shown in Figure 13 (c) is an image of a lower jaw back tooth photographed from the lingual side, and is used for training a learning model to which an image of the first region R1 is input. The image shown in Figure 13 (d) is an image of a lower jaw back tooth photographed from the buccal side, and is used for training a learning model to which an image of the second region R2 is input.
このように、領域に属している歯牙群ごとに学習用の画像(歯周領域画像)を生成し、当該領域に対応した学習モデルに当該領域に対応した歯周領域画像を入力することで、学習が行われる。また、第1領域R1を撮影した画像が入力される学習モデルには、図13の(a)、(b)及び(d)に示す画像は用いられず、第2領域R2を撮影した画像が入力される学習モデルには、図13の(a)~(c)に示す画像は用いられず、第3領域R3を撮影した画像が入力される学習モデルには、図13の(b)~(d)に示す画像は用いられず、第4領域R4を撮影した画像が入力される学習モデルには、図13の(a)、(c)、(d)に示す画像は用いられない。 In this way, learning is performed by generating a learning image (periodontal region image) for each group of teeth belonging to a region and inputting the periodontal region image corresponding to that region into the learning model corresponding to that region. Furthermore, the images shown in (a), (b), and (d) of FIG. 13 are not used in the learning model into which an image of the first region R1 is input; the images shown in (a) to (c) of FIG. 13 are not used in the learning model into which an image of the second region R2 is input; the images shown in (b) to (d) of FIG. 13 are not used in the learning model into which an image of the third region R3 is input; and the images shown in (a), (c), and (d) of FIG. 13 are not used in the learning model into which an image of the fourth region R4 is input.
それぞれの学習モデルの学習は、歯周病の進行状態が異なる複数の画像を用いて行われる。それぞれの学習モデルの学習方法は、実施の形態1と同様であり、説明を省略する。 Each learning model is trained using multiple images showing different stages of periodontal disease progression. The training method for each learning model is the same as in embodiment 1, so further explanation will be omitted.
なお、学習モデルを生成する処理部は、歯周疾患検出システムが備えていてもよいし、歯周疾患検出システム外の装置が備えていてもよい。 The processing unit that generates the learning model may be provided by the periodontal disease detection system, or by a device outside the periodontal disease detection system.
[2-2.歯周疾患検出システムの動作]
続いて、上記のように構成される歯周疾患検出システムの動作について、図14を参照しながら説明する。図14は、本実施の形態に係る歯周疾患検出システムの動作(歯周疾患検出方法)を示すフローチャートである。
[2-2. Operation of Periodontal Disease Detection System]
Next, the operation of the periodontal disease detection system configured as above will be described with reference to Fig. 14. Fig. 14 is a flowchart showing the operation of the periodontal disease detection system (periodontal disease detection method) according to this embodiment.
図14に示すように、まず、取得部51は、ユーザの口腔内の撮影領域を撮影した撮影画像を取得する(S101)。ステップS101は、図4に示すステップS12に相当する。 As shown in FIG. 14, first, the acquisition unit 51 acquires a captured image of the imaging area inside the user's oral cavity (S101). Step S101 corresponds to step S12 shown in FIG. 4.
次に、画像処理部53及び画像データ抽出部54は、撮影画像から、特定の歯牙の歯周領域の画像を生成する(S102)。ステップS102は、図4に示すステップS14及びS15に相当する。 Next, the image processing unit 53 and image data extraction unit 54 generate an image of the periodontal region of a specific tooth from the captured image (S102). Step S102 corresponds to steps S14 and S15 shown in Figure 4.
次に、領域検出部151は、歯周領域の画像、又は、補正画像から、当該画像に映る歯牙の種類及び撮影方向に基づく口腔内の領域を検出する(S103)。領域検出部151は、当該画像が第1領域R1~第4領域R4のいずれの領域に対応する画像であるかを、機械学習モデル又はパターンマッチング等により検出する。 Next, the region detection unit 151 detects an intraoral region from the image of the periodontal region or the corrected image based on the type of tooth shown in the image and the imaging direction (S103). The region detection unit 151 detects which of the first region R1 to fourth region R4 the image corresponds to by using a machine learning model, pattern matching, or the like.
次に、ルール選択部152は、領域検出部151が検出した口腔内の領域に基づいて、記憶部58に記憶されている複数の学習モデルの中から歯周疾患に関する検出を行う学習モデルを選択する(S104)。ルール選択部152は、複数の学習モデルのうち領域検出部151が検出した領域が対応付けられている学習モデルを、当該画像における歯周疾患に関する検出を行う学習モデルとして選択する。 Next, the rule selection unit 152 selects a learning model that performs periodontal disease detection from among the multiple learning models stored in the memory unit 58, based on the intraoral region detected by the region detection unit 151 (S104). The rule selection unit 152 selects, from the multiple learning models, the learning model that corresponds to the region detected by the region detection unit 151 as the learning model that performs periodontal disease detection in the image.
このように、口腔内の複数の領域それぞれに対応する学習モデルがあり、領域検出部151が検出した領域に応じた学習モデルがルール選択部152により選択される。 In this way, there are learning models corresponding to each of the multiple regions within the oral cavity, and the learning model corresponding to the region detected by the region detection unit 151 is selected by the rule selection unit 152.
次に、歯周疾患検出部55は、歯肉領域の画像と、選択された学習モデルとに基づいて、歯周疾患に関する検出を実行する(S105)。歯周疾患検出部55は、歯肉領域の画像を選択された学習モデルに入力することで、歯周疾患に関する検出結果を当該学習モデルの出力として取得する。ステップS105は、図4に示すステップS16に相当する。 Next, the periodontal disease detection unit 55 performs periodontal disease detection based on the image of the gingival region and the selected learning model (S105). The periodontal disease detection unit 55 inputs the image of the gingival region into the selected learning model, and obtains the detection results for periodontal disease as the output of the learning model. Step S105 corresponds to step S16 shown in Figure 4.
次に、出力部57は、歯周疾患検出部55の検出結果を出力する(S106)。ステップS106は、図4に示すステップS17に相当する。なお、どの学習モデルを用いて歯周疾患の検出を実行したかを示す情報が検出結果に含まれて出力されてもよい。 Next, the output unit 57 outputs the detection result of the periodontal disease detection unit 55 (S106). Step S106 corresponds to step S17 shown in Figure 4. Note that information indicating which learning model was used to detect periodontal disease may be included in the detection result and output.
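The flow of steps S101 to S106 above can be sketched as a simple orchestration. The callable parameters stand in for the processing units (image processing/extraction, region detection, the per-region model registry, and output); their names are illustrative assumptions, not the disclosed interfaces:

```python
def run_detection(captured_image, correct, extract, detect_region, models, output):
    """Sketch of S101-S106: color-correct the captured image, extract the
    periodontal-region image, detect the intraoral region (R1-R4), select
    the per-region model, run detection, and output the result."""
    corrected = correct(captured_image)      # part of S102 (image processing unit 53)
    periodontal = extract(corrected)         # part of S102 (image data extraction unit 54)
    region = detect_region(periodontal)      # S103 (region detection unit 151)
    model = models[region]                   # S104 (rule selection unit 152)
    result = model(periodontal)              # S105 (periodontal disease detection unit 55)
    output(result)                           # S106 (output unit 57)
    return result
```

Each stage consumes the previous stage's output, so swapping in a different per-region model only changes the S104 lookup.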
このように、本実施の形態では、歯牙形状が似ている歯周領域画像を用いて学習された学習モデルに基づいて、当該歯牙の歯周領域の歯周疾患を検出することができるので、例えば、1つの学習モデルを用いて歯周疾患を検出する場合に比べて、歯周疾患の検出精度を向上させることができる。また、領域ごとに当該領域に対応する学習モデルを用いた評価を行うので、歯周疾患の検出精度を向上させることができる。 In this way, in this embodiment, periodontal disease in the periodontal region of a tooth can be detected based on a learning model trained using periodontal region images with similar tooth shapes, thereby improving the accuracy of periodontal disease detection compared to, for example, detecting periodontal disease using a single learning model. Furthermore, evaluation is performed for each region using a learning model corresponding to that region, thereby improving the accuracy of periodontal disease detection.
また、1つの学習モデルを用いて歯周疾患を検出する場合と、本実施の形態のように領域ごとの学習モデルを用いて歯周疾患を検出する場合とで学習モデルが分類する種類(変形、変色)は同じであり、かつ、クラス数(例えば、歯周病の進行レベル)もそれほど多くないので、大きいネットワークを必要としない。よって、本実施の形態に係る学習モデルであれば、処理量を抑えることができる。 Furthermore, when detecting periodontal disease using a single learning model, the types of classification (deformation, discoloration) performed by the learning model are the same as when detecting periodontal disease using learning models for each region as in this embodiment, and the number of classes (e.g., progression levels of periodontal disease) is not very large, so a large network is not required. Therefore, the learning model according to this embodiment can reduce the amount of processing.
さらには、決定する特徴(つまり、クラス数)が絞られるので、学習モデルの学習効率が高い。すなわち、本実施の形態によれば、学習モデルの学習効率が高いので、学習モデルの検出精度を効果的に向上させることができる。 Furthermore, because the determining features (i.e., the number of classes) are narrowed down, the learning efficiency of the learning model is high. In other words, according to this embodiment, the learning efficiency of the learning model is high, so the detection accuracy of the learning model can be effectively improved.
(実施の形態2の変形例)
以下では、本変形例に係る歯周疾患検出システムについて、図15~図18Bを参照しながら説明する。なお、以下では、実施の形態2との相違点を中心に説明し、実施の形態2と同一又は類似の内容については説明を省略又は簡略化する。本変形例では、ルール選択部(ツール選択部の一例)が選択する複数の歯周疾患検出ツールに、少なくとも1以上の判定ルールが含まれる点において実施の形態に係る歯周疾患検出システムと相違する。なお、ルール選択部が選択する複数の歯周疾患検出ツールには、1以上の機械学習モデルが含まれていてもよく、例えば1以上の判定ルール及び1以上の機械学習モデルが含まれていてもよい。機械学習モデルは、画像データから歯周疾患に関する情報を出力するために生成される。また、判定ルール及び機械学習モデルは、ツールの一例である。
(Modification of the second embodiment)
The periodontal disease detection system according to this modification will be described below with reference to FIGS. 15 to 18B. The following description focuses on differences from Embodiment 2, and descriptions of content identical or similar to Embodiment 2 are omitted or simplified. This modification differs from the periodontal disease detection system according to Embodiment 2 in that the multiple periodontal disease detection tools selected by the rule selection unit (an example of a tool selection unit) include at least one judgment rule. The multiple periodontal disease detection tools selected by the rule selection unit may also include one or more machine learning models; for example, they may include one or more judgment rules and one or more machine learning models. A machine learning model is generated to output information related to periodontal disease from image data. The judgment rules and machine learning models are each examples of tools.
なお、本変形例の歯周疾患検出システムの構成は実施の形態2に係る歯周疾患検出システム2と同様であってもよく、以下では歯周疾患検出システム2の符号を用いて説明する。 Note that the configuration of the periodontal disease detection system of this modified example may be the same as that of the periodontal disease detection system 2 according to embodiment 2, and the following description will use the reference numerals of the periodontal disease detection system 2.
図15に示すように、ルール選択部152は、領域検出部151により口腔内の領域が検出される(S103)と、領域検出部151が検出した口腔内の領域に基づいて、記憶部58に記憶されている複数の歯周疾患検出ツールの中から歯周疾患に関する検出を行う2つの歯周疾患検出ツールを選択する(S204)。 As shown in FIG. 15, when the area detection unit 151 detects an area within the oral cavity (S103), the rule selection unit 152 selects two periodontal disease detection tools for detecting periodontal disease from among the multiple periodontal disease detection tools stored in the memory unit 58, based on the area within the oral cavity detected by the area detection unit 151 (S204).
ここで複数の歯周疾患検出ツールは、それぞれが当該領域を含む画像データ内の歯牙について隣接する歯牙との間の歯肉画像を入力とし当該領域における歯周疾患に関する情報を出力するための1以上の判定ルールを少なくも含む。1以上の判定ルールは、例えば、複数の領域に属する歯牙ごとの、歯周病の複数の進行段階での歯肉の形状データから作成されている。 Here, each of the multiple periodontal disease detection tools includes at least one judgment rule that takes as input, for a tooth in image data containing the relevant region, an image of the gingiva between that tooth and its adjacent teeth, and outputs information related to periodontal disease in that region. The one or more judgment rules are created, for example, from gingival shape data at multiple progression stages of periodontal disease for each tooth belonging to the multiple regions.
複数の歯周疾患検出ツールは、複数の判定ルールを含んでいてもよいし、1以上の機械学習モデルと1以上の判定ルールとを含んでいてもよい。判定ルールは、実施の形態1の変形例で示したように歯肉の色を用いたルールであってもよいし、歯肉の形状を用いたルールであってもよい。歯肉の色、歯肉の形状(例えば、歯肉乳頭歯肉の形状)は、ルールベースにおける評価パラメータである。 The multiple periodontal disease detection tools may include multiple judgment rules, or may include one or more machine learning models and one or more judgment rules. The judgment rules may be rules using gingival color, as shown in the modified example of embodiment 1, or may be rules using gingival shape. The gingival color and gingival shape (for example, the shape of the gingival papilla) are evaluation parameters in the rule base.
例えば、機械学習モデルと判定ルールとが選択される場合、機械学習モデル及び判定ルールの一方が検出しにくい歯周疾患を機械学習モデル及び判定ルールの他方が検出し得る。歯周疾患の検出を互いに補完することが可能となるので、歯周疾患の見落としを抑制することができる。 For example, when a machine learning model and a judgment rule are selected, periodontal disease that is difficult to detect by one of the machine learning model and the judgment rule may be detected by the other of the machine learning model and the judgment rule. Since the detection of periodontal disease can complement each other, it is possible to prevent periodontal disease from being overlooked.
ルール選択部152は、口腔内の領域と、用いる歯周疾患検出ツールとが対応付けられたテーブルを用いて2つの歯周疾患検出ツールを選択してもよいし、当該領域の歯周疾患の検出に用いた歯周疾患検出ツールの履歴に基づいて2つの歯周疾患検出ツールを選択してもよいし、他の方法により2つの歯周疾患検出モデルを選択してもよい。 The rule selection unit 152 may select the two periodontal disease detection tools using a table that associates regions within the oral cavity with the periodontal disease detection tools to be used, may select them based on the history of periodontal disease detection tools used to detect periodontal disease in the relevant region, or may select them by some other method.
なお、ルール選択部152は、ステップS204において、3以上の歯周疾患検出ツールを選択してもよい。3以上の歯周疾患検出ツールが選択される場合、選択される機械学習モデル及び判定ルールの数は特に限定されない。ルール選択部152が選択する歯周疾患検出ツールの数は、例えば、ユーザにより設定されてもよい。 In addition, the rule selection unit 152 may select three or more periodontal disease detection tools in step S204. When three or more periodontal disease detection tools are selected, there is no particular limit to the number of machine learning models and judgment rules selected. The number of periodontal disease detection tools selected by the rule selection unit 152 may be set by the user, for example.
次に、歯周疾患検出部55は、画像と、選択された2つの歯周疾患検出モデルとに基づいて、歯周疾患に関する検出を実行する(S205)。歯周疾患検出部55は、2つの歯周疾患検出モデルとして機械学習モデルと、判定ルールとが選択された場合、機械学習モデルを用いて歯周疾患に関する検出を実行し、判定ルールを用いて歯周疾患に関する検出を実行する。また、歯周疾患検出部55は、2つの歯周疾患検出モデルとして2つの判定ルールが選択された場合、2つの判定ルールのそれぞれで歯周疾患に関する検出を実行する。 Next, the periodontal disease detection unit 55 performs detection related to periodontal disease based on the image and the two selected periodontal disease detection tools (S205). When a machine learning model and a judgment rule are selected as the two periodontal disease detection tools, the periodontal disease detection unit 55 performs detection related to periodontal disease using the machine learning model and also performs detection using the judgment rule. When two judgment rules are selected as the two periodontal disease detection tools, the periodontal disease detection unit 55 performs detection related to periodontal disease using each of the two judgment rules.
ここで、歯肉の形状を用いて歯周疾患を検出する処理について、図16~図18Bを参照しながら説明する。図16~図17Cを用いて、歯肉の形状の一例である歯間乳頭歯肉の先端部の角度について説明し、図18A及び図18Bを参照しながら歯肉の形状の他の一例である歯間乳頭歯肉の先端部の半径について説明する。 Here, the process of detecting periodontal disease using the shape of the gingiva will be explained with reference to Figures 16 to 18B. The angle of the tip of the interdental papilla gingiva, which is one example of the shape of the gingiva, will be explained with reference to Figures 16 to 17C, and the radius of the tip of the interdental papilla gingiva, which is another example of the shape of the gingiva, will be explained with reference to Figures 18A and 18B.
図16は、本変形例に係る歯間乳頭歯肉の先端部の角度θを示す図である。歯周病を発症すると歯牙と歯牙との間の歯肉が炎症により腫れるため、歯間乳頭歯肉の形状が変形する。よって、歯間乳頭歯肉の形状として歯間乳頭歯肉の先端部に基づく角度θと、健康な状態の歯肉における歯間乳頭歯肉の先端部に基づく角度とを比較し、先端部の角度から歯肉炎又は歯周病を発症している状態であるか否かを判定することができる。 Figure 16 is a diagram showing the angle θ of the tip of the interdental papilla gingiva in this modified example. When periodontal disease develops, the gums between the teeth become inflamed and swollen, causing the shape of the interdental papilla gingiva to change. Therefore, by comparing the angle θ based on the tip of the interdental papilla gingiva as the shape of the interdental papilla gingiva with the angle based on the tip of the interdental papilla gingiva in healthy gums, it is possible to determine whether gingivitis or periodontal disease is present from the angle of the tip.
図16では、各領域の正常時の角度θを示している。各領域において正常時の角度θが異なり得る。そこで、各領域において、角度θと正常及び進行レベルとが対応付けられた1以上の判定ルールが用いられる。なお、図16では、便宜上、歯牙の左右の歯間乳頭歯肉のうち一方の歯間乳頭歯肉の角度θのみを図示している。 FIG. 16 shows the normal angle θ for each region. The normal angle θ may differ from region to region. Therefore, for each region, one or more judgment rules are used in which the angle θ is associated with the normal state and progression levels. Note that, for convenience, FIG. 16 shows the angle θ of only one of the interdental papilla gingivae on the left and right sides of the tooth.
なお、角度θを含む判定ルールを用いる場合、角度θを検出する必要があるので、歯牙の一部と、当該歯牙の左右の歯間乳頭歯肉の少なくとも一方を含む画像が必要となる。例えば、歯周疾患の見逃しを抑制する観点から、歯牙の一部と、当該歯牙の左右の歯間乳頭歯肉を含む画像が準備されるとよい。図16の例では、左右の歯間乳頭歯肉を含む画像が示されている。 When using a judgment rule that includes the angle θ, it is necessary to detect the angle θ, so an image that includes a portion of the tooth and at least one of the left and right interdental papilla and gingiva of the tooth is required. For example, from the perspective of preventing periodontal disease from being overlooked, it is advisable to prepare an image that includes a portion of the tooth and the left and right interdental papilla and gingiva of the tooth. The example in Figure 16 shows an image that includes the left and right interdental papilla and gingiva.
例えば、画像データ抽出部54は、少なくとも歯牙(例えば、特定の歯牙)の一部と、当該歯牙の左右の歯間乳頭歯肉の少なくとも一方を含む領域を、第2画像に基づく画像データとして第2画像から抽出してもよい。この抽出処理は、ステップS102で実行されてもよいし、ステップS205で実行されてもよい。このように、形状に関するルールを含む判定ルールを用いる場合、機械学習モデルに入力される画像とは異なる画像が準備されるとよい。 For example, the image data extraction unit 54 may extract from the second image an area including at least a portion of a tooth (e.g., a specific tooth) and at least one of the left and right interdental papilla gingiva of that tooth as image data based on the second image. This extraction process may be performed in step S102 or in step S205. In this way, when using a determination rule that includes a rule related to shape, it is preferable to prepare an image that is different from the image input to the machine learning model.
図17A~図17Cは、本変形例に係る歯間乳頭歯肉の先端部の角度の算出方法を説明する各図である。 Figures 17A to 17C are diagrams explaining the method for calculating the angle of the tip of the interdental papilla gingiva in this modified example.
図17Aに示すように、歯周疾患検出部55は、画像処理により、画像から歯牙(例えば、特定の歯牙)の間の歯間乳頭歯肉の先端部(例えば、頂点)を特定する。歯周疾患検出部55は、例えば、歯間において最も上方に位置する歯肉の位置を歯間乳頭歯肉の先端部としてもよいし、所定の形状を有する位置を歯間乳頭歯肉の先端部としてもよい。図17Aでは、矢印の先にある「●」部分が歯間乳頭歯肉の先端部として特定された例を示している。 As shown in Figure 17A, the periodontal disease detection unit 55 uses image processing to identify the tip (e.g., apex) of the interdental papilla gingiva between teeth (e.g., a specific tooth) from the image. The periodontal disease detection unit 55 may, for example, determine the position of the gingiva located highest between the teeth as the tip of the interdental papilla gingiva, or may determine a position having a predetermined shape as the tip of the interdental papilla gingiva. Figure 17A shows an example in which the "●" portion at the end of the arrow has been identified as the tip of the interdental papilla gingiva.
図17B及び図17Cに示すように、次に、歯周疾患検出部55は、歯間乳頭歯肉の先端部から下方に線分を引き、線分を先端部を軸として時計回り及び反時計回りに回転させて、最初に歯牙と接触した位置に線分を固定する。これにより、2つの線分と先端部とで角度θが形成される。 As shown in FIGS. 17B and 17C, the periodontal disease detection unit 55 then draws a line segment downward from the tip of the interdental papilla gingiva, rotates it clockwise and counterclockwise about the tip, and fixes a segment at each position where it first comes into contact with a tooth. The two fixed segments form the angle θ at the tip.
なお、歯肉の形状として角度θを含む場合、角度θと、正常及び進行レベル、又は、正常及び異常とが対応付けられた判定ルールが用いられる。 If the gingival shape includes the angle θ, a determination rule is used that associates the angle θ with normal and advanced levels, or normal and abnormal.
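The construction of FIGS. 17A to 17C can be sketched on a binary tooth mask: cast a ray straight down from the detected papilla tip, rotate it one degree at a time in each direction, and record the rotation at which it first touches a tooth pixel; the sum of the two rotations is θ. This is a simplified sketch under those assumptions (integer-degree steps, a fixed ray length); the disclosure does not specify the image-processing details:

```python
import numpy as np

def first_contact_rotation(tooth_mask, apex, direction, ray_len=40):
    """Rotate a ray from the apex (start: straight down) by whole degrees in
    the given direction (+1 or -1); return the rotation at which the ray
    first touches a tooth pixel, or None if it never does."""
    h, w = tooth_mask.shape
    y0, x0 = apex
    for k in range(91):
        ang = np.deg2rad(90.0 - direction * k)  # 90 deg = straight down in image coords
        dy, dx = np.sin(ang), np.cos(ang)
        for r in range(1, ray_len):
            y, x = int(round(y0 + dy * r)), int(round(x0 + dx * r))
            if 0 <= y < h and 0 <= x < w and tooth_mask[y, x]:
                return float(k)
    return None

def papilla_tip_angle(tooth_mask, apex):
    """Angle theta at the papilla tip, formed by the two rays that first
    contact the teeth on either side of the interdental papilla."""
    toward_right = first_contact_rotation(tooth_mask, apex, direction=+1)
    toward_left = first_contact_rotation(tooth_mask, apex, direction=-1)
    if toward_right is None or toward_left is None:
        return None
    return toward_right + toward_left
```

A larger θ (swollen, flattened papilla) would then be matched against the per-region judgment rule described above.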
Figures 18A and 18B show the radius r of the tip of the interdental papilla gingiva in this modified example. When periodontal disease develops, the gums between the teeth swell due to inflammation, so the tip of the triangle formed by the interdental papilla gingiva becomes rounded. Therefore, by comparing the radius of the rounded tip of the interdental papilla gingiva of healthy gums with the radius of the rounded tip of the interdental papilla gingiva affected by periodontal disease, it is possible to determine from the radius of the rounded tip whether gingivitis or periodontal disease is present.
Figure 18A shows the radius r when periodontal disease is present, and Figure 18B shows the radius r when it is not. The radius r may be, for example, the radius of curvature of the tip of the interdental papilla gingiva. When periodontal disease is present, the radius r tends to be large.
When the gingival shape includes the radius r, a determination rule is used that associates the radius r with normal and progression levels, or with normal and abnormal.
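One way to realize the radius-r rule is to fit a circle to gum-boundary points near the papilla tip and threshold the fitted radius. The least-squares (Kåsa-style) fit and the threshold value below are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

def tip_radius(points):
    """Least-squares circle fit to gum-boundary points near the papilla
    tip; returns the fitted radius r (a curvature-radius estimate)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Circle (x-cx)^2 + (y-cy)^2 = r^2 rewritten as a linear system:
    # 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x**2 + y**2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return float(np.sqrt(c + cx**2 + cy**2))

# Determination rule: a larger r suggests a blunted, possibly inflamed papilla.
R_NORMAL_MAX = 1.5  # hypothetical threshold in image units
boundary = [(np.cos(t), np.sin(t)) for t in np.linspace(0, np.pi, 10)]
r = tip_radius(boundary)  # points lie on a unit circle, so r is about 1.0
status = "abnormal" if r > R_NORMAL_MAX else "normal"
```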
The periodontal disease detection unit 55 then detects the user's periodontal disease based on the periodontal disease detection results of the two periodontal disease detection tools. For example, the periodontal disease detection unit 55 may compare the detection results of the two tools and adopt, as the detection result for the user, the result indicating the more severe periodontal disease symptoms.
When a determination rule and a machine learning model are selected, the periodontal disease detection unit 55 detects the user's periodontal disease by taking into account both the first detection result, which is the periodontal disease detection result according to the determination rule, and the second detection result, which is the periodontal disease detection result according to the machine learning model. For example, if periodontal disease is not detected in the first detection result but is detected in the second detection result, the periodontal disease detection unit 55 may prioritize the first detection result and adopt, as the detection result for the user, that no periodontal disease has been detected.
Note that a detection result judged to indicate more severe periodontal disease symptoms can also be described as the detection result in which the periodontal disease is more advanced.
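The two combination policies described above (adopt the worse result; or, when a determination rule and a machine learning model disagree, prioritize the rule and keep the model's result as supplementary information) can be sketched as follows. The ordinal severity scale and the fallback behavior for unspecified cases are assumptions:

```python
# Hypothetical ordinal severity scale: a higher value means worse symptoms.
SEVERITY = {"normal": 0, "gingivitis": 1, "periodontitis": 2}

def combine_worse(result_a, result_b):
    """Two detection tools: adopt the result with the worse symptoms."""
    return max(result_a, result_b, key=SEVERITY.__getitem__)

def combine_rule_priority(rule_result, model_result):
    """Determination rule vs. ML model: if the rule finds no disease but
    the model does, the rule wins and the model's result is kept only as
    supplementary information. Other cases fall back to worse-of
    (an assumption; the disclosure leaves them open)."""
    if SEVERITY[rule_result] == 0 and SEVERITY[model_result] > 0:
        return {"result": rule_result, "supplementary": model_result}
    return {"result": combine_worse(rule_result, model_result),
            "supplementary": None}

final = combine_rule_priority("normal", "gingivitis")
```

Here `final["result"]` stays `"normal"` while the model's `"gingivitis"` finding is attached as supplementary information, mirroring the notification behavior of the output unit 57 described next.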
Referring again to FIG. 15, the output unit 57 next outputs the detection result of the periodontal disease detection unit 55 (S106). If the periodontal disease detection unit 55 has determined that the detection result indicating the more severe periodontal disease symptoms is the user's detection result, the output unit 57 notifies the user of that result. When a machine learning model is used, the symptoms of periodontal disease may be detected as worse than they actually are due to factors such as the pattern of the gums. Therefore, when the periodontal disease detection unit 55 prioritizes the first detection result, the output unit 57 may also notify the user of the second detection result as supplementary information.
(Other embodiments)
The periodontal disease detection systems according to Embodiments 1 and 2 and their modifications (the embodiments, etc.) of the present disclosure have been described above, but the present disclosure is not limited to these embodiments.
For example, in the above embodiments an intraoral camera 10 whose primary purpose is photographing teeth is used, but the intraoral camera 10 may be an oral care device equipped with a camera, such as an oral irrigator with a camera.
In the above embodiments, the mobile terminals 50 and 50a are given as examples of the second information terminal, but the second information terminal may be a stationary information terminal.
In the above embodiments, the mobile terminals 50 and 50a are described as having a display unit, but the present disclosure is not limited to this; a display device capable of communicating with the mobile terminals 50, 50a may be provided as a device separate from them.
The machine learning model according to the above embodiments has one or more computation parameters adjustable by machine learning. The machine learning model may be configured, for example, as a neural network, a regression model, a decision tree model, a support vector machine, or another functional expression (computation model). The machine learning method is selected as appropriate for the model employed; backpropagation is one example, but the method is not limited to it.
The reference color data in Embodiment 1 and its modified example is not limited to measured data as long as it is based on the color of the user's teeth, and may be, for example, a color indicated by a shade guide.
The periodontal disease detection unit 55 according to the modified example of Embodiment 2 may determine, for each intraoral region, whether to adopt the detection result of the machine learning model or that of the determination rule, or may make this determination once for all intraoral regions. The intraoral regions may include, for example, two or more of the first region R1 to the fourth region R4 described above, but are not limited to these and may be set arbitrarily.
The rule selection unit 152 according to the modified example of Embodiment 2 may select only two or more machine learning models (i.e., no determination rule), or may select only two or more determination rules (i.e., no machine learning model).
The mobile terminal 50 in Embodiment 1 and its modified example only needs to include at least the output unit 57, the acquisition unit 51, and a processing unit. The processing unit performs predetermined processing such as storing the detection result in the storage unit 58, displaying it on the display unit 56, or transmitting it to another device. The periodontal disease detection method (information processing method) executed by such a mobile terminal 50 is a periodontal disease detection method executed by one or more processors, and includes: irradiating light into the user's oral cavity and outputting a first RGB image of an imaging area including a specific tooth and the periodontal region of that tooth including the gums; acquiring a detection result of the user's periodontal disease based on the first RGB image; and performing predetermined processing on the acquired detection result. The detection result may include the periodontal disease detection result for the user detected based on image data based on a second RGB image in which the gains of at least two of the red, green, and blue color components constituting the natural tooth region in the first RGB image are adjusted based on reference color data indicating the color reference of the natural teeth included in the imaging area, the reference color data corresponding to the user. In this case, the functions of the tooth type identification unit 52, the image processing unit 53, the image data extraction unit 54, and the periodontal disease detection unit 55 are executed by a device external to the mobile terminal, such as a server device.
Likewise, the mobile terminal 50a in Embodiment 2 and its modified example only needs to include at least the output unit 57, the acquisition unit 51, and a processing unit. The processing unit performs predetermined processing such as storing the detection result in the storage unit 58, displaying it on the display unit 56, or transmitting it to another device. The periodontal disease detection method (information processing method) executed by such a mobile terminal 50a is a periodontal disease detection method executed by one or more processors, and includes: outputting a first image of a specific tooth in the user's oral cavity and the periodontal region of that tooth including the gums; acquiring a detection result of the user's periodontal disease based on the first image; and performing predetermined processing on the acquired detection result. The detection result may include the periodontal disease detection result for the user detected based on (i) a periodontal disease detection tool selected, from a plurality of periodontal disease detection tools each corresponding to one of a plurality of intraoral regions and each trained to take image data including that region as input and to output information about periodontal disease in that region, according to which of the plurality of regions, defined by dividing the dentition, a second image generated from the first image and including the periodontal region of the specific tooth belongs to, and (ii) image data based on the second image. In this case, the functions of the image processing unit 53, the area detection unit 151, the rule selection unit 152, the image data extraction unit 54, and the periodontal disease detection unit 55 are executed by a device external to the mobile terminal, such as a server device.
Each processing unit included in the periodontal disease detection systems according to Embodiments 1 and 2 is typically realized as an LSI, i.e., an integrated circuit. These may be implemented as individual chips, or some or all of them may be integrated into a single chip.
Integration is not limited to LSI; it may be realized with a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacturing, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
In Embodiments 1 and 2, each component may be configured with dedicated hardware, or may be realized by executing a software program suited to that component. Each component may also be realized by a program execution unit such as a CPU or processor reading and executing a software program recorded on a recording medium such as a hard disk or semiconductor memory.
The division of functional blocks in the block diagrams is one example: multiple functional blocks may be realized as a single functional block, one functional block may be divided into several, and some functions may be moved to other functional blocks. The functions of multiple functional blocks having similar functions may also be processed by a single piece of hardware or software in parallel or by time division.
The periodontal disease detection systems according to Embodiments 1 and 2 (e.g., the mobile terminals 50, 50a) may be realized as a single device or by multiple devices. When the system is realized by multiple devices, its components may be distributed among those devices in any manner. For example, at least some of the functions of the system may be realized by the intraoral camera 10 (e.g., the signal processing unit 30). When the system is realized by multiple devices, the communication method between them is not particularly limited and may be wireless or wired, and wireless and wired communication may be combined between devices.
The present disclosure may also be realized as a periodontal disease detection method executed by the periodontal disease detection system, or as the intraoral camera, mobile terminal, or cloud server included in the system.
The order in which the steps in the sequence diagrams are executed is merely an example for concretely explaining the present disclosure, and other orders may be used. Some of the steps may also be executed simultaneously (in parallel) with other steps.
One aspect of the present disclosure may be a computer program that causes a computer to execute each of the characteristic steps included in the periodontal disease detection method shown in any of Figures 4, 14, and 15.
For example, the program may be a program to be executed by a computer, and one aspect of the present disclosure may be a computer-readable non-transitory recording medium on which such a program is recorded. Such a program may be recorded on a recording medium and distributed or circulated; by installing the distributed program on a device having another processor and causing that processor to execute it, that device can be made to perform each of the above processes.
The periodontal disease detection system and related aspects have been described above based on Embodiments 1 and 2, but the present disclosure is not limited to them. Forms obtained by applying various modifications conceivable to those skilled in the art to Embodiments 1 and 2 and their modified examples, as well as forms constructed by combining components of different embodiments, may also be included within the scope of one or more aspects, as long as they do not depart from the spirit of the present disclosure.
The present disclosure is useful for periodontal disease detection systems and the like that detect periodontal disease.
10 Intraoral camera
10a Head unit
10b Handle unit
20 Hardware unit
21 Photography unit
22 Sensor unit
23 Illumination unit
23A First LED
23B Second LED
23C Third LED
23D Fourth LED
24 Operation unit
30 Signal processing unit
31 Camera control unit
32, 53 Image processing unit
33 Control unit
34 Lighting control unit
35 Memory unit
40 Communication unit
50, 50a Portable terminal (second information terminal)
51 Acquisition unit (first acquisition unit, second acquisition unit)
52 Tooth type identification unit
54 Image data extraction unit
55 Periodontal disease detection unit
56 Display unit
57 Output unit
58 Storage unit
151 Area detection unit
152 Rule selection unit (tool selection unit)
F1, F2, F3 Rectangular frames
D Predetermined distance
r Radius
R1 First region
R2 Second region
R3 Third region
R4 Fourth region
θ Angle
Claims (24)
1. A periodontal disease detection system that detects periodontal disease in a user based on an image captured inside the oral cavity, the system comprising:
a first acquisition unit that irradiates light into the oral cavity of the user and acquires a first RGB image of an imaging area including a specific tooth and a periodontal region of the specific tooth including the gums;
a second acquisition unit that acquires reference color data indicating a color reference of natural teeth included in the imaging area, the reference color data corresponding to the user;
an image processing unit that generates a second RGB image by adjusting, based on the reference color data, the gains of at least two of the red, green, and blue color components constituting the natural tooth region in the first RGB image; and
a periodontal disease detection unit that detects periodontal disease of the user based on image data based on the second RGB image.
2. The periodontal disease detection system according to claim 1, further comprising an image data extraction unit that extracts, from the second RGB image, as the image data based on the second RGB image, periodontal region image data including the free gingiva, which is the gingiva that surrounds the cervical region in a band-like shape and includes the interdental papillae and the marginal gingiva, and a portion of the attached gingiva that is continuous with the free gingiva and extends from the bottom of the gingival sulcus to the mucogingival junction.
3. The periodontal disease detection system according to claim 2, wherein the periodontal region image data further includes the left and right interdental papilla gingivae.
4. The periodontal disease detection system according to any one of claims 1 to 3, wherein the reference color data is set based on the color of the natural teeth of the user.
5. The periodontal disease detection system according to any one of claims 1 to 3, wherein the image processing unit:
identifies a specific pixel area including pixels whose values based on the pixel values of the second RGB image are within a predetermined range; and
generates a third RGB image by adjusting, based on the reference color data, the gains of at least two of the red, green, and blue color components constituting the natural tooth region in the first RGB image, and
wherein the image data based on the second RGB image is image data based on the third RGB image.
6. The periodontal disease detection system according to claim 5, wherein the image processing unit generates an HSV image by converting the color space of the second RGB image into an HSV space, and identifies, as the specific pixel area, a pixel area in which one or more pixels of the HSV image satisfying at least one of a first predetermined range for saturation, a second predetermined range for hue, and a third predetermined range for lightness are located.
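The HSV-based identification of the specific pixel area described here can be sketched as follows. The concrete threshold ranges are hypothetical placeholders, since the disclosure does not fix numerical values; a pixel is marked when at least one of its hue, saturation, or value falls in its predetermined range:

```python
import colorsys

def specific_pixel_mask(rgb_pixels, h_range=(0.08, 0.18),
                        s_range=(0.0, 0.25), v_range=(0.7, 1.0)):
    """Convert RGB pixels (floats in 0..1) to HSV and mark each pixel
    that satisfies at least one of the predetermined hue, saturation,
    and lightness (value) ranges. Ranges are hypothetical."""
    mask = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        in_range = (h_range[0] <= h <= h_range[1] or
                    s_range[0] <= s <= s_range[1] or
                    v_range[0] <= v <= v_range[1])
        mask.append(in_range)
    return mask

# A bright, desaturated tooth-like pixel is marked; a dark saturated red is not.
mask = specific_pixel_mask([(0.9, 0.9, 0.85), (0.5, 0.1, 0.1)])
```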
7. The periodontal disease detection system according to claim 6, wherein the image processing unit identifies, as the specific pixel area, a pixel area in which one or more pixels are located whose red pixel values fall within a first range, whose green pixel values fall within a second range, and whose blue pixel values fall within a third range, among the pixels constituting the natural tooth region in the second RGB image.
8. The periodontal disease detection system according to claim 6, wherein the specific pixel area includes a plaque area.
9. The periodontal disease detection system according to claim 2, wherein the image data extraction unit extracts, from the second RGB image, as the periodontal region image data, the periodontal region from the tip of the interdental papilla gingiva of the specific tooth to the free gingival groove.
10. The periodontal disease detection system according to claim 9, wherein the image data extraction unit extracts, from the second RGB image, as the periodontal region image data, a rectangular frame area that circumscribes the left and right sides of the contour of the specific tooth and extends from the tip of the specific tooth to the free gingival groove.
11. The periodontal disease detection system according to claim 1 or 2, wherein the periodontal disease detection unit detects the periodontal disease of the user by inputting the image data based on the second RGB image into a learning model trained to take image data including a gingival area as input and to output information regarding periodontal disease in the image data.
12. The periodontal disease detection system according to claim 11, wherein the learning model outputs, as the information regarding the periodontal disease, an estimation result of at least one of periodontal pocket depth, Bleeding on Probing (BOP), and Gingival Index (GI) value.
13. The periodontal disease detection system according to claim 1 or 2, wherein the periodontal disease detection unit takes as input the R, G, and B values of the gum color acquired from the image data based on the second RGB image, and detects the user's periodontal disease in the relevant area based on a periodontal disease detection tool created from the ranges of R, G, and B values of normal gum color in the RGB color space and the ranges of R, G, and B values of gum color for each of multiple stages of periodontal disease.
14. The periodontal disease detection system according to claim 1 or 2, wherein the periodontal disease detection unit takes as input at least one of the hue information, saturation information, and brightness information of the gum color acquired from the image data based on the second RGB image, and detects the user's periodontal disease in the relevant area based on a periodontal disease detection tool created from at least one of the hue information, saturation information, and brightness information of normal gum color in the HSV color space and at least one range of the hue information, saturation information, and brightness information of gum color for each of multiple stages of periodontal disease.
15. The periodontal disease detection system according to claim 1 or 2, wherein the periodontal disease detection unit takes as input at least one of the hue information, saturation information, and luminance information of the gum color acquired from the image data based on the second RGB image, and detects the user's periodontal disease in the relevant area based on a periodontal disease detection tool created from at least one of the hue information, saturation information, and luminance information of normal gum color in the HSL color space and at least one range of the hue information, saturation information, and luminance information of gum color for each of multiple stages of periodontal disease.
16. The periodontal disease detection system according to claim 1 or 2, further comprising a tooth type identification unit that identifies the type of the specific tooth from the first RGB image or the second RGB image.
17. The periodontal disease detection system according to claim 1 or 2, further comprising an output unit that outputs information indicating the periodontal disease detected by the periodontal disease detection unit to a first information terminal of a dentist other than the user, wherein the output unit further outputs, to a second information terminal of the user, examination necessity information regarding whether the user needs an examination, the examination necessity information having been input to the first information terminal and acquired via the first acquisition unit.
18. The periodontal disease detection system according to claim 1 or 2, further comprising an output unit that outputs information indicating the periodontal disease detected by the periodontal disease detection unit to a second information terminal of the user.
19. The periodontal disease detection system according to claim 1 or 2, wherein the second acquisition unit acquires the reference color data before the first RGB image is acquired, the system further comprising a storage unit that stores the reference color data.
20. The periodontal disease detection system according to claim 1 or 2, wherein the reference color data is color data of the natural tooth obtained when the natural tooth is irradiated with a first light, and the first RGB image is an image obtained when the imaging area is irradiated with a second light different from the first light.
21. The periodontal disease detection system according to claim 1 or 2, wherein the at least two color components include at least two of a first red pixel average value of the red pixel values of the pixels constituting the natural tooth region in the first RGB image, a first green pixel average value of the green pixel values of those pixels, and a first blue pixel average value of the blue pixel values of those pixels, and the image processing unit generates the second RGB image by adjusting the gains so that the difference between the at least two color components and the reference color data falls within a predetermined range.
22. A periodontal disease detection method executed by a periodontal disease detection system that detects periodontal disease in a user based on an image captured inside the oral cavity, the method comprising:
irradiating light into the oral cavity of the user to acquire a first RGB image of an imaging area including a specific tooth and a periodontal region of the specific tooth including the gums;
acquiring reference color data indicating a color reference of natural teeth included in the imaging area, the reference color data corresponding to the user;
generating a second RGB image by adjusting, based on the reference color data, the gains of at least two of the red, green, and blue color components constituting the natural tooth region in the first RGB image; and
detecting periodontal disease of the user based on image data based on the second RGB image.
23. A periodontal disease detection method executed by one or more processors, wherein the one or more processors:
irradiate light into the oral cavity of a user and output a first RGB image of an imaging area including a specific tooth and a periodontal region of the specific tooth including the gums;
acquire a detection result of periodontal disease of the user based on the first RGB image; and
execute predetermined processing on the acquired detection result,
wherein the acquired detection result includes a detection result of the periodontal disease of the user detected based on image data based on a second RGB image in which the gains of at least two of the red, green, and blue color components constituting the natural tooth region in the first RGB image are adjusted based on reference color data indicating a color reference of natural teeth included in the imaging area, the reference color data corresponding to the user.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2024046268 | 2024-03-22 | ||
| JP2024-046268 | 2024-03-22 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025197661A1 true WO2025197661A1 (en) | 2025-09-25 |
Family
ID=97139094
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2025/008921 Pending WO2025197661A1 (en) | 2024-03-22 | 2025-03-11 | Periodontal disease detection system, periodontal disease detection method, and program |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025197661A1 (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2010005327A (en) * | 2008-06-30 | 2010-01-14 | Olympus Corp | Dental image processing apparatus, system, method, and program |
| JP2012011100A (en) * | 2010-07-02 | 2012-01-19 | Inspektor Research Systems Bv | Intraoral observation apparatus |
| JP2019030587A (en) * | 2017-08-09 | 2019-02-28 | Dicグラフィックス株式会社 | Gingivitis inspection system, gingivitis inspection method and program thereof |
| WO2020145238A1 (en) * | 2019-01-10 | 2020-07-16 | ミツミ電機株式会社 | Oral observation device |
| WO2023157842A1 (en) * | 2022-02-17 | 2023-08-24 | パナソニックIpマネジメント株式会社 | Intraoral camera, illumination control device, and illumination control method |
2025
- 2025-03-11 WO PCT/JP2025/008921 patent/WO2025197661A1/en active Pending
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP2043509B1 (en) | Methods and products for analyzing gingival tissues | |
| Tabatabaian et al. | Visual and digital tooth shade selection methods, related effective factors and conditions, and their accuracy and precision: A literature review | |
| JP4230113B2 (en) | Interactive dental treatment network | |
| US20110058717A1 (en) | Methods and systems for analyzing hard tissues | |
| WO2007063980A1 (en) | Intraoral panoramic image pickup device and intraoral panoramic image pickup system | |
| JP2002529122A (en) | System and method for analyzing tooth shade | |
| WO2024106391A1 (en) | Image processing method, image processing device, and program | |
| La Rosa et al. | A scoping review of new technologies for dental plaque quantitation: Benefits and limitations | |
| Fayed et al. | A Comparison between visual shade matching and digital shade analysis system using K-NN algorithm | |
| Moussa | Dental Shade Matching: Recent technologies and future smart applications | |
| WO2025197661A1 (en) | Periodontal disease detection system, periodontal disease detection method, and program | |
| WO2025197670A1 (en) | Periodontal disease detection system, periodontal disease detection method, and program | |
| Haciali et al. | Clinical assessment of dental color during dehydration and rehydration by various dental photography techniques | |
| TW202322745A (en) | Oral cavity detecting system | |
| WO2025173550A1 (en) | Information processing device, information processing method, and program | |
| WO2025173551A1 (en) | Information processing device, information processing method, and program | |
| WO2025115504A1 (en) | Dentition image generation device, dentition image generation method, and program | |
| WO2025115505A1 (en) | Tooth row image generation device, tooth row image generation method, and program | |
| WO2025182844A1 (en) | Image processing method, image processing device, and program | |
| RU2748029C1 (en) | Method for index assessment of the level of hygiene of the surface of teeth | |
| WO2024253058A1 (en) | Dental plaque detection device, dental plaque detection method, and program | |
| US20140272762A1 (en) | Apparatus and method for determining tooth shade | |
| JP2024148710A (en) | Oral cavity condition estimation device | |
| WO2025079525A1 (en) | Image processing method, image processing device, and program | |
| TW202221647A (en) | Image analysis system and image analysis method |