
WO2019003355A1 - System for providing image analysis result, method for providing image analysis result, and program


Info

Publication number
WO2019003355A1
WO2019003355A1 (PCT/JP2017/023807)
Authority
WO
WIPO (PCT)
Prior art keywords
image
image analysis
learned model
unknown
learned
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2017/023807
Other languages
English (en)
Japanese (ja)
Inventor
菅谷 俊二 (Shunji Sugaya)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Optim Corp
Original Assignee
Optim Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Optim Corp filed Critical Optim Corp
Priority to PCT/JP2017/023807 (published as WO2019003355A1)
Priority to JP2018545247A (granted as JP6474946B1)
Publication of WO2019003355A1
Legal status: Ceased

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis

Definitions

  • The present invention relates to an image analysis result providing system, an image analysis result providing method, and a program that select, from among a plurality of machine-learned models in which artificial intelligence has image-analyzed known images, a learned model of a known image whose imaging conditions are similar to those of an unknown image to be newly analyzed, and can thereby output an accurate image analysis result without spending learning time.
  • Patent Document 1 proposes a mechanism that automatically classifies person images by performing image analysis processing to determine who is shown.
  • Supervised learning is a well-known machine learning method by which artificial intelligence performs image analysis.
  • For example, entry detection at a level crossing is performed by image analysis that determines whether a person has entered the crossing.
  • Since the position of a monitoring camera is usually fixed, supervised learning is performed separately for each level crossing, such as level crossing A, level crossing B, and level crossing C.
  • Machine learning improves the accuracy of entry detection for each of crossing A, crossing B, and crossing C, and in subsequent image analysis processing, the learned models of crossing A, crossing B, and crossing C are each used for their own crossing.
  • However, to introduce entry detection at a new level crossing D, a large number of tagged images must be prepared as supervised data for crossing D and machine learning must be performed from scratch on that basis, so introducing entry detection takes considerable time and effort.
  • The inventor therefore focused on the fact that, by selecting and using the learned model of whichever of level crossings A, B, and C, already subjected to machine learning, has imaging conditions similar to those of level crossing D, image analysis accuracy for entry detection above a certain level can be obtained at crossing D from the beginning.
  • Accordingly, it is an object of the present invention to provide an image analysis result providing system, an image analysis result providing method, and a program that select, from among a plurality of machine-learned models in which artificial intelligence has image-analyzed known images, a learned model of a known image whose imaging conditions are similar to those of an unknown image to be newly analyzed, and can thereby output an accurate image analysis result without spending learning time.
  • the present invention provides the following solutions.
  • The invention according to a first feature provides an image analysis result providing system comprising: storage means for storing machine-learned models obtained by image analysis of known images; acquisition means for acquiring an unknown image for which no learned model has yet been created; selection means for selecting, from the stored learned models, a learned model of a known image whose imaging conditions are similar to those of the acquired unknown image; image analysis means for performing image analysis of the unknown image using the selected learned model; and providing means for providing the result of the image analysis.
  • According to the invention of the first feature, the system stores machine-learned models obtained by image analysis of known images, acquires an unknown image for which no learned model has yet been created, selects from the stored models a learned model of a known image whose imaging conditions are similar to those of the acquired unknown image, analyzes the unknown image using the selected model, and provides the result of the image analysis.
  • The invention according to the first feature falls in the category of an image analysis result providing system, but an image analysis result providing method and a program achieve the same operations and effects.
  • An invention according to a second feature is the image analysis result providing system according to the first feature, wherein the imaging conditions are an imaging position and an imaging angle with respect to an imaging target.
  • An invention according to a third feature is the image analysis result providing system according to the first or second feature, wherein the learned model consists of the mathematical expression and parameters used for the image analysis, calculated by the machine learning, together with the images used for the machine learning.
  • An invention according to a fourth feature is the image analysis result providing system according to any of the first through third features, further comprising creation means for creating a new machine-learned model by which the unknown image is image-analyzed.
  • An invention according to a fifth feature is the image analysis result providing system according to the fourth feature, wherein the image analysis means performs image analysis of the unknown image using the selected learned model during the period until the new machine-learned model is created.
  • An invention according to a sixth feature is the image analysis result providing system according to any of the first through fifth features, comprising creation means for creating a new machine-learned model by which the unknown image is image-analyzed, using as teacher data the analysis result obtained by analyzing the unknown image with the selected learned model.
  • The invention according to a seventh feature provides an image analysis result providing method comprising the steps of: storing machine-learned models obtained by image analysis of known images; acquiring an unknown image for which no learned model has yet been created; selecting, from the stored learned models, a learned model of a known image whose imaging conditions are similar to those of the acquired unknown image; analyzing the unknown image using the selected learned model; and providing the result of the image analysis.
  • The invention according to an eighth feature provides a program for causing an image analysis result providing system to execute the steps of: storing machine-learned models obtained by image analysis of known images; acquiring an unknown image for which no learned model has yet been created; selecting, from the stored learned models, a learned model of a known image whose imaging conditions are similar to those of the acquired unknown image; analyzing the unknown image using the selected learned model; and providing the result of the image analysis.
  • According to the present invention, by selecting and using, from among a plurality of machine-learned models in which artificial intelligence has analyzed known images, a learned model of a known image whose imaging conditions are similar to those of an unknown image to be newly analyzed, it becomes possible to provide an image analysis result providing system, an image analysis result providing method, and a program capable of outputting an accurate image analysis result without spending learning time.
  • FIG. 1 is a schematic diagram of a preferred embodiment of the present invention.
  • FIG. 2 is a diagram showing the relationship between the functional blocks of the camera 100 and the computer 200 and the respective functions.
  • FIG. 3 is a flow chart in the case of acquiring an unknown image from the camera 100, performing image analysis processing by the computer 200, and providing an image analysis result.
  • FIG. 4 is a view showing the relationship between the functional blocks of the camera 100 and the computer 200 and the respective functions in the case of performing a learned model creation process of an unknown image.
  • FIG. 5 is a flowchart of the camera 100 and the computer 200 in the case of performing a learned model creation process of an unknown image.
  • FIG. 6 is a diagram showing the relationship between the functional blocks of the camera 100 and the computer 200 and the respective functions when the image analysis process is switched depending on whether the learned model creation process of the unknown image is finished.
  • FIG. 7 is a flowchart of the process corresponding to A in the flowchart of FIG. 6: the learned model creation process for the unknown image performed by the computer 200 when the learned model creation process of the unknown image is not finished and machine learning of the unknown image is possible.
  • FIG. 8 is a flowchart of the process corresponding to B in the flowchart of FIG. 6: the learned model selection process performed by the computer 200 when the learned model creation process of the unknown image is not finished and machine learning of the unknown image is impossible.
  • FIG. 9 is a flowchart of the computer 200 in the case of performing machine learning of an unknown image by adding an image analysis result of the unknown image as teacher data when performing a learned model creation process of the unknown image.
  • FIG. 10 shows, as the learned models of level crossing A, level crossing B, and level crossing C, examples of the mathematical expressions and parameters calculated by machine learning and of the images used for machine learning, together with an example image of level crossing D, which is an unknown image.
  • FIG. 11 is a diagram for schematically describing the relationship between the camera 100, the computer 200, and the subject 400.
  • FIG. 12 is an example of a table showing a data structure of a learned model for each camera.
  • FIG. 13 is an example of a table showing a learned model used for image analysis for each camera when there is no learned model of the unknown image captured by the camera D.
  • FIG. 14 is an example of a table showing a learned model used for image analysis for each camera when a learned model of an unknown image captured by a camera D is created.
  • FIG. 1 is a schematic diagram of a preferred embodiment of the present invention. The outline of the present invention will be described based on FIG.
  • the image analysis result providing system includes a camera 100, a computer 200, and a communication network 300.
  • the number of cameras 100 is not limited to one, and may be plural.
  • the computer 200 is not limited to an existing device, and may be a virtual device.
  • the camera 100 includes an imaging unit 10, a control unit 110, a communication unit 120, and a storage unit 130.
  • the computer 200 also includes a control unit 210, a communication unit 220, a storage unit 230, and an input / output unit 240, as also shown in FIG.
  • the control unit 210 cooperates with the communication unit 220 and the storage unit 230 to realize the acquisition module 211. Further, the control unit 210 cooperates with the storage unit 230 to implement the selection module 212 and the image analysis module 213.
  • the storage unit 230 implements the storage module 231 in cooperation with the control unit 210.
  • the input / output unit 240 implements the provision module 241 in cooperation with the control unit 210 and the storage unit 230.
  • the communication network 300 may be a public communication network such as the Internet or a dedicated communication network, and enables communication between the camera 100 and the computer 200.
  • The camera 100 is an imaging device that includes an imaging element and a lens, can perform data communication with the computer 200, and can measure the distance to the subject 400.
  • Here, a web camera is illustrated as an example, but the camera may be any imaging device provided with the necessary functions, such as a digital camera, a digital video camera, a camera mounted on an unmanned aerial vehicle, a wearable device camera, a security camera, an on-vehicle camera, or a 360-degree camera.
  • the captured image may be stored in the storage unit 130.
  • the computer 200 is a computing device capable of data communication with the camera 100.
  • Here, a desktop computer is illustrated as an example, but the computer 200 may also be a mobile phone, a portable information terminal, a tablet terminal, a personal computer, an electric appliance such as a netbook terminal, a slate terminal, an electronic book terminal, or a portable music player, or a wearable terminal such as smart glasses or a head mounted display.
  • the storage module 231 of the computer 200 stores a plurality of learned models in the storage unit 230 (step S01).
  • the learned model may be acquired from another computer or storage medium, or may be created by the computer 200.
  • the storage unit 230 may be provided with a dedicated database.
  • FIG. 10 is a diagram showing, as the learned models of level crossing A, level crossing B, and level crossing C, examples of the mathematical expressions and parameters calculated by machine learning and examples of the images used for machine learning, together with an example of the unknown image of level crossing D for which a learned model has not yet been created.
  • FIG. 12 is an example of a table showing a data structure of a learned model for each camera.
  • A learned model here associates, for each camera, the mathematical expression used to analyze images of the subject with its parameters.
  • The image files with supervised data used in the machine learning that calculated each learned model may also be associated.
  • As imaging conditions for each camera, the imaging angle and imaging position may be associated and stored.
  • The model created using supervised data from camera A, which captured crossing A in FIG. 10, is learned model A; the model created using supervised data from camera B, which captured crossing B, is learned model B; and the model created using supervised data from camera C, which captured crossing C, is learned model C. For images of crossing D captured by camera D, no learned model has been created.
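  The per-camera table of FIG. 12 can be sketched as a simple mapping. The field names and placeholder values below are assumptions for illustration; the patent only states that each camera is associated with a mathematical expression, parameters, teacher-data images, and imaging conditions.

```python
# Hypothetical layout of the per-camera learned-model table (FIG. 12).
# Camera D has no entry yet, which is what makes its images "unknown".
learned_models = {
    "camera_A": {
        "formula": "xxxxxx",            # expression calculated by machine learning
        "parameters": ["AAA", "a"],     # illustrative placeholders
        "teacher_images": ["crossing_A_001.jpg"],
        "imaging_angle_deg": 30,
        "imaging_distance_m": (5, 6),
    },
    # camera_B and camera_C would have analogous entries.
}

def has_model(camera_id):
    """True if a learned model already exists for this camera's images."""
    return camera_id in learned_models

print(has_model("camera_A"), has_model("camera_D"))  # True False
```

  A dedicated database, as the description suggests, would replace this in-memory dictionary in practice.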
  • FIG. 11 is a diagram for schematically describing the relationship between the camera 100, the computer 200, and the subject 400. It is assumed that the camera 100 and the computer 200 can communicate with each other via the communication network 300.
  • the camera 100 in the present invention is an imaging device capable of measuring the distance to a subject.
  • As methods of measuring the distance to the subject, in addition to acquiring it from a sensor of the camera 100, when the subject can be imaged simultaneously from a plurality of different directions, the distance can be measured by learning the relationship between the disparity of the images captured by the respective cameras and the actual distance. The imaging angle can then be calculated using the measured distance. Furthermore, when the location of the camera 100 is fixed, the distance to the imaging location may be specified explicitly.
  • As the imaging angle, the angle in degrees by which the camera 100 is inclined from the horizontal direction is used.
  • Here, the imaging angle of the camera 100 is 30 degrees, and the imaging position, that is, the imaging distance, is 5-6 m.
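  The two measurements described above can be sketched as follows. The pinhole stereo formula Z = f·B/d and the height-based tilt angle are standard geometry rather than anything specified in the patent, and the numbers are illustrative (a camera 2.75 m above a target at a 5.5 m slant distance happens to give exactly the 30-degree angle of the example).

```python
import math

def stereo_distance(focal_px, baseline_m, disparity_px):
    """Two-camera depth, Z = f * B / d (parallel pinhole cameras assumed)."""
    return focal_px * baseline_m / disparity_px

def imaging_angle_deg(camera_height_m, slant_distance_m):
    """Tilt from horizontal, assuming the camera looks down at the target."""
    return math.degrees(math.asin(camera_height_m / slant_distance_m))

print(stereo_distance(700, 0.1, 14))        # 5.0 (metres)
print(imaging_angle_deg(2.75, 5.5))         # 30.0 (degrees)
```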
  • Here, an imaging angle and an imaging position were taken as examples of imaging conditions, but other conditions may also be included to help determine the degree of similarity between the unknown image and a learned model; in the case of entry detection at a level crossing, for example, the presence or absence of an alarm, the presence or absence of a crossing gate, and whether the line is single or double track.
  • the camera 100 transmits imaging data, which is an unknown image, to the computer 200 (step S02), and the acquisition module 211 of the computer 200 acquires the unknown image (step S03).
  • the acquisition module 211 acquires imaging conditions such as an imaging angle and an imaging position from the camera 100 together with the unknown image.
  • Here, a flow in which the camera 100 transmits imaging data, which is an unknown image, has been described, but the acquisition module 211 may instead instruct the camera 100 to transmit imaging data, and the camera 100 may transmit the imaging data upon receiving the instruction.
  • The acquisition module 211 may acquire not only images captured in real time by the camera 100 but also images captured by the camera 100 in the past and stored in the storage unit 130.
  • the selection module 212 of the computer 200 selects, from among the learned models stored in step S01, a learned model whose imaging condition is similar to the unknown image acquired in step S03 (step S04).
  • Since learned models exist for crossing A, crossing B, and crossing C, the selection module chooses the one whose imaging conditions are most similar to those of the unknown image. Assuming that the imaging angle of camera D, which captures crossing D, is 20 degrees and its imaging distance is 4-5 m, and further analyzing the composition of the image of crossing D, learned model B is selected here.
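  Step S04 can be sketched as a nearest-neighbor choice over imaging conditions. Model A's conditions loosely follow the 30-degree, 5-6 m example above; model B's and C's conditions are invented so that camera D's 20 degrees and 4-5 m (taken as the midpoint 4.5 m) land closest to B, matching the example.

```python
from dataclasses import dataclass

@dataclass
class LearnedModel:
    name: str
    angle_deg: float     # imaging angle from horizontal
    distance_m: float    # imaging distance (midpoint of the stored range)

def select_model(models, angle_deg, distance_m):
    """Return the stored model whose imaging conditions are closest (step S04)."""
    return min(models, key=lambda m: (m.angle_deg - angle_deg) ** 2
                                   + (m.distance_m - distance_m) ** 2)

models = [LearnedModel("A", 30, 5.5),   # 30 degrees, 5-6 m, as in the example
          LearnedModel("B", 22, 4.8),   # invented: conditions close to camera D
          LearnedModel("C", 45, 8.0)]   # invented
chosen = select_model(models, angle_deg=20, distance_m=4.5)  # camera D
print(chosen.name)  # B
```

  A real system would also compare image composition, as the text notes, and would likely normalize or weight each condition before measuring distance.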
  • the image analysis module 213 of the computer 200 performs image analysis of the unknown image captured by the camera D using the learned model B (step S05).
  • FIG. 13 is an example of a table showing a learned model used for image analysis for each camera when there is no learned model of the unknown image captured by the camera D.
  • Since learned model B is selected as the model whose imaging conditions are similar to those of level crossing D imaged by camera D, learned model B is displayed as the used model in the camera D column of FIG. 13.
  • The table is filled in assuming that yyyyyy is used as the mathematical expression and BBB, b, and β are used as the parameters.
  • the field of the teacher data may be left blank.
  • When a learned model for camera D is later created, the selection module 212 again selects the model most suitable for camera D, as shown in FIG. 14, and image analysis for camera D can be performed using that table.
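  The switch from FIG. 13 to FIG. 14 amounts to updating which model each camera uses once camera D's own model is created. A hypothetical sketch, with table keys and model identifiers invented for illustration:

```python
# Which learned model each camera currently uses (FIG. 13 state):
# camera D borrows model B while it has no model of its own.
used_model = {"camera_A": "A", "camera_B": "B",
              "camera_C": "C", "camera_D": "B"}

def register_model(camera_id, model_id, table):
    """Once a model is learned for camera_id, switch from the borrowed model."""
    table[camera_id] = model_id
    return table

register_model("camera_D", "D", used_model)  # FIG. 14 state
print(used_model["camera_D"])  # D
```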
  • the provision module 241 of the computer 200 provides the input / output unit 240 of the computer 200 with the image analysis result (step S06).
  • The image analysis result may be output not only as an image display but also together with a warning sound or light, in accordance with the purpose of the system to which it is provided.
  • The image analysis here is assumed to be applicable, according to the purpose of the system, to face recognition for identifying individuals, discrimination of the pest damage status of agricultural products, inventory confirmation in a warehouse, image recognition of an affected area for medical diagnosis, and the like.
  • As described above, according to the present invention, by selecting and using, from among a plurality of machine-learned models in which artificial intelligence has analyzed known images, a learned model of a known image whose imaging conditions are similar to those of an unknown image to be newly analyzed, it becomes possible to provide an image analysis result providing system, an image analysis result providing method, and a program capable of outputting an accurate image analysis result without spending learning time.
  • FIG. 2 is a diagram showing the relationship between the functional blocks of the camera 100 and the computer 200 and the respective functions.
  • the camera 100 includes an imaging unit 10, a control unit 110, a communication unit 120, and a storage unit 130.
  • the computer 200 also includes a control unit 210, a communication unit 220, a storage unit 230, and an input / output unit 240.
  • the control unit 210 cooperates with the communication unit 220 and the storage unit 230 to realize the acquisition module 211. Further, the control unit 210 cooperates with the storage unit 230 to implement the selection module 212 and the image analysis module 213.
  • the storage unit 230 implements the storage module 231 in cooperation with the control unit 210.
  • the input / output unit 240 implements the provision module 241 in cooperation with the control unit 210 and the storage unit 230.
  • the communication network 300 may be a public communication network such as the Internet or a dedicated communication network, and enables communication between the camera 100 and the computer 200.
  • The camera 100 is an imaging device that includes an imaging element and a lens, can perform data communication with the computer 200, and can measure the distance to the subject 400.
  • Here, a web camera is illustrated as an example, but the camera may be any imaging device provided with the necessary functions, such as a digital camera, a digital video camera, a camera mounted on an unmanned aerial vehicle, a wearable device camera, a security camera, an on-vehicle camera, or a 360-degree camera.
  • the captured image may be stored in the storage unit 130.
  • The camera 100 includes, as the imaging unit 10, imaging devices such as a lens, an imaging element, various buttons, and a flash, and captures images as moving images or still images. The captured image is a precise image having the amount of information necessary for image analysis. The resolution, camera angle, camera magnification, and the like at the time of imaging may also be designated.
  • the control unit 110 includes a central processing unit (CPU), a random access memory (RAM), a read only memory (ROM), and the like.
  • The communication unit 120 is a device enabling communication with other devices, for example a wireless device compliant with IEEE 802.11 such as a WiFi (Wireless Fidelity) device, or a wireless device compliant with an IMT-2000 standard such as a third or fourth generation mobile communication system. A wired LAN connection may also be used.
  • The storage unit 130 includes a data storage device such as a hard disk or semiconductor memory, and stores captured images and necessary data such as imaging conditions.
  • the computer 200 is a computing device capable of data communication with the camera 100.
  • Here, a desktop computer is illustrated as an example, but the computer 200 may also be a mobile phone, a portable information terminal, a tablet terminal, a personal computer, an electric appliance such as a netbook terminal, a slate terminal, an electronic book terminal, or a portable music player, or a wearable terminal such as smart glasses or a head mounted display.
  • the control unit 210 includes a CPU, a RAM, a ROM, and the like.
  • the control unit 210 cooperates with the communication unit 220 and the storage unit 230 to realize the acquisition module 211. Further, the control unit 210 cooperates with the storage unit 230 to implement the selection module 212 and the image analysis module 213.
  • The communication unit 220 is a device enabling communication with other devices, for example a wireless device compliant with IEEE 802.11 or with an IMT-2000 standard such as a third or fourth generation mobile communication system. A wired LAN connection may also be used.
  • The storage unit 230 includes a data storage device such as a hard disk or semiconductor memory, and stores captured images, teacher data, image analysis results, and other data necessary for processing.
  • the storage unit 230 implements the storage module 231 in cooperation with the control unit 210.
  • the storage unit 230 may include a database of learned models.
  • the input / output unit 240 has a function necessary to use the image analysis result providing system.
  • the input / output unit 240 implements the provision module 241 in cooperation with the control unit 210 and the storage unit 230.
  • As examples of input implementations, a liquid crystal display with a touch panel function, a keyboard, a mouse, a pen tablet, hardware buttons on the device, a microphone for voice recognition, and the like are possible.
  • As examples of output implementations, forms such as a liquid crystal display, a PC display, a projection on a projector, and audio output can be considered.
  • The functions of the present invention are not particularly limited by the input/output method.
  • FIG. 3 is a flow chart in the case of acquiring an unknown image from the camera 100, performing image analysis processing by the computer 200, and providing an image analysis result. The processing executed by each module described above will be described along with this processing.
  • the storage module 231 of the computer 200 stores a plurality of learned models in the storage unit 230 (step S301).
  • The learned model may be acquired from another computer or storage medium, or may be created by the computer 200. The storage unit 230 may also be provided with a dedicated database for storing learned models. The process of step S301 may be skipped if a plurality of learned models have already been stored and there is no new learned model.
  • FIG. 10 is a diagram showing, as the learned models of level crossing A, level crossing B, and level crossing C, examples of the mathematical expressions and parameters calculated by machine learning and of the images used for machine learning, together with an example of the unknown image of level crossing D for which a learned model has not yet been created.
  • FIG. 12 is an example of a table showing a data structure of a learned model for each camera.
  • A learned model here associates, for each camera, the mathematical expression used to analyze images of the subject with its parameters.
  • The image files with supervised data used in the machine learning that calculated each learned model may also be associated.
  • As imaging conditions for each camera, the imaging angle and imaging position may be associated and stored.
  • The model created using supervised data from camera A, which captured crossing A in FIG. 10, is learned model A; the model created using supervised data from camera B, which captured crossing B, is learned model B; and the model created using supervised data from camera C, which captured crossing C, is learned model C. For images of crossing D captured by camera D, no learned model has been created.
  • FIG. 11 is a diagram for schematically describing the relationship between the camera 100, the computer 200, and the subject 400. It is assumed that the camera 100 and the computer 200 can communicate with each other via the communication network 300.
  • the camera 100 in the present invention is an imaging device capable of measuring the distance to a subject.
  • As methods of measuring the distance to the subject, in addition to acquiring it from a sensor of the camera 100, when the subject can be imaged simultaneously from a plurality of different directions, the distance can be measured by learning the relationship between the disparity of the images captured by the respective cameras and the actual distance. The imaging angle can then be calculated using the measured distance. Furthermore, when the location of the camera 100 is fixed, the distance to the imaging location may be specified explicitly.
  • the imaging angle the number of times the camera 100 is inclined from the horizontal direction is taken as the imaging angle.
  • the imaging angle of the camera 100 is 30 degrees, and the imaging position, that is, the imaging distance is 5-6 m.
  • Although an imaging angle and an imaging distance were taken as examples of imaging conditions, other conditions may also be included to help determine the degree of similarity between the unknown image and a learned model; in the case of entry detection at a level crossing, for example, the presence or absence of an alarm, the presence or absence of a crossing barrier, and whether the line is single or double track.
  • The acquisition module 211 of the computer 200 requests the camera 100 to transmit an image (step S302). If no learned model exists for the images of the camera 100 at the time of the request, the image acquired from the camera 100 is an unknown image.
  • the camera 100 performs imaging with the imaging unit 10 (step S303).
  • the camera 100 transmits imaging data, which is an unknown image, to the computer 200 via the communication unit 120 (step S304).
  • the acquisition module 211 of the computer 200 acquires an unknown image (step S305).
  • imaging conditions such as an imaging angle and an imaging position are acquired from the camera 100 together with the unknown image.
  • the acquisition module 211 may acquire an image captured by the camera 100 in the past and stored in the storage unit 130, in addition to acquiring an image captured by the camera 100 in real time.
  • the selection module 212 of the computer 200 selects a learned model having similar imaging conditions to the unknown image acquired in step S305 from the learned models stored in step S301 (step S306).
  • Here, since no learned model exists for camera D, a learned model whose imaging conditions are similar is chosen from among those of crossing A, crossing B, and crossing C, for which learned models exist. Assuming that the imaging angle of camera D, which captures crossing D, is 20 degrees and its imaging distance is 4-5 m, and that further analysis of the composition of the image of crossing D is taken into account, learned model B is selected here.
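Under the assumption that imaging conditions are compared numerically, the selection of step S306 could look like the following sketch. The scoring scheme (weighted angle difference plus distance-range midpoint difference) and the condition values are illustrative, not taken from the specification:

```python
# Select the stored learned model whose imaging conditions best match the
# unknown image: lower condition_distance means more similar conditions.
def midpoint(rng):
    return (rng[0] + rng[1]) / 2.0

def condition_distance(cond_a, cond_b, angle_weight=1.0, dist_weight=1.0):
    return (angle_weight * abs(cond_a["angle_deg"] - cond_b["angle_deg"])
            + dist_weight * abs(midpoint(cond_a["distance_m"])
                                - midpoint(cond_b["distance_m"])))

def select_similar_model(unknown_cond, stored):
    # stored: {model_name: imaging-condition dict}; pick the closest match.
    return min(stored, key=lambda name: condition_distance(unknown_cond,
                                                           stored[name]))

stored = {
    "model_A": {"angle_deg": 30, "distance_m": (5, 6)},
    "model_B": {"angle_deg": 20, "distance_m": (4, 5)},
    "model_C": {"angle_deg": 45, "distance_m": (8, 10)},
}
camera_d = {"angle_deg": 20, "distance_m": (4, 5)}  # conditions from the text
```

With these values, `select_similar_model(camera_d, stored)` picks model B, matching the example in the text; additional conditions (alarm, barrier, track count) could be folded into `condition_distance` in the same way.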
  • the image analysis module 213 of the computer 200 performs image analysis of the unknown image captured by the camera D using the learned model B (step S307).
  • FIG. 13 is an example of a table showing a learned model used for image analysis for each camera when there is no learned model of the unknown image captured by the camera D.
  • Since learned model B was selected as the learned model whose imaging conditions are similar to those of crossing D imaged by camera D, learned model B is entered as the model to use in the camera D column of FIG. 13.
  • The table is filled in assuming that yyyyyy is used as the equation and BBB, b, and bB as the parameters.
  • The teacher data field may be left blank.
  • Until the selection module 212 again selects the learned model most suitable for camera D, image analysis for camera D can be performed using the table of FIG. 13.
  • the provision module 241 of the computer 200 provides the input / output unit 240 of the computer 200 with the image analysis result (step S308).
  • The image analysis result should be output in accordance with the purpose of the system in which it is provided, for example by displaying the image together with a warning sound or warning light.
  • Image analysis here may include, for example, face recognition for personal identification, discrimination of pest damage to agricultural products, inventory confirmation in a warehouse, and recognition of affected areas for medical diagnosis; the present invention is applicable to whichever is appropriate for the purpose of the system. The provision of the image analysis result need not be limited to output to the input/output unit 240 of the computer 200; output suited to the system, such as output to other devices via the communication unit 220, may also be performed.
  • As described above, by selecting and using a learned model of a known image whose imaging conditions are similar to those of the unknown image, it becomes possible to provide an image analysis result providing system capable of outputting an accurate image analysis result without spending learning time.
  • FIG. 4 is a view showing the relationship between the functional blocks of the camera 100 and the computer 200 and the respective functions in the case of performing a learned model creation process of an unknown image.
  • the control unit 210 of the computer 200 cooperates with the storage unit 230 to implement the creation module 214.
  • FIG. 5 is a flowchart of the camera 100 and the computer 200 in the case of performing a learned model creation process of an unknown image. The processing executed by each module described above will be described along with this processing.
  • the processes in steps S501 to S503 in FIG. 5 correspond to the processes in steps S301 to S303 in FIG.
  • The process of step S501 may be skipped when a plurality of learned models are already stored and no new learned model needs to be stored.
  • the camera 100 transmits the unknown image captured by the imaging unit 10 to the computer 200 via the communication unit 120 (step S504).
  • Here, it is desirable to acquire as many unknown images captured by the camera 100 as possible. Therefore, not only images captured by the camera 100 in real time but also images captured by the camera 100 in the past and stored in the storage unit 130 may be transmitted.
  • the acquisition module 211 of the computer 200 acquires a plurality of unknown images (step S505).
  • imaging conditions such as an imaging angle and an imaging position are acquired from the camera 100 together with the respective unknown images.
  • the creation module 214 of the computer 200 assigns teacher data to the unknown image acquired in step S505 (step S506).
  • As teacher data, labels representing the correct answer of the image analysis result are added to the plurality of acquired unknown images.
  • Labels for supervised learning must be added in accordance with the purpose of the system, considering how finely the image analysis results need to be divided in practice. For entry detection at a level crossing, for example, rather than only "entry / no entry", finer labels such as no entry, entry (adult), entry (child), entry (elderly person), entry (vehicle), entry (bicycle), and entry (animal) may be necessary.
  • the creation module 214 of the computer 200 performs machine learning by supervised learning using the unknown image to which the teacher data is added (step S507).
  • the creation module 214 creates a learned model of the unknown image based on the result of the machine learning in step S507 (step S508).
  • the storage module 231 stores the learned model of the unknown image in the storage unit 230 (step S509).
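Steps S506 to S509 (label, learn, create, store) can be sketched as follows. Since the specification does not fix a learning algorithm, a nearest-centroid classifier over toy feature vectors stands in for it; all feature values and labels are assumptions for illustration:

```python
def train_centroid_model(samples):
    """Supervised-learning stand-in: compute one centroid per label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(model, features):
    """Classify by the nearest centroid (squared Euclidean distance)."""
    def sq_dist(label):
        return sum((a - b) ** 2 for a, b in zip(model[label], features))
    return min(model, key=sq_dist)

# Step S506: teacher data (label = correct analysis result) on toy features.
samples = [([0.9, 0.1], "entry"), ([0.8, 0.2], "entry"),
           ([0.1, 0.9], "no_entry"), ([0.2, 0.8], "no_entry")]

# Steps S507-S508: machine learning produces the learned model, and
# step S509: it is stored (model_store plays the role of storage unit 230).
model_store = {}
model_store["camera_D"] = train_centroid_model(samples)
```

Once stored, `predict(model_store["camera_D"], features)` performs the image analysis for camera D until a richer model replaces it.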
  • FIG. 14 is an example of a table showing a learned model used for image analysis for each camera when a learned model of an unknown image captured by a camera D is created.
  • the learned model D of the camera D created in step S508 is described in the column of the camera D in FIG.
  • The table is filled in assuming that learned model D is used as the model for camera D, vvvvvvv as the equation, and DDD, d, and dD as the parameters.
  • The teacher data column also contains teacher data. From this point onward, image analysis for camera D can be performed using the table of FIG. 14, until camera D accumulates more teacher data and a new learned model is created.
  • Because machine learning places a load on the system, it must be executed appropriately according to the operation of the image analysis result provision system, for example at times when it does not affect other image analysis tasks.
  • As described above, in addition to selecting and using a learned model of a known image whose imaging conditions are similar to those of the unknown image to be newly analyzed, machine learning adapted to the new unknown image can be performed by the learned model creation process.
  • FIG. 6 is a diagram showing the relationship between the functional blocks of the camera 100 and the computer 200 and the respective functions when the image analysis process is switched depending on whether the learned model creation process of the unknown image is finished.
  • FIG. 7 is a flowchart of the learned model creation process for an unknown image performed by the computer 200, corresponding to process A of the flowchart of FIG. 6, when the learned model creation process for the unknown image has not finished and machine learning of the unknown image is possible.
  • FIG. 8 is a flowchart of the learned model selection process performed by the computer 200, corresponding to process B of the flowchart of FIG. 6, when the learned model creation process for the unknown image has not finished and machine learning of the unknown image is not possible.
  • Since steps S601 to S605 of FIG. 6 correspond to steps S301 to S305 of FIG. 3, step S606 and subsequent steps will be described.
  • the creation module 214 of the computer 200 confirms whether the learned model of the unknown image has been created for the imaging data acquired in step S605 (step S606).
  • If the imaging data acquired in step S605 is the first data acquired from the camera 100, it is an unknown image, so no learned model for it has been created.
  • Next, the creation module 214 determines whether machine learning is possible using the unknown image acquired in step S605 or previously stored unknown images (step S607).
  • This determination may be made according to the operational status of the image analysis result provision system, as appropriate for the system.
  • If machine learning is possible, the process proceeds to the flowchart of process A in FIG. 7 (step S608).
  • the creation module 214 of the computer 200 adds teacher data to the unknown image acquired in step S605 or the unknown image stored before that (step S701).
  • As teacher data, labels representing the correct answer of the image analysis result are added to the plurality of acquired unknown images.
  • the creating module 214 of the computer 200 performs machine learning by supervised learning using the unknown image to which the teacher data is added (step S702).
  • the creation module 214 creates a learned model of the unknown image based on the result of the machine learning in step S702 (step S703).
  • the storage module 231 stores the learned model of the unknown image in the storage unit 230 (step S704).
  • FIG. 14 is an example of a table showing a learned model used for image analysis for each camera when a learned model of an unknown image captured by a camera D is created.
  • the learned model D of the camera D created in step S703 is described in the column of the camera D in FIG.
  • The table is filled in assuming that learned model D is used as the model for camera D, vvvvvvv as the equation, and DDD, d, and dD as the parameters.
  • The teacher data column also contains teacher data. From this point onward, image analysis for camera D can be performed using the table of FIG. 14, until camera D accumulates more teacher data and a new learned model is created.
  • the image analysis module 213 of the computer 200 performs image analysis of the unknown image captured by the camera D using the created learned model D (step S705).
  • the provision module 241 of the computer 200 provides the input / output unit 240 of the computer 200 with the image analysis result (step S706). Thereafter, the process returns to the flowchart of FIG. 6 and proceeds to step S614.
  • The creation module 214 stores the unknown image acquired in step S605 in the storage unit 230 (step S609), so that it can later be used as teacher data when machine learning for the learned model creation process is performed on the unknown image.
  • After the storage process of step S609, the process proceeds to the flowchart of process B in FIG. 8 (step S610).
  • the selection module 212 of the computer 200 selects a learned model whose imaging condition is similar to the unknown image acquired in step S605 from the learned models stored in step S601 (step S801).
  • Here, since no learned model exists for camera D, a learned model whose imaging conditions are similar is chosen from among those of crossing A, crossing B, and crossing C, for which learned models exist. Assuming that the imaging angle of camera D, which captures crossing D, is 20 degrees and its imaging distance is 4-5 m, and that further analysis of the composition of the image of crossing D is taken into account, learned model B is selected here.
  • the image analysis module 213 performs image analysis of the unknown image captured by the camera D using the learned model B (step S802).
  • FIG. 13 is an example of a table showing a learned model used for image analysis for each camera when there is no learned model of the unknown image captured by the camera D.
  • Since learned model B was selected as the learned model whose imaging conditions are similar to those of crossing D imaged by camera D, learned model B is entered as the model to use in the camera D column of FIG. 13.
  • The table is filled in assuming that yyyyyy is used as the equation and BBB, b, and bB as the parameters.
  • The teacher data field may be left blank.
  • Until the selection module 212 again selects the learned model most suitable for camera D, image analysis for camera D can be performed using the table of FIG. 13.
  • the provision module 241 provides the input / output unit 240 of the computer 200 with the image analysis result (step S803). Thereafter, the process returns to the flowchart of FIG. 6 and proceeds to step S614.
  • the selection module 212 selects and applies the learned model D created in step S703 of process A (step S611).
  • the image analysis module 213 of the computer 200 performs image analysis of the unknown image captured by the camera D using the learned model D (step S612).
  • the provision module 241 of the computer 200 provides the input / output unit 240 of the computer 200 with the image analysis result (step S613).
  • step S706, step S803, and step S613, the provision module 241 provides the image analysis result to the input / output unit 240 of the computer 200.
  • Image analysis here may include, for example, face recognition for personal identification, discrimination of pest damage to agricultural products, inventory confirmation in a warehouse, and recognition of affected areas for medical diagnosis; the present invention is applicable to whichever is appropriate for the purpose of the system. The provision of the image analysis result need not be limited to output to the input/output unit 240 of the computer 200; output suited to the system, such as output to other devices via the communication unit 220, may also be performed.
  • In step S614, it is confirmed whether the image analysis result provision process may be ended. If not, the process returns to step S602 and continues; if so, the image analysis result provision process is ended.
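The branch structure of FIG. 6 described above — use the camera's own learned model if it exists, otherwise create one when machine learning is possible (process A), or fall back to the most similar stored model (process B) — can be sketched as a single dispatch function. All names and the callback interfaces are illustrative assumptions:

```python
def analyze_unknown_image(camera_id, models, learning_possible,
                          create_model, select_similar):
    """Dispatch sketch of FIG. 6 for one acquired image.

    models: {camera_id: learned model} in the role of storage unit 230.
    create_model / select_similar: callbacks standing in for process A
    (FIG. 7) and process B (FIG. 8) respectively.
    """
    if camera_id in models:
        # A learned model was already created: use it (steps S611-S613).
        return ("own_model", models[camera_id])
    if learning_possible:
        # Process A: create, store, then use the new model.
        models[camera_id] = create_model(camera_id)
        return ("created_model", models[camera_id])
    # Process B: the image is kept for later learning (step S609) and a
    # model with similar imaging conditions is used instead.
    return ("similar_model", select_similar(camera_id, models))
```

Each acquired image runs through this dispatch until step S614 ends the loop; once process A has stored a model for the camera, later images take the first branch.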
  • As described above, in addition to selecting and using a learned model of a known image whose imaging conditions are similar to those of the unknown image to be newly analyzed, a learned model adapted to the new unknown image can be created by machine learning in the learned model creation process.
  • FIG. 9 is a flowchart of the computer 200 in the case of performing machine learning of an unknown image by adding an image analysis result of the unknown image as teacher data when performing a learned model creation process of the unknown image.
  • The configurations of the camera 100 and the computer 200 are the same as in FIG. Although the description of FIG. 9 starts from step S901, it is assumed that, before this, processing equivalent to steps S301 to S308 of FIG. 3 has been performed: image analysis of the unknown image by the selected learned model has been carried out and the image analysis result has been obtained.
  • the creation module 214 of the computer 200 assigns the image analysis result in step S307 as teacher data to the unknown image acquired in step S305 (step S901).
  • This makes it possible to significantly reduce the cost of manually adding teacher data, that is, correct answers, to the large number of images required for machine learning.
  • the creating module 214 performs machine learning by supervised learning using the unknown image to which the teacher data is added (step S902).
  • the creation module 214 creates a learned model of the unknown image based on the result of the machine learning in step S902 (step S903).
  • the storage module 231 stores the learned model of the unknown image in the storage unit 230 (step S904).
  • As described above, by directly using as teacher data the image analysis result that the selected learned model of a known image with similar imaging conditions produces for the unknown image, machine learning of the unknown image can be performed efficiently.
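The self-labelling idea of FIG. 9 — attaching the similar model's analysis result to each unknown image as teacher data (step S901) — reduces to a short pseudo-labelling helper. The predictor passed in stands for the selected learned model and is an assumption for illustration:

```python
def pseudo_label(unknown_images, similar_model_predict):
    """Step S901 sketch: label each unknown image with the analysis result
    produced by the learned model selected for its similar conditions.

    Returns (image, label) pairs ready for supervised learning (step S902).
    """
    return [(img, similar_model_predict(img)) for img in unknown_images]
```

The resulting pairs feed the same supervised-learning step as manually labelled data; the trade-off, not stated in the specification, is that any mistakes of the similar model propagate into the new learned model.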
  • the above-described means and functions are realized by a computer (including a CPU, an information processing device, and various terminals) reading and executing a predetermined program.
  • The program may be provided, for example, from a computer via a network (SaaS: Software as a Service), or in a form recorded on a computer-readable recording medium such as a flexible disk, a CD (CD-ROM, etc.), a DVD (DVD-ROM, DVD-RAM, etc.), or a compact memory.
  • In that case, the computer reads the program from the recording medium, transfers it to an internal or external storage device, stores it, and executes it.
  • The program may also be recorded in advance in a storage device (recording medium) such as a magnetic disk, an optical disk, or a magneto-optical disk, and provided from the storage device to the computer via a communication line.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The problem addressed by the present invention is to provide an image analysis result providing system, an image analysis result providing method, and a program capable of outputting a highly accurate image analysis result for an unknown image requiring new analysis, without spending learning time. The solution according to the present invention comprises: a storage module (231) for storing learned models for which machine learning, in which known images were image-analyzed, has been completed; an acquisition module (211) for acquiring an unknown image for which no learned model has yet been created; a selection module (212) for selecting, from among the stored learned models, the learned model of a known image whose imaging conditions are similar to those of the acquired unknown image; an image analysis module (213) for performing image analysis of the unknown image using the selected learned model; and a provision module (241) for providing the image analysis result.
PCT/JP2017/023807 2017-06-28 2017-06-28 Système permettant de fournir un résultat d'analyse d'image, procédé permettant de fournir un résultat d'analyse d'image et programme Ceased WO2019003355A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2017/023807 WO2019003355A1 (fr) 2017-06-28 2017-06-28 Système permettant de fournir un résultat d'analyse d'image, procédé permettant de fournir un résultat d'analyse d'image et programme
JP2018545247A JP6474946B1 (ja) 2017-06-28 2017-06-28 画像解析結果提供システム、画像解析結果提供方法、およびプログラム

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/023807 WO2019003355A1 (fr) 2017-06-28 2017-06-28 Système permettant de fournir un résultat d'analyse d'image, procédé permettant de fournir un résultat d'analyse d'image et programme

Publications (1)

Publication Number Publication Date
WO2019003355A1 true WO2019003355A1 (fr) 2019-01-03

Family

ID=64741244

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/023807 Ceased WO2019003355A1 (fr) 2017-06-28 2017-06-28 Système permettant de fournir un résultat d'analyse d'image, procédé permettant de fournir un résultat d'analyse d'image et programme

Country Status (2)

Country Link
JP (1) JP6474946B1 (fr)
WO (1) WO2019003355A1 (fr)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7493323B2 (ja) 2019-11-14 2024-05-31 キヤノン株式会社 情報処理装置、情報処理装置の制御方法およびプログラム
JP7744757B2 (ja) * 2021-04-02 2025-09-26 カナデビア株式会社 情報処理装置、判定方法、および判定プログラム
US20230409927A1 (en) * 2022-06-16 2023-12-21 Wistron Corporation Data predicting method and apparatus

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011060221A (ja) * 2009-09-14 2011-03-24 Sumitomo Electric Ind Ltd 識別器生成方法、コンピュータプログラム、識別器生成装置及び所定物体検出装置
JP2012068965A (ja) * 2010-09-24 2012-04-05 Denso Corp 画像認識装置
JP2016015045A (ja) * 2014-07-02 2016-01-28 キヤノン株式会社 画像認識装置、画像認識方法及びプログラム


Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7519821B2 (ja) 2020-01-22 2024-07-22 キヤノンメディカルシステムズ株式会社 医用システム及び医用情報処理方法
JP2021117964A (ja) * 2020-01-22 2021-08-10 キヤノンメディカルシステムズ株式会社 医用システム及び医用情報処理方法
JP2021165984A (ja) * 2020-04-08 2021-10-14 日本電気通信システム株式会社 推定装置、学習装置、推定方法及びプログラム
JP7525220B2 (ja) 2020-04-08 2024-07-30 日本電気通信システム株式会社 推定装置、学習装置、推定方法及びプログラム
JPWO2022070491A1 (fr) * 2020-09-29 2022-04-07
WO2022070491A1 (fr) * 2020-09-29 2022-04-07 株式会社島津製作所 Dispositif d'analyse d'image
CN116075852A (zh) * 2020-09-29 2023-05-05 株式会社岛津制作所 图像解析装置
US12417619B2 (en) 2020-09-29 2025-09-16 Shimadzu Corporation Image analyzing device
JP2024112965A (ja) * 2020-09-29 2024-08-21 株式会社島津製作所 画像解析装置
JP7521588B2 (ja) 2020-09-29 2024-07-24 株式会社島津製作所 画像解析装置
CN112203059A (zh) * 2020-10-15 2021-01-08 石家庄粮保科技有限公司 一种基于ai图像识别技术的测虫方法
WO2022113535A1 (fr) 2020-11-27 2022-06-02 株式会社Jvcケンウッド Dispositif de reconnaissance d'image, procédé de reconnaissance d'image et modèle de reconnaissance d'objet
US12361533B2 (en) 2021-10-22 2025-07-15 Canon Kabushiki Kaisha Image processing apparatus and image processing method
JP7511781B2 (ja) 2022-01-11 2024-07-05 三菱電機株式会社 監視カメラ画像解析システム
WO2023135621A1 (fr) * 2022-01-11 2023-07-20 三菱電機株式会社 Système d'analyse d'image de caméra de surveillance
GB2629060A (en) * 2022-01-11 2024-10-16 Mitsubishi Electric Corp Surveillance camera image analysis system
US20250046088A1 (en) * 2022-01-11 2025-02-06 Mitsubishi Electric Corporation Surveillance camera image analysis system
JPWO2023135621A1 (fr) * 2022-01-11 2023-07-20
JP2023156898A (ja) * 2022-04-13 2023-10-25 株式会社Ridge-i 情報処理装置、情報処理方法及び情報処理プログラム
JP7360115B1 (ja) 2022-04-13 2023-10-12 株式会社Ridge-i 情報処理装置、情報処理方法及び情報処理プログラム

Also Published As

Publication number Publication date
JP6474946B1 (ja) 2019-02-27
JPWO2019003355A1 (ja) 2019-06-27

Similar Documents

Publication Publication Date Title
JP6474946B1 (ja) 画像解析結果提供システム、画像解析結果提供方法、およびプログラム
Siena et al. Utilising the intel realsense camera for measuring health outcomes in clinical research
CN107257338B (zh) 媒体数据处理方法、装置及存储介质
JP6404527B1 (ja) カメラ制御システム、カメラ制御方法、およびプログラム
CN104252712A (zh) 图像生成装置及图像生成方法
US11327320B2 (en) Electronic device and method of controlling the same
CN111598899A (zh) 图像处理方法、装置及计算机可读存储介质
EP3769188A1 (fr) Représentation de la position, du déplacement et du regard d'un utilisateur dans un espace de réalité mixte
Larrue et al. Influence of body-centered information on the transfer of spatial learning from a virtual to a real environment
CN109271929B (zh) 检测方法和装置
KR20210044116A (ko) 전자 장치 및 그 영유아 발달 상태 분류 방법
CN107179839A (zh) 用于终端的信息输出方法、装置及设备
Ho et al. IoTouch: whole-body tactile sensing technology toward the tele-touch
JP6246441B1 (ja) 画像解析システム、画像解析方法、およびプログラム
WO2022009821A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
US20230162619A1 (en) Systems and methods for accessible computer-user interactions
Tang et al. Guest editorial: special issue on human pose estimation and its applications
KR20220114849A (ko) 아이트래킹 기반 학습도 모니터링 방법, 장치 및 시스템
US10057321B2 (en) Image management apparatus and control method capable of automatically creating comment data relevant to an image
EP3406191A1 (fr) Méthode d'évaluation d'un matériel et dispositif d'évaluation d'un matériel
KR101575100B1 (ko) 사용자 그룹의 공간행동 센싱 및 의미분석 시스템
KR102490320B1 (ko) 3d 신체 모델을 이용한 사용자 맞춤형 제품 및 행동 추천 방법, 장치 및 시스템
CN114298915B (zh) 图像目标的处理方法和装置、存储介质及电子装置
CN118690453A (zh) 基于眼球信息的建筑要素的情感确定方法及相关设备
JP7497867B2 (ja) 自閉症者支援プログラム及び自閉症者支援システム

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018545247

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17916290

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17916290

Country of ref document: EP

Kind code of ref document: A1