CN109166120A - Method and device for obtaining information - Google Patents
Method and device for obtaining information
- Publication number
- CN109166120A (application number CN201811054409.6A)
- Authority
- CN
- China
- Prior art keywords
- illness
- image
- sample
- information
- skin
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30088—Skin; Dermal
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Quality & Reliability (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Life Sciences & Earth Sciences (AREA)
- Medical Informatics (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present application disclose a method and device for obtaining information. One specific embodiment of the method includes: obtaining an image to be processed, the image to be processed containing a skin disorder image; and importing the image to be processed into a pre-trained disorder recognition model to obtain disorder information, where the disorder recognition model characterizes the correspondence between images to be processed and disorder information, the disorder information includes a disorder label image and disorder identification information for identifying the disorder, and the disorder label image includes a bounding box and the skin disorder image within the bounding box. This embodiment improves the accuracy of the obtained disorder information.
Description
Technical field
Embodiments of the present application relate to the technical field of data processing, and in particular to a method and device for obtaining information.
Background art
With the spread of networks and the development of information technology, users can send images containing skin disorders to a server over a network. The server can analyze and recognize an image containing a skin disorder and determine information such as the name of the disorder, its cause, and points of attention, which greatly improves the efficiency of diagnosing skin disorders.
Summary of the invention
Embodiments of the present application propose a method and device for obtaining information.
In a first aspect, an embodiment of the present application provides a method for obtaining information, the method comprising: obtaining an image to be processed, the image to be processed containing a skin disorder image; and importing the image to be processed into a pre-trained disorder recognition model to obtain disorder information, where the disorder recognition model characterizes the correspondence between images to be processed and disorder information, the disorder information includes a disorder label image and disorder identification information for identifying the disorder, and the disorder label image includes a bounding box and the skin disorder image within the bounding box.
In some embodiments, the method further includes: in response to the disorder information corresponding to a single skin disorder, taking that skin disorder as the final disorder information.
In some embodiments, the method further includes: in response to the disorder information corresponding to at least two skin disorders, collecting statistics on the disorder features of the at least two skin disorders and determining the final disorder information, where the disorder features include at least one of the following: the area of the skin disorder and the number of skin disorder regions.
In some embodiments, the method further includes: obtaining condition description information, the condition description information describing the skin disorder corresponding to the image to be processed; and, in response to the condition description information matching the disorder information, outputting diagnosis-correct information.
In some embodiments, the disorder recognition model includes a convolutional neural network and a classification neural network.
In some embodiments, importing the image to be processed into the pre-trained disorder recognition model to obtain the disorder information comprises: inputting the image to be processed into the convolutional neural network to obtain the disorder label image of the image to be processed, where the convolutional neural network characterizes the correspondence between images to be processed and disorder label images, and the disorder label image marks the position of the skin disorder in the image to be processed; and inputting the disorder label image into the classification neural network to obtain the disorder description information of the image to be processed, where the classification neural network characterizes the correspondence between disorder label images and disorder description information.
In some embodiments, the disorder recognition model is trained as follows: obtaining a plurality of sample images containing sample skin disorder images, together with the sample disorder label image and sample disorder description information corresponding to each of those sample images, where each sample disorder label image includes a bounding box and the sample skin disorder image within the bounding box, and each sample skin disorder image includes a sample lesion image and a sample peripheral-symptom image; and training the disorder recognition model by using each of the sample images as input and the corresponding sample disorder label image and sample disorder description information as output.
In some embodiments, training the disorder recognition model with the sample images as input and the corresponding sample disorder label images and sample disorder description information as output comprises the following training step: sequentially inputting each of the sample images into an initial disorder recognition model to obtain the predicted disorder label image and predicted disorder description information corresponding to each sample image; comparing the predicted disorder label image and predicted disorder description information of each sample image with the sample disorder label image and sample disorder description information corresponding to that sample image, so as to obtain the prediction accuracy of the initial disorder recognition model; determining whether the prediction accuracy is greater than a preset accuracy threshold; and, if it is greater than the preset accuracy threshold, taking the initial disorder recognition model as the trained disorder recognition model.
In some embodiments, the training further includes: in response to the prediction accuracy being not greater than the preset accuracy threshold, adjusting the parameters of the initial disorder recognition model and continuing to execute the training step.
In a second aspect, an embodiment of the present application provides a device for obtaining information, the device comprising: an image acquisition unit configured to obtain an image to be processed, the image to be processed containing a skin disorder image; and an information acquisition unit configured to import the image to be processed into a pre-trained disorder recognition model to obtain disorder information, where the disorder recognition model characterizes the correspondence between images to be processed and disorder information, the disorder information includes a disorder label image and disorder identification information for identifying the disorder, and the disorder label image includes a bounding box and the skin disorder image within the bounding box.
In some embodiments, the device further includes: a first output unit configured to, in response to the disorder information corresponding to a single skin disorder, take that skin disorder as the final disorder information.
In some embodiments, the device further includes: a second output unit configured to, in response to the disorder information corresponding to at least two skin disorders, collect statistics on the disorder features of the at least two skin disorders and determine the final disorder information, where the disorder features include at least one of the following: the area of the skin disorder and the number of skin disorder regions.
In some embodiments, the device further includes: a condition description information acquisition unit configured to obtain condition description information, the condition description information describing the skin disorder corresponding to the image to be processed; and a third output unit configured to output diagnosis-correct information in response to the condition description information matching the disorder information.
In some embodiments, the disorder recognition model includes a convolutional neural network and a classification neural network.
In some embodiments, the information acquisition unit includes: a disorder label image acquisition subunit configured to input the image to be processed into the convolutional neural network to obtain the disorder label image of the image to be processed, where the convolutional neural network characterizes the correspondence between images to be processed and disorder label images, and the disorder label image marks the position of the skin disorder in the image to be processed; and a disorder description information acquisition subunit configured to input the disorder label image into the classification neural network to obtain the disorder description information of the image to be processed, where the classification neural network characterizes the correspondence between disorder label images and disorder description information.
In some embodiments, the device further includes a disorder recognition model training unit configured to train the disorder recognition model, the training unit comprising: a sample acquisition subunit configured to obtain a plurality of sample images containing sample skin disorder images, together with the sample disorder label image and sample disorder description information corresponding to each of those sample images, where each sample disorder label image includes a bounding box and the sample skin disorder image within the bounding box, and each sample skin disorder image includes a sample lesion image and a sample peripheral-symptom image; and a disorder recognition model training subunit configured to train the disorder recognition model by using each of the sample images as input and the corresponding sample disorder label image and sample disorder description information as output.
In some embodiments, the disorder recognition model training subunit includes: a disorder recognition model training module configured to sequentially input each of the sample images into an initial disorder recognition model to obtain the predicted disorder label image and predicted disorder description information corresponding to each sample image, compare the predictions for each sample image with the sample disorder label image and sample disorder description information corresponding to that sample image to obtain the prediction accuracy of the initial disorder recognition model, determine whether the prediction accuracy is greater than a preset accuracy threshold, and, if it is greater than the preset accuracy threshold, take the initial disorder recognition model as the trained disorder recognition model.
In some embodiments, the disorder recognition model training subunit further includes: a parameter adjustment module configured to, in response to the prediction accuracy being not greater than the preset accuracy threshold, adjust the parameters of the initial disorder recognition model and continue to execute the training step.
In a third aspect, an embodiment of the present application provides a server, comprising: one or more processors; and a memory on which one or more programs are stored, the one or more programs, when executed by the one or more processors, causing the one or more processors to execute the method for obtaining information of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the method for obtaining information of the first aspect.
With the method and device for obtaining information provided by the embodiments of the present application, an image to be processed is first obtained, and the image to be processed is then imported into a pre-trained disorder recognition model to obtain disorder information. Because the disorder label image contained in the disorder information includes a bounding box and the skin disorder image within the bounding box, the skin disorder is marked precisely, which improves the accuracy of the obtained disorder information.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-restrictive embodiments, read with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram in which an embodiment of the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for obtaining information according to the present application;
Fig. 3 is a flowchart of one embodiment of the disorder recognition model training method according to the present application;
Fig. 4 is a schematic diagram of an application scenario of the method for obtaining information according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the device for obtaining information according to the present application;
Fig. 6 is a structural schematic diagram of a computer system adapted to implement the server of an embodiment of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention and do not limit it. It should also be noted that, for ease of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the method for obtaining information or the device for obtaining information of the embodiments of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104, and a server 105. The network 104 provides the medium for communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired links, wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 over the network 104 to receive or send messages. Various image-related applications may be installed on the terminal devices 101, 102, 103, such as image acquisition applications, image setting applications, image sending applications, information editing applications, and information sending applications.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices having a display screen and supporting image acquisition, including but not limited to smartphones, tablet computers, laptop computers, and desktop computers. When they are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module, which is not specifically limited here.
The server 105 may be a server providing various services, for example a server that processes the images to be processed sent by the terminal devices 101, 102, 103. The server may analyze and otherwise process the received data such as the image to be processed, and obtain the disorder information corresponding to the image to be processed.
It should be noted that the method for obtaining information provided by the embodiments of the present application is generally executed by the server 105; accordingly, the device for obtaining information is generally arranged in the server 105.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module, which is not specifically limited here.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks, and servers may be provided according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for obtaining information according to the present application is shown. The method for obtaining information includes the following steps:
Step 201: obtain an image to be processed.
In the present embodiment, the executing body of the method for obtaining information (for example the server 105 shown in Fig. 1) may receive the image to be processed, through a wired or wireless connection, from the terminal with which the user transmits images. It should be pointed out that the wireless connection may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, Zigbee connections, UWB (ultra wideband) connections, and other wireless connection types that are currently known or developed in the future.
In the existing process of diagnosing skin disorders over a network, the image containing the skin disorder is mostly captured by the user with an intelligent terminal (for example a mobile phone or tablet computer). To obtain a clear and effective image, the user needs to consider factors such as illumination, distance, and angle at the time of capture. Moreover, different skin disorders have different features, so the content the image should contain also differs, while the user usually photographs only the lesion that he or she notices. As a result, the skin disorder images received by the server are often of limited validity. In addition, when existing servers process such images they usually annotate only the lesion site itself (for example the point of skin damage) precisely. The peripheral symptoms around the lesion site (for example the redness surrounding a point of skin damage) are often related to factors such as the severity of the disorder and the affected subject, and their boundaries are difficult to annotate accurately, which ultimately leads to diagnostic results of limited accuracy.
The executing body of the present application may obtain the image to be processed sent by the user through the terminal devices 101, 102, 103. The image to be processed may be an image obtained by the user through the terminal devices 101, 102, 103 and containing a skin disorder image.
Step 202: import the image to be processed into a pre-trained disorder recognition model to obtain disorder information.
After obtaining the image to be processed, the executing body may import it into the disorder recognition model. The disorder recognition model performs data processing on the image to be processed to obtain the disorder information corresponding to it. The disorder recognition model characterizes the correspondence between images to be processed and disorder information. The disorder information may include a disorder label image and disorder identification information for identifying the disorder. The disorder label image may include a bounding box and the skin disorder image within the bounding box. For example, the image to be processed may be an image containing a skin disorder on a user's arm; the disorder label image is then the skin disorder image obtained from the image to be processed by marking the skin disorder region with a bounding box, and the disorder identification information is information describing the skin disorder in the disorder label image.
In this way, the disorder label image and disorder information corresponding to the image to be processed are obtained. Moreover, because the disorder label image marks the skin disorder region with a bounding box rather than marking only the lesion point, the skin disorder can be recognized accurately, which improves the accuracy of the judgement of the skin disorder.
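For illustration, the minimal Python sketch below shows one possible in-memory representation of the disorder information described above; the class names, fields, and the cropping helper are illustrative assumptions rather than part of the claimed method.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class BoundingBox:
    """Axis-aligned box marking a skin-disorder region, in pixel coordinates."""
    x_min: int
    y_min: int
    x_max: int
    y_max: int


@dataclass
class DisorderInfo:
    """One possible shape for the disorder information returned by the model."""
    boxes: List[BoundingBox]   # bounding boxes forming the disorder label image
    identification: str        # disorder identification information (textual)


def crop_regions(image, info: DisorderInfo):
    """Cut out the skin-disorder image inside each bounding box
    (image is assumed to be a numpy-style array indexed [row, column])."""
    return [image[b.y_min:b.y_max, b.x_min:b.x_max] for b in info.boxes]
```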
In some optional implementations of the present embodiment, the method may further include: in response to the disorder information corresponding to a single skin disorder, taking that skin disorder as the final disorder information.
If the disorder information obtained after the executing body imports the image to be processed into the disorder recognition model corresponds to only one skin disorder, this indicates that the skin disorder features in the image are distinct and that a single disorder is present. In general, the disorder recognition model can make an accurate judgement on this kind of image, and the executing body may directly take that skin disorder as the final disorder information.
In some optional implementations of the present embodiment, the method may further include: in response to the disorder information corresponding to at least two skin disorders, collecting statistics on the disorder features of the at least two skin disorders and determining the final disorder information.
When the disorder information corresponds to at least two skin disorders, a complication or similar situation may be present. The executing body may identify and collect statistics on the disorder features of each of the at least two skin disorders and determine the final disorder information according to the statistical result, as sketched below. The disorder features may include at least one of the following: the area of the skin disorder and the number of skin disorder regions.
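A minimal sketch of this statistics step, assuming the disorder information has been parsed into per-disorder candidates using the BoundingBox fields introduced above; the ranking rule (total area first, then region count) is only one plausible reading of the embodiment.

```python
def box_area(b):
    """Pixel area of one bounding box."""
    return max(0, b.x_max - b.x_min) * max(0, b.y_max - b.y_min)


def choose_final_disorder(candidates):
    """candidates: list of dicts such as {"name": "disorder XXX", "boxes": [...]}.
    Picks the disorder whose marked regions cover the largest total area,
    breaking ties by the number of marked regions."""
    return max(candidates,
               key=lambda c: (sum(box_area(b) for b in c["boxes"]), len(c["boxes"])))
```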
In some optional implementations of the present embodiment, the method may further include: obtaining condition description information; and, in response to the condition description information matching the disorder information, outputting diagnosis-correct information.
When sending the image to be processed through the terminal devices 101, 102, 103, the user may also send condition description information along with the image. The condition description information may be used to describe the skin disorder corresponding to the image to be processed. For example, the condition description information may be: "At first it only itched, after two days it became red and swollen, and now there is a skin lesion in the middle of the red and swollen area." The disorder identification information in the disorder information may be: "Skin disorder XXX shows an itching symptom at an early stage; later, redness and swelling appear in the itching region, and a skin lesion appears in the red and swollen area." The executing body may perform operations such as semantic recognition on the condition description information and the disorder information respectively in order to extract feature information. For example, the executing body may extract the feature information "itching", "redness and swelling", and "skin lesion" from the condition description information, and the feature information "itching symptom", "redness and swelling", and "skin lesion" from the disorder information. The executing body may then compare the two sets of feature information; if they contain multiple identical or similar entries, the condition description information is considered to match the disorder information, and diagnosis-correct information is output. Otherwise, the executing body may display the disorder information and receive feedback information sent by the user through the terminal devices 101, 102, 103. When the feedback information indicates that the user accepts the disorder information, the recognition result of the skin disorder is considered accurate; when the feedback information indicates that the user does not accept the disorder information, the executing body may recognize the image to be processed again.
In some optional implementations of the present embodiment, the disorder recognition model may include a convolutional neural network and a classification neural network. Importing the image to be processed into the pre-trained disorder recognition model to obtain the disorder information may then include the following steps:
First step: input the image to be processed into the convolutional neural network to obtain the disorder label image of the image to be processed.
After obtaining the image to be processed, the executing body may first input it into the convolutional neural network of the disorder recognition model to obtain the disorder label image corresponding to the image to be processed. The convolutional neural network characterizes the correspondence between images to be processed and disorder label images, and the disorder label image marks the position of the skin disorder in the image to be processed.
In the present embodiment, a convolutional neural network is a kind of feedforward neural network whose artificial neurons respond to surrounding units within part of their coverage area. In general, the basic structure of a convolutional neural network includes two kinds of layers. The first is the feature extraction layer: the input of each neuron is connected with the local receptive field of the previous layer, and the local feature is extracted; once the local feature has been extracted, its positional relationship with other features is determined accordingly. The second is the feature mapping layer: each computation layer of the network consists of multiple feature maps, each feature map is a plane, and the weights of all neurons in the plane are equal. Here, the executing body may input the image to be processed at the input side of the convolutional neural network, process it successively with the parameters of each layer of the convolutional neural network, and output it at the output side; the information output at the output side is the disorder label image of the image to be processed.
In the present embodiment, the convolutional neural network may be used to characterize the correspondence between skin disorder images and the disorder label images of those skin disorder images. The executing body may train, in several ways, a convolutional neural network capable of characterizing this correspondence.
As an example, the executing body may, based on the sample disorder label images annotated for a large number of sample images, compile a correspondence table storing the correspondences between multiple sample images and their sample disorder label images, and use this correspondence table as the convolutional neural network. In this way, the executing body may compare the image to be processed with the multiple sample images in the correspondence table in turn, and, if a sample image in the correspondence table is identical or similar to the image to be processed, take the sample disorder label image of that sample image in the correspondence table as the disorder label image of the image to be processed.
As another example, the executing body may first obtain sample images and the sample disorder label images of those sample images, and then train a convolutional neural network capable of characterizing the correspondence between sample images and their sample disorder label images by using the sample images as input and the sample disorder label images as output. In this way, the executing body may input the image to be processed at the input side of the convolutional neural network, process it successively with the parameters of each layer of the convolutional neural network, and output it at the output side; the information output at the output side is the disorder label image of the image to be processed.
Second step: input the disorder label image into the classification neural network to obtain the disorder description information of the image to be processed.
After obtaining the disorder label image, the executing body may input it into the classification neural network to obtain the disorder description information corresponding to the disorder label image.
In the present embodiment, the classification neural network may be used to characterize the correspondence between disorder label images and disorder description information, and the executing body may train, in several ways, a classification neural network capable of characterizing this correspondence.
As an example, the executing body may, based on the sample disorder description information of a large number of sample disorder label images, compile a correspondence table storing the correspondences between multiple sample disorder label images and their sample disorder description information, and use this correspondence table as the classification neural network. In this way, the executing body may compare the disorder label image with the multiple sample disorder label images in the correspondence table in turn, and, if a sample disorder label image in the correspondence table is identical or similar to the disorder label image, take the sample disorder description information of that sample disorder label image as the disorder description information of the disorder label image.
As another example, the executing body may first obtain sample disorder label images and the sample disorder description information of those sample disorder label images, and then train a classification neural network capable of characterizing the correspondence between sample disorder label images and sample disorder description information by using the sample disorder label images as input and the sample disorder description information as output. In this way, the executing body may input the disorder label image at the input side of the classification neural network, process it successively with the parameters of each layer of the classification neural network, and output it at the output side; the information output at the output side is the disorder description information of the disorder label image (that is, the disorder description information of the image to be processed).
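The two-stage inference described above can be summarised as follows; `detection_cnn` and `classification_net` stand for the trained convolutional and classification networks, and their call signatures are assumptions made for the sketch.

```python
def recognize(image, detection_cnn, classification_net):
    """Stage 1: the convolutional network returns bounding boxes over the input
    (the disorder label image).  Stage 2: the classification network maps the
    image region inside each box to disorder description information."""
    boxes = detection_cnn(image)                       # [(x_min, y_min, x_max, y_max), ...]
    descriptions = []
    for (x0, y0, x1, y1) in boxes:
        region = image[y0:y1, x0:x1]                   # skin disorder image inside the box
        descriptions.append(classification_net(region))
    return boxes, descriptions
```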
With further reference to Fig. 3, a flow 300 of one embodiment of the disorder recognition model training method according to the present application is shown. The flow 300 of the disorder recognition model training method includes the following steps:
Step 301: obtain a plurality of sample images containing sample skin disorder images, together with the sample disorder label image and sample disorder description information corresponding to each of those sample images.
In the present embodiment, the executing body on which the disorder recognition model training method runs (for example the server 105 shown in Fig. 1) may obtain multiple sample images in which skin disorders are recorded, together with the sample disorder label image and sample disorder description information corresponding to each of the sample images. Each sample disorder label image includes a bounding box and the sample skin disorder image within the bounding box, and each sample skin disorder image includes a sample lesion image and a sample peripheral-symptom image. The sample lesion image may be regarded as the principal affected area of the skin disorder, for example a point of skin damage; the sample peripheral-symptom image may be regarded as the region of accompanying phenomena around the principal affected area, for example the red and swollen area around the skin damage. For a specific disorder the peripheral-symptom image can differ, depending on the actual situation.
In the present embodiment, the executing body may obtain multiple sample images in which skin disorders are recorded, and those skilled in the art may, according to experience, annotate the sample disorder label image and sample disorder description information for each of the multiple sample images.
Step 302: sequentially input each of the plurality of sample images containing sample skin disorder images into an initial disorder recognition model, and obtain the predicted disorder label image and predicted disorder description information corresponding to each sample image.
In the present embodiment, based on the multiple annotated sample images obtained in step 301, the executing body may sequentially input each of the sample images into the initial disorder recognition model to obtain the predicted disorder label image and predicted disorder description information corresponding to each sample image. Here, the executing body may input each sample image at the input side of the initial disorder recognition model, process it successively with the parameters of each layer of the initial disorder recognition model, and output it at the output side; the information output at the output side is the predicted disorder label image and predicted disorder description information corresponding to that sample image. The initial disorder recognition model may be an untrained disorder recognition model or a disorder recognition model whose training has not been completed; each of its layers is provided with initialization parameters, which may be adjusted continuously during the training of the disorder recognition model.
Step 303: compare the predicted disorder label image and predicted disorder description information corresponding to each sample image with the sample disorder label image and sample disorder description information corresponding to that sample image, and obtain the prediction accuracy of the initial disorder recognition model.
In the present embodiment, based on the predicted disorder label images and predicted disorder description information obtained in step 302, the executing body may compare, for each sample image, the predicted disorder label image and predicted disorder description information with the sample disorder label image and sample disorder description information corresponding to that sample image, thereby obtaining the prediction accuracy of the initial disorder recognition model. Here, the executing body may calculate the ratio of the number of correct predictions to the total number of samples, and take this ratio as the prediction accuracy of the initial disorder recognition model.
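As a concrete illustration of this accuracy computation, the sketch below counts a prediction as correct only when the predicted box overlaps the annotated box sufficiently and the predicted description equals the annotated one; the IoU criterion and its threshold are assumptions, since the patent does not fix the comparison rule.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0


def prediction_accuracy(results, iou_threshold=0.5):
    """results: list of (predicted_box, predicted_desc, sample_box, sample_desc).
    Returns the ratio of correct predictions to the total number of samples."""
    results = list(results)
    correct = sum(1 for pb, pd, sb, sd in results
                  if iou(pb, sb) >= iou_threshold and pd == sd)
    return correct / len(results) if results else 0.0
```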
Step 304: determine whether the prediction accuracy is greater than a preset accuracy threshold.
In the present embodiment, based on the prediction accuracy of the initial disorder recognition model obtained in step 303, the executing body may compare the prediction accuracy with the preset accuracy threshold. If it is greater than the preset accuracy threshold, step 305 is executed; if it is not greater than the preset accuracy threshold, step 306 is executed.
Step 305: take the initial disorder recognition model as the trained disorder recognition model.
In the present embodiment, when the prediction accuracy of the initial disorder recognition model is greater than the preset accuracy threshold, the training of the disorder recognition model is complete. At this point, the executing body may take the initial disorder recognition model as the trained disorder recognition model.
Step 306: adjust the parameters of the initial disorder recognition model.
In the present embodiment, when the prediction accuracy of the initial disorder recognition model is not greater than the preset accuracy threshold, the executing body may adjust the parameters of the initial disorder recognition model and return to step 302, until a disorder recognition model capable of characterizing the correspondence between images to be processed and disorder information has been trained.
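Steps 302 to 306 amount to the loop sketched below; the evaluation helper, the loss function, the optimizer (PyTorch-style here), and the 0.95 threshold are placeholders, since the patent only requires that the parameters be adjusted until the prediction accuracy exceeds a preset threshold.

```python
def train_disorder_model(model, optimizer, compute_loss, train_batches,
                         evaluate, accuracy_threshold=0.95, max_rounds=100):
    """evaluate(model) is assumed to return the prediction accuracy of step 303."""
    for _ in range(max_rounds):
        if evaluate(model) > accuracy_threshold:   # steps 303-305: accuracy high enough
            return model                           # training complete
        for images, annotations in train_batches:  # step 306: adjust the parameters
            optimizer.zero_grad()
            loss = compute_loss(model(images), annotations)
            loss.backward()
            optimizer.step()
    return model                                   # stop after max_rounds passes
```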
With continued reference to Fig. 4, Fig. 4 is a schematic diagram of an application scenario of the method for obtaining information according to the present embodiment. In the application scenario of Fig. 4, the user sends the image to be processed to the server 105 via the network 104 through the terminal device 102. The server 105 imports the image to be processed into the disorder recognition model and obtains the disorder information. The server may then feed the disorder information back to the terminal device 102.
In the method provided by the above embodiments of the present application, the image to be processed is first obtained and then imported into the pre-trained disorder recognition model to obtain the disorder information. Because the disorder label image contained in the disorder information includes a bounding box and the skin disorder image within the bounding box, the skin disorder is marked precisely, which improves the accuracy of the obtained disorder information.
With further reference to Fig. 5, as an implementation of the method shown in the figures above, the present application provides one embodiment of a device for obtaining information. This device embodiment corresponds to the method embodiment shown in Fig. 2, and the device may specifically be applied in various electronic devices.
As shown in Fig. 5, the device 500 for obtaining information of the present embodiment may include an image acquisition unit 501 and an information acquisition unit 502. The image acquisition unit 501 is configured to obtain an image to be processed, the image to be processed containing a skin disorder image. The information acquisition unit 502 is configured to import the image to be processed into a pre-trained disorder recognition model to obtain disorder information, where the disorder recognition model characterizes the correspondence between images to be processed and disorder information, the disorder information includes a disorder label image and disorder identification information for identifying the disorder, and the disorder label image includes a bounding box and the skin disorder image within the bounding box.
In some optional implementations of the present embodiment, the device 500 for obtaining information may further include: a first output unit (not shown) configured to, in response to the disorder information corresponding to a single skin disorder, take that skin disorder as the final disorder information.
In some optional implementations of the present embodiment, the device 500 for obtaining information may further include: a second output unit (not shown) configured to, in response to the disorder information corresponding to at least two skin disorders, collect statistics on the disorder features of the at least two skin disorders and determine the final disorder information, where the disorder features include at least one of the following: the area of the skin disorder and the number of skin disorder regions.
In some optional implementations of the present embodiment, the device 500 for obtaining information may further include: a condition description information acquisition unit (not shown) configured to obtain condition description information, the condition description information describing the skin disorder corresponding to the image to be processed; and a third output unit (not shown) configured to output diagnosis-correct information in response to the condition description information matching the disorder information.
In some optional implementations of the present embodiment, the disorder recognition model may include a convolutional neural network and a classification neural network.
In some optional implementations of the present embodiment, the information acquisition unit 502 may include a disorder label image acquisition subunit (not shown) and a disorder description information acquisition subunit (not shown).
The disorder label image acquisition subunit is configured to input the image to be processed into the convolutional neural network to obtain the disorder label image of the image to be processed, where the convolutional neural network characterizes the correspondence between images to be processed and disorder label images, and the disorder label image marks the position of the skin disorder in the image to be processed.
The disorder description information acquisition subunit is configured to input the disorder label image into the classification neural network to obtain the disorder description information of the image to be processed, where the classification neural network characterizes the correspondence between disorder label images and disorder description information.
In some optional implementations of the present embodiment, the device 500 for obtaining information may further include a disorder recognition model training unit (not shown) configured to train the disorder recognition model. The disorder recognition model training unit may include a sample acquisition subunit (not shown) and a disorder recognition model training subunit (not shown). The sample acquisition subunit is configured to obtain a plurality of sample images containing sample skin disorder images, together with the sample disorder label image and sample disorder description information corresponding to each of those sample images, where each sample disorder label image includes a bounding box and the sample skin disorder image within the bounding box, and each sample skin disorder image includes a sample lesion image and a sample peripheral-symptom image. The disorder recognition model training subunit is configured to train the disorder recognition model by using each of the sample images as input and the corresponding sample disorder label image and sample disorder description information as output.
In some optional implementations of the present embodiment, the disorder recognition model training subunit may include: a disorder recognition model training module (not shown) configured to sequentially input each of the sample images into an initial disorder recognition model to obtain the predicted disorder label image and predicted disorder description information corresponding to each sample image, compare the predictions for each sample image with the sample disorder label image and sample disorder description information corresponding to that sample image to obtain the prediction accuracy of the initial disorder recognition model, determine whether the prediction accuracy is greater than a preset accuracy threshold, and, if it is greater than the preset accuracy threshold, take the initial disorder recognition model as the trained disorder recognition model.
In some optional implementations of the present embodiment, the disorder recognition model training subunit may further include: a parameter adjustment module (not shown) configured to, in response to the prediction accuracy being not greater than the preset accuracy threshold, adjust the parameters of the initial disorder recognition model and continue to execute the training step.
The present embodiment also provides a server, comprising: one or more processors; and a memory on which one or more programs are stored, the one or more programs, when executed by the one or more processors, causing the one or more processors to execute the method for obtaining information described above.
The present embodiment also provides a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the method for obtaining information described above.
Referring now to Fig. 6, a structural schematic diagram of a computer system 600 suitable for implementing the server of an embodiment of the present application (for example the server 105 in Fig. 1) is shown. The server shown in Fig. 6 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602 and the RAM 603 are connected with one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse and the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD) and the like, and a loudspeaker and the like; a storage section 608 including a hard disk and the like; and a communications section 609 including a network interface card such as a LAN card or a modem. The communications section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read from it can be installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communications section 609 and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-described functions defined in the method of the present application are executed.
It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the present application, a computer-readable storage medium may be any tangible medium containing or storing a program that may be used by or in combination with an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted with any suitable medium, including but not limited to wireless means, electric wires, optical cables, RF, or any appropriate combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to the various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or part of code, and the module, program segment or part of code contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that executes the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor comprising an image acquisition unit and an information acquisition unit. The names of these units do not, in some cases, limit the units themselves; for example, the information acquisition unit may also be described as "a unit for obtaining illness information through an illness identification model".
As another aspect, the present application further provides a computer-readable medium, which may be included in the device described in the above embodiments, or may exist separately without being assembled into the device. The computer-readable medium carries one or more programs which, when executed by the device, cause the device to: obtain an image to be processed, the image to be processed including a skin disorder image; and import the image to be processed into a pre-trained illness identification model to obtain illness information, wherein the illness identification model is used to characterize the correspondence between images to be processed and illness information, the illness information includes an illness tag image and illness identification information for identifying the illness, and the illness tag image includes a marking box and the skin disorder image located in the marking box.
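As an illustration only, the following is a minimal sketch in PyTorch of the flow just described: obtaining an image to be processed, running a pre-trained model, and returning a marking box together with an identified condition. The split into a convolutional detection network and a classification network mirrors the model structure recited in claims 5 and 6 below; the layer sizes, class names, and the dictionary holding the result are assumptions made for the sketch, not the implementation of the present application.

```python
# A minimal sketch, assuming PyTorch; shapes, layer sizes, and names are
# illustrative assumptions, not the model described in this application.
import torch
import torch.nn as nn


class DetectionCNN(nn.Module):
    """Stands in for the convolutional network that produces the marking box."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.box_head = nn.Linear(16, 4)  # (x, y, w, h) of the marking box

    def forward(self, image):
        return self.box_head(self.backbone(image))


class ClassificationNet(nn.Module):
    """Stands in for the classification network that names the condition."""
    def __init__(self, num_conditions=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_conditions)

    def forward(self, tag_image):
        return self.head(self.backbone(tag_image))


def obtain_illness_info(image, detector, classifier):
    """Image in, illness information out: a marking box plus an identified condition.
    In claim 6 the classifier receives the illness tag image (the boxed region);
    here the uncropped image stands in for it to keep the sketch short."""
    with torch.no_grad():
        box = detector(image)
        logits = classifier(image)
    return {
        "marking_box": box.squeeze(0).tolist(),
        "condition_id": int(logits.argmax(dim=1)),
    }


if __name__ == "__main__":
    demo_image = torch.rand(1, 3, 224, 224)  # placeholder skin image
    print(obtain_illness_info(demo_image, DetectionCNN().eval(), ClassificationNet().eval()))
```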
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.
Claims (20)
1. A method for obtaining information, comprising:
obtaining an image to be processed, the image to be processed including a skin disorder image;
importing the image to be processed into a pre-trained illness identification model to obtain illness information, wherein the illness identification model is used to characterize the correspondence between images to be processed and illness information, the illness information includes an illness tag image and illness identification information for identifying the illness, and the illness tag image includes a marking box and the skin disorder image located in the marking box.
2. The method according to claim 1, wherein the method further comprises:
in response to the illness information corresponding to one skin disorder, taking the skin disorder as the final illness information.
3. The method according to claim 1, wherein the method further comprises:
in response to the illness information corresponding to at least two skin disorders, performing statistics on illness features of the at least two skin disorders to determine the final illness information, wherein the illness features include at least one of the following: an area of the skin disorder and a quantity of the skin disorders.
4. The method according to claim 1, wherein the method further comprises:
obtaining illness state description information, the illness state description information being used to describe the skin disorder corresponding to the image to be processed; and
the method further comprises:
in response to the illness state description information matching the illness information, outputting information indicating that the diagnosis is correct.
5. The method according to claim 1, wherein the illness identification model comprises a convolutional neural network and a classification neural network.
6. The method according to claim 5, wherein importing the image to be processed into the pre-trained illness identification model to obtain the illness information comprises:
inputting the image to be processed into the convolutional neural network to obtain the illness tag image of the image to be processed, wherein the convolutional neural network is used to characterize the correspondence between images to be processed and illness tag images, and the illness tag image is used to mark the position of the skin disorder in the image to be processed;
inputting the illness tag image into the classification neural network to obtain illness description information of the image to be processed, wherein the classification neural network is used to characterize the correspondence between illness tag images and illness description information.
7. The method according to claim 1, wherein the illness identification model is obtained by training as follows:
obtaining a plurality of sample images containing skin disorder images, and a sample illness tag image and sample illness description information corresponding to each sample image in the plurality of sample images containing skin disorder images, wherein the sample illness tag image includes a marking box and a sample skin disorder image located in the marking box, and the sample skin disorder image includes a sample illness image and a sample peripheral symptom image;
training to obtain the illness identification model by taking each sample image in the plurality of sample images containing sample skin disorder images as input, and taking the sample illness tag image and sample illness description information corresponding to each sample image in the plurality of sample images containing sample skin disorder images as output.
8. The method according to claim 7, wherein the training to obtain the illness identification model by taking each sample image in the plurality of sample images containing sample skin disorder images as input and taking the sample illness tag image and sample illness description information corresponding to each sample image as output comprises:
performing the following training step: sequentially inputting each sample image in the plurality of sample images containing sample skin disorder images into an initial illness identification model to obtain a predicted illness tag image and predicted illness description information corresponding to each sample image; comparing the predicted illness tag image and predicted illness description information corresponding to each sample image with the sample illness tag image and sample illness description information corresponding to that sample image, respectively, to obtain a prediction accuracy of the initial illness identification model; determining whether the prediction accuracy is greater than a preset accuracy threshold; and if the prediction accuracy is greater than the preset accuracy threshold, taking the initial illness identification model as the trained illness identification model.
9. The method according to claim 8, wherein the training to obtain the illness identification model by taking each sample image in the plurality of sample images containing sample skin disorder images as input and taking the sample illness tag image and sample illness description information corresponding to each sample image as output further comprises:
in response to the prediction accuracy being not greater than the preset accuracy threshold, adjusting parameters of the initial illness identification model and continuing to perform the training step.
10. A device for obtaining information, comprising:
an image acquisition unit configured to obtain an image to be processed, the image to be processed including a skin disorder image;
an information acquisition unit configured to import the image to be processed into a pre-trained illness identification model to obtain illness information, wherein the illness identification model is used to characterize the correspondence between images to be processed and illness information, the illness information includes an illness tag image and illness identification information for identifying the illness, and the illness tag image includes a marking box and the skin disorder image located in the marking box.
11. The device according to claim 10, wherein the device further comprises:
a first output unit configured to, in response to the illness information corresponding to one skin disorder, take the skin disorder as the final illness information.
12. The device according to claim 10, wherein the device further comprises:
a second output unit configured to, in response to the illness information corresponding to at least two skin disorders, perform statistics on illness features of the at least two skin disorders to determine the final illness information, wherein the illness features include at least one of the following: an area of the skin disorder and a quantity of the skin disorders.
13. The device according to claim 10, wherein the device further comprises:
an illness state description information acquisition unit configured to obtain illness state description information, the illness state description information being used to describe the skin disorder corresponding to the image to be processed; and
the device further comprises:
a third output unit configured to, in response to the illness state description information matching the illness information, output information indicating that the diagnosis is correct.
14. The device according to claim 10, wherein the illness identification model comprises a convolutional neural network and a classification neural network.
15. The device according to claim 14, wherein the information acquisition unit comprises:
an illness tag image obtaining subunit configured to input the image to be processed into the convolutional neural network to obtain the illness tag image of the image to be processed, wherein the convolutional neural network is used to characterize the correspondence between images to be processed and illness tag images, and the illness tag image is used to mark the position of the skin disorder in the image to be processed;
an illness description information obtaining subunit configured to input the illness tag image into the classification neural network to obtain illness description information of the image to be processed, wherein the classification neural network is used to characterize the correspondence between illness tag images and illness description information.
16. The device according to claim 10, wherein the device further comprises an illness identification model training unit configured to train the illness identification model, the illness identification model training unit comprising:
a sample acquisition subunit configured to obtain a plurality of sample images containing sample skin disorder images, and a sample illness tag image and sample illness description information corresponding to each sample image in the plurality of sample images containing sample skin disorder images, wherein the sample illness tag image includes a marking box and a sample skin disorder image located in the marking box, and the sample skin disorder image includes a sample illness image and a peripheral symptom image;
an illness identification model training subunit configured to train to obtain the illness identification model by taking each sample image in the plurality of sample images containing sample skin disorder images as input, and taking the sample illness tag image and sample illness description information corresponding to each sample image as output.
17. The device according to claim 16, wherein the illness identification model training subunit comprises:
an illness identification model training module configured to sequentially input each sample image in the plurality of sample images containing sample skin disorder images into an initial illness identification model to obtain a predicted illness tag image and predicted illness description information corresponding to each sample image, compare the predicted illness tag image and predicted illness description information corresponding to each sample image with the sample illness tag image and sample illness description information corresponding to that sample image, respectively, to obtain a prediction accuracy of the initial illness identification model, determine whether the prediction accuracy is greater than a preset accuracy threshold, and if the prediction accuracy is greater than the preset accuracy threshold, take the initial illness identification model as the trained illness identification model.
18. The device according to claim 17, wherein the illness identification model training subunit further comprises:
a parameter adjustment module configured to, in response to the prediction accuracy being not greater than the preset accuracy threshold, adjust parameters of the initial illness identification model and continue to perform the training step.
19. A server, comprising:
one or more processors;
a memory on which one or more programs are stored,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method according to any one of claims 1 to 9.
20. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 9.
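The training procedure recited in claims 7 to 9 amounts to a loop that predicts on the sample images, measures the prediction accuracy against the sample annotations, stops once the accuracy exceeds a preset threshold, and otherwise adjusts the model parameters and repeats. The following is a minimal sketch of that loop, assuming PyTorch; the tiny linear model, the synthetic samples, and the 0.9 threshold are illustrative assumptions, not values or code from this application.

```python
# A minimal sketch of an accuracy-thresholded training loop, assuming PyTorch.
import torch
import torch.nn as nn


def train_until_accurate(features, labels, threshold=0.9, max_rounds=500):
    """Predict, compare with the sample annotations, stop when accuracy exceeds
    the preset threshold, otherwise adjust parameters and repeat the step."""
    model = nn.Linear(features.shape[1], int(labels.max()) + 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    accuracy = 0.0
    for _ in range(max_rounds):
        logits = model(features)  # predicted illness descriptions for the samples
        accuracy = (logits.argmax(dim=1) == labels).float().mean().item()
        if accuracy > threshold:  # training is considered complete
            break
        optimizer.zero_grad()     # otherwise adjust the model parameters
        loss_fn(logits, labels).backward()
        optimizer.step()
    return model, accuracy


if __name__ == "__main__":
    features = torch.rand(64, 8)               # stand-ins for per-image features
    labels = (features[:, 0] > 0.5).long()     # stand-in condition labels
    trained_model, final_accuracy = train_until_accurate(features, labels)
    print(f"final training accuracy: {final_accuracy:.2f}")
```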
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811054409.6A CN109166120A (en) | 2018-09-11 | 2018-09-11 | For obtaining the method and device of information |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN109166120A true CN109166120A (en) | 2019-01-08 |
Family
ID=64894699
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811054409.6A Pending CN109166120A (en) | 2018-09-11 | 2018-09-11 | For obtaining the method and device of information |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN109166120A (en) |
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160364526A1 (en) * | 2015-06-12 | 2016-12-15 | Merge Healthcare Incorporated | Methods and Systems for Automatically Analyzing Clinical Images Using Models Developed Using Machine Learning Based on Graphical Reporting |
| CN107247958A (en) * | 2017-04-14 | 2017-10-13 | 安徽工程大学 | A kind of skin disease feature extracting method based on image recognition |
| CN107145910A (en) * | 2017-05-08 | 2017-09-08 | 京东方科技集团股份有限公司 | Performance generation system, its training method and the performance generation method of medical image |
| CN107203995A (en) * | 2017-06-09 | 2017-09-26 | 合肥工业大学 | Endoscopic images intelligent analysis method and system |
| CN107330449A (en) * | 2017-06-13 | 2017-11-07 | 瑞达昇科技(大连)有限公司 | Method and device for detecting signs of diabetic retinopathy |
| CN107563123A (en) * | 2017-09-27 | 2018-01-09 | 百度在线网络技术(北京)有限公司 | Method and apparatus for marking medical image |
| CN107945173A (en) * | 2017-12-11 | 2018-04-20 | 深圳市宜远智能科技有限公司 | A kind of skin disease detection method and system based on deep learning |
| CN108491808A (en) * | 2018-03-28 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | Method and device for obtaining information |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110135517A (en) * | 2019-05-24 | 2019-08-16 | 北京百度网讯科技有限公司 | Method and device for obtaining vehicle similarity |
| CN113096100A (en) * | 2021-04-15 | 2021-07-09 | 杭州睿胜软件有限公司 | Method for diagnosing plant diseases and plant disease diagnosis system |
| CN113096100B (en) * | 2021-04-15 | 2023-08-22 | 杭州睿胜软件有限公司 | Methods for plant condition diagnosis and plant condition diagnosis systems |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20190108 |