US20220207879A1 - Method for evaluating environment of a pedestrian passageway and electronic device using the same
- Publication number
- US20220207879A1 (application US 17/562,297)
- Authority
- US
- United States
- Prior art keywords
- streetscape image
- streetscape
- image
- target
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30236—Traffic on road, railway or crossing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Geometry (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
Abstract
Description
- This application claims priority to Chinese Patent Application No. 202011613540.9, filed on Dec. 30, 2020 in the China National Intellectual Property Administration, the contents of which are incorporated by reference herein.
- The subject matter herein generally relates to the field of deep learning, and especially relates to a method for evaluating the environment of a pedestrian passageway and an electronic device.
- When evaluating livable areas, in addition to the condition of the house itself, the facilities and environment around the house also strongly affect the evaluation of an area; for example, pedestrian road facilities, the traffic environment, and weather conditions around the house may persuade or dissuade potential buyers. Therefore, within a preset house purchase budget, how to weigh the various environments and facilities around the house is very important. Thus, in the process of evaluating the environments and facilities around a house, a pedestrian-friendly passageway is undoubtedly an important indicator that adds points to the local environment of the house. However, at present, there is no effective method to assist people in evaluating the environment of a pedestrian passageway.
- Implementations of the present disclosure will now be described, by way of embodiment, with reference to the attached figures.
- FIG. 1 is a flowchart of one embodiment of a method for evaluating surroundings of a pedestrian passageway according to the present disclosure.
- FIG. 2 is a block diagram of one embodiment of a device for evaluating surroundings of a pedestrian passageway according to the present disclosure.
- FIG. 3 is a schematic diagram of one embodiment of an electronic device using the method of FIG. 1 according to the present disclosure.
- It will be appreciated that for simplicity and clarity of illustration, where appropriate, reference numerals have been repeated among the different figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein can be practiced without these specific details. In other instances, methods, procedures, and components have not been described in detail so as not to obscure the relevant feature being described. Also, the description is not to be considered as limiting the scope of the embodiments described herein. The drawings are not necessarily to scale and the proportions of certain parts may be exaggerated to better illustrate details and features of the present disclosure.
- The present disclosure, including the accompanying drawings, is illustrated by way of examples and not by way of limitation. Several definitions that apply throughout this disclosure will now be presented. It should be noted that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references mean “at least one”.
- The term “module”, as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, written in a programming language, such as, Java, C, or assembly. One or more software instructions in the modules can be embedded in firmware, such as in an EPROM. The modules described herein can be implemented as either software and/or hardware modules and can be stored in any type of non-transitory computer-readable medium or other storage device. Some non-limiting examples of non-transitory computer-readable media include CDs, DVDs, BLU-RAY, flash memory, and hard disk drives. The term “comprising” means “including, but not necessarily limited to”; it specifically indicates open-ended inclusion or membership in a so-described combination, group, series, and the like.
- A method for evaluating the environment and surroundings of a pedestrian passageway is disclosed. The method is applied in one or more electronic devices. The hardware of the electronic device includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, etc.
- In one embodiment, the electronic device can be a desktop computer, a notebook computer, a tablet computer, a cloud server, or another computing device. The device can carry out human-computer interaction with a user by means of a keyboard, a mouse, a remote controller, a touch pad, or a voice control device.
- FIG. 1 illustrates the method for evaluating the environment of a pedestrian passageway. The method is applied in the electronic device 6 (referring to FIG. 3). The method is provided by way of example, as there are a variety of ways to carry out the method. Each block shown in FIG. 1 represents one or more processes, methods, or subroutines carried out in the example method. Furthermore, the illustrated order of blocks is by example only and the order of the blocks can be changed. Additional blocks may be added or fewer blocks may be utilized, without departing from this disclosure. The example method can begin at block 11.
- At block 11, obtaining position information of a target area in an environment of a pedestrian passageway.
- In one embodiment, the position information includes longitude and latitude. The electronic device 6 obtains the longitude and the latitude of the target area. In one embodiment, the electronic device 6 obtains the longitude and the latitude of the target area by a GPS positioning device. In another embodiment, the electronic device 6 provides a user interface and receives the position information input through the user interface. In one embodiment, the target area can be a commercial or other building and an area of a street.
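- A minimal sketch of block 11 is given below. The `PositionInfo` class and the `gps_reader` callable are illustrative assumptions introduced only for this example, not names from the disclosure; the sketch simply shows the two acquisition paths described above: a GPS positioning device when one is present, otherwise coordinates entered through a user interface.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple


@dataclass
class PositionInfo:
    """Longitude/latitude pair identifying the target area (block 11)."""
    longitude: float
    latitude: float


def obtain_position(gps_reader: Optional[Callable[[], Tuple[float, float]]] = None,
                    user_input: Optional[str] = None) -> PositionInfo:
    """Return the position of the target area.

    Prefers a GPS positioning device when available; otherwise falls back to
    coordinates entered through a user interface, e.g. "121.5654,25.0330".
    """
    if gps_reader is not None:
        lon, lat = gps_reader()  # hypothetical callable wrapping a GPS positioning device
        return PositionInfo(lon, lat)
    if user_input is not None:
        lon, lat = (float(v) for v in user_input.split(","))
        return PositionInfo(lon, lat)
    raise ValueError("no GPS device and no user-supplied coordinates")


print(obtain_position(user_input="121.5654,25.0330"))
```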
- At block 12, obtaining a streetscape image corresponding to the position information of the target area, where the streetscape image includes a number of target objects.
- In one embodiment, obtaining the streetscape image corresponding to the position information of the target area includes: querying an image database with the position information of the target area, and obtaining at least one streetscape image corresponding to the position information of the target area, where the image database includes a number of streetscape images, and each of the streetscape images corresponds to one item of position information. In one embodiment, each streetscape image includes at least one target object. The target object includes at least one of a bus stop sign, a roadside tree, a road, a sidewalk, an electric pole, a roadside bench, a roadside fire hydrant, and an electric substation box. The target object in this application is not limited to the above objects, and any facility or hardware object that hinders progress on the sidewalk or is otherwise conspicuous can be used as a target object in this application.
- At
- At block 13, inputting the streetscape image into a trained convolutional neural network, making the trained convolutional neural network carry out a convolution calculation on the streetscape image to generate a feature vector for classifying the target objects in the streetscape image, and outputting the feature vector.
- In one embodiment, the electronic device 6 uses the trained convolutional neural network to carry out a convolution calculation on the streetscape image to generate a feature map of the streetscape image, uses an object recognition and segmentation model to take each point of the feature map as the center point of frames of certain sizes, compares each frame with the streetscape image to determine each of the target objects in the streetscape image, outputs a target frame that frames each of the target objects in the streetscape image, and classifies each of the target objects to obtain the feature vector. In one embodiment, the trained convolutional neural network includes a convolution layer, a pooling layer, and a fully connected layer. In one embodiment, the trained convolutional neural network includes ten convolution layers, three pooling layers, and three fully connected layers.
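- The layer counts in the sketch below follow the embodiment just described (ten convolution layers, three pooling layers, three fully connected layers). The channel widths, input resolution, and number of target-object classes are assumptions, and the object recognition and segmentation (frame proposal) stage is omitted; this is a sketch, not the disclosed network itself.

```python
import torch
import torch.nn as nn


class StreetscapeCNN(nn.Module):
    """Backbone sketch: ten conv layers, three pooling layers, three fully connected layers."""

    def __init__(self, num_classes: int = 8, in_size: int = 224):
        super().__init__()

        def block(cin, cout, n):  # n conv layers followed by one max-pooling layer
            layers = []
            for i in range(n):
                layers += [nn.Conv2d(cin if i == 0 else cout, cout, 3, padding=1),
                           nn.ReLU(inplace=True)]
            layers.append(nn.MaxPool2d(2))
            return layers

        # 3 + 3 + 4 = 10 convolution layers, 3 pooling layers in total
        self.features = nn.Sequential(*block(3, 32, 3), *block(32, 64, 3), *block(64, 128, 4))
        feat = in_size // 8  # three 2x2 poolings halve the spatial resolution three times
        self.classifier = nn.Sequential(  # three fully connected layers
            nn.Flatten(),
            nn.Linear(128 * feat * feat, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 128), nn.ReLU(inplace=True),
            nn.Linear(128, num_classes),  # one score per target-object class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


# feature vector for one 224x224 RGB streetscape image
scores = StreetscapeCNN()(torch.randn(1, 3, 224, 224))
```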
- In one embodiment, training the convolutional neural network by using the training set to obtain the trained convolutional neural network includes: by the convolution layer of the convolution neural network, making the training image in the training set carry out a convolution manipulation and outputting the feature map of the training image; making the feature map deal with dimension reduction by the pooling layer to generate a second feature map, and inputting the second feature map into the full connection layer, where the full connection layer is configured to synthesize the second feature map extracted after convolution operation and output a number of training parameters and a feature model of the convolution neural network, where the feature model is an abstract feature expression of the training image. Whether the convolutional neural network accords with a convergence condition must be determined, namely, determining whether the feature model is consistent with a preset standard feature model. When the feature model is consistent with the preset standard feature model, it is determined that the convolutional neural network accords with the convergence condition; when the feature model is not consistent with the preset standard feature model, it is determined that the convolutional neural network does not accord with the convergence conditions, and the preset standard feature model is the target object marked in the training image. In one embodiment, when the feature model accords with the preset standard feature model, the feature model is output; and when the characteristic model does not accord with the preset standard feature model, a weighting matrix of the convolutional neural network is adjusted in the manner of a back propagation.
- In one embodiment, in a training process of the convolutional neural network, if there is an error between the feature model and the preset standard feature model, the error is transmitted back along an original path by back propagation, so as to correct the training parameters of each of layers (E. G, convolution layer and pooling layer) of the convolutional neural network. For example, the training parameters include a number of weighted values and bias, and a modified convolution layer and a pooling layer of the convolutional neural network are used to convolute the training data (for example training images) again until the feature model accords with the convergence condition. In one embodiment, when carrying out convolution operation of the convolutional neural network, a number of feature maps can be applied to the training image to obtain the features of the training image, and each feature map extracts a feature of the training image.
- At
- At block 14, inputting the feature vector of the streetscape image into a fully convolutional neural network to label a number of pixels belonging to the same target object by color, and outputting the streetscape image labeling a number of target objects with color.
- In one embodiment, inputting the feature vector of the streetscape image into the fully convolutional neural network to label a number of pixels belonging to the same target object by color and outputting the streetscape image with target objects in color includes: inputting the feature vector of the streetscape image into the fully convolutional neural network to label the pixels belonging to the same target object in the feature vector by color, framing the pixels with the same color label together, the pixels with the same color label forming the same target object, and outputting the streetscape image with a color label on each target object.
- The present disclosure inputs the feature vector of the streetscape image into the full convolution neural network to label the pixels belonging to the same target object in the feature vector by color, and outputs the streetscape image with color label of the target object, so that a classification of the target object can be distinguished by color from the streetscape image, so as to help people identify obstacles in the environment of the pedestrian passageway.
-
- FIG. 2 illustrates a device 30 for evaluating the environment of a pedestrian passageway. The device 30 is applied in the electronic device 6. In one embodiment, according to the functions it performs, the device 30 can be divided into a plurality of functional modules. The functional modules perform blocks 11-14 in the embodiment of FIG. 1 to perform the functions of evaluating the environment of the pedestrian passageway.
- In one embodiment, the device 30 includes, but is not limited to, a position information acquisition module 301, an image information acquisition module 302, a classification module 303, a training module 304, a color labeling module 305, and a display module 306. The modules 301-306 of the device 30 can be collections of software instructions. In one embodiment, the program code of each program segment in the software instructions can be stored in and executed by at least one processor to perform the function of evaluating the environment of the pedestrian passageway.
- The position information acquisition module 301 obtains position information of an area to be investigated in an environment of a pedestrian passageway.
- In one embodiment, the position information of the target area includes longitude and latitude. The position information acquisition module 301 obtains the longitude and the latitude of the target area and regards the longitude and the latitude as the position information of the target area. In one embodiment, the position information acquisition module 301 obtains the longitude and the latitude of the target area to be detected by a GPS positioning device. In another embodiment, the position information acquisition module 301 provides a user interface and receives the position information input through the user interface. In one embodiment, the target area can be a commercial or other building and an area of an adjacent street.
- The image information acquisition module 302 obtains a streetscape image corresponding to the position information of the target area, where the streetscape image includes a number of target objects.
- In one embodiment, the image information acquisition module 302 queries an image database with the position information of the target area, and obtains at least one streetscape image corresponding to the position information, where the image database includes a number of streetscape images, and each of the streetscape images corresponds to one item of position information. In one embodiment, each of the streetscape images includes at least one target object. A target object can include at least one of a bus stop sign, a roadside tree, a road, a sidewalk, an electric pole, a roadside bench, a roadside fire hydrant, and an electrical substation box. It should be noted that the target object in this application is not limited to the above objects, and any conspicuous facility or hardware object that hinders progress on the sidewalk can be used as a target object in this application.
- In one embodiment, the image information acquisition module 302 obtains four streetscape images corresponding to the position information of the target area, the four streetscape images are captured at four 90-degree angles, and the four streetscape images constitute a 360-degree panoramic image.
- The classification module 303 inputs the streetscape image into a trained convolutional neural network, makes the trained convolutional neural network carry out a convolution calculation on the streetscape image to generate a feature vector for classifying a number of target objects in the streetscape image, and outputs the feature vector.
- In one embodiment, the classification module 303 uses the trained convolutional neural network to carry out a convolution calculation on the streetscape image to generate a feature map of the streetscape image. An object recognition and segmentation model is used to take each point of the feature map as the center point of frames of certain sizes, compare each of the frames with the streetscape image to determine each target object in the streetscape image, output a number of target frames that frame each target object in the streetscape image, and classify each target object to obtain the feature vector. In one embodiment, the trained convolutional neural network includes a convolution layer, a pooling layer, and a fully connected layer. In one embodiment, the trained convolutional neural network includes ten convolution layers, three pooling layers, and three fully connected layers.
- In one embodiment, the training module 304 trains a convolutional neural network to obtain the trained convolutional neural network. In one embodiment, the training module 304 establishes a training set from a number of training images, and each target object is marked in each of the training images. The convolutional neural network is trained with the training set to obtain the trained convolutional neural network.
- In one embodiment, by the convolution layer of the convolutional neural network, the training module 304 carries out a convolution calculation on each training image in the training set to generate the feature map of the training image, reduces the dimensions of the feature map by the pooling layer to generate a second feature map, and inputs the second feature map into the fully connected layer, where the fully connected layer is configured to synthesize the second feature map extracted after the convolution process. The training module 304 outputs a number of training parameters and a feature model of the convolutional neural network, where the feature model is an abstract feature expression of the training image. The training module 304 determines whether the convolutional neural network meets a convergence condition, namely, determines whether the feature model is consistent with a preset standard feature model. When the feature model is consistent with the preset standard feature model, the training module 304 determines that the convolutional neural network meets the convergence condition; when the feature model is not consistent with the preset standard feature model, the training module 304 determines that the convolutional neural network does not meet the convergence condition. The preset standard feature model is the target object marked in the training image. In one embodiment, when the feature model accords with the preset standard feature model, the feature model is output. When the feature model does not accord with the preset standard feature model, a weighting matrix of the convolutional neural network is adjusted by back propagation.
- In one embodiment, in the training process of the convolutional neural network, if an error exists between the feature model and the preset standard feature model, the error is transmitted back along the original path of the convolutional neural network by back propagation so as to correct the training parameters of each layer (e.g., the convolution layers and pooling layers) of the convolutional neural network. For example, the training parameters include a number of weight values and biases, and the modified convolution layers and pooling layers of the convolutional neural network are used to convolve the training data (for example, the training images) again until the feature model meets the convergence condition. In one embodiment, when carrying out the convolution process of the convolutional neural network, a number of feature maps can be applied to the training image to obtain the features of the training image, and each feature map extracts one feature of the training image.
- The color labeling module 305 inputs the feature vector of the streetscape image into a fully convolutional neural network to label a number of pixels belonging to the same target object by color, and outputs the streetscape image labeling a number of target objects with color.
- In one embodiment, the color labeling module 305 inputs the feature vector of the streetscape image into the fully convolutional neural network to label the pixels belonging to the same target object in the feature vector by color, frames the pixels with the same color label together, the pixels with the same color label forming the same target object, and outputs the streetscape image with colored target objects.
- In one embodiment, the display module 306 identifies the target objects from the color-labeled streetscape image, and displays the target objects by displaying text information.
- FIG. 3 illustrates the electronic device 6. The electronic device 6 includes a storage 61, a processor 62, and a computer program 63 stored in the storage 61 and executed by the processor 62. When the processor 62 executes the computer program 63, the processing in the embodiment of the method for evaluating the environment of a pedestrian passageway is implemented, for example, blocks 11 to 14 as shown in FIG. 1. Alternatively, when the processor 62 executes the computer program 63, the functions of the modules in the embodiment of the device 30 for evaluating the environment of a pedestrian passageway are implemented, for example, modules 301-306 shown in FIG. 2.
- In one embodiment, the computer program 63 can be partitioned into one or more modules/units that are stored in the storage 61 and executed by the processor 62. The one or more modules/units may be a series of computer program instruction segments capable of performing a particular function, and the instruction segments describe the execution of the computer program 63 in the electronic device 6. For example, the computer program 63 can be divided into the position information acquisition module 301, the image information acquisition module 302, the classification module 303, the training module 304, the color labeling module 305, and the display module 306 as shown in FIG. 2.
- FIG. 3 shows only one example of the electronic device 6; other examples may include more or fewer components than those illustrated, some components may be combined, or the components may have a different arrangement. The components of the electronic device 6 may also include input devices, output devices, communication units, network access devices, buses, and the like.
- The processor 62 can be a central processing unit (CPU), and can also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. The processor 62 may be a microprocessor, or the processor may be any conventional processor or the like. The processor 62 is the control center of the electronic device 6, and connects the electronic device 6 by using various interfaces and lines. The storage 61 can be used to store the computer program 63, modules, or units, and the processor 62 can realize various functions of the electronic device 6 by running or executing the computer program, modules, or units stored in the storage 61 and calling up the data stored in the storage 61.
- In one embodiment, the storage 61 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playback function, an image playing function, etc.), and the like. The data storage area can store data (such as audio data, a telephone book, etc.) created according to the use of the electronic device 6. In addition, the storage 61 may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one disk storage device, a flash memory device, or another solid-state storage device.
- In one embodiment, the modules/units integrated in the electronic device 6 can be stored in a computer-readable storage medium if such modules/units are implemented in the form of a software product. Thus, all or part of the method of the foregoing embodiments may be implemented by a computer program, which may be stored in the computer-readable storage medium. The steps of the various method embodiments described above may be implemented by the computer program when executed by a processor. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), electrical carrier signals, telecommunication signals, and software distribution media.
- The exemplary embodiments shown and described above are only examples. Even though numerous characteristics and advantages of the present disclosure have been set forth in the foregoing description, together with details of the structure and function of the present disclosure, the disclosure is illustrative only, and changes may be made in the detail, including in matters of shape, size, and arrangement of the parts within the principles of the present disclosure, up to and including the full extent established by the broad general meaning of the terms used in the claims.
Claims (18)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011613540.9 | 2020-12-30 | | |
| CN202011613540.9A (published as CN114764890A) | 2020-12-30 | 2020-12-30 | Pedestrian passageway environment assessment method and device and electronic equipment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20220207879A1 (en) | 2022-06-30 |
| US12154344B2 (en) | 2024-11-26 |
Family
ID=82119429
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/562,297 (granted as US12154344B2; status: Active; anticipated expiration 2043-01-11) | 2020-12-30 | 2021-12-27 | Method for evaluating environment of a pedestrian passageway and electronic device using the same |
Country Status (2)
| Country | Link |
|---|---|
| US (1) | US12154344B2 (en) |
| CN (1) | CN114764890A (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116824305B (en) * | 2023-08-09 | 2024-06-04 | 中国气象服务协会 | Ecological environment monitoring data processing method and system applied to cloud computing |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106897741B (en) * | 2017-02-20 | 2019-06-14 | 中国人民解放军国防科学技术大学 | Polarimetric SAR Terrain Classification Method Combined with Polarimetric Coherence Features in Rotation Domain |
| CN109993803A (en) * | 2019-02-25 | 2019-07-09 | 复旦大学 | An Intelligent Analysis and Evaluation Method of Urban Tones |
| CN110543850B (en) * | 2019-08-30 | 2022-07-22 | 上海商汤临港智能科技有限公司 | Target detection method and device, neural network training method and device |
| CN111325788B (en) * | 2020-02-07 | 2020-10-30 | 北京科技大学 | A method for determining the height of buildings based on street view images |
- 2020-12-30: CN application CN202011613540.9A filed in China — published as CN114764890A; status: Pending
- 2021-12-27: US application 17/562,297 filed in the United States — granted as US12154344B2; status: Active
Patent Citations (17)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20120147010A1 (en) * | 2010-12-08 | 2012-06-14 | Definiens Ag | Graphical User Interface For Interpreting The Results Of Image Analysis |
| US20160295128A1 (en) * | 2015-04-01 | 2016-10-06 | Owl Labs, Inc. | Densely compositing angularly separated sub-scenes |
| US20190271550A1 (en) * | 2016-07-21 | 2019-09-05 | Intelligent Technologies International, Inc. | System and Method for Creating, Updating, and Using Maps Generated by Probe Vehicles |
| US10067509B1 (en) * | 2017-03-10 | 2018-09-04 | TuSimple | System and method for occluding contour detection |
| US20190057507A1 (en) * | 2017-08-18 | 2019-02-21 | Samsung Electronics Co., Ltd. | System and method for semantic segmentation of images |
| US20220126439A1 (en) * | 2019-01-22 | 2022-04-28 | Sony Group Corporation | Information processing apparatus and information processing method |
| US20210406561A1 (en) * | 2019-03-12 | 2021-12-30 | Zhejiang Dahua Technology Co., Ltd. | Systems and methods for lane detection |
| US12087010B2 (en) * | 2019-03-12 | 2024-09-10 | Zhejiang Dahua Technology Co., Ltd. | Systems and methods for lane detection |
| US20200320401A1 (en) * | 2019-04-08 | 2020-10-08 | Nvidia Corporation | Segmentation using an unsupervised neural network training technique |
| US20200364870A1 (en) * | 2019-05-14 | 2020-11-19 | University-Industry Cooperation Group Of Kyung Hee University | Image segmentation method and apparatus, and computer program thereof |
| US20200394921A1 (en) * | 2019-06-13 | 2020-12-17 | Garin System Co., Ltd. | System and method for guiding parking location of vehicle |
| US20210049372A1 (en) * | 2019-08-12 | 2021-02-18 | Naver Labs Corporation | Method and system for generating depth information of street view image using 2d map |
| US20210127059A1 (en) * | 2019-10-29 | 2021-04-29 | Microsoft Technology Licensing, Llc | Camera having vertically biased field of view |
| US20210150203A1 (en) * | 2019-11-14 | 2021-05-20 | Nec Laboratories America, Inc. | Parametric top-view representation of complex road scenes |
| US20230135051A1 (en) * | 2019-11-14 | 2023-05-04 | Si Wan Lee | Pedestrian road data construction method using mobile device, and system therefor |
| US11450008B1 (en) * | 2020-02-27 | 2022-09-20 | Amazon Technologies, Inc. | Segmentation using attention-weighted loss and discriminative feature learning |
| US20230222671A1 (en) * | 2020-12-29 | 2023-07-13 | Aimatics Co., Ltd. | System for predicting near future location of object |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115439747A (en) * | 2022-09-01 | 2022-12-06 | 中国地质科学院 | A fast and automatic environmental quality assessment method based on convolutional neural network |
| CN120495859A (en) * | 2025-07-15 | 2025-08-15 | 天津大学 | Element feature measurement method and device for neighborhood landscape modeling |
Also Published As
| Publication number | Publication date |
|---|---|
| US12154344B2 (en) | 2024-11-26 |
| CN114764890A (en) | 2022-07-19 |
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| Sun et al. | DMA-Net: DeepLab with multi-scale attention for pavement crack segmentation | |
| Cai et al. | Applying machine learning and google street view to explore effects of drivers’ visual environment on traffic safety | |
| US12154344B2 (en) | Method for evaluating environment of a pedestrian passageway and electronic device using the same | |
| US10628890B2 (en) | Visual analytics based vehicle insurance anti-fraud detection | |
| CN108121986A (en) | Object detection method and device, computer installation and computer readable storage medium | |
| CN111767831B (en) | Methods, devices, equipment and storage media for processing images | |
| CN110533950A (en) | Detection method, device, electronic device and storage medium for parking space usage status | |
| CN112215188B (en) | Traffic police gesture recognition method, device, equipment and storage medium | |
| US20230267750A1 (en) | Vehicle parking violation detection | |
| Dhatbale et al. | Deep learning techniques for vehicle trajectory extraction in mixed traffic | |
| US20240037911A1 (en) | Image classification method, electronic device, and storage medium | |
| WO2021203882A1 (en) | Attitude detection and video processing method and apparatus, and electronic device and storage medium | |
| CN113269730B (en) | Image processing method, image processing device, computer equipment and storage medium | |
| Isa et al. | Real-time traffic sign detection and recognition using Raspberry Pi | |
| CN115409985A (en) | Target object detection method and device, electronic equipment and readable storage medium | |
| TWI764489B (en) | Environmental assessment method, environmental assessment device for pedestrian path, and electronic device | |
| Hu et al. | An image-based crash risk prediction model using visual attention mapping and a deep convolutional neural network | |
| CN114897933A (en) | Vehicle detection and tracking method and device | |
| CN112115928B (en) | Training method and detection method of neural network based on illegal parking vehicle labels | |
| CN114596548A (en) | Target detection method, apparatus, computer equipment, and computer-readable storage medium | |
| CN116824491B (en) | Visibility detection method, detection model training method, device and storage medium | |
| US12260619B2 (en) | Image recognition method, electronic device and readable storage medium | |
| CN112132015A (en) | Detection method, device, medium and electronic equipment for illegal driving posture | |
| CN117975219A (en) | Alignment module, decoder training method, image segmentation method, device and medium | |
| Shen et al. | Vehicle detection based on improved YOLOv5s using coordinate attention and decoupled head |
Legal Events
| Code | Title | Description |
|---|---|---|
| AS | Assignment | Owner name: HON HAI PRECISION INDUSTRY CO., LTD., TAIWAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: CHANG, YUEH; KUO, CHIN-PIN; LIN, TZU-CHEN; REEL/FRAME: 058481/0444. Effective date: 20211124 |
| FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |