
WO2019188877A1 - Information transmission device, data structure, control method, program, and storage medium - Google Patents


Info

Publication number
WO2019188877A1
WO2019188877A1 (application PCT/JP2019/012318)
Authority
WO
WIPO (PCT)
Prior art keywords
information
vehicle
voxel
position estimation
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2019/012318
Other languages
English (en)
Japanese (ja)
Inventor
加藤 正浩 (Masahiro KATO)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pioneer Corp
Original Assignee
Pioneer Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pioneer Corp filed Critical Pioneer Corp
Publication of WO2019188877A1 publication Critical patent/WO2019188877A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 - Map- or contour-matching
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval of structured data, e.g. relational data
    • G06F16/29 - Geographical information databases
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/09 - Arrangements for giving variable traffic instructions
    • G08G1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0968 - Systems involving transmission of navigation instructions to the vehicle
    • G08G1/0969 - Systems involving transmission of navigation instructions to the vehicle having a display in the form of a map
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B29/00 - Maps; Plans; Charts; Diagrams, e.g. route diagram
    • G09B29/10 - Map spot or coordinate position indicators; Map reading aids

Definitions

  • the present invention relates to a self-position estimation technique.
  • Patent Document 1 discloses a technique for estimating a self-position by collating the output of a measurement sensor with the position information of a feature registered in advance on a map.
  • Patent Document 2 discloses a vehicle position estimation technique using a Kalman filter.
  • Non-Patent Document 1 discloses specifications related to a data format for collecting data detected by a vehicle-side sensor with a cloud server.
  • When the vehicle position is estimated by comparing measurements of objects around the vehicle taken by an external sensor such as a lidar with the position information of those objects on the map, and a target object cannot be detected because of occlusion by other vehicles or because of rain or snow, the vehicle position estimation accuracy deteriorates. Similarly, when the position or shape of the object itself changes, the matching result is shifted by the inconsistency with the object's position on the map, and the vehicle position estimation accuracy again deteriorates.
  • the present invention has been made to solve the above-described problems, and its main purpose is to provide an information transmission apparatus that transmits, to an information processing apparatus, data suitable for generating prior information indicating objects to be used for self-position estimation.
  • the invention described in the claims is an information transmission device comprising: a generation unit that generates, based on the positional relationship with an object measured by a measurement device mounted on a moving body and the position information of the object included in map information, validity information regarding the effectiveness of the object for improving the accuracy of position estimation of the moving body; and a transmission unit that transmits the identification information of the object and the validity information to an information processing apparatus.
  • the invention described in the claims is a data structure of data transmitted to an information processing apparatus that collects information on objects measured by a measurement device mounted on a moving body, the data structure including the identification information of the object and validity information regarding the effectiveness of the object for improving the accuracy of position estimation of the moving body, and being used by the information processing apparatus to calculate a recommended value for position estimation of the moving body using the object.
  • the invention described in the claims is also a control method executed by the information transmission device, which generates validity information based on the positional relationship with an object measured by the measurement device mounted on the moving body and the position information of the object included in the map information.
  • the information transmission device includes a generation unit that generates validity information regarding the effectiveness of improving the accuracy of position estimation of the moving body, based on the positional relationship with an object measured by a measurement device mounted on the moving body and the position information of the object included in map information.
  • the information transmission apparatus can transmit to the information processing apparatus validity information regarding the effectiveness of the object used for position estimation in improving position estimation accuracy. Therefore, for example, by receiving validity information from each information transmission apparatus, the information processing apparatus can statistically determine the effectiveness of each object for improving position estimation accuracy and can suitably generate prior information indicating objects to be used for self-position estimation.
  • the generation unit generates flag information indicating whether or not the estimation accuracy of the position of the moving body based on the object is improved as the validity information.
  • the information transmitting apparatus can suitably notify the information processing apparatus of the effectiveness of improving the accuracy of position estimation for the object used for position estimation.
  • In one aspect, the generation unit generates validity information regarding the effectiveness of the accuracy improvement of the position estimation for each direction relative to the moving body.
  • In another aspect, the generation unit further generates validity information regarding the effectiveness of the accuracy improvement of the position estimation for the orientation of the moving body. According to these examples, the information processing apparatus can be suitably notified, for each state variable estimated in the position estimation processing, of the effectiveness of the object used for position estimation.
  • In one aspect, the position estimation is performed by collating, for each unit region (voxel) into which the space is divided, the position information of the object with the measured positional relationship to the object, and the transmission unit transmits to the information processing apparatus, as the identification information and the validity information of the object, the identification information of each unit region and the collation result for each unit region. Also in this aspect, the information transmission apparatus performs position estimation based on the positional relationship with the object measured by the measurement device, and can suitably transmit to the information processing apparatus validity information regarding the effectiveness of the measured object for improving position estimation accuracy.
  • In one aspect, the generation unit calculates the average of the collation results over the period in which the object can be measured by the measurement device, and the transmission unit transmits the identification information for each unit region and the average for each unit region to the information processing apparatus.
  • the information transmitting apparatus can transmit validity information that more accurately reflects the effectiveness of improving the accuracy of position estimation for the measured object to the information processing apparatus.
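The averaging described above can be sketched as follows; this is a minimal Python illustration, with the function name and data layout invented for the example (the specification does not define them):

```python
def average_collation(results):
    """Average the per-unit-region collation results accumulated over
    the period in which the object was measurable by the sensor.
    'results' is a list of collation scores; returns None when the
    object was never measurable in the period."""
    return sum(results) / len(results) if results else None

# Collation scores for one unit region over four measurement cycles.
scores = [1.0, 0.0, 1.0, 1.0]
```

The transmission unit would then send the unit-region ID together with this average, rather than every raw collation result.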
  • the data structure is that of data used by the information processing apparatus to calculate a recommended value for position estimation of the moving body using the object.
  • the information processing apparatus can suitably calculate a recommended value for position estimation for each object by receiving data having such a data structure from each moving object.
  • a control method executed by an information transmission device comprises: a generation step of generating, based on the positional relationship with an object measured by a measurement device mounted on a moving body and the position information of the object included in map information, validity information regarding the effectiveness of improving the accuracy of position estimation of the moving body; and a transmission step of transmitting the identification information of the object and the validity information to the information processing device.
  • the information transmitting apparatus can transmit to the information processing apparatus validity information regarding the effectiveness of improving the accuracy of position estimation for the object used for position estimation.
  • a program causes a computer to execute the control method described above.
  • the computer functions as the information transmission device described above by executing this program.
  • the program is stored in a storage medium.
  • FIG. 1 is a schematic configuration of a driving support system according to the first embodiment.
  • the driving support system includes an in-vehicle device 1 that moves together with each vehicle that is a moving body, and a server device 6 that communicates with each in-vehicle device 1 via a network.
  • the driving support system updates the distribution map DB 20, which is a map for distribution held by the server device 6, based on the information transmitted from each in-vehicle device 1.
  • the “map” includes data used for ADAS (Advanced Driver Assistance System) and automatic driving in addition to data referred to by a conventional in-vehicle device for route guidance.
  • the in-vehicle device 1 is electrically connected to the lidar 2, the gyro sensor 3, the vehicle speed sensor 4, and the GPS receiver 5, and based on their outputs detects predetermined objects and estimates the position of the vehicle on which the in-vehicle device 1 is mounted (also referred to as "the vehicle position").
  • the in-vehicle device 1 may perform automatic driving control of the vehicle based on the estimated vehicle position.
  • the in-vehicle device 1 stores a map database (DB: DataBase) 10 in which information on objects such as road data, landmarks, and lane markings provided near roads is registered.
  • the features that serve as the above-mentioned landmarks are, for example, features periodically arranged along the roadside, such as kilometer posts, 100 m posts, delineators, traffic infrastructure facilities (for example, signs, direction signs, signals), utility poles, and street lamps.
  • the in-vehicle device 1 estimates the vehicle position by collating the output of the lidar 2 and the like with this map DB 10.
  • the in-vehicle device 1 transmits upload information “Iu” including information on the detected object to the server device 6.
  • the in-vehicle device 1 is an example of an information transmission device.
  • the lidar 2 discretely measures the distance to objects in the outside world by emitting pulsed laser light over a predetermined angle range in the horizontal and vertical directions, and generates three-dimensional point cloud information indicating the positions of those objects.
  • the lidar 2 includes an irradiation unit that irradiates laser light while changing the irradiation direction, a light receiving unit that receives the reflected light (scattered light) reflected by the object, and an output unit that outputs scan data based on the light reception signal output by the light receiving unit.
  • the scan data is point cloud data, and is generated based on the irradiation direction corresponding to the laser beam received by the light receiving unit and the distance to the object in the irradiation direction specified based on the light reception signal.
  • an object such as a lane line and a feature to be measured by the lidar 2 in the own vehicle position estimation process is also referred to as a “landmark”.
  • the lidar 2, the gyro sensor 3, the vehicle speed sensor 4, and the GPS receiver 5 each supply output data to the in-vehicle device 1.
  • the lidar 2 is an example of a “measuring device”.
  • the server device 6 receives the upload information Iu from each in-vehicle device 1 and stores it. For example, the server device 6 updates the distribution map DB 20 based on the collected upload information Iu. In addition, the server device 6 transmits download information Id including update information of the distribution map DB 20 to each in-vehicle device 1.
  • the server device 6 is an example of an information processing device and a map data generation device.
  • FIG. 2A is a block diagram showing a functional configuration of the in-vehicle device 1.
  • the in-vehicle device 1 mainly includes an interface 11, a storage unit 12, a communication unit 13, an input unit 14, a control unit 15, and an information output unit 16. Each of these elements is connected to each other via a bus line.
  • the interface 11 acquires output data from sensors such as the lidar 2, the gyro sensor 3, the vehicle speed sensor 4, and the GPS receiver 5, and supplies the output data to the control unit 15.
  • the storage unit 12 stores a program executed by the control unit 15 and information necessary for the control unit 15 to execute a predetermined process.
  • the storage unit 12 stores a map DB 10 including attribute information such as the position, size, and shape of an object that is a landmark and object recommendation information described later.
  • the object recommendation information is information indicating, for each object, a recommended value (also referred to as “object recommended value”) for estimating the vehicle position using the target object. The data structure of the object recommendation information will be described later.
  • the communication unit 13 performs transmission of the upload information Iu and reception of the download information Id based on the control of the control unit 15.
  • the input unit 14 is a button, a touch panel, a remote controller, a voice input device, or the like for a user to operate.
  • the information output unit 16 is, for example, a display or a speaker that outputs based on the control of the control unit 15.
  • the control unit 15 includes a CPU that executes a program and controls the entire vehicle-mounted device 1.
  • the control unit 15 includes a host vehicle position estimation unit 17 and an upload control unit 18.
  • the vehicle position estimation unit 17 corrects the vehicle position estimated from the output data of the gyro sensor 3, the vehicle speed sensor 4, and/or the GPS receiver 5, based on the distance and angle measured by the lidar 2 with respect to a landmark and the position information of the landmark extracted from the map DB 10.
  • based on a state estimation method using Bayesian estimation, the vehicle position estimation unit 17 alternately executes a prediction step of predicting the vehicle position from the output data of the gyro sensor 3, the vehicle speed sensor 4, and the like, and a measurement update step of correcting the predicted value of the vehicle position calculated in the prediction step.
  • the vehicle position estimation unit 17 performs vehicle position estimation using an extended Kalman filter as an example.
  • the vehicle position estimation using the extended Kalman filter will be described in the section “position estimation based on the extended Kalman filter”.
  • When the upload control unit 18 detects a predetermined object based on the output of an external sensor such as the lidar 2, it generates upload information Iu including information on the detected object and transmits the upload information Iu to the server device 6. In addition, when the vehicle position estimation unit 17 performs vehicle position estimation using the detected object as a landmark, the upload control unit 18 determines, for each state variable, whether the accuracy of position estimation was improved by that vehicle position estimation.
  • flag information (also referred to as the "valid flag") indicating the determination result is included in the upload information Iu and transmitted to the server device 6.
  • the upload control unit 18 is an example of a “generation unit”, a “transmission unit”, and a “computer” that executes a program.
  • the validity flag is an example of “validity information”.
  • FIG. 2B is a block diagram showing a functional configuration of the server device 6.
  • the server device 6 mainly includes a communication unit 61, a storage unit 62, and a control unit 65. Each of these elements is connected to each other via a bus line.
  • the communication unit 61 receives the upload information Iu and transmits the download information Id based on the control of the control unit 65.
  • the storage unit 62 stores a program executed by the control unit 65 and information necessary for the control unit 65 to execute a predetermined process.
  • the storage unit 62 stores a distribution map DB 20 and an upload information DB 27 that accumulates the upload information Iu received from each in-vehicle device 1.
  • the distribution map DB 20 includes object recommendation information generated by the control unit 65 with reference to the upload information DB 27.
  • the control unit 65 includes a CPU that executes a program and controls the entire server device 6.
  • the control unit 65 stores the upload information Iu received from each in-vehicle device 1 by the communication unit 61 in the upload information DB 27, generates object recommendation information based on the upload information DB 27, and the generated object Processing for transmitting map update information such as recommended information to each in-vehicle device 1 by the communication unit 61 is performed.
  • FIG. 3 is a diagram showing the vehicle position to be estimated in two-dimensional orthogonal coordinates.
  • the vehicle position on the plane defined by the two-dimensional orthogonal xy coordinates is represented by the coordinates "(x, y)" and the orientation (yaw angle) "ψ" of the vehicle.
  • the yaw angle ψ is defined as the angle formed by the traveling direction of the vehicle and the x-axis.
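As a small illustration, the yaw angle can be computed from the travel-direction vector with atan2; the function name is invented for this sketch and does not appear in the specification:

```python
import math

def yaw_from_velocity(vx: float, vy: float) -> float:
    """Yaw angle psi: the angle between the vehicle's travel
    direction (vx, vy) and the map's x-axis, in radians."""
    return math.atan2(vy, vx)
```

A vehicle moving along the positive y-axis, for example, has a yaw of π/2.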
  • FIG. 4 is a diagram illustrating a schematic relationship between the prediction step and the measurement update step.
  • FIG. 5 shows an example of the vehicle position estimation unit 17, which is a functional block of the control unit 15. As shown in FIG. 4, by repeating the prediction step and the measurement update step, the estimated value of the state variable vector "X" indicating the vehicle position is successively calculated and updated. Moreover, as shown in FIG. 5, the vehicle position estimation unit 17 has a position prediction unit 21 that performs the prediction step and a position estimation unit 22 that performs the measurement update step.
  • the position prediction unit 21 includes a dead reckoning block 23 and a position prediction block 24, and the position estimation unit 22 includes a landmark search / extraction block 25 and a position correction block 26.
  • the state variable vector at the reference time (i.e., the current time) "k" to be calculated is written as "X⁻(k)" or "X̂(k)".
  • the provisional estimated value (predicted value) estimated in the prediction step is denoted by a superscript "−" over the symbol, and the more accurate estimated value updated in the measurement update step is denoted by a "^" (hat) over the symbol.
  • the dead reckoning block 23 of the control unit 15 uses the moving speed and angular velocity of the vehicle together with the elapsed time T from the previous time to obtain the movement distance and azimuth change from the previous time.
  • the position prediction block 24 of the control unit 15 adds the obtained movement distance and azimuth change to the state variable vector X̂(k−1) at time k−1 calculated in the immediately preceding measurement update step, thereby calculating the predicted value (also referred to as the "predicted position") X⁻(k) at time k.
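A minimal sketch of this prediction step, assuming a simple constant-velocity, constant-yaw-rate motion model (the patent does not fix a particular model; the function and variable names are illustrative):

```python
import math

def predict_position(x, y, psi, v, omega, dt):
    """Dead-reckoning prediction: advance the state (x, y, psi) by one
    time step dt using speed v and yaw rate omega, adding the movement
    distance and azimuth change to the previous estimate."""
    dist = v * dt          # movement distance since the previous time
    dpsi = omega * dt      # azimuth (yaw) change since the previous time
    x_pred = x + dist * math.cos(psi)
    y_pred = y + dist * math.sin(psi)
    return x_pred, y_pred, psi + dpsi
```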
  • the landmark search/extraction block 25 of the control unit 15 associates the landmark position vector registered in the map DB 10 with the scan data of the lidar 2. Then, when this association is possible, the landmark search/extraction block 25 acquires the measured value "Z(k)" of the associated landmark by the lidar 2, and the landmark measurement value obtained by modeling the measurement processing of the lidar 2 using the predicted position X⁻(k) and the landmark position vector registered in the map DB 10 (referred to as the "measurement prediction value") "Z⁻(k)".
  • the measured value Z(k) is a vector value in the vehicle coordinate system (a coordinate system whose axes are the traveling direction and the lateral direction of the vehicle), converted from the distance and scan angle of the landmark measured by the lidar 2 at time k.
  • the position correction block 26 of the control unit 15 then multiplies the difference between the measured value Z(k) and the measurement prediction value Z⁻(k) by the Kalman gain "K(k)" and adds the result to the predicted position X⁻(k), as in the following equation (1), thereby calculating the updated state variable vector (also referred to as the "estimated position") X̂(k):

    X̂(k) = X⁻(k) + K(k) (Z(k) − Z⁻(k))   (1)
  • the position correction block 26 of the control unit 15 also calculates, as in the prediction step, the covariance matrix P̂(k) (simply written as P(k)) corresponding to the error distribution of the estimated position X̂(k) from the covariance matrix P⁻(k). Parameters such as the Kalman gain K(k) can be calculated in the same manner as in known self-position estimation techniques using an extended Kalman filter.
  • by repeatedly performing the prediction step and the measurement update step and successively calculating the predicted position X⁻(k) and the estimated position X̂(k), the most likely vehicle position is calculated.
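For illustration, a one-dimensional version of the measurement update of equation (1) can be written as follows; the real filter operates on the full state vector with a matrix-valued gain, so this is only a sketch with assumed scalar parameters:

```python
def measurement_update(x_pred, p_pred, z, z_pred, r):
    """Scalar sketch of the measurement update of equation (1):
        X^(k) = X-(k) + K(k) * (Z(k) - Z-(k))
    x_pred / p_pred: predicted state and its error variance,
    z / z_pred: measured value and measurement prediction value,
    r: measurement noise variance (an assumed parameter).
    Returns the estimated state and its updated variance."""
    k_gain = p_pred / (p_pred + r)          # Kalman gain K(k)
    x_est = x_pred + k_gain * (z - z_pred)  # equation (1)
    p_est = (1.0 - k_gain) * p_pred         # variance shrinks after update
    return x_est, p_est
```

The shrinking of `p_est` relative to `p_pred` is exactly what the diagonal elements of P(k) express in the text below: a smaller value after the update means the estimation accuracy improved.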
  • when the diagonal elements "σx²(k)", "σy²(k)", "σz²(k)", and "σψ²(k)" of the covariance matrix P(k) each become smaller, it can be determined that the vehicle position estimation accuracy using the landmark at time k has improved.
  • from the diagonal elements σX²(k), σY²(k), σZ²(k), and σΨ²(k) after conversion into the vehicle coordinate system (X, Y, Z), the in-vehicle device 1 can determine the estimation accuracy of each state variable of the vehicle position estimation in the traveling direction, the lateral direction, the height direction, and the orientation.
  • FIG. 6 shows an example of the data structure of object recommendation information.
  • the object recommendation information is information in which “object ID”, “position”, and “object recommended value” are associated with each other.
  • In "object ID", an object ID, which is an identification number assigned to each object serving as a landmark, is designated.
  • the object ID is an example of “object identification information”.
  • In "position", the latitude, longitude, and altitude of the target landmark are designated.
  • In "object recommended value", the object recommended value given to the target object is designated for each state variable of the vehicle position estimation (here, the traveling direction (x), the lateral direction (y), the height direction (z), and the orientation (ψ)).
  • the object recommended value is set to a value from 0 to 1, and approaches 1 as the effectiveness of the object in improving the estimation accuracy of the target state variable increases.
  • object recommendation information indicating the effectiveness (property) of position estimation for each object is included in the map DB 10.
  • the in-vehicle device 1 can identify objects that improve the vehicle position estimation accuracy by referring to the object recommendation information.
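A record of the object recommendation information of FIG. 6 might be represented as follows; the field names and values are illustrative, not taken from the specification:

```python
from dataclasses import dataclass

@dataclass
class ObjectRecommendation:
    """One record of the object recommendation information (Fig. 6)."""
    object_id: int    # "object ID" of the landmark
    position: tuple   # (latitude, longitude, altitude)
    recommended: dict # recommended value per state variable

rec = ObjectRecommendation(
    object_id=1,
    position=(35.0, 139.0, 10.0),
    # Values range from 0 to 1; closer to 1 means the object is more
    # effective for improving estimation of that state variable.
    recommended={"x": 0.9, "y": 0.2, "z": 0.5, "psi": 0.7},
)
```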
  • the in-vehicle device 1 may receive from the server device 6 the download information Id including object recommendation information on objects around the vehicle position, by transmitting a predetermined request signal including the current position information to the server device 6.
  • the vehicle-mounted device 1 controls the vehicle so as to improve the vehicle position estimation accuracy based on the object recommendation information included in the received download information Id.
  • For example, the in-vehicle device 1 searches the object recommendation information for objects existing within a predetermined distance from the current position, and may determine the moving route of the vehicle based on the position of an object having a high object recommended value for the target state variable (here, the lateral direction) among the retrieved objects. In this case, for example, the in-vehicle device 1 moves the vehicle to a lane where the target object can be easily detected (for example, the lane closest to the target object).
  • when performing vehicle position estimation using a plurality of objects, the in-vehicle device 1 may perform the Kalman filter calculation with an increased weight on the measured values of objects having a high object recommended value for the target state variable.
  • when no object having a high object recommended value for the target state variable exists around the current position, the in-vehicle device 1 may switch to another position estimation method (for example, position estimation by dead reckoning, or the position estimation method of the second embodiment described later).
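The object-selection logic described above could look like the following sketch; the threshold value and the record layout are assumptions of this illustration:

```python
def best_object_for(objects, state_var, threshold=0.5):
    """Pick, from nearby landmark records, the object most effective for
    improving the given state variable. Returns None if no object
    exceeds the (assumed) threshold, signalling a fallback to another
    position estimation method such as dead reckoning."""
    candidates = [o for o in objects
                  if o["recommended"].get(state_var, 0.0) > threshold]
    if not candidates:
        return None
    return max(candidates, key=lambda o: o["recommended"][state_var])

# Hypothetical nearby objects with lateral-direction (y) recommended values.
objs = [
    {"id": 1, "recommended": {"y": 0.9}},
    {"id": 2, "recommended": {"y": 0.3}},
]
```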
  • FIG. 7 is a diagram showing an outline of the data structure of the upload information Iu transmitted by the in-vehicle device 1. As shown in FIG. 7, the upload information Iu includes header information, travel route information, event information, and media information.
  • the header information includes items of “version”, “transmission source”, and “vehicle metadata”.
  • the in-vehicle device 1 designates, in "version", information on the version of the data structure of the upload information Iu being used, and designates, in "transmission source", information on the name of the company (the OEM name or system vendor name of the vehicle) transmitting the upload information Iu. Further, the in-vehicle device 1 designates vehicle attribute information (for example, vehicle type, vehicle ID, vehicle width, vehicle height) in "vehicle metadata".
  • the travel route information includes the item "position estimation". In this "position estimation", the in-vehicle device 1 designates time stamp information indicating the position estimation time, latitude, longitude, and altitude information indicating the estimated vehicle position, and information on the estimation accuracy.
  • Event information includes an item of “object recognition event”.
  • when the in-vehicle device 1 detects an object recognition event, it designates information on the detection result in the "object recognition event".
  • the media information is a data type used when transmitting raw data that is output data (detection information) of an external sensor such as the lidar 2.
  • FIG. 8 shows the data structure of “object recognition event” included in the event information.
  • FIG. 8 shows information indicating whether designation of information corresponding to each element is essential or optional for each element (sub-item) included in the “object recognition event”.
  • the “object recognition event” includes “time stamp”, “object ID”, “offset position”, “object type”, “object size”, “object size accuracy”, “media ID”, Each element of “valid flag” is included.
  • the control unit 15 of the in-vehicle device 1 generates event information of “object recognition event” having the data structure shown in FIG. 8 when an object is detected based on the output of an external sensor such as the lidar 2.
  • the in-vehicle device 1 designates the time when the object is detected in “time stamp”, and designates the object ID of the detected object in “object ID”.
  • In "offset position", the in-vehicle device 1 designates information on the relative position (for example, the latitude difference and longitude difference) of the detected object from the vehicle.
  • In "object type", the in-vehicle device 1 designates information indicating the type of the detected object.
  • In "object size" and "object size accuracy", the in-vehicle device 1 designates information on the size of the detected object and the accuracy of that size.
  • In "media ID", the in-vehicle device 1 designates the identification information given to the raw data.
  • the detailed information of the media (raw data) designated by the "media ID" element is stored separately in the item of media information.
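An "object recognition event" record carrying the elements of FIG. 8 might be serialized as follows; the key names and values are illustrative, since the actual wire format is defined by the data-format specification, not by this sketch:

```python
import json

# Hypothetical "object recognition event" following the element list of Fig. 8.
event = {
    "timestamp": "2019-03-25T10:15:30Z",  # time the object was detected
    "object_id": 2,                       # ID of the detected object
    "offset_position": {"d_lat": 1.2e-5, "d_lon": -3.4e-5},  # relative to vehicle
    "object_type": "sign",
    "object_size": {"width_m": 0.6, "height_m": 0.6},
    "object_size_accuracy": 0.9,
    "media_id": "raw-0001",               # links to raw sensor data in media information
    "valid_flag": {"x": 1, "y": 1, "z": 0, "psi": 1},  # one flag per state variable
}
payload = json.dumps(event)  # body of the upload information Iu event entry
```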
  • As the “valid flag”, the in-vehicle device 1 specifies, for each state variable of the vehicle position estimation, a flag indicating whether using the target object for the vehicle position estimation was effective in improving the estimation accuracy.
  • Specifically, the in-vehicle device 1 compares the diagonal elements (also referred to as “pre-estimation diagonal elements”) σx², σy², and so on of the error covariance matrix before the vehicle position estimation using the target object with the corresponding diagonal elements after the estimation. The in-vehicle device 1 sets the valid flag of a state variable whose post-estimation diagonal element is smaller than its pre-estimation diagonal element (that is, whose accuracy improved) to “1”, and sets the valid flag of a state variable whose post-estimation diagonal element is larger than its pre-estimation diagonal element (that is, whose accuracy degraded) to “0”.
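The flag-setting rule above can be sketched as follows. This is a minimal illustration, not text from the specification; the function name, the dictionary representation of the state variables, and the optional threshold parameter (corresponding to the variant described later, in which the flag is set to "1" only when the decrease is at least a predetermined threshold) are all assumptions.

```python
def set_valid_flags(pre_diag, post_diag, threshold=0.0):
    """Compare, per state variable, the diagonal element of the error
    covariance matrix before ("pre-estimation") and after the position
    estimation that used the target object. The flag is 1 when the
    element decreased (accuracy improved) by at least `threshold`,
    and 0 otherwise (including when accuracy degraded)."""
    flags = {}
    for var, pre in pre_diag.items():
        improvement = pre - post_diag[var]
        flags[var] = 1 if improvement > 0 and improvement >= threshold else 0
    return flags
```

For example, `set_valid_flags({"x": 0.5, "y": 0.4}, {"x": 0.6, "y": 0.1})` yields a flag of 1 only for the lateral direction y, whose diagonal element decreased.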
  • FIG. 9 is an overhead view showing the periphery of the vehicle on which the in-vehicle device 1 is mounted.
  • FIG. 10 shows an example of the setting value of the valid flag designated by the vehicle-mounted device 1 in the upload information Iu indicating the detection result of each object with the object IDs “1”, “2”, and “3”.
  • FIG. 11 is a graph showing the transition of the diagonal element σy² in a predetermined period including the execution periods “Tw1” to “Tw4” of the vehicle position estimation based on the white line 51 of the object ID “1”.
  • FIG. 12 is a graph showing the transition of the diagonal element σx² in a predetermined period including the execution period “Tw5” of the vehicle position estimation based on the sign 52 of the object ID “2”.
  • The in-vehicle device 1 refers to the position information and size information of the objects registered in the map DB 10 and, as shown in FIG. 9, sets prediction windows “Wp1” to “Wp3”, which define the ranges in which the respective objects should be detected, for the regions where the landmarks (the white line 51 of the object ID “1”, the sign 52 of the object ID “2”, and the sign 53 of the object ID “3”) are estimated to exist.
  • The in-vehicle device 1 detects the white line 51 of the object ID “1” in the prediction window Wp1 and estimates the vehicle position based on the white line 51.
  • The diagonal element σy² is smaller at the end of each of the execution periods Tw1 to Tw4 of the vehicle position estimation based on the white line 51 than at its start. That is, after the vehicle position estimation based on the white line 51, the position estimation accuracy in the lateral direction (y) is improved. Therefore, in this case, the in-vehicle device 1 sets the valid flag for the lateral direction (y) to “1” as shown in FIG. 10.
  • On the other hand, the in-vehicle device 1 sets each of the valid flags for the traveling direction, the height direction, and the azimuth to “0”.
  • Note that the in-vehicle device 1 may set the valid flag for a target state variable to “1” only when the decrease of the post-estimation diagonal element from the pre-estimation diagonal element for that state variable is equal to or greater than a predetermined threshold, and set the valid flag to “0” otherwise. The in-vehicle device 1 then generates upload information Iu including the object ID of the white line 51 and the set valid flags, and transmits it to the server device 6.
  • Next, the in-vehicle device 1 detects the sign 52 of the object ID “2” in the prediction window Wp2 and performs the vehicle position estimation based on the sign 52.
  • The diagonal element σx² is smaller at the end of the execution period Tw5 of the vehicle position estimation based on the sign 52 than at its start. That is, after the vehicle position estimation based on the sign 52, the position estimation accuracy in the traveling direction (x) is improved. Therefore, in this case, as shown in FIG. 10, the in-vehicle device 1 sets the valid flag for the traveling direction (x) to “1”.
  • Likewise, the in-vehicle device 1 sets the valid flags for the height direction and the azimuth to “1”. On the other hand, the in-vehicle device 1 sets the valid flag for the lateral direction to “0”. The in-vehicle device 1 then generates upload information Iu including the object ID of the sign 52 and the set valid flags, and transmits it to the server device 6.
  • The in-vehicle device 1 cannot detect the sign 53 of the object ID “3” in the prediction window Wp3 because of occlusion caused by the obstacles 54 and 55. In this case, the in-vehicle device 1 sets the valid flag of each state variable for the object ID “3” to “0”. The in-vehicle device 1 then generates upload information Iu including the object ID of the sign 53 and the set valid flags, and transmits it to the server device 6.
  • The server device 6 suitably sets the recommended object value for each state variable of each object ID within the range from 0 to 1. In this case, the recommended object value of an object that was often not detected due to occlusion, rain, snow, or the like becomes small, and the recommended value also decreases for an object that, even when detected, was rarely determined to be effective for position estimation. On the other hand, the recommended object value of an object with a high ratio of being determined to be effective for position estimation becomes high.
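One simple server-side aggregation consistent with this description computes, per state variable, the fraction of collected valid flags equal to 1. This is a sketch under that assumption; the function name and the dictionary representation are illustrative, not from the specification.

```python
def object_recommended_values(flag_records):
    """flag_records: one dict per received piece of upload information Iu,
    mapping state-variable names to the reported valid flag (0 or 1).
    Returns, per state variable, the ratio of uploads in which the object
    was judged effective, a value in [0, 1] usable as the recommended
    object value."""
    if not flag_records:
        return {}
    totals = {}
    for rec in flag_records:
        for var, flag in rec.items():
            totals[var] = totals.get(var, 0) + flag
    n = len(flag_records)
    return {var: total / n for var, total in totals.items()}
```

An undetected object reports all-zero flags, so frequent occlusion or bad weather pulls its recommended value toward 0, matching the behavior described above.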
  • The server device 6 may calculate the recommended object value using the valid flags of the upload information Iu received within a past predetermined time (for example, 10 minutes). Thereby, the server device 6 can accurately set or update the recommended object value based on the latest collected valid flags.
  • Preferably, the server device 6 sets the predetermined time according to the traffic volume on the road. For example, the server device 6 may set the predetermined time to 10 minutes for an object detectable on a road with a high traffic volume, and to 1 hour for an object detectable on a road with a low traffic volume. In another example, the server device 6 may calculate the recommended object value of a target object based on the valid flags included in the latest predetermined number of pieces of upload information Iu acquired in the past for that object. In this case, since the value S1 equals the above-mentioned predetermined number, the server device 6 can suitably calculate the recommended object value based on a fixed number of samples.
  • FIG. 13 is an example of a flowchart showing an outline of processing related to transmission / reception of upload information Iu including a valid flag and download information Id including object recommendation information.
  • The in-vehicle device 1 refers to the map DB 10 and, when there is an object to serve as a landmark, sets a prediction window for detecting the object (step S101). The in-vehicle device 1 then determines whether the target object has been detected (step S102). When the in-vehicle device 1 detects the target object (step S102; Yes), it sets a valid flag for each state variable by determining, for each state variable, whether the accuracy of the position estimation improved between before and after the vehicle position estimation based on the object (step S103). On the other hand, when the in-vehicle device 1 cannot detect the target object (step S102; No), it sets the valid flag of each state variable for the object to 0 (step S104). The in-vehicle device 1 then transmits upload information Iu including the object ID and valid flags of the target object to the server device 6 (step S105).
  • The server device 6 receives the upload information Iu transmitted in step S105 and accumulates it in the upload information DB 27 (step S201). The server device 6 then determines whether the update timing of the distribution map DB 20 has been reached (step S202).
  • The above update timing may be determined based on the length of time since the previous update of the distribution map DB 20, or based on the cumulative number of pieces of upload information Iu received since the previous update of the distribution map DB 20.
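The two update-timing criteria mentioned here can be combined in a single check, for instance as below. The threshold values and the function name are placeholders, not values from the specification.

```python
def is_update_timing(seconds_since_update, uploads_since_update,
                     max_interval=600.0, max_uploads=1000):
    """Return True when the distribution map DB should be regenerated:
    either enough time has passed since the previous update, or enough
    pieces of upload information Iu have accumulated since then."""
    return (seconds_since_update >= max_interval
            or uploads_since_update >= max_uploads)
```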
  • When the update timing of the distribution map DB 20 has been reached (step S202; Yes), the server device 6 generates the object recommendation information and the like with reference to the upload information DB 27, and updates the distribution map DB 20 using the generated object recommendation information (step S203). Then, the server device 6 transmits download information Id including the object recommendation information generated in step S203 to each in-vehicle device 1 (step S204). The server device 6 may transmit the download information Id only to in-vehicle devices 1 that have requested its transmission. On the other hand, when it is not the update timing of the distribution map DB 20 (step S202; No), the server device 6 continues to execute step S201.
  • When the in-vehicle device 1 receives the download information Id (step S106; Yes), it updates the map DB 10 using the download information Id (step S107). As a result, the latest object recommendation information is recorded in the map DB 10. On the other hand, when the in-vehicle device 1 has not received the download information Id from the server device 6 (step S106; No), the process returns to step S101.
  • Voxel data, in which the position information of stationary structures is recorded for each unit region (also referred to as a “voxel”) obtained by dividing the three-dimensional space into a plurality of regions, is stored in the map DB 10.
  • the vehicle-mounted device 1 performs vehicle position estimation using the voxel data.
  • In this case, the map DB 10 and the distribution map DB 20 include, instead of the object recommendation information, information (also referred to as “voxel recommendation information”) indicating a recommended value for use in position estimation for each voxel.
  • Components similar to those described above are denoted by the same reference symbols, and their description is omitted as appropriate.
  • The voxel data includes data representing, by a normal distribution, the point cloud data of the stationary structure in each voxel, and is used for scan matching using NDT (Normal Distributions Transform).
  • FIG. 14 shows an example of the vehicle position estimation unit 17 in position estimation based on voxel data.
  • The difference from the vehicle position estimation unit 17 based on the extended Kalman filter shown in FIG. 5 is that a point cloud data association block 27 is provided, instead of the landmark search/extraction unit 25, as a process for associating the point cloud data obtained from the lidar 2 with the voxels acquired from the map DB 10.
  • FIG. 15 shows an example of a schematic data structure of voxel data.
  • the voxel data includes parameter information when the point group in the voxel is expressed by a normal distribution.
  • For example, the voxel data includes the voxel ID, the voxel coordinates, the mean vector, and the covariance matrix.
  • voxel coordinates indicate absolute three-dimensional coordinates of a reference position such as the center position of each voxel.
  • Each voxel is a cube obtained by dividing the space into a lattice shape, and since the shape and size are determined in advance, the space of each voxel can be specified by the voxel coordinates.
  • the voxel coordinates may be used as a voxel ID.
  • the voxel ID is an example of “object identification information”.
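Since each voxel is an axis-aligned cube of predetermined size obtained by dividing space into a lattice, a coordinate-based voxel ID can be derived directly from a point's coordinates. The sketch below assumes such a cubic grid; the 10 m edge length and the function names are illustrative, not values from the specification.

```python
import math

VOXEL_SIZE = 10.0  # edge length of each cubic voxel (example value)

def voxel_index(x, y, z, size=VOXEL_SIZE):
    """Return the integer grid index of the voxel containing (x, y, z).
    The index triple can serve directly as a coordinate-based voxel ID."""
    return (math.floor(x / size), math.floor(y / size), math.floor(z / size))

def voxel_center(ix, iy, iz, size=VOXEL_SIZE):
    """Return the center coordinates of the voxel with grid index
    (ix, iy, iz), usable as the voxel's reference position."""
    return ((ix + 0.5) * size, (iy + 0.5) * size, (iz + 0.5) * size)
```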
  • The mean vector “μn” and the covariance matrix “Vn” of the voxel n are expressed by the following equations (5) and (6), respectively.
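Equations (5) and (6) themselves are not reproduced in this extract. In the standard NDT formulation these quantities are the sample mean and sample covariance of the Nn map points xi falling in voxel n, which is the likely content:

```latex
\mu_n = \frac{1}{N_n}\sum_{i=1}^{N_n} x_i^{(n)}, \qquad
V_n = \frac{1}{N_n}\sum_{i=1}^{N_n}\left(x_i^{(n)}-\mu_n\right)\left(x_i^{(n)}-\mu_n\right)^{\top}
```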
  • The in-vehicle device 1 calculates the evaluation function value “En” of the voxel n expressed by the following equation (9), using the point group obtained by coordinate transformation together with the mean vector μn and the covariance matrix Vn included in the voxel data, and also calculates the overall evaluation function value “E(k)” for all voxels to be matched, expressed by equation (10).
  • The evaluation function value En of each voxel is also referred to as the “individual evaluation function value”.
  • the in-vehicle device 1 calculates an estimation parameter P that maximizes the overall evaluation function value E (k) by an arbitrary root finding algorithm such as Newton's method.
  • The in-vehicle device 1 applies the estimation parameter P to the vehicle position X⁻(k) predicted by the position prediction block 21 and the like shown in FIG. 14, thereby estimating an accurate vehicle position X^(k).
  • The individual evaluation function value En is an example of the “verification result”.
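Equations (9) and (10) are not reproduced in this extract; in standard NDT the per-voxel score is a Gaussian match value and the overall score is the sum over all matched voxels. The 2D sketch below, with a hand-written 2x2 matrix inverse, illustrates that form; all names are assumptions, not identifiers from the specification.

```python
import math

def individual_eval(point, mean, cov):
    """Individual evaluation function value En for one matched voxel:
    exp(-0.5 * (p - mu)^T V^{-1} (p - mu)), close to 1 for a good match
    and close to 0 for a poor one. 2D case with a 2x2 covariance."""
    dx, dy = point[0] - mean[0], point[1] - mean[1]
    (a, b), (c, d) = cov
    det = a * d - b * c
    # inverse of [[a, b], [c, d]]
    inv = ((d / det, -b / det), (-c / det, a / det))
    q = dx * (inv[0][0] * dx + inv[0][1] * dy) \
        + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return math.exp(-0.5 * q)

def overall_eval(matches):
    """Overall evaluation function value E(k): sum of En over all voxels
    matched at time k. `matches` is a list of (point, mean, cov)."""
    return sum(individual_eval(p, m, v) for p, m, v in matches)
```

A root-finding step such as Newton's method would then search for the transform parameters maximizing `overall_eval`, as the text describes.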
  • FIG. 16 shows an example of the data structure of the voxel recommendation information.
  • The voxel recommendation information is information in which a “voxel ID”, a “position”, and a “voxel recommended value” are associated with each other.
  • In “voxel ID”, the voxel ID assigned to each voxel is designated.
  • In “position”, the voxel coordinates (latitude, longitude, and altitude, or xyz coordinates from a reference point) of the target voxel are designated.
  • In “voxel recommended value”, the voxel recommended value, which is a recommended value for using the target voxel for position estimation, is designated.
  • the recommended voxel value is set to a value from 0 to 1, and approaches 1 as the effectiveness for improving the position estimation accuracy increases.
  • the map DB 10 includes voxel recommendation information indicating the effectiveness (adequacy) of position estimation for each voxel.
  • By referring to the voxel recommendation information, the in-vehicle device 1 can control the vehicle so as to improve the vehicle position estimation accuracy, as in the case of using the object recommendation information of the first embodiment. For example, when the current vehicle position estimation accuracy is poor (that is, when the overall evaluation function value E(k) is low), the in-vehicle device 1 moves the vehicle to a lane from which voxels with high voxel recommended values are easy to detect, or increases the weighting of voxels with high voxel recommended values in the NDT matching. Moreover, the in-vehicle device 1 may switch the position estimation method when no voxel with a high voxel recommended value exists in the vicinity.
  • The in-vehicle device 1 may transmit a predetermined request signal including the current position information to the server device 6 so as to receive, from the server device 6, download information Id including the voxel recommendation information for the voxels around the vehicle position.
  • the vehicle-mounted device 1 controls the vehicle so as to improve the vehicle position estimation accuracy based on the received download information Id.
  • The in-vehicle device 1 determines, instead of the valid flag, a value (also referred to as a “valid value”) indicating the effectiveness of the target voxel in improving the vehicle position estimation accuracy.
  • For example, the in-vehicle device 1 defines the individual evaluation function value En of the target voxel as the above-mentioned valid value.
  • The in-vehicle device 1 then transmits, to the server device 6, upload information Iu that contains at least information associating the voxel ID of the target voxel with the valid value.
  • the valid value is an example of “validity information”.
  • The individual evaluation function value En approaches 1 as the degree of matching increases, and approaches 0 as the degree of matching decreases. A voxel with a larger individual evaluation function value En contributes more to raising the overall evaluation function value E(k) and is thus more effective. Therefore, the individual evaluation function value En of each voxel is suitable as the valid value for that voxel.
  • FIG. 17 is an overhead view showing the vicinity of the vehicle on which the in-vehicle device 1 is mounted.
  • voxels with voxel IDs “1” to “35” exist within a predetermined distance from the vehicle.
  • the voxels with voxel IDs “1” to “35” are located on the surfaces of the objects 56 to 58.
  • FIG. 18 shows a setting example of the valid values designated by the in-vehicle device 1 in the upload information Iu indicating the detection results for the voxel IDs “1” to “35”. Further, FIG. 19 is a graph showing the transition of the individual evaluation function value E4 for the voxel ID “4” in a predetermined period including the detection period “Tw6” of the voxel ID “4”, and FIG. 20 is a graph showing the transition of the individual evaluation function value E12 for the voxel ID “12” in a predetermined period including the detection period “Tw7” of the voxel ID “12”.
  • The in-vehicle device 1 acquires, from the map DB 10, the voxel data of the voxels with the voxel IDs “1” to “35” existing within a predetermined distance from the vehicle, and performs measurement with the lidar 2.
  • The in-vehicle device 1 detects the voxels with the voxel IDs “4” to “11” located on the surface of the object 56 and the voxels with the voxel IDs “12” to “19” located on the surface of the object 57 at the same or different timings, and estimates the vehicle position by NDT matching.
  • The in-vehicle device 1 sets the individual evaluation function value En of each voxel, calculated at the vehicle position estimated by NDT matching, as the valid value for the corresponding voxel ID.
  • As the valid value corresponding to each voxel ID, the in-vehicle device 1 may use the average value of the individual evaluation function values En calculated during the period in which the voxel of that voxel ID was detected.
  • For example, as shown in FIG. 19, the in-vehicle device 1 sets the average value of the individual evaluation function values E4 calculated during the detection period Tw6 as the valid value for the voxel ID “4”. Similarly, as shown in FIG. 20, the in-vehicle device 1 sets the average value of the individual evaluation function values E12 calculated during the detection period Tw7 as the valid value for the voxel ID “12”.
  • The detection period may include all times at which the target voxel was detected, or may include only the times at which the target voxel was detected within a predetermined distance range.
  • The in-vehicle device 1 cannot detect the voxels with the voxel IDs “22” to “31” located on the surface of the object 58, which corresponds to a building whose outer wall is under repair and covered with a protective sheet, because the measurement values have shifted due to the presence of the sheet. Therefore, in this case, the in-vehicle device 1 sets the valid values corresponding to the voxel IDs “22” to “31” to 0. The in-vehicle device 1 then generates upload information Iu including these voxel IDs and the set valid values, and transmits it to the server device 6.
  • The server device 6 suitably sets the voxel recommended value of each voxel ID within the range from 0 to 1. In this case, the voxel recommended values of voxels that were often not detected due to occlusion, rain, or snow decrease, and the voxel recommended values of voxels whose individual evaluation function value En was statistically low even when detected (in other words, voxels that were not effective for position estimation) also become small. On the other hand, the voxel recommended value of a voxel whose individual evaluation function value En is statistically high (in other words, a voxel that was effective for position estimation) becomes high.
  • The server device 6 may calculate the voxel recommended value using the valid values of the upload information Iu received within a past predetermined time (for example, 10 minutes). More preferably, the server device 6 may set the predetermined time shorter as the traffic volume on the road where the target voxel can be detected increases. In another example, the server device 6 may calculate the voxel recommended value of a target voxel based on the valid values included in the latest predetermined number of pieces of upload information Iu acquired in the past for that voxel. In this case, since the value S3 equals the predetermined number described above, the server device 6 can suitably calculate the voxel recommended value with a fixed number of samples.
  • FIG. 21 is an example of a flowchart showing an overview of processing related to transmission / reception of upload information Iu including valid values and download information Id including voxel recommendation information.
  • The in-vehicle device 1 refers to the map DB 10 and acquires the voxel data of voxels existing at positions detectable by the lidar 2 (step S111).
  • The in-vehicle device 1 sets the valid value for each voxel ID based on the presence or absence of detection by the lidar 2 and on the individual evaluation function value En of each voxel whose voxel data was acquired in step S111 (step S112). Specifically, the in-vehicle device 1 sets the valid value for the voxel ID of a voxel for which point cloud data could not be acquired by the lidar 2 to 0, and sets the valid value for the voxel ID of a voxel for which point cloud data could be acquired by the lidar 2 to its individual evaluation function value En. The in-vehicle device 1 then transmits upload information Iu associating a valid value with every voxel ID to the server device 6 (step S113).
  • The server device 6 receives the upload information Iu transmitted in step S113 and accumulates it in the upload information DB 27 (step S211). Then, when it is the update timing of the distribution map DB 20 (step S212; Yes), the server device 6 generates the voxel recommendation information with reference to the upload information DB 27, and updates the distribution map DB 20 using the generated voxel recommendation information (step S213). The server device 6 then transmits download information Id including the generated voxel recommendation information and the like to each in-vehicle device 1.
  • When the in-vehicle device 1 receives the download information Id (step S114; Yes), it updates the map DB 10 using the download information Id (step S115). As a result, the latest voxel recommendation information is recorded in the map DB 10. On the other hand, when the in-vehicle device 1 has not received the download information Id from the server device 6 (step S114; No), the process returns to step S111.
  • The object recommendation information and the voxel recommendation information are also suitably used by the server device 6. For example, the server device 6 that has received a route search request from the in-vehicle device 1 may perform a process of determining a route for which high vehicle position estimation accuracy is expected, with reference to the object recommendation information or the voxel recommendation information.
  • the configuration of the driving support system shown in FIG. 1 is an example, and the configuration of the driving support system to which the present invention is applicable is not limited to the configuration shown in FIG.
  • For example, instead of the vehicle having the in-vehicle device 1, the electronic control device of the vehicle may execute the processes of the vehicle position estimation unit 17 and the upload control unit 18 of the in-vehicle device 1.
  • In this case, the map DB 10 is stored in, for example, a storage unit in the vehicle, and the electronic control device of the vehicle may exchange the upload information Iu and the download information Id with the server device 6 via the in-vehicle device 1 or via a communication unit (not shown).

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Automation & Control Theory (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

An in-vehicle device (1) estimates a position based on object measurement values measured by a lidar (2) mounted on a vehicle and object position information included in a map database (10). The in-vehicle device (1) generates a valid flag or a valid value indicating the effectiveness of improving the accuracy of the host vehicle position estimation using an object, and transmits the valid flag or the valid value, together with an object ID or a voxel ID, to a server device (6).
PCT/JP2019/012318 2018-03-27 2019-03-25 Dispositif de transmission d'informations, structure de données, procédé de commande, programme et support de stockage Ceased WO2019188877A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018060519 2018-03-27
JP2018-060519 2018-03-27

Publications (1)

Publication Number Publication Date
WO2019188877A1 true WO2019188877A1 (fr) 2019-10-03

Family

ID=68061875

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/012318 Ceased WO2019188877A1 (fr) 2018-03-27 2019-03-25 Dispositif de transmission d'informations, structure de données, procédé de commande, programme et support de stockage

Country Status (1)

Country Link
WO (1) WO2019188877A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112731434A (zh) * 2020-12-15 2021-04-30 武汉万集信息技术有限公司 基于激光雷达和标识物的定位方法、系统

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07218277A (ja) * 1994-02-03 1995-08-18 Zanabui Informatics:Kk 車両用ナビゲーション装置
JP2015108604A (ja) * 2013-12-06 2015-06-11 日立オートモティブシステムズ株式会社 車両位置推定システム,装置,方法、及び、カメラ装置
US20170270361A1 (en) * 2016-03-15 2017-09-21 Solfice Research, Inc. Systems and methods for providing vehicle cognition

Similar Documents

Publication Publication Date Title
JP2022113746A (ja) 判定装置
JP6608456B2 (ja) 推定装置、制御方法、プログラム及び記憶媒体
JPWO2018221453A1 (ja) 出力装置、制御方法、プログラム及び記憶媒体
JP7155284B2 (ja) 計測精度算出装置、自己位置推定装置、制御方法、プログラム及び記憶媒体
JP2017072422A (ja) 情報処理装置、制御方法、プログラム及び記憶媒体
JP2023164553A (ja) 位置推定装置、推定装置、制御方法、プログラム及び記憶媒体
JP2023054314A (ja) 情報処理装置、制御方法、プログラム及び記憶媒体
JP6923750B2 (ja) 自己位置推定装置、自己位置推定方法、プログラム及び記憶媒体
JP6980010B2 (ja) 自己位置推定装置、制御方法、プログラム及び記憶媒体
JPWO2018221454A1 (ja) 地図作成装置、制御方法、プログラム及び記憶媒体
JP2025083454A (ja) 情報処理装置、地図データ生成装置、方法及びプログラム
JP2023075184A (ja) 出力装置、制御方法、プログラム及び記憶媒体
JP2022176322A (ja) 自己位置推定装置、制御方法、プログラム及び記憶媒体
WO2021112078A1 (fr) Dispositif de traitement d'informations, procédé de commande, programme et support de stockage
JP2019174675A (ja) データ構造、地図データ生成装置、制御方法、プログラム及び記憶媒体
JP2019174191A (ja) データ構造、情報送信装置、制御方法、プログラム及び記憶媒体
WO2019188886A1 (fr) Dispositif terminal, procédé de traitement d'informations et support d'informations
WO2019188820A1 (fr) Dispositif de transmission d'informations, structure de données, procédé de commande, programme et support d'informations
JP2020098196A (ja) 推定装置、制御方法、プログラム及び記憶媒体
WO2018212302A1 (fr) Dispositif d'estimation de position propre, procédé de commande, programme et support d'informations
WO2019188877A1 (fr) Dispositif de transmission d'informations, structure de données, procédé de commande, programme et support de stockage
WO2019188874A1 (fr) Structure de données, dispositif de traitement d'informations et dispositif de génération de données cartographiques

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19777156

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19777156

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP