
CN116136408A - Indoor navigation method, server, device and terminal - Google Patents

Indoor navigation method, server, device and terminal

Info

Publication number
CN116136408A
CN116136408A (application CN202111369855.8A)
Authority
CN
China
Prior art keywords
information
position information
virtual
navigation
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111369855.8A
Other languages
Chinese (zh)
Inventor
施文哲
朱方
欧阳新志
周琴芬
夏宏飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zte Nanjing Co ltd
ZTE Corp
Original Assignee
Zte Nanjing Co ltd
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zte Nanjing Co ltd, ZTE Corp filed Critical Zte Nanjing Co ltd
Priority to CN202111369855.8A
Priority to PCT/CN2022/130486 (published as WO2023088127A1)
Publication of CN116136408A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G01C21/206 - Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The application provides an indoor navigation method, a server, a device and a terminal, and relates to the technical field of augmented reality. The method includes: acquiring actual position information of an augmented reality (AR) device indoors in real time, where the actual position information represents the position of the AR device in a world coordinate system; matching the actual position information with a preset navigation map to determine real-time positioning information; determining virtual guide information according to the real-time positioning information and an acquired guide route, where the guide route is determined based on the acquired initial position information and target position information of the AR device; and sending the virtual guide information to the AR device, so that the AR device generates and dynamically displays a virtual navigation image according to the virtual guide information. The virtual guide information improves navigation accuracy, and the dynamically displayed high-precision virtual navigation image intuitively guides the user to the target position quickly.

Description

Indoor navigation method, server, device and terminal
Technical Field
The application relates to the technical field of augmented reality, in particular to an indoor navigation method, a server, a device and a terminal.
Background
With the continued development of cities, large buildings (e.g., airports, high-speed rail stations, malls, and high-rise office buildings) continue to emerge. When people move through such large buildings, they often find it difficult to determine their own position. As spatial data acquisition means have multiplied, a portable panoramic camera can be used to collect visual data, which is then processed to realize indoor positioning.
However, owing to the complexity of indoor environments, the demand for accurate indoor navigation keeps growing. Existing indoor planar navigation methods cannot intuitively guide a user to a target position quickly; moreover, their navigation precision is low and cannot satisfy users' high-precision navigation requirements.
Disclosure of Invention
The application provides an indoor navigation method, a server, a device and a terminal.
An embodiment of the present application provides an indoor navigation method, including: acquiring the actual position information of the augmented reality AR device indoors in real time, wherein the actual position information is used for representing the position information of the AR device in a world coordinate system; the actual position information is matched with a preset navigation map, and real-time positioning information is determined; determining virtual guide information according to the real-time positioning information and the acquired guide route, wherein the guide route is determined based on the acquired initial position information and target position information of the AR device; and sending the virtual guide information to the AR device so that the AR device generates and dynamically displays the virtual navigation image according to the virtual guide information.
In another embodiment of the present application, an indoor navigation method includes: the method comprises the steps of sending actual position information of an augmented reality AR device indoors to a server, enabling the server to match the actual position information with a preset navigation map, determining real-time positioning information, and generating and sending virtual guiding information to the AR device according to a guiding route and the real-time positioning information, wherein the guiding route is determined based on the acquired initial position information and target position information of the AR device, and the actual position information is used for representing the position information of the AR device in a world coordinate system; responding to the virtual guide information sent by the server, and generating a virtual navigation image; and dynamically displaying the virtual navigation image.
The server provided by the embodiment of the application comprises: the acquisition module is configured to acquire the actual position information of the augmented reality AR device indoors in real time, wherein the actual position information is used for representing the position information of the AR device in a world coordinate system; the matching module is configured to match the actual position information with a preset navigation map and determine real-time positioning information; a determining module configured to determine virtual guidance information according to the real-time positioning information and the acquired guidance route, the guidance route being a route determined based on the acquired initial position information and the target position information of the AR device; the first sending module is configured to send the virtual guide information to the AR device so that the AR device can generate and dynamically display the virtual navigation image according to the virtual guide information.
An augmented reality AR device provided in an embodiment of the present application includes: the second sending module is configured to send the actual position information of the augmented reality AR device indoors to the server, so that the server matches the actual position information with a preset navigation map, determines real-time positioning information, and generates and sends virtual guiding information to the AR device according to a guiding route and the real-time positioning information, wherein the guiding route is a route determined based on the acquired initial position information and target position information of the AR device; the generation module is configured to respond to the virtual guide information sent by the server and generate a virtual navigation image; and the display module is configured to dynamically display the virtual navigation image.
The terminal provided by the embodiment of the application comprises: at least one augmented reality AR device for implementing any one of the indoor navigation methods of the embodiments of the present application.
An embodiment of the present application provides an electronic device, including: one or more processors; and a memory having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement any one of the indoor navigation methods of the embodiments of the present application.
The embodiment of the application provides a readable storage medium storing a computer program which, when executed by a processor, implements any one of the indoor navigation methods of the embodiments of the application.
According to the indoor navigation method, server, device and terminal of the present application, the actual position of the AR device in the world coordinate system can be determined by acquiring the indoor actual position information of the augmented reality (AR) device in real time. The actual position information of the AR device in the world coordinate system is matched with the preset navigation map; that is, it can be mapped into the preset navigation map in two-dimensional space to determine the corresponding real-time positioning information of the AR device in the map, which facilitates subsequent processing. Virtual guide information is determined according to the real-time positioning information and the acquired guide route, where the guide route is determined based on the acquired initial position information and target position information of the AR device; matching the real-time positioning information with the guide route determines the virtual guide information to be provided to the AR device and improves navigation accuracy. The virtual guide information is sent to the AR device, so that the AR device generates and dynamically displays a high-precision virtual navigation image according to it, intuitively guiding the user to the target position quickly and improving navigation accuracy.
With respect to the above examples and other aspects of the present application and their implementation, further description is provided in the accompanying description, detailed description and claims.
Drawings
Fig. 1 is a schematic flow chart of an indoor navigation method according to an embodiment of the present application.
Fig. 2 is a flow chart illustrating an indoor navigation method according to another embodiment of the present application.
Fig. 3 is a flow chart illustrating an indoor navigation method according to another embodiment of the present application.
Fig. 4 shows a block diagram of the server provided in the embodiment of the present application.
Fig. 5 shows a block diagram of the components of an augmented reality AR device provided in an embodiment of the present application.
Fig. 6 shows a block diagram of a terminal according to an embodiment of the present application.
Fig. 7 shows a block diagram of the indoor navigation system provided in the embodiment of the present application.
Fig. 8 is a flowchart illustrating a navigation method of the indoor navigation system according to an embodiment of the present application.
Fig. 9 illustrates a block diagram of an exemplary hardware architecture of a computing device capable of implementing the indoor navigation method and apparatus according to embodiments of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in detail hereinafter with reference to the accompanying drawings. It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be arbitrarily combined with each other.
As users' requirements for navigation technology keep evolving, traditional positioning and navigation technology based on the Global Positioning System (GPS) and positioning and navigation technology based on radio-frequency signals can no longer meet users' demand for higher precision.
In addition, in an indoor environment, illumination from different light sources varies over time, the indoor deployment environment itself changes over time, obstacles cause occlusion, and the user's observation angle changes; under the influence of such factors, the acquired positioning and navigation information is inaccurate and cannot meet users' high-precision navigation requirements.
Fig. 1 is a schematic flow chart of an indoor navigation method according to an embodiment of the present application. The indoor navigation method can be applied to a server. As shown in fig. 1, the indoor navigation method in the embodiment of the present application at least includes, but is not limited to, the following steps.
Step S101, acquiring actual position information of the augmented reality device indoors in real time.
Wherein the actual position information is used to characterize the position information of the augmented reality (AR) device in the world coordinate system. The world coordinate system may be defined as follows: the center of a small circle is taken as the origin o, the x-axis points horizontally to the right, the y-axis points vertically downward, and the direction of the z-axis is determined by the right-hand rule. The world coordinate system may be used as the starting coordinate space when performing graphics transformations.
The real-time position information is obtained, so that the indoor position of the AR device can be updated in time, the real-time position information is processed, and the positioning accuracy of the AR device is improved.
Step S102, the actual position information is matched with a preset navigation map, and real-time positioning information is determined.
The preset navigation map may include a planar map in a building to be navigated, and the preset navigation map may be a two-dimensional planar map.
The actual position information characterizes the position of the AR device in three-dimensional space. By mapping this three-dimensional information into the two-dimensional preset navigation map, the real-time positioning information of the AR device in the two-dimensional plane map is determined, so the positioning of the AR device can be determined in real time and its accuracy ensured. The acquired real-time positioning information also facilitates subsequent auxiliary positioning and updating of the navigation information.
Step S103, virtual guide information is determined according to the real-time positioning information and the acquired guide route.
Wherein the guiding route is a route determined based on the acquired initial position information and target position information of the AR device.
For example, the initial position information may include the position of the AR device when it first enters the navigation system, while the target position information characterizes the end position that the AR device needs to reach. By planning between the initial position information and the target position information, several feasible navigation paths are found, the one with the shortest path is selected from among them, and it is used as the guide route, so that the AR device can reach the end position as soon as possible by following it.
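As context for how such a guide route might be computed, the following is a minimal sketch of shortest-path selection over a corridor graph. The graph, node names and distances are all hypothetical, and Dijkstra's algorithm stands in for whatever planner the application's path planning actually uses.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a corridor graph.

    graph: dict mapping node -> list of (neighbor, distance) pairs.
    Returns the node sequence of the shortest path, or None.
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + dist, neighbor, path + [neighbor]))
    return None

# Hypothetical corridor graph of one building floor (distances in meters).
floor_graph = {
    "entrance": [("lobby", 12.0)],
    "lobby": [("entrance", 12.0), ("corridor_a", 20.0), ("corridor_b", 35.0)],
    "corridor_a": [("lobby", 20.0), ("shop_17", 8.0)],
    "corridor_b": [("lobby", 35.0), ("shop_17", 4.0)],
    "shop_17": [],
}
print(shortest_route(floor_graph, "entrance", "shop_17"))
# -> ['entrance', 'lobby', 'corridor_a', 'shop_17']  (40 m vs 51 m via corridor_b)
```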
In the process of providing navigation information for the AR device, the AR device needs to supply real-time positioning information so that the server can match it against the guide route, dynamically adjust the guide route in real time, and generate virtual guide information. This avoids the influence of factors such as occlusion by obstacles and changes of the observation angle, allows accurate navigation information to be sent to the AR device, and lets the virtual guide information prompt the AR device in time so that it does not stray from the route, thereby improving navigation accuracy and meeting users' high-precision navigation requirements.
Step S104, transmitting virtual guidance information to the augmented reality device.
After the AR device obtains the virtual guide information, the AR device generates and dynamically displays a virtual navigation image according to the virtual guide information. Because the AR device can support dynamic and three-dimensional display of image information or video information, the navigation information can be dynamically displayed in real time in a virtual navigation image mode, so that a user can conveniently and intuitively check the navigation information and the position information of the AR device in an actual environment, and the navigation accuracy is improved.
In this embodiment, by acquiring the indoor actual position information of the augmented reality (AR) device in real time, the actual position of the AR device in the world coordinate system can be determined. The actual position information is matched with the preset navigation map; that is, it can be mapped into the preset navigation map in two-dimensional space to determine the corresponding real-time positioning information of the AR device in the map, which facilitates subsequent processing. Virtual guide information is determined according to the real-time positioning information and the acquired guide route, where the guide route is determined based on the acquired initial position information and target position information of the AR device; matching the real-time positioning information with the guide route determines the virtual guide information to be provided to the AR device and improves navigation accuracy. The virtual guide information is sent to the AR device, so that the AR device generates and dynamically displays a high-precision virtual navigation image according to it, intuitively guiding the user to the target position quickly and improving navigation accuracy.
Fig. 2 is a flow chart illustrating an indoor navigation method according to another embodiment of the present application. The indoor navigation method can be applied to a server. The difference between this embodiment and the previous one is that the actual position information includes a live view corresponding to the actual indoor position of the AR device; this live view is processed by a deep learning neural network, so that the positioning features within it can be extracted more precisely and positioning accuracy is improved.
As shown in fig. 2, the indoor navigation method in the embodiment of the present application at least includes, but is not limited to, the following steps.
In step S201, the actual location information of the augmented reality AR device in the room is acquired in real time.
Wherein the actual position information includes: the live view corresponding to the actual indoor position of the AR device. This live view may comprise multiple levels of views. For example, it may be a panoramic view acquired by a panoramic camera and/or a panoramic video camera, or a view of the partial area visible to an observer acquired by an ordinary camera. The foregoing is only an example; the framing range of the live view can be set according to actual needs, and live views not described here also fall within the protection scope of the application, which is not repeated.
Through the live-action view, the real position information of the AR device in the room can be displayed in a multi-angle mode, omission of azimuth information is avoided, the obtained real position information is more comprehensive, and follow-up processing is facilitated.
Step S202, processing a live-action view corresponding to the actual position of the AR device indoors based on the deep learning neural network to obtain position information to be matched.
Wherein the live view is a view acquired by the AR device. The live view may include: a partial area view or a panoramic view. The location information to be matched is used to characterize the location information of the AR device within the building to be navigated.
In some embodiments, the deep learning neural network is used to extract image information in a partial region view and/or a panoramic view, so as to extract features of the image information, obtain processed image information, and enable the processed image information to better represent a relative position of the AR device, so that the relative position of the AR device can represent position information to be matched.
For example, the partial-area views visible to an observer and acquired by an ordinary camera can be layered; each layered view is divided into blocks and classified, the category corresponding to each layered view is further refined, and the relative position of the AR device is thereby refined, so that the position information to be matched is obtained.
For another example, if the live view comprises a panoramic view (i.e., a 360-degree image), the 360-degree image may be divided equally into 12 projection planes; then a network-based Vector of Locally Aggregated Descriptors (NetVLAD) encoding is used to perform image retrieval and scene recognition on each of the 12 projection planes, so as to obtain the position information to be matched and improve its accuracy.
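The following sketch illustrates dividing an equirectangular panorama into 12 yaw sectors. It is a deliberately simplified assumption, plain column slicing rather than a true rectilinear reprojection, intended only to show the shape of the data handed to the retrieval step.

```python
import numpy as np

def split_panorama(pano, n_faces=12):
    """Split an equirectangular panorama (H x W x 3) into n_faces
    vertical sectors, each covering 360/n_faces degrees of yaw.
    A real system would reproject each sector onto a planar
    (rectilinear) image before retrieval; column slicing keeps
    the sketch short."""
    h, w, _ = pano.shape
    step = w // n_faces
    return [pano[:, i * step:(i + 1) * step] for i in range(n_faces)]

pano = np.zeros((512, 2400, 3), dtype=np.uint8)  # placeholder 360-degree image
faces = split_panorama(pano)
assert len(faces) == 12 and faaces[0].shape == (512, 200, 3) if False else True
# Each of the 12 faces would then be encoded (e.g. by NetVLAD) and
# retrieved against the pre-built map database.
print(len(faces), faces[0].shape)
```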
Step S203, searching a preset navigation map according to the position information to be matched, and determining whether the position information to be matched exists in the preset navigation map.
The preset navigation map comprises a plurality of pieces of position information, and the position information to be matched is matched with each piece of position information in the preset navigation map to determine whether the position information to be matched exists in the navigation map.
If the position information to be matched exists in the navigation map, the travel direction of the AR device and its relative position are correct. Otherwise, if it is determined that the position information to be matched does not exist in the navigation map, the travel direction and relative position of the AR device are wrong, and the AR device needs to be reminded to adjust its travel direction or relative position as soon as possible, so that it is corrected, keeps moving along the correct path, and reaches the target position as soon as possible.
Step S204, under the condition that the position information to be matched exists in the preset navigation map, the position information matched with the position information to be matched in the preset navigation map is used as real-time positioning information.
Wherein the real-time positioning information is capable of characterizing that the current direction of travel of the AR device and the relative position of the AR device are correct.
By taking the position information matched with the position information to be matched in the preset navigation map as the real-time positioning information, the real-time positioning information can embody the position of the AR device in the preset navigation map, and the subsequent processing is convenient.
Step S205, virtual guiding information is determined according to the real-time positioning information and the acquired guiding route.
Wherein the guiding route includes a plurality of guiding position information. Each piece of guiding position information characterizes the direction in which the AR device needs to travel and the relative position information of the AR device, and through each piece of guiding position information, the traveling path of the AR device can be corrected in time, so that the AR device can be guided to reach the target position as soon as possible.
For example, determining virtual guidance information based on real-time positioning information and the acquired guidance route includes: matching the real-time positioning information with a plurality of pieces of guiding position information, and determining the information of the direction to be moved and the information of the distance to be moved, which correspond to the AR device; updating a guiding route according to the information of the direction to be moved, the information of the distance to be moved and the information of a plurality of guiding positions corresponding to the AR device; and determining virtual guide information according to the updated guide route.
The to-be-moved direction information can be matched with the travel direction required of the AR device in the guide position information to obtain a direction matching result; the to-be-moved distance information can be matched with the relative position information of the AR device in the guide position information to obtain a position matching result. Together, the direction and position matching results characterize whether the current travel state of the AR device matches the guide route. When the deviation between the current travel state of the AR device and the guide route exceeds a preset threshold, the guide route is updated, and the virtual guide information is determined from the updated route, so that the AR device is calibrated in time and navigation accuracy is improved.
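The deviation check described above might look like the following sketch; the thresholds, field names and decision rule are hypothetical stand-ins, not values from the application.

```python
import math

HEADING_TOL_DEG = 25.0   # assumed thresholds; tune per deployment
DISTANCE_TOL_M = 3.0

def check_deviation(position, heading_deg, waypoint):
    """Compare the AR device's live pose against the next waypoint
    on the guide route. Returns True when the route should be
    re-planned (error exceeds the preset threshold)."""
    dx = waypoint["x"] - position["x"]
    dy = waypoint["y"] - position["y"]
    distance = math.hypot(dx, dy)
    desired_heading = math.degrees(math.atan2(dy, dx)) % 360.0
    # Smallest signed angular difference, folded into [0, 180].
    heading_err = abs((heading_deg - desired_heading + 180.0) % 360.0 - 180.0)
    return heading_err > HEADING_TOL_DEG or distance > waypoint["radius"] + DISTANCE_TOL_M

pose = {"x": 4.0, "y": 1.0}
waypoint = {"x": 10.0, "y": 1.0, "radius": 1.0}
if check_deviation(pose, heading_deg=95.0, waypoint=waypoint):
    print("re-plan route and push updated virtual guidance")
```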
For example, calibrating the AR device may include: relevant shooting parameters (such as shooting angles, or image resolution, etc.) of the AR device are adjusted, so that navigation accuracy is improved.
In some implementations, the virtual boot information includes: at least one of camera pose estimation information, environment perception information, and light source perception information.
The environment perception information is used for representing the position information of the AR device in the building to be navigated, the camera posture estimation information is used for representing the direction information corresponding to the AR device, and the light source perception information is used for representing the light source information acquired by the AR device.
Information of the AR device in the navigation process is described through information of different dimensions, so that navigation accuracy of the AR device can be improved, and influence of factors such as shielding of obstacles and observation angle change is avoided.
Step S206, transmitting virtual guidance information to the AR device.
It should be noted that, step S206 in the present embodiment is the same as step S104 in the previous embodiment, and will not be described again here.
In this embodiment, the live view corresponding to the actual indoor position of the AR device is processed by the deep learning neural network to obtain the position information to be matched, improving its accuracy. The preset navigation map is searched according to the position information to be matched to determine whether it exists in the map, and thus whether the travel direction and relative position of the AR device are correct. When the position information to be matched exists in the preset navigation map, the matching position information in the map is used as the real-time positioning information, so that it embodies the position of the AR device in the preset navigation map and facilitates subsequent processing. The real-time positioning information is then matched with the acquired guide route, the virtual guide information is determined and sent to the AR device, so that the travel route of the AR device can be calibrated in time and navigation accuracy improved.
The embodiment of the present application provides another possible implementation manner, where the processing, based on the deep learning neural network, the live-action view corresponding to the actual position of the AR device in the room in step S202, to obtain the position information to be matched includes:
extracting local features of the AR device in a live view corresponding to the indoor actual position; inputting the local features into a deep learning neural network to obtain global features corresponding to the actual indoor positions of the AR device; and determining the position information to be matched based on the global features corresponding to the actual position of the AR device in the room.
The global features are used for representing the position information of the AR device in the building to be navigated; the local features are used to characterize the relative position information of the AR device in its acquired partial region view (e.g., view data of images, photographs, etc. uploaded by the AR device). Through global features and local features, the positioning of the AR device can be used as an environmental reference object, so that the positioning is more accurate.
The local features are analyzed by the deep learning neural network (for example, machine learning is used to learn and match a number of local positions within the building to be navigated), which makes clear which local position in the building the local features of the AR device match. The position information to be matched is then determined from the actual position of that matched local position within the building, so that it reflects the real position of the AR device in the building to be navigated, improving positioning accuracy.
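As an illustration of the aggregation from local to global features, here is a minimal NumPy sketch of soft-assignment VLAD pooling, the operation at the core of NetVLAD. The descriptor dimensions and the randomly drawn centroids are placeholders for parameters a trained network would supply.

```python
import numpy as np

def netvlad_aggregate(local_feats, centers, alpha=10.0):
    """Aggregate N local descriptors (N x D) into one global
    descriptor via soft-assignment VLAD pooling.
    centers: K x D cluster centroids (learned in a real network)."""
    # Soft assignment of each descriptor to each cluster.
    logits = -alpha * ((local_feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    assign = np.exp(logits)
    assign /= assign.sum(axis=1, keepdims=True)           # N x K
    # Accumulate residuals to each centroid, weighted by assignment.
    residuals = local_feats[:, None, :] - centers[None, :, :]    # N x K x D
    vlad = (assign[:, :, None] * residuals).sum(axis=0)          # K x D
    vlad /= np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-12  # intra-normalize
    vlad = vlad.ravel()
    return vlad / (np.linalg.norm(vlad) + 1e-12)                 # final L2 norm

rng = np.random.default_rng(0)
local = rng.normal(size=(200, 128))    # e.g. 200 local descriptors from one view
centers = rng.normal(size=(16, 128))   # 16 visual words
global_feat = netvlad_aggregate(local, centers)
print(global_feat.shape)  # (2048,) -> matched against the map database
```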
The embodiment of the present application provides still another possible implementation manner, where before performing the real-time obtaining of the actual location information of the augmented reality AR device in the room in step S101 or step S201, the method further includes:
acquiring panoramic data in a building to be navigated; performing point cloud mapping based on panoramic data and a preset algorithm to generate a dense map; and determining a preset navigation map according to the dense map and the plane view corresponding to the building to be navigated.
Wherein, the panoramic data may include: view data corresponding to all scenes in the whole building to be navigated, or panoramic data corresponding to areas needing to be navigated in the building to be navigated.
For example, the panoramic data may further include multiple frames of point cloud data collected by the panoramic camera. A rotation transformation is applied between two successive frames of point cloud data so that they share the same coordinate system; on this basis, the frames are superposed one after another to perform point cloud mapping (for example, the orthographic projection of the multi-frame point cloud is aligned with the plane view corresponding to the building to be navigated), so as to generate a dense map. The dense map can comprehensively embody the position and direction characteristics of the building to be navigated.
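A minimal sketch of this frame-by-frame superposition is shown below. The rigid transforms are assumed to come from an upstream registration step (for example ICP), which is not shown, and the frames themselves are random placeholders.

```python
import numpy as np

def to_world(points, rotation, translation):
    """Transform one frame of point cloud data (N x 3) into the
    shared (world) coordinate system with a rigid transform."""
    return points @ rotation.T + translation

def accumulate(frames, poses):
    """Superpose successive frames into one dense cloud."""
    world = [to_world(pts, R, t) for pts, (R, t) in zip(frames, poses)]
    return np.vstack(world)

# Two hypothetical frames; the second is rotated 90 degrees about z and shifted.
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
frames = [np.random.rand(1000, 3), np.random.rand(1000, 3)]
poses = [(np.eye(3), np.zeros(3)), (Rz, np.array([0.5, 0.0, 0.0]))]
dense = accumulate(frames, poses)
print(dense.shape)  # (2000, 3) -> input to dense-map generation
```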
The planar view corresponding to the building to be navigated is matched with the dense map, so that a two-dimensional planar view, namely a preset navigation map, can be obtained, the preset navigation map can inherit the position characteristics and the direction characteristics in the dense map, the comprehensiveness and the integrity of the map are ensured, and the subsequent positioning and navigation are facilitated.
In some implementations, performing point cloud mapping based on panoramic data and a preset algorithm to generate a dense map includes: processing panoramic data according to a photogrammetry principle to generate point cloud data, wherein the point cloud data comprises three-dimensional coordinate information and color information; and processing the point cloud data according to a preset three-dimensional reconstruction algorithm to generate a dense map.
The principle of photogrammetry is to acquire an image by an optical camera and process the acquired image to acquire the shape, size, position, characteristics and interrelationships of a subject. For example, a plurality of images of a subject are acquired, measurement and analysis are performed for each image, an analysis result is obtained, and the analysis result is output in a graphic form or in a digital data form.
The position of the photographed object can be represented by three-dimensional coordinate information; by repeatedly analyzing and matching the three-dimensional coordinate information across multiple frames of point cloud data, the position of the object in three-dimensional space can be obtained accurately. By sampling the color of the object several times and analyzing its color information, the color characteristics of the object can be accurately known (for example, characteristics based on the Red Green Blue (RGB) color space, or on the YUV color space).
Where "Y" in the YUV color space represents brightness (Luminance or Luma), i.e., the gray-scale value, while "U" and "V" represent chrominance (Chroma), which describes the color and saturation of the image.
The three-dimensional coordinate information and the color information in the point cloud data are processed through a preset three-dimensional reconstruction algorithm, so that the generated dense map can embody the position information and the color information of each shot object in the three-dimensional space, the stereoscopic image of each shot object in the map is perfected, and the dense map is more accurate and convenient to navigate and position.
In some implementations, determining the preset navigation map according to the dense map and the planar view corresponding to the building to be navigated includes: mapping the dense map into a plane view to be processed by adopting a front projection mode according to a preset scale factor; and matching the plane view to be processed with the plane view corresponding to the building to be navigated, and determining a preset navigation map.
The matching of the plane view to be processed and the plane view corresponding to the building to be navigated can be realized in the following manner: and comparing the plane view to be processed with the plane view corresponding to the building to be navigated, or aligning the plane view to be processed with the plane view corresponding to the building to be navigated, so as to determine a preset navigation map.
The preset scale factor is a coefficient factor which can be calibrated, and the dense map can be reasonably scaled through the preset scale factor, so that the scaled dense map can be suitable for display equipment with different sizes.
It should be noted that because the dense map is three-dimensional while the plane view corresponding to the building to be navigated is two-dimensional, the dense map needs to be mapped into the to-be-processed plane view by orthographic projection so that it can be matched against the building's plane view. The building's plane view is used to verify the accuracy of the to-be-processed plane view; once the two are determined to match, the preset navigation map is obtained. This map embodies both the accuracy of the dense map and the characteristics of the plane view corresponding to the building to be navigated, and using it to navigate and position the AR device further improves the accuracy of navigation and positioning.
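A sketch of the orthographic projection with a preset scale factor follows; the scale, grid size and floor dimensions are illustrative assumptions.

```python
import numpy as np

def orthographic_project(cloud, scale=0.05, grid_shape=(400, 400)):
    """Map a dense 3D cloud (N x 3, meters) onto a 2D plan view by
    dropping the height axis: each point lands in the grid cell
    (x/scale, y/scale). scale is the preset scale factor
    (meters per pixel); both values here are assumptions."""
    grid = np.zeros(grid_shape, dtype=np.uint16)
    cols = (cloud[:, 0] / scale).astype(int)
    rows = (cloud[:, 1] / scale).astype(int)
    keep = (rows >= 0) & (rows < grid_shape[0]) & (cols >= 0) & (cols < grid_shape[1])
    np.add.at(grid, (rows[keep], cols[keep]), 1)  # occupancy count per cell
    return grid

cloud = np.random.rand(5000, 3) * [18.0, 18.0, 3.0]  # hypothetical 18 m x 18 m floor
plan = orthographic_project(cloud)
# `plan` can now be aligned against the CAD plan view of the building.
print(plan.sum())  # all 5000 points binned
```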
In some implementations, the plan view corresponding to the building to be navigated includes a computer aided design (CAD) view, the CAD view being a vector plane view determined based on the preset scale factor.
When the CAD view is displayed on different devices (for example, mobile phones or tablet computers of different sizes) with the preset scale factor kept unchanged, the definition of the view still meets expectations; that is, the CAD view is scale-invariant, and it is a vector plane view with directionality.
Through designing the plane view to be processed and the plane view corresponding to the building to be navigated into CAD views, the plane view to be processed and the plane view corresponding to the building to be navigated can be guaranteed to have the characteristic of unchanged scaling scale, and are vector plane views with directivity, the display effect of the terminal is improved, and better use experience is brought to users.
The embodiment of the present application provides another possible implementation manner, where after performing the sending of the virtual guiding information to the AR device in step S104 or step S206, the method further includes:
receiving arrival position information fed back by the AR device; comparing the arrival location information with the target location information to determine whether the AR device arrives at the target location; in case it is determined that the AR device reaches the target location, the navigation is ended.
The arrival position information fed back by the AR device is compared with the target position information to determine whether the two are identical. When they are identical, the AR device is determined to have arrived at the target position, so navigation can end and real-time position information fed back by the AR device no longer needs to be acquired, reducing the amount of information processed and improving processing efficiency.
Otherwise, when the AR device has not reached the target position, the real-time position information fed back by the AR device must continue to be acquired to assist the AR device in navigation and positioning and to improve navigation accuracy.
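A minimal sketch of the arrival check: since reported coordinates rarely equal the target exactly, a small arrival radius (an assumption, not part of the application) stands in for strict equality.

```python
import math

ARRIVAL_RADIUS_M = 1.5   # assumed tolerance for "arrived"

def has_arrived(reported, target):
    """Compare the position fed back by the AR device with the
    target position, within a small radius."""
    return math.hypot(reported["x"] - target["x"],
                      reported["y"] - target["y"]) <= ARRIVAL_RADIUS_M

if has_arrived({"x": 41.2, "y": 7.9}, {"x": 40.0, "y": 8.0}):
    print("target reached - stop requesting real-time position updates")
```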
It should be noted that, when navigating the AR device, the navigation beacon may be represented by a cartoon character (e.g., a human cartoon character and/or an animal cartoon character) to make AR navigation more engaging.
Fig. 3 is a flow chart illustrating an indoor navigation method according to another embodiment of the present application. The indoor navigation method may be applied to an AR device, which may be mounted on a terminal. As shown in fig. 3, the indoor navigation method in the embodiment of the present application at least includes, but is not limited to, the following steps.
In step S301, the actual location information of the augmented reality AR device in the room is transmitted to the server.
Wherein the actual position information of the AR device in the room may include: three-dimensional space coordinate information corresponding to the position of the AR device. The actual position of the AR device in the building to be navigated can be embodied through this three-dimensional coordinate information.
When the server obtains the actual position information, the actual position information is matched with a preset navigation map, real-time positioning information is determined, virtual guide information is generated and sent to the AR device according to a guide route and the real-time positioning information, wherein the guide route is determined based on the obtained initial position information and target position information of the AR device, and the actual position information is used for representing the position information of the AR device in a world coordinate system.
The guide route makes the navigation path explicit; matching it against the real-time positioning information makes clear where the AR device maps into the preset navigation map, so the guide route can be dynamically adjusted in real time and navigation accuracy improved.
Step S302, a virtual navigation image is generated in response to the virtual guide information sent by the server.
Wherein the virtual navigation image may include: AR images or AR videos based on dynamic imaging. Through dynamic AR images or AR videos, the position information of the AR device in the building to be navigated and the target route information can be clearly and three-dimensionally checked.
The virtual guiding information comprises camera posture estimating information, environment sensing information and light source sensing information, wherein the environment sensing information is used for representing position information of the AR device in a building to be navigated, the camera posture estimating information is used for representing direction information corresponding to the AR device, and the light source sensing information is used for representing light source information acquired by the AR device.
For example, the camera pose estimation information may include: the orientation information corresponding to the AR device, for example, the relative position of the AR device (e.g., the camera of the AR device is facing in a direction opposite the user's face, or the camera of the AR device is facing the ground, etc.). The light source perception information may include: the AR device receives multiple angles of light information.
In some implementations, generating a virtual navigation image in response to virtual guidance information sent by a server includes: receiving virtual guide information sent by a server; processing the environment sensing information and the light source sensing information according to a preset three-dimensional reconstruction algorithm to obtain a virtual image of the augmented reality AR; the camera pose estimation information is matched with the virtual image of the AR, and a virtual navigation image is determined.
The preset three-dimensional reconstruction algorithm may include: a Multi-view geometry (Open Multiple View Geometry, openMVG) algorithm and a Multi-view stereo reconstruction library (Open Multi-View Stereo reconstruction library, openMVS) algorithm.
The OpenMVG algorithm can accurately solve common problems in multi-view geometry, for example: calibration based on scene structure information; self-calibration based on camera motion information (e.g., pure rotation); and self-calibration that depends on neither camera motion information nor scene structure. The OpenMVS algorithm is suited to dense point cloud reconstruction, surface reconstruction, surface refinement, texture mapping and similar scenarios, where surface refinement makes images clearer. The OpenMVG and OpenMVS algorithms can be optimized and then used together to realize three-dimensional reconstruction.
The OpenMVS algorithm performs surface reconstruction and surface refinement on the environment perception information, and texture mapping and related processing on the light source perception information, yielding an AR virtual image that finely reflects the environment of the AR device and the projection of the acquired light sources onto it. The OpenMVG algorithm processes the camera pose estimation information to obtain multiple views based on the AR device; these views are matched with the AR virtual image to determine the virtual navigation image and improve its accuracy.
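For orientation, the sketch below chains the OpenMVS command-line stages from Python. The tool names match the public OpenMVS distribution, but the intermediate file names and the absence of extra flags are assumptions to verify against the installed version.

```python
import subprocess

# Each OpenMVS tool takes the project/scene file as its main argument;
# the intermediate names below follow common default output conventions
# and may differ between releases.
PIPELINE = [
    ["DensifyPointCloud", "scene.mvs"],              # dense point cloud
    ["ReconstructMesh", "scene_dense.mvs"],          # surface reconstruction
    ["RefineMesh", "scene_dense_mesh.mvs"],          # surface refinement
    ["TextureMesh", "scene_dense_mesh_refine.mvs"],  # texture mapping
]

for stage in PIPELINE:
    subprocess.run(stage, check=True)
```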
Step S303, dynamically displaying the virtual navigation image.
For example, the obtained virtual navigation image can be displayed in real time, and also can be dynamically played in a frame mode, so that the user can conveniently check the virtual navigation image, the navigation information can be visually checked in a three-dimensional mode, and the navigation accuracy is improved.
In this embodiment, by sending the indoor actual position information of the AR device to the server, the server can match it with the preset navigation map, determine real-time positioning information, and generate and send virtual guide information to the AR device according to the guide route and the real-time positioning information. The guide route makes the navigation path explicit, and matching it with the real-time positioning information makes clear where the AR device maps into the preset navigation map, so the guide route can be dynamically adjusted in real time, improving navigation accuracy. The virtual navigation image is generated and dynamically displayed in response to the virtual guide information sent by the server, so that the user can intuitively view the navigation information in three dimensions, improving navigation accuracy.
Various devices according to embodiments of the present application are described in detail below with reference to the accompanying drawings. Fig. 4 shows a block diagram of the server provided in the embodiment of the present application. As shown in fig. 4. The server 400 includes the following modules.
An acquisition module 401 configured to acquire real-time actual location information of the augmented reality AR device indoors, the actual location information being used to characterize the location information of the AR device in a world coordinate system; a matching module 402 configured to match the actual location information with a preset navigation map, and determine real-time positioning information; a determining module 403 configured to determine virtual guidance information according to the real-time positioning information and the acquired guidance route, the guidance route being a route determined based on the acquired initial position information and target position information of the AR device; the first sending module 404 is configured to send the virtual guiding information to the AR device, so that the AR device generates and dynamically displays the virtual navigation image according to the virtual guiding information.
In some implementations, the actual location information includes: real view corresponding to the actual position of the AR device in the room; the matching module 402 is specifically configured to: processing a live view corresponding to the actual position of the AR device indoors based on the deep learning neural network to obtain position information to be matched; searching a preset navigation map according to the position information to be matched, and determining whether the position information to be matched exists in the preset navigation map; and under the condition that the position information to be matched exists in the preset navigation map, the position information matched with the position information to be matched in the preset navigation map is used as real-time positioning information.
In some specific implementations, processing a live view corresponding to an actual position of an AR device indoors based on a deep learning neural network to obtain position information to be matched includes: extracting local features of the AR device in a live view corresponding to the indoor actual position; inputting the local features into a deep learning neural network to obtain global features corresponding to the actual indoor positions of the AR devices, wherein the global features are used for representing the position information of the AR devices in a building to be navigated; and determining the position information to be matched based on the global features corresponding to the actual position of the AR device in the room.
In some implementations, the server 400 further includes: the preset navigation map generation module is used for: acquiring panoramic data in a building to be navigated; performing point cloud mapping based on panoramic data and a preset algorithm to generate a dense map; and determining a preset navigation map according to the dense map and the plane view corresponding to the building to be navigated.
In some implementations, performing point cloud mapping based on panoramic data and a preset algorithm to generate a dense map includes: processing panoramic data according to a photogrammetry principle to generate point cloud data, wherein the point cloud data comprises three-dimensional coordinate information and color information; and processing the point cloud data according to a preset three-dimensional reconstruction algorithm to generate a dense map.
In some implementations, determining the preset navigation map according to the dense map and the planar view corresponding to the building to be navigated includes: mapping the dense map into a plane view to be processed by adopting a front projection mode according to a preset scale factor; and matching the plane view to be processed with the plane view corresponding to the building to be navigated, and determining a preset navigation map.
In some implementations, the plan view corresponding to the building to be navigated includes a computer aided design (CAD) view, the CAD view being a vector plane view determined based on the preset scale factor.
In some implementations, the guide route includes a plurality of guide location information; the determining module 403 is specifically configured to: matching the real-time positioning information with a plurality of pieces of guiding position information, and determining the information of the direction to be moved and the information of the distance to be moved, which correspond to the AR device; updating a guiding route according to the information of the direction to be moved, the information of the distance to be moved and the information of a plurality of guiding positions corresponding to the AR device; and determining virtual guide information according to the updated guide route.
In some implementations, the server 400 further includes: a confirmation module for: receiving arrival position information fed back by the AR device; comparing the arrival location information with the target location information to determine whether the AR device arrives at the target location; in case it is determined that the AR device reaches the target location, the navigation is ended.
In some implementations, the virtual boot information includes: at least one of camera pose estimation information, environment perception information, and light source perception information; the environment perception information is used for representing the position information of the AR device in the building to be navigated, the camera posture estimation information is used for representing the direction information corresponding to the AR device, and the light source perception information is used for representing the light source information acquired by the AR device.
In this embodiment, the acquisition module acquires the indoor actual position information of the augmented reality (AR) device in real time, so the actual position of the AR device in the world coordinate system can be determined. The matching module matches the actual position information of the AR device in the world coordinate system with the preset navigation map, mapping it into the preset navigation map in two-dimensional space to determine the corresponding real-time positioning information of the AR device in the map and facilitate subsequent processing. The determining module determines virtual guide information according to the real-time positioning information and the acquired guide route, where the guide route is determined based on the initial position information and target position information of the AR device; matching the real-time positioning information with the guide route determines the virtual guide information to be provided to the AR device, improving navigation accuracy. The first sending module sends the virtual guide information to the AR device, so that the AR device generates and dynamically displays a high-precision virtual navigation image according to it, intuitively guiding the user to the target position quickly and improving navigation accuracy.
Fig. 5 shows a block diagram of the components of an augmented reality AR device provided in an embodiment of the present application. As shown in fig. 5. The augmented reality apparatus 500 includes the following modules.
A second transmitting module 501 configured to transmit actual location information of the augmented reality AR device indoors to a server, so that the server matches the actual location information with a preset navigation map, determines real-time positioning information, and generates and transmits virtual guiding information to the AR device according to a guiding route and the real-time positioning information, wherein the guiding route is a route determined based on the acquired initial location information and target location information of the AR device; a generation module 502 configured to generate a virtual navigation image in response to the virtual guidance information transmitted by the server; and a display module 503 configured to dynamically display the virtual navigation image.
In this embodiment, the second sending module sends the indoor actual position information of the AR device to the server, so that the server can match it with the preset navigation map, determine real-time positioning information, and generate and send virtual guide information to the AR device according to the guide route and the real-time positioning information; the guide route makes the navigation path explicit, and matching it with the real-time positioning information makes clear where the AR device maps into the preset navigation map, so the guide route can be dynamically adjusted in real time, improving navigation accuracy. The generation module generates the virtual navigation image in response to the virtual guide information sent by the server, and the display module dynamically displays it, so that the user can intuitively view the navigation information in three dimensions, improving navigation accuracy.
Fig. 6 shows a block diagram of a terminal according to an embodiment of the present application. As shown in Fig. 6, the terminal 600 includes at least one augmented reality device 500, the augmented reality device 500 being configured to implement any one of the indoor navigation methods of the embodiments of the present application.
For example, the augmented reality apparatus 500 includes: a second transmitting module 501 configured to transmit actual location information of the augmented reality device 500 indoors to a server, so that the server matches the actual location information with a preset navigation map, determines real-time positioning information, and generates and transmits virtual guiding information to the augmented reality device 500 according to a guiding route and the real-time positioning information, wherein the guiding route is a route determined based on the acquired initial location information and target location information of the augmented reality device 500; a generation module 502 configured to generate a virtual navigation image in response to the virtual guidance information transmitted by the server; and a display module 503 configured to dynamically display the virtual navigation image.
In this embodiment, the second sending module 501 sends the actual position information of the augmented reality device 500 indoors to the server, so that the server can match the actual position information with the preset navigation map, determine the real-time positioning information, and generate and send virtual guiding information to the augmented reality device 500 according to the guiding route and the real-time positioning information. The guiding route makes the path to be navigated explicit, and matching it against the real-time positioning information makes the mapped position of the augmented reality device 500 in the preset navigation map explicit, so that the guiding route can be dynamically adjusted in real time, improving navigation accuracy. The generation module 502 generates a virtual navigation image in response to the virtual guiding information sent by the server, and the display module 503 dynamically displays it, allowing the user to view the navigation information intuitively and in three dimensions.
It should be clear that the present application is not limited to the specific arrangements and processes described in the above embodiments and shown in the drawings. For convenience and brevity of description, detailed descriptions of known methods are omitted herein, and specific working processes of the systems, modules and units described above may refer to corresponding processes in the foregoing method embodiments, which are not repeated herein.
Fig. 7 shows a block diagram of the indoor navigation system provided in an embodiment of the present application. As shown in Fig. 7, the indoor navigation system includes the following devices: an offline map creation device 710, a cloud navigation server 720, a terminal 730, and a preset navigation map generation device 740.
Wherein the offline map creation device 710 includes: a panoramic image acquisition device 711 and a point cloud map creation device 712. Cloud navigation server 720 includes: a recognition positioning module 721, a path planning module 722, and a real-time navigation module 723. Terminal 730 includes: an initial positioning module 731, a destination selection module 732, a real-time navigation image feedback module 733, a virtual navigation image generation module 734, and a display module 735.
The terminal 730 may be a mobile phone terminal supporting AR functions (for example, supporting at least one of a motion capture function, an environment sensing function, a light source sensing function, etc.), and the panoramic image collection device 711 may include a panoramic still camera and/or a panoramic video camera.
Fig. 8 is a flowchart illustrating a navigation method of the indoor navigation system according to an embodiment of the present application. As shown in fig. 8, the navigation method of the indoor navigation system at least includes, but is not limited to, the following steps.
In step S801, the terminal 730 sends a map downloading request to the preset navigation map generating device 740.
The download request is used to request the preset navigation map, which is a map determined based on image data acquired in advance by the panoramic image acquisition device 711 in the offline map creation device 710.
For example, the download request may include information such as an identifier of the terminal 730 and a number of a preset navigation map, where the number of the preset navigation map may be a number obtained from historical interaction information of the terminal 730, or may be a number determined by real-time image information uploaded by the terminal.
For example, the panoramic image collection device 711 may send image data collected from the indoor environment (for example, the interior of a shopping mall) to the point cloud map creation device 712, so that the point cloud map creation device 712 can process the collected image data according to a preset three-dimensional reconstruction algorithm to obtain the preset navigation map.
For example, the point cloud map creation device 712 generates a dense map from the image data acquired by the panoramic image acquisition device 711 by invoking an optimized three-dimensional reconstruction algorithm; the dense map is then matched with the plane view corresponding to the floor of the shopping mall to be navigated, and the preset navigation map is determined.
The optimized three-dimensional reconstruction algorithm may include the OpenMVG algorithm and the OpenMVS algorithm. OpenMVG can accurately solve common problems in multi-view geometry, for example: calibration based on scene structure information; self-calibration based on camera active information (e.g., pure rotation); and self-calibration that depends on neither camera active information nor scene structure.
The OpenMVS algorithm is suited to scenarios such as dense point cloud reconstruction, surface reconstruction, surface refinement, and texture mapping, where surface refinement makes the reconstructed imagery sharper. The optimized OpenMVG and OpenMVS algorithms can be used cooperatively to achieve three-dimensional reconstruction from images.
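As a concrete illustration, the sketch below chains the two toolchains from a Python script in the order the embodiment describes: sparse reconstruction with OpenMVG, then densification, meshing, refinement, and texturing with OpenMVS. The directory layout is hypothetical, and the exact binary names, flags, and intermediate file names vary between OpenMVG/OpenMVS versions (newer OpenMVG releases, for instance, add a separate geometric filtering step), so this should be read as a sketch rather than a drop-in pipeline.

```python
import subprocess
from pathlib import Path

# Hypothetical directory layout; adjust to the actual deployment.
IMAGES = Path("mall_floor3/images")       # panoramic frames exported as stills
MATCHES = Path("mall_floor3/matches")
RECON = Path("mall_floor3/reconstruction")
MVS = Path("mall_floor3/mvs")

def run(cmd):
    """Run one pipeline stage and fail loudly if it errors."""
    print(">>", " ".join(map(str, cmd)))
    subprocess.run([str(c) for c in cmd], check=True)

for d in (MATCHES, RECON, MVS):
    d.mkdir(parents=True, exist_ok=True)

# 1. Structure-from-Motion with OpenMVG (sparse reconstruction).
run(["openMVG_main_SfMInit_ImageListing", "-i", IMAGES, "-o", MATCHES,
     "-d", "sensor_width_camera_database.txt"])
run(["openMVG_main_ComputeFeatures", "-i", MATCHES / "sfm_data.json", "-o", MATCHES])
run(["openMVG_main_ComputeMatches", "-i", MATCHES / "sfm_data.json", "-o", MATCHES])
run(["openMVG_main_IncrementalSfM", "-i", MATCHES / "sfm_data.json",
     "-m", MATCHES, "-o", RECON])

# 2. Hand the sparse scene to OpenMVS and densify it.
run(["openMVG_main_openMVG2openMVS", "-i", RECON / "sfm_data.bin",
     "-o", MVS / "scene.mvs", "-d", MVS / "undistorted"])
run(["DensifyPointCloud", MVS / "scene.mvs"])              # dense point cloud
run(["ReconstructMesh", MVS / "scene_dense.mvs"])          # surface reconstruction
run(["RefineMesh", MVS / "scene_dense_mesh.mvs"])          # surface refinement
run(["TextureMesh", MVS / "scene_dense_mesh_refine.mvs"])  # texture mapping
```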
It should be noted that the point cloud map creation device 712 may reduce the three-dimensional map to a two-dimensional planar map by orthographic projection, i.e., the three-dimensional map is mapped onto a two-dimensional plane while the x- and y-coordinates remain unchanged. The two-dimensional planar map can then be aligned with a preset CAD view to keep the coordinates consistent, so that the generated preset navigation map has a fixed scale and is a directional vector plane view.
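A minimal sketch of this dimension-reduction step is shown below, assuming the dense map is available as an N×3 NumPy array; the scale factor and CAD-frame offset are illustrative placeholders rather than values from the embodiment.

```python
import numpy as np

def orthographic_floor_map(points_xyz: np.ndarray,
                           scale: float,
                           offset_xy: np.ndarray) -> np.ndarray:
    """Reduce a dense 3-D map to a 2-D floor map by orthographic projection.

    The z-coordinate is simply dropped, so x and y stay unchanged, and a
    fixed similarity transform (scale + translation) aligns the result
    with the CAD plan so both share one coordinate frame.
    """
    xy = points_xyz[:, :2]            # orthographic projection: discard z
    return xy * scale + offset_xy     # align with the CAD view's frame

# Toy usage with a made-up scale factor and CAD origin offset.
cloud = np.array([[1.0, 2.0, 0.3], [4.5, 2.2, 1.8], [4.6, 7.1, 0.9]])
cad_xy = orthographic_floor_map(cloud, scale=100.0,
                                offset_xy=np.array([250.0, 80.0]))
print(cad_xy)  # 2-D coordinates in the CAD plan's units
```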
In some specific implementations, the preset navigation map can be accurately displayed on display screens of different sizes (for example, display screens of mobile phones or display screens of tablet computers of different sizes, etc.), so as to improve the use experience of users.
For example, the two-dimensional planar CAD view corresponding to a floor of the mall may be used in reverse to improve the accuracy of the dense map constructed from the point cloud data and to remove noise interference from it, making the dense map more accurate.
In step S802, the preset navigation map generating device 740 sends the preset navigation map corresponding to the current scene of the terminal 730 to the terminal 730, completing the map initialization of the terminal 730.
In step S803, the terminal 730 uploads the live-action image of the current location to the cloud navigation server 720.
In step S804, the cloud navigation server 720 processes the live-action image uploaded by the terminal 730 through the identification positioning module 721 and matches it with the preset navigation map to determine the real-time positioning information.
For example, the recognition positioning module 721 invokes a deep-learning hierarchical semantic description algorithm to classify the live-action image uploaded by the terminal 730, first identifying its coarse category (e.g., house image or person image) and then refining within that category to obtain the initial position of the terminal 730.
Further, the live-action image uploaded by the terminal 730 may include a partial-area view or a panoramic image. The panoramic image may be a 360-degree image captured by a panoramic camera; the 360-degree image is evenly divided into 12 projection planes, and image retrieval and scene recognition are performed on each projection plane using a NetVLAD-based encoding, so as to obtain the initial position of the terminal 730, for example, the three-dimensional space coordinate information corresponding to the position of the terminal 730.
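The sketch below illustrates the retrieval idea under simplifying assumptions: the 360-degree image is assumed to be in equirectangular form and is sliced into 12 equal 30-degree yaw sectors (a faithful implementation would reproject each sector onto a perspective plane), and `encode` is a placeholder standing in for a NetVLAD forward pass that returns an L2-normalized global descriptor.

```python
import numpy as np

def split_panorama(equirect: np.ndarray, n_views: int = 12) -> list[np.ndarray]:
    """Evenly divide a 360-degree equirectangular image into yaw sectors."""
    h, w = equirect.shape[:2]
    step = w // n_views
    return [equirect[:, i * step:(i + 1) * step] for i in range(n_views)]

def encode(view: np.ndarray) -> np.ndarray:
    """Placeholder for a NetVLAD forward pass returning a unit descriptor."""
    v = np.resize(view.astype(np.float64).ravel(), 512)
    return v / (np.linalg.norm(v) + 1e-9)

def locate(panorama: np.ndarray, db_desc: np.ndarray,
           db_pos: np.ndarray) -> np.ndarray:
    """Retrieve the mapped 3-D position of the best-matching keyframe.

    db_desc: (M, 512) unit descriptors of mapped keyframes; db_pos: (M, 3).
    """
    best_score, best_idx = -1.0, 0
    for view in split_panorama(panorama):
        sims = db_desc @ encode(view)     # cosine similarity of unit vectors
        i = int(np.argmax(sims))
        if sims[i] > best_score:
            best_score, best_idx = float(sims[i]), i
    return db_pos[best_idx]
```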
In some implementations, the real-time positioning information may include: the position of the terminal 730 corresponds to two-dimensional coordinate information in a preset navigation map, which is coordinate information obtained by mapping three-dimensional space coordinate information corresponding to the position of the terminal 730 to the preset navigation map.
In step S805, the cloud navigation server 720 sends the real-time positioning information to the terminal 730, so that the terminal 730 uses the display module 735 to display the real-time positioning information.
Wherein, the real-time positioning information can represent the real-time position of the terminal 730 in the preset navigation map.
In step S806, the destination selection module 732 in the terminal 730 obtains the target location information input by the user, and generates and sends the path navigation message to the cloud navigation server 720 based on the initial location information of the terminal 730 and the target location information.
For example, the target position information may be determined by the user directly tapping a specific position on the map displayed on the mobile phone terminal, making target selection simple and convenient and ensuring the usability of address selection.
In step S807, the cloud navigation server 720 parses the received path navigation message to obtain the initial position information and the target position information of the terminal 730, and then uses the path planning module 722 to invoke a rapidly-exploring random tree (RRT) algorithm to process the initial position information and the target position information to obtain the guiding route.
It should be noted that the RRT algorithm is a tree-shaped data structure and algorithm that builds the tree incrementally along sampled paths, rapidly reducing the distance between a randomly selected point and the tree. RRT can effectively search non-convex high-dimensional spaces and is particularly suitable for path planning under differential constraints, including problems with obstacles, nonholonomic systems, and kinodynamic planning.
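For illustration, a minimal 2-D RRT is sketched below; obstacles are simplified to circles and all coordinates are in map units, neither of which is specified by the embodiment.

```python
import math
import random

def rrt(start, goal, obstacles, x_max, y_max,
        step=0.5, goal_tol=0.8, max_iter=5000):
    """Minimal 2-D RRT: grow a tree from start until it reaches the goal.

    obstacles: list of (cx, cy, r) circles standing in for walls/shelves.
    Returns the waypoint list from start to goal, or None on failure.
    """
    def collides(p):
        return any(math.hypot(p[0]-cx, p[1]-cy) <= r for cx, cy, r in obstacles)

    nodes, parent = [start], {0: None}
    for _ in range(max_iter):
        sample = (random.uniform(0, x_max), random.uniform(0, y_max))
        # Nearest node in the tree to the random sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Steer a fixed step from the nearest node toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if collides(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:   # close enough: backtrack
            path, k = [goal], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

route = rrt(start=(1, 1), goal=(18, 12),
            obstacles=[(8, 6, 2.5), (13, 9, 2.0)], x_max=20, y_max=15)
```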
In step S808, the cloud navigation server 720 sends the guiding route to the terminal 730, so that the terminal 730 displays the guiding route using the display module 735.
In step S809, the terminal 730 uses the real-time navigation image feedback module 733 to upload the position information and scene images acquired in real time to the real-time navigation module 723 in the cloud navigation server 720.
In step S810, the real-time navigation module 723 determines the virtual guiding information by performing processing such as motion capture, environment sensing, and light source sensing on the position information and scene images acquired in real time.
The virtual guiding information may include at least one of camera pose estimation information, environment perception information, and light source perception information. The environment perception information characterizes the position of the terminal 730 within the building to be navigated, the camera pose estimation information characterizes the orientation of the terminal 730, and the light source perception information characterizes the light source information collected by the terminal 730. Together, these comprehensively measure the state of the terminal 730 during real-time navigation and improve navigation accuracy.
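One way to picture the guidance update is as a small structured message; the sketch below uses illustrative field names, since nothing in the embodiment fixes the wire format.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualGuidingInfo:
    """One server-to-terminal guidance update (field names are illustrative)."""
    camera_pose: dict = field(default_factory=dict)   # e.g. position + quaternion
    environment: dict = field(default_factory=dict)   # position within the building
    light_source: dict = field(default_factory=dict)  # e.g. intensity / colour temp

msg = VirtualGuidingInfo(
    camera_pose={"position": [3.2, 7.5, 1.5], "quaternion": [1.0, 0.0, 0.0, 0.0]},
    environment={"floor": 3, "map_xy": [3.2, 7.5]},
    light_source={"intensity_lux": 180.0, "color_temp_k": 4000.0},
)
```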
In step S811, the cloud navigation server 720 sends the updated virtual guiding information to the terminal 730, so that the terminal 730 generates the updated virtual navigation image by using the virtual navigation image generating module 734 and dynamically displays the updated virtual navigation image in real time by using the display module 735.
It should be noted that, steps S809 to S811 may be repeatedly performed during the navigation process to adjust the real-time virtual navigation image.
For example, during navigation the terminal 730 uploads image information corresponding to its current location to the cloud navigation server 720 in real time, so that the cloud navigation server 720 can match this image information against the guiding route and dynamically adjust the guiding route in real time, ensuring the consistency and accuracy of the guiding route.
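A simple form of this route check is to measure the terminal's distance to the nearest segment of the guiding route and replan when it exceeds a tolerance; the sketch below makes that concrete, with the tolerance value being an illustrative assumption.

```python
import numpy as np

def off_route(position_xy: np.ndarray, route_xy: np.ndarray,
              tolerance: float = 1.5) -> bool:
    """Return True when the terminal has drifted too far from the route.

    route_xy: (N, 2) waypoints of the current guiding route in map
    coordinates. Distance is measured to each route segment, not just to
    the waypoints, so long straight corridors are handled correctly.
    """
    best = float("inf")
    for a, b in zip(route_xy[:-1], route_xy[1:]):
        ab, ap = b - a, position_xy - a
        t = np.clip(ab @ ap / (ab @ ab + 1e-12), 0.0, 1.0)
        best = min(best, float(np.linalg.norm(position_xy - (a + t * ab))))
    return best > tolerance

# If the check trips, the server would replan, e.g. rerun RRT from the
# terminal's current mapped position to the unchanged target position.
```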
In step S812, after the terminal 730 reaches the target position (i.e., the navigation endpoint), it continues to send the image corresponding to the endpoint to the cloud navigation server 720, so that the cloud navigation server 720 can determine, in combination with the preceding guiding route, whether the endpoint matches the preset target position information; if they match, the navigation process ends.
In some implementations, the navigation beacon used during AR navigation may be rendered as a cartoon character or similar figure to make AR navigation more engaging.
In this embodiment, a panoramic camera device in a portable terminal collects visual data of the scene inside the building to be navigated, and three-dimensional reconstruction is performed on the collected visual data to generate a dense map corresponding to the physical space of the building. The visual data is segmented and mapped to generate a point cloud map, and the point cloud map is aligned with a preset CAD view, so that the generated preset navigation map has a fixed scale and is a directional vector plane view. During navigation, the terminal uploads image information corresponding to its position to the cloud navigation server in real time, so that the server can adjust the pre-planned guiding route in real time and generate virtual guiding information. The virtual guiding information is sent to the terminal, which generates an updated virtual navigation image based on it and dynamically displays the image in AR, allowing the user to view the navigation information dynamically and in three dimensions, facilitating positioning and navigation and improving navigation accuracy.
Fig. 9 illustrates a block diagram of an exemplary hardware architecture of a computing device capable of implementing the indoor navigation method and apparatus according to embodiments of the present application.
As shown in fig. 9, the computing device 900 includes an input device 901, an input interface 902, a central processor 903, a memory 904, an output interface 905, and an output device 906. The input interface 902, the central processing unit 903, the memory 904, and the output interface 905 are connected to each other through a bus 907, and the input device 901 and the output device 906 are connected to the bus 907, and further connected to other components of the computing device 900 through the input interface 902 and the output interface 905, respectively.
Specifically, the input device 901 receives input information from the outside, and transmits the input information to the central processor 903 through the input interface 902; the central processor 903 processes the input information based on computer-executable instructions stored in the memory 904 to generate output information, temporarily or permanently stores the output information in the memory 904, and then transmits the output information to the output device 906 through the output interface 905; output device 906 outputs the output information to the outside of computing device 900 for use by a user.
In one embodiment, the computing device shown in fig. 9 may be implemented as an electronic device, which may include: a memory configured to store a program; and a processor configured to run a program stored in the memory to perform the indoor navigation method described in the above embodiment.
In one embodiment, the computing device shown in FIG. 9 may be implemented as an indoor navigation system, which may include: a memory configured to store a program; and a processor configured to run a program stored in the memory to perform the indoor navigation method described in the above embodiment.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. In general, the various embodiments of the application may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the application is not limited thereto.
Embodiments of the present application may be implemented by a data processor of a mobile device executing computer program instructions, e.g. in a processor entity, either in hardware, or in a combination of software and hardware. The computer program instructions may be assembly instructions, instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages.
The block diagrams of any logic flow in the figures of this application may represent program steps, or may represent interconnected logic circuits, modules, and functions, or may represent a combination of program steps and logic circuits, modules, and functions. The computer program may be stored on a memory. The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as, but not limited to, read-only memory (ROM), random-access memory (RAM), and optical storage devices and systems (DVD or CD). The computer-readable medium may include a non-transitory storage medium. The data processor may be of any type suitable to the local technical environment, such as, but not limited to, general-purpose computers, special-purpose computers, microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and processors based on a multi-core processor architecture.
The above provides, by way of example and without limitation, a detailed description of exemplary embodiments of the present application. Various modifications and adaptations to the above embodiments may become apparent to those skilled in the art, considered in conjunction with the accompanying drawings and claims, without departing from the scope of the present application. Accordingly, the proper scope of the present application is to be determined according to the claims.

Claims (17)

1. An indoor navigation method, characterized in that the method comprises:
acquiring actual position information of an Augmented Reality (AR) device indoors in real time, wherein the actual position information is used for representing the position information of the AR device in a world coordinate system;
matching the actual position information with a preset navigation map to determine real-time positioning information;
determining virtual guide information according to the real-time positioning information and the acquired guide route, wherein the guide route is a route determined based on the acquired initial position information and target position information of the AR device;
and sending the virtual guide information to the AR device so that the AR device generates and dynamically displays a virtual navigation image according to the virtual guide information.
2. The method of claim 1, wherein the actual location information comprises: a live view corresponding to the actual position of the AR device indoors;
the step of matching the actual position information with a preset navigation map to determine real-time positioning information comprises the following steps:
processing a live view corresponding to the actual position of the AR device indoors based on a deep learning neural network to obtain position information to be matched;
searching the preset navigation map according to the position information to be matched, and determining whether the position information to be matched exists in the preset navigation map;
and under the condition that the position information to be matched exists in the preset navigation map, the position information matched with the position information to be matched in the preset navigation map is used as the real-time positioning information.
3. The method according to claim 2, wherein the processing the live view corresponding to the actual location of the AR device indoors based on the deep learning neural network to obtain the location information to be matched includes:
extracting local features from a live view corresponding to the actual indoor position of the AR device;
inputting the local features into the deep learning neural network to obtain global features corresponding to the actual indoor positions of the AR devices, wherein the global features are used for representing the position information of the AR devices in a building to be navigated;
and determining the position information to be matched based on global features corresponding to the actual position of the AR device in the room.
4. The method of claim 1, wherein before the acquiring the actual position information of the augmented reality AR device indoors in real time, the method further comprises:
acquiring panoramic data of a building to be navigated;
performing point cloud mapping based on the panoramic data and a preset algorithm to generate a dense map;
and determining the preset navigation map according to the dense map and the plane view corresponding to the building to be navigated.
5. The method of claim 4, wherein the generating a dense map based on the panoramic data and a predetermined algorithm comprises:
processing the panoramic data according to a photogrammetry principle to generate point cloud data, wherein the point cloud data comprises three-dimensional coordinate information and color information;
and processing the point cloud data according to a preset three-dimensional reconstruction algorithm to generate the dense map.
6. The method of claim 4, wherein determining the preset navigation map from the dense map and the corresponding plan view of the building to be navigated comprises:
mapping, according to a preset scale factor, the dense map into a plane view to be processed by means of orthographic projection;
and matching the plane view to be processed with the plane view corresponding to the building to be navigated, and determining the preset navigation map.
7. The method of claim 6, wherein the plane view corresponding to the building to be navigated comprises: a computer-aided design (CAD) view, wherein the CAD view is a vector plane view determined based on the preset scale factor.
8. The method of claim 1, wherein the guidance route includes a plurality of guidance location information;
the determining virtual guiding information according to the real-time positioning information and the acquired guiding route comprises:
matching the real-time positioning information with a plurality of pieces of guiding position information, and determining to-be-moved direction information and to-be-moved distance information corresponding to the AR device;
updating the guiding route according to the information of the direction to be moved, the information of the distance to be moved and the plurality of guiding position information corresponding to the AR device;
and determining the virtual guide information according to the updated guide route.
9. The method of claim 1, wherein after the sending the virtual guidance information to the AR device to cause the AR device to generate and dynamically display a virtual navigation image according to the virtual guidance information, the method further comprises:
receiving arrival position information fed back by the AR device;
comparing the arrival location information with the target location information to determine whether the AR device arrives at a target location; and
ending navigation in a case where it is determined that the AR device has reached the target location.
10. The method according to any one of claims 1 to 9, wherein the virtual guide information comprises: at least one of camera pose estimation information, environment perception information, and light source perception information;
the environment perception information is used for representing position information of the AR device in a building to be navigated, the camera attitude estimation information is used for representing direction information corresponding to the AR device, and the light source perception information is used for representing light source information acquired by the AR device.
11. An indoor navigation method, characterized in that the method comprises:
transmitting actual position information of an augmented reality AR device indoors to a server, so that the server matches the actual position information with a preset navigation map, determines real-time positioning information, and generates and transmits virtual guiding information to the AR device according to a guiding route and the real-time positioning information, wherein the guiding route is determined based on the acquired initial position information and target position information of the AR device, and the actual position information is used for representing the position information of the AR device in a world coordinate system;
generating a virtual navigation image in response to the virtual guide information sent by the server;
and dynamically displaying the virtual navigation image.
12. The method of claim 11, wherein generating a virtual navigation image in response to the virtual guidance information sent by the server comprises:
receiving virtual guide information sent by the server, wherein the virtual guide information comprises camera posture estimation information, environment perception information and light source perception information, the environment perception information is used for representing position information of the AR device in a building to be navigated, the camera posture estimation information is used for representing direction information corresponding to the AR device, and the light source perception information is used for representing light source information acquired by the AR device;
processing the environment sensing information and the light source sensing information according to a preset three-dimensional reconstruction algorithm to obtain a virtual image of the augmented reality AR;
and matching the camera attitude estimation information with the virtual image of the AR, and determining the virtual navigation image.
13. A server, comprising:
an acquisition module configured to acquire actual location information of an augmented reality AR device indoors in real time, the actual location information being used to characterize location information of the AR device in a world coordinate system;
a matching module configured to match the actual position information with a preset navigation map and determine real-time positioning information;
a determining module configured to determine virtual guidance information according to the real-time positioning information and the acquired guidance route, the guidance route being a route determined based on the acquired initial position information and target position information of the AR device;
and a first sending module configured to send the virtual guide information to the AR device, so that the AR device generates and dynamically displays a virtual navigation image according to the virtual guide information.
14. An augmented reality AR device, comprising:
a second sending module configured to send actual position information of the augmented reality AR device indoors to a server, so that the server matches the actual position information with a preset navigation map, determines real-time positioning information, and generates and sends virtual guiding information to the AR device according to a guiding route and the real-time positioning information, wherein the guiding route is a route determined based on the acquired initial position information and target position information of the AR device;
a generation module configured to generate a virtual navigation image in response to the virtual guide information sent by the server; and
a display module configured to dynamically display the virtual navigation image.
15. A terminal, comprising:
the at least one augmented reality AR device of claim 14.
16. An electronic device, comprising:
one or more processors;
a memory having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the indoor navigation method of any of claims 1-10, or any of claims 11-12.
17. A readable storage medium, characterized in that the readable storage medium stores a computer program, which when executed by a processor implements the indoor navigation method according to any one of claims 1-10, or any one of claims 11-12.