
WO2022201682A1 - Position detection system, position detection method, and program - Google Patents

Position detection system, position detection method, and program

Info

Publication number
WO2022201682A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
user
section
positioning
linking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2021/046976
Other languages
French (fr)
Japanese (ja)
Inventor
燕峰 王
昌幸 天野
貴丈 大塚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Intellectual Property Management Co Ltd
Original Assignee
Panasonic Intellectual Property Management Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Intellectual Property Management Co Ltd filed Critical Panasonic Intellectual Property Management Co Ltd
Priority to JP2023508628A priority Critical patent/JP7565504B2/en
Publication of WO2022201682A1 publication Critical patent/WO2022201682A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00Measuring distances in line of sight; Optical rangefinders
    • G01C3/02Details
    • G01C3/06Use of electric means to obtain final indication
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/02Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • H04W4/33Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W64/00Locating users or terminals or network equipment for network management purposes, e.g. mobility management

Definitions

  • the present disclosure relates generally to position detection systems, position detection methods and programs, and more particularly to position detection systems, position detection methods and programs utilizing beacon signals and imaging devices.
  • a position calculation system (position detection system) described in Patent Document 1 includes a plurality of beacons (beacon terminals), a photographing device (imaging device), and an information processing device.
  • the photographing device is connected to one of the plurality of beacons and photographs the space in which the moving object moves.
  • the information processing device receives information on the position of the moving object calculated based on the image of the space captured by the imaging device.
  • An object of the present disclosure is to provide a position detection system, a position detection method, and a program capable of identifying a user to be positioned while performing positioning using the positioning accuracy of an imaging device.
  • a position detection system includes a first positioning section, a second positioning section, and a linking section.
  • the first positioning section generates first data based on a beacon signal that includes identification information of a user, is transmitted from a beacon terminal carried by the user, and is received by a scanner.
  • the first data includes the identification information of the user and first location information of the user.
  • the second positioning unit generates second data based on the image data generated by an imaging device that captures a space in which the user stays and generates image data.
  • the second data includes second location information of the user.
  • the linking unit links the first data and the second data.
  • the linking unit sets, among a plurality of sections obtained by dividing the space in which the user stays, the section corresponding to the first position information as a first corresponding section corresponding to the first data, and sets the section corresponding to the second position information as a second corresponding section corresponding to the second data.
  • the linking unit determines linking of the first data and the second data based on a match rate between the first corresponding section and the second corresponding section.
  • a position detection method includes a first positioning process, a second positioning process, and a linking process.
  • first data is generated based on a beacon signal including identification information of the user transmitted from a beacon terminal owned by the user and received by the scanner.
  • the first data includes the identification information of the user and first location information of the user.
  • second data is generated based on the image data generated by an imaging device that captures a space in which the user stays and generates image data.
  • the second data includes second location information of the user.
  • the first data and the second data are linked.
  • in the linking step, among a plurality of sections obtained by dividing the space in which the user stays, the section corresponding to the first position information is set as the first corresponding section corresponding to the first data, and the section corresponding to the second position information is set as the second corresponding section corresponding to the second data.
  • linking between the first data and the second data is determined based on the match rate between the first corresponding section and the second corresponding section.
  • a program according to one aspect of the present disclosure is a program for causing one or more processors to execute the position detection method.
  • FIG. 1 is a block diagram of a position detection system according to one embodiment.
  • FIG. 2 is a schematic diagram showing an installation state of the position detection system.
  • FIG. 3 is a flow chart showing a position detection method using the same position detection system.
  • FIG. 4 is an explanatory diagram for explaining the process of obtaining the match rate, which is executed by the position detection system.
  • the position detection system 1 (LPS: Local Positioning System) of this embodiment includes a first positioning section 421 , a second positioning section 422 , and a linking section 423 .
  • the first positioning unit 421 generates first data based on the beacon signal including the identification information of the user U1, which is transmitted from the beacon terminal 2 owned by the user U1 (see FIG. 2) and received by the scanner 3.
  • the first data includes identification information of user U1 and first location information of user U1.
  • the second positioning unit 422 generates second data based on the image data generated by the imaging device 5 that captures the space where the user U1 stays and generates image data.
  • the second data includes second location information of user U1.
  • the linking unit 423 links the first data and the second data.
  • the linking unit 423 sets the section corresponding to the first position information among the plurality of sections D1 to D16 (see FIG. 4) obtained by dividing the space where the user U1 stays as the first corresponding section corresponding to the first data.
  • the linking unit 423 determines the linking between the first data and the second data based on the match rate between the first corresponding section and the second corresponding section.
  • the first data including the identification information of the user U1 can be associated with the second data. Therefore, even if the second data does not include the identification information of the user U1, the second data can be associated with the identification information of the user U1.
  • the first data includes first position information based on beacon signals
  • the second data includes second position information based on image data captured by imaging device 5 .
  • positioning based on image data is more accurate than positioning based on beacon signals. That is, in many cases, the second location information more accurately represents the location of user U1 than the first location information.
  • the position detection system 1 includes multiple beacon terminals 2 , multiple scanners 3 , a positioning server 4 , and multiple imaging devices 5 .
  • Each of the plurality of beacon terminals 2, the plurality of scanners 3, the positioning server 4 and the plurality of imaging devices 5 includes a computer system having one or more processors and memory. At least part of the functions of each of the plurality of beacon terminals 2, the plurality of scanners 3, the positioning server 4, and the plurality of imaging devices 5 is realized by the processor of the computer system executing a program recorded in the memory of the computer system.
  • the program may be recorded in the memory in advance, provided through an electric communication line such as the Internet, or recorded on a non-transitory recording medium such as a memory card and provided.
  • the beacon terminal 2 is carried by the user U1 (see FIG. 2).
  • a plurality of users U1 who use the facility carry beacon terminals 2, respectively.
  • Identification information is assigned to each beacon terminal 2, and the position detection system 1 distinguishes between the plurality of beacon terminals 2 based on the identification information.
  • the position detection system 1 measures the position of each of the multiple beacon terminals 2 in the facility, that is, the position of each of the multiple users U1.
  • “Facilities” referred to in this disclosure are, for example, office buildings, factories, commercial complexes, libraries, art museums, museums, amusement facilities, theme parks, parks, airports, railway stations, ballparks, hotels, hospitals and residences.
  • the "facility” may be, for example, a mobile object such as a ship or a railway vehicle.
  • a “beacon terminal” as used in the present disclosure is a mobile terminal carried by the user U1, for example, a communication terminal such as a smartphone.
  • the “beacon terminal” referred to in the present disclosure is not limited to a smart phone, and may be a tablet-type mobile terminal or a terminal such as a tag that is dedicated to the position detection system 1 .
  • the "beacon terminal” may be owned by the user U1 or may be borrowed.
  • a plurality of scanners 3 are installed in the facility.
  • the multiple scanners 3 are installed on the ceiling of the building in the facility (see FIG. 2).
  • the multiple scanners 3 receive beacon signals transmitted from the beacon terminals 2 and measure the received signal strength indication (RSSI) of the beacon signals.
  • the positioning server 4 obtains the distance between each of the multiple scanners 3 and the beacon terminal 2 based on the received signal strength of the beacon signal at each of the multiple scanners 3. Then, the positioning server 4 obtains the position (first position information) of the beacon terminal 2 based on those distances. As an example, the positioning server 4 obtains the position of the beacon terminal 2 by performing three-point positioning (trilateration) or the like using the position information of each of the plurality of scanners 3 and the distance between each of the plurality of scanners 3 and the beacon terminal 2.
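  • As a hedged illustration of this step, the sketch below estimates a beacon terminal's position from RSSI readings at several scanners. The log-distance path-loss constants (tx_power_dbm, path_loss_exponent) and the least-squares formulation are assumptions chosen for illustration; the disclosure only states that distances derived from the received signal strength are combined by three-point positioning or the like.
```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Convert an RSSI reading to an estimated distance (m) using a
    log-distance path-loss model (model constants are assumed)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def trilaterate(scanner_positions, distances):
    """Estimate the 2-D position of a beacon terminal from known scanner
    positions and estimated distances, via linear least squares."""
    p = np.asarray(scanner_positions, dtype=float)  # shape (k, 2), k >= 3
    d = np.asarray(distances, dtype=float)          # shape (k,)
    # Subtract the last circle equation from the others to linearise them.
    A = 2 * (p[:-1] - p[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(p[:-1] ** 2, axis=1) - np.sum(p[-1] ** 2))
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy  # first position information (x, y)

# Example with hypothetical scanner placements and RSSI readings.
scanners = [(0.0, 0.0), (6.0, 0.0), (0.0, 6.0)]
rssi = [-63.0, -70.0, -68.0]
print(trilaterate(scanners, [rssi_to_distance(r) for r in rssi]))
```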
  • a plurality of imaging devices 5 are installed in the facility.
  • the plurality of imaging devices 5 are installed on the ceiling of the building in the facility (see FIG. 2).
  • the positioning server 4 obtains the position (second position information) of each of the users U1 by processing the images captured by the plurality of imaging devices 5 .
  • the beacon terminal 2 includes a communication section 21, a processing section 22, and a storage section 23.
  • the communication unit 21 includes a communication interface device.
  • the communication section 21 can communicate with the communication section 31 of the scanner 3 .
  • “Communicable” as used in the present disclosure means that a signal can be sent and received directly or indirectly via a network, a repeater, or the like, by an appropriate communication method such as wired communication or wireless communication.
  • the communication method between the communication unit 21 and the communication unit 31 is, for example, Bluetooth (registered trademark) Low Energy, WiFi (registered trademark), or the like.
  • the communication unit 21 transmits a beacon signal.
  • the beacon signal includes identification information of user U1 registered in beacon terminal 2 .
  • the identification information of the beacon terminal 2 may be used as the identification information of the user U1.
  • the processing unit 22 is mainly composed of a computer system.
  • the processing unit 22 performs overall control of the beacon terminal 2 .
  • the processing unit 22 controls the communication unit 21 and causes the communication unit 21 to transmit a beacon signal.
  • the storage unit 23 stores information about the beacon terminal 2.
  • the storage unit 23 stores identification information of the user U1 who carries the beacon terminal 2 .
  • the scanner 3 includes a communication section 31 , a processing section 32 and a storage section 33 .
  • the communication unit 31 includes a communication interface device.
  • the communication section 31 has a function as the first communication section 311 and a function as the second communication section 312 .
  • the first communication unit 311 can communicate with the communication unit 21 of the beacon terminal 2 .
  • the second communication unit 312 can communicate with the communication unit 41 of the positioning server 4 .
  • the processing unit 32 is mainly composed of a computer system.
  • the processing unit 32 performs overall control of the scanner 3 . Also, the processing unit 32 measures the received signal strength of the beacon signal received by the communication unit 31 .
  • the storage unit 33 stores information about the scanner 3.
  • the storage unit 33 stores identification information of the scanner 3 .
  • the imaging device 5 includes a communication section 51 , a processing section 52 , a storage section 53 and an imaging section 55 .
  • the communication unit 51 includes a communication interface device.
  • the communication unit 51 can communicate with the communication unit 41 of the positioning server 4 .
  • the processing unit 52 is mainly composed of a computer system.
  • the processing unit 52 performs overall control of the imaging device 5 .
  • the storage unit 53 stores information regarding the imaging device 5 .
  • the storage unit 53 stores identification information of the imaging device 5 .
  • the imaging unit 55 photographs the space where the user U1 stays.
  • the imaging unit 55 includes a two-dimensional image sensor such as a CCD (Charge Coupled Devices) image sensor or a CMOS (Complementary Metal-Oxide Semiconductor) image sensor.
  • the imaging device 5 is installed on the ceiling of the building.
  • the imaging device 5 photographs the space below the imaging device 5 .
  • the image captured by the imaging device 5 is an image of the user U1 taken from above. As a result, it becomes difficult to distinguish the face of the user U1, and the privacy of the user U1 is easily protected.
  • the positioning server 4 includes a communication section 41 , a processing section 42 and a storage section 43 .
  • the communication unit 41 includes a communication interface device.
  • the communication unit 41 can communicate with the communication unit 31 of each of the plurality of scanners 3 and the communication unit 51 of each of the plurality of imaging devices 5 .
  • the processing unit 42 is mainly composed of a computer system.
  • the processing unit 42 has functions as a first positioning unit 421 , a second positioning unit 422 and a linking unit 423 .
  • the storage unit 43 stores identification information and position information for each of the multiple scanners 3 . Further, the storage unit 43 stores identification information and position information of each of the plurality of imaging devices 5 .
  • the positioning server 4 acquires received signal information from each of the multiple scanners 3 .
  • the received signal information includes information regarding the received signal strength of the beacon signal. Further, the received signal information includes identification information of the user U1 and identification information of the scanner 3.
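  • For concreteness, a minimal sketch of the data carried by a beacon signal and of the received-signal information that each scanner forwards to the positioning server is shown below. All field names are hypothetical; the actual advertisement and message formats are not specified in this disclosure.
```python
from dataclasses import dataclass

@dataclass
class BeaconSignal:
    """Payload broadcast by a beacon terminal 2 (field name is assumed)."""
    user_id: str          # identification information of the user U1

@dataclass
class ReceivedSignalInfo:
    """Record a scanner 3 sends to the positioning server 4 after receiving
    a beacon signal and measuring its strength."""
    user_id: str          # copied from the beacon signal
    scanner_id: str       # identification information of the scanner 3
    rssi_dbm: float       # received signal strength indication (RSSI)
    timestamp_s: float    # reception time, used to group readings per second
```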
  • The first positioning unit 421 obtains the distance between each of the plurality of scanners 3 and the beacon terminal 2 based on the received signal strength of the beacon signal at each of the plurality of scanners 3, and obtains the position of the beacon terminal 2, that is, the first position information of the user U1, based on those distances.
  • the first positioning unit 421 can distinguish between the multiple users U1 and individually measure the positions of the multiple users U1.
  • the first positioning unit 421 measures the position of each of the multiple users U1 at predetermined time intervals (for example, every second).
  • the positioning server 4 acquires image data from each of the multiple imaging devices 5 .
  • the second positioning unit 422 obtains second position information of the user U1 based on the image data. More specifically, the positioning server 4 obtains the second position information of the user U1 based on the position and orientation of the imaging device 5 and the position of the user U1 in the image data.
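  • A minimal sketch of how second position information could be derived from the position of a person in an image taken by a downward-facing ceiling camera follows. The pinhole-camera model, the camera height, and the focal length in pixels are assumptions for illustration; the disclosure only states that the camera's position and orientation and the user's position in the image data are used.
```python
def pixel_to_floor(u, v, cam_x, cam_y, cam_height_m,
                   fx_px, fy_px, cx_px, cy_px):
    """Map an image point (u, v) of a person on the floor to floor
    coordinates, assuming a pinhole camera looking straight down from
    cam_height_m above the floor (all intrinsics are assumed, and the
    person's own height is ignored for simplicity)."""
    dx = (u - cx_px) / fx_px * cam_height_m
    dy = (v - cy_px) / fy_px * cam_height_m
    return cam_x + dx, cam_y + dy  # second position information (x, y)

# Hypothetical example: camera at (3.0, 3.0), mounted 2.7 m above the floor.
print(pixel_to_floor(880, 400, 3.0, 3.0, 2.7,
                     fx_px=800.0, fy_px=800.0, cx_px=640.0, cy_px=360.0))
```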
  • the second positioning unit 422 does not identify who the user U1 is. However, the second positioning unit 422 distinguishes between users by assigning a management ID to each user U1. Furthermore, the second positioning unit 422 tracks each user U1 by referring to temporally continuous image data, as sketched below. Thereby, the second positioning unit 422 can distinguish whether the user U1 whose position was measured at one point in time and the user U1 whose position was measured at another point in time are the same person or different persons.
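  • The tracking described above could be done, for example, with simple nearest-neighbour association between consecutive frames. The sketch below is only an illustration under stated assumptions (the distance threshold and the greedy matching are not specified in the disclosure); it shows how a management ID can persist across temporally continuous image data.
```python
import itertools
import math

class SimpleTracker:
    """Greedy nearest-neighbour tracker that assigns a management ID to each
    detected person and keeps it across frames (parameters are assumed)."""

    def __init__(self, max_jump_m=1.0):
        self.max_jump_m = max_jump_m
        self.tracks = {}                       # management ID -> last (x, y)
        self._ids = itertools.count(1)

    def update(self, detections):
        """detections: list of (x, y) floor positions from one frame.
        Returns a dict mapping management ID -> (x, y)."""
        assigned = {}
        unused = dict(self.tracks)
        for x, y in detections:
            best_id, best_d = None, self.max_jump_m
            for tid, (px, py) in unused.items():
                d = math.hypot(x - px, y - py)
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:                # a new person entered the space
                best_id = next(self._ids)
            else:
                unused.pop(best_id)
            assigned[best_id] = (x, y)
        self.tracks = assigned                 # tracks that vanished are dropped
        return assigned
```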
  • the positioning accuracy of the second positioning unit 422 is higher than the positioning accuracy of the first positioning unit 421. Also, the time interval at which the second positioning unit 422 generates the second location information is shorter than the time interval at which the first positioning unit 421 generates the first location information.
  • the position detection method of this embodiment can be realized by the position detection system 1 .
  • the position detection method includes a first positioning process (steps ST1, ST2), a second positioning process (steps ST3, ST4), and a linking process (steps ST5 to ST8).
  • in the first positioning step, the first data is generated based on the beacon signal including the identification information of the user U1, which is transmitted from the beacon terminal 2 owned by the user U1 and received by the scanner 3.
  • the first data includes identification information of user U1 and first location information of user U1.
  • the second data is generated based on the image data generated by the imaging device 5 that captures the space where the user U1 stays and generates the image data.
  • the second data includes second location information of user U1.
  • the first data and the second data are linked.
  • in the linking step, the section corresponding to the first position information is set as the first corresponding section corresponding to the first data, and the section corresponding to the second position information is set as the second corresponding section corresponding to the second data.
  • linking between the first data and the second data is determined based on the matching rate between the first corresponding section and the second corresponding section.
  • a program according to one aspect is a program for causing one or more processors to execute the position detection method described above.
  • the program may be recorded on a computer-readable non-transitory recording medium.
  • FIG. 3 is merely an example of the position detection method according to the present disclosure, and the order of processing may be changed as appropriate, and processing may be added or omitted as appropriate.
  • each of the plurality of users U1 possesses a beacon terminal 2.
  • Let N be the number of target users U1 whose positions are measured based on beacon signals using the beacon terminals 2 and the scanners 3 in a certain period.
  • Let M be the number of target users U1 whose positions are measured based on the image data using the imaging devices 5 during the same period. N and M may or may not match.
  • the first positioning unit 421 performs positioning based on the beacon signal (step ST1). Thereby, the first positioning unit 421 generates the first data α1, α2, ..., αN (step ST2).
  • the first data αi is the first data corresponding to the i-th user U1.
  • the first data αi includes identification information of the i-th user U1 and first location information of the i-th user U1.
  • the second data βj includes second location information of the j-th user U1.
  • the first positioning unit 421 generates a plurality of first data corresponding to a plurality of users U1.
  • the second positioning unit 422 generates a plurality of second data corresponding to a plurality of users U1.
  • the first data αi and the second data βj are continuously generated. Therefore, steps ST1 and ST2 are actually executed in parallel with steps ST3 and ST4.
  • the linking unit 423 links the first data αi and the second data βj according to conditions.
  • the space in which the user U1 stays is divided, in plan view, into a plurality of sections D1 to D16 (16 sections in FIG. 4).
  • the boundaries of each partition D1-D16 are virtual boundaries.
  • the size of each partition D1-D16 is the same.
  • Each of the sections D1 to D16 is included in a range in which at least one of the multiple scanners 3 can receive the beacon signal. Also, each of the sections D1 to D16 is included in the imaging range of at least one imaging device 5 out of the plurality of imaging devices 5 .
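  • As a hedged sketch, the space can be divided into a 4 x 4 grid of equal sections D1 to D16 as in FIG. 4 and any measured position mapped to its section label; the grid origin, section size and the row-major numbering below are assumptions for illustration.
```python
def position_to_section(x, y, origin_x=0.0, origin_y=0.0,
                        section_m=2.0, cols=4, rows=4):
    """Return the section label ('D1'..'D16') containing position (x, y),
    assuming a rows x cols grid of square sections numbered row by row."""
    col = min(max(int((x - origin_x) // section_m), 0), cols - 1)
    row = min(max(int((y - origin_y) // section_m), 0), rows - 1)
    return f"D{row * cols + col + 1}"

# First and second position information are mapped the same way, giving the
# first and second corresponding sections used for the match rate.
print(position_to_section(3.5, 1.2))   # hypothetical coordinates -> 'D2'
```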
  • FIG. 4 illustrates the first location information A1 of the first user U1. More specifically, the trajectory of movement of the first user U1 is obtained based on the beacon signal and shown in FIG. 4 as the first position information A1.
  • FIG. 4 also illustrates the second location information B1 of the (N+1)-th user U1. More specifically, the locus of movement of the (N+1)-th user U1 is obtained based on the image data and shown as the second position information B1 in FIG. 4.
  • Furthermore, FIG. 4 shows the first location information A2 of the second user U1 and the second location information B2 of the (N+2)-th user U1.
  • the linking unit 423 determines whether or not the first user U1 is the same person as the (N+1)-th user U1 or the (N+2)-th user U1. The linking unit 423 also determines whether or not the second user U1 is the same person as the (N+1)-th user U1 or the (N+2)-th user U1. If it determines that two users are the same person, the linking unit 423 links the first data and the second data of the corresponding user U1.
  • the first position information A1 indicates that the first user U1 is in section D5 at times t1, t2, and t3.
  • the first position information A1 indicates that the first user U1 is in section D2 at times t4, t5, and t6.
  • Table 1 shows the relationship between each time point t1 to t6 and the position of each user U1 (sections D1 to D16) corresponding to the first position information A1, A2 and the second position information B1, B2.
  • Let the section corresponding to the first position information A1 of the first user U1, among the plurality of sections D1 to D16, be the first corresponding section of the first user U1.
  • For example, at times t1 to t3 the first position information A1 indicates that the first user U1 is in section D5, so the first corresponding section of the first user U1 at times t1 to t3 is section D5.
  • Likewise, the first corresponding section of the first user U1 at times t4 to t6 is section D2.
  • the linking unit 423 obtains the rate of matching between each first corresponding section and each second corresponding section (step ST5).
  • the linking unit 423 determines the linking between the first data and the second data based on the match rate between the first corresponding section and the second corresponding section. More specifically, the linking unit 423 determines the linking between the first data and the second data based on the match rate between the first corresponding section and the second corresponding section over a predetermined period of time. More specifically still, the linking unit 423 determines the linking between the first data and the second data based on the integrated value of the match rate between the first corresponding section and the second corresponding section over the predetermined period of time.
  • the linking unit 423 assumes that the matching rate is the maximum when the first corresponding section and the second corresponding section match.
  • the linking unit 423 determines that the closer the first corresponding section and the second corresponding section are, the higher the matching rate is.
  • [Table 2] shows an example of a correspondence table for obtaining the matching rate between the first corresponding section and the second corresponding section.
  • If the length of the period in which the first corresponding section matches the second corresponding section is at least 1 second and less than 3 seconds, 10 is added to the match rate. If that length is 3 seconds or more and less than 5 seconds, 20 is added to the match rate. If that length is 5 seconds or longer, 30 is added to the match rate.
  • If the length of the period in which the first corresponding section is a section adjacent to the second corresponding section is at least 1 second and less than 3 seconds, 5 is added to the match rate. If that length is 3 seconds or more and less than 5 seconds, 3 is added to the match rate. If that length is 5 seconds or longer, 1 is added to the match rate.
  • If the length of the period in which the first corresponding section is two sections away from the second corresponding section (the section next to the adjacent section) is at least 1 second and less than 3 seconds, 3 is added to the match rate. If that length is 3 seconds or more and less than 5 seconds, 2 is added to the match rate. If that length is 5 seconds or longer, nothing is added to the match rate.
  • In other cases, nothing is added to the match rate.
  • the linking unit 423 determines the linking between the first data and the second data based on the match rates obtained over a predetermined period in this way.
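  • A sketch of the scoring rules above, applied to two per-second sequences of corresponding sections, follows. Grid adjacency is measured here as the Chebyshev distance between section indices, which is an assumption; [Table 2] itself is not reproduced in this text, so the tier values below follow the description above.
```python
def section_distance(sec_a, sec_b, cols=4):
    """Chebyshev distance between two sections such as 'D5' and 'D2'
    on a cols-wide grid (the grid layout is an assumption)."""
    ia, ib = int(sec_a[1:]) - 1, int(sec_b[1:]) - 1
    ra, ca = divmod(ia, cols)
    rb, cb = divmod(ib, cols)
    return max(abs(ra - rb), abs(ca - cb))

# Points added per relation (key) and per total duration in seconds (tiers),
# following the rules described above.
TIERS = {  # distance: (points for <3 s, for 3-<5 s, for >=5 s)
    0: (10, 20, 30),   # sections match
    1: (5, 3, 1),      # adjacent sections
    2: (3, 2, 0),      # two sections apart
}

def match_rate(first_sections, second_sections):
    """first_sections, second_sections: per-second section labels over the
    same predetermined period (e.g. six entries for t1..t6)."""
    seconds_at = {}
    for a, b in zip(first_sections, second_sections):
        d = section_distance(a, b)
        seconds_at[d] = seconds_at.get(d, 0) + 1
    total = 0
    for d, secs in seconds_at.items():
        if d in TIERS and secs >= 1:
            low, mid, high = TIERS[d]
            total += low if secs < 3 else mid if secs < 5 else high
    return total
```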
  • the linking unit 423 links the target first data selected from the plurality of first data with the second data that satisfies a predetermined condition among the plurality of second data (steps ST6 to ST8).
  • the predetermined condition preferably includes a condition that the matching rate between the first corresponding section and the second corresponding section is the highest.
  • the predetermined conditions further include a condition that the match rate between the first corresponding section and the second corresponding section is greater than a threshold.
  • the threshold is 18, and the linking unit 423 obtains the matching rate based on the relationship between the first corresponding section and the second corresponding section over 6 seconds.
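  • Building on the hypothetical match_rate helper sketched above, the linking decision itself could look like this: for the target first data, pick the second data whose match rate is highest and strictly greater than the threshold (18 in this example).
```python
def decide_link(target_first_sections, candidates, threshold=18):
    """candidates: dict mapping a second-data label (e.g. 'B1', 'B2') to its
    per-second corresponding sections. Returns the chosen label or None."""
    if not candidates:
        return None
    rates = {label: match_rate(target_first_sections, secs)
             for label, secs in candidates.items()}
    best = max(rates, key=rates.get)
    # Link only when the best rate exceeds the threshold (steps ST6-ST8).
    return best if rates[best] > threshold else None
```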
  • Time points 1 second, 2 seconds, 3 seconds, 4 seconds, and 5 seconds after time t1 are defined as time points t2, t3, t4, t5, and t6, respectively.
  • First, the first data including the first position information A1 is set as the target first data, and the first corresponding section corresponding to the first position information A1 (referred to as "A1" in the remainder of this paragraph) is compared with the second corresponding section corresponding to the second position information B1 (referred to as "B1" in the remainder of this paragraph). Since "A1" matches "B1" for the two seconds corresponding to time points t5 and t6, 10 is added to the match rate. Since "A1" is adjacent to "B1" for the two seconds corresponding to time points t3 and t4, 5 is added to the match rate. Since "A1" is two sections away from "B1" for the one second corresponding to time point t1, 3 is added to the match rate. The total match rate is 18, which is not greater than the threshold of 18, so the predetermined condition is not satisfied and the first data including the first position information A1 is not linked with the second data including the second position information B1 (step ST8).
  • Next, the first data including the first position information A1 is set as the target first data, and the first corresponding section corresponding to the first position information A1 (referred to as "A1" in the remainder of this paragraph) is compared with the second corresponding section corresponding to the second position information B2 (referred to as "B2" in the remainder of this paragraph). Since "A1" is adjacent to "B2" for the three seconds corresponding to time points t1, t2, and t6, 3 is added to the match rate. Since "A1" is two sections away from "B2" for the three seconds corresponding to time points t3, t4, and t5, 2 is added to the match rate. The total match rate is 5. Since the total match rate is smaller than the threshold of 18, the predetermined condition is not satisfied, and the first data including the first position information A1 is not linked with the second data including the second position information B2 (step ST8).
  • Next, the first data including the first position information A2 is set as the target first data, and the first corresponding section corresponding to the first position information A2 (referred to as "A2" in the remainder of this paragraph) is compared with the second corresponding section corresponding to the second position information B1 (referred to as "B1" in the remainder of this paragraph). Since "A2" matches "B1" for the one second corresponding to time point t1, 10 is added to the match rate. Since "A2" is adjacent to "B1" for the one second corresponding to time point t4, 5 is added to the match rate. Since "A2" is two sections away from "B1" for the four seconds corresponding to time points t2, t3, t5, and t6, 2 is added to the match rate.
  • The total match rate is 17. Since the total match rate is smaller than the threshold of 18, the predetermined condition is not satisfied, and the first data including the first position information A2 is not linked with the second data including the second position information B1 (step ST8).
  • Finally, the first data including the first position information A2 is set as the target first data, and the first corresponding section corresponding to the first position information A2 (referred to as "A2" in the remainder of this paragraph) is compared with the second corresponding section corresponding to the second position information B2 (referred to as "B2" in the remainder of this paragraph). Since "A2" matches "B2" for the three seconds corresponding to time points t2, t4, and t5, 20 is added to the match rate. Since "A2" is adjacent to "B2" for the three seconds corresponding to time points t1, t3, and t6, 3 is added to the match rate. The total match rate is 23, which is greater than the threshold of 18.
  • Moreover, the match rate between "A2" and "B2" is higher than the match rate between "A2" and "B1" (the second corresponding section corresponding to the second position information B1). That is, among "B1" and "B2", the match rate for "A2" is highest for "B2". Therefore, the predetermined condition is satisfied, and the first data including the first position information A2 is linked with the second data including the second position information B2 (step ST7).
  • As a result, the linking unit 423 links the first data including the first position information A2 with the second data including the second position information B2. In other words, the linking unit 423 determines that the user U1 corresponding to the first data including the first position information A2 and the user U1 corresponding to the second data including the second position information B2 are the same person. Further, the linking unit 423 links the first data including the first position information A1 with neither the second data including the second position information B1 nor the second data including the second position information B2. In other words, the linking unit 423 determines that the user U1 corresponding to the first data including the first position information A1 is different from both the user U1 corresponding to the second data including the second position information B1 and the user U1 corresponding to the second data including the second position information B2.
  • the positioning server 4 outputs the result of linking by the linking unit 423.
  • the first data including the first location information A2 and the second data including the second location information B2 are linked and output.
  • For example, the first location information A2 and the second location information B2 are displayed on a display, together with a label representing the identification information of the user U1. Accordingly, the administrator who sees the display can know that the first location information A2 and the second location information B2 are location information of the same user U1. Of the first position information A2 and the second position information B2, only the second position information B2 may be displayed.
  • the positioning accuracy of the second positioning unit 422 that generates the second position information B2 is higher than the positioning accuracy of the first positioning unit 421 that generates the first position information A2. Therefore, by referring to the second location information B2, the administrator can know the more accurate location of the user U1 than when referring to the first location information A2. Further, since the identification information of the user U1 is linked to the second location information B2 by the linking unit 423, the administrator can identify the user U1 corresponding to the second location information B2.
  • the dimensions of each of the plurality of sections D1 to D16 are determined in advance and stored in the storage unit 43 of the positioning server 4.
  • the dimensions of each of the plurality of sections D1 to D16 are preferably determined based on the positioning accuracy of the first position information.
  • the dimension of each section is determined such that the absolute value of the difference between the maximum error of the first position information and the maximum length of each section is equal to or less than a predetermined value. According to this configuration, compared to the case where the dimension of each section is sufficiently large compared to the positioning accuracy of the first position information, the possibility of accurately linking the first data and the second data increases.
  • the number of sections can be reduced compared to the case where the dimension of each section is sufficiently smaller than the positioning accuracy of the first position information.
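  • As a small, assumption-labelled illustration of this sizing rule: if the maximum error of beacon-based positioning were about 2 m and the predetermined value were 0.5 m, a section side between 1.5 m and 2.5 m would satisfy the constraint (the numbers are hypothetical).
```python
def section_size_ok(max_error_m, max_section_length_m, tolerance_m):
    """Check |max positioning error - max section length| <= tolerance."""
    return abs(max_error_m - max_section_length_m) <= tolerance_m

print(section_size_ok(2.0, 2.0, 0.5))   # True
print(section_size_ok(2.0, 4.0, 0.5))   # False: section far larger than error
```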
  • the imaging device 5 may include a stereo camera as the imaging unit 55.
  • the second positioning unit 422 may calculate the distance from the imaging unit 55 to the user U1 based on the magnitude of parallax between the two lenses of the stereo camera.
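  • For the stereo-camera variant, the distance to the user can be recovered from the disparity with the standard relation distance = focal length x baseline / disparity; the sketch below assumes rectified images and known calibration values, and all numbers are hypothetical.
```python
def disparity_to_distance(disparity_px, focal_length_px, baseline_m):
    """Distance from a rectified stereo camera to the user, given the
    pixel disparity between the two lenses."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

print(disparity_to_distance(disparity_px=40.0,
                            focal_length_px=800.0, baseline_m=0.1))  # 2.0 m
```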
  • the scanner 3 and imaging device 5 are not limited to being installed on the ceiling, and may be installed, for example, on the wall of the building or on a member other than the building.
  • the second positioning unit 422 may identify the user U1 appearing in the image data by performing image recognition processing (for example, face recognition processing) on the image data generated by the imaging device 5.
  • the linking unit 423 may perform the process of linking the first data and the second data only when the user U1 is moving. Thereby, the amount of processing can be reduced.
  • For example, the linking unit 423 may start the process of linking the first data and the second data when it determines, based on the first data, that the user U1 has moved from the section where the user U1 had stayed until then to another section.
  • At least one of the plurality of compartments D1 to D16 may have dimensions different from those of the other compartments.
  • the linking unit 423 may cancel the linking when a cancellation condition is satisfied.
  • the cancellation condition may be a condition stricter than the condition under which the linking unit 423 determines "not to link the first data and the second data" in the case where the first data and the second data were not to be linked in the first place.
  • the cancellation condition may be, for example, a condition that the sum of the match rates obtained based on [Table 2] is smaller than a predetermined threshold.
  • a position detection system 1 in the present disclosure includes a computer system.
  • a computer system is mainly composed of a processor and a memory as hardware. At least part of the function of the position detection system 1 in the present disclosure is realized by the processor executing a program recorded in the memory of the computer system.
  • the program may be recorded in advance in the memory of the computer system, may be provided through an electric communication line, or may be recorded and provided on a non-transitory recording medium readable by the computer system, such as a memory card, optical disc, or hard disk drive.
  • a processor in a computer system consists of one or more electronic circuits, including semiconductor integrated circuits (ICs) or large scale integrated circuits (LSIs).
  • Integrated circuits such as ICs or LSIs are called differently depending on the degree of integration, and include integrated circuits called system LSI, VLSI (Very Large Scale Integration), or ULSI (Ultra Large Scale Integration).
  • An FPGA (Field-Programmable Gate Array) may also be used as the processor.
  • a plurality of electronic circuits may be integrated into one chip, or may be distributed over a plurality of chips.
  • a plurality of chips may be integrated in one device, or may be distributed in a plurality of devices.
  • a computer system includes a microcontroller having one or more processors and one or more memories. Accordingly, the microcontroller also consists of one or more electronic circuits including semiconductor integrated circuits or large scale integrated circuits.
  • It is not essential to the position detection system 1 that a plurality of functions of the position detection system 1 be integrated in one device; the components of the position detection system 1 may be distributed over a plurality of devices. Furthermore, at least part of the functions of the position detection system 1, for example, part of the functions of the positioning server 4, may be realized by the cloud (cloud computing) or the like.
  • At least part of the functions of the position detection system 1 distributed among multiple devices may be integrated into one device.
  • As used in the present disclosure, "greater than or equal to" includes both the case where the two values are equal and the case where one of the two values exceeds the other.
  • The term "greater than or equal to" as used herein may also be synonymous with "greater than", which covers only the case where one of the two values exceeds the other. That is, whether the case of the two values being equal is included can be changed arbitrarily depending on the setting of the reference value or the like, so there is no technical difference between "greater than or equal to" and "greater than".
  • Similarly, "less than or equal to" may be synonymous with "less than".
  • a position detection system (1) includes a first positioning section (421), a second positioning section (422), and a linking section (423).
  • The first positioning unit (421) generates first data based on a beacon signal that includes identification information of a user (U1), is transmitted from a beacon terminal (2) carried by the user (U1), and is received by a scanner (3).
  • the first data includes identification information of the user (U1) and first location information of the user (U1).
  • a second positioning unit (422) generates second data based on image data generated by an imaging device (5) that captures the space in which the user (U1) stays and generates image data.
  • the second data includes second location information of the user (U1).
  • a linking unit (423) links the first data and the second data.
  • The linking unit (423) sets, among a plurality of sections (D1 to D16) obtained by dividing the space where the user (U1) stays, the section corresponding to the first position information as a first corresponding section corresponding to the first data, and sets the section corresponding to the second position information as a second corresponding section corresponding to the second data.
  • a linking unit (423) determines linking between the first data and the second data based on the matching rate between the first corresponding section and the second corresponding section.
  • the first data including the identification information of the user (U1) can be associated with the second data. Therefore, even if the second data does not include the identification information of the user (U1), the second data can be associated with the identification information of the user (U1).
  • the first data includes first position information based on the beacon signal
  • the second data includes second position information based on image data captured by the imaging device (5).
  • the second location information more accurately represents the location of the user (U1) than the first location information.
  • The linking unit (423) determines the linking between the first data and the second data based on the match rate between the first corresponding section and the second corresponding section over a predetermined period.
  • each of the users (U1) possesses a beacon terminal (2).
  • a first positioning unit (421) generates a plurality of first data corresponding to a plurality of users (U1).
  • a second positioning unit (422) generates a plurality of second data corresponding to a plurality of users (U1).
  • The linking unit (423) links target first data selected from among the plurality of first data with second data satisfying a predetermined condition among the plurality of second data.
  • the predetermined condition includes the condition that the matching rate between the first corresponding section and the second corresponding section is the highest.
  • the predetermined condition further includes a condition that the matching rate between the first corresponding section and the second corresponding section is greater than a threshold.
  • The dimension of each of the plurality of sections (D1 to D16) is determined based on the positioning accuracy of the first position information.
  • the possibility of accurately linking the first data and the second data increases compared to the case where the dimension of the section is sufficiently large compared to the positioning accuracy of the first position information.
  • the number of sections can be reduced compared to the case where the dimensions of the sections are sufficiently smaller than the positioning accuracy of the first position information.
  • the position detection system (1) according to the seventh aspect further comprises a scanner (3) in any one of the first to sixth aspects.
  • the position detection system (1) according to the eighth aspect further comprises a beacon terminal (2) in any one of the first to seventh aspects.
  • the position detection system (1) according to the ninth aspect further comprises an imaging device (5) in any one of the first to eighth aspects.
  • Configurations other than the first aspect are not essential configurations for the position detection system (1) and can be omitted as appropriate.
  • the position detection method includes a first positioning process, a second positioning process, and a linking process.
  • In the first positioning step, first data is generated based on a beacon signal including identification information of the user (U1), transmitted from the beacon terminal (2) possessed by the user (U1) and received by the scanner (3).
  • the first data includes identification information of the user (U1) and first location information of the user (U1).
  • second data is generated based on image data generated by an imaging device (5) that captures the space where the user (U1) stays and generates image data.
  • the second data includes second location information of the user (U1).
  • the first data and the second data are linked.
  • In the linking step, among a plurality of sections (D1 to D16) obtained by dividing the space where the user (U1) stays, the section corresponding to the first position information is set as the first corresponding section corresponding to the first data, and the section corresponding to the second position information is set as the second corresponding section corresponding to the second data.
  • linking between the first data and the second data is determined based on the matching rate between the first corresponding section and the second corresponding section.
  • a program according to the eleventh aspect is a program for causing one or more processors to execute the position detection method according to the tenth aspect.
  • The various configurations (including modifications) of the position detection system (1) according to the embodiment are not limited to the above aspects, and can also be embodied as a position detection method, a (computer) program, or a non-transitory recording medium on which the program is recorded.
  • 1 Position detection system, 2 Beacon terminal, 3 Scanner, 5 Imaging device, 421 First positioning unit, 422 Second positioning unit, 423 Linking unit, D1 to D16 Sections, U1 User

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Measurement Of Optical Distance (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The purpose of this disclosure is to identify a user under positioning while carrying out positioning using the positioning accuracy of an imaging device. This position detection system (1) comprises a first positioning unit (421), a second positioning unit (422), and an association unit (423). The first positioning unit (421) uses a beacon signal to generate first data including user identification information and first position information for the user. The second positioning unit (422) uses image data to generate second data including second position information for the user. The association unit (423) makes a section corresponding to the first position information a first corresponding section corresponding to the first data and makes a section corresponding to the second position information a second corresponding section corresponding to the second data. The association unit (423) determines the association between the first data and second data on the basis of a rate of concordance between the first corresponding section and second corresponding section.

Description

POSITION DETECTION SYSTEM, POSITION DETECTION METHOD AND PROGRAM

TECHNICAL FIELD: The present disclosure relates generally to position detection systems, position detection methods and programs, and more particularly to position detection systems, position detection methods and programs utilizing beacon signals and imaging devices.

A position calculation system (position detection system) described in Patent Document 1 includes a plurality of beacons (beacon terminals), a photographing device (imaging device), and an information processing device. The photographing device is connected to one of the plurality of beacons and photographs the space in which a moving object moves. The information processing device receives information on the position of the moving object calculated based on the image of the space captured by the imaging device.

Japanese Patent Application Laid-Open No. 2020-112441

An object of the present disclosure is to provide a position detection system, a position detection method, and a program capable of identifying a user to be positioned while performing positioning using the positioning accuracy of an imaging device.

A position detection system according to an aspect of the present disclosure includes a first positioning section, a second positioning section, and a linking section. The first positioning section generates first data based on a beacon signal that includes identification information of a user, is transmitted from a beacon terminal carried by the user, and is received by a scanner. The first data includes the identification information of the user and first location information of the user. The second positioning section generates second data based on image data generated by an imaging device that captures a space in which the user stays and generates the image data. The second data includes second location information of the user. The linking section links the first data and the second data. The linking section sets, among a plurality of sections obtained by dividing the space in which the user stays, the section corresponding to the first position information as a first corresponding section corresponding to the first data, and sets the section corresponding to the second position information as a second corresponding section corresponding to the second data. The linking section determines the linking of the first data and the second data based on a match rate between the first corresponding section and the second corresponding section.

A position detection method according to an aspect of the present disclosure includes a first positioning step, a second positioning step, and a linking step. In the first positioning step, first data is generated based on a beacon signal including identification information of a user, transmitted from a beacon terminal owned by the user and received by a scanner. The first data includes the identification information of the user and first location information of the user. In the second positioning step, second data is generated based on image data generated by an imaging device that captures a space in which the user stays and generates the image data. The second data includes second location information of the user. In the linking step, the first data and the second data are linked. In the linking step, among a plurality of sections obtained by dividing the space in which the user stays, the section corresponding to the first position information is set as a first corresponding section corresponding to the first data, and the section corresponding to the second position information is set as a second corresponding section corresponding to the second data. In the linking step, the linking between the first data and the second data is determined based on the match rate between the first corresponding section and the second corresponding section.

A program according to one aspect of the present disclosure is a program for causing one or more processors to execute the position detection method.

FIG. 1 is a block diagram of a position detection system according to one embodiment.
FIG. 2 is a schematic diagram showing an installation state of the position detection system.
FIG. 3 is a flow chart showing a position detection method using the position detection system.
FIG. 4 is an explanatory diagram for explaining the process of obtaining the match rate, which is executed by the position detection system.

The position detection system according to the embodiment will be described below with reference to the drawings. However, the embodiment described below is only one of the various embodiments of the present disclosure. The embodiment described below can be modified in various ways according to design and the like as long as the object of the present disclosure can be achieved. Each drawing described in the following embodiment is a schematic drawing, and the ratios of the sizes and thicknesses of the components in the drawings do not necessarily reflect the actual dimensional ratios.

(Overview)
As shown in FIG. 1, the position detection system 1 (LPS: Local Positioning System) of this embodiment includes a first positioning section 421, a second positioning section 422, and a linking section 423. The first positioning unit 421 generates first data based on the beacon signal including the identification information of the user U1, which is transmitted from the beacon terminal 2 owned by the user U1 (see FIG. 2) and received by the scanner 3. The first data includes identification information of the user U1 and first location information of the user U1. The second positioning unit 422 generates second data based on the image data generated by the imaging device 5 that captures the space where the user U1 stays and generates image data. The second data includes second location information of the user U1. The linking unit 423 links the first data and the second data. The linking unit 423 sets the section corresponding to the first position information, among the plurality of sections D1 to D16 (see FIG. 4) obtained by dividing the space where the user U1 stays, as the first corresponding section corresponding to the first data, and sets the section corresponding to the second position information as the second corresponding section corresponding to the second data. The linking unit 423 determines the linking between the first data and the second data based on the match rate between the first corresponding section and the second corresponding section.

 According to this embodiment, the first data, which includes the identification information of the user U1, can be linked to the second data. Therefore, even when the second data does not include the identification information of the user U1, the second data can be associated with the identification information of the user U1.

 The first data includes the first position information based on the beacon signal, and the second data includes the second position information based on the image data captured by the imaging device 5. In many cases, positioning based on image data is more accurate than positioning based on beacon signals; that is, the second position information usually represents the position of the user U1 more accurately than the first position information does. By associating the second data with the identification information of the user U1, the user U1 who is the positioning target can be identified while highly accurate positioning based on the imaging device is performed.

 (Details)
 (1) Overall Configuration
 Hereinafter, the position detection system 1 of this embodiment will be described in more detail.

 As shown in FIG. 1, the position detection system 1 includes a plurality of beacon terminals 2, a plurality of scanners 3, a positioning server 4, and a plurality of imaging devices 5.

 Each of the beacon terminals 2, the scanners 3, the positioning server 4, and the imaging devices 5 includes a computer system having one or more processors and a memory. At least some of the functions of each of these devices are realized by the processor of the computer system executing a program recorded in the memory of the computer system. The program may be recorded in the memory in advance, may be provided through a telecommunications line such as the Internet, or may be provided by being recorded on a non-transitory recording medium such as a memory card.

 Each beacon terminal 2 is carried by a user U1 (see FIG. 2). Each of the plurality of users U1 who use a facility carries a beacon terminal 2. Identification information is assigned to each beacon terminal 2, and the position detection system 1 distinguishes the beacon terminals 2 from one another based on the identification information. The position detection system 1 measures the position of each of the beacon terminals 2 in the facility, that is, the position of each of the users U1.

 The term "facility" in the present disclosure refers to, for example, an office building, a factory, a commercial complex, a library, an art museum, a museum, an amusement facility, a theme park, a park, an airport, a railway station, a stadium, a hotel, a hospital, or a residence. A "facility" may also be a mobile body such as a ship or a railway vehicle.

 The term "beacon terminal" in the present disclosure refers to a portable terminal carried by the user U1, for example, a communication terminal such as a smartphone. The "beacon terminal" is not limited to a smartphone and may be a tablet terminal or a terminal dedicated to the position detection system 1, such as a tag. The "beacon terminal" may be owned by the user U1 or may be borrowed.

 The scanners 3 are installed in the facility. In this embodiment, the scanners 3 are installed on the ceiling of a building in the facility (see FIG. 2). The scanners 3 receive beacon signals transmitted from the beacon terminals 2 and measure the received signal strength indication (RSSI) of each beacon signal.

 The positioning server 4 obtains the distance between each scanner 3 and a beacon terminal 2 based on the received signal strength of the beacon signal at each scanner 3. The positioning server 4 then obtains the position (first position information) of the beacon terminal 2 based on these distances. As an example, the positioning server 4 obtains the position of the beacon terminal 2 by performing three-point positioning (trilateration) or the like using the position information of each scanner 3 and the distance between each scanner 3 and the beacon terminal 2.
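 How the distance is derived from the RSSI and how the position is solved are not specified beyond "three-point positioning or the like". The following is a minimal sketch under common assumptions: a log-distance path-loss model for the RSSI-to-distance conversion (the reference power at 1 m and the path-loss exponent are illustrative, environment-dependent values) and a linearized least-squares trilateration; in practice these parameters would be calibrated for the site.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    # Log-distance path-loss model: tx_power_dbm is the RSSI expected at 1 m.
    # Both parameter values are environment-dependent assumptions.
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

def trilaterate(scanner_xy, distances):
    # Least-squares position estimate from three or more scanner positions
    # and their estimated distances to the beacon terminal.
    p = np.asarray(scanner_xy, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtract the first circle equation from the others to get a linear system.
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos  # estimated (x, y), used as the first position information

# Three ceiling scanners at known positions and their measured RSSI values.
scanners = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
rssi = [-65.0, -72.0, -70.0]
print(trilaterate(scanners, [rssi_to_distance(r) for r in rssi]))
```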

 The imaging devices 5 are installed in the facility. In this embodiment, the imaging devices 5 are installed on the ceiling of a building in the facility (see FIG. 2). The positioning server 4 obtains the position (second position information) of each user U1 by processing the images captured by the imaging devices 5.

 (2) Beacon Terminal
 As shown in FIG. 1, the beacon terminal 2 includes a communication unit 21, a processing unit 22, and a storage unit 23.

 The communication unit 21 includes a communication interface device. The communication unit 21 is capable of communicating with a communication unit 31 of the scanner 3. In the present disclosure, "capable of communicating" means that signals can be exchanged directly, or indirectly via a network, a repeater, or the like, by an appropriate wired or wireless communication method. The communication method between the communication unit 21 and the communication unit 31 is, for example, Bluetooth (registered trademark) Low Energy or WiFi (registered trademark).

 The communication unit 21 transmits a beacon signal. The beacon signal includes the identification information of the user U1 registered in the beacon terminal 2. Since the beacon terminal 2 is a terminal specific to the user U1, the identification information of the beacon terminal 2 may be used as the identification information of the user U1.

 The processing unit 22 is mainly composed of a computer system. The processing unit 22 performs overall control of the beacon terminal 2. The processing unit 22 also controls the communication unit 21 and causes the communication unit 21 to transmit the beacon signal.

 The storage unit 23 stores information about the beacon terminal 2. The storage unit 23 stores the identification information of the user U1 who carries the beacon terminal 2.

 (3) Scanner
 As shown in FIG. 1, the scanner 3 includes the communication unit 31, a processing unit 32, and a storage unit 33.

 The communication unit 31 includes a communication interface device. The communication unit 31 functions as a first communication unit 311 and as a second communication unit 312. The first communication unit 311 is capable of communicating with the communication unit 21 of the beacon terminal 2. The second communication unit 312 is capable of communicating with a communication unit 41 of the positioning server 4.

 The processing unit 32 is mainly composed of a computer system. The processing unit 32 performs overall control of the scanner 3. The processing unit 32 also measures the received signal strength of the beacon signal received by the communication unit 31.

 The storage unit 33 stores information about the scanner 3. The storage unit 33 stores identification information of the scanner 3.

 (4) Imaging Device
 As shown in FIG. 1, the imaging device 5 includes a communication unit 51, a processing unit 52, a storage unit 53, and an imaging unit 55.

 The communication unit 51 includes a communication interface device. The communication unit 51 is capable of communicating with the communication unit 41 of the positioning server 4.

 The processing unit 52 is mainly composed of a computer system. The processing unit 52 performs overall control of the imaging device 5.

 The storage unit 53 stores information about the imaging device 5. The storage unit 53 stores identification information of the imaging device 5.

 The imaging unit 55 captures images of the space in which the user U1 stays. The imaging unit 55 includes a two-dimensional image sensor such as a CCD (Charge Coupled Devices) image sensor or a CMOS (Complementary Metal-Oxide Semiconductor) image sensor.

 As shown in FIG. 2, the imaging device 5 is installed on the ceiling of the building. The imaging device 5 captures images of the space below it, so the captured images show the user U1 from above. This makes the face of the user U1 difficult to recognize, which helps protect the privacy of the user U1.

 (5) Positioning Server
 As shown in FIG. 1, the positioning server 4 includes the communication unit 41, a processing unit 42, and a storage unit 43.

 The communication unit 41 includes a communication interface device. The communication unit 41 is capable of communicating with the communication unit 31 of each scanner 3 and with the communication unit 51 of each imaging device 5.

 The processing unit 42 is mainly composed of a computer system. The processing unit 42 functions as the first positioning unit 421, the second positioning unit 422, and the linking unit 423.

 The storage unit 43 stores the identification information and position information of each scanner 3. The storage unit 43 also stores the identification information and position information of each imaging device 5.

 The positioning server 4 acquires received signal information from each scanner 3. The received signal information includes information on the received signal strength of the beacon signal, as well as the identification information of the user U1 and the identification information of the scanner 3. The first positioning unit 421 obtains the distance between each scanner 3 and the beacon terminal 2 based on the received signal strength of the beacon signal at each scanner 3, and obtains the position of the beacon terminal 2, that is, the first position information of the user U1, based on these distances.

 Since the received signal information includes the identification information of the user U1, the first positioning unit 421 can distinguish the users U1 from one another and measure the positions of the users U1 individually. The first positioning unit 421 measures the position of each user U1 at a predetermined time interval (for example, every second).

 The positioning server 4 acquires image data from each imaging device 5. When a user U1 appears in the image data, the second positioning unit 422 obtains the second position information of the user U1 based on the image data. More specifically, the positioning server 4 obtains the second position information of the user U1 based on the position and orientation of the imaging device 5 and the position of the user U1 within the image data.
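 The conversion from a position in the image to the second position information is described only as using the position and orientation of the imaging device. The sketch below shows one common way to do this under assumed conditions: a ceiling camera looking straight down, modeled as a pinhole camera with known intrinsic parameters (fx, fy, cx, cy), its image axes aligned with the floor coordinate axes, and a separate person detector (not shown) that supplies the pixel location of the user.

```python
def pixel_to_floor(u, v, cam_x, cam_y, height_m, fx, fy, cx, cy):
    # Inverted pinhole projection for a camera looking straight down:
    # a pixel offset from the principal point (cx, cy) maps to a metric
    # offset on a horizontal plane height_m below the camera.
    x = cam_x + (u - cx) * height_m / fx
    y = cam_y + (v - cy) * height_m / fy
    return x, y  # candidate second position information (metres)

# A person detected at pixel (800, 450) by a camera mounted at (5.0, 3.0),
# with the detected point assumed to lie 2.7 m below the camera.
print(pixel_to_floor(800, 450, cam_x=5.0, cam_y=3.0, height_m=2.7,
                     fx=900.0, fy=900.0, cx=640.0, cy=360.0))
```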

 The second positioning unit 422 does not perform processing to identify who the user U1 is. However, the second positioning unit 422 assigns a management ID to each user U1 to distinguish them. Furthermore, the second positioning unit 422 tracks each user U1 by referring to temporally consecutive image data. In this way, the second positioning unit 422 distinguishes whether a user U1 whose position was measured at one point in time and a user U1 whose position was measured at another point in time are the same person or different persons.

 The positioning accuracy of the second positioning unit 422 is higher than that of the first positioning unit 421. In addition, the time interval at which the second positioning unit 422 generates the second position information is shorter than the time interval at which the first positioning unit 421 generates the first position information.

 (6) Position Detection Method
 The position detection method of this embodiment can be realized by the position detection system 1. As shown in FIG. 3, the position detection method includes a first positioning step (steps ST1, ST2), a second positioning step (steps ST3, ST4), and a linking step (steps ST5 to ST8). In the first positioning step, first data is generated based on a beacon signal that includes the identification information of the user U1 and that is transmitted from the beacon terminal 2 carried by the user U1 and received by the scanner 3. The first data includes the identification information of the user U1 and the first position information of the user U1. In the second positioning step, second data is generated based on the image data generated by the imaging device 5, which captures images of the space in which the user U1 stays. The second data includes the second position information of the user U1. In the linking step, the first data and the second data are linked. In the linking step, among the plurality of sections obtained by dividing the space in which the user U1 stays, the section corresponding to the first position information is defined as the first corresponding section corresponding to the first data, and the section corresponding to the second position information is defined as the second corresponding section corresponding to the second data. In the linking step, the linking of the first data to the second data is determined based on the match rate between the first corresponding section and the second corresponding section.

 A program according to one aspect is a program for causing one or more processors to execute the position detection method described above. The program may be recorded on a computer-readable non-transitory recording medium.

 The position detection method will now be described in more detail with reference to FIGS. 3 and 4. The flowchart shown in FIG. 3 is merely an example of the position detection method according to the present disclosure; the order of the processing may be changed as appropriate, and processing may be added or omitted as appropriate.

 There are a plurality of users U1, and each user U1 carries a beacon terminal 2. Let N be the number of users U1 whose positions are measured based on beacon signals, using the beacon terminals 2 and the scanners 3, during a certain period. Let M be the number of users U1 whose positions are measured based on image data, using the imaging devices 5, during the same period. N and M may or may not be equal.

 The first positioning unit 421 performs positioning based on the beacon signals (step ST1). The first positioning unit 421 thereby generates first data α1, α2, ..., αN (step ST2). For i = 1, 2, 3, ..., N, the first data αi is the first data corresponding to the i-th user U1. The first data αi includes the identification information of the i-th user U1 and the first position information of the i-th user U1.

 The second positioning unit 422 performs positioning based on the image data (step ST3). The second positioning unit 422 thereby generates second data βN+1, βN+2, ..., βN+M (step ST4). For j = N+1, N+2, N+3, ..., N+M, the second data βj is the second data corresponding to the j-th user U1. The second data βj includes the second position information of the j-th user U1.

 In this way, the first positioning unit 421 generates a plurality of pieces of first data corresponding to the plurality of users U1, and the second positioning unit 422 generates a plurality of pieces of second data corresponding to the plurality of users U1.

 The first data αi and the second data βj are generated continuously. Therefore, steps ST1 and ST2 are in practice executed in parallel with steps ST3 and ST4.

 The linking unit 423 links the first data αi and the second data βj according to conditions. As shown in FIG. 4, the space in which the user U1 stays is divided, as viewed from above, into a plurality of sections D1 to D16 (sixteen in FIG. 4). The boundaries between the sections D1 to D16 are virtual boundaries, and the sections D1 to D16 are all the same size. Each of the sections D1 to D16 is within the range in which at least one of the scanners 3 can receive beacon signals, and within the imaging range of at least one of the imaging devices 5.
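 One simple way to realize the assignment of a position to one of the sections is to treat the sections as cells of a regular grid. In the sketch below, the 4 x 4 layout, the 2.5 m cell size, the grid origin, and the numbering order are assumptions for illustration; only the idea of mapping both the first and the second position information onto the same set of sections comes from the description above.

```python
import math

def section_index(x, y, origin_x=0.0, origin_y=0.0,
                  cell_w=2.5, cell_h=2.5, cols=4, rows=4):
    # Map a floor position (x, y) to a 1-based section index on a regular
    # cols x rows grid whose lower-left corner is at (origin_x, origin_y).
    col = math.floor((x - origin_x) / cell_w)
    row = math.floor((y - origin_y) / cell_h)
    if 0 <= col < cols and 0 <= row < rows:
        return row * cols + col + 1  # e.g. 5 corresponds to section D5
    return None  # outside the monitored space

# Both kinds of position information are mapped onto the same sections.
print(section_index(7.3, 1.1))   # -> 3, i.e. section D3
print(section_index(7.3, 6.0))   # -> 11, i.e. section D11
```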

 FIG. 4 shows first position information A1 of the first user U1. More specifically, the movement trajectory of the first user U1 is obtained based on the beacon signals and is shown in FIG. 4 as the first position information A1.

 FIG. 4 also shows second position information B1 of the (N+1)-th user U1. More specifically, the movement trajectory of the (N+1)-th user U1 is obtained based on the image data and is shown in FIG. 4 as the second position information B1.

 Similarly, FIG. 4 shows first position information A2 of the second user U1 and second position information B2 of the (N+2)-th user U1.

 In the following, N = M = 2. That is, the description below focuses on the first position information A1 of the first user U1, the first position information A2 of the second user U1, the second position information B1 of the third user U1, and the second position information B2 of the fourth user U1.

 The linking unit 423 determines whether the first user U1 is the same person as the third user U1 or the fourth user U1. The linking unit 423 also determines whether the second user U1 is the same person as the third user U1 or the fourth user U1. When it determines that two users are the same person, the linking unit 423 links the first data and the second data of that user U1.

 In FIG. 4, on each of the curves representing the first position information A1 and A2 and the second position information B1 and B2, the position of the user U1 at each of the time points t1 to t6 is annotated with the corresponding symbol t1 to t6.

 The first position information A1 indicates that the first user U1 is in section D5 at time points t1, t2, and t3, and in section D2 at time points t4, t5, and t6.

 [Table 1] shows the relationship between the time points t1 to t6 and the sections (D1 to D16) in which each user U1 is located according to the first position information A1 and A2 and the second position information B1 and B2.

[Table 1]

 Among the sections D1 to D16, the section corresponding to the first position information A1 of the first user U1 is defined as the first corresponding section of the first user U1. For example, at time points t1 to t3, the first position information A1 indicates that the first user U1 is in section D5, so the first corresponding section of the first user U1 at time points t1 to t3 is section D5. The first corresponding section of the first user U1 at time points t4 to t6 is section D2.

 The section corresponding to the first position information A2 of the second user U1 is defined as the first corresponding section of the second user U1. The section corresponding to the second position information B1 of the third user U1 is defined as the second corresponding section of the third user U1. The section corresponding to the second position information B2 of the fourth user U1 is defined as the second corresponding section of the fourth user U1.

 The linking unit 423 obtains the match rate between each first corresponding section and each second corresponding section (step ST5). The linking unit 423 determines the linking of the first data to the second data based on the match rate between the first corresponding section and the second corresponding section. More specifically, the linking unit 423 determines the linking based on the match rate between the first corresponding section and the second corresponding section over a predetermined period of time; in more detail, it determines the linking based on the accumulated (integrated) value of the match rate over the predetermined period.

 The linking unit 423 treats the match rate as largest when the first corresponding section and the second corresponding section coincide, and judges the match rate to be larger the closer the first corresponding section and the second corresponding section are to each other. [Table 2] shows an example of a correspondence table used to obtain the match rate between the first corresponding section and the second corresponding section.

[Table 2]
 Relationship between the two corresponding sections | 1 s to under 3 s | 3 s to under 5 s | 5 s or more
 Same section                                        | +10              | +20              | +30
 Adjacent sections                                   | +5               | +3               | +1
 Two sections apart                                  | +3               | +2               | +0
 Farther apart                                       | +0               | +0               | +0

 As an example, if the total length of time during which the first corresponding section and the second corresponding section coincide is at least 1 second and less than 3 seconds, 10 is added to the match rate. If that length of time is at least 3 seconds and less than 5 seconds, 20 is added. If it is 5 seconds or more, 30 is added.

 If the length of time during which the first corresponding section is adjacent to the second corresponding section is at least 1 second and less than 3 seconds, 5 is added to the match rate. If that length of time is at least 3 seconds and less than 5 seconds, 3 is added. If it is 5 seconds or more, 1 is added.

 If the length of time during which the first corresponding section is two sections away from the second corresponding section is at least 1 second and less than 3 seconds, 3 is added to the match rate. If that length of time is at least 3 seconds and less than 5 seconds, 2 is added. If it is 5 seconds or more, nothing is added.

 If the first corresponding section and the second corresponding section do not coincide, are not adjacent, and are not two sections apart, nothing is added to the match rate.
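 One way to implement this scoring is to record, for every 1-second sample in the evaluation window, how far apart the first and second corresponding sections are on the grid, total the time spent at each relationship, and then add the points given by the rules above. The sketch below does this; the 1-second sampling, the 4 x 4 grid, and the treatment of diagonal neighbours as "adjacent" (Chebyshev distance 1) are assumptions, since the description does not define adjacency precisely.

```python
from collections import Counter

# Points added per total time (seconds) spent at each section relationship,
# following the rules above: 0 = same section, 1 = adjacent section,
# 2 = two sections apart; anything farther scores nothing.
SCORE_TABLE = {
    0: [(5, 30), (3, 20), (1, 10)],
    1: [(5, 1), (3, 3), (1, 5)],
    2: [(5, 0), (3, 2), (1, 3)],
}

def grid_distance(sec_a, sec_b, cols=4):
    # Chebyshev distance between two 1-based section indices on a 4 x 4 grid.
    ra, ca = divmod(sec_a - 1, cols)
    rb, cb = divmod(sec_b - 1, cols)
    return max(abs(ra - rb), abs(ca - cb))

def match_rate(relationships):
    # `relationships` holds, for each 1-second sample of the window, the grid
    # distance between the first and the second corresponding section.
    total = 0
    for dist, seconds in Counter(relationships).items():
        for min_seconds, points in SCORE_TABLE.get(dist, []):
            if seconds >= min_seconds:
                total += points
                break
    return total

def match_rate_from_sections(first_sections, second_sections):
    # Convenience wrapper when both section sequences are available.
    return match_rate([grid_distance(a, b)
                       for a, b in zip(first_sections, second_sections)])
```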

 The linking unit 423 determines the linking of the first data to the second data based on the match rates obtained over the predetermined period in this way.

 The linking unit 423 links target first data, selected from the plurality of pieces of first data, to the second data that satisfies a predetermined condition among the plurality of pieces of second data (steps ST6 to ST8). The predetermined condition preferably includes the condition that the match rate between the first corresponding section and the second corresponding section is the largest. The predetermined condition preferably further includes the condition that the match rate between the first corresponding section and the second corresponding section is larger than a threshold. In the following description, the threshold is 18, and the linking unit 423 obtains the match rate based on the relationship between the first corresponding section and the second corresponding section over a period of 6 seconds. Time points t2, t3, t4, t5, and t6 are the points 1, 2, 3, 4, and 5 seconds after time point t1, respectively.

 First, the first data including the first position information A1 is taken as the target first data, and the first corresponding section for A1 (written "A1" in the rest of this paragraph) is compared with the second corresponding section for the second position information B1 (written "B1"). For the 2 seconds corresponding to time points t5 and t6, "A1" coincides with "B1", so 10 is added to the match rate. For the 2 seconds corresponding to time points t3 and t4, "A1" is adjacent to "B1", so 5 is added. For the 1 second corresponding to time point t1, "A1" is two sections away from "B1", so 3 is added. At time point t2, "A1" is farther from "B1" than two sections, so nothing is added. The total match rate is therefore 18. Since this total (the integrated value) is equal to the threshold of 18 rather than greater than it, the predetermined condition is not satisfied, and the first data including the first position information A1 is not linked to the second data including the second position information B1 (step ST8).

 Next, with the first data including the first position information A1 as the target first data, the first corresponding section for A1 ("A1") is compared with the second corresponding section for the second position information B2 ("B2"). For the 3 seconds corresponding to time points t1, t2, and t6, "A1" is adjacent to "B2", so 3 is added to the match rate. For the 3 seconds corresponding to time points t3, t4, and t5, "A1" is two sections away from "B2", so 2 is added. The total match rate is 5. Since this total is smaller than the threshold of 18, the predetermined condition is not satisfied, and the first data including the first position information A1 is not linked to the second data including the second position information B2 (step ST8).

 Next, with the first data including the first position information A2 as the target first data, the first corresponding section for A2 ("A2") is compared with the second corresponding section for the second position information B1 ("B1"). For the 1 second corresponding to time point t1, "A2" coincides with "B1", so 10 is added to the match rate. For the 1 second corresponding to time point t4, "A2" is adjacent to "B1", so 5 is added. For the 4 seconds corresponding to time points t2, t3, t5, and t6, "A2" is two sections away from "B1", so 2 is added. The total match rate is 17. Since this total is smaller than the threshold of 18, the predetermined condition is not satisfied, and the first data including the first position information A2 is not linked to the second data including the second position information B1 (step ST8).

 Next, with the first data including the first position information A2 as the target first data, the first corresponding section for A2 ("A2") is compared with the second corresponding section for the second position information B2 ("B2"). For the 3 seconds corresponding to time points t2, t4, and t5, "A2" coincides with "B2", so 20 is added to the match rate. For the 3 seconds corresponding to time points t1, t3, and t6, "A2" is adjacent to "B2", so 3 is added. The total match rate is 23. This total is larger than the threshold of 18. Furthermore, the match rate between "A2" and "B2" is larger than the match rate between "A2" and "B1" (the second corresponding section for the second position information B1); that is, of "B1" and "B2", "B2" has the largest match rate with "A2". The predetermined condition is therefore satisfied, and the first data including the first position information A2 is linked to the second data including the second position information B2 (step ST7).
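 As a check, the snippet below, which reuses the match_rate sketch shown earlier, reproduces the four totals from this walk-through (18, 5, 17, and 23) and then applies the selection rule (largest match rate, and larger than the threshold of 18). Only the per-second section relationships stated in the text are needed; the underlying section indices are not.

```python
THRESHOLD = 18

# Grid distance between the first and second corresponding sections at
# t1..t6 (0 = same, 1 = adjacent, 2 = two apart, 3 = farther), taken
# directly from the walk-through above.
relationships = {
    ("A1", "B1"): [2, 3, 1, 1, 0, 0],
    ("A1", "B2"): [1, 1, 2, 2, 2, 1],
    ("A2", "B1"): [0, 2, 2, 1, 2, 2],
    ("A2", "B2"): [1, 0, 1, 0, 0, 1],
}
rates = {pair: match_rate(seq) for pair, seq in relationships.items()}
print(rates)  # totals: 18, 5, 17 and 23

# Link each first data item to the second data item with the largest match
# rate, but only when that rate exceeds the threshold.
for first in ("A1", "A2"):
    best = max(("B1", "B2"), key=lambda second: rates[(first, second)])
    if rates[(first, best)] > THRESHOLD:
        print(first, "is linked to", best)   # A2 is linked to B2
    else:
        print(first, "is not linked")        # A1 is not linked
```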

 As described above, the linking unit 423 links the first data including the first position information A2 to the second data including the second position information B2. In other words, the linking unit 423 determines that the user U1 corresponding to the first data including the first position information A2 and the user U1 corresponding to the second data including the second position information B2 are the same person. The linking unit 423 links the first data including the first position information A1 to neither the second data including the second position information B1 nor the second data including the second position information B2. In other words, the linking unit 423 determines that the user U1 corresponding to the first data including the first position information A1 is different from both the user U1 corresponding to the second data including the second position information B1 and the user U1 corresponding to the second data including the second position information B2.

 The positioning server 4 outputs the result of the linking performed by the linking unit 423. In the above example, the first data including the first position information A2 and the second data including the second position information B2 are output in linked form. For example, the first position information A2 and the second position information B2 are displayed on a display as in FIG. 4, and a label representing the identification information of the user U1 contained in the first data is attached to both the first position information A2 and the second position information B2. An administrator viewing the display can thus see that the first position information A2 and the second position information B2 are position information of the same user U1. Alternatively, of the first position information A2 and the second position information B2, only the second position information B2 may be displayed.

 The positioning accuracy of the second positioning unit 422, which generates the second position information B2, is higher than that of the first positioning unit 421, which generates the first position information A2. Therefore, by referring to the second position information B2, the administrator can know the position of the user U1 more accurately than by referring to the first position information A2. In addition, since the linking unit 423 links the identification information of the user U1 to the second position information B2, the administrator can identify the user U1 corresponding to the second position information B2.

 The dimensions of each of the sections D1 to D16 are determined in advance and stored in the storage unit 43 of the positioning server 4. The dimensions of each section are preferably determined based on the positioning accuracy of the first position information. For example, the dimensions of each section are preferably determined so that the absolute value of the difference between the maximum error of the first position information and the maximum length of each section is not more than a predetermined value. With this configuration, compared with the case where each section is much larger than the positioning accuracy of the first position information, the first data and the second data are more likely to be linked correctly. In addition, compared with the case where each section is much smaller than the positioning accuracy of the first position information, the number of sections can be reduced.
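 A minimal sketch of this sizing rule follows. Treating the diagonal of a square cell as its "maximum length" and using 0.5 m as the predetermined value are assumptions made only for illustration.

```python
import math

def cell_size_ok(max_error_m, cell_w_m, cell_h_m, tolerance_m=0.5):
    # The rule above: the absolute difference between the worst-case error of
    # the first position information and the maximum cell length must not
    # exceed a predetermined value (here 0.5 m, illustrative).
    max_cell_length = math.hypot(cell_w_m, cell_h_m)  # cell diagonal
    return abs(max_error_m - max_cell_length) <= tolerance_m

print(cell_size_ok(max_error_m=3.0, cell_w_m=2.0, cell_h_m=2.0))  # True
print(cell_size_ok(max_error_m=3.0, cell_w_m=5.0, cell_h_m=5.0))  # False
```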

 (Modifications of the Embodiment)
 Modifications of the embodiment are listed below. The following modifications may be combined as appropriate.

 The imaging device 5 may include a stereo camera as the imaging unit 55. The second positioning unit 422 may calculate the distance from the imaging unit 55 to the user U1 based on the magnitude of the parallax between the two lenses of the stereo camera.

 The scanners 3 and the imaging devices 5 are not limited to being installed on a ceiling; for example, they may be installed on a wall of the building or on a member other than the building.

 The second positioning unit 422 may identify the user U1 appearing in the image data by performing image recognition processing (for example, face recognition processing) on the image data generated by the imaging device 5.

 The linking unit 423 may perform the process of linking the first data and the second data only while the user U1 is moving, which reduces the amount of processing. For example, the linking unit 423 may start the process of linking the first data and the second data when it determines, based on the first data, that the user U1 has moved from the section where the user U1 had been staying to another section.

 At least one of the sections D1 to D16 may have dimensions different from those of the other sections.

 After determining the linking of the first data to the second data, the linking unit 423 may cancel the linking when a cancellation condition is satisfied. The cancellation condition may be stricter than the condition under which the linking unit 423 decides not to link the first data and the second data in the first place. The cancellation condition may be, for example, that the total match rate obtained based on [Table 2] above is smaller than a predetermined threshold.

 The position detection system 1 according to the present disclosure includes a computer system. The computer system mainly includes a processor and a memory as hardware. At least some of the functions of the position detection system 1 according to the present disclosure are realized by the processor executing a program recorded in the memory of the computer system. The program may be recorded in the memory of the computer system in advance, may be provided through a telecommunications line, or may be provided by being recorded on a non-transitory recording medium readable by the computer system, such as a memory card, an optical disc, or a hard disk drive. The processor of the computer system is composed of one or more electronic circuits including a semiconductor integrated circuit (IC) or a large-scale integrated circuit (LSI). Integrated circuits such as ICs and LSIs are called by different names depending on the degree of integration, and include integrated circuits called system LSI, VLSI (Very Large Scale Integration), or ULSI (Ultra Large Scale Integration). An FPGA (Field-Programmable Gate Array) programmed after the LSI is manufactured, or a logic device in which the connection relationships inside the LSI or the circuit sections inside the LSI can be reconfigured, can also be used as the processor. The electronic circuits may be integrated on a single chip or distributed over a plurality of chips, and the chips may be integrated in a single device or distributed over a plurality of devices. The computer system referred to here includes a microcontroller having one or more processors and one or more memories; the microcontroller is likewise composed of one or more electronic circuits including a semiconductor integrated circuit or a large-scale integrated circuit.

 It is not essential to the position detection system 1 that its functions be integrated in a single device; the components of the position detection system 1 may be distributed over a plurality of devices. Furthermore, at least some of the functions of the position detection system 1, for example some of the functions of the positioning server 4, may be realized by a cloud (cloud computing) or the like.

 Conversely, at least some of the functions of the position detection system 1 that are distributed over a plurality of devices in the embodiment may be integrated in a single device.

 In comparisons between two values in the present disclosure, the expression "greater than or equal to" covers both the case where the two values are equal and the case where one value exceeds the other. However, this is not limiting, and "greater than or equal to" may be synonymous with "greater than", which covers only the case where one value exceeds the other. Whether or not the case where the two values are equal is included can be changed arbitrarily depending on the setting of the reference value or the like, so there is no technical difference between "greater than or equal to" and "greater than". Similarly, "less than or equal to" may be synonymous with "less than".

 (Summary)
 The following aspects are disclosed by the embodiment and the like described above.

 A position detection system (1) according to a first aspect includes a first positioning unit (421), a second positioning unit (422), and a linking unit (423). The first positioning unit (421) generates first data based on a beacon signal that includes identification information of a user (U1) and that is transmitted from a beacon terminal (2) carried by the user (U1) and received by a scanner (3). The first data includes the identification information of the user (U1) and first position information of the user (U1). The second positioning unit (422) generates second data based on image data generated by an imaging device (5) that captures images of a space in which the user (U1) stays. The second data includes second position information of the user (U1). The linking unit (423) links the first data and the second data. Among a plurality of sections (D1 to D16) obtained by dividing the space in which the user (U1) stays, the linking unit (423) defines the section corresponding to the first position information as a first corresponding section corresponding to the first data, and the section corresponding to the second position information as a second corresponding section corresponding to the second data. The linking unit (423) determines the linking of the first data to the second data based on a match rate between the first corresponding section and the second corresponding section.

 With this configuration, the first data, which includes the identification information of the user (U1), can be linked to the second data. Therefore, even when the second data does not include the identification information of the user (U1), the second data can be associated with the identification information of the user (U1).

 The first data includes the first position information based on the beacon signal, and the second data includes the second position information based on the image data captured by the imaging device (5). In many cases, the second position information represents the position of the user (U1) more accurately than the first position information does. By associating the second data with the identification information of the user (U1), the user (U1) who is the positioning target can be identified while positioning based on the accuracy of the imaging device is performed.

 In a position detection system (1) according to a second aspect, referring to the first aspect, the linking unit (423) determines the linking of the first data to the second data based on the match rate between the first corresponding section and the second corresponding section over a predetermined period of time.

 With this configuration, the accuracy of the linking can be improved.

 In a position detection system (1) according to a third aspect, referring to the first or second aspect, there are a plurality of users (U1), each of whom carries a beacon terminal (2). The first positioning unit (421) generates a plurality of pieces of first data corresponding to the plurality of users (U1). The second positioning unit (422) generates a plurality of pieces of second data corresponding to the plurality of users (U1). The linking unit (423) links target first data, selected from the plurality of pieces of first data, to second data that satisfies a predetermined condition among the plurality of pieces of second data.

 With this configuration, the linking can be realized even when there are a plurality of users (U1).

 In a position detection system (1) according to a fourth aspect, referring to the third aspect, the predetermined condition includes a condition that the match rate between the first corresponding section and the second corresponding section is the largest.

 With this configuration, the user (U1) corresponding to the target first data and the user (U1) corresponding to the second data linked to the target first data are more likely to be the same person.

 In a position detection system (1) according to a fifth aspect, referring to the fourth aspect, the predetermined condition further includes a condition that the match rate between the first corresponding section and the second corresponding section is larger than a threshold.

 With this configuration, the user (U1) corresponding to the target first data and the user (U1) corresponding to the second data linked to the target first data are more likely to be the same person.

 In a position detection system (1) according to a sixth aspect, referring to any one of the first to fifth aspects, the dimensions of each of the plurality of sections (D1 to D16) are determined based on the positioning accuracy of the first position information.

 With this configuration, compared with the case where the sections are much larger than the positioning accuracy of the first position information, the first data and the second data are more likely to be linked correctly. In addition, compared with the case where the sections are much smaller than the positioning accuracy of the first position information, the number of sections can be reduced.

 A position detection system (1) according to a seventh aspect, referring to any one of the first to sixth aspects, further includes the scanner (3).

 A position detection system (1) according to an eighth aspect, referring to any one of the first to seventh aspects, further includes the beacon terminal (2).

 A position detection system (1) according to a ninth aspect, referring to any one of the first to eighth aspects, further includes the imaging device (5).

 The configurations other than that of the first aspect are not essential to the position detection system (1) and may be omitted as appropriate.

 また、第10の態様に係る位置検知方法は、第1測位工程と、第2測位工程と、紐付工程と、を含む。第1測位工程では、ユーザ(U1)が所持するビーコン端末(2)から送信されスキャナ(3)で受信された、ユーザ(U1)の識別情報を含むビーコン信号に基づいて、第1データを生成する。第1データは、ユーザ(U1)の識別情報とユーザ(U1)の第1位置情報とを含む。第2測位工程では、ユーザ(U1)が滞在する空間を撮影し画像データを生成する撮像装置(5)で生成された画像データに基づいて、第2データを生成する。第2データは、ユーザ(U1)の第2位置情報を含む。紐付工程では、第1データと第2データとを紐付ける。紐付工程では、ユーザ(U1)が滞在する空間を分割した複数の区画(D1~D16)のうち、第1位置情報に対応する区画を第1データに対応する第1対応区画とし、第2位置情報に対応する区画を第2データに対応する第2対応区画とする。紐付工程では、第1対応区画と第2対応区画との一致率に基づいて第1データと第2データとの紐付けを決定する。 Also, the position detection method according to the tenth aspect includes a first positioning step, a second positioning step, and a linking step. In the first positioning step, first data is generated based on a beacon signal, including identification information of the user (U1), that is transmitted from a beacon terminal (2) possessed by the user (U1) and received by a scanner (3). The first data includes the identification information of the user (U1) and first position information of the user (U1). In the second positioning step, second data is generated based on image data generated by an imaging device (5) that captures the space where the user (U1) stays and generates the image data. The second data includes second position information of the user (U1). In the linking step, the first data and the second data are linked. In the linking step, among a plurality of sections (D1 to D16) obtained by dividing the space where the user (U1) stays, the section corresponding to the first position information is set as a first corresponding section corresponding to the first data, and the section corresponding to the second position information is set as a second corresponding section corresponding to the second data. In the linking step, the linking of the first data and the second data is determined based on the matching rate between the first corresponding section and the second corresponding section.

 上記の構成によれば、撮像装置の測位精度を用いた測位を行いつつ、測位対象のユーザ(U1)を識別することができる。 According to the above configuration, the user (U1) to be positioned can be identified while positioning is performed with the positioning accuracy of the imaging device.
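
 To make the tenth aspect concrete, the sketch below walks through the three steps end to end: beacon-derived first data that carries the user's identification information, image-derived second data that carries only anonymous tracks, and a linking step that attaches each user ID to the camera track whose corresponding sections match best. The data classes, the threshold, and the idea of pre-computing one section number per time step are assumptions made for illustration, not the publication's implementation.

    from dataclasses import dataclass
    from typing import Dict, List, Optional

    @dataclass
    class FirstData:
        # Beacon-based positioning result: coarse, but carries the user ID.
        user_id: str
        sections: List[int]    # first corresponding section per time step

    @dataclass
    class SecondData:
        # Image-based positioning result: precise, but anonymous.
        track_id: str
        sections: List[int]    # second corresponding section per time step

    def linking_step(first_list: List[FirstData],
                     second_list: List[SecondData],
                     threshold: float = 0.6) -> Dict[str, Optional[str]]:
        # For each user's first data, pick the camera track with the highest
        # section matching rate above the threshold, so the identified user
        # inherits the imaging device's positioning accuracy.
        links: Dict[str, Optional[str]] = {}
        for first in first_list:
            best_track, best_rate = None, 0.0
            for second in second_list:
                n = min(len(first.sections), len(second.sections))
                rate = (sum(a == b for a, b in zip(first.sections,
                                                   second.sections)) / n) if n else 0.0
                if rate > best_rate:
                    best_track, best_rate = second.track_id, rate
            links[first.user_id] = best_track if best_rate > threshold else None
        return links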

 また、第11の態様に係るプログラムは、第10の態様に係る位置検知方法を、1以上のプロセッサに実行させるためのプログラムである。 A program according to the eleventh aspect is a program for causing one or more processors to execute the position detection method according to the tenth aspect.

 上記の構成によれば、撮像装置の測位精度を用いた測位を行いつつ、測位対象のユーザ(U1)を識別することができる。 According to the above configuration, the user (U1) to be positioned can be identified while positioning is performed with the positioning accuracy of the imaging device.

 上記態様に限らず、実施形態に係る位置検知システム(1)の種々の構成(変形例を含む)は、位置検知方法、(コンピュータ)プログラム、又はプログラムを記録した非一時的記録媒体にて具現化可能である。 Not limited to the above aspects, the various configurations (including modifications) of the position detection system (1) according to the embodiment can be embodied as a position detection method, a (computer) program, or a non-transitory recording medium on which the program is recorded.

1 位置検知システム
2 ビーコン端末
3 スキャナ
5 撮像装置
421 第1測位部
422 第2測位部
423 紐付部
D1~D16 区画
U1 ユーザ
1 Position detection system, 2 Beacon terminal, 3 Scanner, 5 Imaging device, 421 First positioning unit, 422 Second positioning unit, 423 Linking unit, D1 to D16 Sections, U1 User

Claims (11)

 ユーザが所持するビーコン端末から送信されスキャナで受信された、前記ユーザの識別情報を含むビーコン信号に基づいて、前記ユーザの前記識別情報と前記ユーザの第1位置情報とを含む第1データを生成する第1測位部と、
 前記ユーザが滞在する空間を撮影し画像データを生成する撮像装置で生成された前記画像データに基づいて、前記ユーザの第2位置情報を含む第2データを生成する第2測位部と、
 前記第1データと前記第2データとを紐付ける紐付部と、を備え、
 前記紐付部は、前記ユーザが滞在する前記空間を分割した複数の区画のうち、前記第1位置情報に対応する区画を前記第1データに対応する第1対応区画とし、前記第2位置情報に対応する区画を前記第2データに対応する第2対応区画とし、
 前記紐付部は、前記第1対応区画と前記第2対応区画との一致率に基づいて前記第1データと前記第2データとの紐付けを決定する、
 位置検知システム。
a first positioning unit that generates, based on a beacon signal including identification information of a user, the beacon signal being transmitted from a beacon terminal possessed by the user and received by a scanner, first data including the identification information of the user and first position information of the user;
a second positioning unit that generates second data including second position information of the user based on the image data generated by an imaging device that captures a space in which the user stays and generates image data;
a linking unit that links the first data and the second data, wherein
the linking unit sets, among a plurality of sections obtained by dividing the space in which the user stays, a section corresponding to the first position information as a first corresponding section corresponding to the first data, and sets a section corresponding to the second position information as a second corresponding section corresponding to the second data, and
the linking unit determines the linking of the first data and the second data based on a matching rate between the first corresponding section and the second corresponding section.
Position detection system.
 前記紐付部は、所定時間に亘る前記第1対応区画と前記第2対応区画との一致率に基づいて前記第1データと前記第2データとの紐付けを決定する、
 請求項1に記載の位置検知システム。
The linking unit determines the linking of the first data and the second data based on a matching rate between the first corresponding section and the second corresponding section over a predetermined period of time.
The position detection system according to claim 1.
 前記ユーザは複数存在し、前記複数のユーザの各々は、前記ビーコン端末を所持し、
 前記第1測位部は、前記複数のユーザに対応する複数の第1データを生成し、
 前記第2測位部は、前記複数のユーザに対応する複数の第2データを生成し、
 前記紐付部は、前記複数の第1データの中から選択された対象第1データを、前記複数の第2データのうち所定条件を満たす前記第2データと紐付ける、
 請求項1又は2に記載の位置検知システム。
There are a plurality of users, and each of the plurality of users possesses the beacon terminal,
The first positioning unit generates a plurality of first data corresponding to the plurality of users,
The second positioning unit generates a plurality of second data corresponding to the plurality of users,
The linking unit links target first data selected from among the plurality of first data with the second data that satisfies a predetermined condition among the plurality of second data.
The position detection system according to claim 1 or 2.
 前記所定条件は、前記第1対応区画と前記第2対応区画との一致率が最も大きいという条件を含む、
 請求項3に記載の位置検知システム。
The predetermined condition includes a condition that the matching rate between the first corresponding section and the second corresponding section is the highest,
The position detection system according to claim 3.
 前記所定条件は、前記第1対応区画と前記第2対応区画との一致率が閾値よりも大きいという条件を更に含む、
 請求項4に記載の位置検知システム。
The predetermined condition further includes a condition that the matching rate between the first corresponding section and the second corresponding section is greater than a threshold,
The position detection system according to claim 4.
 前記複数の区画の各々の寸法は、前記第1位置情報の測位精度に基づいて決定される、
 請求項1~5のいずれか一項に記載の位置検知システム。
The dimensions of each of the plurality of sections are determined based on the positioning accuracy of the first position information,
The position detection system according to any one of claims 1-5.
 前記スキャナを更に備える、
 請求項1~6のいずれか一項に記載の位置検知システム。
further comprising the scanner;
The position detection system according to any one of claims 1-6.
 前記ビーコン端末を更に備える、
 請求項1~7のいずれか一項に記載の位置検知システム。
further comprising the beacon terminal;
The position detection system according to any one of claims 1-7.
 前記撮像装置を更に備える、
 請求項1~8のいずれか一項に記載の位置検知システム。
further comprising the imaging device;
The position detection system according to any one of claims 1-8.
 ユーザが所持するビーコン端末から送信されスキャナで受信された、前記ユーザの識別情報を含むビーコン信号に基づいて、前記ユーザの前記識別情報と前記ユーザの第1位置情報とを含む第1データを生成する第1測位工程と、
 前記ユーザが滞在する空間を撮影し画像データを生成する撮像装置で生成された前記画像データに基づいて、前記ユーザの第2位置情報を含む第2データを生成する第2測位工程と、
 前記第1データと前記第2データとを紐付ける紐付工程と、を含み、
 前記紐付工程では、前記ユーザが滞在する前記空間を分割した複数の区画のうち、前記第1位置情報に対応する区画を前記第1データに対応する第1対応区画とし、前記第2位置情報に対応する区画を前記第2データに対応する第2対応区画とし、
 前記紐付工程では、前記第1対応区画と前記第2対応区画との一致率に基づいて前記第1データと前記第2データとの紐付けを決定する、
 位置検知方法。
a first positioning step of generating, based on a beacon signal including identification information of a user, the beacon signal being transmitted from a beacon terminal possessed by the user and received by a scanner, first data including the identification information of the user and first position information of the user;
a second positioning step of generating second data including second position information of the user based on the image data generated by an imaging device that captures a space in which the user stays and generates image data;
a linking step of linking the first data and the second data, wherein
in the linking step, among a plurality of sections obtained by dividing the space in which the user stays, a section corresponding to the first position information is set as a first corresponding section corresponding to the first data, and a section corresponding to the second position information is set as a second corresponding section corresponding to the second data, and
in the linking step, the linking of the first data and the second data is determined based on a matching rate between the first corresponding section and the second corresponding section.
Position detection method.
 請求項10に記載の位置検知方法を、1以上のプロセッサに実行させるための、
 プログラム。
A program for causing one or more processors to execute the position detection method according to claim 10.
PCT/JP2021/046976 2021-03-25 2021-12-20 Position detection system, position detection method, and program Ceased WO2022201682A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023508628A JP7565504B2 (en) 2021-03-25 2021-12-20 Position detection system, position detection method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021052341 2021-03-25
JP2021-052341 2021-03-25

Publications (1)

Publication Number Publication Date
WO2022201682A1 true WO2022201682A1 (en) 2022-09-29

Family

ID=83396689

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/046976 Ceased WO2022201682A1 (en) 2021-03-25 2021-12-20 Position detection system, position detection method, and program

Country Status (2)

Country Link
JP (1) JP7565504B2 (en)
WO (1) WO2022201682A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009113265A1 (en) * 2008-03-11 2009-09-17 パナソニック株式会社 Tag sensor system and sensor device, and object position estimating device and object position estimating method
US20140155104A1 (en) * 2012-11-30 2014-06-05 Cambridge Silicon Radio Limited Indoor positioning using camera and optical signal
US20160377698A1 (en) * 2015-06-25 2016-12-29 Appropolis Inc. System and a method for tracking mobile objects using cameras and tag devices
US20170315208A1 (en) * 2016-05-02 2017-11-02 Mojix, Inc. Joint Entity and Object Tracking Using an RFID and Detection Network
JP2020515862A (en) * 2017-03-28 2020-05-28 オートマトン, インク.Automaton, Inc. Method and apparatus for locating RFID tags
JP2021505898A (en) * 2017-12-11 2021-02-18 フラウンホーファー−ゲゼルシャフト ツール フエルデルング デア アンゲヴァンテン フォルシュング エー.ファオ. Methods, positioning systems, trackers and computer programs for determining the current position of an object

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2025158780A1 (en) * 2024-01-22 2025-07-31 株式会社日立製作所 Communication control device, communication control system, and guidance system

Also Published As

Publication number Publication date
JPWO2022201682A1 (en) 2022-09-29
JP7565504B2 (en) 2024-10-11

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21933274

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023508628

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21933274

Country of ref document: EP

Kind code of ref document: A1