WO2025169352A1 - Information processing device and information processing method - Google Patents
- Publication number
- WO2025169352A1 (PCT/JP2024/004143)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- area
- information processing
- analysis target
- processing device
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Definitions
- the present invention relates to an information processing device and an information processing method.
- Patent Document 1 describes technology that counts the number and flow of people in a store based on video captured inside the store by a camera installed on the ceiling or other surface of the store.
- the present disclosure therefore aims to make it easy to recognize the flow of objects along a specific route in a space.
- an information processing device includes an area setting unit that sets at least one area in a video image that represents a target space, which is the space to be analyzed; a detection unit that detects specific objects located within the at least one area from the video image during a target time period, which is the time period to be analyzed, and tracks the detected objects; and a drawing unit that draws the positions of the objects detected by the detection unit by superimposing them on an image of the target space.
- an area is set in a target space, and based on the tracking of objects located within the area, the time-series positions of the objects are drawn superimposed on an image of the target space. This visualizes the movement of objects passing through the area, making it easy to recognize at least the flow of objects that have a specific route passing through the area.
- FIG. 1 is a block diagram showing the device configuration of an information processing system including an information processing device according to the present embodiment, and the functional configuration of the information processing device.
- FIG. 2 is a diagram showing an example of an image of a target space to be analyzed and two pairs of analysis target areas set based on a specified input.
- FIG. 3 is a diagram showing an example of an image of a target space to be analyzed and four analysis target areas each set based on a specified input.
- FIG. 4 is a diagram showing an example of a graphic drawn to indicate the position of a person, in which a triangular graphic indicates a first position and a second position of a moving person in time series.
- FIG. 5 is a diagram showing an example of an image of a target space in which the positions of people moving from a first area to a second area of a pair of analysis target areas are depicted.
- FIG. 6 is a diagram showing an example of an image of a target space in which the positions of people who have flowed into the analysis target area are depicted.
- FIG. 7 is a diagram showing an example of an image of a target space in which the positions of people who have flowed out of the analysis target area are depicted.
- FIG. 8 is a diagram showing an example of an image of a target space in which the positions of people who have been detected for the first time in the analysis target area and have flowed out of the analysis target area are depicted.
- FIG. 9 is a diagram showing an example of an image of a target space in which the positions of people who stayed in the area to be analyzed are depicted.
- FIG. 10 is a flowchart showing the processing content of an information processing method in the information processing system.
- FIG. 11 is a diagram showing a configuration of an information processing program.
- FIG. 12 is a hardware block diagram of the information processing device.
- FIG. 1 is a block diagram showing the device configuration of an information processing system including an information processing device according to this embodiment, and the functional configuration of the information processing device.
- Information processing system 1 is a system that analyzes and visualizes the flow of objects depicted in moving images, and as an example, can constitute a people flow analysis system that analyzes and visualizes the flow of people in a space such as a store. Note that information processing system 1 is not limited to systems that analyze the flow of people, and may also be applied to systems that analyze the flow of other moving objects such as vehicles.
- the information processing device 10 functionally comprises an acquisition unit 11, an area setting unit 12, a detection unit 13, a drawing unit 14, and an output unit 15.
- the functional units 11 to 15 are configured in a single information processing device 10, but they may also be configured distributed across multiple devices.
- Each functional unit of the information processing device 10 is configured to be able to access storage means (storage) such as the video storage unit 21.
- the video storage unit 21 stores video images captured of the target space, which is the space to be analyzed.
- the video storage unit 21 is configured in the information processing device 10, but it may also be configured in another device accessible from the information processing device 10.
- the acquisition unit 11 acquires video images of the target space. Specifically, the acquisition unit 11 acquires video images from the video image storage unit 21.
- the area setting unit 12 sets at least one area in the video image of the target space as the analysis target area. Specifically, the area setting unit 12 may set the analysis target area based on a specified input.
- the area setting unit 12 may, for example, set a polygon with vertices at three or more specified points as the analysis target area based on specified input indicating the positions of three or more points in the image of the target space. Note that the setting of the analysis target area is not limited to being based on specified input of vertex positions. For example, the area setting unit 12 may set a circular analysis target area based on specified input indicating the center position and radius of the circle.
- the detection unit 13 detects people who are located within the analysis target area during the target time period, which is the time period to be analyzed, from the video image depicting the target space. The detection unit 13 then tracks the positions of the detected people.
- the detection unit 13 may use any known method to track the detected person.
- the detection unit 13 may track the person using so-called optical flow.
- the detection unit 13 may track the person using a method known as ByteTrack, which recognizes the person using a bounding box and matches the bounding box between consecutive frames in a time series.
- the drawing unit 14 may draw a figure having at least a partially translucent shape and a two-dimensional range at each position of a person.
- the output unit 15 outputs an image of the target space in which the positions of people are depicted as a people flow information image.
- the manner in which the people flow information image is output is not limited, and may be displayed on a specified display, stored in specified storage, transmitted to a specified device, or the like.
- the detection unit 13 detects people who move from the first analysis target area ar31 to the second analysis target area ar32 during the target time period according to the set analysis pattern.
- the drawing unit 14 then draws a figure f3 representing each position in time series of the multiple people detected by the detection unit 13, superimposed on the image d3 of the target space.
- the output unit 15 outputs, as a people flow information image, an image d3 of the target space on which a figure f3 representing the person's position over time is drawn.
- By referring to the output people flow information image, it is possible to recognize the flow of people flowing from the first analysis target area ar31 to the second analysis target area ar32.
- Figure 6 is a diagram showing an example of an image of the target space in which the positions of people who have flowed into the analysis target area are depicted.
- the area setting unit 12 sets a single analysis target area ar4.
- the area setting unit 12 sets the analysis pattern to detect the flow of people from outside the analysis target area ar4 into the analysis target area ar4.
- the detection unit 13 detects people who move from outside the analysis target area ar4 into the analysis target area ar4 during the target time period according to the set analysis pattern.
- the drawing unit 14 then draws a figure f4 representing each position in time series of the multiple people detected by the detection unit 13, superimposed on an image d4 of the target space.
- the output unit 15 outputs an image d4 of the target space on which a figure f4 representing the person's position over time is drawn as a people flow information image.
- By referring to the output people flow information image, it is possible to recognize the flow of people flowing from outside the analysis target area ar4 into the analysis target area ar4.
- Figure 7 is a diagram showing an example of an image of the target space in which the positions of people who have flowed out of the analysis target area are depicted.
- the area setting unit 12 sets a single analysis target area ar5.
- the area setting unit 12 sets the analysis pattern to detect the flow of people from within the analysis target area ar5 to outside the analysis target area ar5.
- the detection unit 13 detects people who move from within the analysis target area ar5 to outside the analysis target area ar5 during the target time period according to the set analysis pattern.
- the drawing unit 14 then draws a figure f5 representing each position in time series of the multiple people detected by the detection unit 13, superimposed on an image d5 of the target space.
- the output unit 15 outputs an image d5 of the target space on which a figure f5 representing the person's position over time is drawn as a people flow information image.
- By referring to the output people flow information image, it is possible to recognize the flow of people flowing from within the analysis target area ar5 to outside the analysis target area ar5.
- Figure 8 is a diagram showing an example of an image of the target space in which the positions of people who have been detected for the first time in the analysis target area and have flowed out of the analysis target area are depicted.
- the area setting unit 12 sets a single analysis target area ar6.
- the area setting unit 12 sets the analysis pattern to detect the flow of people who have been detected for the first time in the analysis target area ar6 and have flowed out of the analysis target area ar6.
- the detection unit 13 detects people who are detected for the first time in the analysis target area ar6 during the target time period and who have moved outside the analysis target area ar6, according to the set analysis pattern.
- the drawing unit 14 then draws figures f6 representing the chronological positions of the multiple people detected by the detection unit 13, superimposed on the image d6 of the target space.
- the output unit 15 outputs, as a people flow information image, an image d6 of the target space on which a figure f6 representing the person's position over time is drawn.
- By referring to the output people flow information image, it is possible to recognize the flow of people who are first detected within the analysis target area ar6 and then flow out of it. Therefore, the flow of people entering the target space through the part of the target space corresponding to the analysis target area ar6 can be recognized.
- Figure 9 is a diagram showing an example of an image of the target space in which the positions of people who have stayed in the analysis target area are depicted.
- the area setting unit 12 sets multiple single (not paired) analysis target areas ar71 and ar72.
- the area setting unit 12 sets the analysis pattern to detect the presence of people in each of the analysis target areas ar71 and ar72 in response to the specified input.
- the detection unit 13 detects people who stayed in each of the analysis target areas ar71 and ar72 at each time of the target time period according to the set analysis pattern.
- the drawing unit 14 then associates figures f71 and f72 representing the positions of the multiple people detected by the detection unit 13 with each of the analysis target areas ar71 and ar72, and draws them superimposed on the image d7 of the target space.
- the detection unit 13 may also detect people who stayed in areas other than the analysis target areas ar71 and ar72. In that case, the drawing unit 14 may draw figures f7 representing people outside the analysis target areas on the image d7 of the target space.
- the output unit 15 outputs an image d7 of the target space on which figures f71 and f72 representing the positions of people are drawn as a people flow information image. By referring to the output people flow information image, the length of stay of people in the areas ar71 and ar72 under analysis can be determined.
- the output unit 15 may also output a graph showing the time series change in the number of detected people as a people flow information image.
- Figure 10 is a flowchart showing the processing details of an information processing method for analyzing and visualizing people flow in a space using the information processing device 10.
- In step S3, the detection unit 13 detects people from the video and tracks the detected people. That is, the detection unit 13 detects and tracks, from the video, people who are located within the analysis target area during the target time period, which is the time period to be analyzed, and who match the analysis pattern.
- In step S4, the drawing unit 14 draws the positions of people matching the analysis pattern by superimposing predetermined figures on the image of the target space.
- In step S5, the output unit 15 outputs, as a people flow information image, an image of the target space in which the time-series positions of people are plotted in a predetermined format.
- FIG. 11 is a diagram showing the configuration of the information processing program.
- the information processing program P1 is composed of a main module m10 that controls overall information processing in the information processing device 10, an acquisition module m11, an area setting module m12, a detection module m13, a drawing module m14, and an output module m15.
- Each of the modules m11 to m15 then realizes the functions for the respective functional units 11 to 15.
- the information processing program P1 may be transmitted via a transmission medium such as a communication line, or may be stored on a recording medium M1 as shown in FIG. 11.
- an area is set in the target space, and based on the tracking of objects located within the area, the time-series positions of the objects are drawn superimposed on an image of the target space. This visualizes the movement of objects passing through the area, making it easy to recognize at least the flow of objects whose specific route is to pass through the area.
- the information processing device and information processing method according to the present disclosure may have the following configurations. The functions and effects of each configuration are explained as follows:
- An information processing device includes an area setting unit that sets at least one area in a moving image that represents a target space, which is the space to be analyzed; a detection unit that detects specific objects located within the at least one area from the moving image during a target time period, which is the time period to be analyzed, and tracks the detected objects; and a drawing unit that draws the positions of the objects detected by the detection unit by superimposing them on an image of the target space.
- An information processing method is executed by a processor and includes an area setting step of setting at least one area in a moving image representing a target space, which is the space to be analyzed; a detection step of detecting objects located within the at least one area from the moving image during a target time period, which is the time period to be analyzed, and tracking the detected objects; and a drawing step of drawing the positions of the objects detected in the detection step superimposed on an image of the target space.
- an area is set in a target space, and based on the tracking of objects located within the area, the time-series positions of the objects are drawn superimposed on an image of the target space. This visualizes the movement of objects passing through the area, making it easy to recognize at least the flow of objects that have a specific route passing through the area.
- the drawing unit may draw a graphic, at least a portion of which is semi-transparent, at each position of the object.
- the superimposition of the object's time series positions can be visually recognized. This makes it possible to recognize multiple objects that pass through the same position on a specific route.
- the drawing unit may draw each position of the object by adding, to each pixel of the image of the target space, a pixel value whose magnitude is associated with the probability of a two-dimensional Gaussian distribution whose mean is at each position of the object on the image plane of the target space.
- the above aspect makes it possible to recognize the amount of objects passing through the same position on a specific route based on the color shades expressed by the magnitude of pixel values.
- the drawing unit may draw a triangle having a first position among the positions of the detected object in the time series as the midpoint of the base and a second position, which is a position corresponding to a time later than the first position, as its vertex.
- a triangle is drawn to indicate two positions of the object in the time series, so the direction of the object's movement can be recognized by drawing a single shape.
- the area setting unit may set at least a pair of first and second areas as the analysis target areas, and the detection unit may detect an object that moves from the first area to the second area during the target time period.
- the area setting unit may set at least one area as an analysis target area
- the detection unit may detect an object that moves from outside the analysis target area into the analysis target area during the target time period.
- the area setting unit may set at least one area as an analysis target area
- the detection unit may detect an object that moves from within the analysis target area to outside the analysis target area during the target time period.
- the area setting unit may set at least one area as an analysis target area
- the detection unit may detect an object that is detected for the first time within the analysis target area during the target time period and then moves from within the analysis target area to outside the analysis target area.
- the above aspects allow for the recognition of the flow of objects that are first recognized within the analysis target area and then flow out of the analysis target area. Therefore, the flow of objects flowing into the target space can be recognized from the part of the target space that corresponds to the analysis target area.
- the object may be a person.
- each functional block may be realized using a single device that is physically or logically coupled, or using two or more physically or logically separated devices that are connected directly or indirectly (for example, by wire or wirelessly).
- a functional block may also be realized by combining software with the single device or multiple devices.
- Functions include, but are not limited to, judgment, determination, assessment, calculation, computation, processing, derivation, investigation, search, confirmation, reception, transmission, output, access, resolution, selection, election, establishment, comparison, assumption, expectation, regard, broadcasting, notifying, communicating, forwarding, configuring, reconfiguring, allocating, mapping, and assignment.
- a functional block (component) that performs transmission functions is called a transmitting unit or transmitter.
- As mentioned above, there are no particular limitations on how these functions are implemented.
- the information processing device 10 in one embodiment of the present invention may function as a computer.
- Figure 12 is a diagram showing an example of the hardware configuration of the information processing device 10 according to this embodiment.
- the information processing device 10 may be physically configured as a computer device including a processor 1001, memory 1002, storage 1003, communication device 1004, input device 1005, output device 1006, bus 1007, etc.
- the term "apparatus" can be interpreted as a circuit, device, unit, etc.
- the hardware configuration of the information processing device 10 may be configured to include one or more of the devices shown in FIG. 12, or may be configured to exclude some of the devices.
- Each function of the information processing device 10 is realized by loading specific software (programs) onto hardware such as the processor 1001 and memory 1002, with the processor 1001 performing calculations, controlling communication via the communication device 1004, and controlling the reading and/or writing of data in the memory 1002 and storage 1003.
- The processor 1001, for example, runs an operating system to control the entire computer.
- the processor 1001 may be configured as a central processing unit (CPU) that includes an interface with peripheral devices, a control unit, an arithmetic unit, registers, etc.
- the functional units 11 to 15 shown in Figure 1 may be realized by the processor 1001.
- Storage 1003 is a computer-readable recording medium, and may be composed of at least one of, for example, an optical disk such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (e.g., a compact disk, a digital versatile disk, a Blu-ray (registered trademark) disk), a smart card, flash memory (e.g., a card, a stick, a key drive), a floppy (registered trademark) disk, a magnetic strip, etc.
- Storage 1003 may also be referred to as an auxiliary storage device.
- the above-mentioned storage medium may be, for example, a database, a server, or other suitable medium including memory 1002 and/or storage 1003.
- the communication device 1004 is hardware (transmission/reception device) for communicating between computers via a wired and/or wireless network, and is also referred to as, for example, a network device, network controller, network card, or communication module.
- the input device 1005 is an input device (e.g., a keyboard, mouse, microphone, switch, button, sensor, etc.) that accepts input from the outside.
- the output device 1006 is an output device (e.g., a display, speaker, LED lamp, etc.) that outputs to the outside. Note that the input device 1005 and the output device 1006 may be integrated into one device (e.g., a touch panel).
- each device such as the processor 1001 and memory 1002, is connected by a bus 1007 for communicating information.
- the bus 1007 may be configured as a single bus, or may be configured with different buses between the devices.
- the information processing device 10 may be configured to include hardware such as a microprocessor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a field-programmable gate array (FPGA), and some or all of the functional blocks may be realized by this hardware.
- the processor 1001 may be implemented by at least one of these pieces of hardware.
- Each aspect/embodiment described in this disclosure may be applied to at least one of systems utilizing LTE (Long Term Evolution), LTE-Advanced (LTE-A), SUPER 3G, IMT-Advanced, 4G (4th generation mobile communication system), 5G (5th generation mobile communication system), FRA (Future Radio Access), NR (New Radio), W-CDMA (registered trademark), GSM (registered trademark), CDMA2000, UMB (Ultra Mobile Broadband), IEEE 802.11 (Wi-Fi (registered trademark)), IEEE 802.16 (WiMAX (registered trademark)), IEEE 802.20, UWB (Ultra-WideBand), Bluetooth (registered trademark), or other suitable systems, and next generation systems enhanced based on these. Additionally, multiple systems may be combined (for example, a combination of at least one of LTE and LTE-A with 5G).
- certain operations described as being performed by a base station may in some cases be performed by its upper node.
- various operations performed for communication with terminals may be performed by at least one of the base station and other network nodes other than the base station (such as, but not limited to, an MME or S-GW). While the above example illustrates a case where there is one other network node other than the base station, it may also be a combination of multiple other network nodes (for example, an MME and an S-GW).
- Information, etc. can be output from a higher layer (or lower layer) to a lower layer (or higher layer). It may also be input/output via multiple network nodes.
- Input and output information may be stored in a specific location (for example, memory) or managed in a management table. Input and output information may be overwritten, updated, or added to. Output information may be deleted. Input information may be sent to another device.
- the determination may be made based on a value represented by a single bit (0 or 1), a Boolean value (true or false), or a numerical comparison (for example, a comparison with a predetermined value).
- notification of specified information is not limited to being done explicitly, but may also be done implicitly (e.g., not notifying the specified information).
- Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
- Software, instructions, etc. may also be transmitted and received via a transmission medium.
- For example, if software is transmitted from a website, server, or other remote source using wired technologies such as coaxial cable, fiber optic cable, twisted pair, and digital subscriber line (DSL), and/or wireless technologies such as infrared, radio, and microwave, these wired and/or wireless technologies are included within the definition of transmission media.
- the information, signals, etc. described in this disclosure may be represented using any of a variety of different technologies.
- data, instructions, commands, information, signals, bits, symbols, chips, etc. that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
- The terms “system” and “network” are used interchangeably.
- radio resources may be indicated by an index.
- the names used for the parameters described above are not intended to be limiting in any way. Furthermore, the mathematical formulas using these parameters may differ from those explicitly disclosed in this disclosure.
- the various channels (e.g., PUCCH, PDCCH, etc.) and information elements may be identified by any suitable names, and therefore the various names assigned to these various channels and information elements are not intended to be limiting in any way.
- The term “determining” may encompass a wide variety of actions.
- “Determining” may include, for example, regarding judging, calculating, computing, processing, deriving, investigating, looking up, searching, or inquiring (e.g., searching a table, a database, or another data structure), and ascertaining as having “determined” something.
- “Determining” may also include regarding receiving (e.g., receiving information), transmitting (e.g., sending information), input, output, and accessing (e.g., accessing data in memory) as having “determined” something.
- “Determining” may further include regarding resolving, selecting, choosing, establishing, comparing, and the like as having “determined” something. In other words, “determining” may include regarding some action as having “determined” something. Furthermore, “determining” may be interpreted as “assuming,” “expecting,” “considering,” and the like.
- the phrase “based on” does not mean “based only on,” unless expressly stated otherwise. In other words, the phrase “based on” means both “based only on” and “based at least on.”
- Any reference to elements using designations such as “first” and “second” does not generally limit the quantity or order of those elements. These designations may be used herein as a convenient way of distinguishing between two or more elements. Thus, a reference to first and second elements does not imply that only two elements may be employed or that the first element must precede the second element in some way.
- The phrase “A and B are different” may mean “A and B are different from each other.” It may also mean “A and B are each different from C.” Terms such as “separate” and “combined” may be interpreted in the same way as “different.”
- the information processing device 10 and information processing method disclosed herein may have the following configuration.
- [1] An information processing device comprising: an area setting unit that sets at least one area in a moving image representing a target space, which is a space to be analyzed; a detection unit that detects a specific object located within the at least one area during a target time period, which is a time period to be analyzed, from the moving image and tracks the detected object; and a drawing unit that draws each position of the object detected by the detection unit superimposed on an image of the target space.
- [2] The information processing device according to [1], wherein the drawing unit draws a figure, at least a portion of which is semi-transparent, at each position of the object.
- [3] The information processing device according to [2], wherein the drawing unit draws each position of the object by adding, to each pixel of the image of the target space, a pixel value whose magnitude is associated with the probability of a two-dimensional Gaussian distribution whose mean is at each position of the object on the image plane of the target space.
- [4] The information processing device according to [2], wherein the drawing unit draws a triangle having a first position among the time-series positions of the detected object as the midpoint of its base and a second position corresponding to a time later than the first position as its vertex.
- [5] The information processing device according to any one of [1] to [4], wherein the area setting unit sets at least a pair of first and second areas as analysis target areas, and the detection unit detects an object that moves from the first area to the second area during the target time period.
- [6] The information processing device according to any one of [1] to [4], wherein the area setting unit sets at least one area as an analysis target area, and the detection unit detects an object that moves from outside the analysis target area into the analysis target area during the target time period.
- [7] The information processing device according to any one of [1] to [4], wherein the area setting unit sets at least one area as an analysis target area, and the detection unit detects an object that moves from within the analysis target area to outside the analysis target area during the target time period.
- [8] The information processing device according to any one of [1] to [4], wherein the area setting unit sets at least one area as an analysis target area, and the detection unit detects an object that is detected for the first time within the analysis target area during the target time period and then moves from within the analysis target area to outside the analysis target area.
- [9] The information processing device according to any one of [1] to [4], wherein the area setting unit sets at least one area as an analysis target area, and the detection unit detects an object that has stayed in the analysis target area during the target time period.
- [10] The information processing device according to any one of [1] to [9], wherein the object is a person.
Description
The present invention relates to an information processing device and an information processing method.
Technology is known that obtains the flow of people and their dwell time in a specific area by analyzing images captured of that area. For example, Patent Document 1 describes technology that counts the number and flow of people in a store based on video captured inside the store by a camera installed on the ceiling or another surface of the store.
As one aspect of understanding people flow, there is a demand for an intuitively recognizable visualization of the flow of objects passing through specific routes within the space being analyzed.
The present disclosure therefore aims to make it easy to recognize the flow of objects along a specific route in a space.
In order to solve the above problem, an information processing device according to one aspect of the present disclosure includes an area setting unit that sets at least one area in a video image that represents a target space, which is the space to be analyzed; a detection unit that detects specific objects located within the at least one area from the video image during a target time period, which is the time period to be analyzed, and tracks the detected objects; and a drawing unit that draws the positions of the objects detected by the detection unit by superimposing them on an image of the target space.
In accordance with the above aspect, an area is set in a target space, and based on the tracking of objects located within the area, the time-series positions of the objects are drawn superimposed on an image of the target space. This visualizes the movement of objects passing through the area, making it easy to recognize at least the flow of objects whose specific route passes through the area.
It thus becomes easier to recognize the flow of objects along a specific route in a space.
An embodiment of an information processing device according to the present invention will be described with reference to the drawings. Where possible, identical parts are designated by the same reference numerals, and duplicate explanations are omitted.
FIG. 1 is a block diagram showing the device configuration of an information processing system including an information processing device according to this embodiment, and the functional configuration of the information processing device. The information processing system 1 is a system that analyzes and visualizes the flow of objects depicted in moving images and, as an example, can constitute a people flow analysis system that analyzes and visualizes the flow of people in a space such as a store. Note that the information processing system 1 is not limited to systems that analyze the flow of people, and may also be applied to systems that analyze the flow of other moving objects such as vehicles.
As shown in FIG. 1, the information processing device 10 functionally comprises an acquisition unit 11, an area setting unit 12, a detection unit 13, a drawing unit 14, and an output unit 15. In the example shown in FIG. 1, the functional units 11 to 15 are configured in a single information processing device 10, but they may also be distributed across multiple devices.
Each functional unit of the information processing device 10 is configured to be able to access storage means such as the video storage unit 21. The video storage unit 21 stores video images captured of the target space, which is the space to be analyzed. In the example shown in FIG. 1, the video storage unit 21 is configured in the information processing device 10, but it may also be configured in another device accessible from the information processing device 10.
Next, each functional unit of the information processing device 10 will be explained. The acquisition unit 11 acquires video images of the target space. Specifically, the acquisition unit 11 acquires video images from the video storage unit 21.
The area setting unit 12 sets at least one area in the video image of the target space as the analysis target area. Specifically, the area setting unit 12 may set the analysis target area based on a specified input.
FIG. 2 is a diagram showing an example of an image of the target space to be analyzed and two pairs of analysis target areas set based on specified input. The area setting unit 12 accepts a specified input for designating the position of an analysis target area in an image d1 captured of the target space.
The area setting unit 12 may, for example, set a polygon with vertices at three or more specified points as the analysis target area based on specified input indicating the positions of three or more points in the image of the target space. Note that the setting of the analysis target area is not limited to being based on specified input of vertex positions. For example, the area setting unit 12 may set a circular analysis target area based on specified input indicating the center position and radius of the circle.
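To make the area-setting step concrete, here is a minimal sketch, assuming image-pixel coordinates, of how a polygonal analysis target area built from three or more specified vertex points might be represented and queried. The class and method names (AnalysisArea, contains) are hypothetical, not taken from the patent.

```python
# Hypothetical sketch of a polygonal analysis target area: the area is
# defined by three or more specified vertex points, and membership is
# tested with a standard ray-casting point-in-polygon check.
from typing import List, Tuple

Point = Tuple[float, float]

class AnalysisArea:
    def __init__(self, vertices: List[Point]):
        assert len(vertices) >= 3, "a polygon needs at least three vertices"
        self.vertices = vertices

    def contains(self, p: Point) -> bool:
        """Ray casting: count how many edges a ray from p toward +x crosses."""
        x, y = p
        inside = False
        n = len(self.vertices)
        for i in range(n):
            x1, y1 = self.vertices[i]
            x2, y2 = self.vertices[(i + 1) % n]
            # Consider only edges that straddle the horizontal line at y.
            if (y1 > y) != (y2 > y):
                # x coordinate where the edge crosses that horizontal line.
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

# Usage: a quadrilateral area in image coordinates.
area = AnalysisArea([(100, 80), (300, 90), (320, 240), (90, 230)])
print(area.contains((200, 150)))  # True: point lies inside the polygon
print(area.contains((10, 10)))    # False: point lies outside
```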
In the example shown in FIG. 2, the area setting unit 12 sets a pair consisting of a first analysis target area ar11s and a second analysis target area ar11e based on the specified input. The first analysis target area ar11s and the second analysis target area ar11e constitute the start area and end area for analyzing people flow between areas.
Furthermore, the area setting unit 12 may set multiple pairs of start and end areas. As shown in FIG. 2, the area setting unit 12 may further set a pair consisting of a first analysis target area ar12s and a second analysis target area ar12e based on the specified input.
FIG. 3 is a diagram showing an example of an image of the target space to be analyzed and four analysis target areas each set based on specified input. The area setting unit 12 may set a single analysis target area based on the specified input. The area setting unit 12 may also set multiple single analysis target areas.
In the example shown in FIG. 3, the area setting unit 12 sets a single analysis target area ar21 based on the specified input. Furthermore, the area setting unit 12 may also set single analysis target areas ar22, ar23, and ar24, each based on the specified input.
In conjunction with the setting of the analysis target areas, the area setting unit 12 may set an analysis pattern based on a specified input. For example, in conjunction with the setting of a pair of first and second analysis target areas, the area setting unit 12 may set, as the analysis pattern, detecting the flow of people from the first analysis target area to the second analysis target area during the time period to be analyzed.
In conjunction with the setting of a single analysis target area, the area setting unit 12 may also set, based on a specified input, any of the following as the analysis pattern: detecting the flow of people entering the analysis target area; detecting the flow of people leaving the analysis target area; detecting the flow of people detected for the first time in the analysis target area and then leaving it; and detecting people staying in the analysis target area.
The detection unit 13 detects people who are located within the analysis target area during the target time period, which is the time period to be analyzed, from the video image depicting the target space. The detection unit 13 then tracks the positions of the detected people.
The detection unit 13 may detect people using any known method, and the type of method is not limited. For example, the detection unit 13 may detect people from video images using pattern recognition that extracts features such as shape, color, and movement to identify people. The detection unit 13 may also detect people using a machine learning model that detects and classifies people in video images.
The detection unit 13 may use any known method to track the detected people. For example, the detection unit 13 may track people using so-called optical flow. Alternatively, the detection unit 13 may track people using a method known as ByteTrack, which recognizes a person with a bounding box and matches bounding boxes between consecutive frames in a time series.
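As an illustration of the bounding-box matching idea, the following is a much-simplified sketch that matches existing tracks to the next frame's detections greedily by intersection-over-union (IoU). Real ByteTrack additionally associates low-confidence detections and predicts motion with a Kalman filter; the function names and the iou_threshold default here are assumptions.

```python
# Much-simplified sketch of frame-to-frame tracking by bounding-box matching.
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def match_tracks(tracks: Dict[int, Box], detections: List[Box],
                 iou_threshold: float = 0.3) -> Dict[int, Box]:
    """Greedily assign each detection to the existing track with highest IoU."""
    pairs = sorted(((iou(box, det), tid, d) for tid, box in tracks.items()
                    for d, det in enumerate(detections)), reverse=True)
    updated, used_tracks, used_dets = {}, set(), set()
    next_id = max(tracks, default=-1) + 1
    for score, tid, d in pairs:
        if score < iou_threshold:
            break
        if tid in used_tracks or d in used_dets:
            continue
        updated[tid] = detections[d]      # continue an existing track
        used_tracks.add(tid)
        used_dets.add(d)
    for d, det in enumerate(detections):  # unmatched detections start new tracks
        if d not in used_dets:
            updated[next_id] = det
            next_id += 1
    return updated
```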
The drawing unit 14 draws (plots) each time-series position of a person detected by the detection unit 13, superimposing it on the image of the target space. Specifically, the drawing unit 14 represents the person's position and movement trajectory not with points and lines, but with a figure having a certain extent.
The drawing unit 14 may draw a figure that is at least partially translucent and has a two-dimensional extent at each position of a person. Because a figure that is at least partially translucent is drawn at each time-series position of each person, the superimposition of the people's time-series positions can be visually recognized. Therefore, it becomes possible to recognize multiple people passing through the same position on a specific route.
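One possible rendering of such a translucent figure, sketched with OpenCV under the assumption that positions are pixel coordinates and that a filled circle stands in for the figure:

```python
# Hedged sketch: alpha-blending a semi-transparent filled circle onto the
# target-space image at every tracked position.
import cv2
import numpy as np

def draw_translucent_positions(image, positions, radius=12,
                               color=(0, 0, 255), alpha=0.3):
    """Blend a filled circle onto `image` at each (x, y) in `positions`."""
    out = image.copy()
    for x, y in positions:
        overlay = out.copy()
        cv2.circle(overlay, (int(x), int(y)), radius, color, thickness=-1)
        # Blending against the running result lets overlapping circles
        # accumulate, so positions shared by many people appear denser.
        out = cv2.addWeighted(overlay, alpha, out, 1.0 - alpha, 0)
    return out

# Usage: blank white target-space image with three tracked positions.
canvas = np.full((240, 320, 3), 255, dtype=np.uint8)
result = draw_translucent_positions(canvas, [(100, 120), (104, 118), (200, 60)])
```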
The drawing unit 14 may draw each position of a person by adding, to each pixel of the image of the target space, a pixel value whose magnitude is associated with the probability of a two-dimensional Gaussian distribution whose mean is at that position of the person on the image plane of the target space.
In other words, the drawing unit 14 may plot each time-series position of a detected person on the image of the target space as a circular figure of a certain size. The circular figure represents a two-dimensional Gaussian distribution, with the largest pixel value at its center and pixel values that decrease with distance from the center. That is, the circular figure has the darkest color at its center and becomes lighter toward its edge.
More specifically, the drawing unit 14 may add a two-dimensional Gaussian distribution whose mean is the coordinates of the detected person's position at each time (each frame) to a two-dimensional array of pixel values in the manner of a heat map. A heat map is a known method for visualizing two-dimensionally arranged data by expressing the magnitude of matrix-form numerical data as the magnitude of color values. Letting (x, y) be the two-dimensional coordinates in the image of the target space, the two-dimensional Gaussian distribution is expressed by equation (1) below.
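Assuming the common isotropic form with mean (μx, μy) at the person's position and a standard deviation σ chosen as a rendering parameter, a plausible reading of equation (1) is:

```latex
f(x, y) = \frac{1}{2\pi\sigma^{2}}\,
          \exp\!\left(-\frac{(x-\mu_x)^{2}+(y-\mu_y)^{2}}{2\sigma^{2}}\right)
\tag{1}
```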
The drawing unit 14 may also plot a triangle whose base midpoint is a first position among the time-series positions of a detected person and whose apex is a second position corresponding to a time later than the first position. FIG. 4 shows an example of a figure drawn to indicate a person's position: a triangular figure indicating the first and second time-series positions of a moving person.
As shown in FIG. 4, the drawing unit 14 may plot, on the image of the target space, a triangle tr whose base midpoint is the position hp1 (x1, y1) at a first time among the time-series positions of the detected person and whose apex is the position hp2 (x2, y2) at a second time after the first time. Note that the triangle tr may be an acute triangle. The triangle may also be an isosceles triangle.
In this way, because a triangle indicating two time-series positions of a person is drawn, the direction of the person's movement can be recognized from a single drawn figure.
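A small sketch of how the triangle tr of FIG. 4 could be constructed from the two positions; the base_width parameter is an assumed rendering choice, and the result is an isosceles triangle, which the text permits:

```python
# Hedged sketch: vertices of the direction-marker triangle of FIG. 4.
# hp1 (earlier position) becomes the midpoint of the base, hp2 (later
# position) the apex, so the triangle points in the direction of movement.
import math

def direction_triangle(hp1, hp2, base_width=10.0):
    (x1, y1), (x2, y2) = hp1, hp2
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)
    if length == 0:
        raise ValueError("positions coincide; no direction to indicate")
    # Unit vector perpendicular to the movement direction.
    px, py = -dy / length, dx / length
    half = base_width / 2.0
    base_left = (x1 + px * half, y1 + py * half)
    base_right = (x1 - px * half, y1 - py * half)
    apex = (x2, y2)
    return [base_left, base_right, apex]  # an isosceles triangle

print(direction_triangle((0, 0), (0, 20)))
# [(-5.0, 0.0), (5.0, 0.0), (0, 20)] - base centred on hp1, apex at hp2
```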
Referring again to FIG. 1, the output unit 15 outputs an image of the target space in which the positions of people are depicted as a people flow information image. Note that the manner in which the people flow information image is output is not limited, and may be display on a specified display, storage in specified storage, transmission to a specified device, or the like.
Next, with reference to FIGS. 5 to 9, examples of detecting people according to an analysis pattern and drawing their positions will be described. Note that in the examples of FIGS. 5 to 9, the drawing unit 14 draws each position of a person by adding, to each pixel of the image of the target space, a pixel value whose magnitude is associated with the probability of a two-dimensional Gaussian distribution whose mean is at that position of the person on the image plane of the target space.
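A hedged numpy sketch of this Gaussian accumulation: each tracked position adds a splat to a heat map, so locations shared by many trajectories accumulate larger values (darker colors when colorized). The sigma value is an assumed rendering parameter, not a value given in the source.

```python
# Accumulate one 2-D Gaussian splat per tracked position into a heat map.
import numpy as np

def accumulate_positions(shape, positions, sigma=8.0):
    """Return an HxW heat map with one Gaussian splat per (x, y) position."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros((h, w), dtype=np.float64)
    for mx, my in positions:
        heat += np.exp(-((xs - mx) ** 2 + (ys - my) ** 2) / (2 * sigma ** 2))
    return heat  # larger values where many trajectories overlap

heat = accumulate_positions((240, 320), [(100, 120), (104, 118), (200, 60)])
print(heat.max())  # ~1.9: the two nearby splats reinforce each other
```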
FIG. 5 is a diagram showing an example of an image of the target space in which the positions of people moving from the first area to the second area of a pair of analysis target areas are depicted. In the example shown in FIG. 5, the area setting unit 12 sets a first analysis target area ar31 and a second analysis target area ar32. In addition, in response to a specified input, the area setting unit 12 sets, as the analysis pattern, detecting the flow of people from the first analysis target area ar31 to the second analysis target area ar32.
The detection unit 13 detects people who moved from the first analysis target area ar31 to the second analysis target area ar32 during the target time period according to the set analysis pattern. The drawing unit 14 then draws figures f3 representing the time-series positions of the multiple people detected by the detection unit 13, superimposed on an image d3 of the target space.
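The "first area to second area" pattern reduces to a simple predicate over a chronologically ordered track. A minimal sketch, reusing the hypothetical AnalysisArea class from the earlier area-setting example:

```python
# Hedged sketch of the "first area -> second area" analysis pattern: a
# track (time-ordered positions of one person) matches when the person is
# seen in the first area before being seen in the second area.

def moves_first_to_second(track, first_area, second_area) -> bool:
    entered_first = False
    for position in track:          # positions in chronological order
        if first_area.contains(position):
            entered_first = True
        elif entered_first and second_area.contains(position):
            return True             # reached the second area after the first
    return False

# Only the tracks that match the pattern are handed to the drawing unit:
# matching = [t for t in tracks if moves_first_to_second(t, ar31, ar32)]
```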
The output unit 15 outputs, as a people flow information image, the image d3 of the target space on which the figures f3 representing people's positions over time are drawn. By referring to the output people flow information image, the flow of people from the first analysis target area ar31 into the second analysis target area ar32 can be recognized.
FIG. 6 is a diagram showing an example of an image of the target space in which the positions of people who have flowed into the analysis target area are depicted. In the example shown in FIG. 6, the area setting unit 12 sets a single analysis target area ar4. In addition, in response to a specified input, the area setting unit 12 sets, as the analysis pattern, detecting the flow of people from outside the analysis target area ar4 into the analysis target area ar4.
The detection unit 13 detects people who moved from outside the analysis target area ar4 into the analysis target area ar4 during the target time period according to the set analysis pattern. The drawing unit 14 then draws figures f4 representing the time-series positions of the multiple people detected by the detection unit 13, superimposed on an image d4 of the target space.
The output unit 15 outputs, as a people flow information image, the image d4 of the target space on which the figures f4 representing people's positions over time are drawn. By referring to the output people flow information image, the flow of people from outside the analysis target area ar4 into the analysis target area ar4 can be recognized.
FIG. 7 is a diagram showing an example of an image of the target space in which the positions of people who have flowed out of the analysis target area are depicted. In the example shown in FIG. 7, the area setting unit 12 sets a single analysis target area ar5. In addition, in response to a specified input, the area setting unit 12 sets, as the analysis pattern, detecting the flow of people from within the analysis target area ar5 to outside the analysis target area ar5.
The detection unit 13 detects people who moved from within the analysis target area ar5 to outside the analysis target area ar5 during the target time period according to the set analysis pattern. The drawing unit 14 then draws figures f5 representing the time-series positions of the multiple people detected by the detection unit 13, superimposed on an image d5 of the target space.
The output unit 15 outputs, as a people flow information image, the image d5 of the target space on which the figures f5 representing people's positions over time are drawn. By referring to the output people flow information image, the flow of people from within the analysis target area ar5 to outside the analysis target area ar5 can be recognized.
FIG. 8 is a diagram showing an example of an image of the target space in which the positions of people who were detected for the first time in the analysis target area and then flowed out of it are depicted. In the example shown in FIG. 8, the area setting unit 12 sets a single analysis target area ar6. In addition, in response to a specified input, the area setting unit 12 sets, as the analysis pattern, detecting the flow of people who are detected for the first time in the analysis target area ar6 and then flow out of the analysis target area ar6.
The detection unit 13 detects people who were detected for the first time in the analysis target area ar6 during the target time period and then moved outside the analysis target area ar6, according to the set analysis pattern. The drawing unit 14 then draws figures f6 representing the time-series positions of the multiple people detected by the detection unit 13, superimposed on an image d6 of the target space.
出力部15は、人物の時系列の位置を表す図形f6が描画された対象空間の画像d6を人流情報画像として出力する。出力された人流情報画像の参照により、解析対象エリアar6内において初めて認識され、解析対象エリアar6内から解析対象エリアar6外に流出する人流を認識できる。従って、対象空間における解析対象エリアar6に対応する部分から、対象空間内に流入する人物の流れを認識できる。 The output unit 15 outputs, as a people flow information image, the image d6 of the target space on which the figures f6 representing people's positions over time are drawn. By referring to the output people flow information image, it is possible to recognize the flow of people who are first recognized within the analysis target area ar6 and then flow out of the analysis target area ar6. Therefore, the flow of people entering the target space through the portion of the target space corresponding to the analysis target area ar6 can be recognized.
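The "first detected inside, then flowed out" pattern admits a similar sketch under the same assumptions as above (rectangular area, (t, x, y) tracks); the function name is again hypothetical.

```python
def point_in_area(x, y, area):
    x_min, y_min, x_max, y_max = area
    return x_min <= x <= x_max and y_min <= y <= y_max

def first_seen_inside_then_exited(positions, area):
    """True if the track's first detection lies inside the area and the
    person later moves outside it -- e.g. someone entering the scene
    through a doorway that the area covers."""
    if not positions:
        return False
    _, x0, y0 = positions[0]
    if not point_in_area(x0, y0, area):
        return False
    return any(not point_in_area(x, y, area) for _, x, y in positions[1:])

ar6 = (0, 0, 50, 50)
track = [(0, 10, 10), (1, 40, 40), (2, 80, 80)]
print(first_seen_inside_then_exited(track, ar6))  # True
```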
図9は、解析対象エリアに滞在した人物の位置が描画された対象空間の画像の例を示す図である。図9に示される例では、エリア設定部12は、複数の単一の(ペアではない)解析対象エリアar71,ar72を設定する。また、エリア設定部12は、指定入力に応じて、解析対象エリアar71,ar72のそれぞれにおける人物の滞在を検出することを解析パターンとして設定する。 Figure 9 is a diagram showing an example of an image of the target space in which the positions of people who have stayed in the analysis target area are depicted. In the example shown in Figure 9, the area setting unit 12 sets multiple single (not paired) analysis target areas ar71 and ar72. In addition, the area setting unit 12 sets the analysis pattern to detect the presence of people in each of the analysis target areas ar71 and ar72 in response to the specified input.
検出部13は、設定された解析パターンに応じて、対象時間帯の各時刻において解析対象エリアar71,ar72のそれぞれに滞在した人物を検出する。そして、描画部14は、検出部13により検出された複数の人物の各位置を表す図形f71,f72を解析対象エリアar71,ar72のそれぞれに関連付けて対象空間の画像d7に重畳して描画する。なお、検出部13は、解析対象エリアar71,ar72以外の領域に滞在した人物も検出してもよい。その場合には、描画部14は、解析対象エリア外の人物を表す図形f7を対象空間の画像d7に描画してもよい。 The detection unit 13 detects people who stayed in each of the analysis target areas ar71 and ar72 at each time of the target time period according to the set analysis pattern. The drawing unit 14 then associates figures f71 and f72 representing the positions of the multiple people detected by the detection unit 13 with each of the analysis target areas ar71 and ar72, and draws them superimposed on the image d7 of the target space. Note that the detection unit 13 may also detect people who stayed in areas other than the analysis target areas ar71 and ar72. In that case, the drawing unit 14 may draw figures f7 representing people outside the analysis target areas on the image d7 of the target space.
出力部15は、人物の位置を表す図形f71,f72が描画された対象空間の画像d7を人流情報画像として出力する。出力された人流情報画像の参照により、解析対象エリアar71,ar72に滞在した人物の滞在時間の長さを認識できる。 The output unit 15 outputs, as a people flow information image, the image d7 of the target space on which the figures f71 and f72 representing the positions of people are drawn. By referring to the output people flow information image, the length of stay of people in the analysis target areas ar71 and ar72 can be recognized.
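The per-area stay detection could be approximated as below; the fixed frame interval and the rectangular areas are illustrative assumptions.

```python
def dwell_time(positions, area, frame_interval_s=1.0):
    """Approximate how long a tracked person stayed inside an area,
    assuming positions are sampled at a fixed frame interval."""
    x_min, y_min, x_max, y_max = area
    inside = [x_min <= x <= x_max and y_min <= y <= y_max
              for _, x, y in positions]
    return sum(frame_interval_s for i in inside if i)

ar71 = (10, 10, 60, 60)
track = [(0, 20, 20), (1, 30, 30), (2, 80, 80)]
print(dwell_time(track, ar71))  # 2.0 (seconds spent inside ar71)
```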
なお、図5~図9を参照して説明した人物の検出及び人物の位置の描画の例において、出力部15は、検出した人物の数の時系列の推移を表すグラフを人流情報画像として併せて出力してもよい。 In the examples of detecting people and depicting their positions described with reference to Figures 5 to 9, the output unit 15 may also output a graph showing the time series change in the number of detected people as a people flow information image.
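Such a count-over-time graph could, for instance, be rendered as follows; matplotlib and the per-frame counts shown are assumptions for illustration.

```python
import matplotlib.pyplot as plt

times = list(range(10))                   # frame timestamps in the target time period (s)
counts = [0, 1, 1, 3, 4, 4, 2, 2, 1, 0]   # hypothetical per-frame people counts

plt.step(times, counts, where="post")
plt.xlabel("time (s)")
plt.ylabel("number of detected people")
plt.title("Detected people over the target time period")
plt.savefig("people_count.png")
```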
図10は、情報処理装置10における空間における人流を解析及び可視化する情報処理方法の処理内容を示すフローチャートである。 Figure 10 is a flowchart showing the processing details of an information processing method for analyzing and visualizing people flow in a space using the information processing device 10.
ステップS1において、取得部11は、対象空間の動画像を取得する。ステップS2において、エリア設定部12は、例えばユーザによる指定入力に基づいて、解析対象エリア及び解析パターンを設定する。 In step S1, the acquisition unit 11 acquires a moving image of the target space. In step S2, the area setting unit 12 sets the analysis target area and analysis pattern based on, for example, a user's designated input.
ステップS3において、検出部13は、動画像から人物を検出し、検出した人物をトラッキングする。即ち、検出部13は、解析対象の時間帯である対象時間帯において解析対象エリア内に位置し解析パターンに該当する人物を、動画像から検出及びトラッキングする。 In step S3, the detection unit 13 detects people from the video and tracks the detected people. That is, the detection unit 13 detects and tracks people from the video who are located within the analysis target area during the target time period, which is the time period to be analyzed, and who match the analysis pattern.
ステップS4において、描画部14は、解析パターンに該当する人物の位置を対象空間の画像に所定の図形により重畳して描画する。 In step S4, the drawing unit 14 draws the position of the person corresponding to the analysis pattern by superimposing a predetermined figure on the image of the target space.
ステップS5において、出力部15は、人物の時系列の位置が所定の態様でプロットされた対象空間の画像を、人流情報画像として出力する。 In step S5, the output unit 15 outputs an image of the target space in which the time-series positions of people are plotted in a predetermined format as a people flow information image.
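Steps S2 to S5 can be summarized for pre-computed tracks by a sketch like the following; the grayscale 2D-list image and the example `entered` pattern are assumptions, not the concrete units 12 to 15.

```python
def analyze_people_flow(tracks, area, pattern, image):
    """S2-S5 on pre-computed tracks: apply the analysis pattern to each
    track (S3), plot matching positions onto a copy of the target-space
    image (S4), and return the people flow information image (S5)."""
    out = [row[:] for row in image]               # grayscale image as a 2D list
    for positions in tracks:
        if pattern(positions, area):
            for _, x, y in positions:
                out[y][x] = 255                   # mark the person's position
    return out

def entered(positions, area):                     # example analysis pattern
    x0, y0, x1, y1 = area
    ins = [x0 <= x <= x1 and y0 <= y <= y1 for _, x, y in positions]
    return any(not a and b for a, b in zip(ins, ins[1:]))

image = [[0] * 8 for _ in range(8)]
tracks = [[(0, 0, 3), (1, 2, 3), (2, 5, 3)]]
flow_image = analyze_people_flow(tracks, (4, 2, 7, 5), entered, image)
```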
次に、図11を参照して、コンピュータを、本実施形態の情報処理装置10として機能させるための情報処理プログラムについて説明する。図11は、情報処理プログラムの構成を示す図である。情報処理プログラムP1は、情報処理装置10における情報処理を統括的に制御するメインモジュールm10、取得モジュールm11、エリア設定モジュールm12、検出モジュールm13、描画モジュールm14及び出力モジュールm15を備えて構成される。そして、各モジュールm11~m15のそれぞれにより、各機能部11~15のための各機能が実現される。 Next, with reference to FIG. 11, an information processing program for causing a computer to function as the information processing device 10 of this embodiment will be described. FIG. 11 is a diagram showing the configuration of the information processing program. The information processing program P1 is composed of a main module m10 that controls overall information processing in the information processing device 10, an acquisition module m11, an area setting module m12, a detection module m13, a drawing module m14, and an output module m15. Each of the modules m11 to m15 then realizes the functions for the respective functional units 11 to 15.
なお、情報処理プログラムP1は、通信回線等の伝送媒体を介して伝送される態様であってもよいし、図11に示されるように、記録媒体M1に記憶される態様であってもよい。 In addition, the information processing program P1 may be transmitted via a transmission medium such as a communication line, or may be stored on a recording medium M1 as shown in FIG. 11.
以上説明した本実施形態の情報処理システム1、情報処理装置10、情報処理方法、情報処理プログラムP1によれば、対象空間においてエリアが設定され、エリア内に位置したオブジェクトのトラッキングに基づいて、オブジェクトの時系列の各位置が対象空間の画像に重畳して描画される。これにより、エリアを通るオブジェクトの移動が可視化されるので、少なくともエリア内を通ることを特定ルートとするオブジェクトの流れが容易に認識できる。 According to the information processing system 1, information processing device 10, information processing method, and information processing program P1 of this embodiment described above, an area is set in the target space, and based on the tracking of objects located within the area, the time-series positions of the objects are drawn superimposed on an image of the target space. This visualizes the movement of objects passing through the area, making it easy to recognize at least the flow of objects whose specific route is to pass through the area.
本開示に係る情報処理装置及び情報処理方法は、以下の構成を有しても良い。また、各構成における作用及び効果は、以下のとおり説明される。 The information processing device and information processing method according to the present disclosure may have the following configurations. The functions and effects of each configuration are explained as follows:
本開示の一側面に係る情報処理装置は、解析対象の空間である対象空間を表した動画像において少なくとも一つのエリアを設定するエリア設定部と、動画像から、解析対象の時間帯である対象時間帯において少なくとも一つのエリア内に位置した特定のオブジェクトを検出し、検出したオブジェクトをトラッキングする検出部と、検出部により検出されたオブジェクトの各位置を、対象空間の画像に重畳して描画する描画部と、を備える。 An information processing device according to one aspect of the present disclosure includes an area setting unit that sets at least one area in a moving image that represents a target space, which is the space to be analyzed; a detection unit that detects specific objects located within the at least one area from the moving image during a target time period, which is the time period to be analyzed, and tracks the detected objects; and a drawing unit that draws the positions of the objects detected by the detection unit by superimposing them on an image of the target space.
本開示の一側面に係る情報処理方法は、プロセッサにより実行され、解析対象の空間である対象空間を表した動画像において少なくとも一つのエリアを設定するエリア設定ステップと、動画像から、解析対象の時間帯である対象時間帯において少なくとも一つのエリア内に位置したオブジェクトを検出し、検出したオブジェクトをトラッキングする検出ステップと、検出ステップにおいて検出されたオブジェクトの各位置を、対象空間の画像に重畳して描画する描画ステップと、を有する。 An information processing method according to one aspect of the present disclosure is executed by a processor and includes an area setting step of setting at least one area in a moving image representing a target space, which is the space to be analyzed; a detection step of detecting objects located within the at least one area from the moving image during a target time period, which is the time period to be analyzed, and tracking the detected objects; and a drawing step of drawing the positions of the objects detected in the detection step superimposed on an image of the target space.
上記の側面によれば、対象空間においてエリアが設定され、エリア内に位置したオブジェクトのトラッキングに基づいて、オブジェクトの時系列の各位置が対象空間の画像に重畳して描画される。これにより、エリアを通るオブジェクトの移動が可視化されるので、少なくともエリア内を通ることを特定ルートとするオブジェクトの流れが容易に認識できる。 In accordance with the above aspect, an area is set in a target space, and based on the tracking of objects located within the area, the time-series positions of the objects are drawn superimposed on an image of the target space. This visualizes the movement of objects passing through the area, making it easy to recognize at least the flow of objects that have a specific route passing through the area.
また、他の側面に係る情報処理装置では、描画部は、少なくとも一部分が半透明である図形をオブジェクトの各位置に描画することとしてもよい。 In another aspect of the information processing device, the drawing unit may draw a graphic, at least a portion of which is semi-transparent, at each position of the object.
上記の側面によれば、少なくとも一部が半透明である図形がオブジェクトの時系列の各位置に描画されることにより、各オブジェクトの時系列の位置の重畳を視認できる。従って、特定のルートにおける同じ位置を通る複数のオブジェクトの認識が可能となる。 In accordance with the above aspect, by drawing at least a partially translucent figure at each position in the object's time series, the superimposition of the object's time series positions can be visually recognized. This makes it possible to recognize multiple objects that pass through the same position on a specific route.
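One plausible way to realize such translucent figures is alpha blending, sketched below under the assumptions of a float RGB image in NumPy and circular markers; overlapping semi-transparent marks then accumulate visibly where several positions coincide.

```python
import numpy as np

def blend_marker(image, center, radius, color, alpha=0.3):
    """Alpha-blend a filled circle onto `image` (H x W x 3, floats in [0, 1]).
    Overlapping semi-transparent markers accumulate, so positions visited
    by many people appear more saturated."""
    h, w, _ = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (xx - center[0]) ** 2 + (yy - center[1]) ** 2 <= radius ** 2
    image[mask] = (1 - alpha) * image[mask] + alpha * np.asarray(color)
    return image

img = np.ones((100, 100, 3))                 # white target-space image
for pos in [(30, 40), (32, 40), (34, 41)]:   # one person's time-series positions
    blend_marker(img, pos, radius=5, color=(1.0, 0.2, 0.2))
```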
また、他の側面に係る情報処理装置では、描画部は、対象空間の画像の面におけるオブジェクトの各位置において平均をとる二次元のガウス分布の確率に関連付けられた大きさを有する画素値を、対象空間の画像の各画素に加算することにより、オブジェクトの各位置を描画することとしてもよい。 Furthermore, in an information processing device according to another aspect, the drawing unit may draw each position of the object by adding, to each pixel of the image of the target space, a pixel value whose magnitude is associated with the probability of a two-dimensional Gaussian distribution whose mean is located at that position of the object on the plane of the image of the target space.
上記の側面によれば、画素値の大小により表現される色の濃淡に基づいて、特定のルートにおける同じ位置を通るオブジェクトの量の認識が可能となる。 The above aspect makes it possible to recognize the amount of objects passing through the same position on a specific route based on the color shades expressed by the magnitude of pixel values.
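A sketch of this Gaussian accumulation, assuming NumPy and arbitrary values for the standard deviation and peak weight:

```python
import numpy as np

def add_gaussian(heat, center, sigma=8.0, peak=0.2):
    """Add a 2D Gaussian bump whose mean is one detected position.
    `heat` is an H x W accumulation buffer; summing bumps from many
    positions yields larger values where trajectories overlap."""
    h, w = heat.shape
    yy, xx = np.mgrid[0:h, 0:w]
    d2 = (xx - center[0]) ** 2 + (yy - center[1]) ** 2
    heat += peak * np.exp(-d2 / (2.0 * sigma ** 2))
    return heat

heat = np.zeros((120, 160))
for pos in [(40, 60), (42, 60), (44, 61), (46, 62)]:  # one person's track
    add_gaussian(heat, pos)
heat = np.clip(heat, 0.0, 1.0)   # map to pixel intensities for overlay
```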
また、他の側面に係る情報処理装置では、描画部は、検出されたオブジェクトの時系列の各位置のうちの第1の位置を底辺の中点とし、第1の位置よりも後の時刻に対応する位置である第2の位置を頂点とする三角形を描画することとしてもよい。 Furthermore, in an information processing device according to another aspect, the drawing unit may draw a triangle having a first position among the positions of the detected object in the time series as the midpoint of the base and a second position, which is a position corresponding to a time later than the first position, as its vertex.
上記の側面によれば、オブジェクトの時系列の二つの位置を示す三角形が描画されるので、一つの図形の描画により、オブジェクトの移動の方向を認識できる。 In accordance with the above aspect, a triangle is drawn to indicate two positions of the object in the time series, so the direction of the object's movement can be recognized by drawing a single shape.
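The triangle's vertices follow directly from the two positions: the base is centred on the first position and oriented perpendicular to the direction of movement, and the apex is the second position. A small geometric sketch (the base width is an arbitrary assumption):

```python
import math

def triangle_marker(p1, p2, base_width=10.0):
    """Return the three vertices of a triangle whose apex is the later
    position p2 and whose base is centred on the earlier position p1,
    oriented perpendicular to the direction of movement."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    norm = math.hypot(dx, dy) or 1.0
    px, py = -dy / norm, dx / norm        # unit vector perpendicular to movement
    half = base_width / 2.0
    base_a = (p1[0] + px * half, p1[1] + py * half)
    base_b = (p1[0] - px * half, p1[1] - py * half)
    return [base_a, base_b, p2]

print(triangle_marker((0.0, 0.0), (0.0, 20.0)))
# [(-5.0, 0.0), (5.0, 0.0), (0.0, 20.0)] -- the apex points in the movement direction
```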
また、他の側面に係る情報処理装置では、エリア設定部は、少なくとも第1及び第2の一対のエリアを解析対象エリアとして設定し、検出部は、対象時間帯において第1のエリアから第2のエリアに移動したオブジェクトを検出することとしてもよい。 Furthermore, in an information processing device according to another aspect, the area setting unit may set at least a pair of first and second areas as the analysis target areas, and the detection unit may detect an object that moves from the first area to the second area during the target time period.
上記の側面によれば、第1のエリアから第2のエリアに流入するオブジェクトの流れを認識できる。 According to the above aspect, it is possible to recognize the flow of objects flowing from a first area into a second area.
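Movement from the first area to the second reduces to an ordering test on the track, sketched here under the same rectangular-area assumption:

```python
def moved_first_to_second(positions, first_area, second_area):
    """True if the track appears in `first_area` and later in
    `second_area`, i.e. it flowed from the first area into the second."""
    def inside(p, area):
        x0, y0, x1, y1 = area
        _, x, y = p
        return x0 <= x <= x1 and y0 <= y <= y1
    t_first = next((i for i, p in enumerate(positions)
                    if inside(p, first_area)), None)
    if t_first is None:
        return False
    return any(inside(p, second_area) for p in positions[t_first + 1:])

track = [(0, 1, 1), (1, 5, 5), (2, 9, 9)]
print(moved_first_to_second(track, (0, 0, 2, 2), (8, 8, 10, 10)))  # True
```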
また、他の側面に係る情報処理装置では、エリア設定部は、少なくとも一つのエリアを解析対象エリアとして設定し、検出部は、対象時間帯において、解析対象エリア外から解析対象エリア内に移動したオブジェクトを検出することとしてもよい。 Furthermore, in an information processing device according to another aspect, the area setting unit may set at least one area as an analysis target area, and the detection unit may detect an object that moves from outside the analysis target area into the analysis target area during the target time period.
上記の側面によれば、解析対象エリア外から解析対象エリア内に流入するオブジェクトの流れを認識できる。 The above aspects make it possible to recognize the flow of objects flowing from outside the analysis area into the analysis area.
また、他の側面に係る情報処理装置ではエリア設定部は、少なくとも一つのエリアを解析対象エリアとして設定し、検出部は、対象時間帯において、解析対象エリア内から解析対象エリア外に移動したオブジェクトを検出することとしてもよい。 Furthermore, in an information processing device according to another aspect, the area setting unit may set at least one area as an analysis target area, and the detection unit may detect an object that moves from within the analysis target area to outside the analysis target area during the target time period.
上記の側面によれば、解析対象エリア内から解析対象エリア外に流出するオブジェクトの流れを認識できる。 The above aspects make it possible to recognize the flow of objects flowing from within the analysis area to outside the analysis area.
また、他の側面に係る情報処理装置では、エリア設定部は、少なくとも一つのエリアを解析対象エリアとして設定し、検出部は、対象時間帯において、解析対象エリア内において初めて検出され、解析対象エリア内から解析対象エリア外に移動したオブジェクトを検出することとしてもよい。 Furthermore, in an information processing device according to another aspect, the area setting unit may set at least one area as an analysis target area, and the detection unit may detect an object that is detected for the first time within the analysis target area during the target time period and then moves from within the analysis target area to outside the analysis target area.
上記の側面によれば、解析対象エリア内において初めて認識され、解析対象エリア内から解析対象エリア外に流出するオブジェクトの流れを認識できる。従って、対象空間における解析対象エリアに対応する部分から、対象空間内に流入するオブジェクトの流れを認識できる。 The above aspects allow for the recognition of the flow of objects that are first recognized within the analysis target area and then flow out of the analysis target area. Therefore, the flow of objects flowing into the target space can be recognized from the part of the target space that corresponds to the analysis target area.
また、他の側面に係る情報処理装置では、オブジェクトは、人物であることとしてもよい。 In another aspect of the information processing device, the object may be a person.
上記の側面によれば、エリアを通る人物の移動が可視化されるので、少なくともエリア内を通ることを特定ルートとする人流を容易に認識できる。 The above aspects make it possible to visualize the movement of people passing through an area, making it easy to recognize people flowing along specific routes, at least through the area.
なお、図2に示したブロック図は、機能単位のブロックを示している。これらの機能ブロック(構成部)は、ハードウェア及びソフトウェアの少なくとも一方の任意の組み合わせによって実現される。また、各機能ブロックの実現方法は特に限定されない。すなわち、各機能ブロックは、物理的又は論理的に結合した1つの装置を用いて実現されてもよいし、物理的又は論理的に分離した2つ以上の装置を直接的又は間接的に(例えば、有線、無線などを用いて)接続し、これら複数の装置を用いて実現されてもよい。機能ブロックは、上記1つの装置又は上記複数の装置にソフトウェアを組み合わせて実現されてもよい。 Note that the block diagram shown in Figure 2 shows functional blocks. These functional blocks (components) are realized by any combination of hardware and/or software. Furthermore, there are no particular limitations on how each functional block is realized. That is, each functional block may be realized using a single device that is physically or logically coupled, or may be realized using two or more physically or logically separated devices that are connected directly or indirectly (for example, using wires, wirelessly, etc.) and these multiple devices. A functional block may also be realized by combining software with the single device or multiple devices.
機能には、判断、決定、判定、計算、算出、処理、導出、調査、探索、確認、受信、送信、出力、アクセス、解決、選択、選定、確立、比較、想定、期待、見做し、報知(broadcasting)、通知(notifying)、通信(communicating)、転送(forwarding)、構成(configuring)、再構成(reconfiguring)、割り当て(allocating、mapping)、割り振り(assigning)などがあるが、これらに限られない。たとえば、送信を機能させる機能ブロック(構成部)は、送信部(transmitting unit)や送信機(transmitter)と呼称される。いずれも、上述したとおり、実現方法は特に限定されない。 Functions include, but are not limited to, judgment, determination, assessment, calculation, computation, processing, derivation, investigation, search, confirmation, reception, transmission, output, access, resolution, selection, election, establishment, comparison, assumption, expectation, regard, broadcasting, notifying, communicating, forwarding, configuring, reconfiguring, allocating, mapping, and assignment. For example, a functional block (component) that performs transmission functions is called a transmitting unit or transmitter. As mentioned above, there are no particular limitations on how these functions are implemented.
例えば、本発明の一実施の形態における情報処理装置10は、コンピュータとして機能してもよい。図12は、本実施形態に係る情報処理装置10のハードウェア構成の一例を示す図である。情報処理装置10は、物理的には、プロセッサ1001、メモリ1002、ストレージ1003、通信装置1004、入力装置1005、出力装置1006、バス1007などを含むコンピュータ装置として構成されてもよい。 For example, the information processing device 10 in one embodiment of the present invention may function as a computer. Figure 12 is a diagram showing an example of the hardware configuration of the information processing device 10 according to this embodiment. The information processing device 10 may be physically configured as a computer device including a processor 1001, memory 1002, storage 1003, communication device 1004, input device 1005, output device 1006, bus 1007, etc.
なお、以下の説明では、「装置」という文言は、回路、デバイス、ユニットなどに読み替えることができる。情報処理装置10のハードウェア構成は、図12に示した各装置を1つ又は複数含むように構成されてもよいし、一部の装置を含まずに構成されてもよい。 In the following description, the term "apparatus" can be interpreted as a circuit, device, unit, etc. The hardware configuration of the information processing device 10 may be configured to include one or more of the devices shown in FIG. 12, or may be configured to exclude some of the devices.
情報処理装置10における各機能は、プロセッサ1001、メモリ1002などのハードウェア上に所定のソフトウェア(プログラム)を読み込ませることで、プロセッサ1001が演算を行い、通信装置1004による通信や、メモリ1002及びストレージ1003におけるデータの読み出し及び/又は書き込みを制御することで実現される。 Each function of the information processing device 10 is realized by loading specific software (programs) onto hardware such as the processor 1001 and memory 1002, causing the processor 1001 to perform calculations and control communications via the communication device 1004 and the reading and/or writing of data from the memory 1002 and storage 1003.
プロセッサ1001は、例えば、オペレーティングシステムを動作させてコンピュータ全体を制御する。プロセッサ1001は、周辺装置とのインターフェース、制御装置、演算装置、レジスタなどを含む中央処理装置(CPU:Central Processing Unit)で構成されてもよい。例えば、図2に示した各機能部11~15などは、プロセッサ1001で実現されてもよい。 The processor 1001, for example, runs an operating system to control the entire computer. The processor 1001 may be configured as a central processing unit (CPU) that includes an interface with peripheral devices, a control unit, an arithmetic unit, registers, etc. For example, the functional units 11 to 15 shown in Figure 2 may be realized by the processor 1001.
また、プロセッサ1001は、プログラム(プログラムコード)、ソフトウェアモジュールやデータを、ストレージ1003及び/又は通信装置1004からメモリ1002に読み出し、これらに従って各種の処理を実行する。プログラムとしては、上述の実施の形態で説明した動作の少なくとも一部をコンピュータに実行させるプログラムが用いられる。例えば、情報処理装置10の各機能部11~15は、メモリ1002に格納され、プロセッサ1001で動作する制御プログラムによって実現されてもよい。上述の各種処理は、1つのプロセッサ1001で実行される旨を説明してきたが、2以上のプロセッサ1001により同時又は逐次に実行されてもよい。プロセッサ1001は、1以上のチップで実装されてもよい。なお、プログラムは、電気通信回線を介してネットワークから送信されても良い。 In addition, the processor 1001 reads programs (program code), software modules, and data from the storage 1003 and/or the communication device 1004 into the memory 1002, and executes various processes in accordance with these. The programs used are those that cause a computer to execute at least some of the operations described in the above-mentioned embodiments. For example, the functional units 11 to 15 of the information processing device 10 may be implemented by a control program stored in the memory 1002 and executed by the processor 1001. While the above-mentioned various processes have been described as being executed by a single processor 1001, they may also be executed simultaneously or sequentially by two or more processors 1001. The processor 1001 may be implemented on one or more chips. The programs may also be transmitted from a network via a telecommunications line.
メモリ1002は、コンピュータ読み取り可能な記録媒体であり、例えば、ROM(Read Only Memory)、EPROM(Erasable Programmable ROM)、EEPROM(Electrically Erasable Programmable ROM)、RAM(Random Access Memory)などの少なくとも1つで構成されてもよい。メモリ1002は、レジスタ、キャッシュ、メインメモリ(主記憶装置)などと呼ばれてもよい。メモリ1002は、本発明の一実施の形態に係る情報処理方法を実施するために実行可能なプログラム(プログラムコード)、ソフトウェアモジュールなどを保存することができる。 Memory 1002 is a computer-readable recording medium and may be composed of at least one of, for example, ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), RAM (Random Access Memory), etc. Memory 1002 may also be called a register, cache, main memory (primary storage device), etc. Memory 1002 can store executable programs (program code), software modules, etc. for implementing an information processing method according to one embodiment of the present invention.
ストレージ1003は、コンピュータ読み取り可能な記録媒体であり、例えば、CD-ROM(Compact Disc ROM)などの光ディスク、ハードディスクドライブ、フレキシブルディスク、光磁気ディスク(例えば、コンパクトディスク、デジタル多用途ディスク、Blu-ray(登録商標)ディスク)、スマートカード、フラッシュメモリ(例えば、カード、スティック、キードライブ)、フロッピー(登録商標)ディスク、磁気ストリップなどの少なくとも1つで構成されてもよい。ストレージ1003は、補助記憶装置と呼ばれてもよい。上述の記憶媒体は、例えば、メモリ1002及び/又はストレージ1003を含むデータベース、サーバその他の適切な媒体であってもよい。 Storage 1003 is a computer-readable recording medium, and may be composed of at least one of, for example, an optical disk such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (e.g., a compact disk, a digital versatile disk, a Blu-ray (registered trademark) disk), a smart card, flash memory (e.g., a card, a stick, a key drive), a floppy (registered trademark) disk, a magnetic strip, etc. Storage 1003 may also be referred to as an auxiliary storage device. The above-mentioned storage medium may be, for example, a database, a server, or other suitable medium including memory 1002 and/or storage 1003.
通信装置1004は、有線及び/又は無線ネットワークを介してコンピュータ間の通信を行うためのハードウェア(送受信デバイス)であり、例えばネットワークデバイス、ネットワークコントローラ、ネットワークカード、通信モジュールなどともいう。 The communication device 1004 is hardware (transmission/reception device) for communicating between computers via a wired and/or wireless network, and is also referred to as, for example, a network device, network controller, network card, or communication module.
入力装置1005は、外部からの入力を受け付ける入力デバイス(例えば、キーボード、マウス、マイクロフォン、スイッチ、ボタン、センサなど)である。出力装置1006は、外部への出力を実施する出力デバイス(例えば、ディスプレイ、スピーカー、LEDランプなど)である。なお、入力装置1005及び出力装置1006は、一体となった構成(例えば、タッチパネル)であってもよい。 The input device 1005 is an input device (e.g., a keyboard, mouse, microphone, switch, button, sensor, etc.) that accepts input from the outside. The output device 1006 is an output device (e.g., a display, speaker, LED lamp, etc.) that outputs to the outside. Note that the input device 1005 and the output device 1006 may be integrated into one device (e.g., a touch panel).
また、プロセッサ1001やメモリ1002などの各装置は、情報を通信するためのバス1007で接続される。バス1007は、単一のバスで構成されてもよいし、装置間で異なるバスで構成されてもよい。 Furthermore, each device, such as the processor 1001 and memory 1002, is connected by a bus 1007 for communicating information. The bus 1007 may be configured as a single bus, or may be configured with different buses between the devices.
また、情報処理装置10は、マイクロプロセッサ、デジタル信号プロセッサ(DSP:Digital Signal Processor)、ASIC(Application Specific Integrated Circuit)、PLD(Programmable Logic Device)、FPGA(Field Programmable Gate Array)などのハードウェアを含んで構成されてもよく、当該ハードウェアにより、各機能ブロックの一部又は全てが実現されてもよい。例えば、プロセッサ1001は、これらのハードウェアの少なくとも1つで実装されてもよい。 Furthermore, the information processing device 10 may be configured to include hardware such as a microprocessor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a field-programmable gate array (FPGA), and some or all of the functional blocks may be realized by this hardware. For example, the processor 1001 may be implemented by at least one of these pieces of hardware.
情報の通知は、本開示において説明した態様/実施形態に限られず、他の方法を用いて行われてもよい。例えば、情報の通知は、物理レイヤシグナリング(例えば、DCI(Downlink Control Information)、UCI(Uplink Control Information))、上位レイヤシグナリング(例えば、RRC(Radio Resource Control)シグナリング、MAC(Medium Access Control)シグナリング、報知情報(MIB(Master Information Block)、SIB(System Information Block)))、その他の信号又はこれらの組み合わせによって実施されてもよい。また、RRCシグナリングは、RRCメッセージと呼ばれてもよく、例えば、RRC接続セットアップ(RRC Connection Setup)メッセージ、RRC接続再構成(RRC Connection Reconfiguration)メッセージなどであってもよい。 The notification of information is not limited to the aspects/embodiments described in this disclosure and may be performed using other methods. For example, the notification of information may be performed using physical layer signaling (e.g., DCI (Downlink Control Information), UCI (Uplink Control Information)), higher layer signaling (e.g., RRC (Radio Resource Control) signaling, MAC (Medium Access Control) signaling, broadcast information (MIB (Master Information Block), SIB (System Information Block))), other signals, or a combination of these. Furthermore, RRC signaling may be referred to as an RRC message, and may be, for example, an RRC Connection Setup message, an RRC Connection Reconfiguration message, etc.
本開示において説明した各態様/実施形態は、LTE(Long Term Evolution)、LTE-A(LTE-Advanced)、SUPER 3G、IMT-Advanced、4G(4th generation mobile communication system)、5G(5th generation mobile communication system)、FRA(Future Radio Access)、NR(new Radio)、W-CDMA(登録商標)、GSM(登録商標)、CDMA2000、UMB(Ultra Mobile Broadband)、IEEE 802.11(Wi-Fi(登録商標))、IEEE 802.16(WiMAX(登録商標))、IEEE 802.20、UWB(Ultra-WideBand)、Bluetooth(登録商標)、その他の適切なシステムを利用するシステム及びこれらに基づいて拡張された次世代システムの少なくとも一つに適用されてもよい。また、複数のシステムが組み合わされて(例えば、LTE及びLTE-Aの少なくとも一方と5Gとの組み合わせ等)適用されてもよい。 Each aspect/embodiment described in this disclosure may be applied to at least one of systems utilizing LTE (Long Term Evolution), LTE-Advanced (LTE-A), SUPER 3G, IMT-Advanced, 4G (4th generation mobile communication system), 5G (5th generation mobile communication system), FRA (Future Radio Access), NR (New Radio), W-CDMA (registered trademark), GSM (registered trademark), CDMA2000, UMB (Ultra Mobile Broadband), IEEE 802.11 (Wi-Fi (registered trademark)), IEEE 802.16 (WiMAX (registered trademark)), IEEE 802.20, UWB (Ultra-WideBand), Bluetooth (registered trademark), or other suitable systems, and next generation systems enhanced based on these. Additionally, multiple systems may be combined (for example, a combination of at least one of LTE and LTE-A with 5G).
本開示において説明した各態様/実施形態の処理手順、シーケンス、フローチャートなどは、矛盾の無い限り、順序を入れ替えてもよい。例えば、本開示において説明した方法については、例示的な順序を用いて様々なステップの要素を提示しており、提示した特定の順序に限定されない。 The order of the processing steps, sequences, flowcharts, etc. of each aspect/embodiment described in this disclosure may be changed unless inconsistent. For example, the methods described in this disclosure present elements of various steps using an example order, and are not limited to the particular order presented.
本開示において基地局によって行われるとした特定動作は、場合によってはその上位ノード(upper node)によって行われることもある。基地局を有する1つ又は複数のネットワークノード(network nodes)からなるネットワークにおいて、端末との通信のために行われる様々な動作は、基地局及び基地局以外の他のネットワークノード(例えば、MME又はS-GWなどが考えられるが、これらに限られない)の少なくとも1つによって行われ得ることは明らかである。上記において基地局以外の他のネットワークノードが1つである場合を例示したが、複数の他のネットワークノードの組み合わせ(例えば、MME及びS-GW)であってもよい。 In this disclosure, certain operations described as being performed by a base station may in some cases be performed by its upper node. In a network consisting of one or more network nodes having base stations, it is clear that various operations performed for communication with terminals may be performed by at least one of the base station and other network nodes other than the base station (such as, but not limited to, an MME or S-GW). While the above example illustrates a case where there is one other network node other than the base station, it may also be a combination of multiple other network nodes (for example, an MME and an S-GW).
情報等は、上位レイヤ(又は下位レイヤ)から下位レイヤ(又は上位レイヤ)へ出力され得る。複数のネットワークノードを介して入出力されてもよい。 Information, etc. can be output from a higher layer (or lower layer) to a lower layer (or higher layer). It may also be input/output via multiple network nodes.
入出力された情報等は特定の場所(例えば、メモリ)に保存されてもよいし、管理テーブルで管理してもよい。入出力される情報等は、上書き、更新、または追記され得る。出力された情報等は削除されてもよい。入力された情報等は他の装置へ送信されてもよい。 Input and output information may be stored in a specific location (for example, memory) or managed in a management table. Input and output information may be overwritten, updated, or added to. Output information may be deleted. Input information may be sent to another device.
判定は、1ビットで表される値(0か1か)によって行われてもよいし、真偽値(Boolean:trueまたはfalse)によって行われてもよいし、数値の比較(例えば、所定の値との比較)によって行われてもよい。 The determination may be made based on a value represented by a single bit (0 or 1), a Boolean value (true or false), or a numerical comparison (for example, a comparison with a predetermined value).
本開示において説明した各態様/実施形態は単独で用いてもよいし、組み合わせて用いてもよいし、実行に伴って切り替えて用いてもよい。また、所定の情報の通知(例えば、「Xであること」の通知)は、明示的に行うものに限られず、暗黙的(例えば、当該所定の情報の通知を行わない)ことによって行われてもよい。 Each aspect/embodiment described in this disclosure may be used alone, in combination, or switched between depending on the implementation. Furthermore, notification of specified information (e.g., notification that "X is true") is not limited to being done explicitly, but may also be done implicitly (e.g., not notifying the specified information).
以上、本開示について詳細に説明したが、当業者にとっては、本開示が本開示中に説明した実施形態に限定されるものではないということは明らかである。本開示は、請求の範囲の記載により定まる本開示の趣旨及び範囲を逸脱することなく修正及び変更態様として実施することができる。したがって、本開示の記載は、例示説明を目的とするものであり、本開示に対して何ら制限的な意味を有するものではない。 Although the present disclosure has been described in detail above, it will be clear to those skilled in the art that the present disclosure is not limited to the embodiments described herein. The present disclosure can be implemented in modified and altered forms without departing from the spirit and scope of the present disclosure, which are defined by the claims. Therefore, the description of the present disclosure is intended for illustrative purposes only and does not have any limiting meaning on the present disclosure.
ソフトウェアは、ソフトウェア、ファームウェア、ミドルウェア、マイクロコード、ハードウェア記述言語と呼ばれるか、他の名称で呼ばれるかを問わず、命令、命令セット、コード、コードセグメント、プログラムコード、プログラム、サブプログラム、ソフトウェアモジュール、アプリケーション、ソフトウェアアプリケーション、ソフトウェアパッケージ、ルーチン、サブルーチン、オブジェクト、実行可能ファイル、実行スレッド、手順、機能などを意味するよう広く解釈されるべきである。 Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
また、ソフトウェア、命令などは、伝送媒体を介して送受信されてもよい。例えば、ソフトウェアが、同軸ケーブル、光ファイバケーブル、ツイストペア及びデジタル加入者回線(DSL)などの有線技術及び/又は赤外線、無線及びマイクロ波などの無線技術を使用してウェブサイト、サーバ、又は他のリモートソースから送信される場合、これらの有線技術及び/又は無線技術は、伝送媒体の定義内に含まれる。 Software, instructions, etc. may also be transmitted and received via a transmission medium. For example, if software is transmitted from a website, server, or other remote source using wired technologies such as coaxial cable, fiber optic cable, twisted pair, and digital subscriber line (DSL), and/or wireless technologies such as infrared, radio, and microwave, these wired and/or wireless technologies are included within the definition of transmission media.
本開示において説明した情報、信号などは、様々な異なる技術のいずれかを使用して表されてもよい。例えば、上記の説明全体に渡って言及され得るデータ、命令、コマンド、情報、信号、ビット、シンボル、チップなどは、電圧、電流、電磁波、磁界若しくは磁性粒子、光場若しくは光子、又はこれらの任意の組み合わせによって表されてもよい。 The information, signals, etc. described in this disclosure may be represented using any of a variety of different technologies. For example, data, instructions, commands, information, signals, bits, symbols, chips, etc. that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
なお、本開示において説明した用語及び/又は本明細書の理解に必要な用語については、同一の又は類似する意味を有する用語と置き換えてもよい。 Note that terms explained in this disclosure and/or terms necessary for understanding this specification may be replaced with terms having the same or similar meanings.
本開示において使用する「システム」および「ネットワーク」という用語は、互換的に使用される。 As used in this disclosure, the terms "system" and "network" are used interchangeably.
また、本開示において説明した情報、パラメータなどは、絶対値で表されてもよいし、所定の値からの相対値で表されてもよいし、対応する別の情報で表されてもよい。例えば、無線リソースはインデックスによって指示されるものであってもよい。 Furthermore, the information, parameters, etc. described in this disclosure may be expressed as absolute values, as relative values from a predetermined value, or as corresponding other information. For example, radio resources may be indicated by an index.
上述したパラメータに使用する名称はいかなる点においても限定的な名称ではない。さらに、これらのパラメータを使用する数式等は、本開示で明示的に開示したものと異なる場合もある。様々なチャネル(例えば、PUCCH、PDCCHなど)及び情報要素は、あらゆる好適な名称によって識別できるので、これらの様々なチャネル及び情報要素に割り当てている様々な名称は、いかなる点においても限定的な名称ではない。 The names used for the parameters described above are not intended to be limiting in any way. Furthermore, the mathematical formulas using these parameters may differ from those explicitly disclosed in this disclosure. The various channels (e.g., PUCCH, PDCCH, etc.) and information elements may be identified by any suitable names, and therefore the various names assigned to these various channels and information elements are not intended to be limiting in any way.
本開示で使用する「判断(determining)」、「決定(determining)」という用語は、多種多様な動作を包含する場合がある。「判断」、「決定」は、例えば、判定(judging)、計算(calculating)、算出(computing)、処理(processing)、導出(deriving)、調査(investigating)、探索(looking up、search、inquiry)(例えば、テーブル、データベース又は別のデータ構造での探索)、確認(ascertaining)した事を「判断」「決定」したとみなす事などを含み得る。また、「判断」、「決定」は、受信(receiving)(例えば、情報を受信すること)、送信(transmitting)(例えば、情報を送信すること)、入力(input)、出力(output)、アクセス(accessing)(例えば、メモリ中のデータにアクセスすること)した事を「判断」「決定」したとみなす事などを含み得る。また、「判断」、「決定」は、解決(resolving)、選択(selecting)、選定(choosing)、確立(establishing)、比較(comparing)などした事を「判断」「決定」したとみなす事を含み得る。つまり、「判断」「決定」は、何らかの動作を「判断」「決定」したとみなす事を含み得る。また、「判断(決定)」は、「想定する(assuming)」、「期待する(expecting)」、「みなす(considering)」などで読み替えられてもよい。 As used in this disclosure, the terms "determining" and "determining" may encompass a wide variety of actions. "Determining" and "determining" may include, for example, judging, calculating, computing, processing, deriving, investigating, looking up, searching, inquiring (e.g., searching a table, database, or other data structure), and ascertaining something that is considered to be a "determination." "Determining" and "determining" may also include receiving (e.g., receiving information), transmitting (e.g., sending information), input, output, accessing (e.g., accessing data in memory), and so on. Furthermore, "judgment" and "decision" can include regarding actions such as resolving, selecting, choosing, establishing, and comparing as having been "judgment" or "decision." In other words, "judgment" and "decision" can include regarding some action as having been "judgment" or "decision." Furthermore, "judgment (decision)" can be interpreted as "assuming," "expecting," "considering," etc.
本開示で使用する「に基づいて」という記載は、別段に明記されていない限り、「のみに基づいて」を意味しない。言い換えれば、「に基づいて」という記載は、「のみに基づいて」と「に少なくとも基づいて」の両方を意味する。 As used in this disclosure, the phrase "based on" does not mean "based only on," unless expressly stated otherwise. In other words, the phrase "based on" means both "based only on" and "based at least on."
本開示において使用する「第1の」、「第2の」などの呼称を使用した場合においては、その要素へのいかなる参照も、それらの要素の量または順序を全般的に限定するものではない。これらの呼称は、2つ以上の要素間を区別する便利な方法として本明細書で使用され得る。したがって、第1および第2の要素への参照は、2つの要素のみがそこで採用され得ること、または何らかの形で第1の要素が第2の要素に先行しなければならないことを意味しない。 When designations such as "first," "second," etc. are used in this disclosure, any reference to such elements does not generally limit the quantity or order of those elements. These designations may be used herein as a convenient method of distinguishing between two or more elements. Thus, a reference to a first and a second element does not imply that only two elements may be employed therein or that the first element must precede the second element in some way.
上記の各装置の構成における「手段」を、「部」、「回路」、「デバイス」等に置き換えてもよい。 The "means" in the configuration of each of the above devices may be replaced with "part," "circuit," "device," etc.
「含む(include)」、「含んでいる(including)」、およびそれらの変形が、本明細書あるいは特許請求の範囲で使用されている限り、これら用語は、用語「備える(comprising)」と同様に、包括的であることが意図される。さらに、本明細書あるいは特許請求の範囲において使用されている用語「または(or)」は、排他的論理和ではないことが意図される。 To the extent that the terms "include," "including," and variations thereof are used herein or in the claims, these terms are intended to be inclusive, similar to the term "comprising." Furthermore, the term "or," as used herein or in the claims, is not intended to be an exclusive or.
本開示において、例えば、英語でのa, an及びtheのように、翻訳により冠詞が追加された場合、本開示は、これらの冠詞の後に続く名詞が複数形であることを含んでもよい。 In this disclosure, where articles are added by translation, such as a, an, and the in English, this disclosure may include the noun following these articles being plural.
本開示において、「AとBが異なる」という用語は、「AとBが互いに異なる」ことを意味してもよい。なお、当該用語は、「AとBがそれぞれCと異なる」ことを意味してもよい。「離れる」、「結合される」などの用語も、「異なる」と同様に解釈されてもよい。 In this disclosure, the term "A and B are different" may mean "A and B are different from each other." Note that this term may also mean "A and B are each different from C." Terms such as "separate" and "combined" may also be interpreted in the same way as "different."
本開示の情報処理装置10及び情報処理方法は、以下の構成を有してもよい。 The information processing device 10 and information processing method disclosed herein may have the following configuration.
[1]
解析対象の空間である対象空間を表した動画像において少なくとも一つのエリアを設定するエリア設定部と、
前記動画像から、解析対象の時間帯である対象時間帯において前記少なくとも一つのエリア内に位置した特定のオブジェクトを検出し、検出した前記オブジェクトをトラッキングする検出部と、
前記検出部により検出された前記オブジェクトの各位置を、前記対象空間の画像に重畳して描画する描画部と、
を備える情報処理装置。
[2]
前記描画部は、少なくとも一部分が半透明である図形を前記オブジェクトの各位置に描画する、[1]に記載の情報処理装置。
[3]
前記描画部は、前記対象空間の前記画像の面における前記オブジェクトの各位置において平均をとる二次元のガウス分布の確率に関連付けられた大きさを有する画素値を、前記対象空間の前記画像の各画素に加算することにより、前記オブジェクトの各位置を描画する、
[2]に記載の情報処理装置。
[4]
前記描画部は、検出された前記オブジェクトの時系列の各位置のうちの第1の位置を底辺の中点とし、前記第1の位置よりも後の時刻に対応する位置である第2の位置を頂点とする三角形を描画する、
[2]に記載の情報処理装置。
[5]
前記エリア設定部は、少なくとも第1及び第2の一対のエリアを解析対象エリアとして設定し、
前記検出部は、前記対象時間帯において前記第1のエリアから前記第2のエリアに移動したオブジェクトを検出する、
[1]~[4]のいずれか一項に記載の情報処理装置。
[6]
前記エリア設定部は、少なくとも一つのエリアを解析対象エリアとして設定し、
前記検出部は、前記対象時間帯において、前記解析対象エリア外から前記解析対象エリア内に移動したオブジェクトを検出する、
[1]~[4]のいずれか一項に記載の情報処理装置。
[7]
前記エリア設定部は、少なくとも一つのエリアを解析対象エリアとして設定し、
前記検出部は、前記対象時間帯において、前記解析対象エリア内から前記解析対象エリア外に移動したオブジェクトを検出する、
[1]~[4]のいずれか一項に記載の情報処理装置。
[8]
前記エリア設定部は、少なくとも一つのエリアを解析対象エリアとして設定し、
前記検出部は、前記対象時間帯において、前記解析対象エリア内において初めて検出され、前記解析対象エリア内から前記解析対象エリア外に移動したオブジェクトを検出する、
[1]~[4]のいずれか一項に記載の情報処理装置。
[9]
前記エリア設定部は、少なくとも一つのエリアを解析対象エリアとして設定し、
前記検出部は、前記対象時間帯において、前記解析対象エリア内に滞在したオブジェクトを検出する、
[1]~[4]のいずれか一項に記載の情報処理装置。
[10]
前記オブジェクトは、人物である、[1]~[9]のいずれか一項に記載の情報処理装置。
[11]
解析対象の空間である対象空間を表した動画像において少なくとも一つのエリアを設定するエリア設定ステップと、
前記動画像から、解析対象の時間帯である対象時間帯において前記少なくとも一つのエリア内に位置したオブジェクトを検出し、検出した前記オブジェクトをトラッキングする検出ステップと、
前記検出ステップにおいて検出された前記オブジェクトの各位置を、前記対象空間の画像に重畳して描画する描画ステップと、
を有する、プロセッサにより実行される情報処理方法。
[1]
an area setting unit that sets at least one area in a moving image that represents a target space that is a space to be analyzed;
a detection unit that detects a specific object located within the at least one area during a target time period that is a time period to be analyzed from the video and tracks the detected object;
a drawing unit that draws each position of the object detected by the detection unit so as to be superimposed on the image of the target space;
An information processing device comprising:
[2]
The information processing device according to [1], wherein the drawing unit draws a figure, at least a portion of which is semi-transparent, at each position of the object.
[3]
the drawing unit draws each position of the object by adding, to each pixel of the image of the target space, a pixel value whose magnitude is associated with the probability of a two-dimensional Gaussian distribution whose mean is located at that position of the object on the plane of the image of the target space;
The information processing device according to [2].
[4]
the drawing unit draws a triangle having a first position among the respective positions of the detected object in time series as a midpoint of a base and a second position corresponding to a time later than the first position as a vertex;
The information processing device according to [2].
[5]
the area setting unit sets at least a pair of first and second areas as analysis target areas;
the detection unit detects an object that moves from the first area to the second area during the target time period.
The information processing device according to any one of [1] to [4].
[6]
the area setting unit sets at least one area as an analysis target area;
the detection unit detects an object that moves from outside the analysis target area into the analysis target area during the target time period;
The information processing device according to any one of [1] to [4].
[7]
the area setting unit sets at least one area as an analysis target area;
the detection unit detects an object that moves from within the analysis target area to outside the analysis target area during the target time period;
The information processing device according to any one of [1] to [4].
[8]
the area setting unit sets at least one area as an analysis target area;
the detection unit detects an object that is detected for the first time within the analysis target area during the target time period and that has moved from within the analysis target area to outside the analysis target area;
The information processing device according to any one of [1] to [4].
[9]
the area setting unit sets at least one area as an analysis target area;
the detection unit detects an object that has stayed in the analysis target area during the target time period;
The information processing device according to any one of [1] to [4].
[10]
The information processing device according to any one of [1] to [9], wherein the object is a person.
[11]
an area setting step of setting at least one area in a moving image representing a target space that is a space to be analyzed;
a detection step of detecting an object located within the at least one area during a target time period that is a time period to be analyzed from the video and tracking the detected object;
a drawing step of drawing each position of the object detected in the detection step so as to be superimposed on the image of the target space;
An information processing method executed by a processor, comprising:
1…情報処理システム、10…情報処理装置、11…取得部、12…エリア設定部、13…検出部、14…描画部、15…出力部、21…動画像記憶部、M1…記録媒体、m10…メインモジュール、m11…取得モジュール、m11~m15…各モジュール、m12…エリア設定モジュール、m13…検出モジュール、m14…描画モジュール、m15…出力モジュール、P1…情報処理プログラム。 1...information processing system, 10...information processing device, 11...acquisition unit, 12...area setting unit, 13...detection unit, 14...drawing unit, 15...output unit, 21...moving image storage unit, M1...recording medium, m10...main module, m11...acquisition module, m11-m15...each module, m12...area setting module, m13...detection module, m14...drawing module, m15...output module, P1...information processing program.
Claims (10)
解析対象の空間である対象空間を表した動画像において少なくとも一つのエリアを設定するエリア設定部と、
前記動画像から、解析対象の時間帯である対象時間帯において前記少なくとも一つのエリア内に位置した特定のオブジェクトを検出し、検出した前記オブジェクトをトラッキングする検出部と、
前記検出部により検出された前記オブジェクトの各位置を、前記対象空間の画像に重畳して描画する描画部と、
を備える情報処理装置。 An information processing device comprising: an area setting unit that sets at least one area in a moving image that represents a target space that is a space to be analyzed; a detection unit that detects, from the moving image, a specific object located within the at least one area during a target time period that is a time period to be analyzed, and tracks the detected object; and a drawing unit that draws each position of the object detected by the detection unit so as to be superimposed on an image of the target space.
前記描画部は、少なくとも一部分が半透明である図形を前記オブジェクトの各位置に描画する、
請求項1に記載の情報処理装置。 The information processing device according to claim 1, wherein the drawing unit draws a figure, at least a portion of which is semi-transparent, at each position of the object.
前記描画部は、前記対象空間の前記画像の面における前記オブジェクトの各位置において平均をとる二次元のガウス分布の確率に関連付けられた大きさを有する画素値を、前記対象空間の前記画像の各画素に加算することにより、前記オブジェクトの各位置を描画する、
請求項2に記載の情報処理装置。 The information processing device according to claim 2, wherein the drawing unit draws each position of the object by adding, to each pixel of the image of the target space, a pixel value whose magnitude is associated with the probability of a two-dimensional Gaussian distribution whose mean is located at that position of the object on the plane of the image of the target space.
前記描画部は、検出された前記オブジェクトの時系列の各位置のうちの第1の位置を底辺の中点とし、前記第1の位置よりも後の時刻に対応する位置である第2の位置を頂点とする三角形を描画する、
請求項2に記載の情報処理装置。 The information processing device according to claim 2, wherein the drawing unit draws a triangle having, as the midpoint of its base, a first position among the time-series positions of the detected object and, as its apex, a second position corresponding to a time later than the first position.
前記エリア設定部は、少なくとも第1及び第2の一対のエリアを解析対象エリアとして設定し、
前記検出部は、前記対象時間帯において前記第1のエリアから前記第2のエリアに移動したオブジェクトを検出する、
請求項1に記載の情報処理装置。 The information processing device according to claim 1, wherein the area setting unit sets at least a pair of first and second areas as analysis target areas, and the detection unit detects an object that moves from the first area to the second area during the target time period.
前記エリア設定部は、少なくとも一つのエリアを解析対象エリアとして設定し、
前記検出部は、前記対象時間帯において、前記解析対象エリア外から前記解析対象エリア内に移動したオブジェクトを検出する、
請求項1に記載の情報処理装置。 The information processing device according to claim 1, wherein the area setting unit sets at least one area as an analysis target area, and the detection unit detects an object that moves from outside the analysis target area into the analysis target area during the target time period.
前記エリア設定部は、少なくとも一つのエリアを解析対象エリアとして設定し、
前記検出部は、前記対象時間帯において、前記解析対象エリア内から前記解析対象エリア外に移動したオブジェクトを検出する、
請求項1に記載の情報処理装置。 The information processing device according to claim 1, wherein the area setting unit sets at least one area as an analysis target area, and the detection unit detects an object that moves from within the analysis target area to outside the analysis target area during the target time period.
前記エリア設定部は、少なくとも一つのエリアを解析対象エリアとして設定し、
前記検出部は、前記対象時間帯において、前記解析対象エリア内において初めて検出され、前記解析対象エリア内から前記解析対象エリア外に移動したオブジェクトを検出する、
請求項1に記載の情報処理装置。 The information processing device according to claim 1, wherein the area setting unit sets at least one area as an analysis target area, and the detection unit detects an object that is detected for the first time within the analysis target area during the target time period and that has moved from within the analysis target area to outside the analysis target area.
解析対象の空間である対象空間を表した動画像において少なくとも一つのエリアを設定するエリア設定ステップと、
前記動画像から、解析対象の時間帯である対象時間帯において前記少なくとも一つのエリア内に位置したオブジェクトを検出し、検出した前記オブジェクトをトラッキングする検出ステップと、
前記検出ステップにおいて検出された前記オブジェクトの各位置を、前記対象空間の画像に重畳して描画する描画ステップと、
を有する、プロセッサにより実行される情報処理方法。 An information processing method executed by a processor, comprising: an area setting step of setting at least one area in a moving image representing a target space that is a space to be analyzed; a detection step of detecting, from the moving image, an object located within the at least one area during a target time period that is a time period to be analyzed, and tracking the detected object; and a drawing step of drawing each position of the object detected in the detection step so as to be superimposed on an image of the target space.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2024/004143 WO2025169352A1 (en) | 2024-02-07 | 2024-02-07 | Information processing device and information processing method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025169352A1 true WO2025169352A1 (en) | 2025-08-14 |
Family
ID=96699409
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2024/004143 Pending WO2025169352A1 (en) | 2024-02-07 | 2024-02-07 | Information processing device and information processing method |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025169352A1 (en) |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2015087841A (en) * | 2013-10-29 | 2015-05-07 | パナソニック株式会社 | Congestion status analyzer, congestion status analyzing system, and congestion status analyzing method |
| JP2015099539A (en) * | 2013-11-20 | 2015-05-28 | パナソニックIpマネジメント株式会社 | Person movement analysis apparatus, person movement analysis system, and person movement analysis method |
| JP2016048834A (en) * | 2014-08-27 | 2016-04-07 | パナソニックIpマネジメント株式会社 | Monitoring device, monitoring system and monitoring method |
| JP2016076893A (en) * | 2014-10-08 | 2016-05-12 | パナソニックIpマネジメント株式会社 | Activity status analysis apparatus, activity status analysis system, and activity status analysis method |
| WO2016147586A1 (en) * | 2015-03-19 | 2016-09-22 | パナソニックIpマネジメント株式会社 | Image-capturing device, recording device, and video output control device |
| JP2017184025A (en) * | 2016-03-30 | 2017-10-05 | 株式会社リコー | Communication terminal, image communication system, image transmission method, image display method, and program |
| JP2021125145A (en) * | 2020-02-07 | 2021-08-30 | 株式会社リコー | Information processing equipment, image processing system, program |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111695622B (en) | Identification model training method, identification method and identification device for substation operation scene | |
| JP2018165926A (en) | Similar image retrieval device | |
| CN112001852A (en) | Image processing method, device and equipment | |
| US11576188B2 (en) | External interference radar | |
| CN108805884A (en) | A kind of mosaic area's detection method, device and equipment | |
| JP6944405B2 (en) | Building judgment system | |
| Hegde et al. | Wi-Fi Router Signal Coverage Position Prediction System using Machine Learning Algorithms | |
| JP2019029935A (en) | Image processing system and control method thereof | |
| JP2020046331A (en) | Bridge evaluation system and bridge evaluation method | |
| KR102741042B1 (en) | Electronic device and operating method for the same | |
| CN116956192A (en) | Abnormal data detection method, device, medium and equipment | |
| Sardar et al. | Indoor occupancy estimation using the LTE-CommSense system | |
| WO2025169352A1 (en) | Information processing device and information processing method | |
| KR101767743B1 (en) | Device and method for indoor positioning based on sensor image | |
| WO2023127250A1 (en) | Detection line determination device | |
| CN108241874B (en) | Video text area localization method based on BP neural network and spectrum analysis | |
| US20230324540A1 (en) | Apparatus and Method for Identifying Transmitting Radio Devices | |
| CN109255187A (en) | Analysis method and device | |
| JP7366683B2 (en) | information processing equipment | |
| JP6994996B2 (en) | Traffic route judgment system | |
| CN113596898A (en) | Method, device, equipment, storage medium and system for determining terminal portrait | |
| JP7617961B2 (en) | Inspection method and system based on millimeter wave security inspection device, and server | |
| JPWO2020115944A1 (en) | Map data generator | |
| JP2023037848A (en) | Displacement quantity calculation device | |
| JP7733539B2 (en) | Infrastructure impact calculation device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24923856; Country of ref document: EP; Kind code of ref document: A1 |