
WO2025215776A1 - Person detecting device - Google Patents

Person detecting device

Info

Publication number
WO2025215776A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
area
detection
sensor
sensor unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2024/014588
Other languages
French (fr)
Japanese (ja)
Inventor
Yohei Iwata
Shun Onishi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Optex Co Ltd
Original Assignee
Optex Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Optex Co Ltd filed Critical Optex Co Ltd
Priority to PCT/JP2024/014588 priority Critical patent/WO2025215776A1/en
Publication of WO2025215776A1 publication Critical patent/WO2025215776A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J1/00Photometry, e.g. photographic exposure meter
    • G01J1/02Details
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V8/00Prospecting or detecting by optical means
    • G01V8/10Detecting, e.g. by using light barriers
    • G01V8/20Detecting, e.g. by using light barriers using multiple transmitters or receivers

Definitions

  • the present invention relates to a human detection device that detects people who enter a specified detection area.
  • Patent Document 1 describes a human detection device that has three object detectors and detects people who enter a specified detection area. This human detection device outputs a human body detection signal, indicating that a person has been detected, when any one of the three detectors detects an object.
  • the above-mentioned human detection device cannot determine the position within the detection area of an object that has entered the detection area.
  • the present invention was made to solve the above problems, and its intended purpose is to provide a human detection device that can identify the position of an object that has entered the detection area, while preventing increases in manufacturing costs and the size of the device.
  • the present invention has the following configuration:
  • a plurality of sensor units attached at a position higher than a reference plane, each forming a unit zone extending obliquely downward and detecting an object entering its unit zone; and a determination unit that determines, based on a detection signal from each of the sensor units, whether or not a person has entered a detection area formed by the unit zones of the sensor units,
  • the sensor units include a plurality of first sensor units and a single second sensor unit, first projection areas, which are obtained by projecting the unit zones of the first sensor units onto the reference surface when viewed from above, are set to be aligned in the left-right direction; a second projection area obtained by projecting the unit zone of the second sensor unit onto the reference surface as viewed from above is set to overlap with each of the first projection areas; a leading edge of the first projection area is set to be located farther away than a leading edge of the second projection area,
  • a human detection device characterized in that the determination unit identifies the position where the object is detected by performing a logical operation on the value of the detection signal from the first sensor unit and the value of the detection signal from the second sensor unit.
  • first projection areas are lined up in the left-right direction on the side farther from the leading edge of the second projection area, and a single second projection area is divided into multiple regions lined up in the left-right direction by the overlapping first projection areas on the side closer to the leading edge of the second projection area.
  • This allows the detection area to be divided into regions lined up in a matrix, and by performing a logical AND operation on the detection signals of the first sensor unit and the second sensor unit, it is possible to identify which of the matrix-divided regions an object has entered.
  • the detection area can be divided into more regions than the number of sensor units, making it possible to more precisely identify the detected position of an object that has entered the detection area while suppressing an increase in the number of sensor units and thereby preventing an increase in manufacturing costs or an increase in the size of the device.
  • A human detection device as described in [1], wherein the determination unit, when receiving a detection signal indicating that an object has been detected from both the first sensor unit and the second sensor unit, determines that an object has entered an area where the first projection area and the second projection area overlap, and, when receiving a detection signal indicating that an object has been detected from only the first sensor unit, determines that an object has entered an area of the first projection area that does not overlap with the second projection area.
  • the configuration [2] is a specific example of the arithmetic processing of the detection signals by the determination unit. By calculating the logical product of the values of the detection signals received from both the first sensor unit and the second sensor unit, which indicate whether an object has been detected, the detected position of the object can be identified through simple calculations.
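The decision logic of configuration [2], for one first sensor unit paired with the second sensor unit, can be sketched as a simple AND-based check. This is an illustrative sketch only; the function and label names are not from the patent:

```python
def locate_object(first_detected: int, second_detected: int) -> str:
    """Identify where an object entered, given the 1/0 detection signals of
    one first sensor unit and the second sensor unit (illustrative sketch)."""
    if first_detected and second_detected:
        # Both units fired: the overlap of the first and second projection areas.
        return "overlap area"
    if first_detected:
        # Only the first unit fired: the far part of the first projection area.
        return "first area only"
    if second_detected:
        # Only the second unit fired (cf. configuration [3], the outer area).
        return "second area only"
    return "no detection"
```

The logical product of the two signal values alone already distinguishes the overlap area from the far area, which is why the patent calls this a simple calculation.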
  • the second projection area is set to have an outer area extending outward from a first projection area located at the left end or the right end of each of the first projection areas,
  • A human detection device as described in [1], wherein the determination unit determines that an object has entered the outer area when a detection signal indicating that an object has been detected is received from only the second sensor unit.
  • the first projection areas are arranged so as to partially overlap each other, and when the judgment unit receives a detection signal from two of the first sensor units indicating that an object has been detected, it judges that an object has entered the area where the first projection areas overlap.
  • a human detection device as described above. With this configuration, it is possible to set areas within the detection area that are detected by two first sensor units, thereby dividing the detection area into even more areas.
  • the sensor unit includes a PIR element and an optical member that defines the unit zone, which is a range of infrared light incident on the PIR element;
  • the optical member has a plurality of lenses arranged side by side, and divided zones, which are infrared incidence ranges defined by the respective lenses, are arranged side by side when viewed from above, forming the unit zones;
  • a human detection device as described in [1], wherein the number of lenses defining the unit zone of the first sensor unit is the same as the number of lenses of the second sensor unit that define the area of the unit zone of the second sensor unit that overlaps with the unit zone of the first sensor unit when viewed from above.
  • the first sensor unit and the second sensor unit have the same number of lenses, it is possible to align and overlap the divided zones of these two sensor units, which makes it easier to align the timing of the detection signals output from the first sensor unit and the second sensor unit, making it easier to determine whether the detection signals represent the same object.
  • the human detection device described in [1] further comprises an intrusion count suggestion data output unit that generates position-specific intrusion count suggestion data that can grasp the number of times an object has intruded into each object detection position identified by the judgment unit, and transmits this to other devices such as a display.
  • the present invention can divide the detection area into more regions than the number of sensor units used, making it possible to provide a human detection device that can identify the position of an object that has entered the detection area while preventing increases in manufacturing costs and the device size.
  • FIG. 1 is an overall schematic diagram of a human detection device according to an embodiment of the present invention
  • FIG. 2 is a functional block diagram of the human detection device according to the embodiment.
  • FIG. 3(a) is a schematic diagram of the internal structure of the human detection device according to the embodiment, as viewed from above.
  • FIG. 3(b) is a schematic diagram of a circuit board of the human detection device of the embodiment, as viewed from the front.
  • FIG. 4 is a schematic diagram of the detection area viewed from the left and right in the embodiment.
  • FIG. 5A is a schematic diagram of the detection area viewed from above in the embodiment.
  • FIG. 5B is a schematic diagram showing sections set within the detection area in the embodiment.
  • FIG. 5C is a table showing the relationship between projection areas and sections in the embodiment.
  • FIG. 6 is a schematic diagram of unit zones and divided zones as viewed from above in the embodiment.
  • FIG. 7 is a truth table of a logic function used by the determination unit of the embodiment.
  • FIG. 11 is a schematic diagram of a detection area viewed from above in the second embodiment.
  • FIG. 10 is a schematic diagram showing sections set within a detection area, together with a table showing the relationship between projection areas and sections, in the second embodiment.
  • FIG. 11 is a schematic diagram of a detection area viewed from above in the third embodiment.
  • FIG. 11 is a schematic diagram of a circuit board of a human detection device according to a third embodiment, as viewed from the front.
  • FIG. 10 is a functional block diagram of a human detection device according to a fourth embodiment.
  • FIG. 20 is a schematic diagram of a detection area viewed from the left and right in the sixth embodiment.
  • the human detection device of this embodiment is used in an intrusion detection system that detects and issues an alarm when a suspicious person intrudes into a predetermined detection area, for example.
  • this human detection device is mounted at a position higher than a reference plane, and comprises a plurality of first sensor units 1 and a single second sensor unit 2 that form unit zones extending diagonally downward and detect objects that enter these unit zones, and a determination unit 3 that determines whether a human has entered detection area X based on the detection signals of each of the sensor units 1 and 2.
  • the reference plane is, for example, the ground or floor surface of the location where this human detection device 100 is installed, or a surface parallel to the ground or floor surface.
  • the sensor units 1 and 2 are all installed in one location, and here they are housed together with the determination unit 3 in a common housing H.
  • the housing H here is a vertically elongated pillar that is attached to a wall, pillar, ceiling, etc. with its front surface facing the detection area X.
  • the mounting height of the housing H is set to, for example, approximately 0.5 to 4.0 m from the reference plane.
  • the up-down direction is defined as being either up or down in the vertical direction when viewed from the human detection device 100
  • the front-to-back direction is defined as the direction toward the center of the detection area X when viewed from the human detection device 100
  • the direction perpendicular to the up-down direction and the front-to-back direction is defined as the left-to-right direction.
  • each first sensor unit 1 has a first infrared sensor 11, a first optical member 12 that defines a first unit zone, which is the range of infrared light that enters the first infrared sensor 11, and a first detection circuit that outputs a detection signal based on the output signal of the first infrared sensor 11.
  • three first sensor units 1 are provided.
  • the first unit zones of these three first sensor units are set to face in different directions in the left-right direction.
  • the three first sensor units are designated by the symbols 1a, 1b, and 1c.
  • First infrared sensor 11: As shown in FIGS. 3(a) and 3(b), the first infrared sensor 11 is a passive infrared sensor that detects infrared rays emitted from an object, and in this case is a pyroelectric dual type sensor with two PIR elements mounted in a single can package.
  • the first infrared sensor 11 detects fluctuations in the incident infrared rays and outputs an analog output signal indicating the amount of fluctuation.
  • the first infrared sensor 11 is mounted on a circuit board S housed within the housing H with its sensor surface facing forward.
  • the first infrared sensors 11 of the three first sensor units 1 are mounted on the circuit board S in a vertical line.
  • the circuit board S here is a so-called PCB, and is arranged so that the mounting surface of the first infrared sensor 11 faces forward.
  • First optical member 12: As shown in FIGS. 3(a) and 3(b), the first optical member 12 of this embodiment is a first lens set consisting of a plurality of lenses L.
  • This first lens set 12 is formed integrally with a cover FC that forms the front surface of the housing H.
  • the cover FC is disposed opposite the infrared sensor mounting surface of the substrate S, and is provided to cover the substrate S.
  • a first lens set 12 is provided for each of the multiple first sensor units 1, and each first lens set 12 is provided so as to face the sensor surface of the first infrared sensor 11 of the same first sensor unit 1 from front to back.
  • three first lens sets 12 are provided on the cover FC in a vertically aligned manner to match the three vertically aligned first infrared sensors 11.
  • the first lens sets 12 of each first sensor unit 1 are offset left and right so as not to overlap when viewing the cover FC (or housing H) from above.
  • a shielding member (not shown) may be provided between vertically adjacent lens sets to block infrared rays.
  • the lenses L which are Fresnel lenses in this example, focus infrared light from within the detection area X onto each infrared sensor.
  • Each lens L is connected to other lenses L that make up the same lens set, but they may also be spaced apart.
  • the multiple lenses L that make up each first lens set 12 are arranged side by side at the same height when viewing the housing H from the front.
  • Each first lens set 12 is composed of three lenses L. Note that the number of lenses that make up each first lens set 12 may differ from each other.
  • the first detection circuit is formed on the substrate S, for example, and compares the value of the analog output signal received from the first infrared sensor 11 with a predetermined threshold, and outputs a detection signal, which is a digital output signal having a value of 1 or 0 depending on the magnitude of the comparison.
  • the detection circuit here outputs a detection signal with a value of 1 when an object is detected, and a detection signal with a value of 0 when no object is detected.
  • Each first detection circuit is provided for each first sensor unit 1, and outputs an individual detection signal based on the output signal received from the corresponding first infrared sensor 11.
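The threshold comparison performed by each detection circuit can be sketched as follows. The default threshold value and the use of the absolute fluctuation magnitude are assumptions for illustration, not values from the patent:

```python
def detection_signal(analog_output: float, threshold: float = 0.5) -> int:
    """Convert the PIR sensor's analog fluctuation output into a digital
    detection signal: 1 when the fluctuation magnitude reaches the
    threshold (object detected), 0 otherwise."""
    return 1 if abs(analog_output) >= threshold else 0
```

A pyroelectric sensor's output swings both positive and negative as an object crosses the zone, which is why the magnitude, rather than the raw value, is compared here.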
  • Second sensor unit 2: As shown in Figure 2, the second sensor unit 2 has a second infrared sensor 21, a second optical member 22 that defines a second unit zone, which is the range of infrared light that enters the second infrared sensor 21, and a second detection circuit (not shown) that outputs a detection signal based on the output signal of the second infrared sensor 21.
  • the second sensor unit 2 is configured to have a shorter detection distance than the first sensor unit 1, and is capable of detecting objects located at a closer distance than the first sensor unit 1.
  • the second infrared sensor 21 is a pyroelectric dual type infrared sensor, similar to the first infrared sensor 11, and detects fluctuations in incident infrared light and outputs an analog output signal indicating the amount of fluctuation.
  • the second infrared sensor 21 is mounted on the substrate S with its sensor surface facing forward. In this example, it is positioned below the three first infrared sensors 11 that are lined up vertically, resulting in four infrared sensors 11, 21 lined up vertically.
  • Second optical member 22: As shown in FIGS. 3(a) and 3(b), the second optical member 22 of this embodiment is a second lens set consisting of a plurality of lenses L. Like the first lens set 12, this second lens set 22 is formed integrally with the cover FC.
  • the second lens sets 22 are positioned so that they overlap with each of the first lens sets 12 when the cover FC (or the housing H) is viewed from above.
  • each second lens set 22 is composed of nine lenses L.
  • the second detection circuit is formed on the substrate S, for example, and compares the value of the analog output signal received from the second infrared sensor 21 with a predetermined threshold, and outputs a detection signal, which is a digital output signal having a value of 1 or 0 depending on the magnitude of the comparison.
  • the detection circuit here outputs a detection signal with a value of 1 when an object is detected, and a detection signal with a value of 0 when no object is detected.
  • the determination unit 3 is physically a computer (not shown) housed in, for example, the housing H.
  • This computer is equipped with a CPU, memory, an I/O interface, a communication interface, etc., and performs the function of the determination unit 3, which determines whether or not a person has entered the detection area X, by operating the CPU and peripheral devices in cooperation with each other in accordance with a predetermined program stored in the memory.
  • the determination unit 3 identifies the position in the detection area X where an object is detected (hereinafter referred to as the object detection position) based on the detection signals received from each sensor unit 1, 2, and determines whether a person has entered the detection area X.
  • the object detection position is determined by performing a logical operation on the value of the detection signal from the first sensor unit 1 and the value of the detection signal from the second sensor unit 2. For example, if a detection signal indicating a value of 1 is received from both the first sensor unit 1 and the second sensor unit 2, the area detected by both sensor units 1 and 2 is determined as the object detection position, and if a detection signal having the value of 1 is received only from the first sensor unit 1, the area detected only by the first sensor unit 1 is determined as the object detection position.
  • the detection signal of the first sensor unit 1 and the detection signal of the second sensor unit 2, both of which are generated by the same object are used.
  • the object detection position is identified using the detection signals of each sensor unit 1, 2 received by the judgment unit 3 within a predetermined period.
  • the predetermined period is determined based on, for example, the time it takes for a person to cross a unit zone. This period should be set with a width large enough to allow for errors that may occur due to differences in the arrangement of the unit zones of each sensor unit 1, 2 and variations in optical components or circuits.
  • the judgment unit 3 may also identify the object detection position using the detection signals generated within the predetermined period.
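Pairing detection signals that arrive within the predetermined period might look like the sketch below; the default period value and the function name are illustrative assumptions:

```python
def same_object(t_first: float, t_second: float, period: float = 1.0) -> bool:
    """Treat detection signals from the first and second sensor units as
    caused by the same object only if their arrival times (in seconds)
    fall within the predetermined period."""
    return abs(t_first - t_second) <= period
```

The period would be widened, as the text notes, to absorb timing errors from zone-arrangement differences and component variations.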
  • the determination unit 3 determines that a person has entered the detection area X, it outputs a human detection signal indicating this.
  • the human detection signal is linked to an object detection position that indicates the detected position of the object that generated the human detection signal.
  • the detection area X of the human detection device 100 is formed by combining a plurality of first unit zones and a single second unit zone, as shown in Figures 4 to 6.
  • the detection area X of the human detection device 100 according to the present invention will be described in detail below with reference to Figures 4 to 6.
  • Unit Zones: FIG. 4 is a schematic diagram showing the relationship between the first unit zone and the second unit zone in the detection area X as viewed from the left and right. As described above, each unit zone is formed to extend diagonally downward from the sensor units 1 and 2, which are attached at a position higher than the reference surface. Each unit zone extends until it intersects with the reference surface, and the detection distance of the sensor units 1 and 2 is limited by the unit zone ending at the reference surface.
  • the first unit zone is set to have a longer detection distance than the second unit zone. For this reason, the inclination of the first unit zone relative to the reference plane is set to be gentler than the inclination of the second unit zone relative to the reference plane. Furthermore, when viewed from above, the second unit zone is set to overlap with each of the first unit zones on the near side of the first unit zones.
  • the three first unit zones are set to have approximately the same inclination relative to the reference plane, but this is not limited to this.
  • the first unit zone and the second unit zone are set so that they do not overlap and are lined up one above the other with no gaps between them, but the first unit zone and the second unit zone may also be set so that they overlap or are spaced apart.
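The relationship described above between mounting height, zone inclination, and detection distance follows from simple geometry: a gentler depression angle reaches farther before meeting the reference plane. A sketch with illustrative values (the angles and height below are not from the patent):

```python
import math

def zone_reach(mount_height_m: float, depression_deg: float) -> float:
    """Horizontal distance at which a unit zone, extending obliquely downward
    from the mounting height, intersects the reference plane."""
    return mount_height_m / math.tan(math.radians(depression_deg))

# A gentler angle (first unit zone) reaches farther than a steeper one
# (second unit zone), e.g. from a 2.0 m mounting height:
far = zone_reach(2.0, 15.0)   # first unit zone, gentle inclination
near = zone_reach(2.0, 45.0)  # second unit zone, steeper inclination
```

This is consistent with the first leading edge being set, for example, at twice the distance of the second leading edge.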
  • FIG. 5A is a schematic diagram showing the detection area X when viewed from above.
  • the first projection areas A to C are areas where the first unit zones of the first sensor units 1a to 1c are respectively projected onto the reference surface when viewed from above, and the second projection area D is an area where the second unit zone of the second sensor unit 2 is projected onto the reference surface when viewed from above.
  • when viewed from above, each projection area A to D is formed in a fan shape spreading out from sensor units 1 and 2 within a horizontal angular range (called the horizontal field of view angle) defined by the optical members 12 and 22.
  • Each projection area A to D has its base end directly below each sensor unit 1 and 2 (or housing H) on the reference plane and extends forward to the position where each unit zone intersects with the reference plane and ends. The edge of each projection area A to D farthest from the base end is called the leading edge.
  • the leading edges of the first projection areas A to C are set to be located farther from the base end than the leading edge of the second projection area D (hereinafter also referred to as the second leading edge).
  • the distance from the base end to the leading edge of the first projection areas A to C is set to, for example, twice the distance from the base end to the leading edge of the second projection area D.
  • the first projection areas A to C are set to line up in the left-right direction when viewed from above.
  • the three first projection areas A to C are spaced apart so that they do not overlap when viewed from above, but they may also be lined up without any gaps.
  • the distance between adjacent first projection areas should be such that there is no blind spot between the two projection areas that is larger than the detection target.
  • the horizontal viewing angle of the second projection area D is set to be larger than the horizontal viewing angle of each of the first projection areas A to C.
  • the horizontal viewing angle of the second projection area D is set to be approximately the same as the central angle of the fan-shaped area formed by the multiple first projection areas A to C lined up side by side.
  • the second projection area D is set to overlap with each of the multiple first projection areas A to C. As mentioned above, because each first tip edge is set farther away than the second tip edge, the second projection area D overlaps with each of the first projection areas A to C on the base end side of each of the first projection areas A to C.
  • an area where the first projection areas A to C and the second projection area D overlap is set on the base end side of the detection area X when viewed from above, and an area of the first projection areas A to C that does not overlap with the second projection area D is set on the tip end side.
  • areas that are distinguished in this way by the overlapping state of the first projection area and the second projection area will be referred to as sections.
  • Sections I to VI set in detection area X: FIG. 5B is a schematic diagram showing the six sections set within the detection area X as viewed from above in this embodiment.
  • Sections I to III are sections consisting of only one of the first projection regions A to C (the non-overlapping sections), and sections IV to VI are sections consisting of one of the first projection regions A to C overlapping with the second projection region D (the overlapping sections).
  • the table in FIG. 5C shows the combinations of sections I to VI and the projection regions A to D that make up these sections I to VI.
  • sections I to VI divide the detection area X, as viewed from above, into a matrix pattern from front to back and left to right.
  • Sections I to III on the base end side (or sections IV to VI on the tip end side) are horizontally arranged areas that are the same distance from each other in the front to back direction when viewed from the human detection device 100, but have different left to right arrangements.
  • Sections I, IV, etc., which are made up of the common first projection area A, are vertically arranged areas that are different distances from each other in the front to back direction.
  • in sections IV to VI on the base end side, the first unit zone is set to pass below the predetermined height from the reference plane, so that an object O that enters one of sections IV to VI is detected by both the first sensor unit 1 and the second sensor unit 2.
  • the predetermined height is set to match the height of the object to be detected, and in this case, it is set to match the height of the person being detected.
  • the specified height is set by adjusting the mounting height of each sensor unit 1, 2, the angular range of the unit zone, or the angle of each unit zone relative to the reference plane.
  • by mounting the human detection device 100 at a position close to the height of a person, the device is configured so that a person cannot pass through almost any part of the detection area X without entering a unit zone.
  • each unit zone in this embodiment is formed by a plurality of divided zones as shown in Fig. 6.
  • a divided zone is the angular range (infrared incident range) of infrared rays that pass through the lens and enter the infrared sensors 11 and 21, and is defined by the focal length of the lens.
  • each unit zone is formed by multiple divided zones lined up on the left and right.
  • the first sensor units 1a to 1c have three lenses (the lens set described above), and the first unit zone is formed by three divided zones.
  • the second sensor unit 2 has nine lenses, and the second unit zone is formed by nine divided zones.
  • the angular range (horizontal field of view) of the divided zones when viewed from above is set to be approximately the same for the first sensor unit 1 and the second sensor unit 2.
  • the divided zones that form the first unit zone and the divided zones that form the second unit zone are set to overlap each other.
  • the number of lenses defining the first unit zone is the same as the number of lenses of the second sensor unit 2 that define the area of the second unit zone that overlaps with the first unit zone, and the divided zones of each first unit zone and the divided zones of the second unit zone overlap in a one-to-one relationship. Furthermore, the two overlapping divided zones are set so that their central axes overlap each other when viewed from above.
  • when an object O enters section I, the first sensor unit 1a detects the object and outputs a detection signal with a value of 1.
  • the other sensor units 1b, 1c, and 2 do not detect the object and therefore output detection signals with a value of 0.
  • the determination unit 3 performs a logical operation on each of the received detection signals using a predetermined logical function to identify section I as the object detection position.
  • the predetermined logical function is a four-input, six-output logical function that calculates the logical product of the detection signal values of the four sensor units 1 and 2 and determines which of the six sections I to VI the object has entered.
  • Figure 7 is a truth table representing this logical function.
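The truth table of FIG. 7 is not reproduced in this text, but from the section definitions above it can be sketched as a lookup from the four detection-signal values to a section. The exact table is in the figure; the mapping below is inferred from the description:

```python
def identify_section(a, b, c, d):
    """Map the 1/0 detection signals of first sensor units 1a, 1b, 1c
    (a, b, c) and the second sensor unit 2 (d) to the section an object
    has entered. Returns None when no section is identified."""
    table = {
        (1, 0, 0, 0): "I",    # first projection area A only
        (0, 1, 0, 0): "II",   # B only
        (0, 0, 1, 0): "III",  # C only
        (1, 0, 0, 1): "IV",   # A overlapping second projection area D
        (0, 1, 0, 1): "V",    # B overlapping D
        (0, 0, 1, 1): "VI",   # C overlapping D
    }
    return table.get((a, b, c, d))
```

All-zero inputs fall through to None, matching the no-detection case described below.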
  • when an object O enters section IV, the first sensor unit 1a and the second sensor unit 2 output detection signals having a value of 1, and the other sensor units 1b and 1c output detection signals having a value of 0.
  • the determination unit 3 determines that an object has entered section IV, which is the area where the first projection area A and the second projection area D overlap. More specifically, the determination unit 3 performs a logical operation on each of the received detection signals using the logical function, and identifies section IV as the object detection position.
  • the determination unit 3 determines that an object has entered any of the sections I to VI, that is, if it has identified the object's detection position, it determines that a person has entered the detection area X and outputs a human detection signal indicating this to, for example, a security system. Furthermore, the determination unit 3 outputs object detection position information indicating the section into which the object has entered, linked to the human detection signal. Note that the determination unit 3 may also output an object detection position signal indicating the object detection position separately from the human detection signal.
  • if no object has entered any of the sections, each sensor unit 1, 2 outputs a detection signal with a value of 0.
  • the judgment unit 3 receives these detection signals and performs a logical operation using the logic function to determine that no object has been detected in the detection area X.
  • the detection area X can be divided into more sections than the number of sensor units 1, 2 used, and it is possible to more precisely identify the detected position of an object that has entered the detection area X while suppressing an increase in the number of sensor units 1, 2, thereby suppressing an increase in manufacturing costs or an increase in the size of the device.
  • the four sensor units 1, 2 divide the detection area into six sections.
  • the detected position of an object can be identified by a simple logical operation: calculating the logical product of the values of the detection signals received from both the first sensor unit 1 and the second sensor unit 2.
  • object detection position information indicating the section into which the object has entered is output, it is possible to count the number of times an object has entered each section, for example, and this can be useful for masking detection area X and adjusting the thresholds of each sensor unit.
  • the detection area X is divided into sections arranged in a matrix from front to back and left to right, when adjusting the detection area X using masking or other methods, it is easier to imagine what the detection area will look like after adjustment, making area adjustment simple.
  • the timing of the detection signals output from the first sensor unit 1 and the second sensor unit 2 that detect the same object can be aligned. This simplifies signal processing in the determination unit 3.
  • the human detection device 100 of the second embodiment differs from the first embodiment in that, when the detection area X is viewed from above, the second projection area D is set to have an outer area that extends outward from the first projection area C located at the right end of the first projection areas A to C. Note that the outer area may extend outward from the first projection area A located at the left end, or may extend outward from each of the first projection areas A and C located at the left and right ends.
  • the horizontal viewing angle of the second projection area D in the second embodiment is set to be larger than the central angle of the fan-shaped area formed by the multiple first unit zones lined up on the left and right, and the second projection area D extends outward (to the right) from the rightmost first unit zone in the detection area X, forming an outer area.
  • the detection area X viewed from above includes an area (sections I to III) made up only of the first projection areas A to C, an area (sections IV to VI) made up of the overlapping first projection areas A to C and the second projection area D, and an outer area (section VII) made up only of the second projection area D.
  • the detection area X is divided into seven sections by the four sensor units 1 and 2.
  • when the determination unit 3 receives a detection signal indicating that an object has been detected from only the second sensor unit 2, it determines that an object has entered the outer region of the second projection region D of the second sensor unit 2.
  • when a detection signal indicating that an object has been detected is received from only the second sensor unit 2, the determination unit 3 determines that an object has entered section VII, and identifies this section VII as the object detection position.
  • the detection area X can be divided into smaller areas without increasing the number of sensor units 1 and 2, making it possible to more precisely identify the detection position of an object that has entered the detection area X.
  • the human detection device 100 of the third embodiment differs from the previous embodiments in that, as shown in Figure 9(a), when the detection area X is viewed from above, the first projection areas A and B are arranged so that they partially overlap each other.
  • the human detection device 100 of the third embodiment includes two first sensor units 1a, 1b and a single second sensor unit 2.
  • the two first lens sets 12 are arranged so as to overlap each other when viewing the cover FC (or the housing H) from above, and the detection area X described above is defined.
  • the two first projection areas A and B arranged side by side are configured so that they partially overlap each other from their respective base ends to their respective tip ends.
  • the second projection area D is also configured so that it overlaps with the base ends of each of the first projection areas A and B.
  • the detection area X viewed from above includes areas consisting of only one of the first projection areas A and B (sections I and III), areas consisting of one of the first projection areas A and B overlapping with the second projection area D in a one-to-one relationship (sections IV and VI), an area where two of the first projection areas A and B overlap (section II), and an area where the area where two of the first projection areas A and B overlap is further overlapped by the second projection area D (section V).
  • the three sensor units 1a, 1b, and 2 divide the detection area X into six sections in a matrix.
  • the determination unit 3 of the third embodiment determines that an object has entered the area where the first projection areas A and B overlap when it receives detection signals from the two first sensor units 1a and 1b indicating that an object has been detected. Specifically, when it receives detection signals from both first sensor units 1a and 1b indicating that an object has been detected, it determines that an object has entered section II and identifies this section II as the object detection position.
  • the determination unit 3 receives detection signals from the two first sensor units 1a, 1b indicating that an object has been detected, and also receives a detection signal from the second sensor unit 2 indicating that an object has been detected, it determines that an object has entered the area where the unit zones of these three sensor units 1a, 1b, 2 overlap.
  • when detection signals indicating that an object has been detected are received from all of the first sensor units 1a, 1b and the second sensor unit 2, the determination unit 3 determines that an object has entered section V, and identifies this section V as the object detection position.
  • the detection area X can be divided into smaller areas without increasing the number of sensor units, making it possible to more precisely identify the detection position of an object that has entered the detection area X.
  • the human detection device 100 of the fourth embodiment further includes an intrusion count suggestion data output unit 4 that generates position-specific intrusion count suggestion data that enables the number of object intrusions into each object detection position identified by the judgment unit 3 to be determined, and transmits this to other devices such as a display.
  • the intrusion count suggestion data output unit 4 is physically provided as a computer common to the above-mentioned determination unit 3.
  • when the intrusion count suggestion data output unit 4 receives an object detection position signal from the judgment unit 3 indicating the position where an object has intruded (here, one of sections I to VI), it acquires the time at which the signal was received, associates the section into which the object has intruded with the corresponding reception time, and stores this in a specified area of memory.
  • the reception history of object detection position signals for each section is stored together as log data.
  • the intrusion count suggestion data output unit 4 references a specified area of the memory, generates position-specific intrusion count suggestion data indicating the number of times an object has intruded into each section within a specified period, and outputs this in a format that can be read by other devices. For example, the intrusion count suggestion data output unit 4 generates table data that allows the number of times an object has intruded into each section to be ascertained, and outputs this to a display.
  • the number of object intrusions at each object detection position can be determined using the intrusion count suggestion data by position. Therefore, if the number of intrusions exceeds a certain number, it is assumed that there is some factor causing a false alarm, and object detection at that object detection position can be stopped or the object detection level adjusted to reduce false alarms.
  • the determination unit in each of the above embodiments determines that a person has entered the detection area when it identifies the detected position of an object within the detection area, and outputs a person detection signal.
  • the judgment unit of the fifth embodiment determines that a person has entered the detection area when the detection position of an object is identified and the detection position is set to an alert position, and outputs a person detection signal.
  • the determination unit references alert position data stored in advance in a specified area of memory, and if it determines that an object has entered a position corresponding to one or more alert positions indicated in the alert position data, it determines that a person has entered the detection area.
  • the alert position data here is, for example, a binary value indicating alert or non-alert, set for each of sections I to VI that divide the detection area.
  • the alert position data stored in a specified area of memory is held in a changeable state, and the determination unit is configured to change the alert position data automatically under specified conditions, or when it receives a change command from outside.
  • the effective detection area can be set more flexibly by designating the area within the entire detection area where you want to detect human intrusion as an alert position, and designating the remaining area as a non-alert position.
  • the range of the effective detection area can be adjusted without physical masking. Unlike physical masking, this allows for stable area adjustment regardless of the skill of the person adjusting the detection area. For example, because it is not physical area adjustment, an operator can remotely set areas with a high number of object intrusions as non-alert positions through input from an input device configured to communicate with the human detection device.
  • the human detection device 100 is installed at a position higher than the height of a person (here, at a position about twice the height), and each unit zone at the base end of the detection area X is set to pass above a predetermined height.
  • the area where the first projection area and the second projection area overlap is further divided into an area where an intruder enters both the first unit zone and the second unit zone, an area where an intruder enters only the second unit zone, and an area where an intruder does not enter any unit zone.
  • the determination by the determination unit as to whether a person has entered the detection area may be performed independently of the determination unit's identification of the object detection position. For example, the determination unit may determine that a person has entered the detection area when it receives a detection signal indicating that an object has been detected from any one of the sensor units. Independently of this determination, the determination unit may calculate the values of the detection signals received from each sensor unit to identify the detection position of the object entering the detection area.
  • the detection signals used by the determination unit to detect people and identify object detection positions may be analog output signals indicating the amount of fluctuation in the incident infrared light detected by each infrared sensor, and may be of any type as long as the determination unit calculates the detection signals from both the first and second sensor units to identify the object detection position.
  • the determination unit may identify the area where the first projection area of the first sensor unit and the second projection area of the second sensor unit overlap as the object detection position.
  • the determination unit may identify, as the object detection position, an area where the first projection areas of two or more first sensor units overlap, based on the analog output signals of those two or more first sensor units.
  • the determination unit may determine whether an object has entered the overlapping area between the first projection area and the second projection area based on a detection signal indicating that an object has been detected by the first sensor unit and a detection signal indicating that an object has been detected by the second sensor unit received a predetermined time after the generation of the first detection signal.
  • the predetermined time corresponds to the time it takes for an object that has entered the first projection area to enter the second projection area, and is calculated based on, for example, the distance from the first leading edge to the second leading edge.
  • in a human detection device using a pyroelectric PIR element, if an object enters the first projection area and continues to move straight toward the human detection device within the detection area, it can be detected when it enters the second projection area, and the approach of the object can be determined.
  • the number of first sensor units was two or three, but it may be four or more, as long as there is more than one.
  • in the present invention, by overlapping each of the multiple first projection areas with a single second projection area, it is possible to set at least "the number of first sensor units x 2" areas within the detection area.
  • a human detection device may be provided with multiple sensor unit groups, each consisting of multiple first sensor units and a single second sensor unit that forms a second projection area that overlaps with the first projection area.
  • the arrangement of the infrared sensors or optical components in each sensor unit is not limited to that described in the above embodiments. Furthermore, the configuration of the infrared sensors and optical components may differ between sensor units.
  • the infrared sensor may be a single type or a quad type, may be a PIR sensor such as a thermopile, or may be an active infrared (AIR) sensor having an irradiator that emits infrared light from an LED and a receiver that receives the reflected infrared light.
  • each optical element may be a single lens.
  • the optical element may be composed of a mirror, or may be composed of a combination of a lens and a mirror.
  • the human detection device in each of the above embodiments is configured with a sensor unit and a computer functioning as a judgment unit, etc., located within a common housing, but the physical configuration of the human detection device according to the present invention is not limited to this. Furthermore, the physical locations of each component element are not limited to the locations shown in each embodiment.
  • the sensor unit may be equipped with an analog circuit including a comparator, an AD converter, and a digital circuit such as a computer or PLD that performs the functions of the detection circuit, and this computer may be shared with the judgment unit.
  • the sensor units do not need to be located in the same housing, as long as they are arranged relatively close together.
  • the information processing device is, for example, a computer equipped with a CPU, memory, input/output interfaces such as a display, a communication interface, etc.
  • a common information processing device can be used to integrate and process the multiple human detection devices.
  • according to the present invention, it is possible to provide a human detection device that can identify the position of an object that has entered a detection area, while suppressing increases in manufacturing costs and the size of the device.
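The per-section alert-position scheme of the fifth embodiment described above can be sketched as follows. This is a minimal illustrative sketch only: the dictionary layout, names, and flag values are assumptions for illustration, not part of the disclosed device.

```python
# Hypothetical per-section alert flags (True = alert position), held in a
# changeable state as described for the fifth embodiment.  Setting a
# section's flag to False masks it without any physical masking.
alert_positions = {"I": True, "II": True, "III": False,
                   "IV": True, "V": False, "VI": True}

def person_detected(section, alert_positions):
    """Report a person only when the identified object detection position
    is one of the sections currently set as an alert position."""
    return section is not None and alert_positions.get(section, False)
```

For example, an operator could remotely set a section with frequent false alarms (say, section III) to a non-alert position simply by changing its flag, and `person_detected("III", alert_positions)` would then stay false even though the object detection position is still identified.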


Abstract

In order to make it possible to identify the position of an object and suppress an increase in cost and device size, this person detecting device comprises a plurality of sensor units that are attached in a position higher than a reference surface and that detect an object that has entered a unit zone extending obliquely downward, and a determining unit that determines whether or not a person has entered a detection area formed by the unit zones, on the basis of detection signals from each sensor unit, wherein: a plurality of first sensor units and a single second sensor unit are provided as the sensor units; the distal edges of first projection regions, in which the unit zones of each first sensor unit are projected onto the reference surface when viewed from above, are set so as to be located further away than the distal edge of a second projection region, in which the unit zone of the second sensor unit is projected onto the reference surface, such that the first projection regions are arranged in the left-right direction and the second projection region overlaps each first projection region; and the determining unit performs a computation using the values of the detection signals of the first sensor units and the second sensor unit to identify the position of the object.

Description

Human detection device

The present invention relates to a human detection device that detects people who enter a specified detection area.

Patent Document 1 describes a human detection device that has three object-detecting detectors and detects people who enter a specified detection area. This human detection device outputs a human body detection signal indicating that a person has been detected when any one of the three detectors detects an object.

Japanese Unexamined Patent Application Publication No. 2010-071761

However, the above-mentioned human detection device cannot determine the position within the detection area of an object that has entered the detection area.

The present invention was made to solve the above problems, and its intended purpose is to provide a human detection device that can identify the position of an object that has entered the detection area, while preventing increases in manufacturing costs and the size of the device.

In other words, the present invention has the following configuration:

[1]
a plurality of sensor units attached at a position higher than a reference plane, forming a unit zone extending obliquely downward, and detecting an object entering the unit zone;
a determination unit that determines whether or not a person has entered a detection area formed by the unit zones of each of the sensor units based on a detection signal from each of the sensor units,
The sensor units include a plurality of first sensor units and a single second sensor unit,
first projection areas, which are obtained by projecting the unit zones of the first sensor units onto the reference surface when viewed from above, are set to be aligned in the left-right direction;
a second projection area obtained by projecting the unit zone of the second sensor unit onto the reference surface as viewed from above is set to overlap with each of the first projection areas;
a leading edge of the first projection area is set to be located farther away than a leading edge of the second projection area,
A human detection device characterized in that the judgment unit calculates the value of the detection signal from the first sensor unit and the value of the detection signal from the second sensor unit, and identifies the object detection position in the detection area.

In the present invention configured in this manner, multiple first projection areas are lined up in the left-right direction on the side farther from the leading edge of the second projection area, and a single second projection area is divided into multiple regions lined up in the left-right direction by the overlapping first projection areas on the side closer to the leading edge of the second projection area. This allows the detection area to be divided into regions lined up in a matrix, and by performing a logical AND operation on the detection signals of the first sensor unit and the second sensor unit, it is possible to identify which of the matrix-divided regions an object has entered. As a result, with the present invention, the detection area can be divided into more regions than the number of sensor units, making it possible to more precisely identify the detected position of an object that has entered the detection area while suppressing an increase in the number of sensor units and thereby preventing an increase in manufacturing costs or an increase in the size of the device.
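As an illustration of the matrix-style section identification described above, the logical operation for the first-embodiment layout (three first sensor units forming projection areas A to C and one second sensor unit forming area D, dividing the detection area into sections I to VI) can be sketched as follows. The function name and the 0/1 encoding of the detection signals are assumptions for illustration, not the claimed implementation.

```python
def identify_section(a, b, c, d):
    """Identify which of sections I-VI an object has entered, from the
    binary detection signals of the three first sensor units
    (a, b, c: projection areas A to C) and the second sensor unit
    (d: projection area D).

    Returns None when no row of the truth table matches (no detection,
    or a signal pattern outside the table)."""
    first_hits = [col for col, v in enumerate((a, b, c)) if v]
    if len(first_hits) != 1:
        return None
    # d == 1 means the object is also inside the second projection area D,
    # i.e. the near row (sections IV-VI); d == 0 means the far row (I-III).
    sections = ("I", "II", "III", "IV", "V", "VI")
    return sections[first_hits[0] + (3 if d else 0)]
```

For example, detection signals (1, 0, 0, 1) from units 1a and 2 yield section IV, matching the area where projection areas A and D overlap.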

[2]
The determination unit
When a detection signal indicating that an object has been detected is received from both the first sensor unit and the second sensor unit, it is determined that an object has entered an area where the first projection area and the second projection area overlap,
A human detection device as described in [1], which, when receiving a detection signal indicating that an object has been detected from only the first sensor unit, determines that an object has entered an area of the first projection area that does not overlap with the second projection area.
The configuration [2] is a specific example of the arithmetic processing of the detection signals by the determination unit. By calculating the logical product of the values of the detection signals received from both the first sensor unit and the second sensor unit, which indicate whether an object has been detected, the detected position of the object can be identified through simple calculations.

[3]
the second projection area is set to have an outer area extending outward from a first projection area located at the left end or the right end of each of the first projection areas,
The determination unit
The human detection device according to [1], which determines that an object has entered the outer area when a detection signal indicating that an object has been detected is received from only the second sensor unit.
With this configuration, it is possible to further divide the detection area into more areas by setting up an area that is detected only by the second sensor unit within the detection area, thereby making it possible to identify the detected position of an object that has entered the detection area in more detail while suppressing an increase in the number of sensor units.
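A sketch of how the outer area of [3] extends the section identification: when only the second sensor unit reports a detection, the object is placed in the outer section (section VII in the second embodiment). The names and encoding are illustrative assumptions, not the claimed implementation.

```python
def identify_section_with_outer(a, b, c, d):
    """Section identification for a layout whose second projection area D
    extends outward past the rightmost first projection area, adding an
    outer section VII that is detected by the second sensor unit alone."""
    first_hits = [col for col, v in enumerate((a, b, c)) if v]
    if not first_hits:
        # only the second sensor unit detected the object: outer area
        return "VII" if d else None
    if len(first_hits) > 1:
        return None  # outside the truth table for this configuration
    return ("I", "II", "III", "IV", "V", "VI")[first_hits[0] + (3 if d else 0)]
```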

[4]
The first projection areas are arranged so as to partially overlap each other, and when the judgment unit receives a detection signal from two of the first sensor units indicating that an object has been detected, it judges that an object has entered the area where the first projection areas overlap. [1] A human detection device as described in
With this configuration, it is possible to set areas that are detected by two first sensor units within the detection area, thereby dividing the detection area into even more areas.
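The layout of [4], with two partially overlapping first projection areas A and B plus the second projection area D (as in the third embodiment), can be expressed as a small truth table. This is an illustrative sketch under assumed names and encoding, not the claimed implementation.

```python
# Truth table for the third-embodiment layout: two first sensor units whose
# projection areas A and B partially overlap, plus one second sensor unit
# (area D).  Keys are (a, b, d) binary detection-signal triples.
SECTION_TABLE = {
    (1, 0, 0): "I",    # A only
    (1, 1, 0): "II",   # A and B overlap
    (0, 1, 0): "III",  # B only
    (1, 0, 1): "IV",   # A and D overlap
    (1, 1, 1): "V",    # A, B and D all overlap
    (0, 1, 1): "VI",   # B and D overlap
}

def identify_section_overlapping(a, b, d):
    """Return the section for the given signal triple, or None if no
    section of the table matches (e.g. no detection at all)."""
    return SECTION_TABLE.get((a, b, d))
```

Three sensor units thus distinguish six sections, one per row of the table.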

[5]
the sensor unit includes a PIR element and an optical member that defines the unit zone, which is a range of infrared light incident on the PIR element;
the optical member has a plurality of lenses arranged side by side, and divided zones, which are infrared incidence ranges defined by the respective lenses, are arranged side by side when viewed from above, forming the unit zones;
A human detection device as described in [1], wherein the number of lenses defining the unit zone of the first sensor unit is the same as the number of lenses of the second sensor unit that define the area of the unit zone of the second sensor unit that overlaps with the unit zone of the first sensor unit when viewed from above.
With this configuration, since the first sensor unit and the second sensor unit have the same number of lenses, it is possible to align and overlap the divided zones of these two sensor units, which makes it easier to align the timing of the detection signals output from the first sensor unit and the second sensor unit, making it easier to determine whether the detection signals represent the same object.

[6]
The human detection device described in [1] further comprises an intrusion count suggestion data output unit that generates position-specific intrusion count suggestion data that can grasp the number of times an object has intruded into each object detection position identified by the judgment unit, and transmits this to other devices such as a display.
With this configuration, the number of times an object has intruded into each object detection position can be ascertained using the position-specific intrusion count suggestion data, and therefore, if the object detection position has an intrusion count exceeding a considerable number, it is assumed that there is some factor causing a false alarm, and object detection at that object detection position can be stopped or the object detection level can be adjusted to reduce false alarms.
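A minimal sketch of how the intrusion count suggestion data output unit of [6] might keep its reception history and build position-specific intrusion-count data for another device such as a display. The class and method names are assumptions for illustration, not the disclosed implementation.

```python
from collections import Counter
from datetime import datetime, timezone

class IntrusionCountLogger:
    """Keeps a reception history of object detection position signals and
    builds position-specific intrusion-count data."""

    def __init__(self):
        self.history = []        # (reception time, section) log data
        self.counts = Counter()  # intrusions per section

    def record(self, section):
        # associate the section with the time the signal was received
        self.history.append((datetime.now(timezone.utc), section))
        self.counts[section] += 1

    def suggestion_data(self):
        # table-like data giving the number of intrusions per section,
        # suitable for output to a display or other device
        return dict(self.counts)
```

A section whose count grows unusually large can then be flagged as a likely false-alarm source and masked or given an adjusted detection threshold.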

With this configuration, the present invention can divide the detection area into more regions than the number of sensor units used, making it possible to provide a human detection device that can identify the position of an object that has entered the detection area while preventing increases in manufacturing costs and the device size.

  • Overall schematic diagram of a human detection device according to an embodiment of the present invention.
  • Functional block diagram of the human detection device according to the embodiment.
  • Schematic diagram of the internal structure of the human detection device according to the embodiment, as viewed from above.
  • Schematic diagram of the circuit board of the human detection device of the embodiment, as viewed from the front.
  • Schematic diagram of the detection area in the embodiment, viewed from the left-right direction.
  • Schematic diagram of the detection area in the embodiment, viewed from above.
  • Schematic diagram showing the sections set within the detection area in the embodiment.
  • Table showing the relationship between projection areas and sections in the embodiment.
  • Schematic diagram of unit zones and divided zones in the embodiment, viewed from above.
  • Truth table of the logic function used by the determination unit of the embodiment.
  • Schematic diagram of the detection area in the second embodiment, viewed from above.
  • Schematic diagram showing the sections set within the detection area in the second embodiment.
  • Table showing the relationship between projection areas and sections in the second embodiment.
  • Schematic diagram of the detection area in the third embodiment, viewed from above.
  • Schematic diagram showing the sections set within the detection area in the third embodiment.
  • Table showing the relationship between projection areas and sections in the third embodiment.
  • Schematic diagram of the circuit board of the human detection device of the third embodiment, viewed from the front.
  • Functional block diagram of the human detection device in the fourth embodiment.
  • Schematic diagram of the detection area in the sixth embodiment, viewed from the left-right direction.

100 … Human detection device
H … Housing
FC … Cover
S … Board
1 … First sensor unit
11 … First infrared sensor
12 … First optical member
2 … Second sensor unit
21 … Second infrared sensor
22 … Second optical member
3 … Determination unit
4 … Intrusion count suggestion data output unit
X … Detection area
A–C … First projection areas
D … Second projection area

 以下、本発明に係る人検知装置の第1実施形態について図面を参照して説明する。 The first embodiment of the human detection device according to the present invention will be described below with reference to the drawings.

[第1実施形態]
1.全体構成
 本実施形態の人検知装置は、例えば所定の検知エリアに不審者が侵入した場合に、これを検知して発報する侵入検知システムに使用されるものである。
[First embodiment]
1. Overall Configuration The human detection device of this embodiment is used in an intrusion detection system that detects and issues an alarm when a suspicious person intrudes into a predetermined detection area, for example.

 この人検知装置は、図1から図3に示すように、基準面よりも高い位置に取り付けられるとともに、斜め下方に向かって延びる単位ゾーンを形成しこの単位ゾーンに侵入した物体を検出する複数の第1センサユニット1及び単一の第2センサユニット2と、前記各センサユニット1,2の検出信号に基づいて検知エリアXに人が侵入したか否かを判断する判断部3とを備えたものである。基準面は、例えばこの人検知装置100の設置場所の地面、床面、又は当該地面や床面と平行な面である。 As shown in Figures 1 to 3, this human detection device is mounted at a position higher than a reference plane, and comprises a plurality of first sensor units 1 and a single second sensor unit 2 that form unit zones extending diagonally downward and detect objects that enter these unit zones, and a determination unit 3 that determines whether a human has entered detection area X based on the detection signals of each of the sensor units 1 and 2. The reference plane is, for example, the ground or floor surface of the location where this human detection device 100 is installed, or a surface parallel to the ground or floor surface.

 前記各センサユニット1,2はまとまって一か所に設けられており、ここでは前記判断部3とともに共通の筐体Hに収容されている。ここでの筐体Hは、図1に示すように、縦長の柱状をなし、その前面が前記検知エリアXを向くように、壁や柱、天井等に取り付けられるものである。筐体Hの取り付け高さは、例えば、前記基準面から0.5~4.0m程度に設定されている。 The sensor units 1 and 2 are all installed in one location, and here they are housed together with the determination unit 3 in a common housing H. As shown in Figure 1, the housing H here is a vertically elongated pillar that is attached to a wall, pillar, ceiling, etc. with its front surface facing the detection area X. The mounting height of the housing H is set to, for example, approximately 0.5 to 4.0 m from the reference plane.

 人検知装置100から視て鉛直方向に上か下かで上下方向が規定され、人検知装置100から視て検知エリアXの中心に向かう方向を前方向として前後方向が規定され、前記上下方向及び前後方向と直交する方向が左右方向として規定されるものとする。 The up-down direction is defined as being either up or down in the vertical direction when viewed from the human detection device 100, the front-to-back direction is defined as the direction toward the center of the detection area X when viewed from the human detection device 100, and the direction perpendicular to the up-down direction and the front-to-back direction is defined as the left-to-right direction.

2.装置構成
2-1.第1センサユニット1
 各第1センサユニット1は、図2に示すように、第1赤外線センサ11と、前記第1赤外線センサ11に入射する赤外線の範囲である第1単位ゾーンを規定する第1光学部材12と、前記第1赤外線センサ11の出力信号に基づいて検出信号を出力する第1検出回路とを有するものである。
2. Device Configuration 2-1. First Sensor Unit 1
As shown in Figure 2, each first sensor unit 1 has a first infrared sensor 11, a first optical member 12 that defines a first unit zone, which is the range of infrared light that enters the first infrared sensor 11, and a first detection circuit that outputs a detection signal based on the output signal of the first infrared sensor 11.

 本実施形態では、第1センサユニット1が3つ設けられている。これら3つの第1センサユニットの各第1単位ゾーンは、左右方向に互いに異なる方向を向くように設定されている。ここでは3つの第1センサユニット1を区別する場合には、1a、1b、1cとの符号を付して表す。 In this embodiment, three first sensor units 1 are provided. The first unit zones of these three first sensor units are set to face in different directions in the left-right direction. Here, when distinguishing between the three first sensor units 1, they are designated by the symbols 1a, 1b, and 1c.

2-1-1.第1赤外線センサ11
 第1赤外線センサ11は、図3(a)(b)に示すように、物体から発せられる赤外線を検出するパッシブ型の赤外線センサであり、ここでは、2つのPIR素子を1つのカンパッケージに搭載した焦電型デュアルタイプのものである。第1赤外線センサ11は、入射赤外線の変動を検出して、その変動量を示すアナログ出力信号を出力するものである。
2-1-1. First infrared sensor 11
As shown in Figures 3(a) and 3(b), the first infrared sensor 11 is a passive infrared sensor that detects infrared rays emitted from an object; here it is a pyroelectric dual-type sensor with two PIR elements mounted in a single can package. The first infrared sensor 11 detects fluctuations in the incident infrared rays and outputs an analog output signal indicating the amount of fluctuation.

 第1赤外線センサ11は、そのセンサ面を前方に向けた姿勢で前記筐体H内に収容された基板Sに搭載されている。ここでは、3つの第1センサユニット1の第1赤外線センサ11が、前記基板Sに上下に並ぶように搭載されている。ここでの基板Sは、いわゆるPCBであり、前記第1赤外線センサ11の搭載面が前方を向くように配置されている。 The first infrared sensor 11 is mounted on a circuit board S housed within the housing H with its sensor surface facing forward. Here, the first infrared sensors 11 of the three first sensor units 1 are mounted on the circuit board S in a vertical line. The circuit board S here is a so-called PCB, and is arranged so that the mounting surface of the first infrared sensor 11 faces forward.

2-1-2.第1光学部材12
 本実施形態の第1光学部材12は、図3(a)(b)に示すように、複数のレンズLからなる第1レンズセットである。この第1レンズセット12は、前記筐体Hの前面を構成するカバーFCと一体に形成されている。カバーFCは、前記基板Sの赤外線センサ搭載面に対向して配置され、当該基板Sを覆うように設けられている。
2-1-2. First optical member 12
As shown in Figures 3(a) and 3(b), the first optical member 12 of this embodiment is a first lens set consisting of a plurality of lenses L. This first lens set 12 is formed integrally with a cover FC that forms the front surface of the housing H. The cover FC is disposed opposite the infrared-sensor mounting surface of the substrate S and covers the substrate S.

 第1レンズセット12は、複数の第1センサユニット1ごとに設けられており、各第1レンズセット12は同じ第1センサユニット1の第1赤外線センサ11のセンサ面と前後に対向するように設けられている。本実施形態では、上下に並ぶ3つの第1赤外線センサ11に合わせて、3つの第1レンズセット12が上下に並んでカバーFCに設けられている。 A first lens set 12 is provided for each of the multiple first sensor units 1, and each first lens set 12 is provided so as to face the sensor surface of the first infrared sensor 11 of the same first sensor unit 1 from front to back. In this embodiment, three first lens sets 12 are provided on the cover FC in a vertically aligned manner to match the three vertically aligned first infrared sensors 11.

 各第1センサユニット1の第1レンズセット12は、カバーFC(又は筐体H)を上方から視て、左右方向に互いに重ならないようにズラして配置されている。 The first lens sets 12 of each first sensor unit 1 are offset left and right so as not to overlap when viewing the cover FC (or housing H) from above.

 上下に隣り合うレンズセットの間には、赤外線を遮蔽する遮蔽部材(図示しない)が設けられているとよい。例えば水平方向に沿った板状の遮蔽部材を筐体Hの内壁に設けることで、一のセンサユニットのレンズセットを通った赤外線が、他のセンサユニットの赤外線センサに入射しないように構成できる。 It is advisable to provide a shielding member (not shown) between adjacent lens sets in the vertical direction to block infrared rays. For example, by providing a horizontal, plate-shaped shielding member on the inner wall of the housing H, it is possible to prevent infrared rays that pass through the lens set of one sensor unit from entering the infrared sensors of other sensor units.

 レンズLは、検知エリアX内からの赤外線を各赤外線センサに集光して入射させるものであり、ここではフレネルレンズである。ここでの各レンズLは、同じレンズセットを構成する他のレンズLと繋がって設けられているが、離間して設けられていてもよい。 The lenses L, which are Fresnel lenses in this example, focus infrared light from within the detection area X onto each infrared sensor. Each lens L is connected to other lenses L that make up the same lens set, but they may also be spaced apart.

 各第1レンズセット12を構成する複数のレンズLは、図3(b)に示すように、前記筐体Hを前方から視て、同じ高さで左右に並ぶように配置されている。第1レンズセット12はそれぞれ3枚のレンズLで構成されている。なお、各第1レンズセット12を構成するレンズの枚数は互いに異なっていてもよい。 As shown in Figure 3(b), the multiple lenses L that make up each first lens set 12 are arranged side by side at the same height when viewing the housing H from the front. Each first lens set 12 is composed of three lenses L. Note that the number of lenses that make up each first lens set 12 may differ from each other.

2-1-3.第1検出回路
 第1検出回路は、例えば前記基板Sに形成されたものであり、前記第1赤外線センサ11から受け付けたアナログ出力信号の値を所定の閾値と比較して、その大小に応じて1または0の値を有するデジタル出力信号である検出信号を出力するものである。ここでの第1検出回路は、物体を検出している場合に1の値を示す検出信号を出力し、物体を検出していない場合は0の値を示す検出信号を出力する。
The first detection circuit is formed on the substrate S, for example. It compares the value of the analog output signal received from the first infrared sensor 11 with a predetermined threshold and outputs a detection signal, a digital output signal whose value is 1 or 0 depending on the result of the comparison. The first detection circuit here outputs a detection signal with a value of 1 when an object is detected, and a detection signal with a value of 0 when no object is detected.

 各第1検出回路は、第1センサユニット1ごとに設けられ、それぞれ対応する第1赤外線センサ11から受け付けた出力信号に基づいて、個別に検出信号を出力する。 Each first detection circuit is provided for each first sensor unit 1, and outputs an individual detection signal based on the output signal received from the corresponding first infrared sensor 11.
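As a rough software sketch of the thresholding behavior described above (this is an illustration, not the actual circuit; the function name and the threshold value are hypothetical):

```python
def detection_signal(analog_value, threshold=0.5):
    """Return 1 when the analog output exceeds the threshold (object detected), else 0."""
    return 1 if analog_value > threshold else 0

# Each first detection circuit thresholds its own sensor's analog output independently.
signals = [detection_signal(v) for v in (0.8, 0.1, 0.6)]  # -> [1, 0, 1]
```

Each circuit thus emits an independent binary stream, which is what the determination unit later combines.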

2-2.第2センサユニット2
 第2センサユニット2は、図2に示すように、第2赤外線センサ21と、前記第2赤外線センサ21に入射する赤外線の範囲である第2単位ゾーンを規定する第2光学部材22と、前記第2赤外線センサ21の出力信号に基づいて検出信号を出力する第2検出回路(図示しない。)とを有するものである。
2-2. Second sensor unit 2
As shown in Figure 2, the second sensor unit 2 has a second infrared sensor 21, a second optical member 22 that defines a second unit zone, which is the range of infrared light that enters the second infrared sensor 21, and a second detection circuit (not shown) that outputs a detection signal based on the output signal of the second infrared sensor 21.

 本実施形態の第2センサユニット2は、第1センサユニット1よりも検出距離が短くなるように構成されており、第1センサユニット1と比較して近距離に位置する物体を検出するものである。 In this embodiment, the second sensor unit 2 is configured to have a shorter detection distance than the first sensor unit 1, and is capable of detecting objects located at a closer distance than the first sensor unit 1.

2-2-1.第2赤外線センサ21
 第2赤外線センサ21は、第1赤外線センサ11と同様に、焦電型デュアルタイプの赤外線センサであり、入射赤外線の変動を検出して、その変動量を示すアナログ出力信号を出力するものである。
2-2-1. Second infrared sensor 21
The second infrared sensor 21 is a pyroelectric dual type infrared sensor, similar to the first infrared sensor 11, and detects fluctuations in incident infrared light and outputs an analog output signal indicating the amount of fluctuation.

 第2赤外線センサ21は、図3(a)(b)に示すように、そのセンサ面を前方に向けた姿勢で前記基板Sに搭載されている。ここでは、上下に並ぶ3つの第1赤外線センサ11の下に配置されており、4つの赤外線センサ11,21が上下に一列に並んでいる。 As shown in Figures 3(a) and 3(b), the second infrared sensor 21 is mounted on the substrate S with its sensor surface facing forward. In this example, it is positioned below the three first infrared sensors 11 that are lined up vertically, resulting in four infrared sensors 11, 21 lined up vertically.

2-2-2.第2光学部材22
 本実施形態の第2光学部材22は、図3(a)(b)に示すように、複数のレンズLからなる第2レンズセットである。この第2レンズセット22は、第1レンズセット12と同様に、前記カバーFCと一体に形成されている。
2-2-2. Second optical member 22
As shown in Figures 3(a) and 3(b), the second optical member 22 of this embodiment is a second lens set consisting of a plurality of lenses L. Like the first lens set 12, this second lens set 22 is formed integrally with the cover FC.

 第2レンズセット22は、カバーFC(又は前記筐体H)を上方から視て、各第1レンズセット12それぞれと重なる位置に配置されている。 The second lens sets 22 are positioned so that they overlap with each of the first lens sets 12 when the cover FC (or the housing H) is viewed from above.

 第2レンズセット22を構成する複数のレンズLは、図3(b)に示すように、前記筐体Hを前方から視て、同じ高さで左右に並ぶように配置されている。第2レンズセット22はそれぞれ9枚のレンズLで構成されている。 As shown in Figure 3(b), the multiple lenses L that make up the second lens set 22 are arranged side by side at the same height when viewing the housing H from the front. Each second lens set 22 is composed of nine lenses L.

2-2-3.第2検出回路
 第2検出回路は、例えば前記基板Sに形成されたものであり、前記第2赤外線センサ21から受け付けたアナログ出力信号の値を所定の閾値と比較して、その大小に応じて1または0の値を有するデジタル出力信号である検出信号を出力するものである。ここでの第2検出回路は、物体を検出している場合に1の値を示す検出信号を出力し、物体を検出していない場合は0の値を示す検出信号を出力する。
The second detection circuit is formed on the substrate S, for example. It compares the value of the analog output signal received from the second infrared sensor 21 with a predetermined threshold and outputs a detection signal, a digital output signal whose value is 1 or 0 depending on the result of the comparison. The second detection circuit here outputs a detection signal with a value of 1 when an object is detected, and a detection signal with a value of 0 when no object is detected.

2-3.判断部3
 判断部3は、物理的には、例えば前記筐体H内に収容されたコンピュータ(図示しない)である。このコンピュータは、CPU、メモリ、I/Oインターフェース、通信インターフェースなどを備えたものであって、前記メモリに格納された所定のプログラムにしたがってCPUや周辺機器を協動することにより、検知エリアXに人が侵入したか否かを判断する判断部3としての機能を発揮するものである。
2-3. Judgment part 3
The determination unit 3 is physically a computer (not shown) housed in, for example, the housing H. This computer is equipped with a CPU, memory, an I/O interface, a communication interface, etc., and performs the function of the determination unit 3, which determines whether or not a person has entered the detection area X, by operating the CPU and peripheral devices in cooperation with each other in accordance with a predetermined program stored in the memory.

 具体的に判断部3は、各センサユニット1,2から受け付けた検出信号に基づいて、検知エリアXにおける物体が検出された位置(以下、物体検出位置という。)を特定するとともに、検知エリアXに人が侵入したか否かを判断するものである。 Specifically, the determination unit 3 identifies the position in the detection area X where an object is detected (hereinafter referred to as the object detection position) based on the detection signals received from each sensor unit 1, 2, and determines whether a person has entered the detection area X.

 前記物体検出位置は、第1センサユニット1からの検出信号の値と、第2センサユニット2からの検出信号の値とを論理演算することで特定される。例えば、第1センサユニット1及び第2センサユニット2の双方から1の値を示す検出信号を受け付けた場合には、これらセンサユニット1,2双方によって検出される領域が物体検出位置として特定され、第1センサユニット1のみから前記1の値を有する検出信号を受け付けた場合には、当該第1センサユニット1のみによって検出される領域が物体検出位置として特定される。 The object detection position is determined by performing a logical operation on the value of the detection signal from the first sensor unit 1 and the value of the detection signal from the second sensor unit 2. For example, if a detection signal indicating a value of 1 is received from both the first sensor unit 1 and the second sensor unit 2, the area detected by both sensor units 1 and 2 is determined as the object detection position, and if a detection signal having the value of 1 is received only from the first sensor unit 1, the area detected only by the first sensor unit 1 is determined as the object detection position.

 物体検出位置の特定には、同じ物体が原因で発生した第1センサユニット1の検出信号と第2センサユニット2の検出信号とが用いられる。具体的には判断部3が所定の期間内に受け付けた各センサユニット1,2の検出信号を用いて物体検出位置が特定される。ここで所定の期間は、例えば、人が単位ゾーンを横切る場合にかかる時間などに基づいて決定される。この期間は、各センサユニット1,2の単位ゾーンの配置の違いや光学部材又は回路のバラツキに基づいてズレてしまう誤差を許容できる程度に幅を持って設定されるとよい。なお判断部3は、所定の期間内で発生した検出信号を用いて物体検出位置を特定してもよい。 To identify the object detection position, the detection signal of the first sensor unit 1 and the detection signal of the second sensor unit 2, both generated by the same object, are used. Specifically, the object detection position is identified using the detection signals of each sensor unit 1, 2 received by the determination unit 3 within a predetermined period. Here, the predetermined period is determined based on, for example, the time it takes a person to cross a unit zone. This period should be set with enough width to tolerate timing errors caused by differences in the arrangement of the unit zones of each sensor unit 1, 2 and by variations in the optical members or circuits. The determination unit 3 may also identify the object detection position using the detection signals generated within the predetermined period.
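The time-window pairing described above can be sketched as a simple coincidence check (the function name and the 1.0-second window are hypothetical illustrations, not values from the specification):

```python
def same_object(t_first, t_second, window=1.0):
    """Treat two detection events as caused by the same object when their
    timestamps fall within the predetermined period (here 1.0 s, a
    hypothetical value covering zone-layout and component tolerances)."""
    return abs(t_first - t_second) <= window
```

Only signal pairs that pass this check would be fed into the position-identifying logic.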

 判断部3は、検知エリアXに人が侵入したと判断した場合に、その旨を示す人検知信号を出力する。本実施形態の人検知信号には、当該人検知信号を発生させた物体の検出位置を示す物体検出位置が紐づけられている。 If the determination unit 3 determines that a person has entered the detection area X, it outputs a human detection signal indicating this. In this embodiment, the human detection signal is linked to an object detection position that indicates the detected position of the object that generated the human detection signal.

3.検知エリアX
 人検知装置100の検知エリアXは、図4から図6に示すように、複数の第1単位ゾーン及び単一の第2単位ゾーンが組み合わさって形成されるものである。以下、図4から図6を参照しながら、本発明に係る人検知装置100の検知エリアXについて詳述する。
3. Detection area X
The detection area X of the human detection device 100 is formed by combining a plurality of first unit zones and a single second unit zone, as shown in Figures 4 to 6. The detection area X of the human detection device 100 according to the present invention will be described in detail below with reference to Figures 4 to 6.

3-1.単位ゾーン
 図4は、左右方向から視た検知エリアXにおける第1単位ゾーンと第2単位ゾーンの関係を表す模式図である。
 各単位ゾーンは、上述したように、基準面より高い位置に取り付けられたセンサユニット1,2から斜め下方に向かって延びるように形成されている。各単位ゾーンは前記基準面と交わるまで延びており、この単位ゾーンが前記基準面を終端に途切れることによって、センサユニット1,2の検出距離が制限されている。
3-1. Unit Zones
FIG. 4 is a schematic diagram showing the relationship between the first unit zones and the second unit zone in the detection area X, as viewed along the left-right direction.
As described above, each unit zone is formed to extend diagonally downward from the sensor units 1 and 2 that are attached at a position higher than the reference surface. Each unit zone extends until it intersects with the reference surface, and the detection distance of the sensor units 1 and 2 is limited by the unit zone ending at the reference surface.

 第1単位ゾーンは、第2単位ゾーンよりも検出距離が長く設定されている。このために、ここでは第1単位ゾーンの前記基準面に対する傾きが、第2単位ゾーンの前記基準面に対する傾きよりも緩やかになるように設定されている。また上方から視た場合に第2単位ゾーンは、第1単位ゾーンの手前側において各第1単位ゾーンそれぞれと重なるように設定されている。 The first unit zone is set to have a longer detection distance than the second unit zone. For this reason, the inclination of the first unit zone relative to the reference plane is set to be gentler than the inclination of the second unit zone relative to the reference plane. Furthermore, when viewed from above, the second unit zone is set to overlap with each of the first unit zones on the near side of the first unit zones.

 本実施形態では、3つの第1単位ゾーンそれぞれの基準面に対する傾きが略同一になるように設定されているが、これに限られない。 In this embodiment, the three first unit zones are set to have approximately the same inclination relative to the reference plane, but this is not limited to this.

 ここでは左右方向から視た検知エリアXにおいて、第1単位ゾーンと第2単位ゾーンとが重なることなく、かつ、隙間なく上下に並ぶように設定されているが、第1単位ゾーンと第2単位ゾーンとは重なるように設定されていてもよく、離間して設定されていてもよい。 Here, in the detection area X viewed along the left-right direction, the first unit zones and the second unit zone are set so that they do not overlap and are lined up one above the other without gaps; however, the first unit zones and the second unit zone may also be set to overlap, or to be spaced apart.

3-2.投影領域
 図5(a)は、上方から視た場合における検知エリアXを表した模式図である。第1投影領域A~Cは、上方から視て第1センサユニット1a~1cの第1単位ゾーンをそれぞれ基準面上に投影した領域であり、第2投影領域Dは、上方から視て第2センサユニット2の第2単位ゾーンを基準面上に投影した領域である。
FIG. 5(a) is a schematic diagram showing the detection area X when viewed from above. The first projection areas A to C are the areas obtained by projecting the first unit zones of the first sensor units 1a to 1c onto the reference plane as viewed from above, and the second projection area D is the area obtained by projecting the second unit zone of the second sensor unit 2 onto the reference plane as viewed from above.

 上方から視た各投影領域A~Dは、前記光学部材12、22によって規定される水平方向における角度範囲(水平視野角という。)でセンサユニット1,2から扇状に広がって形成されている。各投影領域A~Dは、基準面上における各センサユニット1,2(又は筐体H)の真下を基端として前方向に、各単位ゾーンが基準面と交わって途切れる位置まで延びている。各投影領域A~Dにおいて前記基端から最も遠い縁を先端縁という。 When viewed from above, each projection area A to D is formed in a fan shape spreading out from sensor units 1 and 2 within a horizontal angular range (called the horizontal field of view angle) defined by the optical members 12 and 22. Each projection area A to D has its base end directly below each sensor unit 1 and 2 (or housing H) on the reference plane and extends forward to the position where each unit zone intersects with the reference plane and ends. The edge of each projection area A to D farthest from the base end is called the leading edge.

 第1投影領域A~Cの先端縁(以下、第1先端縁ともいう。)は、第2投影領域Dの先端縁(以下、第2先端縁ともいう。)よりも、前記基端から遠くに位置するように設定されている。ここでは、第1投影領域A~Cの基端から先端縁までの距離は、例えば第2投影領域Dの基端から先端までの距離の2倍に設定されている。 The leading edges of the first projection areas A to C (hereinafter also referred to as the first leading edges) are set to be located farther from the base end than the leading edge of the second projection area D (hereinafter also referred to as the second leading edge). Here, the distance from the base end to the leading edge of the first projection areas A to C is set to, for example, twice the distance from the base end to the tip of the second projection area D.

 各第1投影領域A~Cは上方から視て左右方向に並ぶように設定されている。ここでの3つの第1投影領域A~Cは、上方から視て重ならないように離間して並んでいるが、隙間無く並んでいてもよい。隣り合う第1投影領域間の距離は、その二つの投影領域の間に検知対象以上の大きさの死角ができない程度の間隔がよい。 The first projection areas A to C are set to line up in the left-right direction when viewed from above. Here, the three first projection areas A to C are spaced apart so that they do not overlap when viewed from above, but they may also be lined up without any gaps. The distance between adjacent first projection areas should be such that there is no blind spot between the two projection areas that is larger than the detection target.

 第2投影領域Dの水平視野角は、各第1投影領域A~Cの水平視野角よりも大きく設定されている。ここでは、第2投影領域Dの水平視野角は、左右に並ぶ複数の第1投影領域A~Cが集まって形成される扇状の領域の中心角と略同一に設定されている。 The horizontal viewing angle of the second projection area D is set to be larger than the horizontal viewing angle of each of the first projection areas A to C. Here, the horizontal viewing angle of the second projection area D is set to be approximately the same as the central angle of the fan-shaped area formed by the multiple first projection areas A to C lined up side by side.

 第2投影領域Dは、複数の第1投影領域A~Cそれぞれと重なるように設定されている。前述の通り、各第1先端縁が第2先端縁より遠い位置に設定されていることにより、第2投影領域Dは、各第1投影領域A~Cの基端側において、各第1投影領域A~Cそれぞれと互いに重なる。 The second projection area D is set to overlap with each of the multiple first projection areas A to C. As mentioned above, because each first tip edge is set farther away than the second tip edge, the second projection area D overlaps with each of the first projection areas A to C on the base end side of each of the first projection areas A to C.

 このようにして本実施形態では、上から視た検知エリアXの基端側に第1投影領域A~Cと第2投影領域Dとが重なる領域が設定され、先端側に第1投影領域A~Cのうち第2投影領域Dと重ならない領域が設定されている。以下、このように第1投影領域と第2投影領域との重なり態様によって、区別される領域を区画ということとする。 In this way, in this embodiment, an area where the first projection areas A to C and the second projection area D overlap is set on the base end side of the detection area X when viewed from above, and an area of the first projection areas A to C that does not overlap with the second projection area D is set on the tip end side. Hereinafter, areas that are distinguished in this way by the overlapping state of the first projection area and the second projection area will be referred to as sections.

3-3.検知エリアXに設定される区画I~VI
 図5(b)は、本実施形態において、上方から視た検知エリアX内に設定される6つの区画を表した模式図である。区画I~IIIは、各第1投影領域A~Cのいずれか1つのみで構成されている領域(前記重ならない領域)であり、区画IV~VIは、各第1投影領域A~Cのいずれか1つと第2投影領域Dとが重なって構成されている領域(前記重なる領域)である。図5(c)の表は、各区画I~VIと、これらの区画I~VIを構成する投影領域A~Dの組み合わせを表している。
3-3. Sections I to VI set in detection area X
FIG. 5(b) is a schematic diagram showing the six sections set within the detection area X as viewed from above in this embodiment. Sections I to III are areas each consisting of only one of the first projection areas A to C (the non-overlapping areas), and sections IV to VI are areas each consisting of one of the first projection areas A to C overlapping the second projection area D (the overlapping areas). The table in FIG. 5(c) shows, for each of sections I to VI, the combination of projection areas A to D that makes it up.

 本実施形態の区画I~VIは、上方から視た検知エリアXを前後左右にマトリクス状に分割している。基端側の区画I~III(又は先端側の区画IV~VI)は、人検知装置100から視て前後方向の距離は同じだが左右方向の配置が互いに異なる横並びの領域である。共通の第1投影領域Aで構成される区画I、IV等は、前後方向の距離が互いに異なる縦並びの領域である。 In this embodiment, sections I to VI divide the detection area X, as viewed from above, into a matrix pattern from front to back and left to right. Sections I to III on the base end side (or sections IV to VI on the tip end side) are horizontally arranged areas that are the same distance from each other in the front to back direction when viewed from the human detection device 100, but have different left to right arrangements. Sections I, IV, etc., which are made up of the common first projection area A, are vertically arranged areas that are different distances from each other in the front to back direction.

 本実施形態では、基準面に対して所定の高さを有する物体Oが区画I~IIIに侵入した場合に、図4に示すように、第1単位ゾーンだけに侵入し第1センサユニット1のみで検出されるように設定されている。また本実施形態では、前記物体Oが区画IV~VIに侵入した場合に、第1単位ゾーン及び第2単位ゾーンの双方に侵入し、第1センサユニット1及び第2センサユニット2の双方で検出されるように設定されている。なお、各区画IV~VIの全域において、侵入した物体Oが第1センサユニット1及び第2センサユニット2の双方で検出されるように設定されている必要はない。 In this embodiment, when an object O having a predetermined height relative to the reference plane enters sections I to III, as shown in Figure 4, it is set up so that it enters only the first unit zone and is detected only by the first sensor unit 1. Also, in this embodiment, when the object O enters sections IV to VI, it enters both the first and second unit zones and is detected by both the first sensor unit 1 and the second sensor unit 2. It is not necessary to set up so that the entering object O is detected by both the first sensor unit 1 and the second sensor unit 2 throughout the entire area of each of sections IV to VI.

 具体的に本実施形態では、基端側の区画IV~VIにおいて、第1単位ゾーンが基準面から前記所定の高さより下を通るように設定されており、これにより当該区画IV~VIに侵入した物体Oが第1センサユニット1及び第2センサユニット2の双方で検出されることとなる。前記所定の高さは、検知対象の高さに合わせて設定されるものであり、ここでは検知対象である人の身長の高さに合わせて設定されている。 Specifically, in this embodiment, in sections IV to VI on the base end side, the first unit zone is set to pass below the predetermined height from the reference plane, so that an object O that enters one of sections IV to VI is detected by both the first sensor unit 1 and the second sensor unit 2. The predetermined height is set to match the height of the object to be detected, and in this case, it is set to match the height of the person being detected.

 所定の高さは、各センサユニット1,2の取り付け高さ、単位ゾーンの角度範囲、又は、各単位ゾーンの基準面に対する角度が調整されることによって設定される。ここでは、人検知装置100を人の身長に近い位置に取り付けることで、検知エリアXのほぼ全体において各単位ゾーンに人が侵入するように構成されている。 The predetermined height is set by adjusting the mounting height of each sensor unit 1, 2, the angular range of each unit zone, or the angle of each unit zone relative to the reference plane. Here, by mounting the human detection device 100 at a height close to that of a person, the device is configured so that a person will enter the unit zones over almost the entire detection area X.
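The reach of a downward-slanted unit zone follows from simple trigonometry; a minimal sketch, assuming a hypothetical mounting height and depression angle:

```python
import math

def zone_reach(mount_height_m, depression_deg):
    """Horizontal distance at which a downward-slanted zone meets the reference plane."""
    return mount_height_m / math.tan(math.radians(depression_deg))

# A gentler slant (smaller depression angle) reaches farther, which is why the
# first unit zones, slanted more gently than the second, have the longer
# detection distance. Example values below are illustrative only.
reach_far = zone_reach(2.0, 10.0)   # gently slanted zone
reach_near = zone_reach(2.0, 20.0)  # more steeply slanted zone
```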

3-4.分割ゾーン
 更に本実施形態の各単位ゾーンは、図6に示すように複数の分割ゾーンによって形成されるものである。分割ゾーンは、レンズを通って赤外線センサ11、21に入射する赤外線の角度範囲(赤外線入射範囲)であり、レンズの焦点距離によって規定されるものである。
3-4. Divided Zones Furthermore, each unit zone in this embodiment is formed by a plurality of divided zones as shown in Fig. 6. A divided zone is the angular range (infrared incident range) of infrared rays that pass through the lens and enter the infrared sensors 11 and 21, and is defined by the focal length of the lens.

 検知エリアXを上方から視た場合に、各単位ゾーンは左右に並ぶ複数の分割ゾーンによって形成されている。本実施形態の第1センサユニット1a~1cは、3枚のレンズ(上述したレンズセット)を有し、3つの分割ゾーンで第1単位ゾーンを形成する。第2センサユニット2は、9枚のレンズを有し、9つの分割ゾーンで第2単位ゾーンを形成する。 When the detection area X is viewed from above, each unit zone is formed by multiple divided zones lined up on the left and right. In this embodiment, the first sensor units 1a to 1c have three lenses (the lens set described above), and the first unit zone is formed by three divided zones. The second sensor unit 2 has nine lenses, and the second unit zone is formed by nine divided zones.

 本実施形態では、図6に示すように上方から視た分割ゾーンの角度範囲(水平視野角)が、第1センサユニット1と第2センサユニット2とで略同一になるように設定されている。 In this embodiment, as shown in Figure 6, the angular range (horizontal field of view) of the divided zones when viewed from above is set to be approximately the same for the first sensor unit 1 and the second sensor unit 2.

 また、検知エリアXを上方から視て、各第1単位ゾーンと、第2単位ゾーンとがそれぞれ重なる領域において、第1単位ゾーンを形成する分割ゾーンと、第2単位ゾーンを形成する分割ゾーンとがそれぞれ重なり合うように設定されている。 Furthermore, when viewing the detection area X from above, in the regions where each first unit zone and each second unit zone overlap, the divided zones that form the first unit zone and the divided zones that form the second unit zone are set to overlap each other.

 本実施形態では、第1単位ゾーンを規定するレンズ数と、第2単位ゾーンのうち、第1単位ゾーンに重なる領域を規定する当該第2センサユニット2のレンズ数が同数になるように構成されており、各第1単位ゾーンの分割ゾーンと、第2単位ゾーンの分割ゾーンとが1対1の関係で重なり合っている。また、重なり合う2つの分割ゾーンは、上方から視てその中心軸が互いに重なるように設定されている。 In this embodiment, the number of lenses defining the first unit zone is the same as the number of lenses of the second sensor unit 2 that define the area of the second unit zone that overlaps with the first unit zone, and the divided zones of each first unit zone and the divided zones of the second unit zone overlap in a one-to-one relationship. Furthermore, the two overlapping divided zones are set so that their central axes overlap each other when viewed from above.

4.動作
 以下、上述のように構成された検知エリアXに物体Oが侵入した場合に、本実施形態の人検知装置100がどのように動作するかを説明する。
4. Operation Hereinafter, it will be described how the human detection device 100 of this embodiment operates when an object O enters the detection area X configured as described above.

 具体的に区画Iに物体が侵入した場合には、まず第1センサユニット1aがこの物体を検出して、1の値を有する検出信号を出力する。他のセンサユニット1b,1c,2は、物体を検出していないので0の値を有する検出信号を出力する。 Specifically, when an object enters section I, first sensor unit 1a detects the object and outputs a detection signal with a value of 1. The other sensor units 1b, 1c, and 2 do not detect the object and therefore output detection signals with a value of 0.

 このように第1センサユニット1aのみから物体を検出した旨の検出信号を受け付けた場合には、第1投影領域Aのうち、第2投影領域Dと重ならない領域である区画Iに物体が侵入したと判断する。より具体的に判断部3は、受け付けた各検出信号を所定の論理関数で論理演算して区画Iを物体検出位置として特定する。ここでの所定の論理関数は、4つのセンサユニット1,2の検出信号の値の論理積を計算して、物体が6つの区画I~VIの何れに侵入したか特定する4入力6出力の論理関数である。図7は、この論理関数を表す真理値表である。 In this way, when a detection signal indicating that an object has been detected is received from only the first sensor unit 1a, it is determined that the object has entered section I, which is an area of the first projection area A that does not overlap with the second projection area D. More specifically, the determination unit 3 performs a logical operation on each of the received detection signals using a predetermined logical function to identify section I as the object detection position. Here, the predetermined logical function is a four-input, six-output logical function that calculates the logical product of the detection signal values of the four sensor units 1 and 2 and determines which of the six sections I to VI the object has entered. Figure 7 is a truth table representing this logical function.

 また、区画IVへ物体が侵入した場合には、第1センサユニット1a及び第2センサユニット2が1の値を有する検出信号を出力し、他のセンサユニット1b、1cが0の値を有する検出信号を出力する。
 このように第1センサユニット1a及び第2センサユニット2の双方から、物体を検出した旨の検出信号を受け付けた場合には、第1投影領域Aと第2投影領域Dとが重なり合う領域である区画IVに物体が侵入したと判断する。より具体的に判断部3は、受け付けた各検出信号を前記論理関数で論理演算して区画IVを物体検出位置として特定する。
Furthermore, when an object enters section IV, the first sensor unit 1a and the second sensor unit 2 output detection signals having a value of 1, and the other sensor units 1b and 1c output detection signals having a value of 0.
In this way, when detection signals indicating that an object has been detected are received from both the first sensor unit 1a and the second sensor unit 2, the determination unit 3 determines that an object has entered section IV, which is the area where the first projection area A and the second projection area D overlap. More specifically, the determination unit 3 performs a logical operation on each of the received detection signals using the logical function, and identifies section IV as the object detection position.
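The four-input, six-output logic described above (following the truth table of FIG. 7) can be sketched as a simple table lookup; the function and table names are hypothetical:

```python
# Signals: (a, b, c) from the first sensor units 1a-1c, d from the second sensor unit 2.
# Sections I-III: exactly one first sensor fires and the second does not.
# Sections IV-VI: one first sensor fires together with the second sensor.
SECTION_TABLE = {
    (1, 0, 0, 0): "I",
    (0, 1, 0, 0): "II",
    (0, 0, 1, 0): "III",
    (1, 0, 0, 1): "IV",
    (0, 1, 0, 1): "V",
    (0, 0, 1, 1): "VI",
}

def locate(a, b, c, d):
    """Return the section an object occupies, or None when no section matches."""
    return SECTION_TABLE.get((a, b, c, d))
```

For instance, the all-zero input (no sensor firing) maps to no section, matching the case where nothing is detected in the detection area X.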

 そして判断部3は、いずれかの区画I~VIへ物体が侵入したと判断した場合に、つまりは物体の検出位置を特定した場合に、検知エリアXに人が侵入したと判断し、その旨を示す人検知信号を例えば警備システムに出力する。さらに判断部3は、物体が侵入した前記区画を示す物体検出位置の情報を、前記人検知信号に紐づけて出力する。なお判断部3は、物体検出位置を示す物体検出位置信号を、人検知信号と別で出力してもよい。 If the determination unit 3 determines that an object has entered any of the sections I to VI, that is, if it has identified the object's detection position, it determines that a person has entered the detection area X and outputs a human detection signal indicating this to, for example, a security system. Furthermore, the determination unit 3 outputs object detection position information indicating the section into which the object has entered, linked to the human detection signal. Note that the determination unit 3 may also output an object detection position signal indicating the object detection position separately from the human detection signal.

 いずれの区画にも物体が侵入していない場合には、各センサユニット1,2は、0の値を有する検出信号を出力する。これら各検出信号を受け付けた判断部3は、前記論理関数で論理演算して検知エリアXにおいて物体は検出されていないと判断する。 If no object has entered any of the sections, each sensor unit 1, 2 outputs a detection signal with a value of 0. The judgment unit 3 receives these detection signals and performs a logical operation using the logic function to determine that no object has been detected in the detection area X.

5. Effects
 With the human detection device 100 of the first embodiment configured as described above, the detection area X can be divided into more sections than the number of sensor units 1 and 2 used. This makes it possible to identify the detected position of an object that has entered the detection area X in greater detail while limiting the number of sensor units 1 and 2, thereby suppressing increases in manufacturing cost and device size. In this embodiment, the four sensor units 1 and 2 divide the detection area into six sections.

 The detected position of an object can be identified by a simple logical operation: computing the logical product (AND) of the values of the detection signals received from the first sensor unit 1 and the second sensor unit 2.

 Since object detection position information indicating the section into which the object has entered is output, it becomes possible, for example, to count the number of object intrusions per section, which is useful for masking parts of the detection area X or adjusting the threshold of each sensor unit.

 Because the detection area X is divided into sections arranged in a front-back and left-right matrix, when the detection area X is adjusted by masking or other means, it is easy to visualize the detection area after adjustment, which simplifies area adjustment.

 Because the divided zones of the first sensor unit 1 and the second sensor unit 2 overlap each other, the timing of the detection signals output from the first sensor unit 1 and the second sensor unit 2 when they detect the same object can be aligned. This simplifies the signal processing in the determination unit 3.

[Second Embodiment]
 As shown in FIG. 8(a), the human detection device 100 of the second embodiment differs from the first embodiment in that, when the detection area X is viewed from above, the second projection area D is set to have an outer area extending outward beyond the first projection area C at the right end of the first projection areas A to C. The outer area may instead extend outward beyond the first projection area A at the left end, or beyond both the first projection areas A and C at the left and right ends.

 Specifically, in the second embodiment, the horizontal viewing angle of the second projection area D is set larger than the central angle of the fan-shaped area formed by the multiple first unit zones arranged side by side, so that the second projection area D extends outward (to the right) beyond the rightmost first unit zone in the detection area X, forming the outer area.

 With the projection areas A to D set in this way, as shown in FIGS. 8(a) to 8(c), the detection area X viewed from above contains, in addition to the area consisting only of the first projection areas A to C (sections I to III) and the area where the first projection areas A to C overlap the second projection area D (sections IV to VI), the outer area consisting only of the second projection area D (section VII). In this embodiment, the four sensor units 1 and 2 divide the detection area X into seven sections.

 In the second embodiment, when the determination unit 3 receives a detection signal indicating that an object has been detected from the second sensor unit 2 alone, it determines that an object has entered the outer area of the second projection area D of the second sensor unit 2. Here, on receiving such a signal from only the second sensor unit 2, it determines that an object has entered section VII and identifies section VII as the object detection position.
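The second embodiment's extra rule — a detection by the second unit alone implies the outer section VII — extends the same table lookup. A minimal sketch under the same assumed encoding (hypothetical function and section names):

```python
def identify_section_e2(s1a, s1b, s1c, s2):
    """Second-embodiment variant: area D has an outer region (section VII)
    beyond the rightmost first projection area, so a detection by the second
    sensor unit alone is now meaningful."""
    if s2 and not (s1a or s1b or s1c):
        return ["VII"]              # outer area: second projection area D alone
    sections = []
    for far, near, s1 in (("I", "IV", s1a), ("II", "V", s1b), ("III", "VI", s1c)):
        if s1:
            sections.append(near if s2 else far)
    return sections

assert identify_section_e2(0, 0, 0, 1) == ["VII"]
assert identify_section_e2(0, 0, 1, 1) == ["VI"]
```

The only change from the first embodiment is the added branch for the second-unit-only case.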

 With this configuration, the detection area X can be divided even more finely without increasing the number of sensor units 1 and 2, making it possible to identify the detection position of an object that has entered the detection area X in greater detail.

[Third Embodiment]
 As shown in FIG. 9(a), the human detection device 100 of the third embodiment differs from the preceding embodiments in that, when the detection area X is viewed from above, the first projection areas A and B are arranged so as to partially overlap each other.

 Specifically, as shown in FIG. 10, the human detection device 100 of the third embodiment includes two first sensor units 1a and 1b and a single second sensor unit 2. The two first lens sets 12 are arranged so as to overlap each other when the cover FC (or the housing H) is viewed from above, thereby defining the detection area X described above.

 When the detection area X of the third embodiment is viewed from above, as shown in FIG. 9(a), the two first projection areas A and B arranged side by side are set so as to partially overlap each other over the span from their base ends to their tip ends. The second projection area D is set so as to overlap the base-end side of each of the first projection areas A and B.

 With the projection areas A, B, and D set in this way, as shown in FIGS. 10(a) to 10(c), the detection area X viewed from above contains the areas consisting of only one of the first projection areas A and B (sections I and III), the areas where one of the first projection areas A and B overlaps the second projection area D in a one-to-one relationship (sections IV and VI), the area where the two first projection areas A and B overlap each other (section II), and the area where the overlap of the two first projection areas A and B is further overlapped by the second projection area D (section V). Here, the three sensor units 1a, 1b, and 2 divide the detection area X into six sections arranged in a matrix.

 When the determination unit 3 of the third embodiment receives detection signals indicating that an object has been detected from both of the two first sensor units 1a and 1b, it determines that an object has entered the area where the first projection areas A and B overlap. Specifically, on receiving such signals from both first sensor units 1a and 1b, it determines that an object has entered section II and identifies section II as the object detection position.

 Furthermore, when the determination unit 3 receives detection signals indicating that an object has been detected from the two first sensor units 1a and 1b and also from the second sensor unit 2, it determines that an object has entered the area where the unit zones of these three sensor units 1a, 1b, and 2 overlap. Here, on receiving such signals from all of the first sensor units 1a and 1b and the second sensor unit 2, it determines that an object has entered section V and identifies section V as the object detection position.
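The third embodiment's determination reduces to a small truth table over three signals. The sketch below assumes the section layout described above (A-only, A∩B, B-only columns; D adding the near row); the encoding is illustrative, not the patent's implementation:

```python
def identify_section_e3(s1a, s1b, s2):
    """Third-embodiment mapping: two first units (areas A and B, partially
    overlapping) plus one second unit (area D covering the base-end side)."""
    if s1a and s1b:
        col = "center"          # overlap of first projection areas A and B
    elif s1a:
        col = "left"
    elif s1b:
        col = "right"
    else:
        return None             # no first-unit detection -> no section
    table = {
        ("left", False): "I",  ("center", False): "II", ("right", False): "III",
        ("left", True):  "IV", ("center", True):  "V",  ("right", True):  "VI",
    }
    return table[(col, bool(s2))]

# All three units detect -> section V, the triple overlap.
assert identify_section_e3(1, 1, 1) == "V"
# Both first units but not the second -> section II.
assert identify_section_e3(1, 1, 0) == "II"
```

Three sensor units thus distinguish six sections, one per truth-table row.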

 With this configuration, the detection area X can be divided even more finely without increasing the number of sensor units, making it possible to identify the detection position of an object that has entered the detection area X in greater detail.

[Fourth Embodiment]
 As shown in FIG. 11, the human detection device 100 of the fourth embodiment further includes an intrusion count suggestion data output unit 4 that generates position-specific intrusion count suggestion data, from which the number of object intrusions into each object detection position identified by the determination unit 3 can be ascertained, and transmits this data to another device such as a display.
 The intrusion count suggestion data output unit 4 is physically provided as a computer shared with the determination unit 3 described above.

 For example, when the intrusion count suggestion data output unit 4 receives from the determination unit 3 an object detection position signal indicating the position into which an object has intruded (here, one of sections I to VI), it acquires the time at which the signal was received, associates the intruded section with the corresponding reception time, and stores the pair in a predetermined area of memory. Here, the reception history of object detection position signals for all sections is stored collectively as log data.

 The intrusion count suggestion data output unit 4 refers to the predetermined area of memory, generates position-specific intrusion count suggestion data indicating the number of object intrusions per section within a predetermined period, and outputs it in a format readable by other devices. For example, it generates table data from which the number of object intrusions per section can be ascertained, and outputs this to a display.
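The logging and counting behavior of unit 4 can be sketched as follows. The class name, method names, and time representation are assumptions made for illustration; only the behavior (log reception times per section, then count per section over a period) comes from the text:

```python
import time
from collections import defaultdict

class IntrusionCounter:
    """Sketch of the intrusion count suggestion data output unit (unit 4):
    stores the reception time of each object detection position signal, then
    produces per-section counts for a requested period."""
    def __init__(self):
        self._log = defaultdict(list)   # section -> list of reception times

    def on_position_signal(self, section, t=None):
        # Record the reception time of a position signal from unit 3.
        self._log[section].append(time.time() if t is None else t)

    def counts(self, start, end):
        # Position-specific intrusion count data for the period [start, end].
        return {sec: sum(start <= t <= end for t in ts)
                for sec, ts in self._log.items()}

c = IntrusionCounter()
c.on_position_signal("IV", t=100.0)
c.on_position_signal("IV", t=150.0)
c.on_position_signal("II", t=200.0)
assert c.counts(0.0, 160.0) == {"IV": 2, "II": 0}
```

The resulting dictionary is the kind of table data that could be rendered on a display, one row per section.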

 With this configuration, the number of object intrusions into each object detection position can be ascertained from the position-specific intrusion count suggestion data. For an object detection position where the intrusion count exceeds a considerable number, it can be assumed that some false alarm factor exists, and object detection at that position can be stopped or the object detection level adjusted to reduce false alarms.

[Fifth Embodiment]
 The determination unit in each of the above embodiments determines that a person has entered the detection area when it identifies the detected position of an object within the detection area, and outputs a human detection signal.
 In contrast, the determination unit of the fifth embodiment determines that a person has entered the detection area, and outputs a human detection signal, only when it has identified the detected position of an object and that detected position is set as an alert position.

 For example, the determination unit refers to alert position data stored in advance in a predetermined area of memory, and determines that a person has entered the detection area when it determines that an object has intruded into a position corresponding to one or more alert positions indicated by the alert position data. The alert position data here is, for example, a binary value indicating alert or non-alert set for each of the sections I to VI into which the detection area is divided. The alert position data stored in the predetermined area of memory is held so as to be changeable, and the determination unit is configured to change the alert position data automatically under predetermined conditions or upon receiving a change command from outside.
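The per-section binary alert data and the resulting filtering step can be sketched as below. Class and method names are hypothetical; the behavior (a changeable alert/non-alert flag per section gating the human detection signal) follows the description:

```python
class AlertPositionFilter:
    """Sketch of the fifth embodiment's judgment: a human detection signal is
    raised only when the identified detection position is an alert position."""
    def __init__(self, alert_map):
        # section -> True (alert) / False (non-alert); changeable at runtime
        self.alert_map = dict(alert_map)

    def set_alert(self, section, alerted):
        # Models an external change command (or an automatic change).
        self.alert_map[section] = alerted

    def human_detected(self, detected_section):
        return bool(self.alert_map.get(detected_section, False))

f = AlertPositionFilter({"I": True, "II": False})
assert f.human_detected("I") is True
assert f.human_detected("II") is False   # masked without physical masking
f.set_alert("II", True)                  # remote/automatic re-enable
assert f.human_detected("II") is True
```

Changing `alert_map` plays the role of adjusting the effective detection area without any physical masking.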

 With this configuration, the effective detection area can be set more flexibly by designating, within the entire detection area, the regions where human intrusion should be detected as alert positions and the remaining regions as non-alert positions.

 Furthermore, changing the alert position data allows the range of the effective detection area to be adjusted without physical masking. Unlike physical masking, this enables stable area adjustment regardless of the skill of the person adjusting the detection area. In addition, because the adjustment is not physical, an operator can, for example, remotely set a section with a high number of object intrusions as a non-alert position through input from an input device configured to communicate with the human detection device.

[Sixth Embodiment]
 In the sixth embodiment, as shown in FIG. 12, the human detection device 100 is installed at a position higher than the height of a person (here, at roughly twice that height), and each unit zone is set to pass above a predetermined height on the base-end side of the detection area X.

 As a result, the area where the first projection area and the second projection area overlap is further divided into a region where an intruding person enters both the first and second unit zones, a region where the person enters only the second unit zone, and a region where the person enters neither unit zone. This is advantageous when the detection area is to be formed at a position away from the human detection device 100.

[Other Embodiments]
 The determination unit's judgment of whether a person has entered the detection area may be made independently of its identification of the object detection position. For example, the determination unit may determine that a person has entered the detection area when it receives a detection signal indicating that an object has been detected from any one of the sensor units. Independently of this judgment, the determination unit may compute on the values of the detection signals received from the sensor units to identify the detected position of the object within the detection area.

 The detection signals used by the determination unit for human detection and for identifying the object detection position may be analog output signals indicating the amount of fluctuation of the incident infrared light detected by each infrared sensor; regardless of the signal format, it suffices that the detection signals of both the first sensor unit and the second sensor unit are processed together to identify the object detection position.

 For example, when the detection signal of a given first sensor unit and the detection signal of the second sensor unit both indicate values equal to or greater than a predetermined threshold, and the ratio of the values indicated by these two detection signals falls within a predetermined range, the determination unit may identify the area where the first projection area of that first sensor unit and the second projection area of the second sensor unit overlap as the object detection position.
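This analog variant — both signals above a threshold, and their ratio within a window — can be sketched as a single predicate. The threshold and ratio window below are illustrative values chosen for the example, not values from the text:

```python
def overlap_detected(v1, v2, threshold=0.5, ratio_range=(0.5, 2.0)):
    """Analog-signal variant of the overlap test: report the overlap region
    as the object detection position only when both the first-unit signal v1
    and the second-unit signal v2 clear a threshold and their ratio falls
    inside a predetermined window."""
    if v1 < threshold or v2 < threshold:
        return False
    lo, hi = ratio_range
    return lo <= v1 / v2 <= hi

assert overlap_detected(1.0, 0.8) is True
assert overlap_detected(1.0, 0.1) is False   # second signal below threshold
assert overlap_detected(2.0, 0.6) is False   # ratio outside the window
```

The ratio check guards against attributing two unrelated disturbances of very different magnitude to one object in the overlap region.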

 Furthermore, the determination unit may identify, based on the analog output signals of two or more first sensor units, the area where the first projection areas of those first sensor units overlap one another as the object detection position.

A human detection device using a pyroelectric PIR element has difficulty detecting the approach of an object moving straight toward the device within the detection area.
 The determination unit may therefore determine whether an object has entered the area where the first projection area and the second projection area overlap, based on a detection signal from the first sensor unit indicating that an object has been detected and a detection signal from the second sensor unit, also indicating that an object has been detected, received a predetermined time after the first signal was generated. The predetermined time here corresponds to the time it takes for an object that has entered the first projection area to reach the second projection area, and is calculated based on, for example, the distance from the first tip edge to the second tip edge.
 In this way, in a human detection device using a pyroelectric PIR element, an object that has entered the first projection area and continues straight toward the device within the detection area can be detected when it enters the second projection area, revealing the object's approach.
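The delayed-coincidence check described above can be sketched as follows. The tolerance parameter and the way the expected delay is derived are assumptions added for the example; the text specifies only that the delay corresponds to the travel time between the two projection areas:

```python
def approach_into_overlap(t_first, t_second, expected_delay, tolerance=0.5):
    """Delayed-coincidence test for a straight-approaching object: a second-
    unit detection arriving roughly `expected_delay` seconds after the first-
    unit detection is taken as the object crossing from the first projection
    area into the overlapping second projection area."""
    return abs((t_second - t_first) - expected_delay) <= tolerance

# expected_delay might be derived from the tip-edge distance and an assumed
# walking speed, e.g. 2 m gap / 1 m/s -> 2.0 s.
assert approach_into_overlap(10.0, 12.1, expected_delay=2.0) is True
assert approach_into_overlap(10.0, 16.0, expected_delay=2.0) is False
```

A signal pair whose spacing is far from the expected delay is rejected, so two unrelated events are not mistaken for one approaching object.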

 Although the number of first sensor units in each of the above embodiments was two or three, it may be four or more; any plural number suffices. In the present invention, by overlapping each of the plural first projection areas with a single second projection area, at least "the number of first sensor units × 2" sections can be set within the detection area.

 A plurality of first sensor units and a single second sensor unit forming a second projection area that overlaps their first projection areas may be treated as one sensor unit group, and the human detection device may include a plurality of such sensor unit groups.

 The arrangement of the infrared sensor and the optical member in each sensor unit is not limited to those described in the above embodiments. The configurations of the infrared sensors and optical members may also differ between the sensor units.

 The infrared sensor may be of a single type or quad type, may be another PIR sensor such as a thermopile, or may be an AIR (active infrared) sensor having an emitter that emits infrared LED light and a receiver that receives the reflected infrared LED light.

 The unit zone does not have to be divided into a plurality of divided zones. In that case, each optical member may be a single lens.

 The optical member may be composed of a mirror, or of a combination of a lens and a mirror.

 The human detection device of each of the above embodiments was composed of sensor units and a computer functioning as the determination unit and so on, provided in a common housing, but the physical configuration of the human detection device according to the present invention is not limited to this. The physical locations of the components are likewise not limited to those shown in the embodiments.

 For example, the sensor unit may include an analog circuit including a comparator and the like, an AD converter, and a digital electric circuit such as a computer or PLD, which together realize the function of the detection circuit; this computer may be shared with the determination unit. The sensor units also need not be provided in the same housing, as long as they are arranged reasonably close together.

 Some or all of the functions such as the determination unit and the intrusion count suggestion data output unit described above may be provided separately from the sensor units and realized by an information processing device capable of one-way or mutual communication with the sensor units. The information processing device is, for example, a computer including a CPU, memory, input/output interfaces such as a display, and a communication interface. In this case, when building a security system including a plurality of human detection devices, a common information processing device can be used to process the plural human detection devices in an integrated manner.

 According to the present invention, it is possible to provide a human detection device that can identify the position of an object that has entered the detection area while suppressing increases in manufacturing cost and device size.

Claims (6)

1. A human detection device comprising:
 a plurality of sensor units attached at a position higher than a reference plane, each forming a unit zone extending obliquely downward and detecting an object that enters the unit zone; and
 a determination unit that determines, based on detection signals from the sensor units, whether a person has entered a detection area formed by the unit zones of the sensor units,
 wherein the sensor units include a plurality of first sensor units and a single second sensor unit,
 first projection areas, obtained by projecting the unit zones of the first sensor units onto the reference plane as viewed from above, are set to be arranged in the left-right direction,
 a second projection area, obtained by projecting the unit zone of the second sensor unit onto the reference plane as viewed from above, is set to overlap each of the first projection areas,
 a tip edge of each first projection area is set to be located farther away than a tip edge of the second projection area, and
 the determination unit computes on the value of the detection signal from the first sensor unit and the value of the detection signal from the second sensor unit to identify an object detection position in the detection area.
2. The human detection device according to claim 1, wherein the determination unit:
 determines, when it receives detection signals indicating that an object has been detected from both the first sensor unit and the second sensor unit, that an object has entered the area where the first projection area and the second projection area overlap; and
 determines, when it receives a detection signal indicating that an object has been detected from the first sensor unit alone, that an object has entered the area of the first projection area that does not overlap the second projection area.
3. The human detection device according to claim 1, wherein the second projection area is set to have an outer area extending outward beyond the first projection area located at the left end or the right end of the first projection areas, and
 the determination unit determines, when it receives a detection signal indicating that an object has been detected from the second sensor unit alone, that an object has entered the outer area.
4. The human detection device according to claim 1, wherein the first projection areas are arranged so as to partially overlap each other, and the determination unit determines, when it receives detection signals indicating that an object has been detected from two of the first sensor units, that an object has entered the area where those first projection areas overlap.
5. The human detection device according to claim 1, wherein each sensor unit comprises a PIR element and an optical member that defines the unit zone, which is the range of infrared light incident on the PIR element,
 the optical member has a plurality of lenses arranged side by side, and divided zones, each being the infrared incidence range defined by one lens, are arranged side by side as viewed from above to form the unit zone, and
 the number of lenses defining the unit zone of the first sensor unit is equal to the number of lenses of the second sensor unit defining the area of the second sensor unit's unit zone that overlaps the unit zone of the first sensor unit as viewed from above.
6. The human detection device according to claim 1, further comprising an intrusion count suggestion data output unit that generates position-specific intrusion count suggestion data from which the number of object intrusions into each object detection position identified by the determination unit can be ascertained, and transmits the data to another device such as a display.

PCT/JP2024/014588 2024-04-10 2024-04-10 Person detecting device Pending WO2025215776A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2024/014588 WO2025215776A1 (en) 2024-04-10 2024-04-10 Person detecting device


Publications (1)

Publication Number Publication Date
WO2025215776A1 true WO2025215776A1 (en) 2025-10-16

Family

ID=97349610

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2024/014588 Pending WO2025215776A1 (en) 2024-04-10 2024-04-10 Person detecting device

Country Status (1)

Country Link
WO (1) WO2025215776A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07311280A (en) * 1994-05-19 1995-11-28 Mitsubishi Electric Corp Human body detection device
JP2010091144A (en) * 2008-10-06 2010-04-22 Hitachi Appliances Inc Air conditioner
JP2010156491A (en) * 2008-12-26 2010-07-15 Panasonic Corp Air conditioner
JP2013210307A (en) * 2012-03-30 2013-10-10 Secom Co Ltd Human body detector


Similar Documents

Publication Publication Date Title
US8314390B2 (en) PIR motion sensor system
US7075431B2 (en) Logical pet immune intrusion detection apparatus and method
US7355626B2 (en) Location of events in a three dimensional space under surveillance
US8772702B2 (en) Detector
EP2149128B1 (en) Intrusion detector
WO2025215776A1 (en) Person detecting device
JP2017190967A (en) Human body detection system and human body detection method
JP5414120B2 (en) Human body detection sensor
KR102868576B1 (en) Ai human body detector using complex sensor
JP5274953B2 (en) Passive infrared sensor
JP2005241555A (en) Passive-type infrared detector
JP6154633B2 (en) Infrared detector, display equipped with the same, and personal computer
RU2292597C1 (en) Protecive warner provided with ir-red detection channel
KR100339255B1 (en) infrared sensor and managing method thereof
KR20210017074A (en) Method and apparatus for sensing object using a plurality of sensors
JP3509577B2 (en) Passive infrared security sensor
JPH0915036A (en) Luminescent detector
JP3244554B2 (en) Mobile body smoke detector
JP5467695B2 (en) Hot wire sensor
WO2025134368A1 (en) Human detection sensor device
JP4222482B2 (en) Passive infrared sensor
EP1361553B1 (en) Surveillance system for locating events in a three-dimensional space
JPS61100685A (en) Heat ray type invader sensor
JP2008190923A (en) Heat ray sensor
JP2016176801A (en) Detecting device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24935155

Country of ref document: EP

Kind code of ref document: A1