
NL2034751B1 - Luminaire system for determining environmental characteristics - Google Patents


Info

Publication number
NL2034751B1
Authority
NL
Netherlands
Prior art keywords
area
data
processing means
images
map
Prior art date
Application number
NL2034751A
Other languages
Dutch (nl)
Inventor
Steurer Michael
Bandeira Lourenço
Schröder Helmut
Original Assignee
Schreder Iluminacao Sa
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Schreder Iluminacao Sa filed Critical Schreder Iluminacao Sa
Priority to NL2034751A priority Critical patent/NL2034751B1/en
Priority to PCT/EP2024/062155 priority patent/WO2024227897A1/en
Priority to AU2024265495A priority patent/AU2024265495A1/en
Application granted granted Critical
Publication of NL2034751B1 publication Critical patent/NL2034751B1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3833Creation or updating of map data characterised by the source of data
    • G01C21/3852Data derived from aerial or satellite images
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F21 LIGHTING
    • F21S NON-PORTABLE LIGHTING DEVICES; SYSTEMS THEREOF; VEHICLE LIGHTING DEVICES SPECIALLY ADAPTED FOR VEHICLE EXTERIORS
    • F21S8/00Lighting devices intended for fixed installation
    • F21S8/08Lighting devices intended for fixed installation with a standard
    • F21S8/085Lighting devices intended for fixed installation with a standard of high-built type, e.g. street light
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G1/0116Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125Traffic data processing
    • G08G1/0133Traffic data processing for classifying traffic situation
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G08G1/0141Measuring and analyzing of parameters relative to traffic conditions for specific applications for traffic information dissemination
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/0962Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096766Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission
    • G08G1/096783Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission where the origin of the information is a roadside individual element

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

System for determining environmental characteristics, said system comprising: a plurality of luminaires comprising a first luminaire (100a) and a second luminaire (100b), at least one first capturing means (110a) and at least one second capturing means (110b), said at least one first capturing means (110a) being associated with said first luminaire (100a) and being configured to capture first data of a first area (200a) in the vicinity of the first luminaire (100a), said at least one second capturing means (110b) being associated with said second luminaire (100b) and being configured to capture second data of a second area (200b) in the vicinity of the second luminaire (100b); and a central processing means (300) configured to determine at least one map, based on the first and second data, each of the at least one map indicating environmental characteristics of an area comprising the first and the second area.

Description

LUMINAIRE SYSTEM FOR DETERMINING ENVIRONMENTAL CHARACTERISTICS
FIELD OF INVENTION
The field of the invention relates to the field of luminaire systems for determining environmental characteristics, and in particular luminaire outdoor networks including sensors.
BACKGROUND
Roads are typically equipped with their own lighting, for example with luminaires, which may be part of luminaire networks. Luminaires, especially outdoor luminaires, are present worldwide in cities, industrial plants, private or public parking lots, etc. Smart luminaires able to work in networks are already present in densely populated areas, such as streets, roads, paths, parking lots, parks, campuses, train stations, airports, harbors, beaches, etc., of cities around the world, from small towns to metropoles. A network of such luminaires is capable of automatically exchanging information between the luminaires and/or with a remote entity.
Luminaires may be mounted on long poles or placed at a significant height such as to illuminate wider portions of an area such as a road. More and more, luminaires are provided with additional sensing devices, such as cameras, that take advantage of the height of the luminaire and the pole they are often mounted on. This allows for example to capture images from a bird’s eye point of view and to access information otherwise not easily obtained at ground level.
Such information may be used to help road users navigate the massive labyrinth of geography and traffic that is the road network, for example by including sensed information in digital mapping applications. But while digital mapping applications exist and are widely used, both satellite views and street views are only infrequently updated with recent images. Being able to detect changes and provide regular updates to the system when needed would greatly improve such mapping applications.
Sensed information may also be used to determine sunlight information. Sunlight information is valuable information for a wide variety of applications. For example, city planners could improve their designing of natural air conditioning within a city, for example by knowing which are the sunny places which need to be provided with green spaces and artificial water bodies, and could improve the placement of modern urban gardens, which necessitate sun for the plants to grow. Road safety may be increased, by knowing which roads are sooner in the shadow of the sun and need an earlier lighting of their luminaires or by adapting the light levels in a tunnel when sun is bright outside. Real estate businesses could better evaluate the price of accommodation depending on the available sunlight throughout the year. Solar panel placement may be improved to harvest energy for electric bicycles, cars, house facades, luminaires, parking meters, etc.
SUMMARY
An object of embodiments of the invention is to provide a system capable of providing improved maps of an area, allowing maps to be updated on a regular basis using the existing infrastructure in an environment.
According to a first aspect of the invention, there is provided a system for determining environmental characteristics, said system comprising a plurality of luminaires with a first and a second luminaire as well as at least one first capturing means and at least one second capturing means. The at least one first capturing means is associated with the first luminaire and is configured to capture first data of a first area in the vicinity of the first luminaire. The at least one second capturing means is associated with the second luminaire and is configured to capture second data of a second area in the vicinity of the second luminaire. In addition, the system comprises a central processing means configured to determine at least one map, based on the first and second data. Each of the at least one map indicates environmental characteristics of an area comprising the first and the second area.
In other words, in this system, luminaires in a given area are equipped with capturing means, e.g. image capturing means such as cameras, that capture data, typically sequences of data, e.g. one or more images, in said area. The capturing means may each capture given sub-areas within the area, e.g. a portion of a road, which may be distinct, overlap or be identical to the sub-areas captured by the other capturing means. In the case of the area being a road, the data, e.g. one or more images, may comprise information on the road and its vicinity, e.g. the presence of a bus stop, of a zebra crossing, of road lights, the state of the road, whether the road surface is lit by the sun or not, the number of cars on the road, the number and/or position of parked cars, etc. Based on the captured data and therefore on the information contained in the captured data, the central processing means of the system determines at least one map indicating environmental characteristics of the area.
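The aggregation performed by the central processing means can be sketched as follows. This is an illustrative Python sketch only; the `Capture` structure, the grid-cell scheme and the characteristic labels are assumptions made for the example, not part of the system description:

```python
from dataclasses import dataclass

@dataclass
class Capture:
    luminaire_id: str
    cell: tuple          # (x, y) grid cell of the observed sub-area
    characteristic: str  # e.g. "zebra_crossing", "parked_car", "sunlit"

def build_map(captures):
    """Merge captures from several luminaires into one map.

    The map is a dict from grid cell to the set of environmental
    characteristics reported for that cell; overlapping sub-areas
    observed by different luminaires are merged automatically.
    """
    env_map = {}
    for c in captures:
        env_map.setdefault(c.cell, set()).add(c.characteristic)
    return env_map

captures = [
    Capture("lum-a", (0, 0), "zebra_crossing"),
    Capture("lum-a", (0, 1), "parked_car"),
    Capture("lum-b", (0, 1), "sunlit"),   # overlaps lum-a's sub-area
    Capture("lum-b", (0, 2), "bus_stop"),
]
m = build_map(captures)
```

Cell (0, 1) combines observations from both luminaires, which is the essential point: the resulting map covers an area comprising both the first and the second area.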
In this way, valuable information containing environmental characteristics of the area is obtained in the form of maps. For example, these maps may be used to regularly update digital mapping applications, for example with recent images or recent information, and/or to provide sunlight information to a user of the map. In addition, the system provides a satellite independent alternative to digital mapping applications.
Indeed, the system may be able to identify whether significant changes have occurred in a scene of view and, when needed, upload the updated regions of the map (or the entire map) so that applications, e.g. mapping applications such as Google Maps or Waze, traffic monitoring applications, parking applications, etc., may be updated.
In addition, the system may be able to identify light/sunlit areas and/or shadow areas over time such as to provide a dynamic shadow-light map for an environment, such as a map indicating light/sunlit and/or shadow areas in/on a street and any surrounding buildings. This may be used by a wide variety of applications, e.g. a parking monitoring application may guide a vehicle towards shadowed parking spaces based on the identification of these areas, and/or may be used to configure and/or optimize applications, e.g. a traffic monitoring application may be optimized, for example by recalibrating the sensor based on the light/sunlit and/or shadow areas. As mentioned above, such information may also be used by city planners, real estate businesses, etc.
Information about the light/sunlit and/or shadow areas may also be used in a network system as described in WO 2022/189601 A1 or WO 2023/006970 A1 in the name of the applicant, which are included herein by reference. WO 2022/189601 A1 discloses a network system comprising a plurality of edge devices, e.g. luminaires. The plurality of edge devices is arranged at a plurality of locations and comprises at least a sensor and processing means. The sensor is configured for obtaining environmental data related to an event in the vicinity of an edge device of the plurality of edge devices. The sensor is set up according to at least one configuration parameter. The processing means is configured to process input data in accordance with a model to derive the at least one configuration parameter of the sensor. The network system is configured to determine an updated model over time and to reset the processing means so as to process input data in accordance with the updated model. Optionally, the information about the light/sunlit and/or shadow areas may be used to determine the updated model. Similarly, the information about the light/sunlit and/or shadow areas may be used in the system disclosed in WO 2023/006970 A1.
The at least one map may be at least one 3D-map, wherein each of the at least one 3D-map may indicate environmental characteristics of ground areas and/or infrastructure areas of the area.
This may be achieved using an existing 3D-map of the area which is updated using the first and second captured data. For example, an existing 3D street view map may be updated when a static infrastructure element of the street is added, removed or changed in the area. If updates are done very frequently, even more dynamic changes, such as weather changes, may be taken into account.
In this way, maps determined by the processing means may be more immersive. In the example of cameras being located on luminaires on the side of a road, environmental characteristics of ground areas may for example relate to the presence of a zebra crossing, of a bus stop, the state of the road, the presence of a work zone on the road, heavy traffic, occupancy of parking spaces along the road, etc. Environmental characteristics of infrastructure areas within the area may for example relate to the presence of a bus shelter, of a bench, the type of buildings within the area, e.g. a bank, a restaurant, a post office, a school, a hospital, etc.
The central processing means may be a local central processing means or a remote central processing means, or a combination thereof. For example, the central processing means may be included in a fog device or in a remote central server.
For example, the central processing means may be arranged in a fog device of a network comprising a plurality of fog devices, each fog device being associated with a subset of a plurality of edge devices. Such a subset may comprise the first and second luminaire associated with the first and second capturing means and/or further luminaires and/or further capturing means either independent or associated with a luminaire. It is noted that a subset of edge devices may change over time and an edge device may be part of more than one subset, providing for instance some overlap between geographically adjacent subsets. Fog devices may further be configured to communicate with each other depending on circumstances. Preferably, a fog device and the associated subset of edge devices (comprising the first and second luminaire with associated first and second capturing means) may be arranged in a mesh network. For example, the edge devices may be configured to transmit captured data or edge processed data based on the captured data to their associated fog device and optionally receive control data from their associated fog device using a wireless personal area network (WPAN), preferably as defined in the IEEE 802.15.4 standard. Thus, the communication between the edge devices and their associated fog device may be based on a short-range protocol such as IEEE 802.15.4 (e.g. Zigbee). The network may be managed by the fog device or by a separate segment controller.
A fog device may typically be defined as a device having less processing and storage capabilities than a central control means of the entire network but more processing, storage and/or communication capabilities than an edge device. When the central control means operates under the principle of cloud computing, the intermediate level of processing performed by fog devices is referred to as fog computing. Fog computing may comprise a certain degree of distribution of the processing among a plurality of fog devices arranged in a mesh with the edge devices. The plurality of fog devices may be configured to communicate with the central control system through a cellular network. In such a solution the edge device may only be capable of communicating through the short-range communication protocol. However, it is also possible that at least some edge devices are capable of communicating both through a short-range protocol and a long-range protocol (e.g. through the cellular network). Also, a fog device may be integrated with one of the edge devices, e.g. one of the luminaires of a group of luminaires could function as the fog device for a group of edge devices comprising the group of luminaires and possibly also other edge devices.
A fog device may be associated with a subset of a plurality of edge devices located geographically in the vicinity of each other and forming a regional subset of edge devices. In an example, a subset of edge devices may be defined for edge devices installed in the same neighborhood, whether installed on luminaires, traffic lights, trash bins or any other infrastructure. In an example, a subset of edge devices may be defined based on the location, e.g. a subset for edge devices installed on luminaires lighting the same road. In another example, a subset of edge devices may be defined for edge devices located next to a specific city infrastructure, e.g. next to a bus stop, a school, a pedestrian crossing.
In another example, the central processing means is included in a remote central server, and the first and second luminaire with their associated first and second image capturing means are configured to communicate with the remote central server, either directly, using e.g., cellular communication, or indirectly, e.g., via a segment controller or a fog device.
In a preferred embodiment, the first data comprises sequential first sets of data at sequential moments in time and the second data comprises sequential second sets of data at sequential moments in time.
The determining of the at least one map may then comprise determining sequential maps based on said first sets and said second sets. In this way sequential maps of the area representing the area at sequential moments in time, may be obtained.
For example, the first data captured by the first capturing means may comprise a first set of first data at a first moment in time and a second set of first data at a second moment in time, and the second data captured by the second capturing means may comprise a first set of second data at a first moment in time and a second set of second data at a second moment in time. The determining of the at least one map may then comprise determining a first map based on the first set of first data and the first set of second data and determining a second map based on the second set of first data and the second set of second data. In this way a first map representing the area at the first moment in time and a second map representing the area at a second moment in time, may be obtained.
For example, each map of the sequential maps may indicate environmental characteristics of the area comprising the first and the second area at a given moment in time. In this way, the sequential maps allow to observe an evolution of the environmental characteristics of the area over time.
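The grouping of sequential sets of data into sequential maps can be sketched as follows. This is an illustrative Python sketch; the tuple layout of a capture (timestamp, cell, characteristic) is an assumption made for the example:

```python
from collections import defaultdict

def sequential_maps(first_data, second_data):
    """Build one map per moment in time.

    first_data / second_data: lists of (timestamp, cell, characteristic)
    tuples captured by the first and second capturing means.
    Returns the maps ordered by timestamp.
    """
    by_time = defaultdict(dict)
    for t, cell, char in list(first_data) + list(second_data):
        # each map merges, for that moment, the data of both areas
        by_time[t].setdefault(cell, set()).add(char)
    return [by_time[t] for t in sorted(by_time)]

first = [(1, "A", "sunlit"), (2, "A", "shadow")]        # first area
second = [(1, "B", "parked_car"), (2, "B", "free_space")]  # second area
maps = sequential_maps(first, second)
```

Comparing consecutive maps in the returned list is what allows the evolution of the environmental characteristics of the area to be observed over time.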
The at least one first and/or second capturing means may comprise an image capturing means, e.g. an infrared imaging device, a camera, etc. The first data of the first area may then comprise first images of the first area, e.g. a first sequence of images of the first area taken at sequential moments in time, and/or the second data of the second area may comprise second images of the second area, e.g. a second sequence of images of the second area at sequential moments in time.
Alternatively or additionally, the at least one first and/or second capturing means may comprise at least one of a LIDAR sensor, a sound sensor, a radar.
It is noted that it is possible to associate different capturing means with the same luminaire. For example, the first luminaire may be provided with two different capturing means and/or the second luminaire may be provided with two different capturing means. It is also possible that the first luminaire has a first type of capturing means and that the second luminaire has a second type of capturing means different from the first type.
For example, the first luminaire may be associated with a first capturing means configured to obtain first sensed environmental data and a second capturing means configured to obtain second sensed environmental data for the same area. In this way data from multiple capturing means may be combined for improved accuracy. For instance, while optical sensors have high accuracy and a long reach during bright days, they are less reliable in the event of heavy rain or dark nights. In the same way, a micro-Doppler radar is able to produce very recognizable features of a car at a close distance, but it creates indistinctive data at far reach. From the acoustic sensor alone it is very difficult to separate two distinct objects, but combined with an optical sensor the information pool becomes richer. A multi-sensor luminaire accommodating such a combination of sensors may thus have a higher accuracy and speed of detection. The skilled person understands that more than two sensors may also be associated with a luminaire.
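One simple way to combine such sensors is a condition-dependent weighted fusion, sketched below. The reliability weights and condition names are assumptions invented for the example; a real deployment would calibrate them per installation:

```python
# (sensor, condition) -> assumed reliability weight
RELIABILITY = {
    ("optical", "day"): 0.9, ("optical", "night"): 0.3,
    ("radar", "day"): 0.7,   ("radar", "night"): 0.7,
    ("acoustic", "day"): 0.5, ("acoustic", "night"): 0.5,
}

def fuse(detections, condition):
    """detections: {sensor: confidence in 0..1}.

    Returns a single fused detection score, weighting each sensor's
    confidence by how reliable that sensor is under the condition.
    """
    total_w = sum(RELIABILITY[(s, condition)] for s in detections)
    return sum(conf * RELIABILITY[(s, condition)]
               for s, conf in detections.items()) / total_w

# At night the optical sensor's low weight keeps a weak camera
# reading from dragging down the radar and acoustic evidence.
night_score = fuse({"optical": 0.2, "radar": 0.8, "acoustic": 0.7}, "night")
```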
According to a preferred embodiment, the at least one capturing means comprises at least one of, preferably at least two of: an optical sensor such as a photodetector or an image sensor, a sound sensor, a radar such as a Doppler effect radar, a LIDAR, a humidity sensor, a pollution sensor, a temperature sensor, a motion sensor, an antenna, an RF sensor, a vibration sensor, a metering device (e.g. a metering device for measuring the power consumption of a component of the edge device, more in particular a metering device for measuring the power consumption of a driver of a luminaire), a malfunctioning sensor (e.g. a sensor for detecting the malfunctioning of a component of the edge device such as a current leakage detector for measuring current leaks in a driver of a luminaire), a measurement device for measuring a maintenance related parameter of a component of the edge device, an alarm device (e.g. a push button which a user can push in the event of an alarming situation). In this way, environmental data about an event in the vicinity of a luminaire may be detected, e.g. characteristics (presence, absence, state, number) of objects like vehicles, street furniture, animals, persons, sub-parts of the edge device, or properties related to the environment (like weather (rain, fog, sun, wind), pollution, visibility, earthquake) or security related events (explosion, incident, gunshot, user alarm) in the vicinity of the edge device, maintenance related data or malfunctioning data of a component of an edge device.
According to an exemplary embodiment, a capturing means may be mounted in a housing of a luminaire, in an orientable manner. An example of a suitable mounting structure is disclosed in WO 2019/243331 A1 in the name of the applicant, which is included herein by reference. Such a mounting structure may be used for arranging e.g. an optical sensor in the housing of an edge device. Other suitable mounting structures for sensors are described in WO 2019/053259 A1, WO 2019/092273 A1, WO 2020/053342 A1 and WO 2021/094612 A1, all of which are in the name of the applicant and included herein by reference.
According to a possible embodiment, the at least one first and/or second capturing means comprises an image sensor configured to sense raw image data of the area. The raw image data may be transferred directly to the central processing means or may be processed by an edge processing means configured to process the sensed raw image data to obtain the edge processed data which is then sent to the central processing means. In this way, bandwidth may be saved. For instance edges of an object may be extracted from a sensed image, or a license plate may be extracted from a sensed image. In this way, a more complete and at the same time compact information may be transmitted to the central processing means.
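The bandwidth saving from edge processing can be illustrated with a minimal sketch: instead of transmitting the full grid of pixel values, only the coordinates where the horizontal gradient exceeds a threshold are sent. The gradient rule and threshold are simplifications made for the example, not a description of the actual edge processing means:

```python
def extract_edges(image, threshold=50):
    """image: list of rows of pixel ints (0-255).

    Returns [(row, col), ...] positions where the horizontal
    brightness difference exceeds the threshold, i.e. edge points.
    """
    edges = []
    for r, row in enumerate(image):
        for c in range(1, len(row)):
            if abs(row[c] - row[c - 1]) > threshold:
                edges.append((r, c))
    return edges

image = [
    [10, 10, 200, 200],   # a sharp vertical edge between cols 1 and 2
    [10, 10, 200, 200],
]
edges = extract_edges(image)
```

For a sparse scene the edge list is far smaller than the raw image, which is why transmitting the edge processed data rather than the raw image data saves bandwidth.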
According to a possible embodiment, the at least one first and/or second capturing means comprises a sound sensor configured to sense sound data in the area. The raw sound data may be transferred directly to the central processing means or may be processed by an edge processing means configured to process the sensed sound data to obtain the edge processed data, which is then sent to the central processing means. For example, the edge processing means may be configured to select a class from a plurality of classes according to the type of object detected in the area, and to include the determined class in the edge processed data. Additionally, an attribute associated with the sensed sound data may also be generated and aggregated with the sound classification. A sound attribute may for instance be a sound level, a frequency of a sound, or a duration of said sound.
Preferably, a sound attribute may be a frequency band related to a certain type of vehicle, e.g. the frequency band of the noise generated by electric or non-electric cars. In this way, more complete and at the same time more compact information may be transmitted to the central processing means.
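The combination of a sound class with aggregated attributes can be sketched as follows. The class names and frequency bands below are invented for the example and are not taken from the embodiment:

```python
# (class, low Hz, high Hz) - assumed bands, most specific first
SOUND_CLASSES = [
    ("electric_car", 50, 400),
    ("combustion_car", 400, 2000),
    ("unknown", 0, 20000),
]

def classify_sound(peak_hz, level_db, duration_s):
    """Return the edge processed record: a class plus its attributes."""
    for name, lo, hi in SOUND_CLASSES:
        if lo <= peak_hz < hi:
            return {"class": name,
                    "attributes": {"level_db": level_db,
                                   "peak_hz": peak_hz,
                                   "duration_s": duration_s}}

event = classify_sound(peak_hz=300, level_db=62, duration_s=1.5)
```

Only this small record, rather than the raw audio, would be transmitted to the central processing means.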
According to a possible embodiment, the at least one first and/or second capturing means comprises a radar sensor configured to sense radar data in the area. The raw sensed data may be transferred directly to the central processing means or may be processed by an edge processing means configured to process the sensed data to obtain the edge processed data which is then sent to the central processing means. The edge processing means may be configured to process the sensed radar image data to select a class from a plurality of classes relating to an object detected in the area. In this way, bandwidth may be saved.
More in particular, all three types of capturing means: optical, sound and radar may be connected to the same common interface support such that the combination of sensors can be easily interconnected in any kind of edge device in a cost-effective manner. Thus, the first and/or second luminaire may comprise a combination of an optical, sound and radar capturing means. It has been found that the combination of these three sensors in an edge device such as a luminaire allows for an accurate classification of objects in the vicinity of the edge device, at all times of the day.
Examples of edge devices such as luminaires where these sensors are combined are disclosed in WO 2022/122750 in the name of the applicant, which is included herein by reference.
The central processing means may be configured to identify sunlit and/or shadow portions in the first and second data. Said portions may include e.g. portions of a ground area and/or an infrastructure area of the first and second area.
In this way, sunlight information may be determined from the data. This allows e.g. knowing whether the lighting of a given portion of the area needs to be activated and/or dimmed earlier or later than in other portions of the area. For example, it may be determined that some portion of a street falls in the shadow of a building earlier than another portion of the street. In this way, the safety of road users and the feeling of safety of pedestrians walking in said street are increased.
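A minimal form of this identification is a per-cell brightness threshold, sketched below. The threshold value is an assumption; a real system would calibrate it per camera and time of day:

```python
def shadow_map(brightness_grid, threshold=100):
    """brightness_grid: rows of 0-255 mean-brightness values per
    ground cell. Returns the same grid with "sunlit"/"shadow" labels."""
    return [["sunlit" if v >= threshold else "shadow" for v in row]
            for row in brightness_grid]

grid = [
    [220, 210, 40],   # the rightmost cell lies in a building's shadow
    [200,  60, 30],
]
labels = shadow_map(grid)
```

Computing such a labeled grid at sequential moments in time is what yields the dynamic shadow-light map discussed above.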
The first and/or the second luminaire may comprise a transmitting means configured to transmit information to the central processing means and receive information from the central processing means.
The first and/or the second luminaire may comprise a local processing means configured to detect if an object has been added to or removed from the area and a transmitting means configured to transmit information based on said detection to the central processing means. The detection may be based on the first and/or second data. The local processing means may also be configured to select from the first and/or second data, one or more data captures of the data corresponding to the area with the added or removed object, and the selected one or more data captures and/or information based thereon may then be transmitted to the central processing means.
In this way, changes in the area may be detected locally and information about the changes may be transmitted to a central processing means. For example, if a bus shelter or a parking spot has been moved further down on the road, the local processing means may detect that change. In another example, the local processing means may detect that a vehicle that was parked in a parking space has left said parking space such that said parking space is available for another vehicle. For example, one or more data captures, e.g. one or more images, of the new situation may be selected by the local processing means and transmitted to the central processing means, such that, for example, a digital mapping application or a parking application may be updated.
The local processing means may be further configured to determine a class of the added or removed object and to only transmit information if the added or removed object belongs to one or more predetermined classes. In such case, the local processing means may for example only select the one or more data captures of the data for transmission if the added or removed object belongs to one or more predetermined classes. Preferably, the one or more predetermined classes may be classes for one or more static objects, preferably infrastructure elements.
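The local transmit decision reduces to a class filter, sketched below. The class list is an assumption chosen to match the static-infrastructure example; a parking application would use a different list:

```python
# Assumed predetermined classes of static infrastructure elements
STATIC_CLASSES = {"bus_shelter", "zebra_crossing", "bench"}

def should_transmit(detected_class, allowed=STATIC_CLASSES):
    """Transmit a detected change only if its class is predetermined."""
    return detected_class in allowed

# A moved bus shelter is reported to the central processing means;
# a car leaving a parking space is filtered out locally.
decisions = [should_transmit("bus_shelter"), should_transmit("car")]
```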
In yet another embodiment, the detecting if an object has been added to or removed from the area and/or the determining of a class of an added or removed object may be done by the central processing means (e.g. included in a remote device) instead of by a local processing means.
In this way, undesired transmitting from the local processing means may be avoided, especially for changes of objects that are not static, which occur more often. What is desired for transmission or not will most often depend on the application that uses the transmitted data. For example, for a digital mapping application such as Google Maps or Waze or for a transport mapping application, knowing that a bus shelter, which is static, has been moved further down on the road may be valuable to update a digital mapping application. Indeed, such changes do not occur often and are not expected to happen soon after the shelter has been moved. On the other hand, knowing that a vehicle has moved from a parking space, or even that a vehicle has just passed by on the road occurs more often. In the two latter cases, transmitting one or more data captures of the data, e.g. one or more images, may consume processing power and/or bandwidth in an undesired manner. On the contrary, for other applications such as parking monitoring applications for example, the moving of a bus shelter would not be interesting, while the fact that a vehicle has moved from a parking space is.
Optionally, the first and/or second luminaire (i.e. a local processing means of the first and/or second luminaire) and/or the central processing means is configured to determine which elements in the first and/or second area are permanent and which elements are non-permanent, based on the first and/or second data; and/or to determine short-term, mid-term and/or long-term changes in the first and/or second area, based on the first and/or second data. This could be done locally, where the first luminaire determines this for the first area based on sequential sets of first data, and the second luminaire for the second area based on sequential sets of second data, wherein the first and second luminaire transmit a result of the determining to the central processing means. Alternatively, this may be done by the central processing means for the combined first and second area.
The central processing means may be configured to generate at least one map including the elements which are permanent and not the elements which are non-permanent.
Optionally, the first and/or second luminaire and/or the central processing means is configured to determine information about recurrent or periodic changes, based on the first and/or second data.
The central processing means may then be configured to generate the at least one map using the information about recurrent or periodic changes.
The first and/or second captured data over time may comprise a first short-term sequence of data covering a first predetermined period of time, and a second mid-term sequence of data covering a second predetermined period of time. Optionally, a third long-term sequence of data covering a third predetermined period of time may also be comprised. The first predetermined period of time may be shorter than the second and the optional third period of time and the second predetermined period of time may be shorter than the third period of time.
The captured sequences of data over time may be sequences of data captured at sequential moments in time, for example with regular or irregular time intervals between two moments in time. The first predetermined period of time may be less than 2 days while the first short-term captured sequence of data over time may preferably comprise at least 2 data captures per day. More preferably, the first short-term captured sequence of data over time may comprise at least 10 data captures per day, and preferably less than 24 data captures per day. The second predetermined period of time may be less than 2 months while the second mid-term captured sequence of data over time may preferably comprise at least 2 data captures per month. More preferably, the second mid-term captured sequence of data over time may comprise at least 4 data captures per month, preferably less than 100 data captures per month.
The third predetermined period of time may be less than 18 months while the third long-term captured sequence of data over time may preferably comprise at least 2 data captures per year. More preferably, the third long-term captured sequence of data over time may comprise at least 12 data captures per year, preferably less than 100 data captures per year.
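As an illustration, such a tiered capture schedule may be modelled as follows. This is a minimal sketch: the `CaptureSequence` name and the concrete periods and capture counts are illustrative assumptions chosen within the ranges stated above, not prescribed values.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class CaptureSequence:
    """One tier of the capture schedule: the period it covers and how often it samples."""
    period: timedelta   # total time window covered by the sequence
    captures: int       # number of data captures within that window

    @property
    def interval(self) -> timedelta:
        """Average spacing between two consecutive captures."""
        return self.period / self.captures

# Illustrative values chosen within the stated ranges:
short_term = CaptureSequence(period=timedelta(days=1), captures=12)   # >= 2/day, period < 2 days
mid_term = CaptureSequence(period=timedelta(days=30), captures=4)     # >= 2/month, period < 2 months
long_term = CaptureSequence(period=timedelta(days=365), captures=12)  # >= 2/year, period < 18 months
```

The short-term tier above samples every two hours on average, while the long-term tier samples roughly monthly, reflecting the different time scales of the changes to be detected.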
In this way, different time scales may be considered in the detection of changes and different types of changes may be detected. For example, it may be determined if the change is a long-lasting change, e.g. an element such as a bus shelter or a building has been moved, if the change is a medium-lasting change, e.g. garbage bins have been taken by the garbage truck, or if the change is a short-lasting change, e.g. a parking space has been freed but has been occupied shortly after. In the case of long-lasting changes, it may be advantageous to store the data of only a few data captures to be able to detect such changes. For example, having 5 images of an area in the last 5 years may be enough to determine that a building has been removed from the area over those 5 years, and there may not be more information to gain from having 50 images instead. For parking monitoring applications, however, the relevant time scale will more probably be on the order of an hour or a day, e.g. 3 data captures every hour.
The first short-term sequence, the second mid-term sequence, and optionally the third long-term sequence may be used to determine which elements in the first and/or second area are permanent and which elements are non-permanent. Alternatively or additionally, they may be used to determine short-term, mid-term and/or long-term changes in the first and/or second area. The central processing means may be configured to generate at least one map including the elements which are permanent and not the elements which are non-permanent.
By generating a map of the area comprising only permanent elements, it may be easier to detect changes in the area, e.g. changes in the environmental characteristics of infrastructure areas of the area and/or changes of shadow and light portions of the area, as the map then contains a "background scene" to compare to. In addition, the map only contains permanent elements, i.e. static elements, such as a bus shelter, the sidewalk or commercial buildings, that are not subject to regular changes, such that more concise information is contained in the map. Digital mapping applications using said maps may therefore provide the users of said applications with more concise information.
The first short-term sequence, the second mid-term sequence, and optionally the third long-term sequence may be used to determine information about recurrent or periodic changes. The central processing means may be configured to generate the at least one map using the information about recurrent or periodic changes.
Recurrent and/or periodic changes such as the day and night cycle or the yearly seasonal cycle may have a significant impact on the sequence of data. For example, during the night, the area may be illuminated with artificial light, which always casts the same shadow, while during the day, the sun may cast shadows that move as the day progresses. Also, as the year progresses, the height of the sun varies, and the length of the day and the size of the shadows vary with it. Valuable information may be obtained when including such information in the generation of the at least one map. For example, the map may include, for a given time of the day, the expected illumination of a given area.
Such information may be valuable to city planners, who could improve their design of natural air conditioning within a city, to real estate businesses, which may better evaluate the price of an accommodation depending on the available sunlight throughout the year, and/or to inhabitants in search of a sunlit terrace to enjoy the sun. Such information may also be useful for parking monitoring applications which may guide a vehicle towards a parking space which is or will be in a shadow area.
The luminaires may each comprise a pole with a luminaire head or alternatively a plurality of pole modules arranged one above the other along a longitudinal axis. The at least one corresponding data capturing means, e.g. an image capturing means and/or a LIDAR sensor, may be configured to be arranged in or on said pole or in or on said luminaire head or in or on a pole module of said plurality of pole modules.
An example of a luminaire with pole modules is disclosed in EP 3 076 073 B1 in the name of the applicant, which discloses a modular lamp post comprising a plurality of pole modules mounted on a support pole. The pole modules are connected to one another by respective pole module connectors and optionally one pole module thereof is connected to a support pole by a pole module connector.
EP 3 076 073 B1 is included herein by reference. It is known to include additional functionalities, either in the modular lamp post itself or in a separate cabinet adjacent a lamp post. Examples of such modular lamp posts are disclosed in WO 2019/043045 A1, WO 2019/043046 A1, WO2020/152294,
WO 2019/053259 A1, WO 2019/092273 A1, WO2021/239993 A1, PCT/EP2022/051148, WO 2016/193430 and WO 2021/094612 A1 in the name of the applicant. An example with a camera pole module is disclosed in WO 2021/094612 A1 in the name of the applicant. WO 2021/094612 A1 discloses a lamp post comprising a plurality of pole modules arranged one above the other along a vertical axis. The plurality of pole modules comprises a light pole module with a light source and a camera pole module.
The central processing means may comprise a cloud and/or a fog device. In this way, the processing of the captured data may be centralized in a cloud and/or fog device, which may be more adapted for such processing. This may also allow to have a centralized device through which all communications between the luminaires occur. It is noted that the central processing means may be distributed over different servers, and may be distributed over multiple fog and cloud devices. As explained above, a fog device may be part of a network and may communicate with different edge devices, such as the first and second luminaire, e.g. using a mesh network.
For example, as described in PCT publications WO 2022/122750 A1 and WO 2019/175435 A2 in the name of the applicant, which are included herein by reference, the central processing means may comprise a plurality of fog devices configured to process data received from the plurality of luminaires, e.g., raw sequences of captured image data or luminaire processed data, and to produce fog processed data based thereon, said fog processed data comprising e.g. information about one or more detected objects and/or sunlit or shadow portions, and a central processing system in communication with said fog devices and configured to receive the fog processed data.
According to a second aspect of the invention, there is provided a system for updating data, for example one or more images, of an area in the vicinity of one or more luminaires. The system comprises one or more luminaires provided with one or more image capturing means configured to capture data of the area, for example a sequence of images over time of the area. The system also comprises a processing means configured to detect if a long-lasting change has occurred in the area based on the captured data of the area.
In other words, in this system, one or more luminaires in a given area are equipped with capturing
means, such as image capturing means, e.g. cameras, that capture data, e.g. a sequence of images, in said area. The capturing means may for example each capture given sub-areas within the area, e.g. a portion of a road, which may be distinct, overlap or be identical to the sub-areas captured by the other capturing means. In the case of the area being a road, the data may comprise information on the road and its vicinity, e.g. the presence of a bus stop, of a zebra crossing, of road lights, the state of the road, the number of cars on the road, of parked cars, etc. Based on the captured data, the processing means of the system detects if long-lasting changes have occurred in the area. It is noted that the processing means may use the raw captured data or processed data based on the raw captured data. For example, a bus shelter may have been moved elsewhere, and this change may be detected based on the captured data.
As an example, such a system may be used to transmit information regarding the removal of a bus shelter or of a parking spot and its relocation elsewhere, or the change of the name of a commercial building. In this way, a map or image of a given area may be determined as needing an update. Such information may be used by applications, e.g. digital mapping applications such as
Google Maps or Waze, traffic monitoring applications, parking monitoring applications, etc., to update their data. In particular, the system may provide a satellite-independent alternative to digital mapping applications.
Preferably, at least one of the one or more luminaires comprises the processing means and a transmitting means configured to transmit information related to a result of the detecting to a remote device.
In an exemplary embodiment, the processing means is configured to select from the data, e.g. a sequence of images, one or more data captures, e.g. one or more images, captured before and/or one or more data captures captured after the detected long-lasting change. The transmitting means may then be configured to transmit information related to the selected one or more data captures, or the selected one or more data captures themselves, to a remote device. For example, the information may comprise one or more captured images or portions of captured images, the type of change, and/or the position of the change within the area.
For example, the information related to the selected one or more data captures may comprise the selected one or more images or portions thereof.
In this way, by transmitting an image or a portion thereof prior to the long-lasting change and an image or a portion thereof after said change has occurred, a confirmation that said change has been correctly detected and that the image indeed needs updating may be determined, e.g. by the remote device or by an operator. In addition, this may also improve the determination of the timing at which such change has occurred.
Optionally, the remote device is configured to include a result of the detection in a map of the area.
In addition or alternatively, the processing means is configured to include a result of the detection in a map of the area.
Optionally, the remote device is configured to select from the captured data one or more data items captured before and/or one or more data items captured after the detected long-lasting change, and optionally to use the selected one or more data items to generate a map. In addition or alternatively, the processing means is configured to select from the captured data one or more data items captured before and/or one or more data items captured after the detected long-lasting change, and optionally to use the selected one or more data items to generate a map.
The captured data, e.g. a sequence of images over time, may cover a first predetermined period of time. The first predetermined period of time may be at least 1 week, preferably at least 1 month, more preferably at least 3 months. The data may be captured at sequential moments in time, preferably at least 2 captures per day, more preferably at least 10 captures per day. For example, images may be captured at consecutive time intervals. Preferably at least 2 images may be captured per day, more preferably at least 10 images per day.
Preferably, the data comprises one or more images. However, in addition or alternatively also other sensed data, such as noise data, may be used.
The long-lasting change may be detected by comparing pixels in consecutive images from a captured sequence of images and detecting that the change is a long-lasting change when said pixel change lasts for at least a first predetermined number of consecutive images.
In this way, short-lasting changes may not be mistakenly considered as a long-lasting change. For example, a bus may stop in front of a bus shelter at the moment at which one or more images from the captured sequence of images are taken. On these one or more images, this will induce pixel changes for the pixels that generally correspond to the bus shelter. Similarly, bus users waiting at the bus stop or stepping out of the bus will also induce such pixel changes. However, these pixel changes will most often last only one image, or a small number of consecutive images, at most. By requiring that the pixel change lasts for at least a first predetermined number of consecutive images, undesired information related to short-lasting changes may therefore be filtered out.
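The persistence criterion described above can be sketched as follows. This is an illustrative reading, not a prescribed implementation: the function name and the pixel difference threshold are assumptions, and changes are measured against the first (pre-change) image of the sequence while persistence is counted over consecutive subsequent images.

```python
import numpy as np

def detect_long_lasting_change(images, diff_threshold=30.0, min_persistence=5):
    """Flag pixels whose change (relative to the first, pre-change image)
    persists on at least `min_persistence` consecutive images."""
    reference = images[0].astype(float)
    run_length = np.zeros(reference.shape, dtype=int)  # current consecutive-change streak
    longest = np.zeros(reference.shape, dtype=int)     # longest streak seen so far
    for img in images[1:]:
        changed = np.abs(img.astype(float) - reference) > diff_threshold
        run_length = np.where(changed, run_length + 1, 0)  # reset streak where unchanged
        longest = np.maximum(longest, run_length)
    return longest >= min_persistence  # boolean mask of long-lasting changes
```

A pixel perturbed on a single frame (e.g. by a passing bus) never accumulates a streak of `min_persistence` images and is therefore filtered out, while a removed bus shelter changes the same pixels on every subsequent frame.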
Consecutive static images may be obtained by averaging pixel values of a predetermined amount of consecutive images from a captured sequence of images. The long-lasting change may be detected by comparing pixels in said consecutive static images.
By taking the average of the pixel values of images, which may sometimes be referred to as denoising, changes that occur on only a small number of images, and therefore contribute only a small amount to the average, may not appear in the static images obtained in this way. This ensures that essentially only long-lasting changes remain in the static images.
To make sure that the changes are only due to the static elements of the image, the average may be calculated on images taken in the same conditions. For example, several images used in the average may be selected such that they have been taken at the same time of the day, and/or under the same weather conditions, and/or under the same exposure.
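The averaging step may be sketched as follows. The function names and the difference threshold are illustrative assumptions, and, per the remark above, the images averaged together are assumed to have been selected under comparable conditions (same time of day, weather and exposure).

```python
import numpy as np

def static_background(images):
    """Average comparable images so that objects present on only a few
    frames contribute little to the mean and fade out of the result."""
    return np.stack([img.astype(float) for img in images]).mean(axis=0)

def background_change_mask(bg_before, bg_after, threshold=30.0):
    """Pixels differing between two consecutive static images are
    candidate long-lasting changes."""
    return np.abs(bg_after - bg_before) > threshold
```

For example, an object present on 1 of 10 averaged frames contributes only a tenth of its pixel intensity to the static image, typically below the comparison threshold, whereas a permanent change appears at full intensity.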
When the data comprises noise data, the long-lasting change may be detected for example by comparing average noise data for consecutive periods of time.
The processing means may be further configured to detect that an object has been added to or removed from the area as the long-lasting change.
For example, a bus shelter may have been removed from a first area, and moved to a second area due to changes in the city planning or in the public transport network. The processing means may therefore detect that an object, i.e. the bus shelter, has been removed from the first area. This may be determined as a long-lasting change, since the bus shelter does not appear on a significant amount of images after the change. On the other hand, the processing means may also detect that an object, i.e. the bus shelter, has been added to a second area. This also may be determined as a long-lasting change, since the bus shelter appears on a significant amount of images after the change.
A threshold for the duration of the long-lasting change may be determined. For example, a bus shelter may be moved temporarily because of road works. In that case, it may be desirable to wait until the change is indeed confirmed to be long-lasting before requiring that the images of a digital mapping application are updated to reflect these changes. The threshold may be a predetermined first amount, e.g. a first amount of static images and/or a first duration of time, such that if the change is visible on a second amount of static images that is larger than the first amount and/or if the change is visible for a second duration larger than the first duration, then the change may be determined as a long-lasting change.
The processing means may be further configured to determine a class of the added or removed object, and optionally the transmitting means is configured to only transmit the information if the added or removed object belongs to one or more predetermined classes. For example, the information may comprise the determined class and/or one or more images with the added or removed object.
Depending on the application, some changes may be long-lasting but may not be considered as generating a need to update images of the area. For example, an added object, such as a parking place, a parked car or a road work zone, may be detected in the area and may remain there for days or weeks. Such a change may therefore be considered a long-lasting change. For a digital mapping application such as Google Maps or Waze, however, such changes may not all require that the images of the application are updated to reflect these changes. Indeed, a user of e.g. Google Maps may not be interested in changes of parked cars. On the contrary, changes such as the removal of a bus shelter or of a zebra crossing may be considered as generating a need for an update. On the other hand, for other applications such as parking monitoring applications for example, the fact that a parking spot is unavailable because of long-lasting road works may well trigger the need for an update of the application to reflect these changes.
By determining the class of the added or removed object with the processing means, it may be avoided that too much processing and/or transmission power is used when not needed.
The information related to the selected one or more images may further comprise the class of the added or removed object. In this way, the determination of whether the change calls for an update or not may be left to the remote device.
The one or more predetermined classes may be classes for one or more static objects, preferably infrastructure elements.
The information related to the selected one or more data captures, e.g. one or more images, may comprise the position of the long-lasting change.
The one or more capturing means may be further configured to adjust a duration between consecutive data captures, e.g. captured images, based on sensed and/or received data related to the area.
In this way, the system may be more adaptable to particular situations encountered in the area. For example, the information that a football match, a protest march, a concert, etc. may take place in the area or nearby may be determined based on sensed and/or received data. In those cases, adjusting the duration between the consecutive data captures, e.g. increasing the amount of images taken over time, may be valuable for applications such as crowd monitoring, parking monitoring, etc. Similarly, decreasing the frequency of image capture during the night in comparison to the frequency of capture during the day may allow the power consumption of the system to be decreased.
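A possible policy for adjusting the capture interval could look like the following sketch; the function name, its inputs and the scaling factors are illustrative assumptions rather than prescribed behaviour.

```python
from datetime import timedelta

def capture_interval(is_night, event_nearby, base=timedelta(minutes=30)):
    """Adapt the duration between consecutive captures to sensed/received data:
    shorter around events (crowd or parking monitoring), longer at night to
    reduce power consumption."""
    if event_nearby:
        return base / 6   # e.g. every 5 minutes during a match, march or concert
    if is_night:
        return base * 4   # e.g. every 2 hours at night
    return base
```

The event condition takes precedence over the night condition here, so a night-time concert would still be captured at the higher rate; other orderings are of course possible.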
The one or more capturing means may be configured to be triggered by a remote user to capture data of the area.
In this way, the updating of the data, e.g. images, may be controlled manually. For example, based on reviews stating that the image is not up to date, a remote user, e.g. a user of a digital mapping application or an operator of such an application, may trigger the capture of the images.
The luminaires may each comprise a pole with a luminaire head or alternatively a plurality of pole modules arranged one above the other along a longitudinal axis. The one or more image capturing means may be configured to be arranged in or on said pole or in or on said luminaire head or in or on a pole module of said plurality of pole modules.
An example of a luminaire with pole modules is disclosed in EP 3 076 073 B1 in the name of the applicant, which discloses a modular lamp post comprising a plurality of pole modules mounted on a support pole. The pole modules are connected to one another by respective pole module connectors and optionally one pole module thereof is connected to a support pole by a pole module connector.
EP 3 076 073 B1 is included herein by reference. It is known to include additional functionalities, either in the modular lamp post itself or in a separate cabinet adjacent a lamp post. Examples of such modular lamp posts are disclosed in WO 2019/043045 A1, WO 2019/043046 A1, WO2020/152294,
WO 2019/053259 A1, WO 2019/092273 A1, WO2021/239993 A1, PCT/EP2022/051148, WO 2016/193430 and WO 2021/094612 A1 in the name of the applicant. An example with a camera pole module is disclosed in WO 2021/094612 A1 in the name of the applicant. WO 2021/094612 A1 discloses a lamp post comprising a plurality of pole modules arranged one above the other along a vertical axis. The plurality of pole modules comprises a light pole module with a light source and a camera pole module.
According to a third aspect of the invention, there is provided a system for determining sunlight exposure information of an area in the vicinity of one or more edge devices. The system comprises one or more edge devices provided with one or more image capturing means configured to capture a sequence of images of one or more surfaces of the area, captured at sequential moments in time.
The sequence of images preferably covers at least a daylight period. In addition, the system comprises a processing means configured to receive the captured sequence of images. The processing means is also configured to identify sunlit and/or shadow portions of the one or more surfaces in the captured sequence of images.
In other words, in this system, one or more edge devices are equipped with image capturing means that capture a sequence of images of a given area in the vicinity of the edge devices.
The image capturing means may for example capture given sub-areas of e.g. a road, which may or may not be different sub-areas within the area. In the case of the area being a road, the sequence of images may comprise information on the road and its vicinity and more specifically one or more surfaces of the road and its vicinity. Preferably the sequence of images covers at least the daylight period. Based on the captured sequence of images and therefore on the information contained in the captured sequence of images, the processing means of the system identifies sunlit and/or shadow portions of the surfaces in the area. For example, a tree may cast a shadow on the road and a tall building on one side of the road may cast a shadow on the road surface, the sidewalk and the buildings on the other side. The processing means may then identify these portions as shadow portions.
In this way, valuable sunlight information may be gathered by the system.
Sunlight information is valuable for a wide variety of applications. For example, city planners could improve the design of natural air conditioning within a city, for example by knowing which sunny places need to be provided with green spaces and artificial water bodies, and could improve the placement of modern urban gardens, which need sun for the plants to grow. Road safety may be increased by knowing which roads fall into shadow sooner and need an earlier lighting of their luminaires, or by adapting the light levels in a tunnel when the sun is bright outside, for example by using 100% of the light at the entrance of the tunnel and dimming the light gradually. Real estate businesses could better evaluate the price of accommodation depending on the available sunlight throughout the year. Solar panel placement may be improved to harvest energy for electric bicycles, cars, house facades, luminaires and parking meters.
The captured images may be images captured by a camera forming an image using visible light, but could also be thermal images captured by an infrared camera.
The processing means may be configured to determine at least one map of the area, preferably a sequence of maps over time. Each map may indicate identified sunlit and/or shadow portions of the one or more surfaces.
For example, said maps may be used by digital mapping applications to provide additional sunlight information, such as whether some parking space is exposed to direct sunlight or not, or whether the luminosity of a screen, such as a display of a smart phone using the application or an advertising display located in the area, may need to be increased in some portion of the road. This may also allow to know whether the lighting of a given portion of the road needs to be activated and/or dimmed earlier or later than other portions of the road. For example, it may be determined that some portion of a street is in the shadow of a building earlier than another portion. In this way, the safety of the users of the street, e.g. pedestrians walking in said street, is increased, as well as their feeling of safety. This may also be helpful to adapt the light levels at the entrance of a tunnel or in the tunnel.
The at least one map may comprise a plurality of maps, each indicating identified sunlit and/or shadow portions of the one or more surfaces at a given moment in time. The moments in time corresponding to the plurality of maps may for example span the same period of time spanned by the sequence of images. In this way, an evolution of the identified sunlit and/or shadow portions of the one or more surfaces may be obtained.
The processing means may be configured to identify a type of the one or more surfaces, and to determine at least one map based on the identified type of the one or more surfaces. Each map may indicate identified sunlit and/or shadow portions of the one or more surfaces.
The at least one map may comprise at least one 3D-map. In this way, maps determined by the processing means may be more immersive. For example, a user of a digital mapping application desiring to enjoy a good meal on a terrace may be able to determine, thanks to such 3D maps, whether the terrace of a restaurant will be sunlit for a sufficient time. Said maps may also be used to know whether some apartment has good sun exposure or not, which may influence its price.
The processing means may be configured to determine an amount of time during which a predetermined area in the vicinity of the one or more edge devices is exposed to direct sunlight, preferably based on the captured sequence of images. Indeed, if images are captured on a regular basis, it may be determined when a surface changes from a sunlit surface to a shadow surface. Further, by capturing data on a regular basis, at least during daylight time, for a plurality of days, improved results may be achieved.
In this way, valuable sunlight information may be obtained. As mentioned above, the determined amount of time during which a predetermined area is exposed to direct sunlight may influence the choice of a person desiring to enjoy a good meal on a terrace, or the price of an apartment.
Optionally, the results of the determining may be represented graphically using for example a different color for different ranges of the amount of direct sunlight time.
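Such a graphical representation may, for instance, be derived from per-pixel sunlight durations as in the following sketch. The function names, the band boundaries and the assumption of regularly spaced captures over the daylight period are all illustrative.

```python
import numpy as np

def sunlight_hours(sunlit_masks, hours_between_captures):
    """Per-pixel hours of direct sunlight, estimated from a sequence of boolean
    sunlit/shadow masks captured at regular intervals over the daylight period."""
    return np.sum(sunlit_masks, axis=0) * hours_between_captures

def duration_color_band(hours, bands=(2.0, 6.0)):
    """Map each duration to a band index for colour coding, e.g.
    0 = mostly shaded, 1 = partly sunlit, 2 = mostly sunlit."""
    return np.digitize(hours, bands)
```

Each band index can then be rendered with a different colour, giving the graphical representation mentioned above.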
The processing means may be further configured to receive weather information from a weather database and to determine an amount of direct and/or indirect sunlight during the daylight period to which a predetermined area in the vicinity of the one or more edge devices is exposed. The determination may be based on the identified sunlit and/or shadow portions of the one or more surfaces and the received weather information.
In bad weather, the sun most often does not cast any shadow on the ground. If only based on the identified sunlit and/or shadow portions of the one or more surfaces, the determination of sunlight may be inaccurate. By taking into account weather information, for example cloud cover, a better accuracy may be obtained in the determination of the amount of sunlight. The determined amount of sunlight may be used by solar panel operators, e.g. to predict the energy that will be generated by a solar panel.
The one or more edge devices may be further associated with one or more sensors, such as a light sensor and/or a temperature sensor. In addition, the processing means may be further configured to determine an amount of direct and/or indirect sunlight during the daylight period to which a predetermined area in the vicinity of the one or more edge devices is exposed. The determination may be based on the identified sunlit and/or shadow portions of the one or more surfaces and data sensed by the one or more sensors.
In this way, the accuracy of the determination of the sunlight during the daylight period may be improved.
The processing means may be further configured to identify sunlit portions of a surface as portions with a luminosity above a first pixel luminosity threshold and/or shadow portions of a surface as portions with a luminosity below a second pixel luminosity threshold. The first luminosity threshold may be above or equal to the second luminosity threshold.
In this way, the sunlit and/or shadow portions may be identified in a simple and not computationally expensive way.
The processing means may be further configured to determine the first and/or the second pixel luminosity threshold based on optical properties of the surface.
The amount of light that is reflected or diffused by a surface of the area that is imaged depends on the optical properties of the surface. A wet road surface will reflect more light than a road covered with sand particles and tree leaves, and a glass facade will reflect more light than a brick facade.
Therefore, the amount of light that reaches the image capturing means will also depend on these properties. By taking these properties into account and adjusting the thresholds based on these properties, the sunlit and/or shadow portions may be identified in an improved way.
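By way of non-limiting illustration, the thresholding described above may be sketched as follows in Python. The threshold values, the grayscale pixel range (0-255) and the label encoding are assumptions chosen for illustration only and do not form part of the disclosed embodiments.

```python
import numpy as np

def classify_portions(image, t_sunlit=180, t_shadow=80):
    """Label each pixel of a grayscale image (0-255) as sunlit,
    shadow, or undetermined, using two pixel luminosity thresholds.
    The first threshold is above or equal to the second, as in the
    description; the numeric values here are illustrative placeholders
    and would in practice be adjusted to the optical properties of
    the surface (e.g. wet asphalt vs. sand-covered road)."""
    sunlit = image >= t_sunlit          # luminosity above first threshold
    shadow = image <= t_shadow          # luminosity below second threshold
    labels = np.zeros(image.shape, dtype=np.uint8)
    labels[shadow] = 1                  # 1 = shadow portion
    labels[sunlit] = 2                  # 2 = sunlit portion
    return labels                       # 0 = neither
```

As the description notes, such a per-pixel comparison is computationally inexpensive and may therefore run locally on an edge device.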
The one or more edge devices may be luminaires or may be included therein. The luminaires may for example each comprise a pole and a luminaire head or alternatively a plurality of pole modules arranged one above the other along a longitudinal axis. The one or more image capturing means of the one or more edge devices may be configured to be arranged in or on said pole or in or on the luminaire head or in or on a pole module of said plurality of pole modules.
An example of a luminaire with pole modules is disclosed in EP 3 076 073 B1 in the name of the applicant, which discloses a modular lamp post comprising a plurality of pole modules mounted on a support pole. The pole modules are connected to one another by respective pole module connectors and optionally one pole module thereof is connected to a support pole by a pole module connector.
EP 3 076 073 B1 is included herein by reference. It is known to include additional functionalities, either in the modular lamp post itself or in a separate cabinet adjacent a lamp post. Examples of such modular lamp posts are disclosed in WO 2019/043045 A1, WO 2019/043046 A1, WO 2020/152294,
WO 2019/053259 A1, WO 2019/092273 A1, WO 2021/239993 A1, PCT/EP2022/051148, WO 2016/193430 and WO 2021/094612 A1 in the name of the applicant. An example with a camera pole module is disclosed in WO 2021/094612 A1 in the name of the applicant. WO 2021/094612 A1 discloses a lamp post comprising a plurality of pole modules arranged one above the other along a vertical axis. The plurality of pole modules comprises a light pole module with a light source and a camera pole module.
The processing means may be further configured to communicate information relating to the sunlit and/or shadow portions of the one or more surfaces to a remote device and/or to another luminaire.
In this way, further processing of the information may be carried out at the remote device, e.g. at a remote server or a cloud computing device.
The information relating to the sunlit and/or shadow portions of the one or more surfaces may be in the form of a binary map.
In this way, the bandwidth for the communication of the information relating to the sunlit and/or shadow portions of the one or more surfaces may be reduced.
Alternatively, the remote device may be configured to generate a binary map based on the received information.
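By way of non-limiting illustration, the bandwidth reduction obtained with a binary map may be sketched as follows. The encoding (one bit per pixel, 1 = sunlit) and the label convention are assumptions for illustration only.

```python
import numpy as np

def to_binary_map(labels):
    """Encode a per-pixel classification as a packed bit array
    (1 = sunlit, 0 = not sunlit) for low-bandwidth transmission
    to a remote device or another luminaire. Assumes the
    illustrative convention that label value 2 denotes sunlit."""
    sunlit_mask = (labels == 2)
    return np.packbits(sunlit_mask, axis=None)   # 8 pixels per byte

def from_binary_map(packed, shape):
    """Recover the boolean sunlit mask at the receiving side,
    given the known image shape."""
    bits = np.unpackbits(packed, count=shape[0] * shape[1])
    return bits.reshape(shape).astype(bool)
```

Compared with transmitting an 8-bit grayscale image, this packed representation reduces the payload roughly eightfold, which illustrates the bandwidth advantage mentioned above.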
The processing means may be included in at least one of the one or more edge devices. The processing means may be a single unit, included in one of the one or more edge devices.
Alternatively, the processing means may be distributed over multiple units, e.g. partly in a first edge device and partly in a second edge device. In this way, the processing may be carried out locally, which reduces the amount of data that needs to be exchanged, e.g. with a remote device or cloud device, such as a cloud computing device.
Alternatively, the processing means may be included in a local single unit, e.g. a fog device, or in a remote device such as a cloud computing device, or may be distributed over multiple units, e.g. partly in a local unit and partly in a remote unit, such as a fog device and/or a cloud computing device. The fog device may be part of a network in a similar way as described above in connection with the first aspect.
Also, self-learning mechanisms may be used in exemplary embodiments. For example, as described in PCT publication WO 2022/122755 A1 in the name of the applicant, which is included herein by reference, a system with one or more edge devices being preferably arranged in the vicinity of each other, may comprise: at least one first sensor, e.g. a first image capturing means and/or a first light sensor, and at least one second sensor, e.g. a second image capturing means and/or a second light sensor, the first sensor being configured for obtaining first environmental data related to an event in the vicinity of the one or more edge devices, e.g. a first captured sequence of images and/or first light data, and the second sensor being configured for obtaining second environmental data related to an event in the vicinity of the one or more edge devices, e.g. a second captured sequence of images and/or second light data; a first processing means configured to process said first environmental data in accordance with a first set of rules to generate first processed data and a second processing means configured to process said second environmental data in accordance with a second set of rules to generate second processed data; and a control means configured to control the first and second processing means such that the second processed data is used to train the first processing means.
The skilled person will understand that the above-described self-learning mechanisms also apply to the embodiments of the first and second aspects of the invention.
Also, in exemplary embodiments, one or more improved image capturing means and/or sensor configuration parameters may be determined and set. For example, as described in PCT publication
WO 2022/189601 A1 in the name of the applicant, which is included herein by reference, the sensor may be set up according to at least one configuration parameter, and there may be provided in the control means or in the edge device a processing means configured to process input data in accordance with a model to derive the at least one configuration parameter of the sensor. The control means may be configured to determine an updated model over time and to reset the processing means so as to process input data in accordance with the updated model.
Also, in exemplary embodiments, the processing means may be configured to use one or more processing models and/or one or more image capturing means and/or sensor configuration parameters, which are improved over time. For example, as described in PCT application
PCT/EP2022/071401 in the name of the applicant, which is included herein by reference, there may be provided an edge device configuration system for setting an initial configuration of one or more edge devices and/or for updating a configuration of one or more edge devices over time, comprising a data model database storing a plurality of data models, a data model comprising one or more processing models for one or more processing means of one or more edge devices and/or one or more configuration parameters for one or more sensors of one or more edge devices; and the control means may be configured to select a data model from the data model database for an edge device based on one or more environmental parameters of that edge device, and to configure the edge device in accordance with the selected data model.
Also, in exemplary embodiments, processing by the edge device and/or by the processing means may be done in an improved manner using data from multiple edge devices. For example, as described in PCT publication WO 2019/175435 A2 in the name of the applicant, which is included herein by reference, the edge device may comprise a communication unit configured to enable communication of data to and from communication units of other edge devices and/or to the control means, a processing unit configured to process the sensed data to produce first processed data. The first processed data of at least two edge devices may be further processed to produce second processed data.
According to another aspect there is provided a method for determining environmental characteristics of an area in the vicinity of a plurality of luminaires, comprising the following steps: capturing first data of a first area and capturing second data of a second area; determining by the processing means at least one map, based on the first and second data, each of the at least one map indicating environmental characteristics of an area comprising the first and the second area.
Preferably, the at least one map comprises at least one 3D-map, each 3D-map indicating environmental characteristics of ground areas and/or infrastructures areas of the area.
Optionally, the method further comprises the step of identifying sunlit and/or shadow portions in the first and second data, wherein said portions optionally include portions of a ground area and/or an infrastructure area of the first and second area.
Optionally, the method further comprises the steps of: determining which elements in the first and/or second area are permanent and which elements are non-permanent; and/or determining short-term, mid-term and/or long-term changes in the first and/or second area.
Preferably, the determining is done such that the at least one map includes the elements which are permanent and not the elements which are non-permanent; and/or the determining is done such that the at least one map includes the short-term, mid-term and/or long-term changes in the first and/or second area.
According to another aspect there is provided a method for updating data of an area in the vicinity of a plurality of luminaires, comprising the following steps: capturing data of an area; detecting with a processing means if a long-lasting change has occurred in the area based on a result of a processing of the data.
Preferably, the method further comprises transmitting information related to a result of the detecting, wherein the information optionally comprises one or more data captures captured before and/or one or more data captures captured after the detected long-lasting change and/or information related thereto, to a remote device.
Preferably, the data comprises images.
Preferably, the detecting comprises the comparing of pixels in consecutive images captured at sequential moments in time and the detecting that said pixels have changed when said pixel change lasts for at least a first predetermined number of consecutive images.
Optionally the method further comprises the step of averaging pixel values of a predetermined amount of consecutive images captured at sequential moments in time so as to obtain a static image; and wherein the detecting comprises the comparing of pixels in consecutive static images.
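By way of non-limiting illustration, the averaging and comparison steps above may be sketched as follows. The grayscale representation and the difference threshold are assumptions for illustration only.

```python
import numpy as np

def static_image(images):
    """Average a list of equally-sized grayscale frames captured at
    sequential moments in time; transient objects (e.g. passing
    vehicles) are suppressed, yielding a 'static' image of the area."""
    return np.mean(np.stack(images), axis=0)

def changed_pixels(static_a, static_b, diff_threshold=30):
    """Compare two consecutive static images; a pixel counts as
    changed when its value differs by more than diff_threshold
    (an illustrative value). A long-lasting change would then be
    one that persists over a predetermined number of such
    comparisons."""
    diff = np.abs(static_a.astype(int) - static_b.astype(int))
    return diff > diff_threshold
```

Because the averaging removes short-lived pixel changes, comparing static images rather than raw frames makes the detection less sensitive to momentary events such as a vehicle driving past.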
Optionally, the detecting comprises the detecting that an object has been added to or removed from the area; and optionally further comprises the determining of a class of the added or removed object.
Optionally, the step of transmitting is only performed if the added or removed object belongs to one or more predetermined classes.
According to another aspect there is provided a method for determining sunlight exposure information of an area in the vicinity of one or more edge devices, comprising the following steps: capturing a sequence of images of surfaces of the area over time, said sequence of images preferably covering at least a daylight period; identifying by the processing means sunlit and/or shadow portions of one or more surfaces in the captured sequence of images.
Optionally, the method further comprises the step of determining by the processing means at least one map of the area, preferably a sequence of maps over time, each of the at least one map indicating identified sunlit and/or shadow portions of the one or more surfaces.
Optionally, the method further comprises the step of determining by the processing means an amount of time during which a predetermined area in the vicinity of the one or more edge devices is exposed to direct sunlight.
Optionally, the method further comprises the steps of: receiving at the processing means weather information from a weather database and/or from one or more light sensors configured to measure sunlight; and determining an amount of direct and/or indirect sunlight during the daylight period to which a predetermined area in the vicinity of the one or more edge devices is exposed based on the identified sunlit (L) and/or shadow (S) portions of the one or more surfaces and the received weather information.
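By way of non-limiting illustration, the determination of the exposure duration from a sequence of sunlit maps may be sketched as follows. The fixed capture interval, the region encoding and the simple cloud-cover weighting are assumptions for illustration only, not a validated irradiance model.

```python
import numpy as np

def sunlight_exposure_minutes(maps, region, interval_minutes, cloud_cover=0.0):
    """Estimate the minutes of direct sunlight for a predetermined
    region of interest from a daylight sequence of binary sunlit maps
    (1 = sunlit) captured at a fixed interval. The cloud_cover value
    in [0, 1], e.g. from a weather database or light sensor, discounts
    the estimate with a simple illustrative linear weighting."""
    r0, r1, c0, c1 = region             # row/column bounds of the region
    sunlit_count = 0
    for m in maps:
        window = m[r0:r1, c0:c1]
        if window.mean() > 0.5:         # majority of the region is sunlit
            sunlit_count += 1
    return sunlit_count * interval_minutes * (1.0 - cloud_cover)
```

Such an estimate could serve the use cases mentioned above, e.g. predicting the yield of a solar panel installed in the region of interest.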
The skilled person will understand that the hereinabove described technical considerations and advantages for the system embodiments of the first, second and third aspects also apply to the corresponding method embodiments of these aspects, mutatis mutandis.
BRIEF DESCRIPTION OF THE FIGURES
These and other aspects of the present invention will now be described in more detail, with reference to the appended drawings showing exemplary embodiments of the invention. Like numbers refer to like features throughout the drawings.
Figure 1 illustrates an embodiment of a system for determining environmental characteristics according to the first aspect;
Figures 2a and 2b illustrate how an embodiment of such a system according to the first aspect may be used to determine that a bus stop on the side of a road has been moved;
Figures 2c and 2d illustrate how an embodiment of such a system according to the first aspect may be used for the monitoring of parking spaces on the side of a road;
Figure 3 illustrates a capture timing of the sequence of captured images for an embodiment of the system according to the first aspect;
Figure 4 illustrates an embodiment of a system for updating images of an area according to the second aspect;
Figures 5a and 5b illustrate an embodiment of a system for updating images of a road according to the second aspect and how a static image may be obtained from the captured sequence of images;
Figure 6 illustrates an embodiment of a system for determining sunlight exposure information according to the third aspect; and
Figures 7a and 7b illustrate an embodiment of such a system according to the third aspect being used for determining the sunlight exposure of an urban square.
Figure 8 is a schematic view of an exemplary architecture which may be used in embodiments of the invention.
DESCRIPTION OF THE EMBODIMENTS
Figure 1 illustrates an embodiment of a system for determining environmental characteristics according to the first aspect.
The system for determining environmental characteristics comprises a plurality of luminaires, comprising a first luminaire 100a and a second luminaire 100b. The system further comprises at least one first image capturing means 110a associated with the first luminaire 100a, and at least one second image capturing means 110b associated with the second luminaire 100b.
Here, the capturing means 110a and 110b are image capturing means. However, other alternative or additional types of capturing means may be used, such as e.g. a LIDAR sensor, a sound sensor, or a radar. In the following, however, the embodiments of the system will be particularized to capturing means being image capturing means, such that the sequences of data are sequences of images and such that a data capture corresponds to an image capture. The skilled person will understand that this does not limit the invention to such types of capturing means.
The system may comprise additional luminaires and additional image capturing means associated with the additional luminaires, e.g. a third luminaire 100c and a third image capturing means 110c associated with the third luminaire 100c. The skilled person will understand that the number of luminaires and image capturing means associated with said luminaires is not limited to those examples.
The at least one first image capturing means 110a is configured to capture a first sequence of images over time of a first area 200a in the vicinity of the first luminaire 100a. Similarly, the at least one second image capturing means 110b is configured to capture a second sequence of images over time of a second area 200b in the vicinity of the second luminaire 100b. In some cases one image capturing means is provided for a plurality of luminaires.
Although the first area 200a and the second area 200b are shown to overlap on Fig. 1, the skilled person will understand that they may also be distinct (no overlap) or similar (even a full overlap is possible). As also visible from Fig. 1, the first area 200a may comprise the first luminaire 100a and the first image capturing means 110a while the second area 200b may comprise neither the second luminaire 100b nor the second image capturing means 110b. The skilled person will understand however that the areas in which sequences of images are captured may well comprise other luminaires than the ones associated with the image capturing means. For example, as illustrated on Fig. 1, the second area may comprise the third luminaire and the third capturing means.
In addition, the system comprises a central processing means 300 configured to receive the first and second captured sequences of images. The central processing means 300 is further configured to determine at least one map, typically a sequence of maps over time, based on the received first and second captured sequences of images. Each map indicates environmental characteristics of an area comprising the first area 200a and the second area 200b at a moment in time.
Figures 2a and 2b and figures 2c and 2d illustrate how an embodiment of such a system according to the first aspect may be used to determine that a bus stop on the side of a road has been moved or to determine that parked cars have left their parking spots and/or that new cars are filling parking spots, respectively.
The system for determining environmental characteristics may be used to determine environmental characteristics of a road and its vicinity. For example, the system may comprise a plurality of luminaires on the side of a road, comprising a first luminaire 100a and a second luminaire 100b. In
Figs 2a-2d, the luminaires each comprise a pole and a luminaire head. However, the luminaires may comprise a plurality of pole modules arranged one above the other along a longitudinal axis.
The system further comprises at least one first image capturing means 110a associated with the first luminaire 100a and at least one second image capturing means 110b associated with the second luminaire 100b. As illustrated in Figs 2a-2d, the image capturing means may be configured to be arranged on said pole, e.g. on a platform attached to the pole. The skilled person will understand that other arrangements are possible, e.g. the at least one first image capturing means 110a and/or the at least one second image capturing means 110b may be configured to be arranged in or on a pole, or in or on a luminaire head, or in or on a pole module of a plurality of pole modules, in the case of a luminaire comprising a plurality of pole modules. The skilled person will understand that, in other embodiments, the at least one first image capturing means 110a and/or the at least one second image capturing means 110b may correspond to components of the system that are physically separate from their associated first luminaire 100a and second luminaire 100b, said components being configured to be arranged in the vicinity of said associated first luminaire 100a and second luminaire 100b.
The first image capturing means 110a is configured to capture a first sequence of images over time of a first area 200a in the vicinity of the first luminaire 100a and the second image capturing means 110b is configured to capture a second sequence of images over time of a second area 200b in the vicinity of the second luminaire 100b.
For example, the first area 200a and the second area 200b, which may overlap, as illustrated, but may well be distinct, may comprise a surface of the road, static structures of the surface (e.g. a pedestrian crossing), dynamic objects on the surface (e.g. vehicles using the road), static structures on the sides of the road (e.g. a bus stop or shelter, road lights, other luminaires), etc.
In addition, the system comprises a central processing means 300 configured to receive the first and second captured sequences of images. The central processing means 300 may in some embodiments comprise a remote server or a cloud server. The skilled person will understand that, in other embodiments, the central processing means 300 may also be included in a fog device or in a luminaire.
The central processing means 300 is further configured to determine at least one map, e.g. a sequence of maps over time, based on the received first and second captured sequence of images. Thus, the captured sequence of images may include images captured at sequential moments in time. And the sequence of maps may comprise maps at said sequential moments in time, or at some of said sequential moments in time. Each map indicates environmental characteristics of an area comprising the first area 200a and the second area 200b at a moment in time. For example, the map may indicate the position of static structures or objects on the surface of the road and/or its vicinity, e.g. the position of the pedestrian crossing, of the bus shelter, of the luminaires, etc. The map may also indicate the state of the road surface, the number of cars on the road, the number of parked cars, etc.
The map may be a 3D-map. Each map may then indicate environmental characteristics of ground areas and/or infrastructures areas of the area.
The first luminaire 100a and/or the second luminaire 100b may comprise a local processing means (not shown in Figs 2a-2d) configured to detect if an object O has been added to or removed from the area. Alternatively, this detection may be done by a remote device based on information received from the first luminaire 100a and/or the second luminaire 100b. This situation is illustrated in Figs 2a and 2b, where a bus shelter O is moved further upstream on the road. These changes may be due to e.g. changes in transport network planning but may not have been communicated to the digital mapping application. As illustrated in Figs 2c and 2d, other changes, such as a parked vehicle O' leaving its parking spot or an unoccupied parking spot O'' being occupied by a new vehicle, may also be detected.
The detection may be based on the first and/or second captured sequence of images. The processing means may also be configured to select from the first and/or second sequence of images one or more images with the added or removed object O. For example, in the situation illustrated in Figs 2a and 2b, the selected first and/or second sequence of images may comprise an earlier image of the situation prior to the change, i.e. with the bus shelter downstream from the pedestrian crossing, and a later image of the situation after the change, i.e. with the bus shelter moved upstream from the pedestrian crossing, such that the earlier and later image illustrate the change. In the example illustrated in Figs 2c and 2d, the selected first and/or second sequences of images may comprise a first image of the parked car O', and a second image of the empty spot after the parked car O' has left, or conversely for O''.
The first luminaire 100a and/or the second 100b luminaire may comprise a transmitting means configured to transmit information related to the detecting (such as the type of object that was added or removed) and/or the selected one or more images and/or information based thereon to the central processing means 300. In this way, changes in the area such as the ones discussed above may be detected locally and transmitted to the central processing means 300. For example, after the local processing means detects that change, one or more images of the new situation may be selected by the local processing means and transmitted to the central processing means, such that, for example, a digital mapping application or a parking application may be updated.
The local processing means may be further configured to determine a class of the added or removed object. The local processing means may only select one or more images for transmission if the added or removed object belongs to one or more predetermined classes, such that the transmitting means transmits the one or more images and/or information based thereon only if these have been selected for transmission. The predetermined classes will depend on the application being updated. For example, in the case of a digital mapping application, the one or more predetermined classes may preferably be classes for one or more static objects, preferably infrastructure elements, such as a bus shelter or a pedestrian crossing. On the other hand, for a parking application, the predetermined classes may be vehicles.
In this way, undesired transmitting from the local processing means may be avoided, especially for changes of objects that are not static, which occur more often. For example, knowing that a bus shelter, which is static, has been moved further down on the road may be valuable to update a digital mapping application. Indeed, such changes do not occur often and are not expected to happen soon after the shelter has been moved. On the contrary, knowing that a vehicle has just passed by on the road occurs more often and does not necessitate the selecting and transmitting of one or more images as this may consume processing power and/or bandwidth in an undesired manner.
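By way of non-limiting illustration, the class-based filtering described above may be sketched as follows. The class names and the per-application class sets are illustrative examples derived from the text, not a prescribed taxonomy.

```python
def should_transmit(object_class, application):
    """Decide whether images of an added/removed object should be
    selected for transmission, depending on the classes of interest
    of the application being updated. The class sets below are
    illustrative: static infrastructure elements for a digital
    mapping application, vehicles for a parking application."""
    classes_of_interest = {
        "digital_mapping": {"bus_shelter", "pedestrian_crossing"},
        "parking": {"car", "truck", "motorcycle"},
    }
    return object_class in classes_of_interest.get(application, set())
```

With such a filter, a passing or parked vehicle does not trigger a transmission to a digital mapping application, saving the processing power and bandwidth mentioned above.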
Although only objects have been discussed up to now, the central processing means 300 may also be configured to identify sunlit and/or shadow portions in the first and/or second sequences of images. Said portions may include portions of a ground area and/or an infrastructure area of the first area 200a and/or the second area 200b. The map may indicate whether the road surface is lit by the sun or not based on the identified sunlit and/or shadow portions.
Figure 3 illustrates a capture timing of the sequence of captured data, e.g. images, for an embodiment of the system according to the first aspect.
The first and/or second captured sequences of data over time may comprise a first, short-term, sequence of data covering a first predetermined period of time dt1, and a second, mid-term, sequence of data covering a second predetermined period of time dt2. Optionally, a third, long-term, sequence of data covering a third predetermined period of time dt3 may also be comprised. The first predetermined period of time dt1 may be shorter than the second dt2 and the optional third dt3 predetermined periods of time, and the second predetermined period of time dt2 may be shorter than the third predetermined period of time dt3.
In Fig. 3, each of the bars represents a data capture, wherein the large bars with a solid diamond grid are captured data of the third captured sequence, the medium striped bars are captured data of the second captured sequence, and the small solid bars are captured data of the first captured sequence. As visible from Fig. 3, some captured data of different sequences may be taken at the same time. The skilled person will understand that, instead of performing two separate captures, a data capture may be performed only once and be comprised in several captured sequences.
The captured sequences of data over time may be sequences of data at consecutive time intervals.
The first predetermined period of time dt1 may be less than 2 days, while the first, short-term, captured sequence of data over time may comprise preferably at least 2 data captures per day. More preferably, the first, short-term, captured sequence of data over time may comprise at least 10 data captures per day, and preferably less than 24 data captures per day. For example, as illustrated in Fig. 3, dt1 may equal 1 day and 12 data captures may be performed per day.
The second predetermined period of time dt2 may be less than 2 months, while the second, mid-term, captured sequence of data over time may comprise preferably at least 2 data captures per month. More preferably, the second, mid-term, captured sequence of data over time may comprise at least 4 data captures per month, preferably less than 100 data captures per month. For example, as illustrated in Fig. 3, dt2 may equal 1 month and 15 data captures may be performed per month.
The third predetermined period of time dt3 may be less than 18 months, while the third, long-term, captured sequence of data over time may comprise preferably at least 2 data captures per year. More preferably, the third, long-term, captured sequence of data over time may comprise at least 12 data captures per year, preferably less than 100 data captures per year. For example, as illustrated in Fig. 3, dt3 may equal 6 months and 4 data captures may be performed per half year.
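By way of non-limiting illustration, the sharing of a single capture between several sequences, as suggested by Fig. 3, may be sketched as follows. The period values are illustrative only and would in practice be chosen per application.

```python
def capture_sequences(t_minutes, periods_minutes):
    """Return which capture sequences (e.g. short-term, mid-term,
    long-term) a capture taken at time t (in minutes from a common
    origin) belongs to, assuming each sequence captures at a fixed
    period. A single physical capture may thus be comprised in
    several sequences rather than being duplicated."""
    return [name for name, period in periods_minutes.items()
            if t_minutes % period == 0]

# Illustrative periods: 12 captures/day (short-term), 15/month (mid-term).
example_periods = {"short_term": 120, "mid_term": 2880}
```

A scheduler built on such a function captures only once at instants where the sequences coincide, which matches the overlapping bars shown in Fig. 3.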
The predetermined periods of time dt1, dt2 and dt3 may be dependent on the application. For example, for a parking application, the predetermined period of time may be desirably short, since several changes in the occupancy of a single parking space may occur in the course of e.g. an hour.
On the contrary, for a digital mapping application such as OpenStreetMap or Google Maps™, the predetermined periods may be relatively longer, since infrastructure elements tend to change relatively rarely. Indeed, a pedestrian crossing or a bus shelter tend to remain at their location for long periods of time, e.g. several months or years. It may therefore not be useful to capture images several times a day for this type of applications.
The first, short-term, sequence, the second, mid-term, sequence, and optionally the third, long-term, sequence of the first and/or second captured sequences of data may be used to determine which elements in the first area 200a and/or the second area 200b are permanent and which elements are non-permanent. For example, in the situation illustrated in Figs 2a-2d, since the pedestrian crossing should appear on most (if not all) captured data, such as captured images, the pedestrian crossing may be determined to be a permanent element of these areas. On the contrary, a parked car will most often not be visible on all captured data, such as captured images (e.g. the vehicle may have been moved elsewhere and/or replaced by a different vehicle), and may therefore be determined as non-permanent. The permanent character of an element may also be decided based on a predetermined threshold, for example, if this element is seen for more than a month.
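By way of non-limiting illustration, the permanence criterion above may be sketched as follows. The presence-fraction threshold of 0.9 is an assumption for illustration; the text equally allows a duration-based criterion (e.g. seen for more than a month).

```python
def is_permanent(presence_flags, min_fraction=0.9):
    """Classify an element as permanent when it is detected in at
    least min_fraction of the captures over the observation window.
    presence_flags is a list of booleans, one per capture, indicating
    whether the element (e.g. a pedestrian crossing) was detected.
    The 0.9 fraction is an illustrative threshold only."""
    if not presence_flags:
        return False
    return sum(presence_flags) / len(presence_flags) >= min_fraction
```

Under this sketch, a pedestrian crossing detected in nearly every capture is classified as permanent and kept in the generated map, whereas a parked car seen in only some captures is not.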
The central processing means 300 may be configured to generate a map, such as a 3D map, including the elements which are permanent and not the elements which are non-permanent.
The first, short-term, sequence, the second, mid-term, sequence, and optionally the third, long-term, sequence of the first and/or second captured sequences of data may be used to determine information about recurrent or periodic changes. Recurrent and/or periodic changes such as the day and night cycle or the yearly seasonal cycle may have a significant impact on the sequence of data. For example, during the night, the area may be illuminated with artificial light, which always casts the same shadow, while during the day, the sun may cast shadows that move as the day progresses. Also, as the year progresses, the height of the sun varies, and the length of the day and the size of the shadows varies with it. Other recurrent or periodic changes, such as the fact that some parking spaces may be more occupied on the weekends than during the week, may also be an example of such changes. Valuable information may be obtained when including such information in the generation of the maps.
The central processing means 300 may be configured to generate the sequence of 3D maps over time using the information about recurrent or periodic changes.
Although the plurality of luminaires 100a, 100b is represented in Figs 2a-2d as comprising a pole and a luminaire head and the at least one corresponding image capturing means 110a, 110b is illustrated as being arranged on the pole, the at least one image capturing means 110a, 110b may be arranged in the pole instead. Alternatively, the plurality of luminaires 100a, 100b may comprise a plurality of pole modules arranged one above the other along a longitudinal axis instead. The at least one corresponding image capturing means 110a, 110b may then be configured to be arranged in or on a pole module of said plurality of pole modules or in or on the luminaire head. The skilled person will understand that, in other embodiments, the at least one first image capturing means 110a and/or the at least one second image capturing means 110b may correspond to components of the system that are physically separate from their associated first luminaire 100a and second luminaire 100b, said components being configured to be arranged in the vicinity of said associated first luminaire 100a and second luminaire 100b.
Figure 4 illustrates an embodiment of a system for updating data, such as images, of an area according to the second aspect.
The system comprises one or more luminaires, e.g. a first luminaire 100a, a second luminaire 100b and/or a third luminaire 100c, which are provided with one or more image capturing means, e.g. a first image capturing means 110a, a second image capturing means 110b and/or a third image capturing means 110c, which are configured to capture a sequence of images over time of an area 200.
The area 200 may comprise a plurality of sub-areas corresponding to the sub-areas 201a, 201b and 201c, from which the image capturing means captures sequences of images thereof. Although the first sub-area 201a and the second sub-area 201b are shown to overlap in Fig. 4, the skilled person will understand that they may be distinct or identical. As also visible from Fig. 4, the first sub-area 201a may comprise the first luminaire 100a and the first image capturing means 110a and the second sub-area 201b may comprise the second luminaire 100b and the second image capturing means 110b.
The skilled person will understand however that the sub-areas in which sequences of images are captured may well not comprise the luminaires themselves or may comprise other luminaires than the ones associated with the image capturing means. For example, as illustrated in Fig. 4, the third sub-area 201c may comprise the second luminaire and the second image capturing means.
The one or more luminaires are also provided with a processing means, e.g. a first local processing means 120a and/or a fog processing means 120bc, which is configured to detect if a long-lasting change has occurred in the area 200 based on the captured sequence of images. The processing means is also configured to select from the sequence of images one or more images captured before and after the detected long-lasting change.
In addition, the one or more luminaires are provided with a transmitting means configured to transmit information related to the selected one or more images to a remote server 350.
The one or more luminaires may be arranged in a mesh network as explained above in the summary part of the invention. For example, the one or more luminaires may be configured to transmit captured data or edge-processed data based on the captured data to their associated processing means 120bc and optionally receive control data from their associated processing means 120bc using a wireless personal area network (WPAN), preferably as defined in the IEEE 802.15.4 standard. Thus, the communication between the one or more luminaires and their associated processing means 120bc may be based on a short-range protocol such as IEEE 802.15.4 (e.g. Zigbee). The network may be managed by the fog device or by a separate segment controller.
The one or more luminaires may be provided with a short-range transmitting means such that the luminaires may transmit information to the local processing means. For example, the second luminaire 100b may be provided with a short-range transmitting means 140b and the third luminaire 100c may be provided with a short-range transmitting means 140c, such that the luminaires 100b, 100c can transmit information such as the selected one or more captured images to the fog processing means 120bc.
The one or more luminaires may also be provided with a long-range transmitting means configured to transmit information related to the selected one or more images to a remote server 350. For example, the first luminaire 100a may be provided with a long-range transmitting means 130a and the second 100b and third 100c luminaires may be provided with a long-range transmitting means 130bc. The information related to the selected one or more images may comprise the selected one or more images, and/or the type of change or the position of the change within the area 200.
Figure 5a illustrates an embodiment of a system for updating images in the vicinity of a road according to the second aspect.
The system comprises one or more luminaires, e.g. a first luminaire (not illustrated), a second luminaire 100b and/or a third luminaire 100c, which are provided with one or more image capturing means, e.g. a first image capturing means (not illustrated), a second image capturing means 110b and/or a third image capturing means 110c, which are configured to capture a sequence of images over time of an area.
The area may comprise a plurality of sub-areas corresponding to the sub-areas from which the image capturing means captures sequences of images thereof. For example, the scene may correspond to the captured image from the first image capturing means, in which case the first sub-area comprises all visible elements of Fig. 5a.
Although not represented, the embodiment of Fig. 5a is similar to the one of Fig. 4, wherein the luminaires are also provided with a processing unit and a transmitting means to transmit to a remote server.
As visible from Fig. 5a, the sub-area corresponding to the first image capturing means may comprise a road surface, a plurality of vehicles, a plurality of unoccupied parking spots, and a plurality of parking spots occupied by cars, pedestrians, luminaires, a bus shelter and a bus stop, house facades, and a bar.
Consecutive static images may be obtained by averaging pixel values of a predetermined amount of consecutive images from the captured sequence of images. An example of a static image corresponding to the captured image of Fig. 5a and the way in which it is obtained from a captured sequence of images is illustrated on Fig. 5b.
In Fig. 5b, three captured images corresponding to the captured image of Fig. 5a are illustrated.
These three images have been captured at different periods of time.
The captured sequence of images over time may cover a first predetermined period of time. The first predetermined period of time may be at least 1 week, preferably at least 1 month, more preferably at least 3 months. The images of the captured sequence of images may also be captured at consecutive time intervals. Preferably, at least 2 images may be captured per day, more preferably at least 10 images per day.
As discussed above, the predetermined period of time and/or the time interval may be dependent on the application. The one or more image capturing means may be further configured to adjust a duration between consecutive images of the sequence of images based on sensed and/or received data related to the area. The one or more image capturing means may be alternatively or additionally configured to be triggered by a remote user to capture images of the area, such that the updating may also be piloted manually, e.g. based on reviews stating that the image is not up to date, or directly triggered by a remote user.
For example, the captured sequence of images may comprise three captured images taken at a 1-month interval, i.e. the first predetermined period of time being 3 months. By taking the average of pixel values of these three consecutive images from the captured sequence of images, a static image may be obtained. The static image is illustrated below the three captured images. As visible from
Fig. 5b, the static image may comprise elements that have been static during the predetermined period of time. Indeed, by taking the average of the pixel values of images, which may sometimes be referred to as denoising, changes that occur in only a small number of images, and therefore contribute only a small amount to the average, may not appear in the static images obtained in this way. This guarantees that only long-lasting changes remain in the static images.
For example, the static image illustrated in Fig. 5b may comprise the road surface, the plurality of parking spots, the luminaires, the bus shelter and the bus stop, the house facades, and the bar but may not comprise the cars nor the pedestrians, since these dynamic elements have been filtered out by averaging the pixel values.
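By way of illustration only, and not as part of the claimed subject-matter, the pixel-averaging step described above may be sketched as follows; the frame size, pixel values and window length are illustrative assumptions:

```python
# Minimal sketch: a "static image" is obtained by averaging pixel values over
# a window of consecutive grayscale frames, so that short-lived changes
# (e.g. a passing vehicle) are averaged away while static elements remain.

def static_image(frames):
    """Average a sequence of equally sized grayscale frames pixel by pixel."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

# Hypothetical 2x2 frames: background is 100 everywhere; a bright transient
# object (value 255) appears at pixel (0, 1) in one frame only.
frames = [
    [[100, 100], [100, 100]],
    [[100, 255], [100, 100]],   # transient object at pixel (0, 1)
    [[100, 100], [100, 100]],
]
avg = static_image(frames)
# The transient object contributes only one third of the average at (0, 1),
# i.e. (100 + 255 + 100) / 3, while static pixels stay at 100.
```

In a deployed system the window would span many frames over days or months, so a one-frame transient contributes a negligible fraction of the average.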
A long-lasting change may be detected by comparing pixels in consecutive images from the captured sequence of images and detecting that said pixels have changed, with said change lasting for at least a first predetermined number of consecutive images. Preferably, the long-lasting change may also be detected by comparing pixels in consecutive static images, as those contain infrastructure elements and/or static elements only.
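Purely as a non-limiting sketch of this persistence test, the following tracks a single grayscale pixel over time; the change threshold and the required number of consecutive frames are illustrative assumptions:

```python
# Hedged sketch: a change at a pixel counts as "long-lasting" only if the new
# value persists for at least `min_frames` consecutive captures, as described
# above. Values are illustrative.

def long_lasting_change(series, threshold=50, min_frames=3):
    """Return True if the pixel value series departs from its initial value
    by more than `threshold` for at least `min_frames` consecutive samples."""
    baseline = series[0]
    run = 0
    for value in series[1:]:
        if abs(value - baseline) > threshold:
            run += 1
            if run >= min_frames:
                return True
        else:
            run = 0  # change did not persist; reset the counter
    return False

# A passing vehicle changes the pixel for one frame only: not long-lasting.
transient = [100, 100, 230, 100, 100, 100]
# A removed bus shelter changes it for many consecutive frames: long-lasting.
persistent = [100, 100, 230, 235, 232, 231]
```

The same test applied to static images, as suggested above, would further suppress spurious detections caused by moving objects.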
The processing unit may be further configured to detect that an object has been added to or removed from the area as the long-lasting change. For example, a bus shelter may have been removed from the area and moved to a second area due to changes in the city planning or in the public transport network. The processing unit may therefore detect that an object, e.g. the bus shelter, has been removed from the first area. This may be determined as a long-lasting change, since the bus shelter may not appear on a significant number of images captured by the first image capturing means 110a after the change. On the other hand, the processing unit may also detect that an object, e.g. the bus shelter, has been added to a second area. This also may be determined as a long-lasting change, since the bus shelter may appear on a significant number of images captured by the second image capturing means 110b after the change.
The processing unit may be further configured to determine a class of the added or removed object.
One or more images may only be selected for transmission by the processing unit if the added or removed object belongs to one or more predetermined classes.
Some changes may be long-lasting but may not be considered as generating the need to update images of the area. For example, an added object, such as a car parked in the area or a road work zone, may be detected in the area and may remain there for days or weeks. Such a change may therefore be considered as a long-lasting change. However, such changes may not require that the images of a digital mapping application are updated to reflect them. In contrast, changes such as the removal of a bus shelter, or the change of the bar into a shop, may be considered as generating a need for an update, in which case one or more images may be selected for transmission to update the data of a digital mapping application.
The information related to the selected one or more images may further comprise the class of the added or removed object. The one or more predetermined classes may be classes for one or more static objects, preferably infrastructure elements, e.g. bus shelter, bar name, facade of some house.
Additionally or alternatively, the information related to the selected one or more images may comprise the position of the long-lasting change. The determination whether the change calls for an update or not may then be left for the remote server to make.
Although the luminaires illustrated in Figs 5a-5b each comprise a pole and a luminaire head, the skilled person will understand that the luminaire may alternatively comprise a plurality of pole modules arranged one above the other along a longitudinal axis. Also, while the one or more image capturing means are illustrated as being arranged on the pole, the skilled person will understand that the one or more image capturing means may be arranged in the pole or in or on a pole module of said plurality of pole modules. The skilled person will understand that, in other embodiments, the one or more image capturing means may correspond to components of the system that are physically separate from the luminaire, said components being configured to be arranged at a vicinity of said luminaire.
Figure 6 illustrates an embodiment of a system for determining sunlight exposure information according to the third aspect.
The system comprises one or more edge devices, e.g. a first edge device 100a, a second edge device 100b, a third edge device 100c, which are provided with one or more image capturing means, e.g. a first image capturing means 110a, a second image capturing means 110b and a third image capturing means 110c, which are configured to capture a sequence of images of surfaces of an area 200 over time. The sequence of images preferably covers at least a daylight period.
The area 200 may comprise a plurality of sub-areas corresponding to the sub-areas 201a, 201b and 201c, from which the image capturing means captures sequences of images thereof. Although the first sub-area 201a and the second sub-area 201b are shown to overlap in Fig. 6, the skilled person will understand that they may be distinct. As also visible from Fig. 6, the first sub-area 201a may comprise the first edge device 100a and the first image capturing means 110a and the second sub-area 201b may comprise the second edge device 100b and the second image capturing means 110b.
The skilled person will understand however that the sub-areas in which sequences of images are captured may well not comprise the edge devices themselves or may comprise other edge devices than the ones associated with the image capturing means. For example, as illustrated in Fig. 6, the third sub-area 201c may comprise the second edge device 100b and the second image capturing means 110b.
In addition, the system comprises a processing means 120 configured to receive the captured sequence of images. The processing means 120 is also configured to identify sunlit and/or shadow portions of the surfaces in the captured sequence of images.
The processing means 120 may be further configured to receive sunlight information from a weather database 160, and to determine an amount of direct and/or indirect sunlight during the daylight period to which a predetermined area in the vicinity of the one or more edge devices is exposed based on the identified sunlit and/or shadow portions of surfaces and the received sunlight information.
The one or more edge devices 100a, 100b, 100c may be further associated with one or more light sensors, e.g. with a first light sensor 170a and a second light sensor 170b, configured to measure sunlight. In addition, the processing means 120 may be further configured to determine an amount of direct and/or indirect sunlight during the daylight period to which a predetermined area in the vicinity of the one or more edge devices 100a, 100b, 100c is exposed. The determination may be based on the identified sunlit and/or shadow portions of surfaces and the measured sunlight.
The processing means 120 may be further configured to communicate information relating to the sunlit and/or shadow portions of the surface to a server 350. In this way, further processing of the information may be carried out at the server 350 side.
The information relating to the sunlit and/or shadow portions of the surface may be in the form of a binary map, such that the bandwidth for the communication of the information relating to the sunlit and/or shadow portions of the surface may be reduced.
Although not illustrated in Fig. 6, the processing means 120 may be included in one of the edge devices 100a, 100b, 100c. In this way, the processing may be carried out locally, which reduces the amount of data that needs to be exchanged.
The edge devices 100a, 100b, 100c may be included in one or more luminaires, and/or may correspond to luminaires.
Figures 7a and 7b illustrate an embodiment of such a system according to the third aspect being used for determining the sunlight exposure of an urban square.
The sequence of images preferably covers at least a daylight period. Figure 7a illustrates the urban square early in the morning, while Fig. 7b illustrates the urban square later in the day.
As illustrated, the square may comprise a plurality of luminaires, which may comprise a first edge device 100a and a second edge device 100b (included in one or more luminaires, or being the luminaires themselves), trees, terraces, a road and vehicles parked and/or driving on said road. All these elements cast shadows, which create shadow portions S of the surfaces of the urban square, e.g. on the ground surface, but also on the facades of the buildings surrounding the square. Other portions of the surfaces in the area are lit by the sun and are thus sunlit portions L of the surfaces of the urban square.
As discussed above, the processing means 120 is configured to identify sunlit and/or shadow portions of the surfaces in the captured sequence of images. The processing means 120 may also be configured to determine a sequence of maps of the area over time, with each map indicating identified sunlit L and shadow S portions of the surfaces. Additionally or alternatively, the processing means 120 may be configured to determine an amount of time during which a predetermined area in the vicinity of the one or more edge devices 100a, 100b is exposed to direct sunlight.
For example, said maps may be used by a digital mapping application to provide it with additional sunlight information, such as whether some parking spaces on the road are exposed to direct sunlight or not. Real estate businesses could better evaluate the price of accommodation depending on the available sunlight throughout the year. For example, the windows of the building at the back of the urban square do not receive direct solar light throughout the day, which could lower the price of apartments in said building (or increase it, e.g. in southern countries).
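As a non-limiting illustration of how an exposure duration may be derived from such a sequence of maps, the following treats one location as a time series of binary samples (1 = sunlit, 0 = shadow); the capture interval and the sample values are illustrative assumptions:

```python
# Hedged sketch: given a time-stamped sequence of binary sunlit/shadow maps,
# the direct-sunlight exposure of one map cell over the daylight period is
# simply the number of sunlit samples times the capture interval.

def exposure_hours(binary_series, interval_hours):
    """Total time (in hours) the cell was sunlit, given samples taken at a
    fixed interval of `interval_hours`."""
    return sum(binary_series) * interval_hours

# Hypothetical maps captured every hour over a 12-hour daylight period,
# reduced to the value of a single cell of interest (e.g. a parking space):
series = [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
# exposure_hours(series, 1.0) -> 7.0 hours of direct sunlight
```

Aggregating such per-cell durations over a year of captured sequences would yield the kind of seasonal sunlight profile referred to above.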
The processing means may be configured to identify a type of a surface, and to determine a sequence of 3D-maps over time based on the identified type of the surface. Each 3D-map may indicate identified sunlit and/or shadow portions of surfaces.
In this way, maps determined by the processing means may be more immersive. For example, a user of a digital mapping application desiring to enjoy a good meal at a terrace may be able to determine whether the terrace of a restaurant will be lit for a sufficient time thanks to such 3D maps. In the morning, said user may therefore rather choose a restaurant on the right side of the square, while during the evening, the left side may be a better option. Said maps may also be used by an interested buyer to evaluate whether some apartment has good sunshine exposure or not, which may influence its price and the interest of the buyer.
The processing means may be further configured to identify sunlit portions of a surface as portions with a luminosity above a first pixel luminosity threshold and/or shadow portions of a surface as portions with a luminosity below a second pixel luminosity threshold. The first luminosity threshold may be above or equal to the second luminosity threshold. In this way, the sunlit and/or shadow portions may be identified in a simple and computationally inexpensive way.
The processing means may be further configured to determine the first and/or the second pixel’s luminosity threshold based on optical properties of the surface. The amount of light that is reflected or diffused by the surfaces of the area that is imaged depends on the optical properties of the surface.
A wet road surface, or a surface of the urban square on which rain has accumulated, will reflect more light than surfaces covered with sand particles and tree leaves. By taking these properties into account and adjusting the thresholds based on these properties, the sunlit and/or shadow portions may be identified in an improved way.
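Purely by way of illustration, the two-threshold classification and the surface-dependent adjustment described above may be sketched as follows; the threshold values and the reflectance factor are illustrative assumptions:

```python
# Hedged sketch of the dual-threshold classification: pixels above the first
# threshold are sunlit ('L'), pixels below the second are shadow ('S'), and
# pixels in between are left unclassified. Values are illustrative.

def classify_pixel(luminosity, sunlit_threshold=180, shadow_threshold=80):
    """Classify a pixel luminosity as sunlit ('L'), shadow ('S') or None."""
    if luminosity >= sunlit_threshold:
        return "L"
    if luminosity <= shadow_threshold:
        return "S"
    return None  # between the two thresholds: undecided

def adjust_for_surface(threshold, reflectance):
    """Scale a threshold by a surface reflectance factor: a wet road
    (reflectance > 1) reflects more light, so thresholds are raised."""
    return threshold * reflectance
```

Setting the first threshold equal to the second, as permitted above, makes the classification binary with no undecided pixels.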
Although the plurality of luminaires is represented in Figs 7a and 7b as comprising a pole and a luminaire head, the plurality of luminaires may alternatively comprise a plurality of pole modules arranged one above the other along a longitudinal axis instead. The at least one corresponding image capturing means may be configured to be arranged in or on the pole or in or on the luminaire head or may be arranged in or on a pole module of said plurality of pole modules. The skilled person will understand that, in other embodiments, the at least one corresponding image capturing means may correspond to a component of the system that is physically separate from the luminaire, said component being configured to be arranged at a vicinity of said luminaire.
Figure 8 illustrates an embodiment of a system for determining environmental characteristics. The system comprises a plurality of capturing means 110a, 110b, 110c, etc. which may be associated with one or more luminaires. Figure 8 represents a general multi-agent system and illustrates how agents may interact in order to obtain relevant conclusions. The agents are encapsulated as Docker containers to represent models and processes that occur at the different stages. Starting from the top, figure 8 represents just a few of the possible capturing means 110a, 110b, 110c, etc. that can capture data from the environment, for instance: image capturing sensors, location sensors (e.g. GPS, active badges), pressure sensors (e.g. barometer, pressure gauges), wearable sensors (e.g. accelerometers, gyroscopes, magnetometers), vital sign processing devices (heart rate, temperature), motion sensors (e.g. radar gun, speedometer, mercury switches, tachometer). The sensed data is perceived by sensory-specific agents (typically part of a local processing means) 120a, 120b, 120c, whose responsibility is pre-processing the sensed data, either through validation, standardization, format conversion or others. Then, they redirect data to agents 310a, 310b, 310c (which may be part of a central or a local processing means) with a specific know-how about the sensed data. Next, such knowledge base agents 310a, 310b, 310c take action in receiving the data and transforming it into knowledge capable of being valuable to a following information merging step. At this transformation stage, data may encompass actions from if-then-else rules or simple characteristics analysis all the way through complex machine learning or deep learning methodologies. These agents may operate under continuous training conditions by retaining information from other processing agents that can be useful to enhance their own capabilities.
After individual data examination, a knowledge merging agent 320 (typically part of a central processing means) may proceed in order to correctly join the processed information from the capturing means 110a, 110b, 110c, etc. and return a plausible final result. Actions conducted may include an initial data alignment step (through timestamp information, for instance) or consecutive fusion levels between more similar modalities. More specifically, the process of multi-sensor data fusion involves the automatic integration of information from different capturing means 110a, 110b, 110c, etc. to create a comprehensive and useful description of the desired targets. This technique converts raw data obtained from multiple capturing means 110a, 110b, 110c, etc. into a cohesive set of inferences, and its primary advantage is the ability to obtain information that may not be available from a single capturing means. Generally, fusion frameworks are based on feature-based and decision-based fusion principles. Feature-based fusion relies on a deep learning approach that automatically assigns higher weights to more relevant modalities during training. In contrast, decision-based fusion involves setting specific rules to prioritize more relevant modalities over others. To maintain communication between agents within the system, a broker 330 may be utilized, for example an Eclipse Mosquitto message broker. The broker 330 may be configured to implement the Message Queuing Telemetry Transport (MQTT) protocol, a standard messaging protocol for the Internet of Things (IoT) developed as a very lightweight publish/subscribe messaging transport for linking remote devices with a minimal code footprint and low network bandwidth, thus making it suitable for low power sensors or mobile devices.
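The decision-based fusion principle mentioned above, in which specific rules prioritize more relevant modalities over others, may be sketched as follows by way of illustration only; the modality names, priority values and detections are illustrative assumptions:

```python
# Hedged sketch of decision-based fusion: each modality reports a detection
# with a confidence, and a fixed rule prioritizes more trusted modalities,
# falling back on confidence to break ties. All values are illustrative.

PRIORITY = {"lidar": 3, "camera": 2, "radar": 1}  # higher = more trusted

def fuse_decisions(detections):
    """detections: list of (modality, label, confidence) tuples.
    Return the label reported by the highest-priority modality,
    breaking ties by confidence."""
    best = max(detections, key=lambda d: (PRIORITY.get(d[0], 0), d[2]))
    return best[1]

detections = [
    ("camera", "bus_shelter", 0.70),
    ("lidar", "bus_shelter", 0.90),
    ("radar", "vehicle", 0.60),
]
# fuse_decisions(detections) -> "bus_shelter" (lidar outranks radar)
```

Feature-based fusion, by contrast, would learn such weightings automatically during training rather than encoding them as fixed rules.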
Whilst the principles of the invention have been set out above in connection with specific embodiments, it is to be understood that this description is merely made by way of example and not as a limitation of the scope of protection which is determined by the appended claims.

Claims (62)

CONCLUSIESCONCLUSIONS 1. Een systeem voor het bepalen van omgevingskenmerken, het systeem omvattende: - een aantal lichtarmaturen omvattende een eerste lichtarmatuur (1002) en een tweede lichtarmatuar (100b), - ten minste één eerste opnamemiddel (1104) en ten minste één tweede opnamemiddel (110b), waarbij het ten minste één eerste opnamemiddel (1104) geassocieerd is met het eerste lichtarmatuur (1004) en geconfigureerd is om eerste gegevens van een eerste gebied (2002) in de nabijheid van het eerste lichtarmatuur (1002) op te nemen, waarbij het ten minste één tweede opnamemiddel (110b) geassocieerd is met het tweede lichtarmatuur (100b) en geconfigureerd is om tweede gegevens van een tweede gebied (200b) in de nabijheid van het tweede lichtarmatuur (100b) op te nemen; en - een centraal verwerkingsmiddel (300) dat geconfigureerd is om, op basis van de eerste en tweede gegevens, ten minste één kaart te bepalen, waarbij elke van de ten minste ene kaart omgevingskenmerken van een gebied omvattende het eerste en het tweede gebied aangeeft.1. A system for determining environmental characteristics, the system comprising: - a plurality of light fixtures including a first light fixture (1002) and a second light fixture (100b), - at least one first recording means (1104) and at least one second recording means (110b), the at least one first recording means (1104) being associated with the first light fixture (1004) and being configured to record first data of a first area (2002) in the vicinity of the first light fixture (1002), the at least one second recording means (110b) being associated with the second light fixture (100b) and being configured to record second data of a second area (200b) in the vicinity of the second light fixture (100b); and - a central processing means (300) configured to determine, based on the first and second data, at least one map, each of the at least one map indicating environmental features of an area comprising the first and second areas. 2. 
Het systeem volgens conclusie 1, waarbij de eerste gegevens opeenvolgende eerste sets van gegevens op opeenvolgende momenten in de tijd omvat; waarbij de tweede gegevens opeenvolgende tweede sets van gegevens op opeenvolgende momenten in de tijd omvat; en waarbij het bepalen van de ten minste ene kaart het bepalen van opeenvolgende kaarten op basis van de eerste sets en de tweede sets omvat.2. The system of claim 1, wherein the first data comprises successive first sets of data at successive points in time; wherein the second data comprises successive second sets of data at successive points in time; and wherein determining the at least one map comprises determining successive maps based on the first sets and the second sets. 3. Het systeem volgens conclusie 1 of 2, waarbij de ten minste ene kaart ten minste één 3D- kaart is, waarbij elke kaart omgevingskenmerken van grondgebieden en/of infrastructuargebieden van het gebied aangeeft. 3. The system of claim 1 or 2, wherein the at least one map is at least one 3D map, each map indicating environmental features of territories and/or infrastructure areas of the area. 4, Het systeem volgens één der voorgaande conclusies, waarbij het ten minste ene eerste en/of tweede opnamemiddel (1104, 110b) een beeldopnamemiddel omvat en waarbij de eerste gegevens van het eerste gebied één of meer eerste beelden van het eerste gebied omvat en/of waarbij de tweede gegevens van het tweede gebied één of meer tweede beelden van het tweede gebied omvat.4. The system of any preceding claim, wherein the at least one first and/or second recording means (1104, 110b) comprises an image recording means and wherein the first data of the first area comprises one or more first images of the first area and/or wherein the second data of the second area comprises one or more second images of the second area. 5. 
Hetsysteem volgens één der voorgaande conclusies, waarbij het centrale verwerkingsmiddel (300) geconfigureerd is om zonverlichte (L) en/of schaduwrijke (S) delen in de eerste en tweede gegevens te identificeren, waarbij de delen optioneel delen van een grondgebied en/of een infrastructuurgebied van het eerste en tweede gebied omvatten.The system of any preceding claim, wherein the central processing means (300) is configured to identify sunlit (L) and/or shaded (S) portions in the first and second data, the portions optionally comprising portions of a territory and/or an infrastructure area of the first and second areas. 6. Het systeem volgens één der voorgaande conclusies, waarbij het eerste en/of tweede lichtarmatuur (1004, 100b) een lokaal verwerkingsmiddel omvat dat geconfigureerd is om op basis van de eerste en/of tweede gegevens te detecteren of een object (O) is toegevoegd aan of verwijderd uit het gebied; en waarbij het eerste en/of tweede lichtarmatuur (1002, 100b) een zendmiddel omvat dat geconfigureerd is om informatie op basis van de detectie naar het centrale verwerkingsmiddel (300) te zenden.6. The system of any preceding claim, wherein the first and/or second light fixture (1004, 100b) comprises a local processing means configured to detect, based on the first and/or second data, whether an object (O) has been added to or removed from the area; and wherein the first and/or second light fixture (1002, 100b) comprises a transmitting means configured to transmit information based on the detection to the central processing means (300). 7. Het systeem volgens één der voorgaande conclusies, waarbij de informatie één of meer gegevensopnames omvat die overeenkomen met het gebied met het toegevoegde of verwijderde object (O).7. The system of any preceding claim, wherein the information comprises one or more data records corresponding to the area with the added or removed object (O). 8. 
Het systeem volgens conclusie 6 of 7, waarbij het lokale verwerkingsmiddel verder geconfigureerd is om een klasse van het toegevoegde of verwijderde object (O) te bepalen, en om informatie enkel te zenden als het toegevoegde of verwijderde object (O) tot één of meer vooraf bepaalde klassen behoort, waarbij de één of meer vooraf bepaalde klassen bij voorkeur klassen zijn voor één of meer statische objecten, bij voorkeur infrastructuurelementen.8. The system of claim 6 or 7, wherein the local processing means is further configured to determine a class of the added or removed object (O), and to send information only if the added or removed object (O) belongs to one or more predetermined classes, wherein the one or more predetermined classes are preferably classes for one or more static objects, preferably infrastructure elements. 9. Het systeem volgens één der voorgaande conclusies, waarbij het eerste en/of tweede lichtarmatuur en/of het centrale verwerkingsmiddel geconfigureerd is om te bepalen welke elementen in het eerste en/of tweede gebied permanent zijn en welke elementen niet- permanent zijn. op basis van de eerste en/of tweede gegevens; en/of om wijzigingen op korte termijn, middellange termijn, en/of lange termijn in het eerste en/of tweede gebied te bepalen, op basis van de eerste en/of tweede gegevens.9. The system of any preceding claim, wherein the first and/or second light fixture and/or the central processing means is configured to determine which elements in the first and/or second area are permanent and which elements are non-permanent based on the first and/or second data; and/or to determine short-term, medium-term, and/or long-term changes in the first and/or second area based on the first and/or second data. 10. Het systeem volgens de voorgaande conclusie, waarbij het centrale verwerkingsmiddel (300) geconfigureerd is om de ten minste ene kaart met de elementen die permanent zijn en zonder de elementen die niet-permanent zijn te genereren.10. 
The system of the preceding claim, wherein the central processing means (300) is configured to generate the at least one map with the elements that are permanent and without the elements that are non-permanent. 11. Het systeem volgens één der voorgaande conclusies, waarbij het eerste en/of tweede lichtarmatuur en/of het centrale verwerkingsmiddel geconfigureerd is om informatie over zich herhalende of periodieke wijzigingen te bepalen. op basis van de eerste en/of tweede gegevens.11. The system of any preceding claim, wherein the first and/or second light fixture and/or the central processing means is configured to determine information about repetitive or periodic changes based on the first and/or second data. 12. Het systeem volgens de voorgaande conclusie, waarbij het centrale verwerkingsmiddel (300) geconfigureerd is om de ten minste één kaart te genereren door het gebruiken van informatie over zich herhalende of periodieke wijzigingen.12. The system of the preceding claim, wherein the central processing means (300) is configured to generate the at least one map using information about recurring or periodic changes. 13. Het systeem volgens één der voorgaande conclusies, waarbij het ten minste één eerste en/of tweede opnamemiddel (110a, 1105) ten minste één van de volgende omvat: een LIDAR sensor, een geluidssensor, een radar.13. The system of any preceding claim, wherein the at least one first and/or second recording means (110a, 1105) comprises at least one of the following: a LIDAR sensor, a sound sensor, a radar. 14. Het systeem volgens één der voorgaande conclusies, waarbij het aantal lichtarmaturen (100a, 100b) elk een paal met een lichtarmatuurkop of een aantal boven elkaar aangebrachte paalmodules langs een lengteas omvat, en waarbij het ten minste ene overeenkomstige opnamemiddel (1104, 110b) geconfigureerd is om in of op de paal of in of op de lichtarmatuurkop of in of op een paalmodule van het aantal paalmodules aangebracht te zijn.14. 
14. The system of any preceding claim, wherein the plurality of light fixtures (100a, 100b) each comprise a pole with a light fixture head or a plurality of pole modules arranged one above the other along a longitudinal axis, and wherein the at least one corresponding recording means (110a, 110b) is configured to be arranged in or on the pole, in or on the light fixture head, or in or on a pole module of the plurality of pole modules.

15. A system (100) for updating data of an area in the vicinity of one or more light fixtures, the system comprising: one or more light fixtures (100a, 100b) provided with one or more recording means (110a, 110b) configured to record data of the area; and a processing means (120) configured to detect, based on the recorded data, whether a long-term change has occurred in the area.

16. The system of claim 15, wherein at least one of the one or more light fixtures comprises the processing means and a transmitting means (130) configured to transmit information regarding a result of the detection to a remote device (350).
17. The system of claim 16, wherein the remote device is configured to incorporate a result of the detection into a map of the area; and/or to select, from the recorded data, one or more data items recorded before and/or one or more data items recorded after the detected long-term change, and optionally to use the selected one or more data items to generate a map.

18. The system of any one of claims 15-17, wherein the processing means is configured to incorporate a result of the detection into a map of the area; and/or wherein the processing means is configured to select, from the recorded data, one or more data items recorded before and/or one or more data items recorded after the detected long-term change, and optionally to use the selected one or more data items to generate a map.
19. The system of claims 16 and 18, wherein the transmitting means is configured to transmit the map to the remote device.

20. The system of any of claims 15-19, wherein the recorded data relate to a first predetermined time period, the first predetermined time period being at least one week, preferably at least one month, more preferably at least three months.

21. The system of any of claims 15-20, wherein the data are recorded at successive moments in time, preferably at least 2 recordings per day, more preferably at least 10 recordings per day.

22. The system of any of claims 15-21, wherein the data comprise one or more images.

23. The system of claim 22, wherein the long-term change is detected by comparing pixels in successive images recorded at successive moments in time, and by detecting that the change is long-term when the pixel change persists over at least a first predetermined number of successive images.
24. The system of claim 22 or 23, wherein the long-term change is detected by comparing pixels in successive static images, a static image being obtained by averaging pixel values of a predetermined number of successive images recorded at successive moments in time.

25. The system of any of claims 15-24, wherein the processing means (120) is further configured to detect, as the long-term change, that an object has been added to or removed from the area.

26. The system of claims 16 and 25, wherein the transmitting means is configured to transmit the information only if the added or removed object belongs to one or more predetermined classes.

27. The system of the preceding claim, wherein the information comprises the class of the added or removed object.
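The detection scheme of claims 23 and 24 (and the corresponding method claims 55 and 56) is not tied to any particular implementation; a minimal NumPy sketch is given below. The difference threshold and persistence count (`diff_threshold`, `min_persistence`) are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def static_image(frames):
    """Average pixel values over successive frames, as in claim 24: short-lived
    changes (e.g. a passing vehicle) are smoothed out of the result."""
    return np.mean(np.stack([f.astype(float) for f in frames]), axis=0)

def long_term_change_mask(images, diff_threshold=30.0, min_persistence=3):
    """Flag pixels whose deviation from the first image persists over at least
    `min_persistence` successive images, as in claim 23."""
    reference = images[0].astype(float)
    persistent = np.ones(reference.shape, dtype=bool)
    for img in images[1:1 + min_persistence]:
        # A pixel stays flagged only if it differs in every subsequent image.
        persistent &= np.abs(img.astype(float) - reference) > diff_threshold
    return persistent
```

A newly placed static object then shows up as a connected region of flagged pixels, which a downstream classifier could assign to one of the predetermined object classes of claims 25-28.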
28. The system of claim 26 or 27, wherein the one or more predetermined classes are classes for one or more static objects, preferably infrastructure elements.

29. The system of claim 16 and optionally any of claims 17-28, wherein the information comprises the position of the long-term change.

30. The system of any of claims 15-29, wherein the one or more recording means (110a, 110b) are further configured to adjust a duration between successive data recordings recorded at successive moments in time, based on detected and/or received data relating to the area.

31. The system of any of claims 15-30, wherein the one or more recording means (110a, 110b) are configured to be triggered by a remote user to record data of the area.
32. The system of any of claims 15-31, wherein the one or more light fixtures (100a, 100b) each comprise a pole and a light fixture head or a plurality of pole modules arranged one above the other along a longitudinal axis, and wherein the one or more image recording means (110a, 110b) are configured to be arranged in or on the pole, in or on the light fixture head, or in or on a pole module of the plurality of pole modules.

33. A system (100) for determining sunlight exposure information of an area (200) in the vicinity of one or more edge devices (100a, 100b, 100c), the system comprising: one or more edge devices (100a, 100b, 100c) provided with one or more image recording means (110a, 110b, 110c) configured to record a sequence of images, recorded at successive moments in time, of one or more surfaces of the area (200), the sequence of images preferably covering at least one daylight period; and a processing means (120) configured to receive the recorded sequence of images and to identify sunlit (L) and/or shaded (S) portions of the one or more surfaces in the recorded sequence of images.
34. The system of the preceding claim, wherein the processing means is configured to determine at least one map of the area, preferably a sequence of maps over time, each of the at least one map indicating identified sunlit (L) and/or shaded (S) portions of the one or more surfaces.

35. The system of claim 33 or 34, wherein the processing means is configured to identify a type of the one or more surfaces of the area, and to determine at least one map based on the identified type of the one or more surfaces, each of the at least one map comprising the identified sunlit (L) and/or shaded (S) portions of the one or more surfaces.

36. The system of claim 34 or 35, wherein the at least one map comprises at least one 3D map.
37. The system of any of claims 33-36, wherein the processing means is configured to determine a time period during which a predetermined area in the vicinity of the one or more edge devices is exposed to direct sunlight, preferably based on the recorded sequence of images.

38. The system of any of claims 33-37, wherein the processing means is further configured to receive weather information from a weather database and to determine an amount of direct and/or indirect sunlight to which a predetermined area in the vicinity of the one or more edge devices is exposed during the daylight period, based on the identified sunlit (L) and/or shaded (S) portions of surfaces and the received weather information.
39. The system of any of claims 33-38, wherein the one or more edge devices are further associated with one or more sensors, such as a light sensor and/or a temperature sensor, and wherein the processing means is further configured to determine an amount of direct and/or indirect sunlight to which a predetermined area in the vicinity of the one or more edge devices is exposed during the daylight period, based on the identified sunlit (L) and/or shaded (S) portions of surfaces and data detected by the one or more sensors.

40. The system of any of claims 33-39, wherein the processing means is further configured to identify sunlit (L) portions of a surface as portions having a luminosity above a first pixel luminosity threshold value, and/or to identify shaded (S) portions of a surface as portions having a luminosity below a second pixel luminosity threshold value, the first luminosity threshold value being greater than or equal to the second luminosity threshold value.

41. The system of the preceding claim, wherein the processing means is further configured to determine the first or second pixel luminosity threshold value based on optical properties of the surface.
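The two-threshold classification of claims 40 and 41 can be sketched as follows. The threshold values and the albedo-based adjustment are illustrative assumptions for the sketch, not values from the patent.

```python
import numpy as np

def classify_sunlit_shaded(luminosity, t_sunlit=170.0, t_shaded=80.0):
    """Label pixels 'L' (sunlit) above the first threshold, 'S' (shaded) below
    the second, and '?' in between, with t_sunlit >= t_shaded as in claim 40."""
    assert t_sunlit >= t_shaded
    labels = np.full(luminosity.shape, "?", dtype="<U1")
    labels[luminosity > t_sunlit] = "L"
    labels[luminosity < t_shaded] = "S"
    return labels

def thresholds_for_surface(albedo, base_sunlit=170.0, base_shaded=80.0):
    """Scale the thresholds by a surface's reflectivity, in the spirit of
    claim 41: dark asphalt warrants lower thresholds than light concrete."""
    return base_sunlit * albedo, base_shaded * albedo
```

Keeping the first threshold above the second leaves an undecided band, which avoids flickering labels for pixels whose luminosity hovers near a single cut-off.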
42. The system of any of claims 33-41, wherein the one or more edge devices are, or are included in, light fixtures (100a, 100b).

43. The system of the preceding claim, wherein each light fixture comprises a pole and a light fixture head or a plurality of pole modules arranged one above the other along a longitudinal axis, and wherein the one or more image recording means (110a, 110b) of the one or more edge devices are configured to be arranged in or on the pole, in or on the light fixture head, or in or on a pole module of the plurality of pole modules.

44. The system of any of claims 33-43, wherein the processing means (120) is further configured to transmit information regarding the sunlit (L) and/or shaded (S) portions of the one or more surfaces to a remote device.
45. The system of the preceding claim, wherein the information regarding the sunlit (L) and/or shaded (S) portions is in the form of a binary map, or wherein the remote device is configured to generate a binary map based on the received information.

46. The system of any of claims 33-45, wherein the processing means is incorporated in at least one of the one or more edge devices.

47. A method for determining environmental characteristics of an area in the vicinity of a plurality of light fixtures, the method comprising the following steps: recording first data of a first area (200a) and recording second data of a second area (200b); and determining, by the processing means (300) and based on the first and second data, at least one map, each of the at least one map indicating environmental characteristics of an area comprising the first and second areas.

48. The method of the preceding claim, wherein the at least one map comprises at least one 3D map, each 3D map indicating environmental characteristics of ground areas and/or infrastructure areas of the area.
49. The method of claim 47 or 48, further comprising the step of identifying sunlit (L) and/or shaded (S) portions in the first and second data, the portions optionally comprising portions of a ground area and/or an infrastructure area of the first and second areas.

50. The method of any of claims 47-49, further comprising the steps of: determining which elements in the first and/or second area are permanent and which elements are non-permanent; and/or determining short-term, medium-term and/or long-term changes in the first and/or second area.

51. The method of the preceding claim, wherein the determining is performed such that the at least one map includes the elements that are permanent and excludes the elements that are non-permanent; and/or wherein the determining is performed such that the at least one map includes the short-term, medium-term and/or long-term changes in the first and/or second area.
52. A method for updating data of an area in the vicinity of a plurality of light fixtures, comprising the following steps: recording data of an area; and detecting, by a processing means (120), whether long-term changes have occurred in the area based on a result of processing the data.

53. The method of the preceding claim, further comprising: transmitting information regarding a result of the detection to a remote device (350), the information optionally comprising one or more data recordings recorded before and/or one or more data recordings recorded after the detected long-term change, and/or information relating thereto.

54. The method of claim 52 or 53, wherein the data comprise images.
55. The method of claim 54, wherein the detecting comprises comparing pixels in successive images recorded at successive moments in time, and detecting that the pixels have changed when the pixel change persists over at least a first predetermined number of successive images.

56. The method of claim 54 or 55, further comprising the step of averaging pixel values of a predetermined number of successive images recorded at successive moments in time to obtain a static image; and wherein the detecting comprises comparing pixels in successive static images.

57. The method of any of claims 52-56, wherein the detecting comprises detecting that an object has been added to or removed from the area; and optionally further comprises determining a class of the added or removed object.

58. The method of claims 53 and 57, wherein the step of transmitting is performed only if the added or removed object belongs to one or more predetermined classes.
59. A method for determining sunlight exposure over an area (200) in the vicinity of one or more edge devices (100a, 100b, 100c), comprising the following steps: recording a sequence of images of surfaces of the area (200) over time, the sequence of images preferably covering at least one daylight period; and identifying, by the processing means (120), sunlit (L) and/or shaded (S) portions of one or more surfaces in the recorded sequence of images.

60. The method of the preceding claim, further comprising the step of determining, by the processing means, at least one map of the area, preferably a sequence of maps over time, each of the at least one map indicating identified sunlit (L) and/or shaded (S) portions of the one or more surfaces.

61. The method of claim 59 or 60, further comprising the step of determining, by the processing means, a period of time during which a predetermined area in the vicinity of the one or more edge devices is exposed to direct sunlight.
De werkwijze volgens één van conclusies 59-61, verder omvattende de stappen van: - het ontvangen bij het verwerkingsmiddel van weerinformatie uit een weerdatabase en/of uit één of meer lichtsensoren die geconfigureerd zijn om zonlicht te meten; en - het bepalen van een hoeveelheid direct en/of indirect zonlicht tijdens de daglichtperiode waaraan een vooraf bepaald gebied in de nabijheid van de één of meer randinrichtingen is blootgesteld op basis van de geïdentificeerde zonverlichte (L) en/of schaduwrijke (S) delen van de één of meer oppervlakken en de ontvangen weerinformatie.62. The method of any of claims 59-61, further comprising the steps of: - receiving at the processing means weather information from a weather database and/or from one or more light sensors configured to measure sunlight; and - determining an amount of direct and/or indirect sunlight during the daylight period to which a predetermined area in the vicinity of the one or more edge devices is exposed based on the identified sunlit (L) and/or shaded (S) portions of the one or more surfaces and the received weather information.
NL2034751A 2023-05-02 2023-05-02 Luminaire system for determining environmental characteristics NL2034751B1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
NL2034751A NL2034751B1 (en) 2023-05-02 2023-05-02 Luminaire system for determining environmental characteristics
PCT/EP2024/062155 WO2024227897A1 (en) 2023-05-02 2024-05-02 Luminaire system for determining environmental characteristics
AU2024265495A AU2024265495A1 (en) 2023-05-02 2024-05-02 Luminaire system for determining environmental characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
NL2034751A NL2034751B1 (en) 2023-05-02 2023-05-02 Luminaire system for determining environmental characteristics

Publications (1)

Publication Number Publication Date
NL2034751B1 true NL2034751B1 (en) 2024-11-14

Family

ID=87974403

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2034751A NL2034751B1 (en) 2023-05-02 2023-05-02 Luminaire system for determining environmental characteristics

Country Status (1)

Country Link
NL (1) NL2034751B1 (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180115751A1 (en) * 2015-03-31 2018-04-26 Westire Technology Limited Smart city closed camera photocell and street lamp device
EP3076073B1 (en) 2015-04-02 2017-03-08 Schreder Improvements in or relating to modular luminaire assemblies
EP3076073A1 (en) 2015-04-02 2016-10-05 Schreder Improvements in or relating to modular luminaire assemblies
WO2016193430A1 (en) 2015-06-05 2016-12-08 Schreder Improvements in or relating to luminaires
WO2019043046A1 (en) 2017-08-29 2019-03-07 Schreder S.A. Lamp post with improved cooling
WO2019043045A1 (en) 2017-08-29 2019-03-07 Schreder S.A. Lamp post with functional modules
WO2019053259A1 (en) 2017-09-18 2019-03-21 Schreder S.A. Air quality pole module and lamp post comprising such a module
WO2019092273A1 (en) 2017-11-13 2019-05-16 Schreder S.A. Lamp post with a functional pole module with bracket
US20200357257A1 (en) * 2017-12-12 2020-11-12 Schreder S.A. Luminaire network with sensors
WO2019175435A2 (en) 2018-03-16 2019-09-19 Schreder S.A. Luminaire network with sensors
WO2019243331A1 (en) 2018-06-18 2019-12-26 Schreder S.A. Luminaire system with holder
JP7092615B2 (en) * 2018-08-24 2022-06-28 セコム株式会社 Shadow detector, shadow detection method, shadow detection program, learning device, learning method, and learning program
WO2020053342A1 (en) 2018-09-12 2020-03-19 Schreder S.A. Luminaire system for determining weather related information
WO2020152294A1 (en) 2019-01-23 2020-07-30 Schreder S.A. Lamp post with tubular pole
WO2021094612A1 (en) 2019-11-15 2021-05-20 Schreder S.A. Lamp post with a functional pole module
WO2021239993A1 (en) 2020-05-29 2021-12-02 Schreder S.A. Luminaire system and network of luminaire systems for disinfecting areas
WO2022122755A1 (en) 2020-12-07 2022-06-16 Schreder S.A. Self-learning system with sensors
WO2022122750A1 (en) 2020-12-07 2022-06-16 Schreder S.A. Network system using fog computing
WO2022189601A1 (en) 2021-03-10 2022-09-15 Schreder S.A. Network system with sensor configuration model update
WO2023006970A1 (en) 2021-07-29 2023-02-02 Schreder Iluminaçao Sa Edge device configuration system and method

Similar Documents

Publication Publication Date Title
AU2021202430B2 (en) Smart city closed camera photocell and street lamp device
US10653014B2 (en) Systems and methods for an intermediate device structure
US11554776B2 (en) System and method for predicting of absolute and relative risks for car accidents
CN109686109B (en) Parking lot safety monitoring management system and method based on artificial intelligence
CN104613892B (en) Merge the compound snow depth monitoring system of video detection technology and laser ranging technique
US20180301031A1 (en) A method and system for automatically detecting and mapping points-of-interest and real-time navigation using the same
US20250004791A1 (en) Edge Device Configuration System and Method
JP2020098586A (en) Demand and pricing for ride sharing with car edge computing
CN110648528A (en) Wisdom highway management system
US20240333880A1 (en) Systems and methods for monitoring urban areas
CN120087915A (en) A smart city monitoring system and method
NL2034751B1 (en) Luminaire system for determining environmental characteristics
WO2021255277A1 (en) Luminaire network management method
NL2035257B1 (en) Method for optimization of sensor network placement
WO2024227897A1 (en) Luminaire system for determining environmental characteristics
Majumder An Approach to Counting Vehicles from Pre-Recorded Video Using Computer Algorithms
KR102821046B1 (en) Location based parameters for an image sensor
US12063730B2 (en) Method and system for performing management of a luminaire network
Finley et al. Evaluation of Wrong-Way Driving Detection Technologies
Mohring Smart streetlights: a feasibility study
KR20250084200A (en) Creating agents for simulated autonomous vehicles
KR20250000207A (en) Method and device for providing information for optimal installation of road infrastructure sensors based on digital twin
CN120599815A (en) A 4G highway intelligent traffic condition monitoring system
CN120916302A (en) Urban landscape lighting control methods, systems and procedures products
CN109862509A (en) A sensor node positioning system supporting WLAN fingerprint positioning