HK1231998A1 - Systems and methods for probabilistic semantic sensing in a sensory network - Google Patents
- Publication number: HK1231998A1
- Application number: HK17105405.9A
- Authority: HK (Hong Kong)
- Prior art keywords: data, semantic data, application, derived, classifier
Abstract
Systems and methods for probabilistic semantic sensing in a sensory network are disclosed. The system receives raw sensor data from a plurality of sensors and generates semantic data including sensed events. The system correlates the semantic data based on classifiers to generate aggregations of semantic data. Further, the system analyzes the aggregations of semantic data with a probabilistic engine to produce a corresponding plurality of derived events, each of which includes a derived probability. The system generates a first derived event, including a first derived probability, based on a plurality of probabilities that respectively represent the confidence of an associated semantic datum, to enable at least one application to perform a service based on the plurality of derived events.
Description
Related application
This application claims priority to U.S. patent application No. 14/639,901, filed on March 5, 2015, and to U.S. provisional application No. 61/948,960, filed on March 6, 2014, which is incorporated by reference in its entirety. The present application is related to United States non-provisional patent application No. 14/024,561, entitled "Networked Lighting Infrastructure for Sensing Applications," filed on September 11, 2013, and to its United States provisional application No. 61/699,968, filed on September 12, 2012 under the same name.
Technical Field
The present invention relates to the field of data communication, and more particularly to systems and methods for probabilistic semantic sensing in a sensing network.
Background
A sensing network includes a plurality of sensors that can be used to sense and identify objects. The object being sensed may be a person, a vehicle, or another entity, and the entity may be stationary or in motion. Sometimes a sensor may not be positioned to sense the entire entity; at other times, an obstruction may impair sensing of the entity. In both instances, real-world impairments can lead to unreliable results.
Drawings
FIG. 1 illustrates a system for probabilistic semantic sensing in a sensing network according to an embodiment;
FIG. 2 further illustrates a system for probabilistic semantic sensing in a sensing network according to an embodiment;
FIG. 3 is a block diagram illustrating a system for probabilistic semantic sensing in a sensing network according to an embodiment;
FIG. 4A is a block diagram illustrating sensed event information, according to an embodiment;
FIG. 4B is a block diagram illustrating derived event information, according to an embodiment;
FIG. 5 is a block diagram illustrating user input information according to one embodiment;
FIG. 6 is a block diagram illustrating a method for probabilistic semantic sensing in a sensing network according to an embodiment;
FIG. 7 illustrates a portion of an overall architecture of a Lighting Infrastructure Application Framework (LIAF) in accordance with an embodiment;
FIG. 8 illustrates the architecture of a system at a higher level in accordance with an embodiment;
FIG. 9 is a block diagram of a node platform according to an embodiment;
FIG. 10 is a block diagram of a gateway platform according to an embodiment;
FIG. 11 is a block diagram of a service platform according to an embodiment;
FIG. 12 is a diagram illustrating a revenue model for a lighting infrastructure application, in accordance with an embodiment;
FIG. 13 illustrates a parking garage application for a networked lighting system according to an embodiment;
FIG. 14 illustrates a lighting maintenance application of a networked lighting system according to an embodiment;
FIG. 15A illustrates a warehouse inventory application of a networked lighting system according to an embodiment;
FIG. 15B illustrates a warehouse inventory application of a networked lighting system according to an embodiment;
FIG. 16 illustrates an application of a networked lighting system for monitoring a loading dock, in accordance with an embodiment;
FIG. 17 is a block diagram illustrating power monitoring and control circuitry at a node according to an embodiment;
FIG. 18 is a block diagram illustrating an application controller at a node according to one embodiment;
FIG. 19 is a block diagram illustrating an example of a software architecture installable on a machine, according to some example embodiments; and
FIG. 20 is a block diagram illustrating components of a machine capable of reading instructions from a machine-readable medium (e.g., a machine-readable storage medium) and performing any one or more of the methodologies discussed herein, according to some example embodiments.
Headings are provided herein for convenience only and do not necessarily affect the scope or meaning of the terms used.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of some example embodiments. It will be apparent, however, to one skilled in the art that embodiments of the invention may be practiced without these specific details.
Detailed Description
The following description includes systems, methods, techniques, instruction sequences, and computer machine program products that embody illustrative embodiments of the invention. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be apparent, however, to one skilled in the art that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail.
The present invention is directed to probabilistic semantic sensing in a sensor network. The present invention solves the problem of accurately sensing observable phenomena in the presence of real-world obstructions or impairments. It addresses this problem by sensing the same underlying physical phenomenon in parallel and generating a single probability associated with the meaning or semantics of that physical phenomenon. Specifically, the present invention solves the problem by: sensing the same underlying physical phenomenon in parallel to generate semantic data describing the physical phenomenon in the form of sensed events that each include semantic data (e.g., a parking spot is empty); associating each semantic datum with a probability that quantifies the reliability of the semantic data; correlating the sensed events based on classifiers to generate a logical aggregation of the semantic data (e.g., for the same parking spot); analyzing the aggregation of semantic data with a probability engine to generate a single derived event from the plurality of sensed events (where the single derived event includes a single derived probability); and enabling one or more applications that use the derived events. Those skilled in the art will recognize that while the present invention is discussed primarily in the context of light sensing networks, it is also directed to sensing networks capable of sensing all types of physical phenomena (e.g., visual, audible, tactile, etc.).
The advent of light sensing networks, or lighting infrastructure with embedded capabilities for application platforms, sensing, networking, and processing, creates the opportunity to distribute sensors at significant scale and spatial density and to enable sensing-based applications. However, the success of applications implemented by light sensing networks may be limited by the reliability of sensor data, which may be constrained in part by the locations at which sensors are deployed or by interference caused by real-world obstructions (leaves, cars, people, other objects, etc.). Additionally, for any given sensor, various portions of the generated data may be more reliable or less reliable; for example, due to limited resolution, a video sensor may be more reliable in detecting the occupancy state of a parking spot directly in front of the sensor than of one at a more distant location. In addition, the impracticality of delivering all of the data collected at each node in a light sensing network strongly suggests that conclusions be drawn about the data at intermediate steps before the data or data outputs are combined. In other words, not all of the original data may be accessible in the same location. To the extent that multiple sensors can produce relevant data for a particular calculation related to an application of the light sensing network, or to the extent that external data inputs can affect such a calculation, it is optimal to create a system in which these multiple data sources can be optimally combined to produce the most useful calculation results and thus the most successful applications.
The present invention describes the creation of probabilistic systems and methods that optimize the usefulness of conclusions based on data of limited reliability from a light sensing network. The described systems and methods include associating each semantic datum with an associated probability representing the certainty or confidence of the data, using parameters of the semantic data to correlate different semantic data, and using a probability engine to derive events with derived probabilities. Enhanced reliability is described in the context of: lighting management and monitoring, parking management, security monitoring, traffic monitoring, retail monitoring, business intelligence monitoring, asset monitoring, and environmental monitoring.
One system for implementing the described method may include a Light Sensing Network (LSN). The LSN may include integrated application platforms, sensors, and network capabilities, as described below. LSNs associated with the present invention may be constructed in a manner such that some processing of raw sensor data occurs locally at each node within the network. The output of this processing may be semantic data, or in other words, metadata or derived data representing key features detected during processing. The purpose of generating semantic data is to reduce the size of the data for subsequent transfer and further analysis. LSNs associated with the present invention may also be constructed in a manner such that passing semantic data beyond the originating node causes the semantic data to be aggregated and correlated with other semantic data. The network connectivity of LSNs may take a variety of topologies, but the present invention is agnostic to the particular topology (hub-and-spoke, ad hoc, etc.) as long as aggregation points occur within the network where multiple sources of semantic data are combined.
FIG. 1 illustrates a system 101 for probabilistic semantic sensing in a sensing network according to an embodiment. System 101 may include a sensing network that includes "light A" positioned on the left side and "light B" positioned on the right side. "Light A" and "light B" may each include a sensing node in communication with each other and with other sensing nodes (not shown) that are part of the sensing network. Each of the sensing nodes contains one or more sensors that sense raw sensor data of occupancy states of different parts of a parking lot, and more specifically of parking spots in the parking lot. For example, "light A" is illustrated as receiving raw sensor data for a portion of the parking lot and generating semantic data for parking spot X1 and semantic data for parking spot X2. Also for example, "light B" is illustrated as receiving raw sensor data for a different portion of the parking lot and generating semantic data for parking spot X2 and semantic data for parking spot X3. More specifically, "light A" captures semantic data in the form: parking spot X1 has an occupancy state of "empty" with a 99% probability (e.g., P(X1) = 99%) that the "empty" state is accurate, and parking spot X2 has an occupancy state of "empty" with a 75% probability (e.g., P(X2) = 75%) that the "empty" state is accurate. The lower probability for parking spot X2 may be due to the limited visibility (e.g., fewer pixels) of parking spot X2 as sensed by "light A". In addition, "light B" captures semantic data in the form: parking spot X2 has an occupancy state of "empty" with a probability of 25%, and parking spot X3 has an occupancy state of "empty" with a probability of 99%, where the reduced probability for parking spot X2 is again due to limited visibility. That is, FIG. 1 illustrates semantic data that includes probabilities that vary depending on location.
FIG. 2 illustrates a system 103 for probabilistic semantic sensing in a sensing network according to an embodiment. System 103 operates in a manner similar to system 101. System 103 is illustrated to show how real-world obstructions (e.g., trees, other vehicles, etc.) limit the probabilities of semantic data. Specifically, "light A" is illustrated as capturing semantic data indicating that parking spot X2 is empty with a probability of 10%, and "light B" is illustrated as capturing semantic data indicating that parking spot X3 is empty with a probability of 10%. The reduced confidence for parking spot X2 is due to a tree that prevents the sensor at "light A" from fully sensing parking spot X2, and the reduced confidence for parking spot X3 is due to a transport vehicle that prevents the sensor at "light B" from fully sensing parking spot X3. That is, FIG. 2 illustrates semantic data that includes probabilities that vary depending on an obstruction.
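By way of illustration only, the following is a minimal Python sketch of the kind of probabilistic semantic data records that "light A" and "light B" might emit in the scenarios of FIGS. 1 and 2; the field names are hypothetical and the actual data encoding is not specified by this description.

```python
from dataclasses import dataclass

@dataclass
class SemanticDatum:
    """One sensed assertion about a parking spot, with its confidence."""
    sensor_id: str      # which light/sensing node produced the datum
    spot_id: str        # which parking spot the assertion is about
    state: str          # asserted occupancy state, e.g. "empty" or "occupied"
    probability: float  # confidence that the asserted state is accurate (0.0 to 1.0)

# FIG. 1: probabilities vary with each light's visibility of the spot.
fig1 = [
    SemanticDatum("light_A", "X1", "empty", 0.99),
    SemanticDatum("light_A", "X2", "empty", 0.75),  # X2 is farther from light A
    SemanticDatum("light_B", "X2", "empty", 0.25),  # X2 is farther from light B
    SemanticDatum("light_B", "X3", "empty", 0.99),
]

# FIG. 2: probabilities drop further when real-world obstructions block the view.
fig2 = [
    SemanticDatum("light_A", "X2", "empty", 0.10),  # a tree blocks light A's view of X2
    SemanticDatum("light_B", "X3", "empty", 0.10),  # a vehicle blocks light B's view of X3
]
```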
With respect to the process of determining semantic data from raw sensor data, the present invention does not claim any details of this process, except that it requires: (i) associating each semantic datum with a probability for that semantic datum, (ii) associating each semantic datum with the spatial and temporal coordinates of the location of the sensor, and (iii) associating each semantic datum with the spatial and temporal coordinates of the event detected remotely from the sensor.
Types of raw sensor data that may be analyzed in order to generate semantic data include, but are not limited to, environmental sensor data, gas data, accelerometer data, particle data, power data, RF signals, ambient light data, motion detection data, still images, video data, audio data, and the like. According to some embodiments, various sensor nodes in the LSN may employ processing of raw sensor data to generate probabilistic semantic data. Probabilistic semantic data may represent events that include the detection of people, vehicles, or other objects via computer vision (video analytics) processing or other analysis of large data sets occurring locally on nodes in a network.
FIG. 3 is a block diagram illustrating a system 107 for probabilistic semantic sensing in a sensing network according to an embodiment. The system 107 may include two or more sensing nodes 109, an aggregation node 125, and one or more probability applications 117. Each of the sensing nodes 109 (e.g., machines) may include a sensing engine 111. Broadly speaking, the sensing nodes 109 each include one or more sensors 30 that capture raw sensor data, which is passed to the sensing engine 111, which in turn processes the raw sensor data to generate semantic data 121. Semantic data 121 may include sensed event information 123 in the form of sensed events that each include a classifier that classifies the semantic data 121. The classifier may include semantic data (not shown) that represents the meaning of the raw sensor data as discrete events including an expression of a binary state. For example, the binary state may describe a parking spot (e.g., occupied, vacant), a person (e.g., present, absent), or a vehicle (e.g., present, absent). Additional classifiers may be associated with the semantic data, as discussed below.
Sensing node 109 may pass semantic data 121 to aggregation node 125. The aggregation node 125 may include a correlation engine 113 and a probability engine 115. Other embodiments may include multiple aggregation nodes 125. Sensing nodes 109 that sense the same underlying phenomenon (e.g., parking spot #123) may communicate semantic data 121 (sensed events) representing the same underlying phenomenon to the same aggregation node 125. Thus, some sensing nodes 109 may communicate with two or more aggregation nodes 125 based on underlying phenomena being sensed and communicated by the sensing nodes 109. Aggregation node 125 may preferably be implemented in the cloud. Other embodiments may implement correlation engine 113 and probability engine 115 on any combination of sensing nodes 109 or another machine or the like.
Correlation engine 113 receives sensed event information 123 in the form of sensed events over a network (e.g., LAN, WAN, internet, etc.) and correlates/aggregates the semantic data 121 based on the classifiers in each of the sensed events to generate aggregations of semantic data 127. The aggregations of semantic data 121 may be logically grouped. According to some embodiments, correlation engine 113 may correlate and aggregate semantic data 121 into aggregated semantic data 127 by constructing an abstract graph representing the correlation of two or more semantic data 121 received from one or more sensing nodes 109 in the sensing network. Correlation engine 113 can construct the abstract graph based on the similarity of the semantic data 121 or based on other classifiers included in the sensed events. Correlation engine 113 may further construct the abstract graph based on relationships between classifiers that include the spatial and temporal coordinates associated with each semantic datum. For example only, the correlation engine 113 may correlate and aggregate all sensed events received from the sensing nodes 109 over a period of time that each include an assertion of an occupancy state (e.g., occupied, unoccupied) for a particular parking spot in the parking lot and a probability of confidence with respect to that assertion. For further example, correlation engine 113 may correlate and aggregate all sensed events received from sensing nodes 109 over a period of time that each represent the presence (e.g., present, absent) of a person at a particular location in the parking lot and a probability of confidence with respect to that assertion. Correlation engine 113 can construct the abstract graph using exact matches of classifiers 141 (FIG. 4A), fuzzy matches of classifiers 141, or a combination of both. According to some embodiments, correlation engine 113 may correlate and aggregate semantic data 121 into aggregated semantic data 127 based on a mathematical relationship between the locations of the sensors 30 and/or the locations of the semantic data (e.g., the locations of parking spots declared to be empty or occupied). For example, correlation engine 113 may identify mathematical relationships (e.g., matches, approximate matches, etc.) between the locations of sensors 30 (expressed in spatial and temporal coordinates) and/or the locations of the semantic data (expressed in spatial and temporal coordinates).
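As an illustration only, the following Python sketch shows one way such classifier-based correlation could be organized; the field names and the exact-match keying are assumptions, since the description also allows fuzzy matching and graph-based correlation.

```python
from collections import defaultdict

def correlate(sensed_events):
    """Group sensed events whose classifiers refer to the same underlying
    phenomenon. A real correlation engine may also use fuzzy matching,
    temporal windows, or an abstract graph of relationships."""
    aggregations = defaultdict(list)
    for event in sensed_events:
        # Key on classifiers that identify the phenomenon: what is asserted,
        # which application it targets, and where the assertion applies.
        key = (event["semantic"], event["app_id"], event["spot_location"])
        aggregations[key].append(event)
    return aggregations

# Example: two lights reporting on the same parking spot end up in one aggregation.
events = [
    {"semantic": "empty", "app_id": "parking", "spot_location": (37.0, -122.0),
     "sensor": "light_A", "probability": 0.95},
    {"semantic": "empty", "app_id": "parking", "spot_location": (37.0, -122.0),
     "sensor": "light_B", "probability": 0.85},
]
print(correlate(events))
```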
Correlation engine 113 can pass the aggregated semantic data 127 to probability engine 115, which in turn processes the aggregated semantic data 127 to generate derived event information 129 in the form of derived events. Those skilled in the art will appreciate that the sensing network may include multiple aggregation nodes 125 that communicate derived event information 129 to the same sensing processing interface 131. Probability engine 115 processes the aggregated semantic data 127 by calculating derived events having derived probabilities, using the relationship of the individual probabilities of each semantic datum included in each sensed event along with external data inputs. According to some embodiments, the external data inputs may include user input, application input, or other user-defined parameters such as a desired accuracy, as described further below. Probability engine 115 processes a single aggregation of semantic data 127 to produce a single derived event. Thus, the probability engine 115 can intelligently reduce the amount of data that is then passed on for further analysis. For example only, the amount of data in a single aggregation of semantic data 127 is reduced by probability engine 115 to a single derived event. Further, probability engine 115 may intelligently reduce the probabilities in the aggregation of semantic data 127 to produce a single derived event that includes a single derived probability. Probability engine 115 may further generate derived event information 129 based on thresholds 137 and weighting values 139. The probability engine 115 may use a threshold 137 associated with each semantic datum to determine the nature of the derived event. The initial determination of the threshold 137 may be defined heuristically or may result from any other suboptimal process. The probability engine 115 can alter the threshold 137 based on a continuous analysis of the probabilities of the semantic data 121, as represented by arrows illustrating movement of the threshold 137 both into and out of the probability engine 115. The probability engine 115 can assign a higher weighting value 139 to semantic data 121 with a higher probability or certainty. The probability engine 115 may receive initial weights that are heuristically defined or generated by any other suboptimal process, and can alter the assignment of weights in a stochastic process based on a continuous analysis of the probabilities of the semantic data 121, as represented by arrows illustrating movement of the weighting values 139 both into and out of the probability engine 115. According to some embodiments, the probability engine 115 may utilize the derived probabilities for additional processing, or may pass the derived probabilities in the derived events to the sensing processing interface 131 (e.g., an application processing interface). The sensing processing interface 131 may be read by one or more probability applications 117 (e.g., "application W," "application X," "application Y," "application Z") that in turn process the derived events to enable the one or more applications to perform a service.
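The description leaves the exact combination rule open; purely as a hedged sketch, the following Python snippet shows one plausible way a probability engine could reduce an aggregation to a single derived probability using weights and a threshold. The function name, fields, and the weighted-average rule are assumptions, not the claimed method.

```python
def derive_event(aggregation, weights=None, threshold=0.5):
    """Combine the per-sensor confidences in one aggregation of semantic data
    into a single derived probability (here, a weighted average) and compare
    it to a threshold to decide whether the asserted state holds."""
    if weights is None:
        weights = [1.0] * len(aggregation)  # default: equal weighting
    total_weight = sum(weights)
    derived_probability = sum(
        w * datum["probability"] for w, datum in zip(weights, aggregation)
    ) / total_weight
    return {
        "semantic": aggregation[0]["semantic"],      # e.g. "empty"
        "derived_probability": derived_probability,  # single derived probability
        "state_asserted": derived_probability >= threshold,
    }

# The two sensed events from the walkthrough below (95% and 85%) reduce to 90%.
aggregation = [
    {"semantic": "empty", "probability": 0.95},
    {"semantic": "empty", "probability": 0.85},
]
print(derive_event(aggregation))  # derived_probability == 0.90
```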
FIG. 4A is a block diagram illustrating sensed event information 123 according to an embodiment. The sensed event information 123 may be embodied as sensed events generated by the sensing node 109 and communicated to the aggregation node 125 where they are received by the correlation engine 113. The sensed event may include a classifier 141 used to characterize semantic data 121. Classifier 141 may include semantic data, application identifiers, probabilities, locations of sensors 30, locations of semantic data, and the like. The semantic data classifier describes discrete events sensed by the sensor 30 at the sensing node 109 and may include semantic data expressing a binary state, as previously described (e.g., parking spot occupied or unoccupied). The application identifier classifier may identify one or more probabilistic applications 117 that receive derived event information 129 generated based on sensed events containing application identifiers. The probabilistic classifier describes the certainty or reliability of a sensed event, as asserted in the associated semantic data. For example, the sensed event may include semantic data that the parking place is empty with a 99% probability, indicating a confidence that the parking place is indeed empty of 99%. The location classifier of the sensor describes the location of the sensor 30 that senses the associated semantic data. The location classifier of the sensor may be embodied as spatial coordinates of the sensor 30 sensing the associated semantic data and as temporal coordinates indicating the date and time of the associated semantic data sensed by the sensor 30. The location classifier of the semantic data describes the location of the associated semantic data. The location classifier of the semantic data may be embodied as spatial coordinates of the associated semantic data and temporal coordinates indicating the date and time of the associated semantic data sensed by the sensor 30.
Fig. 4B is a block diagram illustrating derived event information 129, in accordance with an embodiment. Derived event information 129 may be embodied as derived events generated by probability engine 115 and passed to sensory processing interface 131. The derived event may include a classifier 143 for characterizing the derived event information 129. The meaning of classifier 143 in the derived event corresponds to the meaning of classifier 141 in the sensed event, as previously described. The semantic data classifier describes discrete events sensed by one or more sensors 30 respectively located at the sensing nodes 109 and may include descriptors expressing a binary state, as previously described (e.g., parking spot occupied or unoccupied). In some examples, two or more sensors 30 may be located at the same sensing node 109. The application identifier classifier may identify one or more probabilistic applications 117 that receive the derived event information 129. The probabilistic classifier in the derived event describes the certainty or reliability of the derived event, as asserted in the associated semantic data. The probability classifier 143 may include probabilities based on two or more sensed events as determined by the probability engine 115. For example, the derived events may include semantic data that the parking place is empty with a 99% probability (based on two or more sensed events). The sensor's location classifier describes the location of one or more sensors 30 that sense the associated semantic data. The location classifier of a sensor may be embodied as spatial coordinates of one or more sensors 30 sensing associated semantic data and as corresponding temporal coordinates indicating the date and time of the associated semantic data sensed by one or more corresponding sensors 30. The location classifier of the semantic data describes the location of the associated semantic data. The location classifier of the semantic data may be embodied as spatial coordinates of the associated semantic data and temporal coordinates indicating the date and time of the associated semantic data sensed by the corresponding one or more sensors 30.
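For reference, here is a compact Python sketch, with hypothetical field names, of the classifier fields that FIGS. 4A and 4B describe for sensed events (classifiers 141) and derived events (classifiers 143); in a derived event the probability and locations summarize two or more contributing sensed events.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class EventClassifiers:
    """Classifier fields shared by sensed events (classifiers 141) and derived
    events (classifiers 143)."""
    semantic: str                            # discrete binary-state assertion, e.g. "occupied"
    app_id: str                              # probabilistic application(s) the event targets
    probability: float                       # confidence in the assertion (derived for 143)
    sensor_location: Tuple[float, float]     # spatial coordinates of the sensor(s)
    sensor_time: str                         # date/time the sensor acquired the data
    semantic_location: Tuple[float, float]   # spatial coordinates of the asserted event
    semantic_time: str                       # date/time the asserted event was sensed
```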
FIG. 5 is a block diagram illustrating user input information 135, according to one embodiment. The user input information 135 may include parameters or configuration values received by the probability engine 115 and used by the probability engine 115 to generate the derived event information 129. The user input information 135 may include desired accuracy information, application input information, and user preference information. Desired accuracy information may be received to identify a minimum amount of raw sensor data needed before a derived event is generated by probability engine 115. The application input information may be received to configure a level for a particular probability application 117. For example, the parking probability application 117 may utilize a configurable level for making a determination of whether a parking spot is empty. Configuring the level to be low (e.g., 0) may force the probability engine 115 to make a determination of whether the parking spot is empty regardless of the amount of aggregated semantic data 127 available for making the determination. Configuring the level higher (e.g., 1 to X, where X > 0) may cause probability engine 115 to report insufficient information when the amount of aggregated semantic data 127 is below the configured level and to report empty (or not empty) when the amount of aggregated semantic data 127 is equal to or greater than the configured level. For example, the report (e.g., a derived event) may include semantic data indicating that the parking spot is "empty" (or not empty) or semantic data indicating "insufficient information".
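As an illustrative sketch only (the names and the simple majority rule are assumptions not specified in the description), the level behavior described above might look like this in Python:

```python
def report_occupancy(aggregated_semantic_data, level=0):
    """Report a parking spot's state, or "insufficient information" when
    fewer semantic data than the configured level have been aggregated."""
    if len(aggregated_semantic_data) < level:
        return {"semantic": "insufficient information"}
    empty_votes = sum(1 for d in aggregated_semantic_data if d["semantic"] == "empty")
    occupied_votes = len(aggregated_semantic_data) - empty_votes
    state = "empty" if empty_votes >= occupied_votes else "not empty"
    return {"semantic": state, "supporting_data": len(aggregated_semantic_data)}

print(report_occupancy([{"semantic": "empty"}], level=2))                         # insufficient information
print(report_occupancy([{"semantic": "empty"}, {"semantic": "empty"}], level=2))  # empty
```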
FIG. 6 is a block diagram illustrating a method 147 for probabilistic semantic sensing in a sensing network, in accordance with an embodiment. The method 147 may begin at operation 151 with the light sensing network receiving raw sensor data. For example, the light sensing network may include two sensing nodes 109 positioned at the tops of two light poles that respectively hold two lights illuminating a parking lot. The lights may be identified as "light A" and "light B". The sensing nodes 109 may each include a sensor 30 that receives raw sensor data and a sensing engine 111 that processes the raw sensor data. The raw sensor data represents the occupancy states of a plurality of parking spots in the parking lot. In one example, the raw sensor data collected at each of the sensing nodes 109 represents the same parking spot.
At operation 153, sensing engine 111 generates semantic data 121 based on the raw sensor data. For example, the sensing engine 111 at each of the sensing nodes 109 may process raw sensor data to generate semantic data 121 as sensed event information 123 in the form of two sensed events. The sensing engine 111 at "light A" may generate a first sensed event that includes classifiers in the form of: semantic data that declares the parking spot to be empty, an application identifier that identifies the parking spot application, a 95% probability that the declared semantic data is true (e.g., the parking spot is indeed empty), coordinates that identify the location of the sensor 30 at "light A" that senses the declared semantic data (e.g., latitude, longitude/Global Positioning System (GPS) coordinates, and the like), and coordinates that identify the location of the declared semantic data (e.g., latitude, longitude/GPS coordinates, and the like, identifying the location of the parking spot). A classifier specifying the date and time at which the sensor 30 operated to acquire the semantic data is further associated with the location coordinates of the sensor 30. A classifier specifying the date and time at which the semantic data was sensed by the sensor 30 is further associated with the location coordinates of the semantic data.
The sensing engine 111 at "light B" generates a second sensed event that includes a classifier 141 in the form of: semantic data that asserts the same semantic data (e.g., the parking spot is empty), an application identifier that identifies the parking spot application, a probability of 85% that the asserted semantic data is true (e.g., the parking spot is indeed empty), coordinates (e.g., latitude, longitude/GPS coordinates, and the like) that identify the location of the sensor 30 that senses the asserted semantic data at "light B," and coordinates (e.g., latitude, longitude/GPS coordinates, and the like) that identify the location of the asserted semantic data. A classifier 141 specifying the date and time at which the sensor 30 was operated to acquire semantic data is further associated with the location coordinates of the sensor 30. A classifier 141 specifying the date and time at which the semantic data was sensed by the sensor 30 is further associated with the location coordinates of the semantic data.
Finally, the sensing engine 111 at "light A" communicates the above-described first sensed event, generated at "light A", to the aggregation node 125 via a network (e.g., LAN, WAN, internet, etc.), where it is received by the correlation engine 113 at the aggregation node 125. Likewise, the sensing engine 111 at "light B" communicates the above-described second sensed event, generated at "light B", to the same aggregation node 125 via the network (e.g., LAN, WAN, internet, etc.), where it is received by the correlation engine 113 at the aggregation node 125. Those skilled in the art will appreciate that other embodiments may include additional sensing nodes 109 for sensing the same parking spot. According to another embodiment, the aggregation node 125 including the correlation engine 113 may be located in the cloud. According to a further embodiment, the correlation engine 113 may be located at a sensing node 109.
At operation 157, correlation engine 113 can correlate semantic data 121 based on classifiers 141 in semantic data 121 to produce an aggregation of semantic data 121. The correlation engine 113 may continuously receive sensed events from multiple sensing nodes 109 in real time. Correlation engine 113 may correlate the sensed events based on classifiers 141 in the sensed events to generate an aggregation 127 of semantic data. In the present example, the correlation engine 113 receives the first and second sensed events and correlates the two together based on selecting one or more classifiers 141 from a group of available classifiers. For example, the one or more classifiers 141 can include semantic data (e.g., the parking place is empty) and/or an application identifier and/or coordinates that identify the location of the semantic data being asserted. For example, correlation engine 113 may correlate and aggregate a first sensed event and a second sensed event together based on: matching semantic data (e.g., parking place is empty) and/or matching application identifiers (e.g., identifying parking place application) and/or matching coordinates (e.g., latitude, longitude/GPS coordinates identifying location of parking place, and the like) identifying location of asserted semantic data. Other classifiers 141 for correlation and generation of aggregated semantic data 127 (e.g., aggregation of sensed events) may be selected from a group of available classifiers. Finally, at operation 157, correlation engine 113 passes aggregated semantic data 127 to probability engine 115. According to one embodiment, the correlation engine 113 and the probability engine 115 execute in an aggregation node 125 in the cloud. In another embodiment, the correlation engine 113 and the probability engine 115 execute on different computing platforms and the correlation engine 113 passes the aggregated semantic data 127 to the probability engine 115 over a network (e.g., LAN, WAN, internet, etc.).
At operation 159, the probability engine 115 may analyze each of the aggregations 127 of semantic data to generate derived event information 129 (e.g., derived events). For example, probability engine 115 may analyze the aggregation 127 of semantic data including the first sensed event and the second sensed event to generate derived event information 129 in the form of a first derived event. The first derived event may include a probability for its semantic data classifier 143 (e.g., a 90% probability that the parking spot is empty) generated by the probability engine 115 based on the aggregation 127 of semantic data, including the probability of the semantic data classifier 141 included in the first sensed event (e.g., a 95% probability that the parking spot is empty) and the probability of the semantic data classifier 141 included in the second sensed event (e.g., an 85% probability that the parking spot is empty). For example, the probability engine 115 may average the probabilities (e.g., 95% and 85%) of the two semantic data classifiers 141 from the first and second sensed events to generate a (single) probability (e.g., 90%) for the semantic data of the first derived event. Other examples may include additional probabilities of semantic data classifiers 141 included in additional sensed events in generating the (single) 90% probability for the semantic data of the first derived event. The probability engine 115 may utilize the user input information 135, thresholds 137, and weighting values 139 as previously described to generate the derived events.
At operation 161, the probability engine 115 may pass the derived event information 129 to the sensory processing interface 131 to implement at least one probability application 117. For example, the probability application 117 may read derived event information 129 (e.g., derived events) from the sensory processing interface 131 and utilize the classifiers 143 in the derived events to perform services and generate reports. The services may include controlling devices inside or outside of the sensor network and generating reports, as described more fully later in this document. For example, the probability engine 115 can pass derived event information 129 in the form of a first derived event to the sensory processing interface 131 via a network (e.g., LAN, WAN, internet, etc.), which is then read by one or more probability applications 117 (e.g., application X) that utilize the first derived event to perform a service or generate a report. In one embodiment, the probability engine 115 and the sensing processing interface 131 may be on the same computing platform. In another embodiment, the probability engine 115 and the sensing processing interface 131 can be on different computing platforms.
According to some embodiments, applications of the semantic data 121 and derived event information 129 may include management or monitoring of lighting capabilities. This type of probabilistic application 117 may relate to classifiers 143 that include presence events for people, vehicles, or other entities and associated illumination changes. This type of probabilistic application 117 may also include a determination of activity in areas below or around nodes of the lighting network, including under or around poles, walls, or other stationary objects. This type of probabilistic application 117 may also include detection of tampering or theft associated with the lighting infrastructure. In each case, the probability associated with each semantic datum may be limited by the location of the sensor 30, obstruction of the sensor field due to real-world obstructions, limitations of lighting illumination, availability of network bandwidth, or availability of computing power.
According to some embodiments, the probability application 117 of the semantic data 121 and the derived event information 129 may include parking location and occupancy detection, monitoring, and reporting. This type of probability application 117 may involve classifiers 141 and/or 143 that include the presence and motion events of people, cars, and other vehicles. This type of probability application 117 may also use classifiers 143 about people, cars, and other vehicles based on parameters of the car or vehicle, including its make, model, type, and other aesthetic features. In each case, the probability associated with each semantic data may be limited by the location of the sensor 30, the obstruction of the sensor field due to real-world obstructions (including the parking location of other vehicles or the location of other objects within the range of the parking space), limitations of lighting illumination, availability of network bandwidth, or availability of computing power.
According to some embodiments, the probability application 117 of the semantic data 121 and the derived event information 129 may include security monitoring and reporting. This type of probabilistic application 117 may involve classifiers 141 and/or 143 that include the detection of a person or object or the movement of a person or object. This type of probabilistic application 117 may be targeted at increasing the public safety or security of an area. In each case, the probability associated with each semantic datum may be limited by the location of the sensor 30, obstruction of the sensor field due to real-world obstructions, limitations of lighting illumination, availability of network bandwidth, or availability of computing power.
According to some embodiments, probability application 117 of semantic data 121 and derived event information 129 may include traffic monitoring and reporting. This type of probabilistic application 117 may involve classifiers 141 and/or classifiers 143 that include the presence and movement of people, cars, and other vehicles. This type of probabilistic application 117 may also classify people, cars, and other vehicles based on parameters of the car, including its make, model, type, and other aesthetic features. In each case, the probability associated with each semantic data may be limited by the location of the sensor 30, obstruction of the sensor field due to real-world obstructions, limitations of lighting illumination, availability of network bandwidth, or availability of computing power.
According to some embodiments, probability application 117 of semantic data 121 and derived event information 129 may include retail customer monitoring and reporting. This type of probability application 117 may involve classifiers 141 and/or 143 that include the presence and movement of people, cars, and other vehicles in a manner that may be useful to the retailer. This type of probabilistic application 117 may also include determining trends regarding the use of retail locations. In each case, the probability associated with each semantic data may be limited by the location of the sensor 30, obstruction of the sensor field due to real-world obstructions, limitations of lighting illumination, availability of network bandwidth, or availability of computing power.
According to some embodiments, the probabilistic application 117 of semantic data 121 and derived event information 129 may include business intelligence monitoring. This type of probabilistic application 117 involves a classifier 141 and/or classifier 143 that includes the state of the system used by the enterprise for operational purposes, facility purposes, or business purposes, including activities at points of sale (PoS) and other strategic locations. This type of probabilistic application 117 may also include determining trends regarding business intelligence. In each case, the probability associated with each semantic data may be limited by the location of the sensor 30, obstruction of the sensor field due to real-world obstructions, limitations of lighting illumination, availability of network bandwidth, or availability of computing power.
According to some embodiments, probability application 117 of semantic data 121 and derived event information 129 may include asset monitoring. This type of probability application 117 may involve a classifier 141 and/or a classifier 143 that includes monitoring of high value or other strategic assets, such as vehicles, stock stocks, valuables, industrial equipment, and the like. In each case, the probability associated with each semantic data may be limited by the location of the sensor 30, obstruction of the sensor field due to real-world obstructions, limitations of lighting illumination, availability of network bandwidth, or availability of computing power.
According to some embodiments, the probabilistic application 117 of semantic data 121 and derived event information 129 may include environmental monitoring. This type of probabilistic application 117 may involve classifiers 141 and/or classifiers 143, including classifiers related to monitoring of wind, temperature, pressure, gas concentration, airborne particulate matter concentration, or other environmental parameters. In each case, the probability associated with each semantic data may be limited by the location of the sensor 30, obstruction of the sensor field due to real-world obstructions, limitations of lighting illumination, availability of network bandwidth, or availability of computing power. In some embodiments, environmental monitoring may include seismic sensing.
Lighting infrastructure application framework
The invention further relates to the use of street or other lighting systems as a basis for a network of sensors 30, platforms, controllers and software enabling functionality other than lighting of an outdoor or indoor space.
Industrialized countries around the world have extensive indoor and outdoor lighting networks. Streets, highways, parking lots, factories, office buildings, and facilities of all types typically have extensive indoor and outdoor lighting. Essentially all of this lighting until recently used incandescent or high-intensity discharge (HID) technology. However, incandescent and HID lighting are inefficient at converting power to light output. A significant portion of the power used for incandescent lighting is dissipated as heat. This not only wastes energy, but often results in failure of the bulb itself as well as the lighting fixture.
Because of these drawbacks, as well as the cost-effectiveness of operating and maintaining light-emitting diodes and other solid-state lighting technologies, many owners of large numbers of incandescent or HID lighting fixtures are converting them to solid-state lighting. Solid-state lighting not only provides a longer-lived bulb, thereby reducing the labor costs of replacement, but the resulting fixture also operates at lower temperatures for longer periods of time, further reducing the need to maintain the fixture. The assignee of the present application provides lighting replacement services and devices to various municipal, commercial, and private owners, thereby enabling them to operate their facilities with reduced maintenance costs and reduced energy costs.
Networked sensor and application frameworks have been developed for deployment in street or other lighting systems. The architecture of this system allows for the deployment of the networked system within the lighting infrastructure already in place or at the time of its initial installation. While the system is typically deployed in outdoor street lighting, it may also be deployed indoors, for example, in a factory or office building. Further, when the system is deployed outdoors, it may be installed when the street light bulb changes from incandescent lighting to more efficient lighting, for example using Light Emitting Diodes (LEDs). The cost of replacing such incandescent bulbs is high, primarily due to labor costs and the necessity of using special equipment to reach each bulb in each street lamp. By installing the network described herein at that time, the added cost is minimal compared to replacing an existing incandescent bulb with just an LED bulb.
Since this system enables many different uses, the deployed network, sensors 30, controllers, and software system described herein are referred to as a Lighting Infrastructure Application Framework (LIAF). The system uses the lighting infrastructure as a platform for business and client applications implemented using a combination of hardware and software. The main components of the framework are node hardware and software, sensor hardware, site-specific or cloud-based server hardware, network hardware and software, and wide area network resources that enable data collection, analysis, action invocation, and communication with applications and users.
Those skilled in the art will appreciate that LIAF may be used to embody methods and systems for probabilistic semantic sensing, and more particularly probabilistic semantic sensing in light sensing networks, as previously described. Although the system described herein is in the context of street lighting, it will be apparent from the following description that the system has applicability to other environments as well, for example in a parking garage or factory environment.
In one embodiment, this system provides a network of lighting systems using existing outdoor parking structures and indoor industrial lights. Each light may become a node in the network, and each node includes: a power control terminal for receiving power; a light source coupled to the power control terminal; a processor coupled to the power control terminal; a network interface coupled between the processor and a lighting system network; and a sensor 30 coupled to the processor for detecting a condition at the node. In some applications as described below, the network does not rely on a lighting system. In combination, this system allows each node to communicate information about the condition at the node to other nodes and a central location. The processing may thus be distributed among the nodes in the LIAF.
A gateway coupled to the network interfaces of some LIAF nodes is used to provide information from the sensors 30 at the nodes to a local or cloud-based service platform, where application software stores, processes, distributes, and displays the information. This software performs the desired operations relating to the conditions detected by the sensors 30 at the nodes. In addition, the gateway may receive information from the service platform and provide the information to each of the node platforms in its domain. The information may be used to facilitate maintenance of the lights, control cameras, locate unoccupied parking spaces, measure carbon monoxide levels, or support numerous other applications, several of which are described herein. A sensor 30 disposed at or near the node may be used with the controller to control the light source and provide a control signal (e.g., lock or unlock the parking area) to a device coupled to the node. Multiple gateways may be used to couple multiple zones of a lighting system together for the purpose of a single application.
Typically, each node will include an Alternating Current (AC)/Direct Current (DC) converter to convert the supplied AC power to DC for use by the processor, sensors 30, and the like. The gateways may communicate with each other and with the service platform through cellular, Wi-Fi, or other means. The sensors 30 are typically devices that detect particular conditions, for example, audio from breaking glass or a car alarm, video cameras for security and parking-related sensing, motion sensors, light sensors, radio frequency identification detectors, weather sensors, or detectors for other conditions.
In another embodiment, a network of sensors 30 is provided for collecting information by using an existing lighting system with fixtures having light sources. The method comprises the following steps: the light source at each fixture is replaced with a module that includes power control terminals connected to the existing light fixture's power supply, a replacement light source, a processor, a network interface coupled to the processor, and a sensor 30 coupled to the processor. The sensors 30 detect conditions at and around the nodes and forward information about the conditions to the processor. Preferably, the network interfaces of the modules at each appliance are coupled together, typically using a broadband or cellular communication network. Information is collected from the sensors 30 using the communications network and provided over the network to a local server at the site or to an application running on a server in the cloud. A local or site-based application server is referred to as a site controller. An application running on the site controller may manage data from one or more specific customer sites.
In one embodiment, each module at each of the appliances includes a controller and an apparatus coupled to the controller, and the controller is for causing actions to be performed by the apparatus. As mentioned above, signals may be transmitted from the computing device to the module and thereby to the controller via the communication network to cause actions to be performed by the apparatus of the lighting system.
The lighting infrastructure application framework described herein is based on node, gateway, and service architectures. The node architecture consists of node platforms deployed at various locations in the lighting infrastructure, such as at individual street luminaires. At least some of the nodes include sensors 30 that collect and report data to other nodes and, in some cases, to higher levels in the architecture. For example, at an individual node level, ambient light sensors may provide information about lighting conditions at the location of the lighting fixture. The camera may provide information about events occurring at the node.
FIG. 7 illustrates a portion of the overall architecture of such a system. As the figure shows, the lighting node includes a node platform 10 (e.g., "NP") (e.g., a sensing node 109) in addition to the light source itself. Depending on the particular application desired, the node platform 10 includes various types of sensors 30 selected by the owner of the lighting node. In the illustration, a daylight sensor 31 and an occupancy sensor 32 are depicted. The lighting node may also include a controller 40 for performing functions in response to the sensor 30 or in response to control signals received from other sources. Three exemplary controllers 40 are illustrated in the figure, namely an irrigation control 42 for controlling an irrigation system, a door control 45 for opening and closing nearby doors, and a light controller 48. The light controller 48 may be used to control the illumination sources in node platform 10, for example, turning the illumination sources off or on at different times of the day, dimming the illumination sources, causing the illumination sources to flash, or sensing the condition of the light sources themselves to determine whether maintenance is required or to provide other functionality. The sensors 30, controller 40, power supply, and other desired components may be collectively assembled into the housing of the node platform 10.
Other examples of control functions implemented by these or similar controllers 40 may include: management of power distribution, measurement and monitoring of power, and demand/response management. The controller 40 can activate and deactivate the sensor 30 and can measure and monitor the sensor output. In addition, the controller 40 provides management of communication functions, such as gateway operations for software downloads and security management, and management of video and audio processing, for example, detection or monitoring of events.
In one embodiment, the architecture of this networked system enables "plug and play" deployment of sensors 30 at lighting nodes. The Lighting Infrastructure Application Framework (LIAF) provides hardware and software to enable implementation of the sensor plug-and-play architecture. When a new sensor 30 is deployed, software and hardware manage the sensor 30, while the LIAF provides support for the general functionality associated with the sensor 30. This may reduce or eliminate the need for custom hardware and software support for the sensor 30. The sensor 30 may require power (typically battery or wired low-voltage DC), and preferably the sensor 30 generates an analog or digital signal as an output.
LIAF allows deployment of sensors 30 at lighting nodes without additional hardware and software components. In one embodiment, the LIAF provides DC power to the sensor 30. The LIAF also monitors the analog or digital interface associated with the sensor 30, as well as all other activities at the nodes.
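Purely as a hypothetical Python sketch of the plug-and-play idea described above (the class and method names are illustrative and not part of the LIAF specification), a node might expose a uniform contract in which the framework powers each sensor and simply polls its analog or digital output:

```python
from abc import ABC, abstractmethod

class PlugAndPlaySensor(ABC):
    """Generic contract a node could expose: the framework supplies DC power
    and polls a simple analog or digital output, so a newly attached sensor
    needs no custom node software."""

    @abstractmethod
    def read(self) -> float:
        """Return the sensor's current analog or digital output value."""

class AmbientLightSensor(PlugAndPlaySensor):
    def read(self) -> float:
        return 412.0  # placeholder lux reading; real hardware I/O omitted

def poll(sensors):
    # The node framework monitors every attached sensor's interface uniformly.
    return {type(s).__name__: s.read() for s in sensors}

print(poll([AmbientLightSensor()]))
```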
The node platforms 10 located at some of the lights are coupled together to a gateway platform 50 (e.g., "GP") (e.g., an aggregation node 125). Gateway platform 50 communicates with the node platforms 10 using techniques described further below, which may include a wireless connection or a wired connection. The gateway platform 50 will preferably communicate with the internet 80 using well-known communication techniques 55, such as cellular data, Wi-Fi, GPRS, or other means. Of course, the gateway platform 50 need not be a stand-alone implementation; it may be deployed at a node platform 10. In addition to the functionality provided by the node platform 10, the gateway platform 50 also provides Wide Area Network (WAN) functionality and may provide complex data processing functionality.
The gateway platform 50 establishes communication with the service platform 90 (e.g., "SP") to enable the nodes to provide data to, or receive instructions from, various applications 100 (e.g., the probabilistic applications 117). The service platform 90 is preferably implemented in the cloud to enable interaction with the applications 100 (e.g., probabilistic applications 117). When the service platform 90, or a subset of its functionality, is implemented locally at a site, it is referred to as a site controller. A variety of applications 100 (e.g., probabilistic applications 117) that provide end-user accessible functionality are associated with the service platform 90. An owner, partner, client, or other entity may provide these applications 100. For example, one typical application 100 provides reports on current weather conditions at the nodes. An application 100 is typically developed by others and authorized for use by the infrastructure owner, but an application may also be provided by the node owner, or otherwise made available for use on various nodes.
Exemplary lighting-related applications 100 include lighting control, lighting maintenance, and energy management. These applications 100 preferably run on the service platform 90 or site controller. There may also be partner applications 100, that is, applications 100 that may utilize confidential data and to which the lighting infrastructure owner grants privileges. Such applications 100 may provide security management, parking management, traffic reporting, environmental reporting, asset management, logistics management, and retail data management, to name a few possible services. There are also client applications 100 that enable a client to utilize generic data, where access to such data is authorized, for example, by the infrastructure owner. Another type of application 100 is one provided by the owner, that is, an application 100 developed and used by the infrastructure owner (e.g., to control traffic flow in an area or along a municipal street). Of course, there may also be applications 100 that use custom data from the framework.
The main entities involved in the system illustrated in fig. 7 are the lighting infrastructure owner, the application framework provider, the application 100 or application service owner, and the end user. Typical infrastructure owners include municipalities, property owners, tenants, electric utilities, or other entities.
Figure 8 is a diagram illustrating the architecture of such a system at a higher level. As shown in fig. 8, groups of node platforms 10 communicate with each other and with a gateway platform 50. The gateway in turn communicates with the internet 80 over a communication medium 55. In a typical implementation as illustrated, there will be multiple groups of nodes 10, multiple gateway platforms 50, and multiple communication media 55, all collectively coupled to a service platform 90 available through the internet 80. In this way, multiple applications 100 may provide a wide degree of functionality to individual nodes through the gateways in the system.
Fig. 8 also illustrates a networking architecture for an array of nodes. In the left-hand portion 11 of the figure, an array of nodes 10 is illustrated. The solid lines among the nodes represent the data plane that connects the selected nodes to enable high local bandwidth traffic. These connections may enable, for example, the exchange of local video or data among the nodes. The dashed lines in portion 11 represent the control plane that connects all nodes to each other and provides transport for local and remote traffic, exchanging information about events, usage, node status and enabling control commands from and responses to the gateway.
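The data-plane/control-plane split of fig. 8 can be sketched, purely for illustration, as a routing decision of the following kind (the message types and names are hypothetical):

```python
from dataclasses import dataclass


@dataclass
class NodeMessage:
    source: str
    destination: str
    kind: str        # e.g. "video", "audio", "event", "status", "command"
    payload: bytes


def route(message: NodeMessage) -> str:
    """Pick a plane for a message, mirroring the solid/dashed links of fig. 8."""
    # High-bandwidth local traffic, such as video exchanged between
    # neighbouring nodes, travels on the data-plane links.
    if message.kind in {"video", "audio"}:
        return "data-plane"
    # Events, usage, node status, and control commands travel on the
    # control plane, which reaches every node and the gateway.
    return "control-plane"


if __name__ == "__main__":
    print(route(NodeMessage("node-3", "node-4", "video", b"...")))    # data-plane
    print(route(NodeMessage("node-3", "gateway", "status", b"ok")))   # control-plane
```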
Fig. 9 illustrates the node platform 10 in more detail. The node infrastructure includes a power module 12, typically implemented as an AC-to-DC converter. In one implementation, where the node is deployed at an outdoor street light, AC power is the primary power supply to such street lights. Since most sensor 30 and controller 40 architectures use semiconductor-based components, the power module 12 converts the available AC power to the appropriate DC power level for driving the node components.
As also shown in fig. 9, an array of sensors 30 and controllers 40 is connected to the power module 12, which may include an AC/DC converter, among other well-known components. A processor running the processor module 15 coordinates the operation of the sensors 30 and the controllers 40 to implement the desired local functionality, including the operation of the sensing engine 111, as previously described. The processor module 15 also provides communication to other node platforms 10 via appropriate media. The application 100 may also drive the light source module 16, or couple to the appropriate third-party light source module 18, operating under the control of one of the controllers 40. Implementations may combine the power module 12 and light controller 48 functionality into a single module. As indicated by the figure, wired connections 46 and 47 and wireless connections 44 and 49 may be provided as desired.
In fig. 9, the lighting infrastructure includes light source modules 16, 18, such as, for example, LED assemblies commercially available from the assignee, Sensity Systems, Inc. Of course, third-party manufacturers may provide third-party light source modules 18 as well as other components. The module 16 may also be coupled to a controller 40. The sensors 30 associated with the node may be local to the node, or they may be remote. The controllers 40 (other than the LED controller provided by the assignee) are typically remote and use wireless communication. The processor module 15 (also referred to as a node application controller) manages all functions within the node. The processor module 15 also implements the management, data collection, and action instructions associated with the applications 100. Typically these instructions are delivered to the controller 40 as application scripts. In addition, software on the application controller provides activation, administration, security (authentication and access control), and communication functions. The network module 14 provides Radio Frequency (RF) based wireless communication to other nodes. These wireless communications may be based on Neighborhood Area Network (NAN), Wi-Fi, 802.15.4, or other technologies. The sensors 30 may be operated via a sensor module. The processor module 15 is further illustrated as being communicatively coupled to a sensing engine 111 that operates as previously described.
Fig. 10 is a block diagram of the gateway platform 50. As suggested by the figure and mentioned above, the gateway platform 50 may be located at a node or in its own enclosure separate from the node. In the diagram of fig. 10, the power module 12, processor module 15, LED light source module 16 and third party light source module 18 components are again shown, as well as the sensor module 30 and controller module 40. A correlation engine 113 and a probability engine 115, both operating as previously described, are further illustrated.
In addition to the functions supported by the node platform 10, the gateway platform 50 hardware and software components also enable high bandwidth data processing and analysis using media modules 105 (e.g., at video rates) and relay or WAN gateways 110. The gateway platform 50 may be considered a node platform 10, but with additional functionality. The high bandwidth data processing media module 105 supports video and audio data processing functions that can analyze, detect, record, and report application specific events. The relay or WAN gateway 110 may be based on GSM, Wi-Fi, LAN to internet, or other wide area network technology.
Fig. 11 is a block diagram of the service platform 90. The service platform 90 supports an application gateway 120 and a custom node application builder 130. The application gateway 120 manages the interface to different types of applications (e.g., the probabilistic application 117) that are implemented using sensors and event data from lighting nodes. The service platform 90 with the application gateway 120 (e.g., the sensory processing interface 131, according to one embodiment) may be deployed as a site controller at a customer lighting site. Thus, the site controller is an example of a service platform 90 having only application gateway 120 functionality. Custom node application builder 130 allows custom node application scripts (e.g., probabilistic application 117) to be developed. These scripts specify to the processor module 15 (see FIG. 9) the data collection instructions and operations to be performed at the node level. The script specifies to the application gateway 120 how to provide the results associated with the script to the application (e.g., the probabilistic application 117).
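Although the actual script format is not specified here, a custom node application script of the kind produced by the custom node application builder 130 might, as a hypothetical sketch, look like the following declarative structure (all field names and the endpoint URL are placeholders):

```python
# Hypothetical custom node application script: it tells the processor module 15
# what to collect and do at the node, and tells the application gateway 120
# where and how to deliver the results. All names below are placeholders.
parking_script = {
    "application": "probabilistic-parking",          # e.g. probabilistic application 117
    "collect": [
        {"sensor": "vehicle-detect", "interval_s": 5},
        {"sensor": "ambient-light", "interval_s": 60},
    ],
    "node_actions": [
        # actions performed locally at the node when a condition holds
        {"when": "vehicle-detect == 0", "do": "set-space-led:green"},
        {"when": "vehicle-detect == 1", "do": "set-space-led:red"},
    ],
    "deliver": {
        "gateway_endpoint": "https://example.invalid/app-gateway",  # placeholder URL
        "format": "json",
        "history": {"retain_days": 30},
    },
}


def validate(script: dict) -> bool:
    """Minimal sanity check a builder might run before pushing a script to nodes."""
    return {"application", "collect", "deliver"}.issubset(script)


if __name__ == "__main__":
    assert validate(parking_script)
    print("script ok:", parking_script["application"])
```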
FIG. 11 also illustrates that the owner applications 140 (e.g., the probability application 117), the Sensity applications 144 (e.g., the probability application 117), the partner applications 146 (e.g., the probability application 117), and the client applications 149 (e.g., the probability application 117) utilize the application gateway API 150 (e.g., the sensory processing interface 131, according to one embodiment). To date, the assignee has developed and implemented various types of applications (e.g., the probabilistic application 117) that are common to many uses of the sensors 30. One such application 100 is lighting management. The lighting management application provides lighting status and control functionality for the light sources at the node platforms 10. Another application provided by the assignee (e.g., the probabilistic application 117) provides lighting maintenance. The lighting maintenance application allows users to maintain their lighting network, for example, by enabling monitoring of the status of the lights at each node. An energy management application, such as the probabilistic application 117, allows a user to monitor lighting infrastructure energy usage and thus better control that usage.
The partner applications 146 shown in fig. 11 are typically applications from assignee-approved application and application service companies that have established markets for various desired functions, such as those listed below. These applications 100 utilize the application gateway API 150. Exemplary partner applications 146 provide security management, parking management, traffic monitoring and reporting, environmental reporting, asset management, and logistics management.
The client application 149 utilizes the application gateway API 150 to provide client-related functionality. This API 150 provides access to publicly available, anonymous, and owner-approved data. Also shown is an owner application 140 developed and used by lighting infrastructure owners to meet their various specific needs.
Fig. 12 illustrates a lighting infrastructure application revenue model for the system described above. This revenue model illustrates how revenue is generated and distributed among key stakeholders in the lighting infrastructure. Generally, the application 100 and/or application service provider collects revenue A from the application user. The application 100 owner or service provider pays a fee B to the lighting infrastructure application framework service provider. The LIAF service provider pays a fee C to the lighting infrastructure owner.
Key stakeholders of the lighting infrastructure-based applications 100 include the owners of the lighting infrastructure. These owners are the entities that own the property on which the light poles/fixtures and lighting infrastructure are located. Another key party involved in the system is the LIAF service provider. These LIAF service providers are entities that provide the hardware and software platforms deployed to provide data and services for the applications 100. The assignee herein is an LIAF service provider. Other important entities include application (e.g., probabilistic application 117) developers and owners. These entities sell applications 100 or application services. These applications 100 and application services are based on data collected, processed, and distributed by the LIAF.
Among the revenue sources for subsidizing the LIAF are applications, application services, and data. Several revenue options exist for the application 100 or application service provider. The user of the application 100 or application service pays a license fee, typically either recurring over a time interval or as a one-time payment. The fee may be based on different usage levels, for example, standard, professional, and administrative levels. The usage cost may also depend on the type of data (e.g., raw or summarized, real-time versus non-real-time, etc.), access to historical data, on-demand dynamic pricing of data, and the location associated with the data.
Another revenue source involves advertisers. These advertisers are businesses that want to advertise products or services to the application 100 and application service users. Such advertisers pay advertising fees for each application 100 or service.
With respect to the data, the application 100 and application service developers make payments to access the data. The data includes specific data, such as energy usage at the nodes, reported on a per light engine basis, per light engine channel, per sensor 30, or for the entire light. Another type of data is the status of the lamp (e.g., management status), such as temperature thresholds or energy costs that trigger dimming, dimming percentages, and reports of lamp status including the settings of detection intervals and reporting intervals. This data may also include operating states, such as the current state of the lamp (on or off, whether dimmed and the amount of dimming, malfunction, anomaly, etc.). Other types of data include environmental data (such as temperature, humidity, and atmospheric pressure at a node) or lighting data (such as ambient light and its color).
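As an illustration only, the per-node data categories just described might be carried in a record such as the following sketch (field names are hypothetical):

```python
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class NodeDataReport:
    """Illustrative container for the per-node data categories described above."""
    node_id: str
    # Specific data: energy usage per light engine, per channel, per sensor, or total.
    energy_wh: Dict[str, float] = field(default_factory=dict)
    # Management status, e.g. thresholds that trigger dimming and the dim level.
    dim_trigger_temp_c: Optional[float] = None
    dim_percent: Optional[float] = None
    # Operating state of the lamp: on/off, dimming, faults or anomalies.
    lamp_on: bool = False
    fault: Optional[str] = None
    # Environmental and lighting data at the node.
    temperature_c: Optional[float] = None
    humidity_pct: Optional[float] = None
    pressure_hpa: Optional[float] = None
    ambient_light_lux: Optional[float] = None


if __name__ == "__main__":
    report = NodeDataReport(
        node_id="pole-17",
        energy_wh={"light-engine-1": 42.5, "light-engine-2": 40.1},
        lamp_on=True,
        temperature_c=21.3,
        ambient_light_lux=5.0,
    )
    print(report)
```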
Nodes may also sense and provide numerous other types of data. For example, gases such as carbon dioxide, carbon monoxide, methane, natural gas, oxygen, propane, butane, ammonia, or hydrogen sulfide may be detected and reported. Other types of data include accelerometer status indicating a seismic event, intrusion detector status, Bluetooth™ Media Access Control (MAC) addresses, and active Radio Frequency Identification (RFID) tag data, such as ISO 18000-7 and DASH7 data. Some of these applications 100 and their collectible data are described in more detail below.
The application-specific sensor data may include data from intrusion sensors that detect intrusion at the base of the pole or light fixture, unauthorized opening of the cover at the base of the pole, or unauthorized opening of the light fixture, and data from vibration sensors for intrusion-related vibration detection, earthquake-related vibration detection, or pole-damage-related vibration detection. The motion sensor may detect motion, the direction of motion, and the type of motion detected.
The audio sensor may provide another type of collectible data. The audio sensor may detect glass breakage, gunshot, a vehicle engine on or off event, tire noise, vehicle door closure, a human communication event, or a human distress noise event.
The person detection sensor may detect a single person, multiple persons, and a count of persons. Vehicle detection may include detection of a single vehicle, detection of multiple vehicles, and the duration of a vehicle's visibility to the sensor. Vehicle detection may provide a vehicle count or identifying information such as make, model, color, license plate, etc.
Such a system may also provide data regarding related events, typically by using data from multiple sensors 30. For example, sensor data from motion detectors and people detectors may be combined to trigger lighting functions that turn a lamp on, turn a lamp off, dim a lamp, or brighten a lamp. Counting people by means of motion detection provides information about security, retail activity, or traffic-related events. Motion detection coupled with vehicle detection may be used to indicate a breach of security at the facility.
The use of a combination of sensors 30 (e.g., motion and vehicle counting or motion and audio) provides useful information for performing various actions. The time of data collection may also be combined with data from the sensors 30 (such as discussed above) to provide useful information, such as motion detection during the on and off hours at the facility. A light level sensor coupled to the motion detection sensor may provide useful information for lighting control. Motion detection may be combined with video to capture data only when an event occurs. Current and historical sensor data may be correlated and used to predict events or needs, such as traffic flow patterns, for adjusting control signals.
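As a minimal illustration of combining sensed events from multiple sensors 30 into a derived event with a derived probability, the following sketch treats the individual confidences as independent; this combination rule and the names used are illustrative assumptions, not a description of the probability engine 115 itself.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class SensedEvent:
    event_type: str      # e.g. "motion", "vehicle", "person"
    probability: float   # confidence assigned to the sensed event
    location: str
    timestamp: float


def derive_event(events: List[SensedEvent], derived_type: str) -> dict:
    """Combine correlated sensed events into a single derived event.

    Illustrative rule only: the confidences are treated as independent, so the
    derived probability is 1 - prod(1 - p_i) over the aggregated events.
    """
    p_none = 1.0
    for event in events:
        p_none *= (1.0 - event.probability)
    return {
        "event_type": derived_type,
        "probability": 1.0 - p_none,
        "evidence": [event.event_type for event in events],
    }


if __name__ == "__main__":
    aggregation = [
        SensedEvent("motion", 0.70, "dock-3", 1_700_000_000.0),
        SensedEvent("vehicle", 0.60, "dock-3", 1_700_000_002.0),
    ]
    # e.g. a possible security breach inferred from motion plus vehicle detection
    print(derive_event(aggregation, "perimeter-breach"))  # probability = 0.88
```

Other combination rules (for example, weighted or Bayesian updates) could equally be used; the point of the sketch is only that each aggregation yields one derived event carrying a derived probability.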
Another use of the data collected at a node is aggregation. Aggregation allows data events to be used to generate a representative value for a group using a variety of techniques. For example, the aggregated data may be used to collect information about the types of luminaires at a site (e.g., post-top and wall luminaires), or about environmentally protected versus unprotected luminaires or luminaires in exposed areas. Data may be collected based on lighting areas (e.g., roads, parking lots, lanes), facility types (e.g., manufacturing, R&D), corporate regions (e.g., international versus national), and so forth.
Power usage may be aggregated for appliance types, facilities, facility types, or geographic regions. Environmental sensing related aggregation may be provided for a geographic area or a facility type. The security application contains an aggregation for a geographic area or a type of facility. The transportation application includes aggregation by time of day, week, month, year, or by geographic region (e.g., school region versus retail region). Retail applications include aggregation by time of day, week, month, etc., as well as by geographic region or facility type. The data may also be filtered or aggregated based on user-specified criteria, such as time of day.
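A simple aggregation of this kind, grouped by a chosen attribute and filtered by a user-specified time-of-day window, might be sketched as follows (the field names and sample values are hypothetical):

```python
from collections import defaultdict
from statistics import mean
from typing import Dict, List


def aggregate_power(readings: List[dict], group_by: str,
                    hour_range: range = range(0, 24)) -> Dict[str, float]:
    """Average power usage per group, filtered by a time-of-day window.

    `readings` holds dicts with keys such as "facility_type", "region",
    "hour", and "power_w"; `group_by` names the key to aggregate on.
    """
    groups: Dict[str, List[float]] = defaultdict(list)
    for reading in readings:
        if reading["hour"] in hour_range:        # user-specified time-of-day filter
            groups[reading[group_by]].append(reading["power_w"])
    return {key: mean(values) for key, values in groups.items()}


if __name__ == "__main__":
    data = [
        {"facility_type": "parking-lot", "region": "school", "hour": 22, "power_w": 180.0},
        {"facility_type": "parking-lot", "region": "retail", "hour": 23, "power_w": 150.0},
        {"facility_type": "roadway",     "region": "school", "hour": 22, "power_w": 240.0},
    ]
    # e.g. average night-time power per facility type
    print(aggregate_power(data, "facility_type", hour_range=range(20, 24)))
```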
Custom application development allows a user to specify the data to be collected and forwarded to the custom application 100 and services, the actions to be performed based on the data at the lighting nodes, the format of the data to be forwarded to the application 100 or application services, and the management of historical data.
This revenue distribution model allows revenue to be distributed among lighting infrastructure owners, application infrastructure owners, and application 100 or application service owners. Today, lighting is a cost center for infrastructure owners that involves capital investment, energy billing, and maintenance costs. Here, the assignee provides the hardware, software, and network resources to implement the applications 100 and application services, allowing infrastructure owners to offset at least some of the capital, operating, and maintenance costs.
Fig. 13-16 illustrate four sample applications 100 of the system described above. Fig. 13 illustrates a parking management application 181 (e.g., probability application 117). Each of a series of vehicle detection sensors 180 is positioned over each parking space in the parking garage, or a single multiple space occupancy detection sensor is positioned at each light. The sensor 180 may operate using any well-known technique for detecting the presence or absence of a vehicle parked thereunder. When parking space specific sensors 180 have been deployed, then each sensor 180 includes an LED that displays whether the space is open, occupied, or reserved. This enables a driver in the garage to locate open, available and reserved space. It also allows the garage owner to know when space is available without having to visually inspect the entire garage. Sensors 180 are coupled to node platform 10 using wired or wireless technology, such as described for the system above. The node platform 10 communicates to the site controller 200 via a Local Area Network (LAN)210 and/or to the service platform 90 using the gateway platform 50. The gateway platform 50 is connected to the service platform 90 and the user 220 via the internet 80. Site controller 200 may communicate with service platform 90 or parking management application 181. The parking management application 181 enables the user 220 to reserve space by accessing the application 181 via the internet 80.
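The per-space bookkeeping such a parking management application 181 might keep can be sketched as follows; the class, state names, and space identifiers are illustrative assumptions only.

```python
from enum import Enum
from typing import Dict, List


class SpaceState(Enum):
    OPEN = "open"
    OCCUPIED = "occupied"
    RESERVED = "reserved"


class ParkingManager:
    """Tracks per-space state as reported by the vehicle detection sensors 180."""

    def __init__(self, space_ids: List[str]) -> None:
        self.spaces: Dict[str, SpaceState] = {s: SpaceState.OPEN for s in space_ids}

    def on_sensor_report(self, space_id: str, occupied: bool) -> None:
        # A reservation is kept until the reserved space is actually occupied.
        if occupied:
            self.spaces[space_id] = SpaceState.OCCUPIED
        elif self.spaces[space_id] is not SpaceState.RESERVED:
            self.spaces[space_id] = SpaceState.OPEN

    def reserve(self, space_id: str) -> bool:
        if self.spaces[space_id] is SpaceState.OPEN:
            self.spaces[space_id] = SpaceState.RESERVED
            return True
        return False

    def open_spaces(self) -> List[str]:
        return [s for s, state in self.spaces.items() if state is SpaceState.OPEN]


if __name__ == "__main__":
    garage = ParkingManager(["A1", "A2", "A3"])
    garage.on_sensor_report("A1", occupied=True)   # sensor 180 reports a vehicle
    garage.reserve("A2")                           # user 220 reserves via the internet 80
    print(garage.open_spaces())                    # ['A3']
```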
Fig. 14 illustrates a lighting maintenance application 229 (e.g., the probability application 117). The lighting maintenance application 229 includes lighting nodes (e.g., node platforms 10) that are networked together and then coupled to the site controller 200 using, for example, the system described above. Information about the lighting nodes, such as power consumption, operating status, on-off activity, and sensor activity, is reported to the site controller 200 and/or the service platform 90 using the techniques described above. In addition, site controller 200 and/or service platform 90 may collect performance data (e.g., temperature or current) as well as status data (e.g., activities occurring at node 10). The lighting maintenance application 229, which provides lighting maintenance related functions, accesses the raw maintenance data from the service platform 90. Maintenance related data (e.g., LED temperature, LED power consumption, LED failures, network failures, and power supply failures) may be accessed by the lighting maintenance company 230 from the lighting maintenance application 229 to determine when service is desired or when other attention is needed.
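As an illustration, the maintenance-related checks performed on such raw data could be as simple as the following threshold sketch (the threshold values and field names are placeholders, not recommendations):

```python
from typing import List


def maintenance_flags(node_report: dict,
                      max_led_temp_c: float = 85.0,
                      max_power_w: float = 120.0) -> List[str]:
    """Derive a list of maintenance issues from raw maintenance data.

    The threshold values and field names are illustrative placeholders only.
    """
    flags = []
    if node_report.get("led_temp_c", 0.0) > max_led_temp_c:
        flags.append("LED over-temperature")
    if node_report.get("led_power_w", 0.0) > max_power_w:
        flags.append("LED power consumption out of range")
    if node_report.get("network_fault"):
        flags.append("network failure")
    if node_report.get("power_supply_fault"):
        flags.append("power supply failure")
    return flags


if __name__ == "__main__":
    sample = {"led_temp_c": 91.0, "led_power_w": 60.0,
              "network_fault": False, "power_supply_fault": True}
    print(maintenance_flags(sample))  # ['LED over-temperature', 'power supply failure']
```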
Fig. 15A and 15B illustrate the inventory application 238 (e.g., the probability application 117) and the space utilization application 237 of the system described above. As illustrated, a series of RFID tag readers 250 are located throughout the warehouse along with the node platforms 10. These tag readers 250 detect RFID tags 260 on various items in the warehouse. Using a network of node platforms 10 as described herein, the tag readers 250 may provide the information to the site controller 200 and/or service platform 90. The tag readers 250 collect location and identification information and forward the data to the site controller 200 and/or service platform 90 using the node platforms 10. This data is then forwarded from the service platform 90 to the application 100 (e.g., the inventory application 238). The location and identification data may be used to track the flow of goods within a protected structure, such as a warehouse. The same strategy can be used to monitor warehouse space usage. The sensors 30 detect the presence of items in the warehouse and the space occupied by those items. This space usage data may be forwarded to the site controller 200 and/or the service platform 90. An application 100 monitoring and managing the space may utilize the space utilization application 237 (e.g., the probabilistic application 117) to access data describing the space from the service platform 90.
Fig. 16 illustrates a logistics application 236 (e.g., the probability application 117) for monitoring a loading dock and tracking goods from a source to a destination. For example, the RFID tag 260 may be positioned to track the cargo from a source (e.g., a loading dock), a transfer station (e.g., a weigh station or a gas station) through to a destination (e.g., a warehouse) by utilizing the node platform 10. Similarly, the RFID tag 260 may be located on the cargo and the vehicle that is transporting the cargo. RFID tag 260 transmits location information, identification information, and other sensor data information using node platform 10, which in turn transmits the aforementioned information to service platform 90. This may further be performed using the gateway platform 50 at each site (e.g., source, transit station, and destination). The service platform 90 makes this data available to applications 100, such as the logistics application 236, enabling users 220 accessing the logistics application 236 to obtain accurate location and cargo state information.
FIG. 17 is a block diagram of electrical components for power monitoring and control within a node. The illustrated power measurement and control module measures incoming AC power and controls the power provided to the AC/DC converter. The power measurement and control module also provides surge suppression to the node assembly and provides power to the node assembly.
This circuit is used to control power to the light emitting diodes at individual nodes. The actual count of inputs or outputs outlined below depends on the client application specification. As shown in the figure, AC power at a voltage range between 90 volts and 305 volts is provided via line 300. The voltage and current are sensed by the energy measurement integrated circuit 310. AC-to-DC transformer 320 provides 3.3 volts to circuit 310 to power integrated circuit 310. In fig. 17, the dashed lines represent non-isolated portions of the high voltage system. The dotted line indicates the portion of the circuit that is protected up to 10,000 volts.
Integrated circuit 310 is a Complementary Metal Oxide Semiconductor (CMOS) power measurement device that measures line voltage and current. The CMOS power measurement device is capable of calculating active, reactive and apparent power and RMS voltage and current. The CMOS power measurement device provides an output signal 315 to a "universal asynchronous receiver/transmitter" (UART) device 330. The UART device 330 converts data between a parallel interface and a serial interface. The UART 330 is connected to provide a signal to a microcontroller 340 that controls an output voltage provided to a load 350, preferably an LED lighting system 350. This control is implemented using switch 355.
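Purely as an illustration of the measure-and-dim loop just described, and not of the firmware of microcontroller 340, the following sketch parses a hypothetical "voltage,current,power" telemetry line of the kind that might arrive via the UART 330 and chooses a dimming command when a power budget is exceeded:

```python
def parse_power_line(line: str) -> dict:
    """Parse a hypothetical 'voltage,current,active_power' telemetry line."""
    volts, amps, watts = (float(field) for field in line.split(","))
    return {"rms_voltage": volts, "rms_current": amps, "active_power_w": watts}


def dim_command(measurement: dict, power_budget_w: float = 100.0) -> str:
    # Placeholder policy: dim the LED load when measured active power exceeds
    # a configured budget; a real controller might also weigh temperature or cost.
    return "DIM 50" if measurement["active_power_w"] > power_budget_w else "DIM 0"


if __name__ == "__main__":
    sample = parse_power_line("230.1,0.52,119.6")
    print(sample, "->", dim_command(sample))  # 119.6 W exceeds the 100 W budget -> DIM 50
```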
Devices 360 and 365 are also coupled to microcontroller 340, with devices 360 and 365 implementing a controller area network bus system (commonly referred to as a CAN bus). The CAN bus allows multiple microcontrollers to communicate with each other without relying on a host computer. The CAN bus provides a message-based communication protocol. The CAN bus allows multiple nodes to be daisy-chained together for communication among them.
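For illustration, host-side CAN messaging of this kind can be exercised with the third-party python-can package and its in-process virtual interface; the arbitration ID and payload layout below are hypothetical and stand in for two daisy-chained microcontrollers.

```python
import can  # third-party package: pip install python-can


def main() -> None:
    # Two endpoints on python-can's in-process "virtual" bus stand in for two
    # daisy-chained microcontrollers; no host computer mediates the exchange.
    node_a = can.interface.Bus(bustype="virtual", channel="demo")
    node_b = can.interface.Bus(bustype="virtual", channel="demo")

    # Hypothetical message: arbitration ID 0x120 carrying a 50% dimming level.
    node_a.send(can.Message(arbitration_id=0x120, data=[50], is_extended_id=False))

    received = node_b.recv(timeout=1.0)
    print(f"id=0x{received.arbitration_id:x} data={list(received.data)}")

    node_a.shutdown()
    node_b.shutdown()


if __name__ == "__main__":
    main()
```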
The power module 370 is optionally provided on a circuit board. The power module 370 accepts AC power through its input terminals and provides controlled DC power at its output terminals. If desired, the power module may provide input power for some of the devices illustrated in FIG. 18, which is discussed next.
Fig. 18 is a block diagram of an application controller located at a node. The node provides wireless communication with application software. This application software, which runs on the microcontroller 400, enables control of power, lighting, and the sensors 30. The application controller also provides power to the various modules illustrated in the figure and enables communication with the sensors 30.
The application controller in fig. 18 operates under the control of a microcontroller 400, depicted in the center of the figure. Incoming power 405 (supplied by module 370 in fig. 17, for example) is stepped down to 5 volts by transformer 410 to provide power for Wi-Fi communication, and is also provided to a 3.3 volt transformer 420, which powers the microcontroller 400. The power supply 430 also receives input power and provides it to the sensors 30 (not shown). The 3.3 volt power is also provided to the reference voltage generator 440.
Microcontroller 400 provides a number of input and output terminals for communicating with various devices. In particular, in one embodiment, the microcontroller 400 is coupled to provide three 0-10 volt analog output signals 450 and to receive two 0-10 volt analog input signals 460. These input and output signals 460 and 450 may be used to control and sense the conditions of the various sensors 30. Communication with microcontroller 400 is accomplished through UART 470 and using CAN bus 480. As explained with respect to fig. 17, the CAN bus 480 enables communication among the microcontrollers without the need for a host computer.
To enable future applications 100 and provide flexibility, the microcontroller 400 also includes a plurality of general purpose input/output pins 490. These general purpose input/output pins accept or provide signals in the range from 0 volts to 36 volts. These general purpose input/output pins are generic pins whose behavior can be controlled or programmed by software. Having these additional control lines allows for additional functionality to be implemented in software without requiring replacement of hardware.
The microcontroller 400 is also coupled to a pair of I2C bus interfaces 500. These bus interfaces 500 may be used to connect other components on a board or to connect other components linked via a cable. The I2C bus 500 does not require a predefined bandwidth, but still implements multi-master operation, arbitration, and collision detection. Microcontroller 400 is also connected to a Serial Peripheral Interface (SPI) 510 to provide surge protection. Additionally, microcontroller 400 is coupled to USB interface 520 and JTAG interface 530. The various input and output buses and control signals enable the application controller at the node to interface with a wide variety of sensors 30 and other devices to provide, for example, lighting control and sensor management.
The foregoing is a detailed description of a networked lighting infrastructure for use with sensing applications 100. As described, the system provides unique capabilities to existing or future lighting infrastructures. While numerous details have been provided regarding specific embodiments of the system, it will be appreciated that the scope of the invention is defined by the appended claims.
Machine and software architecture
The modules, methods, engines, applications, etc. described in connection with fig. 1-18 are implemented in some embodiments in the context of multiple machines and associated software architectures. The following sections describe representative software architectures and machine (e.g., hardware) architectures suitable for use with the disclosed embodiments.
Software architectures are used in conjunction with hardware architectures to create devices and machines tailored to specific applications. For example, a particular hardware architecture coupled with a particular software architecture would form a mobile device, such as a mobile phone, tablet device, or the like. Slightly different hardware and software architectures may be created for smart devices used in the "internet of things". Yet another combination is produced for use within a cloud computing architecture. Not all combinations of such software and hardware architectures are presented here, as those skilled in the art can readily understand how to implement the invention in contexts other than the disclosure contained herein.
Software architecture
FIG. 19 is a block diagram 2000 illustrating a representative software architecture 2002 that may be used in conjunction with the various hardware architectures described herein. FIG. 19 is only a non-limiting example of the software architecture 2002 and it will be appreciated that many other architectures can be implemented to facilitate the functionality described herein. The software architecture 2002 may be executed on hardware, such as the machine 2100 of fig. 20, the machine 2100 including, among other things, a processor 2110, a memory 2130, and I/O components 2150. Returning to fig. 19, a representative hardware layer 2004 is illustrated and may represent, for example, the machine 2100 of fig. 20. The representative hardware layer 2004 includes one or more processing units 2006 with associated executable instructions 2008. Executable instructions 2008 represent executable instructions of software architecture 2002, including implementations of the methods, engines, modules, and so forth of fig. 1 through 18. The hardware layer 2004 also includes a memory and/or storage module 2010, which also has executable instructions 2008. The hardware layer 2004 may also include other hardware, as indicated by 2012, which represents any other hardware of the hardware layer 2004, such as other hardware 2012 illustrated as part of the machine 2100.
In the example architecture of fig. 19, the software 2002 may be conceptualized as a stack of layers, where each layer provides specific functionality. For example, software architecture 2002 may include, for example, the following layers: an operating system 2014, a library 2016, a framework/middleware 2018, an application 2020 (e.g., a probabilistic application 117), and a presentation layer 2044. Operationally, application 2020 and/or other components within a layer may invoke Application Programming Interface (API) call 2024 through a software stack and receive a response, return value, etc., illustrated as message 2026 in response to API call 2024. The layers illustrated are representative in nature and not all software architectures have all layers. For example, some mobile or private operating systems 2014 may not provide the framework/middleware layer 2018, while other operating systems may provide this layer. Other software architectures may include additional or different layers.
The operating system 2014 may manage hardware resources and provide common services. For example, the operating system 2014 may include a kernel 2028, services 2030, and drivers 2032. The kernel 2028 may act as an abstraction layer between the hardware layer and other software layers. For example, the kernel 2028 may be responsible for memory management, processor management (e.g., scheduling), component management, networking, security settings, and so forth. The services 2030 may provide other common services for other software layers. The drivers 2032 may be responsible for controlling or interfacing with the underlying hardware. For example, depending on the hardware configuration, the drivers 2032 may include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and the like.
The library 2016 may provide a common infrastructure that may be utilized by the applications 2020 and/or other components and/or layers. The library 2016 generally provides functionality that allows other software modules to perform tasks in a manner that is easier than directly interfacing with the underlying operating system 2014 functionality (e.g., the kernel 2028, services 2030, and/or drivers 2032). The library 2016 may include a system 2034 library (e.g., a C standard library), which may provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. Additionally, the libraries 2016 may include API libraries 2036, such as media libraries (e.g., libraries to support the presentation and manipulation of various media formats, such as Moving Picture Experts Group 4 (MPEG-4), H.264, MPEG-1 or MPEG-2 audio layer 3 (MP3), AAC, AMR, Joint Photographic Experts Group (JPEG or JPG), and Portable Network Graphics (PNG)), graphics libraries (e.g., an Open Graphics Library (OpenGL) framework that can be used to render 2D and 3D graphical content on a display), database libraries (e.g., Structured Query Language (SQL) libraries that can provide various relational database functions), web libraries (e.g., WebKit, which can provide web browsing functionality), and the like. The library 2016 may also include a wide variety of other libraries 2038 to provide the applications 2020 and other software components/modules with many other APIs 2036.
The framework 2018 (also sometimes referred to as middleware) may provide a higher-level common infrastructure that may be utilized by applications 2020 and/or other software components/modules. For example, the framework 2018 may provide various Graphical User Interface (GUI) functions, advanced resource management, advanced location services, and so forth. The framework 2018 may provide a wide range of other APIs 2036 that may be utilized by applications 2020 and/or other software components/modules, some of which may be specific to a particular operating system 2014 or platform.
The applications 2020 include built-in applications 2040 and/or third-party applications 2042. Examples of representative built-in applications 2040 may include, but are not limited to, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, and/or a gaming application. The third-party applications 2042 may include any of the built-in applications 2040 as well as a wide variety of other applications 2020. In a specific example, a third-party application 2042 (e.g., an application developed by an entity other than the vendor of the particular platform using the Android™ or iOS™ Software Development Kit (SDK)) may be mobile software running on a mobile operating system 2014 such as iOS™, Android™, Windows® Phone, or another mobile operating system 2014. In this example, the third-party application 2042 may invoke API calls 2024 provided by the mobile operating system (e.g., operating system 2014) to facilitate the functionality described herein.
The applications 2020 may utilize built-in operating system functions (e.g., kernel 2028, services 2030, and/or drivers 2032), libraries (e.g., system 2034, API 2036, and other libraries 2038), framework/middleware 2018 to create a user interface to interact with the system's users 220. Alternatively or additionally, in some systems, interaction with the user 220 may occur through a presentation layer (e.g., presentation layer 2044). In these systems, the application/module "logic" may be separate from aspects of the application/module that interact with the user 220.
Some software architectures 2002 utilize virtual machines. In the example of fig. 19, this is illustrated by virtual machine 2048. The virtual machine 2048 creates a software environment in which applications/modules are executable (as if they were executing on a hardware machine, such as the machine 2100 of fig. 20, for example). The virtual machine 2048 is hosted by a host operating system (operating system 2014 in fig. 19) and typically (but not always) has a virtual machine monitor 2046 that manages the operation of the virtual machine 2048 and the interface with the host operating system (i.e., operating system 2014). The software architecture 2002 executes within the virtual machine 2048, such as within an operating system 2050, a library 2052, a framework/middleware 2054, an application 2056, and/or a presentation layer 2058. These layers of the software architecture 2002 executing within the virtual machine 2048 may or may not be the same as the corresponding layers previously described.
Example machine architecture and machine-readable media
Fig. 20 is a block diagram illustrating components of a machine 2100 capable of reading instructions from a machine-readable medium (e.g., a machine-readable storage medium) and performing any one or more of the methodologies discussed herein, according to some example embodiments. In particular, fig. 20 shows a diagrammatic representation of machine 2100 in the example form of a computer system within which instructions 2116 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 2100 to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions 2116 may cause the machine 2100 to perform the flowchart of fig. 6. Additionally or alternatively, instructions 2116 may implement sensing engine 111, correlation engine 113, probability engine 115, and probability application 117 of fig. 3, and so on, including implementing the modules, engines, and applications in fig. 9-11. Instructions 2116 transform a generic, unprogrammed machine 2100 in the manner described into a specific machine 2100 programmed to perform the functions described and illustrated. In alternative embodiments, the machine 2100 operates as a standalone device or may be coupled (e.g., networked) to other machines 2100. In a networked deployment, the machine 2100 may operate in the capacity of a server machine or a client machine in server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. Machine 2100 may include, but is not limited to: a server computer, a client computer, a Personal Computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a Personal Digital Assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine 2100 capable of executing instructions 2116 that specify actions to be taken by the machine 2100, sequentially or otherwise. Further, while only a single machine 2100 is illustrated, the term "machine" will also be employed to include a collection of machines 2100 that individually or jointly execute the instructions 2116 to perform any one or more of the methodologies discussed herein.
The machine 2100 may include a processor 2110, a memory 2130, and I/O components 2150 that may be configured to communicate with one another, such as via a bus 2102. In an example embodiment, the processor 2110, such as a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof, may include, for example, a processor 2112 and a processor 2114 that may execute instructions 2116. The term "processor" is intended to include a multi-core processor 2112 that may include two or more separate processors 2112 (sometimes referred to as "cores") that may execute instructions 2116 concurrently. Although fig. 20 shows multiple processors 2112, the machine 2100 may include a single processor 2112 with a single core, a single processor 2112 with multiple cores (e.g., multi-core processing), multiple processors 2112 with a single core, multiple processors 2112 with multiple cores, or any combination thereof.
The memory/storage device 2130 can include a memory 2132 (e.g., a main memory or other memory storage device) and a storage unit 2136, both of which memory 2132 and storage unit 2136 can be accessed by the processor 2110, e.g., via the bus 2102. The storage unit 2136 and the memory 2132 store instructions 2116 embodying any one or more of the methodologies or functions described herein. The instructions 2116 may also reside, completely or partially, within the memory 2132, within the storage unit 2136, within at least one of the processors 2110 (e.g., within the processors' caches) during execution thereof by the machine 2100, or any suitable combination thereof. Thus, the memory 2132, the storage unit 2136, and the memory of the processor 2110 are examples of machine-readable media.
As used herein, "machine-readable medium" means a device capable of storing instructions 2116 and data, either temporarily or permanently, and may include, but is not limited to: random Access Memory (RAM), Read Only Memory (ROM), buffer memory, flash memory, optical media, magnetic media, cache memory, other types of storage devices (e.g., erasable programmable read only memory (EEPROM)), and/or any suitable combination thereof. The term "machine-readable medium" shall be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) capable of storing the instructions 2116. The term "machine-readable medium" shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., instructions 2116) for execution by a machine (e.g., machine 2100), such that, when executed by one or more processors of machine 2100 (e.g., processor 2110), instructions 2116 cause machine 2100 to perform any one or more of the methodologies described herein. Thus, "machine-readable medium" refers to a single storage device or appliance, as well as a "cloud-based" storage system or storage network that includes multiple storage devices or appliances. The term "machine-readable medium" by itself excludes signals.
I/O components 2150 may include a wide variety of components to receive input, provide output, generate output, transmit information, exchange information, capture measurements, and so forth. The particular I/O components 2150 included in a particular machine 2100 will depend on the type of machine. For example, a portable machine 2100, such as a mobile phone, would likely include a touch input device or other such input mechanism, while a no-peripheral server machine would likely not include such a touch input device. It will be appreciated that the I/O components 2150 may include many other components not shown in fig. 20. The I/O components 2150 are grouped according to functionality merely to simplify the following discussion, and the grouping is not limiting in any way. In various exemplary embodiments, I/O components 2150 can include output components 2152 and input components 2154. Output components 2152 can include visual components (e.g., a display such as a Plasma Display Panel (PDP), a Light Emitting Diode (LED) display, a Liquid Crystal Display (LCD), a projector, or a Cathode Ray Tube (CRT)), auditory components (e.g., speakers), tactile components (e.g., vibrating motors, resistance mechanisms), other signal generators, and so forth. Input components 2154 may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touch pad, a trackball, a joystick, a motion sensor, or other pointing instrument), tactile input components (e.g., physical buttons, a touch screen that provides the location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like.
In other example embodiments, I/O components 2150 may include biometric components 2156, motion components 2158, environmental components 2160, or location components 2162, among a wide array of other components. For example, biometric components 2156 may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure physiological signals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), and identify a person (e.g., voice recognition, retinal recognition, facial recognition, fingerprint recognition, or electroencephalogram-based recognition). The motion components 2158 can include acceleration sensor components (e.g., an accelerometer), gravity sensor components, rotation sensor components (e.g., a gyroscope), and so forth. Environmental components 2160 may include, for example, illumination sensor components (e.g., a photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., a barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors that detect concentrations of hazardous gases for safety or that measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to the surrounding physical environment. Location components 2162 may include location sensor components (e.g., a Global Positioning System (GPS) receiver component), altitude sensor components (e.g., altimeters or barometers that detect barometric pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like.
Communication may be implemented using a wide variety of technologies. The I/O components 2150 may include a communications component 2164 operable to couple the machine 2100 to the network 2180 or the device 2170 via the coupling 2182 and the coupling 2172, respectively. For example, the communications component 2164 may include a network interface component or other suitable device to interface with the network 2180. In other examples, the communications component 2164 may include a wired communications component, a wireless communications component, a cellular communications component, a Near Field Communication (NFC) component, a Bluetooth® component (e.g., Bluetooth® Low Energy), a Wi-Fi® component, and other communication components to provide communication via other modalities. The device 2170 may be another machine 2100 or any of a wide variety of peripheral devices, such as a peripheral device coupled via a Universal Serial Bus (USB).
Further, the communications component 2164 may detect identifiers or include components operable to detect identifiers. For example, the communications component 2164 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional barcodes such as Universal Product Code (UPC) barcodes, multi-dimensional barcodes such as Quick Response (QR) codes, Aztec codes, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D barcodes, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communications component 2164, such as location derived via Internet Protocol (IP) geolocation, location derived via Wi-Fi® signal triangulation, location derived via detecting an NFC beacon signal that may indicate a particular location, and so forth.
Transmission medium
In various exemplary embodiments, one or more portions of the network 2180 may be an ad hoc network, an intranet, an extranet, a Virtual Private Network (VPN), a Local Area Network (LAN), a wireless LAN (WLAN), a Wide Area Network (WAN), a wireless WAN (WWAN), a Metropolitan Area Network (MAN), the internet 80, a portion of the Public Switched Telephone Network (PSTN), a Plain Old Telephone Service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network 2180 or a portion of the network 2180 may include a wireless or cellular network, and the coupling 2182 may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling 2182 may implement any of a variety of types of data transmission technology, such as Single Carrier Radio Transmission Technology (1xRTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) technology (including 3G), fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), the Long Term Evolution (LTE) standard, other technologies defined by various standards-setting organizations, other long-range protocols, or other data transmission technology.
The instructions 2116 may be transmitted or received over the network 2180 using a transmission medium via a network interface device, such as a network interface component included in the communications component 2164, and utilizing any of a number of well-known transmission protocols, such as the hypertext transfer protocol (HTTP). Similarly, the instructions 2116 may be transmitted or received to the device 2170 via a coupling 2172 (e.g., a peer-to-peer coupling) using a transmission medium. The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions 2116 for execution by the machine 2100, and the term "transmission medium" includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
Language(s)
Throughout this specification, multiple examples may implement a component, an operation, or a structure described as a single example. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in the exemplary configurations may be implemented as a combined structure or component. Similarly, structure and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements are intended to be within the scope of the subject matter herein.
Although the summary of the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of the embodiments of the invention. Such embodiments of the inventive subject matter may be referred to herein, individually or collectively, by the term "invention" merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed.
The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the disclosed teachings. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The implementations should not be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
As used herein, the term "or" is to be understood in an inclusive or exclusive sense. Further, multiple instances may be provided for a resource, operation, or structure described herein as a single instance. In addition, the boundaries between the various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are contemplated and may fall within the scope of various embodiments of the invention. In general, structures and functionality presented as separate resources in example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within the scope of the embodiments of the invention as represented by the claims that follow. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
Claims (20)
1. A method, comprising:
receiving raw sensor data from a plurality of sensors;
generating semantic data based on the raw sensor data, the semantic data including a plurality of sensed events and further including a first plurality of classifiers including: a first classifier including semantic data representing events detected based on the receipt of the raw sensor data; a second classifier comprising a probability representing a confidence of the associated semantic data; a third classifier that identifies a location of a sensor for sensing associated semantic data; and a fourth classifier that identifies a location of associated semantic data;
correlating the semantic data based on a second plurality of classifiers selected from the first plurality of classifiers to produce a plurality of aggregations of semantic data;
analyzing the plurality of aggregations of semantic data with a probability engine to generate a corresponding plurality of derived events, each of the corresponding plurality of derived events including a derived probability, the analyzing including generating a first derived event including a first derived probability, the first derived event generated based on a plurality of probabilities respectively representing confidence levels of associated semantic data; and
enabling at least one application to perform a service based on the plurality of derived events.
2. The method of claim 1, wherein the plurality of sensors includes a first sensor located on a first node in an optical sensing network including a plurality of nodes.
3. The method of claim 1, wherein the raw sensor data includes visual data, audio data, and environmental data, and wherein the events represented by the semantic data include detection of a person, detection of a vehicle, detection of an object, and detection of an empty parking space.
4. The method of claim 1, wherein the third classifier includes spatial coordinates of a first sensor, and wherein the first plurality of classifiers includes a fifth classifier that includes temporal coordinates associated with the third classifier.
5. The method of claim 4, wherein the fourth classifier includes spatial coordinates of a first event represented by the semantic data, and wherein the first plurality of classifiers includes a sixth classifier including temporal coordinates describing a time at which the first sensor was used to detect the first event represented by the semantic data, and wherein the first event is an empty occupancy state of a parking space.
6. The method of claim 1, wherein the first plurality of classifiers includes a seventh classifier that includes an application identifier used to identify the at least one application from a plurality of applications, wherein each application from the plurality of applications is used to perform a different service.
7. The method of claim 1, wherein the correlating of the semantic data comprises at least one of: correlating to produce an abstract map based on matched classifiers, and correlating to produce an abstract map based on fuzzy-matched classifiers.
8. The method of claim 1, wherein the correlating the semantic data comprises correlating based on a mathematical relationship between spatial and temporal coordinates, and wherein the spatial and temporal coordinates relate to at least one of a sensor itself, detection of an event, and a combination of both.
9. The method of claim 1, wherein the plurality of sensed events includes a first sensed event, a second sensed event, and a third sensed event, wherein the first sensed event includes a first classifier describing an empty occupancy state of a first parking space, the second sensed event includes a first classifier describing the empty occupancy state of the first parking space, and the third sensed event includes a first classifier describing the empty occupancy state of the first parking space, and wherein the plurality of aggregations of semantic data includes a first aggregation of semantic data that aggregates the first, second, and third sensed events.
10. A system, comprising:
a plurality of sensing engines implemented by one or more processors, the plurality of sensing engines configured to receive raw sensor data from a plurality of sensors, the plurality of sensing engines further configured to generate semantic data based on the raw sensor data, the semantic data including a plurality of sensed events and further including a first plurality of classifiers including: a first classifier including semantic data representing events detected based on the receipt of the raw sensor data; a second classifier comprising a probability representing a confidence of the associated semantic data; a third classifier that identifies a location of a sensor for sensing associated semantic data; and a fourth classifier that identifies a location of associated semantic data;
a correlation engine implemented by one or more processors, the correlation engine configured to correlate the semantic data based on a second plurality of classifiers to produce a plurality of aggregations of semantic data, the correlation engine configured to select the second plurality of classifiers from the first plurality of classifiers; and
a probability engine implemented by one or more processors, the probability engine configured to analyze the plurality of aggregations of semantic data to generate a corresponding plurality of derived events, each of the corresponding plurality of derived events including a derived probability, the plurality of derived events including a first derived event, the first derived event including a first derived probability, the probability engine generating the first derived event based on a plurality of probabilities respectively representing confidence levels of associated semantic data, the probability engine further configured to pass the plurality of derived events to an interface to enable at least one application to perform a service based on the plurality of derived events.
11. The system of claim 10, wherein the plurality of sensors includes a first sensor located on a first node in an optical sensing network including a plurality of nodes.
12. The system of claim 10, wherein the raw sensor data includes visual data, audio data, and environmental data, and wherein the events represented by the semantic data include detection of a person, detection of a vehicle, detection of an object, and detection of an empty parking space.
13. The system of claim 10, wherein the third classifier includes spatial coordinates of a first sensor, and wherein the first plurality of classifiers includes a fifth classifier that includes temporal coordinates associated with the third classifier.
14. The system of claim 13, wherein the fourth classifier includes spatial coordinates of a first event represented by the semantic data, and wherein the first plurality of classifiers includes a sixth classifier including temporal coordinates describing a time at which the first sensor was used to detect the first event represented by the semantic data, and wherein the first event is an empty occupancy state of a parking space.
15. The system of claim 10, wherein the probability engine is configured to analyze based on user input, wherein the user input includes a desired accuracy and user preferences.
16. The system of claim 10, wherein the probability engine is configured to analyze based on a weight assigned to a second classifier, and wherein the probability engine alters the weight over time.
17. The system of claim 10, wherein the plurality of derived events includes a first derived event, and wherein the probability engine analyzes based on a first threshold, and wherein the first threshold defines a minimum level of raw sensor data used to generate the first derived event.
18. The system of claim 17, wherein the probability engine alters the first threshold over time.
19. The system of claim 10, wherein the at least one application includes a parking location application, a monitoring application, a transportation application, a retail customer application, a business intelligence application, an asset monitoring application, an environmental application, or a seismic sensing application.
20. A non-transitory machine-readable medium storing a set of instructions that, when executed by a processor, cause a machine to perform operations comprising:
receiving raw sensor data from a plurality of sensors;
generating semantic data based on the raw sensor data, the semantic data including a plurality of sensed events and further including a first plurality of classifiers including: a first classifier including semantic data representing events detected based on the receipt of the raw sensor data; a second classifier comprising a probability representing a confidence of the associated semantic data; a third classifier that identifies a location of a sensor for sensing associated semantic data; and a fourth classifier that identifies a location of associated semantic data;
correlating the semantic data based on a second plurality of classifiers selected from the first plurality of classifiers to produce a plurality of aggregations of semantic data;
analyzing the plurality of aggregations of semantic data with a probability engine to generate a corresponding plurality of derived events, each of the corresponding plurality of derived events including a derived probability, the analyzing including generating a first derived event including a first derived probability, the first derived event generated based on a plurality of probabilities respectively representing confidence levels of associated semantic data; and
enabling at least one application to perform a service based on the plurality of derived events.
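The three independent claims recite the same processing pipeline as a method, a system, and a machine-readable medium. Purely as an illustration of how the recited data structures and steps might fit together, the following Python sketch models a semantic datum carrying the four claimed classifiers, correlates data whose classifiers fuzzy-match (claims 7 and 8), and fuses per-datum confidences into derived events. The names (SemanticDatum, fuzzy_key, derive_events), the grid and time-window tolerances, and the noisy-OR fusion rule are assumptions introduced here and are not part of the disclosure.

```python
# Illustrative sketch only; none of these names or parameter choices come from the patent.
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple


@dataclass
class SemanticDatum:
    """One sensed event carrying the four classifiers recited in claim 1."""
    event: str                            # first classifier: event detected from raw sensor data
    confidence: float                     # second classifier: probability/confidence of the datum
    sensor_location: Tuple[float, float]  # third classifier: location of the sensing sensor
    event_location: Tuple[float, float]   # fourth classifier: location of the sensed event
    timestamp: float                      # temporal coordinate (claims 4 and 5)


@dataclass
class DerivedEvent:
    event: str
    derived_probability: float
    support: int                          # number of semantic data fused into this event


def fuzzy_key(d: SemanticDatum, grid: float = 5.0, window: float = 10.0) -> Tuple:
    """Quantize event location and time so nearby observations fuzzy-match."""
    x, y = d.event_location
    return (d.event, round(x / grid), round(y / grid), round(d.timestamp / window))


def correlate(semantic_data: List[SemanticDatum]) -> Dict[Tuple, List[SemanticDatum]]:
    """Correlate semantic data on a selected subset of classifiers to produce aggregations."""
    aggregations: Dict[Tuple, List[SemanticDatum]] = defaultdict(list)
    for d in semantic_data:
        aggregations[fuzzy_key(d)].append(d)
    return aggregations


def derive_events(aggregations: Dict[Tuple, List[SemanticDatum]],
                  weights: Optional[Dict[str, float]] = None,
                  min_support: int = 2) -> List[DerivedEvent]:
    """Fuse per-datum confidences into one derived probability per aggregation.

    A noisy-OR combination is assumed; the claims leave the probabilistic model open.
    min_support plays the role of the claim-17 threshold, and weights the claim-16
    per-classifier weights.
    """
    weights = weights or {}
    derived: List[DerivedEvent] = []
    for data in aggregations.values():
        if len(data) < min_support:
            continue
        p_none = 1.0
        for d in data:
            w = weights.get(d.event, 1.0)
            p_none *= 1.0 - min(1.0, w * d.confidence)
        derived.append(DerivedEvent(event=data[0].event,
                                    derived_probability=1.0 - p_none,
                                    support=len(data)))
    return derived


# Usage: three lighting nodes independently report the same parking space as empty.
observations = [
    SemanticDatum("parking_space_empty", 0.6, (0.0, 0.0), (2.0, 3.0), 100.0),
    SemanticDatum("parking_space_empty", 0.7, (4.0, 0.0), (2.5, 3.2), 103.0),
    SemanticDatum("parking_space_empty", 0.5, (8.0, 0.0), (2.1, 2.9), 104.0),
]
for ev in derive_events(correlate(observations)):
    print(ev.event, round(ev.derived_probability, 3), ev.support)  # derived probability 0.94, support 3
```

With noisy-OR fusion the derived probability rises as independent corroborating observations accumulate, which matches the intent of combining several per-datum confidences into a single derived probability; a Bayesian or weighted-average model would be an equally plausible choice.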
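Claims 16 through 18 further recite that the probability engine alters classifier weights and the evidence threshold over time. The short sketch below, again an assumption rather than the disclosed design, shows one plausible feedback loop: an exponential moving average nudges a weight toward or away from 1 depending on whether derived events built on that classifier proved correct, and the threshold is tightened when the false-positive rate climbs.

```python
# Hypothetical adaptation policy for claims 16-18; the update rules are assumptions.
from typing import Dict


class AdaptiveFusionPolicy:
    def __init__(self, alpha: float = 0.1, min_support: int = 2):
        self.alpha = alpha                    # learning rate for the weight updates
        self.min_support = min_support        # claim-17 style minimum-evidence threshold
        self.weights: Dict[str, float] = {}   # per-classifier weights (claim 16)

    def update_weight(self, classifier: str, was_correct: bool) -> None:
        """Move a classifier's weight toward 1 when it proves reliable, toward 0 otherwise."""
        current = self.weights.get(classifier, 1.0)
        target = 1.0 if was_correct else 0.0
        self.weights[classifier] = (1.0 - self.alpha) * current + self.alpha * target

    def update_threshold(self, false_positive_rate: float) -> None:
        """Raise the evidence threshold when too many derived events prove wrong (claim 18)."""
        if false_positive_rate > 0.10:
            self.min_support += 1
        elif false_positive_rate < 0.01 and self.min_support > 1:
            self.min_support -= 1
```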
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US61/948,960 | 2014-03-06 | | |
| US14/639,901 | 2015-03-05 | | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1231998A1 (en) | 2017-12-29 |
| HK1231998B (en) | 2020-07-03 |