WO2025024926A1 - Methods and systems for gas emission prediction and monitoring for cogeneration using stacked multivariate deep learning - Google Patents
- Publication number
- WO2025024926A1 (PCT/CA2024/051001)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- module
- deep learning
- flow rate
- input parameters
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01F—MEASURING VOLUME, VOLUME FLOW, MASS FLOW OR LIQUID LEVEL; METERING BY VOLUME
- G01F15/00—Details of, or accessories for, apparatus of groups G01F1/00 - G01F13/00 insofar as such details or appliances are not adapted to particular types of such apparatus
- G01F15/06—Indicating or recording devices
- G01F15/061—Indicating or recording devices for remote indication
- G01F15/063—Indicating or recording devices for remote indication using electrical means
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01K—MEASURING TEMPERATURE; MEASURING QUANTITY OF HEAT; THERMALLY-SENSITIVE ELEMENTS NOT OTHERWISE PROVIDED FOR
- G01K7/00—Measuring temperature based on the use of electric or magnetic elements directly sensitive to heat ; Power supply therefor, e.g. using thermoelectric elements
- G01K7/42—Circuits effecting compensation of thermal inertia; Circuits for predicting the stationary value of a temperature
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N33/00—Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
- G01N33/0004—Gaseous mixtures, e.g. polluted air
- G01N33/0009—General constructional details of gas analysers, e.g. portable test equipment
- G01N33/0027—General constructional details of gas analysers, e.g. portable test equipment concerning the detector
- G01N33/0036—General constructional details of gas analysers, e.g. portable test equipment concerning the detector specially adapted to detect a particular component
- G01N33/0037—NOx
Definitions
- the present disclosure relates to methods and systems for emissions monitoring, and in particular, to methods and systems for emissions monitoring for cogeneration using stacked multivariate deep learning.
- Nitrogen oxides may have adverse environmental and health effects; they may contribute to smog and acid rain, as well as to the formation of fine particulate matter (PM) and ozone in ambient air (for example, tropospheric or ground-level ozone).
- CEMS continuous emissions monitoring systems
- CEMS is designed and developed to monitor effluent gas streams resulting from combustion in industrial processes.
- CEMS may measure flue gas for content of CO, NOx, SO2, and O2 therein for providing information used for combustion control in industrial settings.
- CEMS may also measure parameters such as air flow rate, pressure, temperature, flue gas opacity, and moisture.
- CEMS typically comprises analyzers for measuring gas concentrations within a stream, equipment to direct a sample of that gas stream to the analyzers if they are remote, and equipment to condition the sample gas by removing water and other components that may interfere with the reading.
- CEMS are generally expensive and may require frequent maintenance.
- CEMS may have additional costs for operators associated with requiring frequent cylinder gas audits (CGA).
- PEMS Predictive emissions monitoring systems
- Capital costs for PEMS may be estimated to be around 50% less than those for CEMS, and the operation and maintenance costs for PEMS may be approximately 10% to 20% of similar costs for CEMS.
- PEMS may be used for (1) continuous prediction and monitoring, (2) compliance reporting, (3) data analysis, and (4) sensor failure/drift prevention and correction.
- PEMS may be capable of predicting and monitoring gas emissions based on the operational setting of instruments. Compared to the existing CEMS, PEMS may significantly reduce capital and operating costs, while improving safety and mitigating environmental impacts.
- a method for providing real time predictions of output emissions comprises: receiving a plurality of input parameters from one or more sensors; transforming the plurality of input parameters into transformed data; analyzing the transformed data using a machine learning (ML) module to generate predicted parameters comprising NOx concentration, mass flow rate, and temperature; and displaying the predicted parameters.
- ML machine learning
- the method further comprises training the ML module.
- the plurality of input parameters comprise one or more of pressure, temperature, flow rate, humidity, vibration, and component ratio of different devices.
- transforming comprises one or more of cleaning data by unifying data type, removing irrelevant data, smoothing data, normalizing data, converting data to a time series format, storing information for future applications, analyzing data for dynamic and historical information, unifying data types to numeric values, and removing outliers.
- the ML module is for detecting anomalies in the plurality of input parameters.
- the ML module is for detecting and correcting sensor failure and drift.
- the method further comprises forecasting sensor readings for determining availability of input.
- the method further comprises receiving the results of one or more relative accuracy test audits (RATAs) to determine accuracy of the ML module.
- RATAs relative accuracy test audits
- the method further comprises retraining and deploying the ML module where the accuracy is below a specified threshold.
- one or more non-transitory computer-readable storage devices comprise instructions which, when executed by a computer, cause the computer to perform the method.
- a system for providing real time predictions of output emissions based on a plurality of input parameters measured by one or more sensors comprises: a module for receiving the plurality of input parameters from the one or more sensors; and one or more processors for: transforming the plurality of input parameters into transformed data, and analyzing the transformed data using an ML module to generate predicted parameters comprising NOx concentration, mass flow rate, and temperature.
- the one or more processors are further for training the ML module.
- the plurality of input parameters comprise one or more of pressure, temperature, flow rate, humidity, vibration, and component ratio of different devices.
- transforming comprises one or more of cleaning data by unifying data type, removing irrelevant data, smoothing data, normalizing data, converting data to a time series format, storing information for future applications, and analyzing data for dynamic and historical information.
- the ML module is for detecting anomalies in the plurality of input parameters.
- the ML module is for detecting and correcting sensor failure and drift.
- the one or more processors are further for forecasting sensor readings for determining availability of input.
- the one or more processors are further for receiving the results of one or more RATAs to determine accuracy of the ML module.
- the one or more processors are further for retraining and deploying the ML module where the accuracy is below a specified threshold.
- the system further comprises a display for providing interactive visualizations, dashboards, and reports relating to the predicted parameters.
- FIG. 1 is a schematic illustration of a cogeneration unit for PEMS application in accordance with some embodiments of the present disclosure.
- FIG. 2 is a flow chart of a cloud-based and AI-powered PEMS system in accordance with some embodiments of the present disclosure.
- FIG. 3 is a flow chart illustrating main steps/tasks in developing machine learning and deep learning models for predicting NOx concentration, mass flow rate, and temperature of the flue gas for a cogeneration unit.
- FIG. 4 is a schematic diagram of an interpretable architecture of a stacked deep learning model for predicting the NOx concentration of flue gas at the exhaust stack of a cogeneration unit in accordance with some embodiments of the present disclosure.
- FIG. 5 is a schematic diagram of an interpretable architecture of deep learning models for predicting the mass flow rate and temperature of flue gas at the exhaust stack of a cogeneration unit in accordance with some embodiments of the present disclosure.
- FIG. 6 is a flow chart illustrating the process of determining sensor drift and failure based on the developed machine learning and deep learning models in accordance with some embodiments of the present disclosure.
- FIG. 7 is a dashboard page illustrating the tables and plots for monitoring and analyzing the performance and trend of instrumental sensors for a cogeneration unit in accordance with some embodiments of the present disclosure.
- FIG. 8 is a dashboard page illustrating the tables and plots for monitoring and analyzing the performance and trend of PEMS predictions for a cogeneration unit in accordance with some embodiments of the present disclosure.
- FIG. 9 is a dashboard page illustrating the showcase of established rules for relevant instrumental sensors dominating the PEMS predictions for a cogeneration unit in accordance with some embodiments of the present disclosure.
- FIG. 10 is a dashboard page illustrating the detection of sensor drift and failure over time by comparing predicted and actual sensor readings in accordance with some embodiments of the present disclosure.
- FIG. 11 is a schematic diagram of a computer network system for a cloud-based and AI-powered PEMS system in accordance with some embodiments of the present disclosure.
- FIG. 12 is a schematic diagram showing a simplified hardware structure of a computing device of the computer network system shown in FIG. 11.
- FIG. 13 is a schematic diagram showing a simplified software architecture of a computing device of the computer network system shown in FIG. 11.
- FIG. 14 is a flowchart illustrating the steps of a method in accordance with some embodiments of the present disclosure.
- a cloud-based and AI-powered PEMS for multivariate gas emission predictions (i.e., NOx concentration, mass flow rate, and temperature of the flue gas) from a cogeneration unit.
- the disclosed PEMS may comprise three modules, a data pipeline module, a deep learning module, and a user interface module. These three modules may implement data collection, preprocessing and transferring, computation and prediction, and results exhibition, respectively.
- the data pipeline module may include functions or steps of acquiring historical instrumental sensor datasets that represent operational processes and settings of the cogeneration unit under working and off states; acquiring output variables that represent the measured results (ground truth) of gas emissions from the cogeneration unit, with the CEMS unit installed on the target cogeneration unit; preprocessing the acquired data into a standard format; and comparing the predicted gas emission results with the measured results to evaluate the accuracy and reliability of the developed deep learning models.
- the deep learning module may include functions or steps for predicting results of gas emissions for cogeneration based on the developed stacked deep learning algorithms during the working and off periods, and for detecting and correcting sensor failure or drift. Furthermore, as a primary component, the deep learning module contains a continuous integration and continuous deployment (CI/CD) pipeline that can automatically re-deploy updated or retrained deep learning models to the cloud-based data pipeline. The retraining and updating of the deep learning models are carried out in accordance with the results of the relative accuracy test audit (RATA).
- CI/CD continuous integration and continuous deployment
- the user interface module includes functions or steps for acquiring results from the deep learning module and displaying data analysis and monitoring (predictions and monitored sensor readings) results to the clients.
- a cogeneration unit 10 may be a target emission source for PEMS application, comprising a generator 12, a gas turbine 14, a duct 16, an exhaust stack 18, a heat recovery steam generator (HRSG) 20, a supplemental duct burner 22, etc.
- the cogeneration unit may be associated with a continuous emission monitoring system (CEMS) to monitor the NOx concentration, mass flow rate, and temperature of the flue gas through sensors in order to report those values to regulators.
- the cogeneration unit may comprise many physical sensors to monitor operational performance.
- FIG. 1 illustrates 130 physical sensors installed on the gas turbine 14 and HRSG 20 to acquire necessary data for monitoring.
- FIG. 2 illustrates the structure and workflow of the developed cloud-based and AI-powered PEMS system 100 for a cogeneration unit of some embodiments of the present disclosure.
- a developed PEMS system may comprise three primary modules: a data pipeline module 120, a deep learning module (122 and 130), and a user interface module 140.
- FIG. 2 shows the general data collection process 110 on site.
- 100 quality-assured physical sensors 112 may be used for measuring pressure, temperature, flow rate, humidity, vibration, component ratio, etc. of different devices, representing the working status of the cogeneration unit.
- the measured sensor data may be recorded using a distributed control system (DCS) 114 for on-site management and monitoring.
- the frequency of data collection for the DCS system may be any suitable period, for example every 60 seconds (i.e., one reading per minute).
- an enterprise data control system 116 may assemble sensor data from a DCS system and appropriately store them for future application.
- a data pipeline module 120 is a data flow path assembling a series of processes and operations that collect, transform, and move data from various sources to a data storage 128 or analytics system 126, and then to a user interface 140.
- an application programming interface (API) service 121 may be used to call/get the necessary data from the enterprise data system 116 based on the dynamic data frequency.
- API application programming interface
- the acquired data comprise the 130 sensor inputs/readings at the 60-second frequency previously described.
- a data pipeline may clean and process the acquired sensor data by unifying the data type to the numeric format, removing the NaN values, smoothing data with a Savitzky-Golay filter, normalizing/standardizing data, and transforming data to a time-series format for a given window size.
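- The cleaning steps above can be sketched as a minimal Python pipeline. This is an illustrative sketch, not the disclosed implementation: the function name, the window size, and the Savitzky-Golay parameters (window length 7, polynomial order 2) are assumptions not stated in the disclosure.

```python
import numpy as np
import pandas as pd
from scipy.signal import savgol_filter

def preprocess(df, window_size=10):
    # Unify data types to numeric; non-parsable entries become NaN
    df = df.apply(pd.to_numeric, errors="coerce")
    # Remove NaN values
    df = df.dropna()
    # Smooth each sensor channel with a Savitzky-Golay filter
    smoothed = df.apply(lambda col: savgol_filter(col, 7, 2))
    # Normalize/standardize (zero mean, unit variance)
    standardized = (smoothed - smoothed.mean()) / smoothed.std()
    # Transform to time-series windows of a given size
    values = standardized.to_numpy()
    windows = np.stack([values[i:i + window_size]
                        for i in range(len(values) - window_size + 1)])
    return windows  # shape: (n_windows, window_size, n_sensors)
```

Each window then represents one time-series sample for the deep learning module.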
- the data may be sent to the deep learning module 122 to simultaneously predict the dynamic NOx concentration, mass flow rate, and temperature of the flue gas per minute for the target cogeneration unit. Additionally, the deep learning module 122 may identify sensor failure or drift and provide reliable forecasts to ensure data availability.
- the predicted NOx concentration, mass flow rate, and temperature of the flue gas coupled with the 130 inputs/sensor readings per minute will be sent to a data streaming module 124 (such as Microsoft® Azure Event Hub™) for accommodating real-time data.
- the real-time data may be sent to a real time analysis module 126 (such as Microsoft® Azure Stream Analytics™) to perform dynamic analysis for the predictions and sensor data.
- the built-in machine learning models of the real time analysis module 126 may perform anomaly detection directly to the real-time data and then send alerts or trigger actions.
- real-time data may be stored in a structured query language (SQL) database 128 for future applications.
- Real-time data and historical data from the real time analysis module 126 and SQL database 128, respectively, may be provided to a data visualization module 140 (such as Microsoft® Power BI™) which serves as a user interface.
- the data visualization module 140 may provide interactive visualizations, dashboards, and reports.
- the data visualization module may comprise various visualization options, filters, and calculations for analyzing and demonstrating real-time data, from which real-time monitoring and decision-making may be achieved.
- reports may be published and shared with others in the organization.
- the CI/CD pipeline 130 may be for automatically deploying updated deep learning models to the data pipeline 120.
- This CI/CD pipeline 130 may be for ensuring that changes to the deep learning/machine learning (DL/ML) model code are automatically built, tested, and deployed, enabling continuous updates and improvements of the DL/ML model.
- a version control system 132, for example Microsoft® Azure Repos™, provides a repository and may be for committing the DL/ML model code, scripts, and configuration files.
- a pipeline module 134 (such as Microsoft® Azure Pipeline™) may be for automated model update and deployment, and a registry module 136 (such as Microsoft® Azure Container Registry™) may be for storing container images of the DL/ML model.
- the pipeline module 134 may automatically build a DL/ML model and update container images in the registry module 136 whenever changes are made to the code and then deploy the container image from the registry module 136 to a specified deployment target.
- one or more RATA tests may be performed to validate the accuracy of the PEMS based on the requirements from a regulator. If the PEMS does not meet the specification indicated by a RATA test, troubleshooting may be performed to address the foregoing. Then, another RATA test may be performed. Troubleshooting may include investigating input sensor readings and determining causes of issues.
- once the RATA test is performed, measurements may be extracted by the data pipeline (as shown in FIG. 2), and predictions based on the current deep learning models may be compared with the results of the RATA test. If the prediction is sufficiently accurate, the results of the RATA test will be stored in database 128 for future applications. If the prediction is not sufficiently accurate, the current deep learning models will be updated/retrained based on the RATA test results, and the CI/CD pipeline 130 will re-deploy the updated/retrained models to the cloud automatically.
- the retrained model may be automatically re-deployed to the data pipeline, and such retrained model will backfill the new predictions during the retraining period into a SQL database.
- deep learning module 122 is a significant component of the PEMS system.
- the accuracy, robustness, and resilience of the developed DL/ML models generally determine the performance of the PEMS system as a product.
- FIG. 3 presents a flow chart depicting the process of deep learning model development 300 for predicting real-time NOx concentration, mass flow rate, and temperature of the flue gas for a cogeneration unit, which may include steps from the raw data 301 to the final model 390.
- years of raw data (millions of rows) may be used for the model development, providing a dataset large enough to represent changes in operations, maintenance, and environment (e.g., weather and season).
- the data preprocess 310 involves several key tasks including unifying data type to numeric 311, removing outliers 313 while keeping the normal operation and shutdown periods 315, removing NaN values 317, and smoothing the dataset 319.
- removing outliers 313 while keeping the normal operation and shutdown periods 315 is a critical step, achieved by applying the interquartile range (IQR) rule to sliced data intervals separately.
- IQR interquartile range
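- A minimal sketch of applying the IQR rule to sliced data intervals separately, so that low readings during startup/shutdown are judged against their own interval rather than against full-load operation. The slice count and the 1.5 multiplier are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def iqr_mask(segment, k=1.5):
    """Boolean mask of inliers for one sliced data interval."""
    q1, q3 = np.percentile(segment, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return (segment >= lo) & (segment <= hi)

def remove_outliers_by_interval(x, n_slices=4):
    # Apply the IQR rule to each slice separately so that startup and
    # shutdown data are retained rather than removed as outliers.
    keep = np.concatenate([iqr_mask(s) for s in np.array_split(x, n_slices)])
    return x[keep]
```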
- the whole dataset may be split into three groups: a training dataset 320, a validation dataset 330, and a test dataset 340.
- the ratio of training dataset 320 to test dataset 340 is 7:3, and the validation dataset 330 is an additional dataset reserved for model validation 370.
- features/attributes are selected using a method that integrates Pearson correlation 331, Spearman correlation 333, principal component analysis (PCA) feature importance 335, Ridge and Lasso regression 337, and an engineering perspective 339.
- Pearson correlation 331 represents the linear relationship between the inputs and target
- Spearman correlation 333 represents the nonlinear correlation between the inputs and target.
- PCA 335 may demonstrate the major representatives in each dimension
- Ridge and Lasso 337 are for addressing multicollinearity (high correlation between predictor variables) and to provide a subset of important features
- engineering perspective 339 selects the features with physical meanings and engineering significance for the cogeneration.
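- The statistical portion of the integrated feature selection might be sketched as follows. The scoring and averaging scheme is an illustrative assumption (the disclosure does not specify how the perspectives are combined), Lasso stands in for the Ridge-and-Lasso step, and engineering judgment would still be applied to the ranked list afterwards.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def rank_features(X, y):
    """Combine several statistical views into one normalized score per feature."""
    Xs = StandardScaler().fit_transform(X)
    n_feat = Xs.shape[1]
    # Pearson correlation: linear relationship between each input and the target
    pearson = np.abs([np.corrcoef(Xs[:, j], y)[0, 1] for j in range(n_feat)])
    # Spearman correlation: monotonic (nonlinear) relationship
    spearman = np.abs([spearmanr(Xs[:, j], y)[0] for j in range(n_feat)])
    # PCA loadings: major representatives in each dimension
    pca = PCA(n_components=2).fit(Xs)
    pca_importance = np.abs(pca.components_).sum(axis=0)
    # Lasso: sparse coefficients that handle multicollinearity
    lasso = np.abs(LassoCV(cv=3).fit(Xs, y).coef_)
    # Average of per-method normalized scores; engineering review follows
    scores = np.mean(
        [s / max(s.max(), 1e-12) for s in (pearson, spearman, pca_importance, lasso)],
        axis=0,
    )
    return scores
```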
- DL/ML models may be built to simultaneously predict NOx concentration, mass flow rate, and temperature of the flue gas for the target cogeneration unit. Different machine learning and deep learning algorithms may be tried, stacked, and validated during the model development 360 and model validation 370 processes, in which the hyperparameters in each algorithm are optimized using the grid search technique.
- validation dataset 330 may be used to assess model performance, guide model selection, tune hyperparameters, and prevent overfitting (via early stopping). If the model’s accuracy does not satisfy the requirements or desired level, feature selection 350 may be revisited and steps 350 to 370 may be repeated until the requirements are met.
- a test dataset 340 may be used to carry out an unbiased model evaluation 380 on the performance and generalization of the developed model after the model training and hyperparameter tuning have been completed.
- a final (best) model 390 may be selected for final deployment. The architecture of the final model 390 will be discussed below.
- FIG. 4 illustrates an architecture of a stacked deep learning model 500 for predicting NOx concentration of flue gas, which corresponds to the architecture of the final model 390 for NOx concentration of flue gas shown in FIG. 3.
- the architecture 500 is designed to process both sequential and tabular features, comprising a convolutional neural network (CNN) 520, a recurrent neural network (RNN) 530, a self-attention layer 540, and an artificial neural network (ANN) 560.
- CNN convolutional neural network
- RNN recurrent neural network
- ANN artificial neural network
- the inputs from 1 to N 510 indicate the number of selected features/attributes presented in FIG. 3 at step 350, including compressor pressure ratio and exhaust temperature of gas turbine, temperatures of duct burner, boiler, and economizer, and fuel gas flow rate, etc.
- CNN 520 may be initially applied to capture peaks and local patterns, and contains a 1D convolutional layer 522 followed by a Leaky ReLU layer 524, a batch normalization layer 526, and a max pooling layer 528.
- the CNN structure/block may be repeatedly applied and followed by multiple RNN layers 530 to extract time-series characteristics.
- the output of the RNN layer is sent to a self-attention layer 540 that considers the interactions between different positions or time steps and captures long-term dependencies in the sequence.
- the output of the self-attention layer is flattened 550 to a 1D array for concatenation 568.
- ANN with residual connections 560 may be applied to capture tabular characteristics of the selected features/attributes.
- the input 562 of the ANN structure/block may comprise all the selected features/attributes.
- the input features may be sent to fully connected dense layers 564.
- the activation function of such a dense layer can be ReLU, GELU, linear, etc. Step 564 can be repeated several times to determine suitable performance.
- the outputs from all the dense layers are concatenated 566 and finally sent to a dense layer 570.
- the output of step 570 and the output of step 550 are concatenated 568 into a long array containing all the sequential and tabular features.
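- The self-attention layer 540 in the architecture above can be illustrated with a NumPy sketch of scaled dot-product attention over a sequence of RNN outputs. This is a generic illustration, not code from the disclosure; the projection matrices would be learned parameters in practice and are random placeholders here.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of hidden states.

    H:          (T, d) RNN outputs for T time steps
    Wq, Wk, Wv: (d, d_k) projection matrices (learned in practice)
    """
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    # Attention weights relate every time step to every other one,
    # capturing long-term dependencies in the sequence.
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]), axis=-1)
    return A @ V, A
```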
- FIG. 5 illustrates an architecture of deep learning model 700 for predicting mass flow rate and temperature of flue gas, which may correspond to the architecture of a final model 390 for the mass flow rate and temperature of flue gas presented in FIG. 3.
- inputs 710 for mass flow rate include gas turbine shaft speed and exhaust pressure, duct burner temperature and fuel gas flow rate, HRSG feedwater flow rate, etc.
- inputs 710 for temperature include the inlet guide vane angle, dew point temperature of the gas turbine, economizer temperature, etc.
- the inputs are sent to a fully connected dense layer 720 and a dense layer with linear activation function 730.
- the dense layer 720 can be repeatedly built multiple times with the activation function of ReLU, GELU, linear, etc.
- each dense layer is concatenated 740 to achieve the residual connections.
- the output of the concatenation is sent to a dense layer 720 to obtain the results (i.e., mass flow rate or temperature).
- the weights, activation function, and repeat times for all the dense layers are determined and optimized using the grid search technique.
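- The grid search step might be sketched with scikit-learn's GridSearchCV. The MLPRegressor stand-in model, the parameter grid (layer sizes and activation functions, mirroring the tuned "weights, activation function, and repeat times"), and the synthetic data are all illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

# Hypothetical search space over layer widths and activation functions
param_grid = {
    "hidden_layer_sizes": [(16,), (16, 16)],
    "activation": ["relu", "identity"],
}
search = GridSearchCV(
    MLPRegressor(solver="lbfgs", max_iter=500, random_state=0),
    param_grid,
    scoring="r2",
    cv=3,
)

# Tiny synthetic stand-in for the flue-gas dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 3))
y = X @ np.array([1.0, -2.0, 0.5])
search.fit(X, y)
```

After fitting, `search.best_params_` holds the selected configuration and `search.best_score_` its mean cross-validated R².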
- FIG. 6 demonstrates the process of identifying the sensor drift and failure 900.
- the key steps for identifying sensor drift and failure are predicting the future sensor readings via the developed models 920 and precisely labeling sensor drift and failure based on historical data 940.
- real-time sensor readings from the monitored sensors 910 are extracted from the DCS system.
- XGBoost, Random Forest, ANN, and RNN algorithms may be utilized to build predictive models for sensor readings considering both the sequential and tabular data features 920.
- the predicted sensor readings will be compared with the true sensor readings to find out whether there is a notable gap 930.
- the frequency of sensor readings is one per minute; as a result, the comparison is made between the mean values of the predicted sensor readings and the true sensor readings over a 6-hour interval.
- data scientists and engineers label the sensor failure/drift and the corresponding corrections by reviewing the sensor readings over the past several years 940. Threshold values for sensor failure and sensor drift are then concluded separately 950 from the labeled events. Finally, the gap obtained in step 930 is compared with the threshold value determined in step 950.
- if the gap is larger than the threshold value 960, the PEMS system will send the sensor failure or drift alarm/flag to the user interface and backfill (replace) the measured sensor values with the predicted sensor values 970. Subsequently, the stacked deep learning models as presented in FIGS. 4 and 5 will correct the predictions of NOx concentration, mass flow rate, and temperature of the flue gas based on the backfilled sensor data 980. On the other hand, if the gap is smaller than the threshold value 960, the sensor readings will be sent to the deep learning module directly.
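- The gap comparison and backfill logic can be sketched as follows. The 360-reading interval follows from the one-per-minute frequency and the 6-hour window; the threshold value is assumed to come from the labeling exercise described above, and the function name is illustrative.

```python
import numpy as np

def detect_drift(pred, actual, threshold, interval=360):
    """Compare 6-hour (360 one-minute readings) mean gaps against a threshold.

    Returns per-interval gaps, a drift flag per interval, and a backfilled
    series in which flagged intervals use the predicted readings.
    """
    pred, actual = np.asarray(pred, float), np.asarray(actual, float)
    n = (len(pred) // interval) * interval  # drop any partial interval
    p = pred[:n].reshape(-1, interval)
    a = actual[:n].reshape(-1, interval)
    gaps = np.abs(p.mean(axis=1) - a.mean(axis=1))
    flags = gaps > threshold
    # Replace measured values with predictions in flagged intervals
    backfilled = np.where(np.repeat(flags, interval), p.ravel(), a.ravel())
    return gaps, flags, backfilled
```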
- the following table illustrates the statistical performance of deep learning models for predicting NOx concentration, mass flow rate, and temperature of the flue gas at the exhaust stack of a cogeneration unit.
- the table illustrates the accuracy of the developed predictive models (FIGS. 4 and 5) for NOx concentration, mass flow rate, and temperature of the flue gas in deep learning module 122 in FIG. 2.
- the training, validation, and testing in the table correspond to steps 360, 370, and 380 in FIG. 3, respectively.
- the accuracy (R² value), root mean square error (RMSE), and mean absolute error (MAE) are calculated to exhibit the reliability and robustness of the developed models in accordance with the present disclosure.
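- For reference, the three reported statistics can be computed from predictions as follows; these are the standard definitions, not code from the disclosure.

```python
import numpy as np

def r2_rmse_mae(y_true, y_pred):
    """Standard accuracy (R²), RMSE, and MAE for a set of predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    resid = y_true - y_pred
    sse = np.sum(resid ** 2)                       # residual sum of squares
    sst = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - sse / sst
    rmse = np.sqrt(np.mean(resid ** 2))
    mae = np.mean(np.abs(resid))
    return r2, rmse, mae
```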
- FIG. 7 to FIG. 10 illustrate functions, figures, and tables for instrumental sensor readings 1000, PEMS predictions 2000, set-up rules 3000, and sensor drift and failure detection 4000 during the operations, respectively.
- instrumental sensor readings are displayed via data table 1200.
- the table is controlled by the selection of time slot (slicer) and sensor name (dropdown menu) 1400.
- a gauge chart 1300 displays the average, minimum, and maximum values within the past 1 minute, 1 hour, 4 hours, 24 hours, and 2 weeks by selecting the dropdown menu.
- FIG. 8 shows the PEMS predictions 2000 of NOx concentration, mass flow rate, and temperature of the flue gas forecasted using deep learning models 122 presented in FIG. 2.
- the selection of time slot (slicer) and between time (dropdown menu) 2300 control the gauge charts 2100 and the dynamic plots of NOx concentration, mass flow rate, and temperature 2200.
- FIG. 9 shows the current PEMS predictions of NOx concentration, mass flow rate, and temperature 3100 and the higher bound, lower bound, and standard deviation of the relevant sensor 3300. All the relevant sensors that dominate the corresponding PEMS prediction (one of the three current PEMS predictions) can be selected using the slicer 3200.
- FIG. 10 demonstrates the results of detecting sensor drift and failure over time by comparing predicted and actual sensor readings.
- the pairs of current sensor readings 4100 show the comparison between predicted and actual values.
- Line charts 4300 exhibit the predicted and actual sensor readings within the selected time range controlled by the time filter (slicer) 4200.
- Embodiments of a cloud-based and AI-powered PEMS system disclosed herein may fully replace the CEMS unit or serve as a supplementary system when CEMS is down, monitoring gas emissions continuously or periodically for a cogeneration unit.
- Embodiments of PEMS disclosed herein may be directed at the following issues: (1) high capital and operational costs required to use the traditional CEMS; (2) lack of reliable, resilient, and real-time emission monitoring and data analysis tools or platforms; and (3) lack of dynamic sensor failure/drift detection and correction functions for existing PEMS. Reducing annual operating headcount and maintenance costs, and increasing digitalization, are the main reasons driving the preference to develop PEMS rather than CEMS.
- a cloud-based and AI-powered PEMS system that enables real-time big data monitoring and analysis will provide operators or engineers with key information to achieve more effective control and reduction of gas emissions, high operational efficiency, increased productivity, better management of industrial assets, and predictive and preventive maintenance.
- a method for providing real time predictions of output emissions from a cogeneration unit operating in response to settings of controllers, with corresponding sensors measuring operational processes and settings of critical instruments relating to the cogeneration unit, the method providing predictions of NOx concentration, mass flow rate, and temperature of the flue gas, providing instrumental sensor readings, detecting and correcting instrumental sensor drift and failure, and demonstrating emission monitoring and dynamic data analysis results, comprises: collecting input data; transforming the input data into transformed data to provide an effective path for data flow to deep learning models, data storage, and data analytics, wherein the transforming assembles a series of processes and operations comprising: cleaning data by unifying data type, removing irrelevant data such as not-a-number (NaN) values, smoothing data, normalizing/standardizing data, transforming data to a time series format, storing processed information for future applications, and analyzing data for dynamic and historical insights; and applying training, validation, and testing processes of developing deep learning models (or a deep learning module) to the transformed data to contemporaneously predict dynamic NOx concentration, mass flow rate, and temperature of the flue gas.
- the method comprises developing the DL/ML process, wherein: data preprocessing of the DL/ML process comprises steps in sequence: unifying data type to numeric values, removing outliers while keeping both normal operation and shutdown periods, removing NaN values, and smoothing data, in which removing outliers while keeping the normal operation and shutdown periods is achieved by applying the interquartile range rule to sliced data intervals so as to retain the startup and shutdown data during maintenance instead of removing them as outliers, so that the developed models are capable of predicting target variables under both working and off states; features and attributes selection of the DL/ML process comprises integrating Pearson correlation, Spearman correlation, principal component analysis feature importance, and Ridge and Lasso regression, with an engineering perspective, to consider linear and non-linear relationships, major representatives in dimensions, multicollinearity, and physical meanings, respectively; and an architecture of the DL/ML process for predicting NOx concentration of flue gas is for processing both the sequential and tabular features/attributes, comprising CNN, RNN, self
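The outlier-removal step above, applying the interquartile-range rule to sliced data intervals so that startup/shutdown data survive, can be sketched as follows. The slice length, multiplier `k`, and the two simulated operating regimes are illustrative assumptions:

```python
import numpy as np
import pandas as pd

def remove_outliers_sliced(series: pd.Series, slice_len: int = 1440,
                           k: float = 1.5) -> pd.Series:
    """Apply the interquartile-range rule per data slice, so startup and
    shutdown regimes are judged against their own local statistics instead
    of being discarded as outliers of the global distribution."""
    kept = []
    for start in range(0, len(series), slice_len):
        chunk = series.iloc[start:start + slice_len]
        q1, q3 = chunk.quantile(0.25), chunk.quantile(0.75)
        iqr = q3 - q1
        kept.append(chunk[(chunk >= q1 - k * iqr) & (chunk <= q3 + k * iqr)])
    return pd.concat(kept)

rng = np.random.default_rng(0)
s = pd.Series(np.concatenate([rng.normal(50, 1, 1440),   # normal operation
                              rng.normal(5, 1, 1440)]))  # shutdown period
cleaned = remove_outliers_sliced(s)
# Shutdown-level readings (~5) survive because each slice has its own IQR
```

A single global IQR filter would have rejected the entire shutdown period as outliers; slicing preserves it for training.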
- the method further comprises retraining DL/ML processes based on the results of a relative accuracy test audit (RATA), wherein: when one of the PEMS predictions fails the RATA, the corresponding process will be retrained, and another RATA, or equivalent measurements conducted with a portable gas analyzer, will be carried out to validate the accuracy of the retrained process; during the retraining process, the weights assigned to the data points from the current RATA are higher than the weights assigned to the data points from previous RATAs and the training dataset; when the accuracy of the retrained process meets the requirement, i.e., R² > 0.64 during process validation, the retrained process will be re-deployed to the Azure data pipeline via the CI/CD pipeline, and such retrained process will backfill the new predictions corresponding to the retraining process into the cloud (Azure) database for future applications.
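The RATA-triggered retraining described above, with higher sample weights on current-RATA points and an R² > 0.64 redeployment gate, might look like the following sketch. The Ridge stand-in model, the weight value, and the synthetic data are assumptions; the disclosure's own deep learning models would take this role in practice:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

def retrain_after_rata(X_hist, y_hist, X_rata, y_rata,
                       rata_weight: float = 5.0, r2_required: float = 0.64):
    """Retrain with current-RATA points weighted above historical points,
    then gate redeployment on R^2 exceeding the required threshold."""
    X = np.vstack([X_hist, X_rata])
    y = np.concatenate([y_hist, y_rata])
    w = np.concatenate([np.ones(len(y_hist)),                 # training set
                        np.full(len(y_rata), rata_weight)])   # current RATA
    model = Ridge().fit(X, y, sample_weight=w)
    r2 = r2_score(y_rata, model.predict(X_rata))
    return model, r2, bool(r2 > r2_required)  # deploy only if True

rng = np.random.default_rng(0)
coef = np.array([1.0, 2.0, -1.0])
X_hist = rng.normal(size=(200, 3)); y_hist = X_hist @ coef
X_rata = rng.normal(size=(30, 3));  y_rata = X_rata @ coef
model, r2, deploy = retrain_after_rata(X_hist, y_hist, X_rata, y_rata)
```

Only when `deploy` is true would the retrained process be pushed through the CI/CD pipeline.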
- the method comprises developing the DL/ML process for detecting and correcting instrumental sensor drift and failure, wherein: predictive models are developed with XGBoost, Random Forest, ANN, and RNN architectures to predict sensor readings, and the predicted sensor readings are compared with true sensor readings to determine the gap between the mean values of the predicted and true sensor readings over a 6-hour interval; threshold values are established for sensor failure and sensor drift separately, based on labeled sensor drift and failure events in historical data; sensor failure or drift alarms/flags are sent to the user interface, and the measured sensor values are backfilled (replaced) with the predicted sensor values if the gap value is larger than the threshold value; and finally the corresponding deep learning models are requested to re-calculate the predictions of NOx concentration, mass flow rate, and temperature of the flue gas based on the backfilled sensor data.
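The drift/failure logic above — comparing 6-hour mean gaps between predicted and measured readings against separate thresholds, then backfilling flagged measurements — can be sketched as follows. The threshold values and the simulated drift are illustrative assumptions:

```python
import pandas as pd

def detect_drift(measured: pd.Series, predicted: pd.Series,
                 drift_threshold: float, failure_threshold: float):
    """Flag 6-hour intervals whose mean predicted-vs-measured gap exceeds
    the drift or failure threshold, and backfill flagged measurements."""
    gap = (predicted - measured).resample("6h").mean().abs()
    status = pd.Series("ok", index=gap.index)
    status[gap > drift_threshold] = "drift"
    status[gap > failure_threshold] = "failure"
    # Backfill: replace measured values in flagged intervals with predictions
    backfilled = measured.copy()
    for t in status.index[status != "ok"]:
        window = (backfilled.index >= t) & (backfilled.index < t + pd.Timedelta("6h"))
        backfilled[window] = predicted[window]
    return gap, status, backfilled

idx = pd.date_range("2024-01-01", periods=24 * 60, freq="min")
measured = pd.Series(20.0, index=idx)
predicted = pd.Series(20.0, index=idx)
measured.iloc[-360:] += 3.0  # simulate a sensor drifting over the final 6 hours
gap, status, backfilled = detect_drift(measured, predicted, 1.0, 5.0)
```

The backfilled series would then be fed back to the emission models to re-calculate the NOx, mass flow rate, and temperature predictions.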
- the method further comprises: providing real-time monitoring for the instrumental sensors of a cogeneration unit via a historical data table with a control for selecting time slot (via a slicer) and sensor name (via a dropdown menu), dynamic gauge charts featuring minimum, maximum, and average values with 1-minute, 1-hour, 24-hour, and 2-week intervals, and a last-24-hour data distribution (histogram) plot; providing real-time monitoring for the predictions of NOx concentration, mass flow rate, and temperature of the flue gas at the exhaust stack of a cogeneration unit via gauge charts and line plots with 1-minute, 1-hour, 4-hour, 24-hour, and 2-week intervals or any selected time slot (via a slicer); exhibiting current predictions of NOx concentration, mass flow rate, and temperature of the flue gas at the exhaust stack of a cogeneration unit and the upper bound, lower bound, and standard deviation of the one of the relevant sensors that dominates the corresponding prediction (one of the three current PEMS predictions) via a dropdown menu for sensor name; and comparing
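The trailing-interval minimum/maximum/average statistics shown on the gauge charts can be sketched as follows; the function name and synthetic readings are illustrative assumptions:

```python
import numpy as np
import pandas as pd

def gauge_stats(readings: pd.Series, interval: str) -> dict:
    """Minimum, maximum, and average over the trailing interval, as on the
    dashboard gauge charts (interval strings like '1min', '1h', '24h')."""
    cutoff = readings.index[-1] - pd.Timedelta(interval)
    window = readings[readings.index > cutoff]
    return {"min": float(window.min()), "max": float(window.max()),
            "mean": float(window.mean())}

idx = pd.date_range("2024-01-01", periods=2 * 1440, freq="min")  # two days
readings = pd.Series(np.linspace(0.0, 10.0, len(idx)), index=idx)
for interval in ("1min", "1h", "24h"):
    print(interval, gauge_stats(readings, interval))
```

The same helper can serve each of the dashboard's interval selections by passing a different offset string.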
- in FIG. 11, a computer network system for PEMS is shown and is generally identified using reference numeral 1100.
- the PEMS system 1100 is configured for performing methods and tasks disclosed herein.
- the PEMS system 1100 comprises one or more server computers 1102, a plurality of client computing devices 1104, and one or more client computer systems 1106 functionally interconnected by a network 1108, such as the Internet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), and/or the like, via suitable wired and wireless networking connections.
- the server computers 1102 may be computing devices designed specifically for use as a server, and/or general-purpose computing devices acting as server computers while also being used by various users. Each server computer 1102 may execute one or more server programs.
- the client computing devices 1104 may be portable and/or non-portable computing devices such as laptop computers, tablets, smartphones, Personal Digital Assistants (PDAs), desktop computers, and/or the like. Each client computing device 1104 may execute one or more client application programs which sometimes may be called “apps”.
- computing devices 1102 and 1104 comprise similar hardware structures such as hardware structure 1120 shown in FIG. 12.
- the hardware structure 1120 comprises a processing structure 1122, a controlling structure 1124, one or more non-transitory computer-readable memory or storage devices 1126, a network interface 1128, an input interface 1130, and an output interface 1132, functionally interconnected by a system bus 1138.
- the hardware structure 1120 may also comprise other components 1134 coupled to the system bus 1138.
- the processing structure 1122 may be one or more single-core or multiple-core computing processors, generally referred to as central processing units (CPUs), such as INTEL® microprocessors (INTEL is a registered trademark of Intel Corp., Santa Clara, CA, USA), AMD® microprocessors (AMD is a registered trademark of Advanced Micro Devices Inc., Sunnyvale, CA, USA), ARM® microprocessors (ARM is a registered trademark of Arm Ltd., Cambridge, UK) manufactured by a variety of manufacturers, such as Qualcomm of San Diego, California, USA, under the ARM® architecture, or the like.
- the processing structure 1122 may also comprise one or more real-time processors, programmable logic controllers (PLCs), microcontroller units (MCUs), µ-controllers (UCs), specialized/customized processors, hardware accelerators, and/or controlling circuits (also denoted “controllers”) using, for example, field-programmable gate array (FPGA) or application-specific integrated circuit (ASIC) technologies, and/or the like.
- the processing structure includes a CPU (otherwise referred to as a host processor) and a specialized hardware accelerator which includes circuitry configured to perform computations of neural networks such as tensor multiplication, matrix multiplication, and the like.
- the host processor may offload some computations to the hardware accelerator to perform computation operations of neural networks.
- Examples of a hardware accelerator include a graphics processing unit (GPU), a Neural Processing Unit (NPU), and a Tensor Processing Unit (TPU).
- the host processors and the hardware accelerators may be generally considered processors.
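The offloading pattern described above — a host processor delegating tensor operations to an accelerator when one is present and falling back to its own implementation otherwise — can be illustrated with a toy dispatcher. The class names are hypothetical and NumPy stands in for both devices; a real accelerator would execute the multiply in dedicated matrix units:

```python
import numpy as np

class FakeAccelerator:
    """Stands in for a GPU/NPU/TPU; a real device would run the multiply
    in dedicated tensor/matrix hardware."""
    def matmul(self, a, b):
        return np.matmul(a, b)

class HostProcessor:
    """The host offloads tensor multiplication to an accelerator when one
    is present and otherwise computes it itself."""
    def __init__(self, accelerator=None):
        self.accelerator = accelerator

    def matmul(self, a, b):
        if self.accelerator is not None:
            return self.accelerator.matmul(a, b)  # offloaded path
        return np.matmul(a, b)                    # host fallback

host = HostProcessor(FakeAccelerator())
out = host.matmul(np.ones((4, 8)), np.ones((8, 2)))
print(out.shape)  # (4, 2)
```

Frameworks such as PyTorch or TensorFlow implement this dispatch transparently via device placement.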
- the processing structure 1122 comprises necessary circuitries implemented using technologies such as electrical and/or optical hardware components for executing an encryption process and/or a decryption process, as the design purpose and/or the use case maybe, for encrypting and/or decrypting data received from the input 1106 and outputting the resulting encrypted or decrypted data through the output 1108.
- the processing structure 1122 may comprise logic gates implemented by semiconductors to perform various computations, calculations, and/or processing.
- logic gates include AND gate, OR gate, XOR (exclusive OR) gate, and NOT gate, each of which takes one or more inputs and generates or otherwise produces an output therefrom based on the logic implemented therein.
- a NOT gate receives an input (for example, a high voltage, a state with electrical current, a state with an emitted light, or the like), inverts the input (for example, forming a low voltage, a state with no electrical current, a state with no light, or the like), and outputs the inverted input as the output.
- while the inputs and outputs of the logic gates are generally physical signals and the logic operations thereof are tangible operations with physical results (for example, outputs of physical signals), the inputs and outputs thereof are generally described using numerals (for example, the numerals “0” and “1”) and the operations thereof are generally described as “computing” (which is how the “computer” or “computing device” is named) or “calculation”, or more generally, “processing”, for generating or producing the outputs from the inputs thereof.
- Sophisticated combinations of logic gates in the form of a circuitry of logic gates may be formed using a plurality of AND, OR, XOR, and/or NOT gates. Such combinations of logic gates may be implemented using individual semiconductors, or more often be implemented as integrated circuits (ICs).
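The composition of gates described above can be illustrated by building an XOR gate from AND, OR, and NOT gates modeled as functions of 0/1 inputs; this is a pedagogical sketch, not circuitry from the disclosure:

```python
# Logic gates modeled as functions of 0/1 inputs. An XOR gate can itself
# be composed from AND, OR, and NOT gates, mirroring how integrated
# circuits build complex functions from simple gate combinations.
def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

def XOR(a, b):
    # XOR(a, b) = (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))  # 1 exactly when the inputs differ
```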
- a circuitry of logic gates may be “hard-wired” circuitry which, once designed, may only perform the designed functions.
- the processes and functions thereof are “hard-coded” in the circuitry.
- circuitry of logic gates such as the processing structure 1122 may be alternatively designed in a general manner so that it may perform various processes and functions according to a set of “programmed” instructions implemented as firmware and/or software and stored in one or more non-transitory computer-readable storage devices or media.
- the circuitry of logic gates such as the processing structure 1122 is usually of no use without meaningful firmware and/or software.
- the controlling structure 1124 comprises one or more controlling circuits, such as graphic controllers, input/output chipsets and the like, for coordinating operations of various hardware components and modules of the computing device 1102/1104.
- the memory 1126 comprises one or more storage devices or media accessible by the processing structure 1122 and the controlling structure 1124 for reading and/or storing instructions for the processing structure 1122 to execute, and for reading and/or storing data, including input data and data generated by the processing structure 1122 and the controlling structure 1124.
- the memory 1126 may be volatile and/or non-volatile, non-removable or removable memory such as RAM, ROM, EEPROM, solid-state memory, hard disks, CD, DVD, flash memory, or the like.
- the network interface 1128 comprises one or more network modules for connecting to other computing devices or networks through the network 1108 by using suitable wired or wireless communication technologies such as Ethernet, WI-FI® (WI-FI is a registered trademark of Wi-Fi Alliance, Austin, TX, USA), BLUETOOTH® (BLUETOOTH is a registered trademark of Bluetooth Sig Inc., Kirkland, WA, USA), Bluetooth Low Energy (BLE), Z-Wave, Long Range (LoRa), ZIGBEE® (ZIGBEE is a registered trademark of ZigBee Alliance Corp., San Ramon, CA, USA), wireless broadband communication technologies such as Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS), Worldwide Interoperability for Microwave Access (WiMAX), CDMA2000, Long Term Evolution (LTE), 3GPP, 5G New Radio (5G NR) and/or other 5G networks, and/or the like.
- the input interface 1130 comprises one or more input modules for one or more users to input data via, for example, touch-sensitive screen, touch-sensitive whiteboard, touch-pad, keyboards, computer mouse, trackball, microphone, scanners, cameras, and/or the like.
- the input interface 1130 may be a physically integrated part of the computing device 1102/1104 (for example, the touch-pad of a laptop computer or the touch-sensitive screen of a tablet), or may be a device physically separate from, but functionally coupled to, other components of the computing device 1102/1104 (for example, a computer mouse).
- the input interface 1130 in some implementations, may be integrated with a display output to form a touch-sensitive screen or touch-sensitive whiteboard.
- the output interface 1132 comprises one or more output modules for outputting data to a user.
- the output modules comprise displays (such as monitors, LCD displays, LED displays, projectors, and the like), speakers, printers, virtual reality (VR) headsets, augmented reality (AR) goggles, and/or the like.
- the output interface 1132 may be a physically integrated part of the computing device 1102/1104 (for example, the display of a laptop computer or tablet), or may be a device physically separate from but functionally coupled to other components of the computing device 1102/1104 (for example, the monitor of a desktop computer).
- the computing device 1102/1104 may also comprise other components 1134 such as one or more positioning modules, temperature sensors, barometers, inertial measurement unit (IMU), and/or the like.
- the system bus 1138 interconnects various components 1122 to 1134 enabling them to transmit and receive data and control signals to and from each other.
- FIG. 13 shows a simplified software architecture 1160 of the computing device 1102 or 1104.
- the software architecture 1160 comprises one or more application programs 1164, an operating system 1166, a logical input/output (I/O) interface 1168, and a logical memory 1172.
- the one or more application programs 1164, the operating system 1166, and the logical I/O interface 1168 are generally implemented as computer-executable instructions or code in the form of software or firmware programs stored in the logical memory 1172 and executable by the processing structure 1122.
- the one or more application programs 1164 are executed or run by the processing structure 1122 for performing various tasks.
- the operating system 1166 manages various hardware components of the computing device 1102 or 1104 via the logical I/O interface 1168, manages the logical memory 1172, and manages and supports the application programs 1164.
- the operating system 1166 is also in communication with other computing devices (not shown) via the network 1108 to allow application programs 1164 to communicate with those running on other computing devices.
- the operating system 1166 may be any suitable operating system such as MICROSOFT® WINDOWS® (MICROSOFT and WINDOWS are registered trademarks of the Microsoft Corp., Redmond, WA, USA), APPLE® OS X, APPLE® iOS (APPLE is a registered trademark of Apple Inc., Cupertino, CA, USA), Linux, ANDROID® (ANDROID is a registered trademark of Google LLC, Mountain View, CA, USA), or the like.
- the computing devices 1102 and 1104 of the PEMS system 1100 may all have the same operating system or may have different operating systems.
- the logical I/O interface 1168 comprises one or more device drivers 1170 for communicating with the respective input and output interfaces 1130 and 1132 for receiving data therefrom and sending data thereto. Received data may be sent to one or more application programs 1164 for processing thereby. Data generated by the application programs 1164 may be sent to the logical I/O interface 1168 for outputting to various output devices (via the output interface 1132).
- the logical memory 1172 is a logical mapping of the physical memory 1126 for facilitating the application programs 1164 to access.
- the logical memory 1172 comprises a storage memory area that may be mapped to a non-volatile physical memory such as hard disks, solid-state disks, flash drives, and the like, generally for long-term data storage therein.
- the logical memory 1172 also comprises a working memory area that is generally mapped to high-speed and, in some implementations, volatile physical memory such as RAM, generally for application programs 1164 to temporarily store data during program execution.
- an application program 1164 may load data from the storage memory area into the working memory area and may store data generated during its execution into the working memory area.
- the application program 1164 may also store some data in the storage memory area as required or in response to a user’s command.
- one or more application programs 1164 generally provide server functions for managing network communication with client computing devices 1104 and facilitating collaboration between the server computer 1102 and the client computing devices 1104.
- server may refer to a server computer 1102 from a hardware point of view or a logical server from a software point of view, depending on the context.
- the processing structure 1122 is usually of no use without meaningful firmware and/or software.
- a computer system such as the PEMS system 1100 may have the potential to perform various tasks, it cannot perform any tasks and is of no use without meaningful firmware and/or software.
- the PEMS system 1100 described herein and the modules, circuitries, and components thereof, as a combination of hardware and software generally produces tangible results tied to the physical world, wherein the tangible results such as those described herein may lead to improvements to the computer devices and systems themselves, the modules, circuitries, and components thereof, and/or the like.
- the PEMS system is configured to operate on a cloud-based computer architecture, wherein the server computers 1102 are virtual computing environments comprising allocated scalable computer capacity, which may be called instances.
- each server computer 1102 may be a variable configuration of CPU, memory, storage, and networking capacity.
- each server computer 1102 is assigned a unique internet protocol (IP) address within the network 1108.
- the processing, memory, storage, and networking capacity may be provided by a single physical server computer 1102 or a more complex computer architecture comprising a plurality of interconnected service, storage, and network components.
- the cloud-based computer architecture may be provided by a number of cloud computing service platforms such as Amazon Web Services® or AWS® (Amazon Web Services and AWS are registered trademarks of Amazon Web Services, Inc., a subsidiary of Amazon of Seattle, Washington, USA), Microsoft Azure™ (Azure is a trademark of Microsoft Corporation of Redmond, Washington, USA), and Google Cloud Platform or GCP.
- FIG. 14 illustrates a method 1400 according to some embodiments of the present disclosure.
- the method 1400 begins with, optionally, training the ML module (step 1402).
- the method comprises receiving a plurality of input parameters from one or more sensors.
- the method comprises transforming the plurality of input parameters into transformed data.
- the method comprises analyzing using a ML module with the transformed data to generate predicted parameters comprising NOx concentration, mass flow rate and temperature.
- the method comprises displaying the predicted parameters.
- the method comprises, optionally, forecasting sensor readings for determining availability of input.
- the method comprises, optionally, receiving the results of one or more RATAs to determine accuracy of the ML module.
- the method comprises, optionally, retraining and deploying the ML module where the accuracy is below a specified threshold.
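The mandatory path of method 1400 (receive, transform, analyze, display) can be sketched as follows. The function names and the toy mean-based "model" are hypothetical placeholders, not the disclosure's trained deep learning module:

```python
# Hypothetical module and parameter names; the toy "model" stands in for
# the trained deep learning module of the disclosure.
def transform(raw_inputs):
    # Unify types and drop unavailable (None/NaN) readings
    return [float(x) for x in raw_inputs if x is not None]

def toy_ml_module(data):
    mean = sum(data) / len(data)
    return {"nox_ppm": mean * 0.1,        # predicted NOx concentration
            "mass_flow": mean * 2.0,      # predicted mass flow rate
            "temperature": mean + 100.0}  # predicted flue gas temperature

def method_1400(raw_inputs, ml_module):
    transformed = transform(raw_inputs)   # step: transform input parameters
    predictions = ml_module(transformed)  # step: analyze with the ML module
    return predictions                    # step: display/report predictions

preds = method_1400([10.0, None, 12.0, 14.0], toy_ml_module)
print(preds)
```

The optional steps (training, sensor-reading forecasting, RATA evaluation, retraining) would wrap around this core loop.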
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Fluid Mechanics (AREA)
- Testing And Monitoring For Control Systems (AREA)
Abstract
Methods and systems for predicting and monitoring gas emissions from a cogeneration unit are developed using stacked multivariate deep learning algorithms. A deep learning-powered predictive emission monitoring system (PEMS) utilizes the operational parameters of relevant instruments to predict and monitor the emitted NOx concentration, mass flow rate, and temperature of the flue gas at the exhaust stack of a cogeneration unit. The established system comprises a data pipeline module, a deep learning module, and a user interface module. The data pipeline module acquires the datasets of the measured gas emission variables and the representative settings of the operational process. The deep learning module stacks a hybrid convolutional neural network and recurrent neural network model with an artificial neural network model to process sequential and tabular datasets simultaneously. The user interface module demonstrates the results of predicted gas emissions and provides monitoring and analysis of gas emission processes.
Description
METHODS AND SYSTEMS FOR GAS EMISSION PREDICTION AND MONITORING FOR COGENERATION USING STACKED MULTIVARIATE DEEP LEARNING
TECHNICAL FIELD
[0001] The present disclosure relates to methods and systems for emissions monitoring, and in particular, to methods and systems for emissions monitoring for cogeneration using stacked multivariate deep learning.
BACKGROUND
[0002] Nitrogen oxides (NOx) may have adverse environmental and health effects: they may contribute to smog and acid rain, as well as to the formation of fine particulate matter (PM) and ozone in ambient air (for example, tropospheric or ground-level ozone). To ensure compliance with regulatory emission limits, industrial facilities with large stationary sources are typically required to install one or more continuous emissions monitoring systems (CEMS) to monitor NOx and other air emissions. A CEMS is designed and developed to monitor effluent gas streams resulting from combustion in industrial processes. A CEMS may measure flue gas for its content of CO, NOx, SO2, and O2, providing information used for combustion control in industrial settings. A CEMS may also measure parameters such as air flow rate, pressure, temperature, flue gas opacity, and moisture. A CEMS typically comprises analyzers for measuring gas concentrations within a stream, equipment to direct a sample of that gas stream to the analyzers if they are remote, and equipment to condition the sample gas by removing water and other components that may interfere with the reading.
[0003] However, CEMS are generally expensive and may require frequent maintenance. In addition, CEMS may have additional costs for operators associated with requiring frequent cylinder gas audits (CGA).
SUMMARY
[0004] Predictive emissions monitoring systems (PEMS) may be developed to enhance efficiency and accuracy of NOx emission prediction while reducing costs of operations. Capital costs for PEMS may be estimated to be around 50% less than those for CEMS, and the operation and maintenance costs for PEMS may be approximately 10% to 20% of similar costs for CEMS. Currently, PEMS may be used for (1) continuous prediction and monitoring, (2) compliance reporting, (3) data analysis, and (4) sensor failure/drift prevention and correction.
[0005] Existing PEMSs may require on-site training, and ongoing support from software providers is often needed. Software licensing may be complicated and/or cost-prohibitive for larger installations of PEMS. The initial capital cost for PEMS installation may be lower than that of CEMS, and the maintenance cost may be much less than that of CEMS.
[0006] Developing and optimizing data-driven models may be beneficial to digital transformation for energy producers. By integrating machine learning/deep learning methods, PEMS may be capable of predicting and monitoring gas emissions based on the operational settings of instruments. Compared to existing CEMS, PEMS may significantly reduce capital and operating costs, while improving safety and mitigating environmental impacts.
[0007] In a broad aspect of the present disclosure, a method for providing real-time predictions of output emissions comprises: receiving a plurality of input parameters from one or more sensors;
transforming the plurality of input parameters into transformed data; analyzing using a machine learning (ML) module with the transformed data to generate predicted parameters comprising NOx concentration, mass flow rate and temperature; and displaying the predicted parameters.
[0008] In some embodiments, the method further comprises training the ML module.
[0009] In some embodiments, the plurality of input parameters comprise one or more of pressure, temperature, flow rate, humidity, vibration, and component ratio of different devices.
[0010] In some embodiments, transforming comprises one or more of: cleaning data by unifying data type, removing irrelevant data, smoothing data, normalizing data, converting data to a time series format, storing information for future applications, analyzing data for dynamic and historical information, unifying data types to numeric values, and removing outliers.
[0011] In some embodiments, the ML module is for detecting anomalies in the plurality of input parameters.
[0012] In some embodiments, the ML module is for detecting and correcting sensor failure and drift.
[0013] In some embodiments, the method further comprises forecasting sensor readings for determining availability of input.
[0014] In some embodiments, the method further comprises receiving the results of one or more relative accuracy test audits (RATAs) to determine accuracy of the ML module.
[0015] In some embodiments, the method further comprises retraining and deploying the ML module where the accuracy is below a specified threshold.
[0016] In some embodiments, one or more non-transitory computer-readable storage devices comprise instructions which, when executed by a computer, cause the computer to perform the method.
[0017] In a broad aspect of the present disclosure, a system for providing real-time predictions of output emissions based on a plurality of input parameters measured by one or more sensors comprises: a module for receiving the plurality of input parameters from the one or more sensors; and one or more processors for: transforming the plurality of input parameters into transformed data, and analyzing using a ML module with the transformed data to generate predicted parameters comprising NOx concentration, mass flow rate and temperature.
[0018] In some embodiments, the one or more processors are further for training the ML module.
[0019] In some embodiments, the plurality of input parameters comprise one or more of pressure, temperature, flow rate, humidity, vibration, and component ratio of different devices.
[0020] In some embodiments, transforming comprises one or more of: cleaning data by unifying data type, removing irrelevant data, smoothing data, normalizing data, converting data to a time series format, storing information for future applications, and analyzing data for dynamic and historical information.
[0021] In some embodiments, the ML module is for detecting anomalies in the plurality of input parameters.
[0022] In some embodiments, the ML module is for detecting and correcting sensor failure and drift.
[0023] In some embodiments, the one or more processors are further for forecasting sensor readings for determining availability of input.
[0024] In some embodiments, the one or more processors are further for receiving the results of one or more RATAs to determine accuracy of the ML module.
[0025] In some embodiments, the one or more processors are further for retraining and deploying the ML module where the accuracy is below a specified threshold.
[0026] In some embodiments, the system further comprises a display for providing interactive visualizations, dashboards, and reports relating to the predicted parameters.
BRIEF DESCRIPTION OF FIGURES
[0027] FIG. 1 is a schematic illustration of a cogeneration unit for PEMS application in accordance with some embodiments of the present disclosure.
[0028] FIG. 2 is a flow chart of a cloud-based and AI-powered PEMS system in accordance with some embodiments of the present disclosure.
[0029] FIG. 3 is a flow chart illustrating main steps/tasks in developing machine learning and deep learning models for predicting NOx concentration, mass flow rate, and temperature of the flue gas for a cogeneration unit.
[0030] FIG. 4 is a schematic diagram of an interpretable architecture of a stacked deep learning model for predicting the NOx concentration of flue gas at the exhaust stack of a cogeneration unit in accordance with some embodiments of the present disclosure.
[0031] FIG. 5 is a schematic diagram of an interpretable architecture of deep learning models for predicting the mass flow rate and temperature of flue gas at the exhaust stack of a cogeneration unit in accordance with some embodiments of the present disclosure.
[0032] FIG. 6 is a flow chart illustrating the process of determining sensor drift and failure based on the developed machine learning and deep learning models in accordance with some embodiments of the present disclosure.
[0033] FIG. 7 is a dashboard page illustrating the tables and plots for monitoring and analyzing the performance and trend of instrumental sensors for a cogeneration unit in accordance with some embodiments of the present disclosure.
[0034] FIG. 8 is a dashboard page illustrating the tables and plots for monitoring and analyzing the performance and trend of PEMS predictions for a cogeneration unit in accordance with some embodiments of the present disclosure.
[0035] FIG. 9 is a dashboard page illustrating the showcase of established rules for relevant instrumental sensors dominating the PEMS predictions for a cogeneration unit in accordance with some embodiments of the present disclosure.
[0036] FIG. 10 is a dashboard page illustrating the detection of sensor drift and failure over time by comparing predicted and actual sensor readings in accordance with some embodiments of the present disclosure.
[0037] FIG. 11 is a schematic diagram of a computer network system for a cloud-based and AI-powered PEMS system in accordance with some embodiments of the present disclosure.
[0038] FIG. 12 is a schematic diagram showing a simplified hardware structure of a computing device of the computer network system shown in FIG. 11.
[0039] FIG. 13 is a schematic diagram showing a simplified software architecture of a computing device of the computer network system shown in FIG. 11.
[0040] FIG. 14 is a flowchart illustrating the steps of a method in accordance with some embodiments of the present disclosure.
DETAILED DESCRIPTION
[0041] Unless otherwise defined, all technical and scientific terms used herein generally have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. Exemplary terms are defined below for ease in understanding the subject matter of the present disclosure.
[0042] The term “a” or “an” refers to one or more of that entity; for example, “a module” refers to one or more modules or at least one module. As such, the terms “a” (or “an”), “one or more” and “at least one” are used interchangeably herein. In addition, reference to an element or feature by the indefinite article “a” or “an” does not exclude the possibility that more than one of the elements or features are present, unless the context clearly requires that there is one and only one of the elements. Furthermore, reference to a feature in the plurality (e.g., modules), unless clearly intended, does not mean that the modules or methods disclosed herein must comprise a plurality.
[0043] The expression “and/or” refers to and encompasses any and all possible combinations of one or more of the associated listed items (e.g. one or the other, or both), as well as the lack of combinations when interpreted in the alternative (or).
[0044] In the following, embodiments of an electrical device are described. In the description, directional phrases such as “top”, “bottom”, “up”, “down”, “front”, “rear”, “left” and “right” are used only for describing the directions of components relative to each other.
[0045] In accordance with some embodiments of the present disclosure, a cloud-based and AI-powered PEMS is provided for multivariate gas emission predictions (i.e., NOx concentration, mass flow rate, and temperature of the flue gas) from a cogeneration unit. The disclosed PEMS may comprise three modules: a data pipeline module, a deep learning module, and a user interface module. These three modules may implement data collection, preprocessing and transferring, computation and prediction, and results exhibition, respectively.
[0046] The data pipeline module may include functions or steps of acquiring historical instrumental sensor datasets that represent operational processes and settings of the target cogeneration unit under working and off states; acquiring output variables that represent the measured results (ground truth) of gas emissions from the cogeneration unit via the CEMS unit installed on the target cogeneration unit; preprocessing the acquired data to a standard format; and comparing the predicted gas emission results with the measured results to evaluate the accuracy and reliability of the developed deep learning models.
[0047] The deep learning module may include functions or steps for predicting results of gas emissions for cogeneration based on the developed stacked deep learning algorithms during the working and off periods and for detecting and correcting sensor failure or drift. Furthermore, as a primary component, the deep learning module contains a continuous integration and continuous deployment (CI/CD) pipeline that can automatically re-deploy the updated or retrained deep learning models to the cloud-based data pipeline. The retraining and updating of the deep learning models are carried out in accordance with the results of the relative accuracy test audit (RATA).
[0048] In addition, the user interface module includes functions or steps for acquiring results from the deep learning module and displaying data analysis and monitoring (predictions and monitored sensor readings) results to the clients.
[0049] Referring to FIG. 1, in some embodiments of the present disclosure, a cogeneration unit 10 may be a target emission source for PEMS application, comprising a generator 12, a gas turbine 14, a duct 16, an exhaust stack 18, a heat recovery steam generator (HRSG) 20, and a supplemental duct burner 22, etc. Generally, the cogeneration unit may be associated with a continuous emission monitoring system (CEMS) to monitor the NOx concentration, mass flow rate, and temperature of the flue gas through sensors in order to report those values to regulators.
In addition, as a combination of multiple instruments, the cogeneration unit may comprise many physical sensors to monitor operational performance. FIG. 1 illustrates 130 physical sensors installed on the gas turbine 14 and HRSG 20 to acquire necessary data for monitoring.
[0050] FIG. 2 illustrates the structure and workflow of the developed cloud-based and AI-powered PEMS system 100 for a cogeneration unit of some embodiments of the present disclosure. Referring to FIG. 2, a developed PEMS system may comprise three primary modules: a data pipeline module 120, a deep learning module (122 and 130), and a user interface module 140. In addition to the three primary modules, FIG. 2 shows the general data collection process 110 on site. In accordance with the target cogeneration unit, 130 quality-assured physical sensors 112 may be used for measuring pressure, temperature, flow rate, humidity, vibration, component ratio, etc. of different devices, representing the working status of the cogeneration unit. The measured sensor data may be recorded using a distributed control system (DCS) 114 for on-site management and monitoring. The frequency of data collection for the DCS system may be any suitable period, for example 60 seconds (i.e., one reading per minute). Subsequently, an enterprise data control system 116 may assemble sensor data from the DCS system and appropriately store them for future application.
[0051] Referring to FIG. 2, a data pipeline module 120 is a flow path assembling a series of processes and operations that collect, transform, and move data from various sources to a data storage 128 or analytics system 126, and then to a user interface 140. First, an application programming interface (API) service 121 may be used to call/get the necessary data from the enterprise data system 116 based on the dynamic data frequency. In an exemplary embodiment, the acquired data comprise the 130 sensor inputs/readings at the 60-second frequency previously described. A data pipeline may clean and process the acquired sensor data by unifying the
data type to the numeric format, removing the NaN values, smoothing data with a Savitzky-Golay filter, normalizing/standardizing data, and transforming data to a time-series format for a given window size. After preprocessing, the data may be sent to the deep learning module 122 to simultaneously predict the dynamic NOx concentration, mass flow rate, and temperature of the flue gas per minute for the target cogeneration unit. Additionally, the deep learning module 122 may identify sensor failure or drift and provide reliable forecasts to ensure data availability.
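By way of a non-limiting illustration, the cleaning and windowing steps described above may be sketched in Python as follows; the function name, window size, and Savitzky-Golay filter parameters are illustrative assumptions rather than values specified by the present disclosure:

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess(readings, window_size=10):
    """Clean raw sensor readings and shape them into time-series windows.

    `readings` is a 2-D array of shape (time_steps, n_sensors).
    """
    x = np.asarray(readings, dtype=float)           # unify data type to numeric
    x = x[~np.isnan(x).any(axis=1)]                 # remove rows with NaN values
    x = savgol_filter(x, window_length=7, polyorder=2, axis=0)  # smooth
    mean, std = x.mean(axis=0), x.std(axis=0)
    x = (x - mean) / np.where(std == 0, 1.0, std)   # normalize/standardize
    # slide a window over time to build model inputs of shape
    # (n_windows, window_size, n_sensors)
    return np.stack([x[i:i + window_size]
                     for i in range(len(x) - window_size + 1)])
```

The windowed output can then be fed to the deep learning module 122 as the time-series input for the predictions described below.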
[0052] Subsequently, the predicted NOx concentration, mass flow rate, and temperature of the flue gas coupled with the 130 inputs/sensor readings per minute will be sent to a data streaming module 124 (such as Microsoft® Azure Event Hub™) for accommodating real-time data. Then, the real-time data may be sent to a real time analysis module 126 (such as Microsoft® Azure Stream Analytics™) to perform dynamic analysis for the predictions and sensor data. In addition, the built-in machine learning models of the real time analysis module 126 may perform anomaly detection directly on the real-time data and then send alerts or trigger actions. Alternatively, rather than being sent to the real time analysis module 126, real-time data may be stored in a structured query language (SQL) database 128 for future applications. Real-time data and historical data from the real time analysis module 126 and SQL database 128, respectively, may be provided to a data visualization module 140 (such as Microsoft® Power BI™) which serves as a user interface. The data visualization module 140 may provide interactive visualizations, dashboards, and reports, and may comprise various visualization options, filters, and calculations for analyzing and demonstrating real-time data, from which real-time monitoring and decision-making may be achieved. Moreover, reports may be published and shared with others in the organization.
[0053] Referring to FIG. 2, in addition to the data pipeline 120 as the main path for data flow, the CI/CD pipeline 130 may be for automatically deploying updated deep learning models to the data pipeline 120. This CI/CD pipeline 130 may be for ensuring that changes to the deep learning/machine learning (DL/ML) model code are automatically built, tested, and deployed, enabling the continuous updates and improvements of the DL/ML model. Within the CI/CD pipeline 130, a version control system 132 (for example using Microsoft® Azure Repos™) provides a repository and may be for committing the DL/ML model code, scripts, and configuration files. A pipeline module 134 (such as Microsoft® Azure Pipeline™) may be for automated model update and deployment, and a registry module 136 (such as Microsoft® Azure Container Registry™) may be for storing container images of the DL/ML model. The pipeline module 134 may automatically build a DL/ML model and update container images in the registry module 136 whenever changes are made to the code and then deploy the container image from the registry module 136 to a specified deployment target.
[0054] In some embodiments of the present disclosure, one or more RATA tests may be performed to validate the accuracy of PEMS based on the requirements from a regulator. If the PEMS is not meeting the specification indicated by a RATA test, troubleshooting may be performed to address the foregoing. Then, another RATA test may be performed. Troubleshooting may include investigating input sensor readings and determining causes of issues. When the RATA test is performed, measurements may be extracted by the data pipeline (as shown in FIG. 2), and predictions based on the current deep learning models may be compared with results of the RATA test. If the prediction is accurate compared with the results of the RATA test, the results of the RATA test will be stored in database 128 for future applications. If the prediction is not accurate, the current deep learning models will be updated/retrained based on the RATA test results and the CI/CD pipeline 130 will re-deploy the updated/retrained models to the cloud automatically.
[0055] It should be noted that if one of the PEMS predictions does not meet the requirements of the RATA test, retraining may be performed, and another RATA test or equivalent measurements by use of a portable gas analyzer may be carried out to validate the accuracy of the retrained model. During the retraining process, weights provided to the recent RATA test (data points) are higher than the weights for all the previous data points (i.e., previous RATA test results and training dataset). In other words, the contribution from the current RATA test will get more focus during the retraining process. Meanwhile, all of the sensor readings may be stored in a database during the retraining process. Once the retraining is completed and the accuracy of retrained model satisfies the requirement (for example, R2 > 0.64), the retrained model may be automatically re-deployed to the data pipeline, and such retrained model will backfill the new predictions during the retraining period into a SQL database.
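The weighting scheme described above may be sketched as follows; the 5:1 weight ratio and function name are illustrative assumptions, as the disclosure specifies only that recent RATA data points receive higher weights than previous data points:

```python
import numpy as np

def build_sample_weights(n_history, n_recent_rata, recent_weight=5.0):
    """Assign higher training weights to the most recent RATA data points
    than to all previous data points (prior RATA results and the original
    training dataset)."""
    w = np.ones(n_history + n_recent_rata)
    w[n_history:] = recent_weight     # upweight the current RATA points
    return w / w.sum()                # normalize so weights sum to 1
```

The resulting array could be passed as per-sample weights to a model's training routine so that the retraining focuses on the contribution of the current RATA test.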
[0056] Referring again to FIG. 2, deep learning module 122 is a significant component of the PEMS system. The accuracy, robustness, and resilience of the developed DL/ML models generally determine the performance of the PEMS system as a product. FIG. 3 presents a flow chart depicting the process of deep learning model development 300 for predicting real-time NOx concentration, mass flow rate, and temperature of the flue gas for a cogeneration unit, which may include steps from the raw data 301 to the final model 390. In the embodiment illustrated, years of raw data (millions of rows) from 130 physical sensors are collected for the model development, constituting a dataset large enough to represent the changes in operations, maintenance, and environments (e.g., weather and season). The data preprocess 310 involves several key tasks including unifying data type to numeric 311, removing outliers 313 while keeping the normal operation and shutdown periods 315, removing NaN values 317, and smoothing the dataset 319. Among these tasks, removing outliers 313 while keeping the normal operation and shutdown periods 315 is a critical step achieved by applying the interquartile range (IQR) rule to sliced data intervals separately. Using such a method, the data of startup and shutdown during maintenance will be retained instead of removed as outliers, so that the developed models are capable of predicting target variables under working and off states.
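The sliced-interval IQR rule may be sketched as follows; the slice size (here one day of 1-minute data) and the conventional 1.5 multiplier are illustrative assumptions. Because each interval is judged against its own local distribution, startup and shutdown readings within a maintenance interval are not flagged against the global operating range:

```python
import numpy as np

def remove_outliers_iqr(series, slice_size=1440, k=1.5):
    """Apply the IQR rule to sliced data intervals separately, so startup
    and shutdown data are retained rather than removed as global outliers."""
    kept = []
    for start in range(0, len(series), slice_size):
        chunk = np.asarray(series[start:start + slice_size], dtype=float)
        q1, q3 = np.percentile(chunk, [25, 75])
        iqr = q3 - q1
        lo, hi = q1 - k * iqr, q3 + k * iqr
        kept.append(chunk[(chunk >= lo) & (chunk <= hi)])  # keep in-range values
    return np.concatenate(kept)
```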
[0057] After the preprocessing steps, the whole dataset may be split into three groups: a training dataset 320, a validation dataset 330, and a test dataset 340. In some embodiments of the present disclosure, the ratio of training dataset 320 to test dataset 340 is 7:3, and the validation dataset 330 is an additional held-out dataset for model validation 370. Based on the training dataset, features/attributes are selected by use of a method integrating Pearson correlation 331, Spearman correlation 333, principal component analysis (PCA) feature importance 335, Ridge and Lasso regression 337, and engineering perspective 339. Pearson correlation 331 represents the linear relationship between the inputs and target, while Spearman correlation 333 represents the nonlinear correlation between the inputs and target. In addition, PCA 335 may demonstrate the major representatives in each dimension, Ridge and Lasso 337 are for addressing multicollinearity (high correlation between predictor variables) and providing a subset of important features, and engineering perspective 339 selects the features with physical meanings and engineering significance for the cogeneration unit.
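A simplified sketch of combining correlation and regularized-regression scores is shown below. It covers only the Pearson, Spearman, and Ridge components (a closed-form ridge fit stands in for the Ridge/Lasso step); the additive combination rule, function name, and parameters are illustrative assumptions, and the PCA and engineering-perspective criteria are omitted:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def rank_features(X, y, names, alpha=1.0, top_k=3):
    """Rank candidate sensors by linear (Pearson) and monotonic (Spearman)
    correlation with the target, plus the magnitude of a ridge coefficient
    that accounts for multicollinearity among predictors."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize predictors
    yc = y - y.mean()
    # closed-form ridge solution: (X'X + alpha*I)^-1 X'y
    beta = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(X.shape[1]), Xc.T @ yc)
    scores = {}
    for j, name in enumerate(names):
        p = abs(pearsonr(X[:, j], y)[0])            # linear relationship
        s = abs(spearmanr(X[:, j], y)[0])           # nonlinear (monotonic)
        scores[name] = p + s + abs(beta[j])         # simple additive score
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```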
[0058] With the representative features, DL/ML models may be built to simultaneously predict NOx concentration, mass flow rate, and temperature of the flue gas for the target cogeneration unit. Different machine learning and deep learning algorithms may be tried, stacked, and validated during the model development 360 and model validation 370 processes, in which the hyperparameters in each algorithm are optimized using the grid search technique. Moreover, the validation dataset 330 may be used to assess model performance, guide model selection, tune hyperparameters, and prevent overfitting (via early stopping). If the model’s accuracy does not satisfy the requirements or desired level, feature selection 350 may be revisited and steps 350 to 370 may be repeated until the requirements are met. A test dataset 340 may be used to carry out an unbiased model evaluation 380 on the performance and generalization of the developed model after the model training and hyperparameter tuning have been completed. A final (best) model 390 may be selected for final deployment. The architecture of the final model 390 will be discussed below.
[0059] Referring again to FIG. 3, the steps from feature selection 350 to final model 390 may be applied to generate three models for predicting NOx concentration, mass flow rate, and temperature of the flue gas, respectively. The three models may work independently and in parallel after being deployed to the cloud-based system. FIG. 4 illustrates an architecture of a stacked deep learning model 500 for predicting NOx concentration of flue gas, which corresponds to the architecture of the final model 390 for NOx concentration of flue gas shown in FIG. 3. The architecture 500 is designed to process both the sequential and tabular features, comprising a convolutional neural network (CNN) 520, a recurrent neural network (RNN) 530, a self-attention layer 540, and an artificial neural network (ANN) 560. In some embodiments, referring to FIG. 4, the inputs from 1 to N 510 indicate the number of selected features/attributes presented in FIG. 3 at step 350, including compressor pressure ratio and exhaust temperature of the gas turbine, temperatures of the duct burner, boiler, and economizer, and fuel gas flow rate, etc. For each selected feature/attribute, CNN 520 may be initially applied to catch peaks and local patterns, and contains a 1D convolutional layer 522 followed by a Leaky ReLU layer 524, a batch normalization layer 526, and a max pooling layer 528. The CNN structure/block may be repeatedly applied and followed by multiple RNN layers 530 to extract time-series characteristics. Subsequently, the output of the RNN layer is sent to a self-attention layer 540 that considers the interactions between different positions or time steps and captures long-term dependencies in the sequence. Finally, the output of the self-attention layer is flattened 550 to a 1D array for concatenation 568.
[0060] In addition to the hybrid CNN-RNN structure, an ANN with residual connections 560 may be applied to capture tabular characteristics of the selected features/attributes. The input 562 of the ANN structure/block may comprise all the selected features/attributes. The input features may be sent to fully connected dense layers 564. The activation function of such a dense layer can be ReLU, GELU, linear, etc. Step 564 may be repeated several times until suitable performance is achieved. In the end, the outputs from all the dense layers are concatenated 566 and sent to a dense layer 570. The output of step 570 and the output of step 550 are then concatenated 568 into a long array containing all the sequential and tabular features. Such a long array is sent to a dense layer 570 to obtain the final output 580 (NOx concentration) of the stacked architecture. It should be noted that the weights, activation function, and repeat times for CNN 520, RNN 530, self-attention 540, and ANN 560 (illustrated in FIG. 4) may be determined and optimized using the grid search technique.
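A simplified sketch of this stacked architecture using the Keras functional API is shown below. It is not the disclosed final model: the disclosure applies the CNN block per feature, repeats blocks, and tunes all sizes by grid search, whereas here a single CNN-RNN-attention branch and a two-layer tabular branch with illustrative layer sizes are assumed:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_nox_model(window=10, n_features=8):
    """Sketch of FIG. 4: a sequential CNN-RNN-self-attention branch plus a
    tabular ANN branch with a residual-style concatenation, merged into a
    single dense output (NOx concentration)."""
    # Sequential branch: 1D convolution -> Leaky ReLU -> batch norm -> max pool
    seq_in = layers.Input(shape=(window, n_features))
    x = layers.Conv1D(32, 3, padding="same")(seq_in)
    x = layers.LeakyReLU()(x)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling1D(2)(x)
    x = layers.SimpleRNN(32, return_sequences=True)(x)  # RNN layer
    x = layers.Attention()([x, x])                      # self-attention
    x = layers.Flatten()(x)                             # flatten to 1D array

    # Tabular branch: dense layers whose outputs are concatenated (residual)
    tab_in = layers.Input(shape=(n_features,))
    d1 = layers.Dense(32, activation="relu")(tab_in)
    d2 = layers.Dense(32, activation="gelu")(d1)
    t = layers.Dense(32)(layers.Concatenate()([d1, d2]))

    # Concatenate both branches and produce the final prediction
    out = layers.Dense(1)(layers.Concatenate()([x, t]))
    return tf.keras.Model([seq_in, tab_in], out)
```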
[0061] FIG. 5 illustrates an architecture of a deep learning model 700 for predicting mass flow rate and temperature of flue gas, which may correspond to the architecture of a final model 390 for the mass flow rate and temperature of flue gas presented in FIG. 3. It should be noted that the architecture of the ANN with residual connections 700 is nearly identical for the two targets; a difference is the number of repetitions of the fully connected dense layer 720. In FIG. 5, inputs 710 for mass flow rate include gas turbine shaft speed and exhaust pressure, duct burner temperature and fuel gas flow rate, HRSG feedwater flow rate, etc., whereas inputs 710 for temperature include the inlet guide vane angle, dew point temperature of the gas turbine, economizer temperature, etc. The inputs are sent to a fully connected dense layer 720 and a dense layer with linear activation function 730. The dense layer 720 can be repeatedly built multiple times with an activation function of ReLU, GELU, linear, etc. In the end, the output of each dense layer is concatenated 740 to achieve the residual connections. The output of the concatenation is sent to a dense layer 720 to obtain the results (i.e., mass flow rate or temperature). In the embodiment illustrated, the weights, activation function, and repeat times for all the dense layers are determined and optimized using the grid search technique.
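The FIG. 5 architecture may be sketched as follows; the block count and layer width are illustrative assumptions (the disclosure determines them by grid search), and the same function would be instantiated separately for the mass flow rate and temperature models:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_residual_ann(n_features=6, n_blocks=3, units=32):
    """Sketch of FIG. 5: repeated fully connected dense layers plus a
    linear-activation dense layer, with every layer's output concatenated
    (residual connections) before a final dense output layer."""
    inp = layers.Input(shape=(n_features,))
    outs = [layers.Dense(units, activation="linear")(inp)]  # linear branch
    x = inp
    for _ in range(n_blocks):                 # repeated ReLU dense layers
        x = layers.Dense(units, activation="relu")(x)
        outs.append(x)
    merged = layers.Concatenate()(outs)       # residual concatenation
    out = layers.Dense(1)(merged)             # mass flow rate or temperature
    return tf.keras.Model(inp, out)
```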
[0062] In addition to the predictions of NOx concentration, mass flow rate, and temperature of the flue gas at the exhaust stack of a cogeneration unit, the developed DL/ML models may be able to detect and correct sensor drift and failure. FIG. 6 demonstrates the process of identifying the sensor drift and failure 900. The key steps for identifying sensor drift and failure are predicting the future sensor readings via the developed models 920 and precisely labeling sensor drift and failure based on historical data 940. In the embodiment shown in FIG. 6, real-time sensor readings from the monitored sensors 910 are extracted from the DCS system. XGBoost, Random Forest,
ANN, and RNN algorithms may be utilized to build predictive models for sensor readings considering both the sequential and tabular data features 920. The developed models with high accuracy (R2 > 0.8 during model evaluation) will be retained. Then, the predicted sensor readings will be compared with the true sensor readings to find out whether there is a notable gap 930. As discussed herein, the frequency of sensor readings is one per minute; as a result, the comparison is set between the mean values of the predicted sensor readings and true sensor readings in a 6-hour interval. On the other hand, data scientists and engineers will label the sensor failure/drift and the corresponding corrections by reviewing the sensor readings of the past several years 940. Threshold values for sensor failure and sensor drift are then concluded separately 950 from the labeled sensor drift and failure events. In the end, the gap obtained in step 930 is compared with the threshold value determined in step 950. If the gap is larger than the threshold value 960, the PEMS system will send a sensor failure or drift alarm/flag to the user interface and backfill (replace) the measured sensor values with the predicted sensor values 970. Subsequently, the stacked deep learning models as presented in FIGS. 4 and 5 will correct the predictions of NOx concentration, mass flow rate, and temperature of the flue gas based on the backfilled sensor data 980. On the other hand, if the gap is smaller than the threshold value 960, the sensor readings will be sent to the deep learning module directly.
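The gap comparison and backfill logic above may be sketched for a single sensor as follows; the function name is an assumption, the 360-reading interval corresponds to 6 hours of 1-minute data, and the threshold is assumed to come from the historically labeled drift/failure events:

```python
import numpy as np

def detect_drift(predicted, actual, threshold, interval=360):
    """Compare 6-hour mean values of predicted vs. actual sensor readings;
    flag intervals whose gap exceeds the threshold and backfill (replace)
    the measured values with the predicted values there."""
    pred = np.asarray(predicted, dtype=float)
    act = np.asarray(actual, dtype=float).copy()
    flags = []
    for start in range(0, len(act), interval):
        sl = slice(start, start + interval)
        gap = abs(pred[sl].mean() - act[sl].mean())
        if gap > threshold:          # possible sensor drift or failure
            flags.append(start)
            act[sl] = pred[sl]       # backfill with predicted readings
    return act, flags
```

The backfilled series would then be forwarded to the stacked deep learning models so that the emission predictions are corrected, while the flags drive the user-interface alarms.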
[0063] The following table illustrates the statistical performance of deep learning models for predicting NOx concentration, mass flow rate, and temperature of the flue gas at the exhaust stack of a cogeneration unit. The table illustrates the accuracy of the developed predictive models (FIGS. 4 and 5) for NOx concentration, mass flow rate, and temperature of the flue gas in deep learning module 122 in FIG. 2. The training, validation, and testing in the table indicate steps 360, 370, and 380 in FIG. 3, respectively. The accuracy (R2 value), root mean square error (RMSE), and mean absolute error (MAE) are calculated to exhibit the reliability and robustness of the developed models in accordance with the present disclosure. Since the R2 values of the training, validation, and testing processes are higher than 0.9, it can be concluded that there are strong correlations between the predicted and true/actual values of the NOx concentration, mass flow rate, and temperature, respectively. In addition, it should be noted that the mean values of NOx concentration, mass flow rate, and temperature are 20.95 ppm, 38.85 kg/h, and 136.57 °C, respectively. Therefore, the RMSE and MAE values are fairly small, indicating the predictions of NOx concentration, mass flow rate, and temperature are precise.
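The three reported statistics follow their standard definitions and may be computed as below (the helper name is an assumption):

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Compute the R2, RMSE, and MAE statistics reported for the models."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    ss_res = np.sum(err ** 2)                       # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return {"R2": float(1 - ss_res / ss_tot),
            "RMSE": float(np.sqrt(np.mean(err ** 2))),
            "MAE": float(np.mean(np.abs(err)))}
```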
Note: NOx mean value: 20.95 in ppm; Mass Flow Rate mean value: 38.85 in kg/h; Temperature mean value: 136.57 in °C
[0064] Referring to FIG. 2 again, user interface 140 may be for demonstrating the real-time results of emission monitoring and statistical analysis through the data visualization module 140. FIG. 7 to FIG. 10 illustrate functions, figures, and tables for instrumental sensor readings 1000, PEMS predictions 2000, set-up rules 3000, and sensor drift and failure detection 4000 during the operations, respectively. In FIG. 7, instrumental sensor readings are displayed via data table 1200. The table is controlled by the selection of time slot (slicer) and sensor name (dropdown
menu) 1400. In terms of the “As of Now Analysis”, a gauge chart 1300 displays the average, minimum, and maximum values within the past 1 minute, 1 hour, 4 hours, 24 hours, and 2 weeks by selecting the dropdown menu. Moreover, a data distribution histogram 1500 within the last 24 hours for a selected sensor is presented to provide a quick assessment of recent sensor behavior. FIG. 8 shows the PEMS predictions 2000 of NOx concentration, mass flow rate, and temperature of the flue gas forecasted using deep learning models 122 presented in FIG. 2. The selection of time slot (slicer) and between time (dropdown menu) 2300 control the gauge charts 2100 and the dynamic plots of NOx concentration, mass flow rate, and temperature 2200. FIG. 9 shows the current PEMS predictions of NOx concentration, mass flow rate, and temperature 3100 and the higher bound, lower bound, and standard deviation of the relevant sensor 3300. All the relevant sensors that dominate the corresponding PEMS prediction (one of the three current PEMS predictions) can be selected using the slicer 3200. FIG. 10 demonstrates the results of detecting sensor drift and failure over time by comparing predicted and actual sensor readings. The pairs of current sensor readings 4100 show the comparison between predicted and actual values. Line charts 4300 exhibit the predicted and actual sensor readings within the selected time range controlled by the time filter (slicer) 4200.
[0065] Embodiments of a cloud-based and AI-powered PEMS system disclosed herein may fully replace the CEMS unit or serve as a supplementary system when CEMS is down, and are able to monitor gas emissions continuously or periodically for a cogeneration unit. Embodiments of PEMS disclosed herein may be directed at the following issues: (1) high capital and operational costs required to use the traditional CEMS; (2) lack of reliable, resilient, and real-time emission monitoring and data analysis tools or platforms; and (3) lack of dynamic sensor failure/drift detection and correction functions for existing PEMS. Reducing annual operating headcount and maintenance costs, and increasing digitalization, are the main reasons driving the preference to develop PEMS rather than CEMS. A cloud-based and AI-powered PEMS system that enables real-time big data monitoring and analysis will provide operators or engineers with key information to achieve more effective control and reduction of gas emissions, high operational efficiency, increased productivity, better management of industrial assets, and predictive and preventive maintenance.
[0066] In a broad aspect according to embodiments of the present disclosure, a method is provided for providing real time predictions of output emissions from a cogeneration unit operating in response to settings of controllers and corresponding sensors measuring operational processes and settings of critical instruments relating to the cogeneration unit, providing predictions of NOx concentration, mass flow rate, and temperature of the flue gas and instrumental sensor readings, detecting and correcting instrumental sensor drift and failure, and demonstrating emission monitoring and dynamic data analysis results, the method comprising: collecting input data; transforming the input data to transformed data to provide an effective path for data flow to deep learning models, data storage, and data analytics, wherein assembling a series of processes and operations comprises: cleaning data by unifying data type, removing irrelevant data such as not a number (NaN) values, smoothing data, normalizing/standardizing data, and transforming data to a time series format; storing processed information for future applications; and analyzing data for dynamic and historical insights; applying training, validation, and testing processes of developing deep learning models (or a deep learning module) to the transformed data to contemporaneously predict dynamic NOx concentration, mass flow rate, and temperature of the flue gas at the exhaust stack of a cogeneration unit; displaying information relating to the cogeneration unit comprising: providing interactive visualizations, dashboards, and reports for emission monitoring and analysis results, for real time monitoring and decision making, and sharing dashboard information and reports; applying a DL/ML model to detect sensor failure or drift; applying a DL/ML model to provide forecasts of sensor readings to guarantee availability of the input data; applying a CI/CD pipeline to update or retrain the DL/ML model; and applying a CI/CD pipeline to deploy the updated or retrained model to the data pipeline.
[0067] In some embodiments of the present disclosure, the method comprises developing the DL/ML process wherein: data preprocessing of the DL/ML process comprises steps in sequence: unifying data type to numeric values, removing outliers while keeping both normal operation and shutdown periods, removing NaN values, and smoothing data, in which removing outliers while keeping the normal operation and shutdown periods is achieved by applying the interquartile range rule to sliced data intervals to retain the data of startup and shutdown during maintenance instead of removing them as outliers, such that the developed models are capable of predicting target variables under working and off states; features and attributes selection of the DL/ML process comprises integrating Pearson correlation, Spearman correlation, principal component analysis feature importance, and Ridge and Lasso regression, with engineering perspective, to consider linear and non-linear relationships, major representatives in dimensions, multicollinearity, and physical meanings, respectively; an architecture of the DL/ML process for predicting NOx concentration of flue gas is for processing both the sequential and tabular features/attributes, comprising CNN, RNN, self-attention layer, and ANN, in which the sequential features/attributes as inputs are connected to a hybrid CNN-RNN structure followed by a self-attention layer, and meanwhile the tabular features/attributes as inputs are connected to the ANN with residual connections, and in the end the outputs are concatenated and connected to a dense layer to obtain the final output; and an architecture of the DL/ML process for predicting the mass flow rate and temperature of flue gas comprises fully connected dense layers with residual connections, in which the tabular features/attributes as inputs are connected to multiple dense layers and a dense layer with linear activation function, and the output of each dense layer (residual connection) is concatenated and then sent to a dense layer to obtain the final output.
[0068] In some embodiments of the present disclosure, the method further comprises retraining DL/ML processes based on the results of the relative accuracy test audit (RATA), wherein: when one of the PEMS predictions fails the RATA test, the corresponding process will be retrained, and another RATA test or equivalent measurements conducted with a portable gas analyzer will be carried out to validate the accuracy of the retrained process; during the retraining process, the weights assigned to the data points from the current RATA test are higher than the weights assigned to the data points from previous RATA tests and the training dataset; and when the accuracy of the retrained process meets the requirement, i.e., R2 > 0.64 during process validation, the retrained process will be re-deployed to the Azure data pipeline via the CI/CD pipeline, and such retrained process will backfill the new predictions corresponding to the retraining period into the cloud (Azure) database for future applications.
[0069] In some embodiments of the present disclosure, the method comprises developing the DL/ML process for detecting and correcting instrumental sensor drift and failure, wherein: predictive models are developed with XGBoost, Random Forest, ANN, and RNN architectures to predict sensor readings, and the predicted sensor readings are compared with the true sensor readings to determine the gap between the mean values of the predicted and true sensor readings over a 6-hour interval; threshold values are derived separately for sensor failure and sensor drift based on the labeled sensor drift and failure events in historical data; if the gap value is larger than the threshold value, sensor failure or drift alarms/flags are sent to the user interface and the measured sensor values are backfilled (replaced) with the predicted sensor values; and finally, the corresponding deep learning models are requested to re-calculate the predictions of NOx concentration, mass flow rate, and temperature of the flue gas based on the backfilled sensor data.
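The mean-gap comparison over a 6-hour interval may be sketched as follows, purely for illustration; the specific drift and failure threshold values are assumptions here, whereas the disclosure derives them from labeled historical data:

```python
import numpy as np

def check_and_backfill(true_vals, pred_vals, drift_threshold, fail_threshold):
    """Compare the mean of predicted vs. measured readings over one
    6-hour interval, flag drift or failure, and backfill (replace) the
    measurements with predictions when a threshold is exceeded."""
    true_vals = np.asarray(true_vals, dtype=float)
    pred_vals = np.asarray(pred_vals, dtype=float)
    gap = abs(true_vals.mean() - pred_vals.mean())
    if gap > fail_threshold:
        status = "failure"
    elif gap > drift_threshold:
        status = "drift"
    else:
        status = "ok"
    # On drift/failure, the predicted values stand in for the measurements,
    # and downstream NOx / flow / temperature models would be re-run on them.
    out = pred_vals.copy() if status != "ok" else true_vals.copy()
    return status, out
```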
[0070] In some embodiments of the present disclosure, the method further comprises: providing real-time monitoring for the instrumental sensors of a cogeneration unit via a historical data table with controls for selecting a time slot (via a slicer) and a sensor name (via a dropdown menu), a dynamic gauge chart showcasing minimum, maximum, and average values over 1-minute, 1-hour, 24-hour, and 2-week intervals, and a last-24-hour data distribution (histogram) plot; providing real-time monitoring for the predictions of NOx concentration, mass flow rate, and temperature of the flue gas at the exhaust stack of a cogeneration unit via gauge charts and line plots over 1-minute, 1-hour, 4-hour, 24-hour, and 2-week intervals or any selected time slot (via a slicer); exhibiting the current predictions of NOx concentration, mass flow rate, and temperature of the flue gas at the exhaust stack of a cogeneration unit, together with the upper bound, lower bound, and standard deviation of the one of the relevant sensors that dominates the corresponding prediction (one of the three current PEMS predictions), via a dropdown menu for the sensor name; and comparing current predicted and actual sensor readings via pairs of value boards, and displaying the difference between the predicted and actual sensor readings within the selected time range controlled by the time filter.
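As a minimal sketch of the gauge-chart statistics described above, the trailing-interval minimum, maximum, and average may be computed as follows; a sampling cadence of one reading per minute is an assumption:

```python
import numpy as np

def gauge_stats(values, interval_minutes):
    """Min/max/average over the trailing interval for a dynamic gauge
    chart (e.g., 1-minute, 1-hour, 24-hour, or 2-week windows), assuming
    one reading per minute."""
    tail = np.asarray(values, dtype=float)[-interval_minutes:]
    return float(tail.min()), float(tail.max()), float(tail.mean())
```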
[0071] Computer and Network Systems
[0072] Turning now to FIG. 11, a computer network system for PEMS is shown and is generally identified using reference numeral 1100. In these embodiments, the PEMS system 1100 is configured for performing methods and tasks disclosed herein.
[0073] As shown in FIG. 11, the PEMS system 1100 comprises one or more server computers 1102, a plurality of client computing devices 1104, and one or more client computer systems 1106 functionally interconnected by a network 1108, such as the Internet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), and/or the like, via suitable wired and wireless networking connections.
[0074] The server computers 1102 may be computing devices designed specifically for use as a server, and/or general-purpose computing devices acting as server computers while also being used by various users. Each server computer 1102 may execute one or more server programs.
[0075] The client computing devices 1104 may be portable and/or non-portable computing devices such as laptop computers, tablets, smartphones, Personal Digital Assistants (PDAs), desktop computers, and/or the like. Each client computing device 1104 may execute one or more client application programs which sometimes may be called “apps”.
[0076] Generally, computing devices 1102 and 1104 comprise similar hardware structures such as hardware structure 1120 shown in FIG. 12. As shown, the hardware structure 1120 comprises a processing structure 1122, a controlling structure 1124, one or more non-transitory computer-readable memory or storage devices 1126, a network interface 1128, an input interface 1130, and an output interface 1132, functionally interconnected by a system bus 1138. The hardware structure 1120 may also comprise other components 1134 coupled to the system bus 1138.
[0077] The processing structure 1122 may be one or more single-core or multiple-core computing processors, generally referred to as central processing units (CPUs), such as INTEL® microprocessors (INTEL is a registered trademark of Intel Corp., Santa Clara, CA, USA), AMD® microprocessors (AMD is a registered trademark of Advanced Micro Devices Inc., Sunnyvale, CA, USA), ARM® microprocessors (ARM is a registered trademark of Arm Ltd., Cambridge, UK) manufactured by a variety of manufacturers such as Qualcomm of San Diego, California, USA, under the ARM® architecture, or the like. When the processing structure 1122 comprises a plurality of processors, the processors thereof may collaborate via a specialized circuit such as a specialized bus or via the system bus 1138.
[0078] The processing structure 1122 may also comprise one or more real-time processors, programmable logic controllers (PLCs), microcontroller units (MCUs), µ-controllers (µCs), specialized/customized processors, hardware accelerators, and/or controlling circuits (also denoted "controllers") using, for example, field-programmable gate array (FPGA) or application-specific integrated circuit (ASIC) technologies, and/or the like. In some embodiments, the processing structure includes a CPU (otherwise referred to as a host processor) and a specialized hardware accelerator which includes circuitry configured to perform computations of neural networks such as tensor multiplication, matrix multiplication, and the like. The host processor may offload some computations to the hardware accelerator to perform computation operations of neural networks. Examples of a hardware accelerator include a graphics processing unit (GPU), Neural Processing Unit (NPU), and Tensor Processing Unit (TPU). In some embodiments, the host processors and the hardware accelerators (such as the GPUs, NPUs, and/or TPUs) may be generally considered processors.
[0079] Generally, the processing structure 1122 comprises necessary circuitries implemented using technologies such as electrical and/or optical hardware components for executing an encryption process and/or a decryption process, as the design purpose and/or the use case may be, for encrypting and/or decrypting data received via the input interface 1130 and outputting the resulting encrypted or decrypted data via the output interface 1132.
[0080] For example, the processing structure 1122 may comprise logic gates implemented by semiconductors to perform various computations, calculations, and/or processing. Examples of logic gates include AND gate, OR gate, XOR (exclusive OR) gate, and NOT gate, each of which takes one or more inputs and generates or otherwise produces an output therefrom based on the logic implemented therein. For example, a NOT gate receives an input (for example, a high voltage, a state with electrical current, a state with an emitted light, or the like), inverts the input (for example, forming a low voltage, a state with no electrical current, a state with no light, or the like), and outputs the inverted input as the output.
[0081] While the inputs and outputs of the logic gates are generally physical signals and the logics or processings thereof are tangible operations with physical results (for example, outputs of physical signals), the inputs and outputs thereof are generally described using numerals (for example, numerals “0” and “1”) and the operations thereof are generally described as “computing” (which is how the “computer” or “computing device” is named) or “calculation”, or more generally, “processing”, for generating or producing the outputs from the inputs thereof.
[0082] Sophisticated combinations of logic gates in the form of a circuitry of logic gates, such as the processing structure 1122, may be formed using a plurality of AND, OR, XOR, and/or NOT gates. Such combinations of logic gates may be implemented using individual semiconductors, or, more often, implemented as integrated circuits (ICs).
[0083] A circuitry of logic gates may be “hard-wired” circuitry which, once designed, may only perform the designed functions. In this example, the processes and functions thereof are “hard-coded” in the circuitry.
[0084] With the advance of technologies, it is often that circuitry of logic gates such as the processing structure 1122 may be alternatively designed in a general manner so that it may perform various processes and functions according to a set of “programmed” instructions implemented as firmware and/or software and stored in one or more non-transitory computer-readable storage devices or media. In this example, the circuitry of logic gates such as the processing structure 1122 is usually of no use without meaningful firmware and/or software.
[0085] Of course, those skilled in the art will appreciate that a process or a function (and thus the processing structure 1122) may be implemented using other technologies such as analog technologies.
[0086] Referring back to FIG. 12, the controlling structure 1124 comprises one or more controlling circuits, such as graphic controllers, input/output chipsets and the like, for coordinating operations of various hardware components and modules of the computing device 1102/1104.
[0087] The memory 1126 comprises one or more storage devices or media accessible by the processing structure 1122 and the controlling structure 1124 for reading and/or storing instructions for the processing structure 1122 to execute, and for reading and/or storing data, including input data and data generated by the processing structure 1122 and the controlling structure 1124. The memory 1126 may be volatile and/or non-volatile, non-removable or removable memory such as RAM, ROM, EEPROM, solid-state memory, hard disks, CD, DVD, flash memory, or the like.
[0088] The network interface 1128 comprises one or more network modules for connecting to other computing devices or networks through the network 1108 by using suitable wired or wireless communication technologies such as Ethernet, WI-FI® (WI-FI is a registered trademark of Wi-Fi Alliance, Austin, TX, USA), BLUETOOTH® (BLUETOOTH is a registered trademark of Bluetooth Sig Inc., Kirkland, WA, USA), Bluetooth Low Energy (BLE), Z-Wave, Long Range (LoRa), ZIGBEE® (ZIGBEE is a registered trademark of ZigBee Alliance Corp., San Ramon, CA, USA), wireless broadband communication technologies such as Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS), Worldwide Interoperability for Microwave Access (WiMAX), CDMA2000, Long Term Evolution (LTE), 3GPP, 5G New Radio (5G NR) and/or other 5G networks, and/or the like. In some embodiments, parallel ports, serial ports, USB connections, optical connections, or the like may also be used for connecting other computing devices or networks although they are usually considered as input/output interfaces for connecting input/output devices.
[0089] The input interface 1130 comprises one or more input modules for one or more users to input data via, for example, touch-sensitive screen, touch-sensitive whiteboard, touch-pad, keyboards, computer mouse, trackball, microphone, scanners, cameras, and/or the like. The input interface 1130 may be a physically integrated part of the computing device 1102/1104 (for example, the touch-pad of a laptop computer or the touch-sensitive screen of a tablet), or may be a device physically separate from, but functionally coupled to, other components of the computing device
1102/1104 (for example, a computer mouse). The input interface 1130, in some implementations, may be integrated with a display output to form a touch-sensitive screen or touch-sensitive whiteboard.
[0090] The output interface 1132 comprises one or more output modules for output data to a user. Examples of the output modules comprise displays (such as monitors, LCD displays, LED displays, projectors, and the like), speakers, printers, virtual reality (VR) headsets, augmented reality (AR) goggles, and/or the like. The output interface 1132 may be a physically integrated part of the computing device 1102/1104 (for example, the display of a laptop computer or tablet), or may be a device physically separate from but functionally coupled to other components of the computing device 1102/1104 (for example, the monitor of a desktop computer).
[0091] The computing device 1102/1104 may also comprise other components 1134 such as one or more positioning modules, temperature sensors, barometers, inertial measurement unit (IMU), and/or the like.
[0092] The system bus 1138 interconnects various components 1122 to 1134 enabling them to transmit and receive data and control signals to and from each other.
[0093] FIG. 13 shows a simplified software architecture 1160 of the computing device 1102 or 1104. The software architecture 1160 comprises one or more application programs 1164, an operating system 1166, a logical input/output (I/O) interface 1168, and a logical memory 1172. One or more application programs 1164, operating system 1166, and logical I/O interface 1168 are generally implemented as computer-executable instructions or code in the form of software
programs or firmware programs stored in the logical memory 1172 which may be executed by the processing structure 1122.
[0094] The one or more application programs 1164 are executed or run by the processing structure 1122 for performing various tasks.
[0095] The operating system 1166 manages various hardware components of the computing device 1102 or 1104 via the logical I/O interface 1168, manages the logical memory 1172, and manages and supports the application programs 1164. The operating system 1166 is also in communication with other computing devices (not shown) via the network 1108 to allow application programs 1164 to communicate with those running on other computing devices. As those skilled in the art will appreciate, the operating system 1166 may be any suitable operating system such as MICROSOFT® WINDOWS® (MICROSOFT and WINDOWS are registered trademarks of the Microsoft Corp., Redmond, WA, USA), APPLE® OS X, APPLE® iOS (APPLE is a registered trademark of Apple Inc., Cupertino, CA, USA), Linux, ANDROID® (ANDROID is a registered trademark of Google LLC, Mountain View, CA, USA), or the like. The computing devices 1102 and 1104 of the PEMS system 1100 may all have the same operating system or may have different operating systems.
[0096] The logical I/O interface 1168 comprises one or more device drivers 1170 for communicating with respective input and output interfaces 1130 and 1132 for receiving data therefrom and sending data thereto. Received data may be sent to one or more application programs 1164 for being processed by one or more application programs 1164. Data generated by the application programs 1164 may be sent to the logical I/O interface 1168 for outputting to various output devices (via the output interface 1132).
[0097] The logical memory 1172 is a logical mapping of the physical memory 1126 for facilitating access by the application programs 1164. In this embodiment, the logical memory 1172 comprises a storage memory area that may be mapped to a non-volatile physical memory such as hard disks, solid-state disks, flash drives, and the like, generally for long-term data storage therein. The logical memory 1172 also comprises a working memory area that is generally mapped to high-speed, and in some implementations volatile, physical memory such as RAM, generally for application programs 1164 to temporarily store data during program execution. For example, an application program 1164 may load data from the storage memory area into the working memory area and may store data generated during its execution into the working memory area. The application program 1164 may also store some data in the storage memory area as required or in response to a user's command.
[0098] In a server computer 1102, one or more application programs 1164 generally provide server functions for managing network communication with client computing devices 1104 and facilitating collaboration between the server computer 1102 and the client computing devices 1104. Herein, the term “server” may refer to a server computer 1102 from a hardware point of view or a logical server from a software point of view, depending on the context.
[0099] As described above, the processing structure 1122 is usually of no use without meaningful firmware and/or software. Similarly, while a computer system such as the PEMS system 1100 may have the potential to perform various tasks, it cannot perform any tasks and is of no use without meaningful firmware and/or software. As will be described in more detail later, the PEMS system 1100 described herein and the modules, circuitries, and components thereof, as a combination of hardware and software, generally produces tangible results tied to the physical
world, wherein the tangible results such as those described herein may lead to improvements to the computer devices and systems themselves, the modules, circuitries, and components thereof, and/or the like.
[0100] In some embodiments disclosed herein, the PEMS system is configured to operate on a cloud-based computer architecture, wherein the server computers 1102 are virtual computing environments comprising allocated scalable computer capacity, which may be called instances. In some embodiments, each server computer 1102 may be a variable configuration of CPU, memory, storage, and networking capacity. In some embodiments, each server computer 1102 is assigned a unique internet protocol (IP) address within the network 1108. The processing, memory, storage, and networking capacity may be provided by a single physical server computer 1102 or a more complex computer architecture comprising a plurality of interconnected service, storage, and network components. The cloud-based computer architecture may be provided by a number of cloud computer service platforms such as Amazon Web Services® or AWS® (Amazon Web Services and AWS are registered trademarks of Amazon Web Services, Inc., a subsidiary of Amazon of Seattle, Washington, USA), Microsoft Azure™ (Azure is a trademark of Microsoft Corporation of Redmond, Washington, USA) and Google Cloud Platform or GCP.
[0101] FIG. 14 illustrates a method 1400 according to some embodiments of the present disclosure. The method 1400 begins, optionally, with training the ML module (at step 1402). At step 1404, the method comprises receiving a plurality of input parameters from one or more sensors. At step 1406, the method comprises transforming the plurality of input parameters into transformed data. At step 1408, the method comprises analyzing using a ML module with the transformed data to generate predicted parameters comprising NOx concentration, mass flow rate
and temperature. At step 1410, the method comprises displaying the predicted parameters. At step 1412, the method comprises, optionally, forecasting sensor readings for determining availability of input. At step 1414, the method comprises, optionally, receiving the results of one or more RATAs to determine accuracy of the ML module. At step 1416, the method comprises, optionally, retraining and deploying the ML module where the accuracy is below a specified threshold.
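The sequence of steps 1402 through 1416 may be sketched as a single orchestration pass, for illustration only; all callables and the RATA-result shape are hypothetical stand-ins for the disclosed modules:

```python
def run_pems_cycle(sensors, transform, ml_module, display,
                   rata_results=None, r2_threshold=0.64):
    """One pass of method 1400: receive (1404), transform (1406),
    predict (1408), display (1410), and optionally gate retraining on
    RATA accuracy (1414-1416). The callables are hypothetical."""
    raw = sensors()                   # step 1404: receive input parameters
    data = transform(raw)             # step 1406: transform
    preds = ml_module.predict(data)   # step 1408: NOx, flow rate, temperature
    display(preds)                    # step 1410: display
    if rata_results is not None:      # steps 1414-1416: accuracy gate
        if rata_results["r2"] <= r2_threshold:
            ml_module.retrain()       # retrain and redeploy on failure
    return preds
```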
[0102] The foregoing explanations of embodiments of the present disclosure should be regarded as purely illustrative. Therefore, it should be recognized that the various structural and operational features disclosed herein can be subject to numerous alterations or modifications that are in line with the capabilities of knowledgeable individuals in the art, none of which departs from the essence and scope of the present disclosure as defined in the appended claims.
Claims
1. A method for providing real time predictions of output emissions, the method comprising: receiving a plurality of input parameters from one or more sensors; transforming the plurality of input parameters into transformed data; analyzing using a machine learning (ML) module with the transformed data to generate predicted parameters comprising NOx concentration, mass flow rate and temperature; and displaying the predicted parameters.
2. The method of claim 1 further comprising training the ML module.
3. The method of claim 1 or 2, wherein the plurality of input parameters comprise one or more of pressure, temperature, flow rate, humidity, vibration, and component ratio of different devices.
4. The method of any one of claims 1 to 3, wherein transforming comprises one or more of cleaning data by unifying data type, removing irrelevant data, smoothing data, normalizing data, converting data to time series format, storing information for future applications, analyzing data for dynamic and historical information, unifying data types to numeric values, and removing outliers.
5. The method of any one of claims 1 to 4, wherein the ML module is for detecting anomalies in the plurality of input parameters.
6. The method of any one of claims 1 to 5, wherein the ML module is for detecting and correcting sensor failure and drift.
7. The method of any one of claims 1 to 6 further comprising forecasting sensor readings for determining availability of input.
8. The method of any one of claims 1 to 7 further comprising receiving the results of one or more relative accuracy test audits (RATAs) to determine accuracy of the ML module.
9. The method of claim 8 further comprising retraining and deploying the ML module where the accuracy is below a specified threshold.
10. One or more non-transitory computer-readable storage devices comprising instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 9.
11. A system for providing real time predictions of output emissions based on a plurality of input parameters measured by one or more sensors, the system comprising: a module for receiving the plurality of input parameters from the one or more sensors; and one or more processors for: transforming the plurality of input parameters into transformed data, and
analyzing using a ML module with the transformed data to generate predicted parameters comprising NOx concentration, mass flow rate and temperature.
12. The system of claim 11, wherein the one or more processors are further for training the ML module.
13. The system of claim 11 or 12, wherein the plurality of input parameters comprise one or more of pressure, temperature, flow rate, humidity, vibration, and component ratio of different devices.
14. The system of any one of claims 11 to 13, wherein transforming comprises one or more of cleaning data by unifying data type, removing irrelevant data, smoothing data, normalizing data, converting data to time series format, storing information for future applications, and analyzing data for dynamic and historical information.
15. The system of any one of claims 11 to 14, wherein the ML module is for detecting anomalies in the plurality of input parameters.
16. The system of any one of claims 11 to 15, wherein the ML module is for detecting and correcting sensor failure and drift.
17. The system of any one of claims 11 to 16, where the one or more processors are further for forecasting sensor readings for determining availability of input.
18. The system of any one of claims 11 to 17, where the one or more processors are further for receiving the results of one or more RATAs to determine accuracy of the ML module.
19. The system of claim 18, where the one or more processors are further for retraining and deploying the ML module where the accuracy is below a specified threshold.
20. The system of any one of claims 12 to 19 further comprising a display for providing interactive visualizations, dashboards, and reports relating to the predicted parameters.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363516432P | 2023-07-28 | 2023-07-28 | |
| US63/516,432 | 2023-07-28 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025024926A1 | 2025-02-06 |
Family
ID=94392974
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CA2024/051001 (WO2025024926A1, pending) | Methods and systems for gas emission prediction and monitoring for cogeneration using stacked multivariate deep learning | 2023-07-28 | 2024-07-26 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025024926A1 (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20230304664A1 (en) * | 2022-03-24 | 2023-09-28 | Solar Turbines Incorporated | Gas turbine predictive emissions modeling, reporting, and model management via a remote framework |
| US20240200991A1 (en) * | 2022-12-15 | 2024-06-20 | Schlumberger Technology Corporation | Machine learning based methane emissions monitoring |
Non-Patent Citations (1)
| Title |
|---|
| SI, MINXING; TARNOCZI, TYLER J.; WIENS, BRETT M.; DU, KE: "Development of Predictive Emissions Monitoring System Using Open Source Machine Learning Library – Keras: A Case Study on a Cogeneration Unit", IEEE Access, vol. 7, 2019, pages 113463–113475, XP011742355, DOI: 10.1109/ACCESS.2019.2930555 * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 24847460; Country of ref document: EP; Kind code of ref document: A1 |