
US20240240546A1 - Fourier transform-based machine learning for well placement - Google Patents


Info

Publication number
US20240240546A1
US20240240546A1
Authority
US
United States
Prior art keywords
properties
convolved
encoded
data
encoded properties
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US18/412,031
Inventor
John GODLEWSKI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Schlumberger Technology Corp
Original Assignee
Schlumberger Technology Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Schlumberger Technology Corp filed Critical Schlumberger Technology Corp
Priority to US18/412,031 priority Critical patent/US20240240546A1/en
Assigned to SCHLUMBERGER TECHNOLOGY CORPORATION reassignment SCHLUMBERGER TECHNOLOGY CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GODLEWSKI, John
Publication of US20240240546A1 publication Critical patent/US20240240546A1/en
Pending legal-status Critical Current

Classifications

    • E - FIXED CONSTRUCTIONS
    • E21 - EARTH OR ROCK DRILLING; MINING
    • E21B - EARTH OR ROCK DRILLING; OBTAINING OIL, GAS, WATER, SOLUBLE OR MELTABLE MATERIALS OR A SLURRY OF MINERALS FROM WELLS
    • E21B 43/00 - Methods or apparatus for obtaining oil, gas, water, soluble or meltable materials or a slurry of minerals from wells
    • E21B 43/16 - Enhanced recovery methods for obtaining hydrocarbons
    • E21B 2200/00 - Special features related to earth drilling for obtaining oil, gas or water
    • E21B 2200/20 - Computer models or simulations, e.g. for reservoirs under production, drill bits
    • E21B 2200/22 - Fuzzy logic, artificial intelligence, neural networks or the like

Definitions

  • the present disclosure relates to systems and methods for using Fourier transform-based machine learning for well placement during energy development.
  • drilling wells is one of the largest and most important capital expenditures that operating companies incur. Deciding where to drill into a reservoir may be one of the most important decisions made during energy development, gas storage operations, geothermal operations, hydrogeology operations, and/or carbon sequestration operations.
  • computational optimization of the placement of multiple wells is an extremely difficult problem to solve due in part to the large number of potential locations that can be considered, the uncertainty of the subsurface characterization for such potential locations, and the combinatorial nature of well-placement evaluations for said potential locations. For example, one permeability field to be analyzed for placement of one to four wells in a grid of sixty-four locations by fifty-six locations gives trillions of possibilities.
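The scale of that combinatorial search can be checked directly. A quick Python illustration (the 64-by-56 grid and the one-to-four-well range come from the example above):

```python
import math

# 64 x 56 grid of candidate locations, choosing 1 to 4 well sites
grid_cells = 64 * 56  # 3584 candidate locations
total = sum(math.comb(grid_cells, k) for k in range(1, 5))
print(f"{total:.3e} placement combinations")  # on the order of trillions
```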
  • a method for applying Fourier Transform techniques to a machine learning model for well-placement operations comprises: obtaining properties related to an underground reservoir; encoding the properties into a latent space; performing a Fourier transform on a first portion of the encoded properties to obtain a transformed first portion; convolving the transformed first portion of the encoded properties using a neural network to obtain a convolved first portion; performing an inverse Fourier transform on the convolved first portion of the encoded properties to obtain an inversely transformed convolved first portion; performing local convolution on a second portion of the encoded properties to obtain a convolved second portion; combining the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties; and decoding predicted results related to a potential well location based at least in part on the combination of the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties.
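The method above closely mirrors a Fourier-neural-operator-style layer: a spectral path (Fourier transform, convolution in frequency space, inverse transform) summed with a local convolution path. The NumPy sketch below only illustrates that structure under assumed shapes and random weights; it is not the patented implementation, and the encoder/decoder steps are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_layer(z, spectral_w, local_w):
    """One Fourier-transform-based layer over encoded properties z of shape (H, W, C).

    First portion: FFT -> per-mode channel mix (spectral convolution) -> inverse FFT.
    Second portion: pointwise (local) convolution.
    The two results are summed and passed through a GELU-like activation.
    """
    # Fourier transform of the encoded properties over the spatial axes
    zf = np.fft.rfft2(z, axes=(0, 1))                       # (H, W//2+1, C)
    # "Convolve" in Fourier space: a linear mix of channels at each mode
    zf = np.einsum('hwc,cd->hwd', zf, spectral_w)
    # Inverse Fourier transform back to the spatial domain
    spectral = np.fft.irfft2(zf, s=z.shape[:2], axes=(0, 1))
    # Local (1x1) convolution on the second portion of the encoded properties
    local = np.einsum('hwc,cd->hwd', z, local_w)
    # Combine the two portions and apply a tanh-approximated GELU activation
    y = spectral + local
    return 0.5 * y * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (y + 0.044715 * y**3)))

z = rng.standard_normal((64, 56, 8))     # encoded properties on a 64x56 grid
w_s = rng.standard_normal((8, 8)) * 0.1  # spectral weights (complex in practice)
w_l = rng.standard_normal((8, 8)) * 0.1  # local weights
out = fourier_layer(z, w_s, w_l)
print(out.shape)  # (64, 56, 8)
```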
  • a system and a computer program can include or execute the method for applying Fourier Transform techniques to a machine learning model for well-placement operations.
  • Encoding the properties into a latent space comprises encoding the properties using a fully connected neural network.
  • combining the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties comprises summing the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties and applying an activation function.
  • the activation function comprises a Gaussian error linear unit (GELU) activation function.
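The GELU activation weights each input by the standard normal CDF of its value, x·Φ(x). A minimal sketch of the combine-and-activate step (the input values are arbitrary):

```python
import math

import numpy as np

def gelu(x):
    """Gaussian error linear unit: x * Phi(x), Phi being the standard normal CDF."""
    return x * 0.5 * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0)))

# Sum the two convolved portions, then apply the activation
convolved_local = np.array([0.5, -1.0, 2.0])
convolved_spectral = np.array([0.5, -1.0, 0.0])
combined = gelu(convolved_local + convolved_spectral)
print(np.round(combined, 4))  # [ 0.8413 -0.0455  1.9545]
```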
  • the foregoing method can also comprise iteratively performing Fourier transformations and convolutions on the combination of the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties a plurality of times, such that the predicted results are based at least in part on the iterative transformations and convolutions.
  • the disclosed method can also comprise filtering frequencies in the Fourier space before executing the convolution.
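Frequency filtering before the spectral convolution is typically a low-pass truncation that zeroes all but the lowest modes. A sketch assuming an `rfft2` spectrum layout (the 12-mode cutoff is an arbitrary illustrative choice):

```python
import numpy as np

def low_pass(zf, modes):
    """Zero all but the lowest `modes` frequencies of an rfft2 spectrum (H, W//2+1, C)."""
    out = np.zeros_like(zf)
    out[:modes, :modes] = zf[:modes, :modes]    # low positive frequencies along axis 0
    out[-modes:, :modes] = zf[-modes:, :modes]  # low negative frequencies along axis 0
    return out

z = np.random.default_rng(1).standard_normal((64, 56, 4))
zf = np.fft.rfft2(z, axes=(0, 1))
filtered = low_pass(zf, modes=12)
print(filtered.shape)  # (64, 29, 4)
```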
  • the obtained properties comprise a stack of images showing one or more conditions related to the underground reservoir.
  • the stack of images referenced above comprises representations of measurements related to horizontal permeability, vertical permeability, porosity, water levels, depth, well locations/control, or a combination thereof.
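Such a stack is commonly assembled by placing one 2D map per property along a channel axis. A hedged illustration (the property names, values, and grid size are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(2)
grid = (64, 56)  # illustrative reservoir grid

# One 2D map per measured property
properties = {
    "horizontal_permeability": rng.lognormal(size=grid),
    "vertical_permeability": rng.lognormal(size=grid),
    "porosity": rng.uniform(0.05, 0.35, size=grid),
    "depth": rng.uniform(800.0, 1200.0, size=grid),
    "well_locations": np.zeros(grid),
}
properties["well_locations"][32, 28] = 1.0  # mark a candidate injection well

# Stack into a single (H, W, C) image for the network
stack = np.stack(list(properties.values()), axis=-1)
print(stack.shape)  # (64, 56, 5)
```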
  • the potential well location corresponds to a potential well location to be used for carbon capture and storage.
  • the predicted results comprise data indicating carbon dioxide gas saturation, an amount or percentage of carbon dissolved in water, capillary capture, and/or reactions after carbon injection as part of carbon capture and storage.
  • multiple predicted results are generated using a plurality of neural networks that comprises the neural network.
  • a method for applying Fourier techniques to data associated with an underground reservoir comprises: obtaining data related to an underground reservoir; obtaining subsequent conditions of the underground reservoir after placement of a well at a potential location; and training one or more neural networks to map input conditions in the data to the subsequent conditions after placement of the well using discrete Fourier transform (DFT) forms.
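Concretely, training data of this kind is a set of paired images: input conditions and the corresponding subsequent conditions. The sketch below builds such pairs with a stand-in smoothing "simulator" (purely illustrative; a real workflow would use a reservoir simulator or later measurements):

```python
import numpy as np

def toy_simulator(state, steps=10):
    """Stand-in for a reservoir simulator: diffuses the state toward a resting condition."""
    for _ in range(steps):
        state = state + 0.2 * (
            np.roll(state, 1, 0) + np.roll(state, -1, 0)
            + np.roll(state, 1, 1) + np.roll(state, -1, 1) - 4 * state
        )
    return state

rng = np.random.default_rng(3)
inputs = rng.standard_normal((16, 32, 32))              # 16 input-condition images
targets = np.stack([toy_simulator(x) for x in inputs])  # subsequent conditions

# A network would then be trained to map inputs -> targets, e.g. with DFT-based layers
print(inputs.shape, targets.shape)
```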
  • a system and a computer program can include or execute the method for applying Fourier techniques to the data associated with the underground reservoir.
  • in the foregoing method, obtaining the subsequent conditions can comprise simulating the subsequent conditions from the input conditions in the data.
  • obtaining the subsequent conditions can comprise using data from a subsequent time compared to the data at which the input conditions are obtained.
  • the subsequent conditions correspond to a final resting condition after well placement at a later time.
  • a system for applying Fourier Transform techniques to modeling parameters associated with a reservoir may include memory storing instructions; and a processor configured to execute the instructions to cause the system to: encode first and second properties corresponding to an underground reservoir into a latent space to obtain first and second encoded properties; perform a Fourier transform on the first encoded properties to obtain transformed first encoded properties; perform low pass filtration on the transformed first encoded properties to obtain filtered first encoded properties; convolve the filtered first encoded properties in a neural network to obtain convolved first encoded properties; perform an inverse Fourier transform on the convolved first encoded properties to obtain inversely transformed first encoded properties; perform a local convolution on the second encoded properties to obtain convolved second encoded properties; combine the convolved second encoded properties and the inversely transformed first encoded properties; and decode predicted results related to a potential well location based at least in part on the combination of the convolved second encoded properties and the inversely transformed first encoded properties.
  • the neural network is implemented at least in part using the processor.
  • the foregoing system may comprise one or more fully connected neural networks that are configured to: encode the first and second properties corresponding to the underground reservoir into the latent space; and decode the predicted results related to the potential well location based at least in part on the combination of the convolved second encoded properties and the inversely transformed first encoded properties.
  • the processor may be configured to execute the instructions to cause the system to perform iterative Fourier-based convolutions on the combination of the convolved second encoded properties and the inversely transformed first encoded properties, wherein the predicted results are based at least in part on the iterative Fourier-based convolutions.
  • FIG. 1 illustrates a diagram of a drilling system used to drill and capture data in and/or around an oilfield related to an underground reservoir, according to some embodiments.
  • FIG. 2 illustrates a computing system used to process data related to the underground reservoir of FIG. 1 , according to some embodiments.
  • FIG. 3 illustrates a process for operating the computing system of FIG. 2 to train a neural network to perform Fourier-transform-based machine learning, according to some embodiments.
  • FIG. 4 illustrates a process showing construction of the trained neural network trained using the process of FIG. 3 to perform Fourier-transform based prediction of underground reservoir properties, according to some embodiments.
  • FIG. 5 provides an exemplary mapping of input data and/or other boundary conditions data for a reservoir under consideration to output data including state data, according to some embodiments.
  • FIG. 6 shows an exemplary visualization of the processing steps associated with FIG. 3 , according to some embodiments.
  • FIG. 7 shows exemplary dimensions used for coupling or modeling a reservoir to an aquifer, according to some embodiments.
  • FIG. 8 shows location data associated with the disclosed feeders, according to some embodiments.
  • FIG. 9 shows a hybrid workflow associated with generating and training a machine learning model, according to some embodiments.
  • FIG. 10 depicts a relationship between the impact of training data on the accuracy of a machine learning model, according to some embodiments.
  • FIG. 11 depicts an exemplary report that includes state data associated with a plurality of parameters of a machine learning model.
  • FIG. 12 depicts well placement data associated with the report of FIG. 11 , according to some embodiments.
  • first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, region, layer or section from another region, layer, or section. Terms such as “first,” “second,” and other numerical terms, when used herein, do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer, or section discussed herein could be termed a second element, component, region, layer, or section without departing from the teachings of the example embodiments.
  • a deep learning model (e.g., a neural network model) may be trained to associate or map input and boundary conditions associated with subterranean structures such as an underground reservoir, a saline aquifer, or a depleted oil and gas reservoir.
  • some reservoirs may be relatively deep (e.g., over half a mile) underground at a given resource site (e.g., an oil or gas field).
  • Such relatively deep reservoirs may make determining optimal well locations and/or estimating drilling or other extraction conditions or parameters challenging, even with wellbore logging measurements and/or measurements from above the surface under which the reservoir resides, since large parts of the environment surrounding the reservoir may be unseen or otherwise inaccessible for taking measurements and/or unavailable for accurately modeling the subsurface.
  • unseen conditions associated with the reservoir under consideration may include rock properties, fluid properties, and conditions such as reservoir pressure data, chemical concentrations data, temperature data, etc.
  • a neural network may be used to approximate partial differential equations that characterize properties of a reservoir.
  • the neural network may use Fourier Transforms to predict a final state and/or other temporal interval data (e.g., at 1, 2, 3, or more months) without determining each intermediate result, by analyzing input data and/or conditions in the frequency domain through Fourier transforms (e.g., Fast Fourier Transforms (FFT) to compute a Discrete Fourier Transform (DFT)).
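The frequency-domain shortcut rests on the convolution theorem: multiplying DFTs pointwise is equivalent to circular convolution in the spatial domain, and the FFT computes the DFT in O(N log N). A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(4)
a = rng.standard_normal(64)
k = rng.standard_normal(64)

# Circular convolution via the DFT (computed with the FFT)
via_fft = np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(k), n=64)

# Direct circular convolution for comparison
direct = np.array([sum(a[j] * k[(i - j) % 64] for j in range(64)) for i in range(64)])

print(np.allclose(via_fft, direct))  # True
```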
  • well outcome predictions may be generated more quickly.
  • well placement analysis may predict reservoir conditions orders of magnitude faster than some prediction calculations.
  • this increased speed may allow analysis of many more potential well locations and can enable more efficient use of optimization tools, such as differential evolution and quantum optimizations (e.g., quadratic unconstrained binary optimization (QUBO)) tools.
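To make the QUBO connection concrete: candidate sites become binary variables, predicted per-site value goes on the diagonal (negated, since a QUBO minimizes x^T Q x), and pairwise interference penalties go off-diagonal. A toy brute-force solve over four hypothetical sites (all coefficients invented for illustration; real solvers replace the enumeration):

```python
import itertools

import numpy as np

# QUBO matrix for 4 candidate sites: diagonal = -predicted site value,
# upper off-diagonal = interference penalty between site pairs
Q = np.array([
    [-3.0,  2.0,  0.0,  0.0],
    [ 0.0, -2.0,  2.0,  0.0],
    [ 0.0,  0.0, -4.0,  2.0],
    [ 0.0,  0.0,  0.0, -1.0],
])

# Enumerate all 2^4 site selections and keep the minimum-energy one
best = min(
    (np.array(x) for x in itertools.product([0, 1], repeat=4)),
    key=lambda x: x @ Q @ x,
)
print(best, best @ Q @ best)  # [1 0 1 0] -7.0: selects sites 0 and 2
```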
  • FIG. 1 shows one example of a drilling system 100 for drilling an earth formation 101 to form a wellbore 102 in an oilfield.
  • the drilling system 100 includes a drill rig 103 used to support and rotate a drilling tool assembly 104 that extends downward into the wellbore 102 .
  • the drilling tool assembly 104 may include a drill string 105 , a bottomhole assembly (“BHA”) 106 , and a bit 110 , attached to the downhole end of drill string 105 .
  • the drill string 105 may include several joints of drill pipe 108 connected end-to-end through tool joints 109 .
  • the drill string 105 transmits drilling fluid through a central bore and transmits rotational power from the drill rig 103 to the BHA 106 .
  • the drill string 105 further includes additional components, such as subs, pup joints, and so forth.
  • the drill pipe 108 provides a hydraulic passage through which drilling fluid is pumped from the surface.
  • the drilling fluid discharges through nozzles, jets, or other orifices in the bit 110 and/or the BHA 106 for the purposes of cooling the bit 110 and cutting structures thereon, and for transporting cuttings out of the wellbore 102 .
  • the BHA 106 may include the bit 110 or other components.
  • An example BHA 106 may include additional or other components (e.g., coupled between the drill string 105 and the bit 110 ).
  • additional BHA components include drill collars, stabilizers, measurement-while-drilling (“MWD”) tools, logging-while-drilling (“LWD”) tools, downhole motors, underreamers, section mills, hydraulic disconnects, jars, vibration or dampening tools, other components, or combinations of the foregoing.
  • the bit 110 may also include other cutting structures in addition to or other than a drill bit, such as milling or underreaming tools.
  • the drilling system 100 may include other drilling components and accessories, such as make-up/break-out devices (e.g., iron roughnecks or power tongs), valves (e.g., kelly cocks, blowout preventers, and safety valves), other components, or combinations of the foregoing. Additional components included in the drilling system 100 may be considered a part of the drilling tool assembly 104 , the drill string 105 , or a part of the BHA 106 depending on their locations in the drilling system 100 .
  • the bit 110 in the BHA 106 may be any type of bit suitable for degrading a geological formation or other downhole materials.
  • the bit 110 may be a drill bit suitable for drilling the earth formation 101 .
  • Exemplary types of drill bits used for drilling earth formations are fixed-cutter or drag bits, roller cone bits, and percussion hammer bits.
  • the bit 110 is an expandable underreamer used to expand a wellbore diameter.
  • the bit 110 is a mill used for removing metal, composite, elastomer, other downhole materials, or combinations thereof.
  • the bit 110 may be used with a whipstock to mill into a casing 107 lining the wellbore 102 .
  • the bit 110 may also be used to mill away tools, plugs, cement, and other materials within the wellbore 102 , or combinations thereof. Swarf or other cuttings formed by use of a mill may be lifted to the surface or may be allowed to fall downhole.
  • the wellbore 102 may be measured using a wireline tool in the oilfield.
  • the oilfield may include a surface unit 112 that is used to capture data from a wireline operation, pumping operation, drilling operation, extraction operation, or any other operation being performed in the oilfield.
  • the wireline tool may, for example, have an explosive, radioactive, electrical, or acoustic energy source that sends and/or receives electrical signals to surrounding subterranean formations and/or fluids therein. Return signals may be detected using the wireline tool and/or other tools located at other locations at/near the oilfield. Additionally or alternatively, at least some of such measurements may be made from the surface.
  • Computer facilities may be positioned at various locations about the oilfield (e.g., the surface unit 112 ) and/or at remote locations.
  • the surface unit 112 may be used to communicate with the wireline tool, and/or offsite operations, as well as with other surface sensors 114 or downhole sensors.
  • the surface unit 112 is capable of communicating with the wireline tool or surface sensors 114 to send commands to the wireline tool and to receive data from the wireline tool or the surface sensors 114 .
  • the surface unit 112 may also collect data generated during the drilling operation and/or logging and may produce data output 116 , which may then be stored or transmitted.
  • the surface unit 112 may collect data generated during the wireline operation or other operations and may produce data output 116 that may be stored or transmitted (e.g., via a transmitter of the surface unit 112 ).
  • the wireline tool may be positioned at various depths in the wellbore 102 to provide a survey or other information relating to the subterranean formation.
  • the surface unit 112 , the wireline tool, and/or surface sensors 114 may include any suitable device, such as a geophone, an accelerator porosity sonde (APS), a high-temperature lithodensity tool (HLDT), a dual laterolog (DLL), a phasor dual induction-spherically focused resistivity tool (DIT-E), a temperature sensor, a pressure sensor, a natural gamma sonde (HNGS), various sonic/acoustic measuring devices, an inclinometer tool, a neutron measurement device, an aluminum activation clay tool (AACT), an induced gamma spectrometry tool (GSST), a seismic truck, a computer, a seismic sensor, a fluid composition sensor, a water content sensor, and/or other measuring or analyzing devices suitable for obtaining properties related to the underground reservoir.
  • the surface unit 112 may include one or more various sensors and/or gauges that may additionally or alternatively be located at other locations in the oilfield. These sensors and/or gauges may be positioned about the oilfield to collect data relating to various field operations. For instance, at least one downhole sensor may be positioned in the wireline tool/BHA 106 to measure downhole parameters which relate to, for example, porosity, permeability, fluid composition and/or other parameters of the field operation. During drilling, different or more parameters, such as weight on bit, torque on bit, pressures, temperatures, flow rates, compositions, rotary speed, and/or other parameters of the field operation, may be measured.
  • the surface unit 112 may include a transceiver to enable communications between the surface unit 112 and various portions of the oilfield or other locations.
  • the surface unit 112 may also be provided with or functionally connected to one or more controllers for actuating mechanisms at the oilfield.
  • the surface unit 112 may then send command signals to the oilfield in response to data received.
  • the surface unit 112 may receive commands via the transceiver or may itself execute commands to the controller.
  • a computing system including a processor may be provided to analyze the data (locally or remotely), make decisions, control operations, and/or actuate the controller. In this manner, the oilfield may be selectively adjusted based on the data collected. This technique may be used to enhance portions of the field operation, such as controlling drilling, weight on bit, pump rates, and/or other parameters. These adjustments may be made automatically based on an executing application with or without user input.
  • At least some of the data output 116 may be captured during drilling such that the wireline tool is replaced and/or supplemented by drilling tools suspended by the drill rig 103 and advanced into the subterranean formations to form the wellbore 102 .
  • a mud pit 118 may be used to draw drilling mud into the drilling tools via a flow line for circulating drilling mud down through the drilling tools, then up the wellbore 102 and back to the surface. The drilling mud may be filtered and returned to the mud pit 118 .
  • a circulating system may be used for storing, controlling, or filtering the flowing drilling muds.
  • the drilling tools are advanced into subterranean formations to reach a reservoir 120 . Each well may target one or more reservoirs.
  • the drilling tools may be adapted for measuring downhole properties using logging while drilling tools.
  • the logging while drilling tools may also be adapted for taking core samples.
  • the wireline/BHA 106 tool may include a communication subassembly that communicates with the surface unit 112 .
  • the communication subassembly may be adapted to send signals to and receive signals from the surface using a communications channel such as mud pulse telemetry, electromagnetic telemetry, or wired drill pipe communications.
  • the communication subassembly may include, for example, a transmitter that generates a signal, such as an acoustic or electromagnetic signal, which is representative of the measured parameters. It is appreciated by one having skill in the art that a variety of telemetry systems may be employed, such as wired drill pipe, electromagnetic, or other known telemetry systems.
  • the data gathered by surface sensors 114 and/or other sensors may be collected by the surface unit 112 and/or other data collection sources for analysis or other processing.
  • the data collected by the surface sensors 114 may be used alone or in combination with other data.
  • the data may be collected in one or more databases and/or transmitted to another location on-site or off-site.
  • the data may be historical data, real time data, or combinations thereof.
  • the real time data may be used in real time or stored for later use.
  • the data may also be combined with historical data and/or other inputs for further analysis.
  • the data may be stored in separate databases and/or combined into a single database.
  • the wellbore 102 is drilled according to a drilling plan that is established prior to drilling.
  • the drilling plan can set forth equipment data associated with the drilling, fluid or material pressure data associated with the drilling, drilling trajectory data, and/or other parameters that characterize the drilling process for the wellsite.
  • the drilling operation may then be performed according to the drilling plan.
  • the generation of this drilling plan may be extensive and/or costly in terms of computing resources and/or in terms of time due to the numerous potential well locations and/or complications of wellbore surroundings.
  • at least a portion of the prediction of well placement results may be made using neural networks by performing convolutions on the input data that has been transformed into the frequency domain using Discrete Fourier Transform (DFT) techniques.
  • FIG. 2 is a block diagram of a system 250 that may be used for analyzing/using data output 116 referenced in FIG. 1 .
  • the data output 116 may be received as input data 252 at a computing system 254 .
  • the computing system 254 may be implemented in the surface unit 112 and/or may be implemented at other locations within a resource site (e.g., oilfield, gas field, etc.) or remotely relative to the resource site such that the remote locations are able to receive the data.
  • the various functional blocks shown in FIG. 2 may include hardware elements (e.g., including circuitry), software elements (e.g., including computer code stored on a tangible computer-readable medium), or a combination of both hardware and software elements. It is appreciated that FIG. 2 is merely an example of a particular implementation and is intended to illustrate the types of components that may be present in the computing system 254 .
  • the computing system 254 can include one or more processor(s) 256 , a memory 258 , a display 260 , input devices 262 , one or more neural network(s) 264 (e.g., data models that are structured based on neural network(s)), and one or more interface(s) 266 .
  • the processor(s) 256 may be operably coupled with the memory 258 to facilitate the use of the processor(s) 256 to implement various stored programs.
  • Such programs or instructions executed by the processor(s) 256 may be stored in any suitable article of manufacture that includes one or more tangible, computer-readable media at least collectively storing the instructions or routines, such as the memory 258 .
  • the memory 258 may include any suitable articles of manufacture for storing data and executable instructions, such as a random-access memory, a read-only memory, a rewritable flash memory, hard drives, and optical discs, or other electronic data storage media.
  • programs (e.g., an operating system) encoded on a computer program product may also include instructions that may be executed by the processor(s) 256 to enable the computing system 254 to provide various functionalities.
  • the input devices 262 of the computing system 254 may enable a user to interact with the computing system 254 (e.g., pressing a button to increase or decrease a volume level).
  • the interface(s) 266 may enable the computing system 254 to interface with various other electronic devices.
  • the interface(s) 266 may include, for example, one or more network interfaces for a personal area network (PAN), such as a Bluetooth network, for a local area network (LAN) or wireless local area network (WLAN), such as an IEEE 802.11x Wi-Fi network or an IEEE 802.15.4 wireless network, and/or for a wide area network (WAN), such as a cellular network.
  • the interface(s) 266 may additionally or alternatively include one or more interfaces for, for example, broadband fixed wireless access networks (WiMAX), mobile broadband Wireless networks (mobile WiMAX), and so forth.
  • the computing system 254 may include a transceiver (Tx/Rx) 267 .
  • the transceiver 267 may include any circuitry that may be useful in both wirelessly receiving and wirelessly transmitting signals (e.g., data signals).
  • the transceiver 267 may include a transmitter and a receiver combined into a single unit. In other embodiments, the transceiver may be included in circuitry that facilitates both wireless and wired communications.
  • the input devices 262 may allow a user to control the computing system 254 .
  • the input devices 262 may be used to control/initiate operation of the neural network(s) 264 .
  • Some input devices 262 may include a keyboard and/or mouse, a microphone that may obtain a user's voice for various voice-related features, and/or a speaker that may enable audio playback.
  • the input devices 262 may also include a headphone input that may provide a connection to external speakers and/or headphones.
  • the neural network(s) 264 may include hardware and/or software logic that may be arranged in one or more network layers.
  • the neural network(s) 264 may be used to implement machine learning and may include one or more suitable neural network types.
  • the neural network(s) 264 may include a perceptron, a feed-forward neural network, a multi-layer perceptron, a convolutional neural network, a long short-term memory (LSTM) network, a sequence-to-sequence model, and/or a modular neural network.
  • the neural network(s) 264 may include at least one deep learning neural network.
  • the neural network(s) 264 may be trained or otherwise configured to predict reservoir state(s) (e.g., an initial, intermediate, or final state of a reservoir) of a reservoir indicating fluid storage and/or fluid movements within the reservoir such as the reservoir 120 of FIG. 1 .
  • the reservoir state(s) of the reservoir may be part of a reservoir modeling platform used to complement and/or at least partially replace reservoir simulators.
  • the reservoir modeling platform may supplement geological modeling tools such as INTERSECT or ECLIPSE reservoir simulators as part of a modeling platform (e.g., DELFI platform).
  • other planning tools and/or screening tools may use the analysis performed using the neural network(s) 264 .
  • the neural network(s) 264 may be implemented as part of, or as a supplement to a planning solution, such as the FDPlan software.
  • the output of the neural network(s) 264 may be based on the input data 252 and/or inputs to the neural network(s) 264 , such as one or more parameters including but not limited to horizontal permeability, vertical permeability, porosity, depth, well controls (e.g., rate controls of volume injection over time or pressure control, etc.), and/or other parameters.
  • This output may be used by the computing system 254 .
  • the output from the neural network(s) 264 may be transmitted using a communication path 268 from the computing system 254 to a gateway 270 .
  • the communication path 268 may use any of the communication techniques previously discussed as available via the interface(s) 266 .
  • the interface(s) 266 may connect to the gateway 270 using wired (e.g., Ethernet) or wireless (e.g., IEEE 802.11) connections.
  • the gateway 270 can couple the computing system 254 to a wide-area network (WAN) connection 272 , such as the Internet.
  • the WAN connection 272 may couple the computing system 254 to a cloud network 274 .
  • the cloud network 274 may include one or more computing systems 254 grouped into one or more locations (e.g., data centers).
  • the cloud network 274 can also include one or more databases 276 that may be used to store the output of the neural network(s) 264 and/or implement and perform the steps discussed in association with the neural network(s) 264 .
  • the cloud network 274 may perform additional transformations on the data using its own processor(s) 256 and/or neural network(s) 264 .
  • FIG. 3 is a flow diagram of a process 300 that may be used in training and testing the neural network(s) 264 to perform Fourier Transform-based machine learning for well injection locations.
  • the one or more computing systems 254 obtain data associated with an underground reservoir (block 302 ).
  • the subsurface data may include input images (e.g., sensor or non-sensor images) associated with the surface proximal to the reservoir or the subsurface within which the reservoir is located. These input images may indicate three-dimensional (3D) measurements and/or 3D measurements over time (3D plus time).
  • these measurements may be related to subsurface and/or surface parameters that may be relevant to a well operation, such as drilling operations, carbon capture and storage (CCS) operations, injection operations, oil and gas production operations, geothermal injection operations, geothermal production operations, water heating operations, natural gas storage operations, and/or other operations to be performed on or with the underground reservoir.
  • the input images may indicate horizontal permeability data, vertical permeability data, porosity data, water levels data, depth data, well locations/control data, and/or other parametric data.
  • the computing system(s) 254 may simulate conditions of the underground reservoir after well placement and control decisions (block 304 ).
  • the conditions to be simulated may be the same as the parameters in the input images. Additionally or alternatively, the simulated conditions may be related to the parameters of the input images. For instance, if the input images are related to permeability or porosity, the output data may be associated with gas saturation, dissolved carbonated water, capillary capture, and/or reactions after carbon injection.
  • the simulated conditions may be final conditions after the planned operation (e.g., carbon storage) such that the simulations may reflect changes in the underground reservoir over time, but one or more periods of time may be the points of interest. As is discussed below, using discrete Fourier transform (DFT) representations, these periods of interest may be calculated quickly without progressing through intermediate points as done when using the simulations to simulate final conditions.
  • the computing system(s) 254 may use the input images and the simulated results to train the neural network(s) to map input conditions in the input images to conditions after well placement via machine learning using DFT representations (block 306 ).
  • the well locations/controls may be allowed to vary randomly in a supervised learning dataset during training of a machine learning model generated based on the neural network(s).
  • the inputs and outputs of the neural network(s) 264 are mapped to each other by training a neural network using neural operators.
  • a neural operator is a deep-learning architecture that learns based on an operator that maps data between functional spaces. For instance, neural operators may be used to resolve partial differential equations and/or perform Fourier Transforms.
  • trained neural network(s) 264 may be able to predict the outcome from unseen well locations and conditions.
  • the disclosed approach beneficially predicts one or more well placement positions under various operating conditions of a given reservoir based on the use of machine learning models derived from neural network structures combined with Fourier Transforms.
  • well placement is a combinatorial problem that can scale exponentially and may be computationally intractable with reservoir simulators.
  • using a DFT approximation based on the neural network(s) 264 , such computations may be much faster (e.g., 100,000× faster), enabling more effective capital expenditure to be used on well placement determinations and on well development for a given resource site.
  • the computing system(s) 254 may then test or otherwise validate the trained neural network(s) 264 (block 308 ). For instance, additional input data may be simulated to obtain corresponding output conditions and compared to an output of the neural network(s) 264 to ensure accuracy of the trained neural network(s) 264 . In some embodiments empirical data may be used in place of or in addition to the simulated conditions. For instance, if pre- and post-operation images are available from real world measurements, such data may be used in training and/or testing the training of the neural network(s) 264 .
  • the neural network(s) 264 may be used to predict multiple parameters.
  • each parameter may be predicted by respective individual neural networks of the neural network(s) 264 .
  • different neural networks may be used to predict different respective output parameters, such as carbon saturation, carbon dissolution, pressure, and/or any other parameter that may be of interest related to the underground reservoir after well placement.
  • at least one of the neural network(s) 264 may predict multiple parameters in a single neural network.
  • the multiple neural network(s) 264 may function independently of each other. Additionally or alternatively, at least one of the neural network(s) 264 may depend on/use an output from another of the neural network(s) 264 .
  • a neural network 264 may be trained for a first parameter (e.g., pressure) first. The results from the output of the neural network 264 may then be inputted to another neural network 264 that may correspond to another parameter (e.g., carbon saturation) that may at least partially depend on the first parameter. Similarly, in some embodiments, the second parameter may be used in another neural network 264 for a third parameter (e.g., carbon dissolution) that may at least partially depend on the first parameter and/or the second parameter. Likewise, the parameter outputs of any neural network(s) 264 may be subsequently used in another neural network(s) 264 that corresponds to parameters that may be at least partially related to the parameter outputs.
  • FIG. 4 is a block diagram of a process 400 performed using the trained neural network(s) 264 configured using the process 300 of FIG. 3 .
  • the computing system 254 detailed in FIG. 2 obtains properties related to an underground reservoir (block 402 ).
  • the properties may be obtained as a stack of input data (e.g., input images) corresponding to measurements or interpretations of various parameters, such as horizontal permeability data, vertical permeability data, porosity data, depth data, location controls data, well controls data, and the like.
  • the input image may plot one parameter (e.g., permeability data or porosity data) against space and/or time.
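The stacked-input arrangement described above can be sketched as follows; the property list, grid size, and number of time steps below are illustrative assumptions, not values from this disclosure:

```python
import numpy as np

# Illustrative only: one channel per property, stacked over a 3-D grid and time.
# The property names, the 32 x 32 x 16 grid, and the 10 time steps are assumptions.
properties = ["horizontal_perm", "vertical_perm", "porosity", "depth", "well_controls"]
stack = np.zeros((len(properties), 32, 32, 16, 10))  # (channel, x, y, z, t)
```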
  • the computing system 254 then encodes the properties into a latent space (block 404 ).
  • the neural network(s) 264 may include one or more fully connected neural networks with a number (e.g., 8) of outputs.
  • fully connected neural networks may use a rectified linear unit (ReLU) function to shape the input data into the latent space.
  • the latent space comprises a data state of the input data between the input and output of the neural network.
  • data properties of the input data including properties (e.g., physical properties of a reservoir under consideration) such as permeability data, porosity data, etc., may be transformed or encoded by, for example, a first neural transformer comprised in the neural network to generate a transformed input data in the latent space.
  • the transformed input data may or may not be subjected to additional analysis and/or additional transformation by a second, third, or fourth encoder of the neural network, etc., prior to being outputted from the neural network.
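A minimal sketch of such a pointwise encoder, assuming a two-layer fully connected network with ReLU and eight latent outputs (the weight shapes are random stand-ins for trained parameters):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def encode(props, w1, b1, w2, b2):
    """Pointwise fully connected encoder: lifts the stacked property values at
    each grid point into a latent vector (e.g., 8 channels, per the text).
    All weights here are illustrative stand-ins for trained parameters."""
    hidden = relu(props @ w1 + b1)
    return hidden @ w2 + b2

rng = np.random.default_rng(0)
props = rng.standard_normal((1000, 5))   # 1000 grid points x 5 input properties
w1, b1 = rng.standard_normal((5, 16)), np.zeros(16)
w2, b2 = rng.standard_normal((16, 8)), np.zeros(8)
latent = encode(props, w1, b1, w2, b2)   # latent-space representation
```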
  • the computing system 254 then performs a Fourier Transform on the encoded properties in the latent space (block 406 ). For instance, the computing system 254 may perform a fast Fourier transform (FFT) on the encoded points serially in four dimensions: three spatial dimensions and the time dimension. In some embodiments, the computing system 254 may filter at least some frequencies (block 408 ). For instance, the computing system 254 may truncate frequencies higher than a first number (e.g., 12) for the spatial domains and a second number (e.g., 8) for the time domain. This filtration may be implemented using a low-pass filter that is implemented in hardware, software, or a combination of both.
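The four-dimensional transform and frequency truncation of blocks 406 and 408 can be sketched with NumPy; the tensor sizes and cutoff counts below are illustrative, not the example values above:

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.standard_normal((16, 16, 8, 10))   # (x, y, z, t); sizes illustrative

# Forward FFT over the three spatial dimensions and time (block 406).
spec = np.fft.fftn(latent, axes=(0, 1, 2, 3))

def lowpass(spec, keep, axis):
    """Zero every Fourier mode on `axis` at or above `keep` cycles (block 408)."""
    freqs = np.fft.fftfreq(spec.shape[axis]) * spec.shape[axis]
    mask = np.abs(freqs) < keep
    shape = [1] * spec.ndim
    shape[axis] = -1
    return spec * mask.reshape(shape)

# Illustrative cutoffs per axis (x, y, z, t).
for ax, keep in zip(range(4), (6, 6, 4, 4)):
    spec = lowpass(spec, keep, ax)

filtered = np.fft.ifftn(spec, axes=(0, 1, 2, 3)).real
```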
  • FFT fast Fourier transform
  • the neural network(s) 264 then perform convolutions on the filtered (or unfiltered) encoded properties using learned/trained weights in the Fourier space to generate convolved data (block 410 ). For instance, these convolutions may be matrix multiplications of the Fourier-transformed three-dimensional data and/or four-dimensional data. The weights may be for frequencies and/or phase as trained using the process 300 described in relation to FIG. 3 .
  • the computing system 254 then performs an inverse Fourier transform on the convolved data (block 412 ). At least some of the input data may bypass the Fourier transform. This data may instead be locally convolved by the computing system 254 (block 414 ).
  • the local convolution may include a 1-D convolution that is performed using a linear transformation that may maintain at least some spatial awareness that may be lost in the Fourier transformations. Additionally or alternatively, the local convolution may be performed using different/more robust neural networks. For instance, the local convolution may be performed using larger (e.g., 3 ⁇ 3) convolutional networks and/or graph neural networks.
  • the computing system 254 may combine the locally convolved data and the transformed convolved data (block 416 ). For instance, the results may be summed together and/or passed to a Gaussian error linear unit (GeLU) activation function.
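Blocks 410 through 416 together resemble one layer of a Fourier neural operator. The following simplified sketch operates on a (channels, x, t) latent tensor, uses random stand-in weights, and applies the tanh approximation of GeLU; it is an illustration of the technique, not the disclosed implementation:

```python
import numpy as np

def gelu(x):
    # Tanh approximation of the Gaussian error linear unit.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def fourier_layer(v, spectral_w, local_w, modes):
    """One simplified FNO-style layer: spectral convolution on the lowest
    Fourier modes, plus a pointwise "local" convolution that bypasses the
    transform, combined through a GeLU activation."""
    c, nx, nt = v.shape
    spec = np.fft.rfft2(v, axes=(1, 2))            # forward FFT (block 406)
    out_spec = np.zeros_like(spec)
    mx, mt = modes                                 # implicit mode truncation (408)
    out_spec[:, :mx, :mt] = np.einsum(             # learned spectral weights (410)
        "ioxt,ixt->oxt", spectral_w, spec[:, :mx, :mt])
    spectral = np.fft.irfft2(out_spec, s=(nx, nt), axes=(1, 2))  # inverse FFT (412)
    local = np.einsum("io,ixt->oxt", local_w, v)   # local convolution bypass (414)
    return gelu(spectral + local)                  # combine and activate (416)

# Demo with random stand-in weights: 4 channels, 6 x 6 retained modes.
rng = np.random.default_rng(0)
v = rng.standard_normal((4, 16, 16))
spectral_w = (rng.standard_normal((4, 4, 6, 6))
              + 1j * rng.standard_normal((4, 4, 6, 6)))
local_w = rng.standard_normal((4, 4))
out = fourier_layer(v, spectral_w, local_w, (6, 6))
```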
  • the multiple iterations and/or re-transformations beneficially train a model associated with the neural network to enhance said model's accuracy in predicting data outputs for new reservoirs that may be similar to or distinct from the reservoir used to train the model.
  • the neural network(s) 264 may comprise a first linear step for processing the input data followed by applying an “activation function” to the processed input data.
  • the activation function for example, provides discrete or non-discrete amounts of data processing or data transformations that are non-linear.
  • the first linear step may be followed by multiple activation function operations on data generated from the first linear step and thereby provide a plurality of discrete, incremental, or small amounts of non-linear processing to the data generated from the first linear step.
  • for each of N successive passes, the output of the combination replaces the encoded input.
  • the convolution may occur in Fourier space a single time, in which case N (the number of repeated passes) would be zero.
  • the computing system 254 may decode predicted results from the output of the combination in block 416 .
  • the computing system 254 may use a fully connected neural network to decode the output based at least in part on the combination of the transformed convolved data and the locally convolved data to generate a predicted outcome for a parameter, such as carbon saturation, carbon dissolution, and pressure when the operation being analyzed is well placement for carbon capture and storage. Similar or different predicted outcomes may be made for other types of reservoir-related operations using similar techniques based on different input data and different predicted results.
  • the predicted results may be made by skipping to periods of interest (e.g., the end) rather than being forced to simulate through the periods over time.
  • the predicted results may be made while skipping at least some intermediate results.
  • Although the process 400 shows a list of process blocks, the sub-operations detailed in these blocks may be performed in an order other than that shown in FIG. 4 .
  • the local convolution may be performed before, after, and/or at least partially simultaneously with the Fourier-based convolutions.
  • Applying the principles of the process 400 that may have been trained using the process 300 with a suitable number (e.g., 3,000) of training cases may provide a suitable mechanism to predict the output of numerous combinations (e.g., 7 trillion) quickly with a relatively low amount of processing (e.g., 1.6 ms per prediction per NVIDIA V100 tensor core graphics processing unit used).
  • the process 300 enables faster and more efficient searching for favorable/optimum well combinations
  • a fast-running machine learning model (e.g., a reservoir proxy model) is capable of providing (e.g., predicting) state data associated with, for example, fluid states and physical conditions relative to a reservoir of interest and/or well placement locations associated with said reservoir.
  • the disclosed approach can be used to generate data (e.g., reports) indicating the final state and location of a gas such as CO2 with attendant properties including CO2 saturation data, dissolved CO2 data, and/or CO2 pressure data.
  • the generated data or reports may be used to: guide or otherwise direct carbon capture and storage operations associated with the reservoir; determine optimal well locations to place about the reservoir; inform fluid extraction strategies associated with retrieving fluid from the reservoir; etc.
  • the machine learning model is a fast-running model configured or otherwise optimized to determine well placement locations for a reservoir that best traps, for example, fluid such as carbon dioxide.
  • a machine learning model is developed and structured based on a neural network architecture (e.g., image-to-image neural network) that is adapted to predict state data (e.g., the final state of a carbon capture) for fluid storage within the reservoir under various well placement scenarios.
  • the neural network architecture may enable the machine learning model to have a plurality of surface or subsurface parameters that characterize, for example, a given reservoir, including a permeability parameter (e.g., a horizontal permeability parameter and a vertical permeability parameter), a porosity parameter, a depth parameter, and a well location or well controls parameter.
  • the parameters of the machine learning model may be trained using, for example, datasets (e.g., supervised and/or unsupervised learning datasets) including sensor datasets associated with the reservoir and/or synthetic datasets associated with the reservoir as the case may require.
  • the datasets may comprise spatial data such as multi-dimensional image data (e.g., 3-dimensional image data) of the reservoir under consideration and/or spatio-temporal data such as time-dependent multi-dimensional image data that relate the spatial and temporal aspects of the reservoir. This can include a 3-dimensional image of the reservoir under consideration for a period of time.
  • the well locations/controls parameter referenced above may be allowed to vary randomly or be selected strategically from all their possible values.
  • training and testing the machine learning model based on the datasets may be achieved using a reservoir simulation tool that is configured to determine or estimate state data (e.g., final states of fluid stored in the reservoir) by indicating said state data in, for example, reports that include images of the stored fluid based on the state data, including saturation information associated with said stored fluid, dissolved fluid information within the reservoir based on the state data, and fluid pressure data of stored fluid within the reservoir derived from the state data.
  • the machine learning model comprises one or more neural operators that map inputs (e.g., input dataset such as training datasets and/or non-training datasets) to output datasets including the predictions and/or estimations (e.g., state data) referenced above.
  • the neural network structure of the machine learning model beneficially predicts the results of unseen well locations and/or conditions associated with a reservoir under consideration.
  • the machine learning model, in response to being trained, is able to ingest new data such as permeability data (e.g., horizontal permeability data and vertical permeability data), porosity data, and depth data associated with a new reservoir similar to, or different from, the reservoir used to configure or train the machine learning model, and thereby generate state data indicating multiple well placement locations for the new reservoir under various operating conditions.
  • training the machine learning model comprises applying a deep learning process to the machine learning model that associates or maps (associative or mapping 504 of FIG. 5 ) input data and/or other boundary conditions data (e.g., a stack 502 of input data and/or boundary condition data) for a reservoir under consideration to output data including the above-referenced state data (e.g., state data 506 ).
  • the training process may comprise:
  • These steps are visualized in FIG. 6 . It is appreciated that the above process may be used to generate a trained machine learning model, which in turn may be used to accurately predict unseen well combinations for well placement screening.
  • the machine learning model generated predicted outputs for approximately 7 trillion possible combinations within a timeframe of 1.6 ms per prediction per V100 GPU used. This allows for much faster and more efficient searching for favorable well combinations for energy development operations.
  • the disclosed process for determining well placement for a given reservoir based on the aforementioned report beneficially informs optimal locations to drill or otherwise access a given reservoir for, for example, fluid storage.
  • This allows the optimal placement of multiple wells about a given reservoir as well as executing screening operations for large geological areas associated with said reservoir for suitable carbon capture and storage (CCS) operations such as injection operations.
  • the disclosed approach enables characterizing, based on the generated report from the above steps, uncertainty data associated with executing the CCS operations.
  • the disclosed approach enables CCS process optimizations based on the uncertainty data as well as inform fluid access decisions associated with accessing fluid comprised in a reservoir under consideration. It is appreciated that the disclosed machine learning model can be used to provide superior well placement results for a given reservoir, faster total workflow time in determining said results, and lower computing costs under tested conditions.
  • the disclosed example is directed to optimally placing four wells (e.g., vertical injector wells) about a reservoir at a resource site (e.g., oilfield, gas field, site with a depleted reservoir, etc.).
  • the approach used solves a combinatorial well placement problem given the complexity associated with a large number of possible results in the solution space. For example, even with a defined modest well-placement grid, there were over 6.8 trillion possible combinations of well placement locations for the disclosed example.
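To illustrate the scale, assume (hypothetically; the grid size is not stated in this disclosure) a 60 × 60 placement grid: the number of unordered placements of four wells already reaches roughly seven trillion candidates:

```python
from math import comb

# Hypothetical 60 x 60 placement grid, chosen only to show how four-well
# placement scales combinatorially to trillions of candidates.
cells = 60 * 60
combinations = comb(cells, 4)   # unordered placements of four distinct wells
```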
  • properties associated with the reservoir such as heterogenous permeability and variable well locations implicate complex interference patterns and plume shapes for the reservoir under consideration.
  • determining optimal locations about the reservoir for fluid storage may be based on: first data associated with the amount of fluid (e.g., gas) to be injected or otherwise pumped into the reservoir; second data associated with the safety or security associated with injecting said fluid into the reservoir; and third data indicating the cost of injecting said fluid into the reservoir.
  • the first data, second data, and third data may be combined into a single objective function that drives determining optimal locations about the reservoir to place the one or more wells.
  • the quantity of fluid being stored within the reservoir under consideration may have associated incentivizing credits (e.g., carbon credits) for organizations such that the more fluid or gas is stored, the more carbon credits said organization accrues.
  • the fluid to be stored or trapped within the reservoir may be stored in for example, a plurality of ways (e.g., one of four ways).
  • the fluid storage security/safety may be based on: structural data associated with the reservoir; capillary data associated with the reservoir; solubility data associated with the reservoir; and mineral data associated with the reservoir.
  • the multiplier data (e.g., multipliers) comprises an objective function which numerically, quantitatively, or qualitatively characterizes or describes optimal, desired, or expected output data associated with applying input data to the model.
  • the objective function may characterize how desirable an outcome is so that the model can be optimized to provide such an outcome.
  • the objective function can be associated with the net present value associated with a reservoir under consideration. Using CO2 storage in a depleted reservoir under consideration as an example, if it is determined that CO2 trapped in state A within said reservoir is twice as valuable as in state B associated with the reservoir, the multiplier data may include a multiplier value of 2 within the objective function for all CO2 being modeled based on state B.
  • a penalty function (e.g., a large penalty function) is provided to correlate or otherwise represent any fluid leaks (e.g., fluid leaks associated with stored fluid within the reservoir) outside of specified fluid leak thresholds associated with the reservoir.
  • costs associated with developing optimally placed wells about the reservoir in question may be estimated based on a total drilling distance required for all locations from a platform drilling center. As such, drilling wells further apart incurs a higher cost.
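A hedged sketch of such a single objective function, combining multiplier-weighted stored mass, a drilling-distance cost from the platform center, and a large leak penalty; every function name, weight, and threshold below is hypothetical:

```python
import numpy as np

def drilling_cost(well_ij, center_ij, cost_per_cell=1.0):
    """Cost proxy (third data): total distance of all wells from the platform
    drilling center, so wells drilled further apart incur a higher cost."""
    d = np.linalg.norm(np.asarray(well_ij, float) - np.asarray(center_ij, float),
                       axis=1)
    return cost_per_cell * d.sum()

def objective(mass_by_state, state_multipliers, leaked_mass, leak_threshold,
              well_ij, center_ij, leak_penalty=1e6):
    """Single scalar objective: multiplier-weighted stored mass (first data),
    minus a large penalty for out-of-threshold leaks (second data), minus
    the drilling cost (third data)."""
    value = sum(m * w for m, w in zip(mass_by_state, state_multipliers))
    if leaked_mass > leak_threshold:
        value -= leak_penalty
    value -= drilling_cost(well_ij, center_ij)
    return value
```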
  • the objective function referenced above may be determined at, for example, a first time step (e.g., a single time step) according to one embodiment.
  • a time-weighted function is used to characterize the objective function.
  • porosity data associated with a location for well placement may be modeled using a Gaussian random function simulation that may have a spherical variogram of about 1000 m N/S, 500 m E/W with a 40 m vertical anisotropy.
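As a rough stand-in for a Gaussian random function simulation with an anisotropic variogram, anisotropically smoothed white noise can illustrate the idea; the smoothing lengths and the porosity range below are assumptions, not the stated variogram parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Anisotropic smoothing of white noise: longer correlation N/S than E/W,
# and short vertically, mimicking the variogram anisotropy in spirit only.
noise = rng.standard_normal((64, 64, 16))
field = gaussian_filter(noise, sigma=(8, 4, 1))

# Rescale to a plausible (assumed) porosity interval.
porosity = 0.05 + 0.25 * (field - field.min()) / (field.max() - field.min())
```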
  • the permeability data in a sand zone associated with the reservoir in question may be characterized using a poro-perm transform function given by:
  • shale breaks associated with the reservoir may be ascribed a value of 0 permeability.
  • an aquifer (e.g., a large, infinite-acting numerical aquifer) may be modeled using a geometrically increasing cell spacing with exemplary dimensions in horizontal and/or vertical directions as indicated in FIG. 6 .
  • the aquifer cells disclosed can provide both realistic transient pressure support data and/or easy quantification of any fluid (e.g., CO 2 ) leakage outside of established thresholds for the reservoir.
  • the tight permeability and large aquifer coupled to the reservoir may be implemented to determine whether one or more neural operators such as Fourier neural operators (FNOs) accurately predict the interactions (e.g., interference) of the wells with the aquifer under complex transient conditions.
  • a computational tool (e.g., one provided by Equinor) may be used to interpret several “feeders” or leakage points between the zones as implemented based on vertical permeability multiplier data. These feeders may be left intact and increase the complexity of the well placement task.
  • the dots in FIG. 8 indicate location data of the feeders.
  • fluid injection happens in the lowest zone with a simplified field strategy.
  • the injectors (e.g., all four injectors) inject simultaneously at a maximum bottom hole pressure of about 230 bar, or a bottom hole pressure approximately equal to a fracture pressure associated with the reservoir.
  • variable rate controls (e.g., fluid rate controls) relative to time may be tested. It is appreciated that the disclosed example represents fluid injection into the reservoir for a first period (e.g., 9 years), following which injection is shut in for a second time period (e.g., 1 year).
  • the hybrid workflow associated with the disclosed example is indicated in FIG. 8 (see arrow path in figure).
  • Data from reservoir simulations may be used as “ground truth” or a benchmark to train an artificial intelligence (AI) surrogate model or a machine learning (ML) model.
  • optimization of the machine learning model is executed in the machine learning space, which, in this case is at various levels of the neural network structure of the machine learning model.
  • Full training and test datasets were created by running reservoir simulations with random well locations and then using generated image data (e.g., 3-dimensional stack data) therefrom to train or otherwise configure the neural operators of the ML model. Validation operations were carried out to ensure that, for example, no two wells are allowed in the same cell or within neighboring cells. Lower data requirements may be possible with targeted sampling of the design space. In one embodiment, a total of 4000 simulations were executed to test for both high and low data (e.g., training data) availability scenarios.
  • Modifications were made to the number of layers associated with the ML model, the number of Fourier modes in the neural architecture of said layers, and the layers that bypass the Fourier transform.
  • the physics conservation equations can be encoded into the loss function referenced above. While this improves accuracy, it requires finer time discretization and may take longer to train as the elements in the loss function must be balanced against each other.
  • the FNO is enhanced to be as fast as possible.
  • the size of the neural network structure of the ML model may be reduced in addition to reducing the number of Fourier modes (e.g., via the filtering operation discussed above).
  • the result space or output space of the model is reduced to indicate a section of the state data such as the final state data of the reservoir in response to determining well placement locations using the model.
  • These reduced aspects of the model can be beneficially used to configure the objective function of the ML model (e.g., excluding intermediate steps).
  • the parameters of the ML model are discretized to generate a plurality of trainable parameters to facilitate easy training and deployment of the ML model.
  • the discretization of the parameters of the ML model may comprise breaking the ML model's neural network structure into a plurality of trainable parameters (e.g., about 8 million trainable parameters in the disclosed example).
  • the ML model is trained using between 25 and 3200 multi-dimensional datasets (e.g., 3-dimensional stack data).
  • training of the ML model was completed in between 5 minutes and 1 hour using 1 to 4 V100 graphics processing units (GPUs) for 40 epochs, each with a batch size of 4.
  • comparing the disclosed approach to other physics-based simulation or modeling tools for the reservoir under consideration revealed a modeling time of about 8240 seconds on a quad-core central processing unit (CPU).
  • the disclosed implementation on a single V100 GPU took approximately 0.017 seconds, which is approximately 500,000 times faster at generating the necessary reports, with even greater accuracy than the physics-based modeling tools referenced above.
  • the accuracy of the ML model can depend on how much data is used to train it.
  • the dependence between model accuracy and the amount of training data can be linear, according to some embodiments, as indicated in FIG. 9. More specifically, this figure indicates that the more training data is available, the lower the errors associated with outputs from the ML model (or reports derived therefrom, e.g., the report indicated in FIG. 10).
  • the table below provides specific quantitative data associated with the chart of FIG. 9.
  • a mismatch between predicted and actual values, expressed as errors (e.g., material balance errors), is quantified to gauge the performance of the ML model.
  • the difference in total fluid injected between the ML model and a physics-based simulator may be used as such a material-balance comparison.
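The exact expression is not reproduced in this excerpt; one plausible form (an illustrative assumption, with $Q^{\mathrm{ML}}$ and $Q^{\mathrm{sim}}$ denoting total injected fluid predicted by the ML model and computed by the physics-based simulator, respectively) is a relative material-balance error:

```latex
\epsilon_{\mathrm{MB}} \;=\; \frac{\left|\, Q^{\mathrm{ML}} - Q^{\mathrm{sim}} \,\right|}{Q^{\mathrm{sim}}}
```

A value near zero indicates that the surrogate conserves injected fluid in agreement with the simulator.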
  • the ML model leverages a differential evolution optimizer with an attendant library (e.g., a SciPy library) to automatically place the wells, with the i-j locations of each well being treated as independent variables within the bounds of a grid associated with the reservoir.
  • after genetic optimization (e.g., optimization based on the neural operator) indicates model convergence (e.g., the model has been trained to optimally place wells about the reservoir), the well locations are run in a simulator to validate the final objective function results.
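The differential-evolution placement step described above can be sketched as follows (a hypothetical toy example: the synthetic injectivity map and objective stand in for the trained ML model's actual objective; `scipy.optimize.differential_evolution` is the SciPy routine of the kind referenced):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical stand-in for the trained surrogate: a synthetic 64 x 56
# "injectivity" map scored at candidate well locations. In the actual
# workflow, the objective would call the FNO-based ML model.
rng = np.random.default_rng(42)
NI, NJ = 64, 56
injectivity = rng.random((NI, NJ))

def objective(x):
    # x interleaves continuous (i, j) pairs; snap each pair to a grid cell
    total = 0.0
    for i, j in zip(x[0::2], x[1::2]):
        total += injectivity[min(int(round(i)), NI - 1),
                             min(int(round(j)), NJ - 1)]
    return -total  # maximize total injectivity -> minimize its negative

n_wells = 3
bounds = [(0, NI - 1), (0, NJ - 1)] * n_wells  # i-j bounds for each well
result = differential_evolution(objective, bounds, seed=1, maxiter=60)
wells = [(int(round(i)), int(round(j)))
         for i, j in zip(result.x[0::2], result.x[1::2])]
print(wells, -result.fun)
```

Treating each well's i and j indices as independent bounded variables, as above, is what lets the optimizer search the grid without enumerating every combination.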
  • a characteristic of the simulator-plus-ML-model workflow versus simulator-only workflows is that a level of parallelization can be achieved to reduce the elapsed time of generating predictions based on the ML model.
  • simulations that generate training data can potentially be launched simultaneously using cloud computing resources.
  • FIG. 10 depicts an exemplary report that includes state data (e.g., final state data) associated with a plurality of parameters of the ML model.
  • data associated with a new reservoir, which may be similar to or different from the reservoir used to develop the model, is applied to the model and used to generate the state data indicating fluid (e.g., CO2) saturation data, dissolved fluid data, and fluid pressure data associated with the new reservoir.
  • the reports may also indicate the number of wells placed about the reservoir.
  • the ML model may be used to recommend a plurality of locations to place wells about the reservoir, as depicted in FIG. 11, where up to 3200 optimal well locations are determined for fluid injection into a given reservoir.
  • the term optimize/optimal and its variants may simply indicate improving, rather than the ultimate form of ‘perfection’ or the like.
  • although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are used only to distinguish one element from another.
  • a first object or step could be termed a second object or step, and, similarly, a second object or step could be termed a first object or step, without departing from the scope of the disclosure.
  • the first object or step, and the second object or step are both objects or steps, respectively, but they are not to be considered the same object or step.
  • the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context.

Abstract

The present disclosure provides systems and methods that include obtaining properties related to an underground reservoir and encoding the properties into a latent space. A first portion of the encoded properties is then transformed using a Fourier transform. In the Fourier space, the transformed first portion of the encoded properties is convolved using a trained neural network. The convolved first portion of the encoded properties is then inversely Fourier transformed. A second portion of the encoded properties is locally convolved. The convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties are then combined. Predicted results related to a potential well location are then decoded based at least in part on the combination of the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims priority to U.S. Provisional Patent App. No. 63/479,922, filed on Jan. 13, 2023, and titled “Fourier Transform-Based Machine Learning For Well Placement,” which is incorporated herein by reference in its entirety for all purposes.
  • FIELD OF THE INVENTION
  • The present disclosure relates to systems and methods for using Fourier transform-based machine learning for well placement during energy development.
  • BACKGROUND INFORMATION
  • Wellbores in downhole environments have complex and varied surroundings. Thus, applying machine learning to wellbore-related applications may be difficult due to such high complexity and the diversity of subsurface features.
  • Furthermore, drilling wells is one of the largest and most important capital expenditures that operating companies incur. Deciding where to drill into a reservoir may be one of the most important decisions made during development of energy, gas storage operations, geothermal operations, hydrogeology operations, and/or carbon sequestration operations. In addition, computational optimization of the placement of multiple wells is an extremely difficult problem to solve due in part to the large number of potential locations that can be considered, the uncertainty of the subsurface characterization for such potential locations, and the combinatorial nature of well-placement evaluations for said potential locations. For example, one permeability field to be analyzed for placement of one to four wells in a grid of sixty-four locations by fifty-six locations gives trillions of possibilities.
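The scale of that example can be checked directly (a small illustrative computation):

```python
from math import comb

# A grid of 64 x 56 candidate locations has 3584 cells; counting every
# way to place one to four wells shows the combinatorial blow-up.
cells = 64 * 56
placements = sum(comb(cells, k) for k in range(1, 5))
print(f"{placements:,}")  # on the order of trillions
```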
  • Thus, there is a need to develop fast, efficient, and accurate methods and systems that optimally determine locations for well placement based on robust evaluation of a site in real-time or near real-time using machine learning.
  • SUMMARY
  • Disclosed are methods, systems, and computer programs for applying Fourier Transform techniques to a machine learning model for well-placement operations. According to one embodiment, a method for applying Fourier Transform techniques to a machine learning model for well-placement operations comprises: obtaining properties related to an underground reservoir; encoding the properties into a latent space; performing a Fourier transform on a first portion of the encoded properties to obtain a transformed first portion; convolving the transformed first portion of the encoded properties using a neural network to obtain a convolved first portion; performing an inverse Fourier transform on the convolved first portion of the encoded properties to obtain an inversely transformed convolved first portion; performing local convolution on a second portion of the encoded properties to obtain a convolved second portion; combining the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties; and decoding predicted results related to a potential well location based at least in part on the combination of the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties.
  • In other embodiments, a system and a computer program can include or execute the method for applying Fourier Transform techniques to a machine learning model for well-placement operations. These and other implementations may each optionally include one or more of the following features.
  • Encoding the properties into a latent space comprises encoding the properties using a fully connected neural network.
  • Furthermore, decoding the predicted results based at least in part on the combination of the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties comprises decoding the predicted results based at least in part on the combination of the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties using a fully connected neural network.
  • In some embodiments, combining the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties comprises summing the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties and applying an activation function.
  • In addition, the activation function comprises a Gaussian error linear unit (GELU) activation function.
  • The foregoing method can also comprise iteratively performing Fourier transformations and convolutions on the combination of the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties a plurality of times, such that the predicted results are based at least in part on the iterative transformations and convolutions.
  • The disclosed method can also comprise filtering frequencies in the Fourier space before executing the convolution.
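Taken together, the transform, filter, convolve, inverse-transform, local-convolution, summation, and activation steps above can be sketched as a single layer (a minimal 1-D NumPy illustration; the array sizes, mode count, and weight shapes are assumptions for illustration, not the disclosed implementation):

```python
import numpy as np

def gelu(x):
    # Gaussian error linear unit activation (tanh approximation)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def fourier_layer(v, spectral_w, local_w, n_modes):
    """One Fourier-layer pass: spectral path + local path, summed, then GELU.

    v          : (n_points, width) latent features from the encoder
    spectral_w : (n_modes, width, width) complex weights applied per Fourier mode
    local_w    : (width, width) pointwise (1x1 convolution) weights
    n_modes    : number of low-frequency modes kept (the filtering step)
    """
    v_hat = np.fft.rfft(v, axis=0)                 # forward FFT along space
    out_hat = np.zeros_like(v_hat)
    for k in range(min(n_modes, v_hat.shape[0])):  # discard high frequencies
        out_hat[k] = v_hat[k] @ spectral_w[k]      # channel mixing per mode
    spectral = np.fft.irfft(out_hat, n=v.shape[0], axis=0)  # inverse FFT
    local = v @ local_w                            # local (pointwise) convolution
    return gelu(spectral + local)                  # combine and activate

rng = np.random.default_rng(0)
n_points, width, n_modes = 64, 8, 12
v = rng.standard_normal((n_points, width))
spectral_w = (rng.standard_normal((n_modes, width, width))
              + 1j * rng.standard_normal((n_modes, width, width))) / width
local_w = rng.standard_normal((width, width)) / width
out = fourier_layer(v, spectral_w, local_w, n_modes)
print(out.shape)  # (64, 8)
```

Stacking several such layers between a fully connected encoder and decoder yields the iterative Fourier-based convolutions described above.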
  • In some instances, the obtained properties comprise a stack of images showing one or more conditions related to the underground reservoir.
  • Moreover, the stack of images referenced above comprises representations of measurements related to horizontal permeability, vertical permeability, porosity, water levels, depth, well locations/control, or a combination thereof.
  • In addition, the potential well location corresponds to a potential well location to be used for carbon capture and storage.
  • According to one embodiment, the predicted results comprise data indicating carbon dioxide gas saturation, an amount or percentage of carbon dissolved in water, capillary capture, and/or reactions after carbon injection as part of carbon capture and storage.
  • Furthermore, multiple predicted results are generated using a plurality of neural networks that comprises the neural network.
  • In other embodiments, a method for applying Fourier techniques to data associated with an underground reservoir is provided. The method comprises: obtaining data related to an underground reservoir; obtaining subsequent conditions of the underground reservoir after placement of a well at a potential location; and training one or more neural networks to map input conditions in the data to the subsequent conditions after placement of the well using discrete Fourier transform (DFT) forms.
  • In other embodiments, a system and a computer program can include or execute the method for applying Fourier techniques to the data associated with the underground reservoir. These and other implementations may each optionally include one or more of the following features.
  • The foregoing method can comprise obtaining the subsequent conditions by simulating the subsequent conditions from the input conditions in the data.
  • Furthermore, obtaining the subsequent conditions can comprise using data from a time subsequent to the time at which the input conditions are obtained.
  • In some cases, the subsequent conditions correspond to a final resting condition after well placement at a later time.
  • In other implementations, a system for applying Fourier Transform techniques to modeling parameters associated with a reservoir is disclosed. The system may include memory storing instructions; and a processor configured to execute the instructions to cause the system to: encode first and second properties corresponding to an underground reservoir into a latent space to obtain first and second encoded properties; perform a Fourier transform on the first encoded properties to obtain transformed first encoded properties; perform low pass filtration on the transformed first encoded properties to obtain filtered first encoded properties; convolve the filtered first encoded properties in a neural network to obtain convolved first encoded properties; perform an inverse Fourier transform on the convolved first encoded properties to obtain inversely transformed first encoded properties; perform a local convolution on the second encoded properties to obtain convolved second encoded properties; combine the convolved second encoded properties and the inversely transformed first encoded properties; and decode predicted results related to a potential well location based at least in part on the combination of the convolved second encoded properties and the inversely transformed first encoded properties.
  • These and other implementations may each optionally include one or more of the following features. The neural network is implemented at least in part using the processor.
  • The foregoing system may comprise one or more fully connected neural networks that are configured to: encode the first and second properties corresponding to the underground reservoir into the latent space; and decode the predicted results related to the potential well location based at least in part on the combination of the convolved second encoded properties and the inversely transformed first encoded properties.
  • Furthermore, the processor may be configured to execute the instructions to cause the system to perform iterative Fourier-based convolutions on the combination of the convolved second encoded properties and the inversely transformed first encoded properties, wherein the predicted results are based at least in part on the iterative Fourier-based convolutions.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The disclosure is illustrated by way of example, and not by way of limitation in the figures of the accompanying drawings in which like reference numerals are used to refer to similar elements. It is appreciated that various features may not be drawn to scale, and the dimensions of various features may be arbitrarily increased or reduced for clarity of discussion.
  • FIG. 1 illustrates a diagram of a drilling system used to drill and capture data in and/or around an oilfield related to an underground reservoir, according to some embodiments.
  • FIG. 2 illustrates a computing system used to process data related to the underground reservoir of FIG. 1 , according to some embodiments.
  • FIG. 3 illustrates a process for operating the computing system of FIG. 2 to train a neural network to perform Fourier-transform-based machine learning, according to some embodiments.
  • FIG. 4 illustrates a process showing construction of the trained neural network trained using the process of FIG. 3 to perform Fourier-transform based prediction of underground reservoir properties, according to some embodiments.
  • FIG. 5 provides an exemplary mapping of input data and/or other boundary conditions data for a reservoir under consideration to output data including state data, according to some embodiments.
  • FIG. 6 shows an exemplary visualization of the processing steps associated with FIG. 3 , according to some embodiments.
  • FIG. 7 shows exemplary dimensions used for coupling or modeling a reservoir to an aquifer, according to some embodiments.
  • FIG. 8 shows location data associated with the disclosed feeders, according to some embodiments.
  • FIG. 9 shows a hybrid workflow associated with generating and training a machine learning model, according to some embodiments.
  • FIG. 10 depicts a relationship between the amount of training data and the accuracy of a machine learning model, according to some embodiments.
  • FIG. 11 depicts an exemplary report that includes state data associated with a plurality of parameters of a machine learning model.
  • FIG. 12 depicts well placement data associated with the report of FIG. 10 , according to some embodiments.
  • DETAILED DESCRIPTION
  • In the following, reference is made to embodiments of the disclosure. It is appreciated, however, that the disclosure is not limited to specific described embodiments. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the disclosure. Furthermore, although embodiments of the disclosure may achieve advantages over other solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the disclosure. Thus, the following aspects, features, embodiments, and advantages are merely illustrative and are not considered elements or limitations of the claims except where explicitly recited in a claim. Likewise, reference to “the disclosure” shall not be construed as a generalization of the inventive subject matter disclosed herein and should not be considered to be an element or limitation of the claims except where explicitly recited in a claim.
  • Although the terms first, second, third, etc., may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, region, layer or section from another region, layer, or section. Terms such as “first,” “second,” and other numerical terms, when used herein, do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer, or section discussed herein could be termed a second element, component, region, layer, or section without departing from the teachings of the example embodiments.
  • When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.
  • Some embodiments will now be described with reference to the figures. Like elements in the various figures will be referenced with like numbers for consistency. In the following description, numerous details are set forth to provide an understanding of various embodiments and/or features. It will be understood, however, by those skilled in the art, that some embodiments may be practiced without many of these details, and that numerous variations or modifications from the described embodiments are possible. As used herein, the terms “above” and “below,” “up” and “down,” “upper,” and “lower,” “upwardly,” and “downwardly,” and other like terms indicating relative positions above or below a given point are used in this description to more clearly describe certain embodiments.
  • Overview
  • The embodiments described herein include systems and methods related to using Fourier transforms to apply machine learning to well drilling location analysis applications as part of well placement screenings. Specifically, a deep learning model (e.g., a neural network model) may be trained to associate or map input and boundary conditions associated with subterranean structures such as an underground reservoir, a saline aquifer, or a depleted oil and gas reservoir.
  • As context, some reservoirs may be relatively deep (e.g., over half a mile) underground at a given resource site (e.g., an oil or gas field). For such relatively deep reservoirs, determining optimal well locations and/or estimating drilling or other extraction conditions can be challenging even with wellbore logging measurements and/or measurements from above the surface under which the reservoir resides, since large parts of the environment surrounding the reservoir may be unseen or otherwise inaccessible for substantively taking measurements and/or unavailable for accurately modeling the subsurface. For example, unseen conditions associated with the reservoir under consideration may include rock properties and fluid properties and conditions, such as reservoir pressure data, chemical concentrations data, temperature data, etc.
  • To make predictions despite these unseen conditions, a neural network may be used to approximate the partial differential equations that characterize properties of a reservoir. The neural network may predict a final state and/or data at other temporal intervals (e.g., 1, 2, 3, or more months) without determining each intermediate result, by analyzing input data and/or conditions in the frequency domain through Fourier transforms (e.g., Fast Fourier Transforms (FFTs) that compute a Discrete Fourier Transform (DFT)). By analyzing at least some data using DFTs, well outcome predictions may be generated more quickly. For example, well placement analysis may predict reservoir conditions orders of magnitude faster than some prediction calculations. Furthermore, this increased speed may allow analysis of many more potential well locations and can enable more efficient use of optimization tools, such as differential evolution and quantum optimization (e.g., quadratic unconstrained binary optimization (QUBO)) tools.
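The frequency-domain speedup rests on the convolution theorem: a convolution in space becomes a pointwise multiplication after a DFT, which an FFT computes in O(n log n) rather than O(n^2) time. A small NumPy check (illustrative):

```python
import numpy as np

# Convolution theorem: circular convolution in space equals pointwise
# multiplication in the frequency domain, which is what makes the
# Fourier-space convolutions in the model fast.
rng = np.random.default_rng(0)
n = 256
signal = rng.standard_normal(n)
kernel = rng.standard_normal(n)

direct = np.array([sum(signal[j] * kernel[(k - j) % n] for j in range(n))
                   for k in range(n)])                  # O(n^2) circular conv
via_fft = np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)).real  # O(n log n)

print(np.allclose(direct, via_fft))  # True
```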
  • With the foregoing in mind, FIG. 1 shows one example of a drilling system 100 for drilling an earth formation 101 to form a wellbore 102 in an oilfield. The drilling system 100 includes a drill rig 103 used to support and rotate a drilling tool assembly 104 that extends downward into the wellbore 102. The drilling tool assembly 104 may include a drill string 105, a bottomhole assembly (“BHA”) 106, and a bit 110, attached to the downhole end of drill string 105.
  • The drill string 105 may include several joints of drill pipe 108 connected end-to-end through tool joints 109. The drill string 105 transmits drilling fluid through a central bore and transmits rotational power from the drill rig 103 to the BHA 106. In some embodiments, the drill string 105 further includes additional components, such as subs, pup joints, and so forth. The drill pipe 108 provides a hydraulic passage through which drilling fluid is pumped from the surface. The drilling fluid discharges through nozzles, jets, or other orifices in the bit 110 and/or the BHA 106 for the purposes of cooling the bit 110 and cutting structures thereon, and for transporting cuttings out of the wellbore 102.
  • The BHA 106 may include the bit 110 or other components. An example BHA 106 may include additional or other components (e.g., coupled between the drill string 105 and the bit 110). Examples of additional BHA components include drill collars, stabilizers, measurement-while-drilling (“MWD”) tools, logging-while-drilling (“LWD”) tools, downhole motors, underreamers, section mills, hydraulic disconnects, jars, vibration or dampening tools, other components, or combinations of the foregoing. The bit 110 may also include other cutting structures in addition to or other than a drill bit, such as milling or underreaming tools. In general, the drilling system 100 may include other drilling components and accessories, such as make-up/break-out devices (e.g., iron roughnecks or power tongs), valves (e.g., kelly cocks, blowout preventers, and safety valves), other components, or combinations of the foregoing. Additional components included in the drilling system 100 may be considered a part of the drilling tool assembly 104, the drill string 105, or a part of the BHA 106 depending on their locations in the drilling system 100.
  • The bit 110 in the BHA 106 may be any type of bit suitable for degrading a geological formation or other downhole materials. For instance, the bit 110 may be a drill bit suitable for drilling the earth formation 101. Exemplary types of drill bits used for drilling earth formations are fixed-cutter or drag bits, roller cone bits, and percussion hammer bits. In some embodiments, the bit 110 is an expandable underreamer used to expand a wellbore diameter. In other embodiments, the bit 110 is a mill used for removing metal, composite, elastomer, other downhole materials, or combinations thereof. For instance, the bit 110 may be used with a whipstock to mill into a casing 107 lining the wellbore 102. The bit 110 may also be used to mill away tools, plugs, cement, and other materials within the wellbore 102, or combinations thereof. Swarf or other cuttings formed by use of a mill may be lifted to the surface or may be allowed to fall downhole.
  • In addition to or in place of the drill string 105, the wellbore 102 may be measured using a wireline tool in the oilfield. Indeed, the oilfield may include a surface unit 112 that is used to capture data from a wireline operation, pumping operation, drilling operation, extraction operation, or any other operation being performed in the oilfield. The wireline tool may, for example, have an explosive, radioactive, electrical, or acoustic energy source that sends and/or receives electrical signals to surrounding subterranean formations and/or fluids therein. Return signals may be detected using the wireline tool and/or other tools located at other locations at/near the oilfield. Additionally or alternatively, at least some of such measurements may be made from the surface.
  • Computer facilities may be positioned at various locations about the oilfield (e.g., the surface unit 112) and/or at remote locations. The surface unit 112 may be used to communicate with the wireline tool, and/or offsite operations, as well as with other surface sensors 114 or downhole sensors. The surface unit 112 is capable of communicating with the wireline tool or surface sensors 114 to send commands to the wireline tool and to receive data from the wireline tool or the surface sensors 114. The surface unit 112 may also collect data generated during the drilling operation and/or logging and produces data output 116, which may then be stored or transmitted. In other words, the surface unit 112 may collect data generated during the wireline operation or other operations and may produce data output 116 that may be stored or transmitted (e.g., via a transmitter of the surface unit 112). The wireline tool may be positioned at various depths in the wellbore 102 to provide a survey or other information relating to the subterranean formation. In some embodiments, the surface unit 112, the wireline tool, and/or surface sensors 114 may include any suitable device, such as a geophone, an accelerator porosity sonde (APS), a high-temperature lithodensity tool (HLDT), a dual laterolog (DLL), a phasor dual induction-spherically focused resistivity tool (DIT-E), a temperature sensor, a pressure sensor, a natural gamma sonde (HNGS), various sonic/acoustic measuring devices, an inclinometer tool, a neutron measurement device, an aluminum activation clay tool (AACT), an induced gamma spectrometry tool (GSST), a seismic truck, a computer, a seismic sensor, a fluid composition sensor, a water content sensor, and/or other measuring or analyzing devices suitable for obtaining properties related to the underground reservoir.
  • The surface unit 112 may include one or more various sensors and/or gauges that may additionally or alternatively be located at other locations in the oilfield. These sensors and/or gauges may be positioned about the oilfield to collect data relating to various field operations. For instance, at least one downhole sensor may be positioned in the wireline tool/BHA 106 to measure downhole parameters which relate to, for example porosity, permeability, fluid composition and/or other parameters of the field operation. During drilling, different or more parameters, such as weight on bit, torque on bit, pressures, temperatures, flow rates, compositions, rotary speed, and/or other parameters of the field operation, may be measured.
  • As previously noted, the surface unit 112 may include a transceiver to enable communications between the surface unit 112 and various portions of the oilfield or other locations. The surface unit 112 may also be provided with or functionally connected to one or more controllers for actuating mechanisms at the oilfield. The surface unit 112 may then send command signals to the oilfield in response to data received. The surface unit 112 may receive commands via the transceiver or may itself execute commands to the controller. A computing system including a processor may be provided to analyze the data (locally or remotely), make decisions, control operations, and/or actuate the controller. In this manner, the oilfield may be selectively adjusted based on the data collected. This technique may be used to enhance portions of the field operation, such as controlling drilling, weight on bit, pump rates, and/or other parameters. These adjustments may be made automatically based on an executing application with or without user input.
  • As previously noted, at least some of the data output 116 may be captured during drilling such that the wireline tool is replaced and/or supplemented by drilling tools suspended by the drill rig 103 and advanced into the subterranean formations to form the wellbore 102. A mud pit 118 may be used to draw drilling mud into the drilling tools via a flow line for circulating drilling mud down through the drilling tools, then up the wellbore 102 and back to the surface. The drilling mud may be filtered and returned to the mud pit 118. A circulating system may be used for storing, controlling, or filtering the flowing drilling muds. The drilling tools are advanced into subterranean formations to reach a reservoir 120. Each well may target one or more reservoirs. The drilling tools may be adapted for measuring downhole properties using logging while drilling tools. The logging while drilling tools may also be adapted for taking core samples.
  • The wireline/BHA 106 tool may include a communication subassembly that communicates with the surface unit 112. The communication subassembly may be adapted to send signals to and receive signals from the surface using a communications channel such as mud pulse telemetry, electromagnetic telemetry, or wired drill pipe communications. The communication subassembly may include, for example, a transmitter that generates a signal, such as an acoustic or electromagnetic signal, which is representative of the measured parameters. It is appreciated by one having skill in the art that a variety of telemetry systems may be employed, such as wired drill pipe, electromagnetic, or other known telemetry systems.
  • The data gathered by surface sensors 114 and/or other sensors may be collected by the surface unit 112 and/or other data collection sources for analysis or other processing. The data collected by the surface sensors 114 may be used alone or in combination with other data. The data may be collected in one or more databases and/or transmitted to another location on-site or off-site. The data may be historical data, real time data, or combinations thereof. The real time data may be used in real time or stored for later use. The data may also be combined with historical data and/or other inputs for further analysis. The data may be stored in separate databases and/or combined into a single database.
  • According to one embodiment, the wellbore 102 is drilled according to a drilling plan that is established prior to drilling. The drilling plan can set forth equipment data associated with the drilling, fluid or material pressure data associated with the drilling, drilling trajectory data, and/or other parameters that characterize the drilling process for the wellsite. The drilling operation may then be performed according to the drilling plan. However, the generation of this drilling plan may be extensive and/or costly in terms of computing resources and/or in terms of time due to the numerous potential well locations and/or complications of wellbore surroundings. As discussed below, to reduce these costs, at least a portion of the prediction of well placement results may be made using neural networks by performing convolutions on input data that has been transformed into the frequency domain using Discrete Fourier Transform (DFT) techniques.
  • FIG. 2 is a block diagram of a system 250 that may be used for analyzing/using data output 116 referenced in FIG. 1 . The data output 116, as described in FIG. 1 , may be received as input data 252 at a computing system 254. The computing system 254 may be implemented in the surface unit 112 and/or may be implemented at other locations within a resource site (e.g., oilfield, gas field, etc.) or remotely relative to the resource site such that the remote locations are able to receive the data. The various functional blocks shown in FIG. 2 may include hardware elements (e.g., including circuitry), software elements (e.g., including computer code stored on a tangible computer-readable medium), or a combination of both hardware and software elements. It is appreciated that FIG. 2 is merely an example of a particular implementation and is intended to illustrate the types of components that may be present in the computing system 254.
  • As illustrated, the computing system 254 can include one or more processor(s) 256, a memory 258, a display 260, input devices 262, one or more neural network(s) 264 (e.g., data models that are structured based on neural network(s)), and one or more interface(s) 266. In the computing system 254, the processor(s) 256 may be operably coupled with the memory 258 to facilitate the use of the processor(s) 256 to implement various stored programs. Such programs or instructions executed by the processor(s) 256 may be stored in any suitable article of manufacture that includes one or more tangible, computer-readable media at least collectively storing the instructions or routines, such as the memory 258. The memory 258 may include any suitable articles of manufacture for storing data and executable instructions, such as random-access memory, read-only memory, rewritable flash memory, hard drives, optical discs, or other electronic data storage media. In addition, programs (e.g., an operating system) encoded on such computer-readable media may also include instructions that may be executed by the processor(s) 256 to enable the computing system 254 to provide various functionalities.
  • The input devices 262 of the computing system 254 may enable a user to interact with the computing system 254 (e.g., pressing a button to increase or decrease a volume level). The interface(s) 266 may enable the computing system 254 to interface with various other electronic devices. The interface(s) 266 may include, for example, one or more network interfaces for a personal area network (PAN), such as a Bluetooth network, for a local area network (LAN) or wireless local area network (WLAN), such as an IEEE 802.11x Wi-Fi network or an IEEE 802.15.4 wireless network, and/or for a wide area network (WAN), such as a cellular network. The interface(s) 266 may additionally or alternatively include one or more interfaces for, for example, broadband fixed wireless access networks (WiMAX), mobile broadband Wireless networks (mobile WiMAX), and so forth.
  • In certain embodiments, to enable the computing system 254 to communicate over the aforementioned wireless networks (e.g., Wi-Fi, WiMAX, mobile WiMAX, 4G, LTE, and so forth), the computing system 254 may include a transceiver (Tx/Rx) 267. The transceiver 267 may include any circuitry that may be useful in both wirelessly receiving and wirelessly transmitting signals (e.g., data signals). The transceiver 267 may include a transmitter and a receiver combined into a single unit. In other embodiments, the transceiver may be comprised in circuitry that facilitates both wireless and wired communications.
  • The input devices 262, in combination with the display 260, may allow a user to control the computing system 254. For example, the input devices 262 may be used to control/initiate operation of the neural network(s) 264. Some input devices 262 may include a keyboard and/or mouse, a microphone that may obtain a user's voice for various voice-related features, and/or a speaker that may enable audio playback. The input devices 262 may also include a headphone input that may provide a connection to external speakers and/or headphones.
  • The neural network(s) 264 may include hardware and/or software logic that may be arranged in one or more network layers. In some embodiments, the neural network(s) 264 may be used to implement machine learning and may include one or more suitable neural network types. For instance, the neural network(s) 264 may include a perceptron, a feed-forward neural network, a multi-layer perceptron, a convolutional neural network, a long short-term memory (LSTM) network, a sequence-to-sequence model, and/or a modular neural network. In some embodiments, the neural network(s) 264 may include at least one deep learning neural network.
  • The neural network(s) 264 may be trained or otherwise configured to predict reservoir state(s) (e.g., an initial, intermediate, or final state) of a reservoir indicating fluid storage and/or fluid movements within the reservoir, such as the reservoir 120 of FIG. 1. In some embodiments, the reservoir state(s) of the reservoir may be part of a reservoir modeling platform used to complement and/or at least partially replace reservoir simulators. For instance, the reservoir modeling platform may supplement geological modeling tools such as INTERSECT or ECLIPSE reservoir simulators as part of a modeling platform (e.g., the DELFI platform). Additionally or alternatively, other planning tools and/or screening tools may use the analysis performed using the neural network(s) 264. For instance, the neural network(s) 264 may be implemented as part of, or as a supplement to, a planning solution, such as the FDPlan software.
  • The output of the neural network(s) 264 may be based on the input data 252 and/or inputs to the neural network(s) 264, such as one or more parameters including but not limited to horizontal permeability, vertical permeability, porosity, depth, well controls (e.g., rate controls of volume injection over time or pressure control, etc.), and/or other parameters. This output may be used by the computing system 254. Additionally or alternatively, the output from the neural network(s) 264 may be transmitted using a communication path 268 from the computing system 254 to a gateway 270. The communication path 268 may use any of the communication techniques previously discussed as available via the interface(s) 266. For instance, the interface(s) 266 may connect to the gateway 270 using wired (e.g., Ethernet) or wireless (e.g., IEEE 802.11) connections. The gateway 270 can couple the computing system 254 to a wide-area network (WAN) connection 272, such as the Internet. The WAN connection 272 may couple the computing system 254 to a cloud network 274. The cloud network 274 may include one or more computing systems 254 grouped into one or more locations (e.g., data centers). The cloud network 274 can also include one or more databases 276 that may be used to store the output of the neural network(s) 264 and/or implement and perform the steps discussed in association with the neural network(s) 264. In some embodiments, the cloud network 274 may perform additional transformations on the data using its own processor(s) 256 and/or neural network(s) 264.
  • Workflows
  • To address the challenges in performing well placement analysis using machine learning, the mapping from input data to output data may be performed using the one or more computing systems 254. FIG. 3 is a flow diagram of a process 300 that may be used in training and testing the neural network(s) 264 to perform Fourier Transform-based machine learning for well injection locations. The one or more computing systems 254 obtain data associated with an underground reservoir (block 302). For instance, the subsurface data may include input images (e.g., sensor or non-sensor images) associated with the surface proximal to the reservoir or the subsurface within which the reservoir is located. These input images may indicate three-dimensional (3D) measurements and/or 3D measurements over time (3D plus time). For instance, these measurements may be related to subsurface and/or surface parameters that may be relevant to a well operation, such as drilling operations, carbon capture and storage (CCS) operations, injection operations, oil and gas production operations, geothermal injection operations, geothermal production operations, water heating operations, natural gas storage operations, and/or other operations to be performed on or with the underground reservoir. For instance, if the operation to be performed is a CCS operation, the input images may indicate horizontal permeability data, vertical permeability data, porosity data, water levels data, depth data, well locations/control data, and/or other parametric data.
  • The computing system(s) 254 may simulate conditions of the underground reservoir after well placement and control decisions (block 304). The conditions to be simulated may be the same as the parameters in the input images. Additionally or alternatively, the simulated conditions may be related to the parameters of the input images. For instance, if the input images are related to permeability or porosity, the output data may be associated with gas saturation, dissolved carbonated water, capillary capture, and/or reactions after carbon injection. Furthermore, the simulated conditions may be final conditions after the planned operation (e.g., carbon storage) such that the simulations may reflect changes in the underground reservoir over time, but one or more periods of time may be the points of interest. As is discussed below, using discrete Fourier transform (DFT) representations, these periods of interest may be calculated quickly without progressing through intermediate points as done when using the simulations to simulate final conditions.
  • The computing system(s) 254 may use the input images and the simulated results to train the neural network(s) to map input conditions in the input images to conditions after well placement via machine learning using DFT representations (block 306). The well locations/controls may be allowed to vary randomly in a supervised learning dataset during training a machine learning model generated based on the neural network(s). Using training data, the input and outputs of the neural network(s) 264 are mapped to each other by training a neural network using neural operators. A neural operator, according to one embodiment, is a deep-learning architecture that learns based on an operator that maps data between functional spaces. For instance, neural operators may be used to resolve partial differential equations and/or perform Fourier Transforms. These trained neural network(s) 264 may be able to predict the outcome from unseen well locations and conditions. In some embodiments, the disclosed approach beneficially predicts one or more well placement positions under various operating conditions of a given reservoir based on the use of machine learning models derived from neural network structures combined with Fourier Transforms.
  • As previously noted, well placement is a combinatorial problem that can scale exponentially and may be computationally intractable with reservoir simulators. Using the neural operator DFT approximation based on the neural network(s) 264, such computations may be much faster (e.g., 100,000× faster) enabling more effective capital expenditure to be used on well placement determinations and on well development for a given resource site.
  • The computing system(s) 254 may then test or otherwise validate the trained neural network(s) 264 (block 308). For instance, additional input data may be simulated to obtain corresponding output conditions and compared to an output of the neural network(s) 264 to ensure accuracy of the trained neural network(s) 264. In some embodiments empirical data may be used in place of or in addition to the simulated conditions. For instance, if pre- and post-operation images are available from real world measurements, such data may be used in training and/or testing the training of the neural network(s) 264.
  • As multiple parameters may be relevant in the well placement analysis, the neural network(s) 264 may be used to predict multiple parameters. In some embodiments, each parameter may be predicted by respective individual neural networks of the neural network(s) 264. As such, different neural networks may be used to predict different respective output parameters, such as carbon saturation, carbon dissolution, pressure, and/or any other parameter that may be of interest related to the underground reservoir after well placement. Additionally or alternatively, at least one of the neural network(s) 264 may predict multiple parameters in a single neural network. Furthermore, in some embodiments, the multiple neural network(s) 264 may function independently of each other. Additionally or alternatively, at least one of the neural network(s) 264 may depend on/use an output from another of the neural network(s) 264. For instance, a neural network 264 may be trained for a first parameter (e.g., pressure) first. The results from the output of the neural network 264 may then be inputted to another neural network 264 that may correspond to another parameter (e.g., carbon saturation) that may at least partially depend on the first parameter. Similarly, in some embodiments, the second parameter may be used in another neural network 264 for a third parameter (e.g., carbon dissolution) that may at least partially depend on the first parameter and/or the second parameter. Likewise, the parameter outputs of any neural network(s) 264 may be subsequently used in another neural network(s) 264 that corresponds to parameters that may be at least partially related to the parameter outputs.
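  • The dependent arrangement described above can be sketched as follows (a hypothetical numpy illustration in which simple linear maps stand in for the trained neural network(s) 264; all names, weights, and shapes are assumptions, not the disclosed implementation):

```python
import numpy as np

# Stand-in "networks": any callables mapping feature arrays to predictions.
# These random linear maps are placeholders for trained neural network(s) 264.
rng = np.random.default_rng(0)
W_p = rng.standard_normal((4, 1))   # inputs -> pressure
W_s = rng.standard_normal((5, 1))   # inputs + pressure -> carbon saturation

def net_pressure(x):
    return x @ W_p

def net_saturation(x):
    return x @ W_s

def chained_predict(inputs):
    """Feed the first network's output (pressure) into the second network
    (carbon saturation), as in the dependent arrangement described above."""
    pressure = net_pressure(inputs)
    saturation = net_saturation(np.concatenate([inputs, pressure], axis=-1))
    return pressure, saturation

p, s = chained_predict(rng.standard_normal((10, 4)))
assert p.shape == (10, 1) and s.shape == (10, 1)
```

The same pattern extends to a third network (e.g., carbon dissolution) that concatenates both earlier predictions onto its inputs.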
  • FIG. 4 is a block diagram of a process 400 performed using the trained neural network(s) 264 configured using the process 300 of FIG. 3 . The computing system 254 detailed in FIG. 2 obtains properties related to an underground reservoir (block 402). The properties may be obtained as a stack of input data (e.g., input images) corresponding to measurements or interpretations of various parameters, such as horizontal permeability data, vertical permeability data, porosity data, depth data, location controls data, well controls data, and the like. For instance, the input image may plot one parameter (e.g., permeability data or porosity data) against space and/or time. The computing system 254 then encodes the properties into a latent space (block 404). For instance, the neural network(s) 264 may include one or more fully connected neural networks with a number (e.g., 8) of outputs. For instance, fully connected neural networks may use a rectified linear unit (ReLU) function to shape the input data into the latent space. According to one embodiment, the latent space comprises a data state of the input data between the input and output of the neural network. For example, data properties of the input data including properties (e.g., physical properties of a reservoir under consideration) such as permeability data, porosity data, etc., may be transformed or encoded by, for example, a first neural transformer comprised in the neural network to generate a transformed input data in the latent space. The transformed input data may or may not be subjected to additional analysis and/or additional transformation by a second encoder of the neural network or a third encoder of the neural network or a fourth encoder of the neural network, etc., prior to being outputted from the neural network.
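  • The encoding step of block 404 can be sketched as follows (a minimal numpy illustration, not the disclosed implementation; the grid sizes, the 8-channel latent width, and all weights are assumed for the example). A fully connected layer with a ReLU is applied pointwise to lift the stacked properties into the latent space:

```python
import numpy as np

def encode_to_latent(props, W, b):
    """Lift each grid point's property vector into a latent space.

    props: array of shape (nx, ny, nz, nt, n_props) -- stacked input
           properties (e.g., permeability, porosity, depth).
    W, b:  weights/bias of a pointwise fully connected layer mapping
           n_props -> n_channels (e.g., 8 latent channels).
    """
    z = props @ W + b           # applied independently at every grid point
    return np.maximum(z, 0.0)   # ReLU shapes the data into the latent space

# Toy example with hypothetical sizes: 4 properties lifted to 8 channels.
rng = np.random.default_rng(0)
props = rng.standard_normal((10, 10, 5, 6, 4))
W = rng.standard_normal((4, 8))
b = np.zeros(8)
latent = encode_to_latent(props, W, b)
assert latent.shape == (10, 10, 5, 6, 8)
```

Because the same weights act at every grid point, this lifting layer is independent of grid resolution, which is one reason pointwise encoders are common in neural operator architectures.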
  • The computing system 254 then performs a Fourier Transform on the encoded properties in the latent space (block 406). For instance, the computing system 254 may perform a fast Fourier transform (FFT) on the encoded points serially in four dimensions: three spatial dimensions and the time dimension. In some embodiments, the computing system 254 may filter at least some frequencies (block 408). For instance, the computing system 254 may truncate frequencies higher than a first number (e.g., 12) of modes for the spatial domains and a second number (e.g., 8) of modes for the time domain. This filtering may be implemented using a low-pass filter realized in hardware, software, or a combination of both.
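  • The transform-and-truncate steps of blocks 406 and 408 can be sketched as follows (numpy only; keeping just the leading non-negative frequency modes is a simplification for illustration, and all array sizes are assumptions):

```python
import numpy as np

def truncate_modes(latent, k_space=12, k_time=8):
    """FFT one latent channel over 4 dimensions, then keep low frequencies.

    latent : (nx, ny, nz, nt) array -- one latent channel on the
             space-time grid (3 spatial dimensions plus time).
    Returns the truncated spectrum: only the first k_space modes on each
    spatial axis and the first k_time modes on the time axis survive,
    acting as a low-pass filter in Fourier space.
    """
    spec = np.fft.fftn(latent)  # transform over x, y, z, t serially
    return spec[:k_space, :k_space, :k_space, :k_time]

grid = np.random.default_rng(1).standard_normal((32, 32, 16, 10))
modes = truncate_modes(grid)
assert modes.shape == (12, 12, 12, 8)
```

Discarding high-frequency modes both regularizes the learned operator and shrinks the weight tensors multiplied against the spectrum in the next step.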
  • The neural network(s) 264 then perform convolutions on the filtered (or unfiltered) encoded properties using learned/trained weights in the Fourier space to generate convolved data (block 410). For instance, these convolutions may be matrix multiplications of the Fourier-transformed three-dimensional data and/or four-dimensional data. The weights may be for frequencies and/or phase as trained using the process 300 described in relation to FIG. 3. The computing system 254 then performs an inverse Fourier transform on the convolved data (block 412). At least some of the input data may bypass the Fourier transform. This data may instead be locally convolved by the computing system 254 (block 414). In some embodiments, the local convolution may include a 1-D convolution that is performed using a linear transformation that may maintain at least some spatial awareness that may be lost in the Fourier transformations. Additionally or alternatively, the local convolution may be performed using different/more robust neural networks. For instance, the local convolution may be performed using larger (e.g., 3×3) convolutional networks and/or graph neural networks. The computing system 254 may combine the locally convolved data and the transformed convolved data (block 416). For instance, the results may be summed together and/or passed to a Gaussian error linear unit (GeLU) activation function. The output of the combination may then be looped through blocks 406, 408, 410, 412, 414, and 416 to re-transform the data N number (e.g., N>=3) of times (block 418). It is appreciated that the multiple iterations and/or re-transformations beneficially train a model associated with the neural network to enhance said model's accuracy in predicting data outputs for new reservoirs that may be similar to or distinct from the reservoir used to train the model.
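  • A single pass through blocks 406-416 can be sketched for one channel on a 1-D grid (a deliberately simplified illustration: real implementations operate on multi-channel 3D-plus-time data, and the weights here are random placeholders rather than trained values):

```python
import numpy as np

def gelu(x):
    # Tanh approximation of the Gaussian error linear unit (GeLU).
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def fourier_block(v, R, w_local, k=4):
    """One spectral block, sketched for a single channel in 1-D.

    v       : (n,) signal on the grid (one latent channel).
    R       : (k,) complex learned weights for the k retained modes.
    w_local : scalar weight of the bypass local (1-D) convolution.
    """
    spec = np.fft.fft(v)                    # block 406: forward FFT
    out_spec = np.zeros_like(spec)
    out_spec[:k] = spec[:k] * R             # blocks 408/410: truncate, then
                                            # convolve = multiply in Fourier space
    spectral = np.fft.ifft(out_spec).real   # block 412: inverse FFT
    local = w_local * v                     # block 414: bypass local transform
    return gelu(spectral + local)           # block 416: combine, then activation

rng = np.random.default_rng(2)
v = rng.standard_normal(64)
R = rng.standard_normal(4) + 1j * rng.standard_normal(4)
out = fourier_block(v, R, 0.5)
assert out.shape == (64,)
```

Looping this block N times, feeding each output back in as the next input, reproduces the re-transformation of block 418.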
In one embodiment, the neural network(s) 264 may comprise a first linear step for processing the input data followed by applying an “activation function” to the processed input data. The activation function, for example, provides discrete or non-discrete amounts of data processing or data transformations that are non-linear. In one embodiment the first linear step may be followed by multiple activation function operations on data generated from the first linear step and thereby provide a plurality of discrete, incremental, or small amounts of non-linear processing to the data generated from the first linear step.
  • Thus, the output of the combination replaces the encoded input from each successive pass. In some embodiments, the convolution may occur in Fourier space a single time where N would be zero. When loops are completed, the computing system 254 may decode predicted results from the output of the combination in block 416. For instance, the computing system 254 may use a fully connected neural network to decode the output based at least in part on the combination of the transformed convolved data and the locally convolved data to generate a predicted outcome for a parameter, such as carbon saturation, carbon dissolution, and pressure when the operation being analyzed is well placement for carbon capture and storage. Similar or different predicted outcomes may be made for other types of reservoir-related operations using similar techniques based on different input data and different predicted results. Furthermore, as the predicted results are based on discrete Fourier Transforms, the predicted results may be made by skipping to periods of interest (e.g., the end) rather than being forced to simulate through the periods over time. In other words, the predicted results may be made while skipping at least some intermediate results.
  • Although the process 400 shows a list of process blocks, the sub-operations detailed in these blocks may be performed in an order other than that shown in FIG. 4. For instance, in certain embodiments, the local convolution may be performed before, after, and/or at least partially simultaneously with the Fourier-based convolutions. Applying the principles of the process 400, trained using the process 300 with a suitable number (e.g., 3,000) of training cases, may provide a suitable mechanism to predict the output of numerous combinations (e.g., 7 trillion) quickly with a relatively low amount of processing (e.g., 1.6 ms per prediction per NVIDIA V100 tensor core graphics processing unit used). Thus, the process 300 enables faster and more efficient searching for favorable/optimum well combinations for various well operations.
  • The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible, or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for performing a function . . . ” or “step for performing a function . . . ,” it is intended that such elements are to be interpreted under 35 U.S.C. § 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. § 112(f).
  • Implementations
  • This disclosure leverages deep learning to construct a fast-running machine learning model (e.g., a reservoir proxy model) that is capable of providing (e.g., predicting) state data associated with, for example, fluid states and physical conditions relative to a reservoir of interest and/or well placement locations associated with said reservoir. For example, the disclosed approach can be used to generate data (e.g., reports) indicating the final state and location of a gas such as CO2 with attendant properties including CO2 saturation data, dissolved CO2 data, and/or CO2 pressure data. In one embodiment, the generated data or reports may be used to: guide or otherwise direct carbon capture and storage operations associated with the reservoir; determine optimal well locations to place about the reservoir; inform fluid extraction strategies associated with retrieving fluid from the reservoir; etc. According to one embodiment, the machine learning model is a fast-running model configured or otherwise optimized to determine well placement locations for a reservoir that best traps, for example, fluid such as carbon dioxide.
  • In some embodiments, a machine learning model is developed and structured based on a neural network architecture (e.g., an image-to-image neural network) that is adapted to predict state data (e.g., the final state of a carbon capture) for fluid storage within the reservoir under various well placement scenarios. For example, the neural network architecture may enable the machine learning model to have a plurality of surface or subsurface parameters that characterize, for example, a given reservoir, including a permeability parameter (e.g., a horizontal permeability parameter and a vertical permeability parameter), a porosity parameter, a depth parameter, and a well location or well controls parameter. According to one embodiment, the parameters of the machine learning model may be trained using, for example, datasets (e.g., supervised and/or unsupervised learning datasets) including sensor datasets associated with the reservoir and/or synthetic datasets associated with the reservoir as the case may require. For example, the datasets may comprise spatial data such as multi-dimensional image data (e.g., 3-dimensional image data) of the reservoir under consideration and/or spatio-temporal data such as time-dependent multi-dimensional image data that relates the spatial and temporal aspects of the reservoir. This can include a 3-dimensional image of the reservoir under consideration for a period of time.
  • During the training phase of the machine learning model, the well locations/controls parameter referenced above may be allowed to vary randomly or be selected strategically from all possible values. Furthermore, training and testing the machine learning model based on the datasets may be achieved using a reservoir simulation tool that is configured to determine or estimate state data (e.g., final states of fluid stored in the reservoir) by indicating said state data in, for example, reports that include images of the stored fluid based on the state data, including saturation information associated with said stored fluid, dissolved fluid information within the reservoir based on the state data, and fluid pressure data of stored fluid within the reservoir derived from the state data.
  • According to one embodiment, the machine learning model comprises one or more neural operators that map inputs (e.g., input datasets such as training datasets and/or non-training datasets) to output datasets including the predictions and/or estimations (e.g., state data) referenced above. According to one embodiment, the neural network structure of the machine learning model beneficially predicts the results of unseen well locations and/or conditions associated with a reservoir under consideration. For example, the machine learning model, in response to being trained, is able to ingest new data such as permeability data (e.g., horizontal permeability data and vertical permeability data), porosity data, and depth data associated with a new reservoir similar to, or different from, the reservoir used to configure or train the machine learning model and thereby generate state data indicating multiple well placement locations for the new reservoir under various operating conditions. It is appreciated that the disclosed neural network approximation using the above machine learning model is approximately 100,000 times faster relative to non-neural network models that do not use the disclosed neural operators, allowing for more advanced optimization techniques for reservoir modeling that lead to executing far more effective energy management operations.
  • According to one embodiment, training the machine learning model comprises applying a deep learning process to the machine learning model that associates or maps (associative mapping 504 of FIG. 5) input data and/or other boundary condition data (e.g., a stack 502 of input data and/or boundary condition data) for a reservoir under consideration to output data including the above-referenced state data (e.g., state data 506). For example, the training process may comprise:
      • 1. Providing an input dataset (e.g., a training dataset) that parameterizes the various parameters of the machine learning model including a stack of images that indicate, for example, horizontal permeability data of a reservoir under consideration, vertical permeability data of said reservoir, porosity data associated with said reservoir, depth data associated with said reservoir, well location or well controls data;
      • 2. Encoding the image into a latent space via a connected neural network with a plurality of channels (e.g., 8 channels, 16 channels, 32 channels, etc.) as output;
      • 3. Performing or executing a Fast Fourier Transform on the encoded image serially in multiple dimensions (e.g., 4 dimensions—3 of which can be spatial dimensions (x, y, z) associated with the reservoir with the fourth dimension being time) to generate a frequency domain representation of the machine learning model; this is the spatio-temporal data referenced above;
      • 4. Truncating, removing, or filtering higher frequencies from the frequency domain representation of the machine learning model (e.g., removing frequency modes higher than 12 modes for the spatial domain (e.g., x, y, z) data, and frequency modes higher than 8 modes for the time domain component);
      • 5. Performing convolution operations using the filtered frequency domain representation based on weights in the Fourier space derived from the training dataset and which are applied to the parameters of the machine learning model during the training phase to generate a convolved machine learning model; it is appreciated that the convolution operations in the Fourier space comprise matrix multiplications;
      • 6. Performing an inverse Fast Fourier Transform on the convolved machine learning model to generate a deconvolved machine learning model; it is appreciated that each channel or point referenced at step 2 goes through a 1-dimensional convolution at step 5 which, in aggregate, form the convolved machine learning model;
      • 7. Combining the convolution operations and the inverse Fast Fourier Transform into a non-linear activation function plus bias term to generate the neural operator;
      • 8. Generating output values using the neural operator and passing said output values (output values based on the input dataset) through steps 3-8 about 2-5 times to generate optimized output data; and
      • 9. Decoding the optimized output data to generate a report indicating the state data including the well placement data for the reservoir under consideration.
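  • The numbered steps above can be sketched end to end as follows (a hypothetical, heavily simplified 1-D numpy version; the params layout, channel counts, retained-mode count, and the three decoded outputs are illustrative assumptions, not the disclosed implementation):

```python
import numpy as np

def gelu(x):
    # Tanh approximation of the GeLU activation.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def forward(props, params, n_layers=3, k=4):
    """End-to-end sketch on a 1-D grid with c latent channels.

    props  : (n, p) input properties at each of n grid points.
    params : dict of hypothetical weights -- pointwise 'enc'/'dec' layers
             plus per-layer spectral weights 'R' and local weights 'W'.
    """
    v = np.maximum(props @ params['enc'], 0.0)           # step 2: encode (ReLU)
    for i in range(n_layers):                            # repeat the operator block
        spec = np.fft.fft(v, axis=0)                     # step 3: FFT over the grid
        out = np.zeros_like(spec)
        out[:k] = np.einsum('kc,kcd->kd',
                            spec[:k], params['R'][i])    # steps 4-5: truncate + convolve
        spectral = np.fft.ifft(out, axis=0).real         # step 6: inverse FFT
        v = gelu(spectral + v @ params['W'][i])          # step 7: combine + activation
    return v @ params['dec']                             # step 9: decode

rng = np.random.default_rng(3)
n, p, c, k, L = 64, 4, 8, 4, 3
params = {
    'enc': rng.standard_normal((p, c)),
    'dec': rng.standard_normal((c, 3)),   # e.g., saturation, dissolution, pressure
    'R':   rng.standard_normal((L, k, c, c)) + 1j * rng.standard_normal((L, k, c, c)),
    'W':   rng.standard_normal((L, c, c)),
}
pred = forward(rng.standard_normal((n, p)), params)
assert pred.shape == (64, 3)
```

In a real implementation the weights would be fitted against simulator outputs rather than drawn at random, and the loop body would operate over three spatial dimensions plus time.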
  • These steps are visualized in FIG. 6. It is appreciated that the above process may be used to generate a trained machine learning model, which in turn may be used to accurately predict unseen well combinations for well placement screening. In an exemplary implementation using a dataset of approximately 3000 training cases, the machine learning model generated predicted outputs for approximately 7 trillion possible combinations at a rate of 1.6 ms per prediction per V100 GPU used. This allows for much faster and more efficient searching for favorable well combinations for energy development operations.
  • The disclosed process for determining well placement for a given reservoir based on the aforementioned report beneficially informs optimal locations to drill or otherwise access a given reservoir for, for example, fluid storage. This allows the optimal placement of multiple wells about a given reservoir as well as the execution of screening operations for large geological areas associated with said reservoir for suitable carbon capture and storage (CCS) operations such as injection operations. Moreover, the disclosed approach enables characterizing, based on the generated report from the above steps, uncertainty data associated with executing the CCS operations. In addition, the disclosed approach enables CCS process optimizations based on the uncertainty data as well as informing fluid access decisions associated with accessing fluid comprised in a reservoir under consideration. It is appreciated that the disclosed machine learning model can be used to provide superior well placement results for a given reservoir, faster total workflow time in determining said results, and lower computing costs under tested conditions.
  • Example
  • Due to the large spatial/temporal extent and the non-linearity of governing systems associated with reservoir modeling, physics-based simulators for reservoir modeling often require large computing resources and long times to generate modeling results which may or may not be accurate. This leads to generating results data associated with reservoir modeling based on a limited number of possibilities for a given timeframe thereby generating sub-optimal development plans for a reservoir under consideration.
  • The disclosed example is directed to optimally placing four wells (e.g., vertical injector wells) about a reservoir at a resource site (e.g., oilfield, gas field, site with a depleted reservoir, etc.). It is appreciated that the approach used solves a combinatorial well placement problem based on associated complexities with a large number of possible results in the solution space. For example, even with a defined modest well-placement grid, there were over 6.8 trillion possible combinations of well placement locations for the disclosed example. In addition, properties associated with the reservoir such as heterogenous permeability and variable well locations implicate complex interference patterns and plume shapes for the reservoir under consideration.
  • According to one embodiment, determining optimal locations about the reservoir for fluid storage, for example, may be based on: first data associated with the amount of fluid (e.g., gas) to be injected or otherwise pumped into the reservoir; second data associated with the safety or security associated with injecting said fluid into the reservoir; and third data indicating the cost of injecting said fluid into the reservoir. The first data, second data, and third data may be combined into a single objective function that drives determining optimal locations about the reservoir to place the one or more wells.
  • According to one embodiment, the quantity of fluid being stored within the reservoir under consideration may have associated incentivizing credits (e.g., carbon credits) for organizations such that the more fluid or gas is stored, the more carbon credits said organization accrues.
  • To model the fluid storage security/safety of a reservoir under consideration, it is appreciated that the fluid to be stored or trapped within the reservoir may be stored in, for example, a plurality of ways (e.g., one of four ways). For example, the fluid storage security/safety may be based on: structural data associated with the reservoir; capillary data associated with the reservoir; solubility data associated with the reservoir; and mineral data associated with the reservoir. According to one embodiment, multiplier data (e.g., multipliers) is used to characterize fluid trapped in one or more desirable states within the reservoir, with a focus on structural data, mineral data, and solubility data in the absence of reliable geochemical data used to indicate the mineral data. According to one embodiment, the multiplier data comprises an objective function which numerically, quantitatively, or qualitatively characterizes or describes optimal, desired, or expected output data associated with applying input data to the model. For example, the objective function may characterize how desirable an outcome is so that the model can be optimized to provide such an outcome. In one embodiment, the objective function can be associated with the net present value associated with a reservoir under consideration. Using CO2 storage in a depleted reservoir under consideration as an example, if it is determined that CO2 trapped in state A within said reservoir is twice as valuable as in state B associated with the reservoir, the multiplier data may include a multiplier value of 2 within the objective function for all CO2 being modeled based on state A.
  • In some cases, a penalty function (e.g., a large penalty function) is provided to correlate or otherwise represent any fluid leaks (e.g., fluid leaks associated with stored fluid within the reservoir) outside of specified fluid leak thresholds associated with the reservoir. In addition, costs associated with developing optimally placed wells about the reservoir in question may be estimated based on a total drilling distance required for all locations from a platform drilling center. As such, drilling wells further apart incurs a higher cost.
  • The objective function referenced above may be determined at for example, a first time step (e.g., a single time step) according to one embodiment. In other embodiments, a time-weighted function is used to characterize the objective function. For example, the disclosed objective function for a given fluid (e.g., CO2) may be given by:
  • CP × (CO2struc + (λcap × CO2cap) + (λdis × CO2dis) − (λpenalty × CO2leak)) − (D(drill) × CD)
      • Where:
      • CO2struc=Structurally trapped CO2
      • CO2cap=Capillary trapped CO2
      • CO2dis=Dissolution trapped CO2
      • CO2leak=CO2 leaked from license block to adjacent aquifer
      • D(drill)=Drilling Distance
  • In addition, the following table provides additional values that may be ascribed to the remaining variables from the above equation:
  • Function Parameter Value
    CP, Carbon price, ($/metric ton) 85
    λcap, Capillary Multiplier 1.5
    λdis, Dissolved Multiplier 2
    λpenalty, Leak Penalty Multiplier 50
    CD, Drilling cost ($/m) 7600
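Using the parameter values from the table, the objective function above can be sketched in code. The function name and the assumption that the trapped and leaked quantities are expressed in metric tons are illustrative; the structure follows the equation directly.

```python
# Parameter values from the table above
CP = 85.0              # carbon price, $/metric ton
LAMBDA_CAP = 1.5       # capillary multiplier
LAMBDA_DIS = 2.0       # dissolved multiplier
LAMBDA_PENALTY = 50.0  # leak penalty multiplier
CD = 7600.0            # drilling cost, $/m

def objective(co2_struc, co2_cap, co2_dis, co2_leak, drill_distance_m):
    """Objective value ($): credited CO2 with trapping-state multipliers
    and a leak penalty, minus drilling cost over total drilling distance."""
    credited = (co2_struc
                + LAMBDA_CAP * co2_cap
                + LAMBDA_DIS * co2_dis
                - LAMBDA_PENALTY * co2_leak)
    return CP * credited - drill_distance_m * CD
```

For example, storing one million tons structurally with no leaks and zero drilling distance yields a value of $85 million, while each meter of drilling distance subtracts $7600.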
  • Reservoir Model
  • To add heterogeneity data associated with the dataset used to develop the machine learning model, porosity data associated with a location for well placement (e.g., injection zone) associated with a reservoir under consideration may be modeled using a Gaussian random function simulation that may have a spherical variogram of about 1000 m N/S, 500 m E/W with a 40 m vertical anisotropy. The permeability data in a sand zone associated with the reservoir in question may be characterized using a poro-perm transform function given by:
  • permeability = porosity(%)^2.27
  • It is appreciated that shale breaks associated with the reservoir may be ascribed a value of 0 permeability.
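The poro-perm transform, with the zero-permeability shale-break rule folded in, can be sketched as below. The function name and the output unit (millidarcies) are illustrative assumptions; the source gives only the exponent.

```python
def permeability_md(porosity_pct, is_shale_break=False):
    """Poro-perm transform for the sand zone; porosity is given in percent.
    Shale breaks are ascribed a permeability of 0 regardless of porosity."""
    if is_shale_break:
        return 0.0
    return porosity_pct ** 2.27
```

A porosity of 10% thus maps to roughly 186 mD under this transform, while any cell flagged as a shale break maps to 0.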
  • According to one embodiment, an aquifer (e.g., a large, infinite-acting numerical aquifer) is coupled or otherwise modeled into the reservoir with a geometrically increasing cell spacing with exemplary dimensions in horizontal and/or vertical directions as indicated in FIG. 6 . It is appreciated that the aquifer cells disclosed can provide both realistic transient pressure support data and/or easy quantification of any fluid (e.g., CO2) leakage outside of established thresholds for the reservoir.
  • The tight permeability and large aquifer coupled to the reservoir may be implemented to determine whether one or more neural operators such as Fourier neural operators (FNOs) accurately predict the interactions (e.g., interference) of the wells with the aquifer under complex transient conditions.
  • Although there are several zones of the reservoir that can have low permeability barriers between them, said zones may not be totally sealing. A computational tool (e.g., Equinor) may be used to interpret several “feeders” or leakage points between the zones, implemented via vertical permeability multiplier data. These feeders may be left intact and increase the complexity of the well placement task. The dots in FIG. 8 indicate location data of the feeders.
  • In one embodiment, fluid (e.g., CO2) injection occurs in the lowest zone with a simplified field strategy. Furthermore, one or more injectors (e.g., all four injectors) inject simultaneously at a maximum bottom hole pressure of about 230 bar, or a bottom hole pressure approximately equal to a fracture pressure associated with the reservoir. In addition, rate controls (e.g., fluid rate controls), including variable rate controls relative to time, may be tested. It is appreciated that the disclosed example represents fluid injection into the reservoir for a first period (e.g., 9 years) following which injection is shut in for a second time period (e.g., 1 year).
  • Hybrid Workflow
  • The hybrid workflow associated with the disclosed example is indicated in FIG. 8 (see arrow path in figure). Data from reservoir simulations may be used as “ground truth” or a benchmark to train an artificial intelligence (AI) surrogate model or a machine learning (ML) model. After generating training data and training the ML model based on same, optimization of the machine learning model is executed in the machine learning space, which, in this case is at various levels of the neural network structure of the machine learning model. Solution(s) (e.g., predictions, reports, etc.) from this process are then passed back to the reservoir simulation for verification and any further detailed engineering and/or tweaking.
  • Data Generation
  • Full training and test datasets were created by running reservoir simulations with random well locations and then using generated image data (e.g., 3-dimensional stack data) therefrom to train or otherwise configure the neural operators of the ML model. Validation operations were carried out to ensure that, for example, no two wells are allowed in the same cell or within neighboring cells. Lower data requirements may be possible with targeted sampling of the design space. In one embodiment, a total of 4000 simulations were executed to test for both high and low data (e.g., training data) availability scenarios.
  • FNO Network Design Decisions and Speed
  • Modifications were made to the number of data layers associated with the ML model, the number of Fourier modes retained, and the neural architecture of the portions of said layers that bypass the Fourier transform. In addition to the direct data-to-data learning features of the ML model, the physics conservation equations can be encoded into the loss function referenced above. While this improves accuracy, it requires finer time discretization and may take longer to train as the elements in the loss function must be balanced against each other.
  • In the disclosed approach, the FNO is enhanced to be as fast as possible. In particular, the size of the neural network structure of the ML model may be reduced in addition to reducing the number of Fourier modes (e.g., via the filtering operation discussed above). In one embodiment, the result space or output space of the model is reduced to indicate a section of the state data such as the final state data of the reservoir in response to determining well placement locations using the model. These reduced aspects of the model can be beneficially used to configure the objective function of the ML model (e.g., excluding intermediate steps). According to one embodiment, the parameters of the ML model are discretized to generate a plurality of trainable parameters to facilitate easy training and deployment of the ML model. For example, the discretization of the parameters of the ML model may comprise breaking the ML model's neural network structure into a plurality of trainable parameters (e.g., about 8 million trainable parameters in the disclosed example).
  • According to one embodiment, the ML model is trained using between 25 and 3200 multi-dimensional samples (e.g., 3-dimensional stack data). In addition, training of the ML model was completed in between 5 minutes and 1 hour using 1 to 4 V100 graphical processing units (GPUs) for 40 epochs, each with a batch size of 4.
  • Comparing the disclosed approach to other physics-based simulation or modeling tools for the reservoir under consideration revealed a modeling time of about 8240 seconds on a quad-core central processing unit. In contrast, the disclosed implementation on a single V100 GPU took approximately 0.017 seconds, a speedup of roughly 500,000 times in generating the necessary reports, with even more accuracy than the physics-based modeling tools referenced above.
  • FNO Accuracy vs Data
  • The accuracy of the ML model can depend on how much data is used to train it. In particular, the dependence between model accuracy and the amount of training data can be linear, according to some embodiments, as indicated in FIG. 9 . More specifically, this figure indicates that the more training data is available, the lower the errors associated with outputs (or reports derived therefrom, e.g., the report indicated in FIG. 10 ) from the ML model. The table below provides specific quantitative data associated with the chart of FIG. 9 .
  •   Number of           Validation Loss                  Material Balance Error %
      Training Cases   Pressure  Saturation  Dissolved Gas   Free   Trapped  Dissolved  Total in Boundary
      25               0.094     0.975       0.963           100.0  129.1    49.0       13.9
      50               0.087     0.426       0.436           15.3   11.4     8.4        10.9
      100              0.095     0.462       0.404           20.4   16.8     7.7        10.9
      200              0.09      0.373       0.377           14.0   16.7     9.5        10.8
      400              0.038     0.324       0.342           9.5    6.7      6.0        6.8
      800              0.027     0.26        0.275           5.5    5.1      4.8        4.7
      1600             0.017     0.203       0.205           4.6    6.0      2.7        4.4
      3200             0.013     0.167       0.170           2.6    2.3      1.8        2.1
  • According to one embodiment, a mismatch (e.g., in material balance errors) between expected predictions and actual predictions is quantified to gauge the performance of the ML model. For example, the total fluid in place, which may be compared between the ML model and a physics-based simulator, may be given by:
  • Σ(all cells) Vp × ((SCO2 × ρCO2(sc)) + (Swater × Rs × Bw × ρCO2(std)))
      • Where
      • Vp=pore volume, porosity*bulk volume, sm3
      • SCO2=saturation (pore fraction) of CO2
      • ρCO2(sc)=density of supercritical CO2, a function of pressure, kg/m3
      • Swater=saturation (pore fraction) of water, 1−SCO2
      • Rs=dissolved gas ratio, sm3/sm3
      • Bw=water formation volume factor
      • ρCO2(std)=density of gaseous CO2 at standard conditions, kg/m3
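The cell-by-cell summation above can be sketched as follows, with all inputs supplied as per-cell NumPy arrays over the grid; the function and argument names are illustrative.

```python
import numpy as np

def total_co2_mass(Vp, S_co2, rho_co2_sc, Rs, Bw, rho_co2_std):
    """Total CO2 mass (kg) summed over all grid cells: the free
    supercritical phase plus CO2 dissolved in the water phase.

    Vp: pore volume per cell (porosity * bulk volume), sm3
    S_co2: CO2 saturation (pore fraction) per cell
    rho_co2_sc: density of supercritical CO2 per cell, kg/m3
    Rs: dissolved gas ratio per cell, sm3/sm3
    Bw: water formation volume factor per cell
    rho_co2_std: density of gaseous CO2 at standard conditions, kg/m3
    """
    S_water = 1.0 - S_co2  # Swater = 1 - SCO2, as defined above
    per_cell = Vp * (S_co2 * rho_co2_sc + S_water * Rs * Bw * rho_co2_std)
    return per_cell.sum()  # summation over all cells
```

Evaluating this quantity on both the ML model's predicted fields and the simulator's fields, then differencing, yields the material balance errors tabulated above.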
  • It is appreciated that the above equation can be used to indicate summing fluid in 3-dimensional grid cells associated with a reservoir under consideration. Visually, a comparison of the predicted values of parameters such as the saturation parameter, dissolved saturation parameter, and pressure parameter of the model is depicted in the table above, discussed in association with the chart of FIG. 9 , for a line drive well pattern which was absent from the training cases for the ML model. This shows that the complexity of a fluid plume is largely predicted even under the effects of multi-well interference conditions, buoyancy conditions, and under other heterogeneous, non-isotropic multi-dimensional (e.g., 3-dimensional) permeability conditions associated with a field within which the reservoir under consideration is located. This also shows that the errors are mainly concentrated in a few cells around the plume edge. In this area the PDE's solution is a hyperbolic saturation shock front. Poor estimation of this phenomenon is a well-known limitation of many current ML techniques, although the mismatch can be mitigated if it is critical to the application at hand.
  • Optimization Process
  • According to one embodiment, the ML model leverages a differential evolution optimizer with attendant library (e.g., a scipy library) to automatically place the wells, with i-j locations of each well being treated as independent variables within the bounds of a grid associated with the reservoir. In addition, genetic optimization (e.g., optimization based on the neural operator) may be chosen to impose a minimization constraint on the ML model. After model convergence (e.g., the model has been trained to optimally place wells about the reservoir), the well locations are run in a simulator to validate the final objective function results.
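The optimization loop described above can be sketched using SciPy's differential evolution optimizer. The grid dimensions and the placeholder scoring function are assumptions; in the actual workflow the score would come from evaluating the trained FNO surrogate, and the converged well locations would then be run through the reservoir simulator for validation.

```python
import numpy as np
from scipy.optimize import differential_evolution

NX, NY, N_WELLS = 50, 50, 4                    # hypothetical grid dimensions
# i-j location of each well treated as independent variables within grid bounds
bounds = [(0, NX - 1), (0, NY - 1)] * N_WELLS

def neg_objective(x):
    ij = np.round(x).astype(int).reshape(N_WELLS, 2)
    # validation rule from the data-generation step: no two wells in the
    # same cell or in neighboring cells
    for a in range(N_WELLS):
        for b in range(a + 1, N_WELLS):
            if np.abs(ij[a] - ij[b]).max() <= 1:
                return 1e12                    # large penalty acts as a constraint
    # placeholder score; the real workflow queries the trained FNO surrogate here
    score = -np.sum((ij - np.array([NX // 2, NY // 2])) ** 2)
    return -score                              # minimize the negative objective

result = differential_evolution(neg_objective, bounds, seed=0, maxiter=30, tol=1e-8)
wells = np.round(result.x).astype(int).reshape(N_WELLS, 2)  # optimized i-j pairs
```

Because differential evolution only needs objective values, not gradients, it pairs naturally with the fast surrogate: millions of candidate placements can be scored in the time a single full simulation would take.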
  • Workflow Comparison
  • A characteristic of the simulator-plus-ML workflow versus simulator-only workflows is that a level of parallelization can be achieved to reduce the elapsed time of generating predictions based on the ML model. When the ML workflow is being followed, training-data simulations can potentially be launched simultaneously with cloud computing resources.
  • Exemplary Reports
  • FIG. 10 depicts an exemplary report that includes state data (e.g., final state data) associated with a plurality of parameters of the ML model. In particular, data associated with a new reservoir that may be similar to or different from the reservoir used to develop the model is applied to the model and used to generate the state data indicating fluid (e.g., CO2) saturation data, dissolved fluid data, and fluid pressure data associated with the new reservoir.
  • In other embodiments, the reports may also indicate the number of wells placed about the reservoir. For example, the ML model may be used to recommend a plurality of locations to place wells about the reservoir as depicted in FIG. 11 , where up to 3200 optimal well locations are determined for fluid injection into a given reservoir.
  • While any discussion of or citation to related art in this disclosure may or may not include some prior art references, this is neither a concession nor acquiescence to the position that any given reference is prior art or analogous prior art.
  • In some embodiments, the term optimize/optimal and its variants (e.g., efficient, optimally, etc.) may simply indicate improving, rather than the ultimate form of ‘perfection’ or the like.
  • Furthermore, the foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to explain the principles of the disclosure and its practical applications, to thereby enable others skilled in the art to use the invention and various embodiments with various modifications as are suited to the particular use contemplated.
  • It is appreciated that although the terms first, second, etc., may be used herein to describe various elements, these elements should not be limited by these terms. These terms are used to distinguish one element from another. For example, a first object or step could be termed a second object or step, and, similarly, a second object or step could be termed a first object or step, without departing from the scope of the disclosure. The first object or step, and the second object or step, are both objects or steps, respectively, but they are not to be considered the same object or step.
  • The terminology used in the description herein is for the purpose of describing particular embodiments and is not intended to be limiting. As used in the description of the disclosure and the appended claims, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any possible combination of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
  • As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context.
  • Those with skill in the art will appreciate that while some terms in this disclosure may refer to absolutes, e.g., all source receiver traces, each of a plurality of objects, etc., the methods and techniques disclosed herein may also be performed on fewer than all of a given thing, e.g., performed on one or more components and/or performed on one or more source receiver traces. Accordingly, in instances in the disclosure where an absolute is used, the disclosure may also be interpreted to be referring to a subset.

Claims (20)

What is claimed is:
1. A method, comprising:
obtaining properties related to an underground reservoir;
encoding the properties into a latent space;
performing a Fourier transform on a first portion of the encoded properties to obtain a transformed first portion;
convolving the transformed first portion of the encoded properties using a neural network to obtain a convolved first portion;
performing an inverse Fourier transform on the convolved first portion of the encoded properties to obtain an inversely transformed convolved first portion;
performing local convolution on a second portion of the encoded properties to obtain a convolved second portion;
combining the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties; and
decoding predicted results related to a potential well location based at least in part on the combination of the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties.
2. The method of claim 1, wherein encoding the properties into a latent space comprises encoding the properties using a fully connected neural network.
3. The method of claim 1, wherein decoding the predicted results based at least in part on the combination of the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties comprises decoding the predicted results based at least in part on the combination of the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties using a fully connected neural network.
4. The method of claim 1, wherein combining the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties comprises summing the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties and applying an activation function.
5. The method of claim 4, wherein the activation function comprises a Gaussian error linear unit activation function.
6. The method of claim 1, comprising iteratively performing Fourier transformations and convolutions on the combination of the convolved second portion of the encoded properties and the inversely transformed convolved first portion of the encoded properties a plurality of times, and the predicted results are based at least in part on the iterative transformations and convolutions.
7. The method of claim 1, comprising filtering frequencies in the Fourier space before convolution.
8. The method of claim 1, wherein the obtained properties comprise a stack of images showing one or more conditions related to the underground reservoir.
9. The method of claim 8, wherein the stack of images comprise representations of measurements related to horizontal permeability, vertical permeability, porosity, water levels, depth, well locations/control, or a combination thereof.
10. The method of claim 9, wherein the potential well location corresponds to a potential well location to be used for carbon capture and storage.
11. The method of claim 10, wherein the predicted results comprise data indicating carbon dioxide gas saturation, an amount or percentage of carbon dissolved in water, capillary capture, and/or reactions after carbon injection as part of carbon capture and storage.
12. The method of claim 11, wherein multiple predicted results are generated using a plurality of neural networks that comprises the neural network.
13. A method, comprising:
obtaining data related to an underground reservoir;
obtaining subsequent conditions of the underground reservoir after placement of a well at a potential location; and
training one or more neural networks to map input conditions in the data to the subsequent conditions after placement of the well using discrete Fourier transform (DFT) forms.
14. The method of claim 13, wherein obtaining the subsequent conditions comprises simulating the subsequent conditions from the input conditions in the data.
15. The method of claim 13, wherein obtaining the subsequent conditions comprises using data from a subsequent time compared to the data at which the input conditions are obtained.
16. The method of claim 13, wherein the subsequent conditions correspond to a final resting condition after well placement at a later time.
17. A system, comprising:
a memory storing instructions; and
a processor configured to execute the instructions to cause the system to:
encode first and second properties corresponding to an underground reservoir into a latent space to obtain first and second encoded properties;
perform a Fourier transform on the first encoded properties to obtain transformed first encoded properties;
perform low pass filtration on the transformed first encoded properties to obtain filtered first encoded properties;
convolve the filtered first encoded properties in a neural network to obtain convolved first encoded properties;
perform an inverse Fourier transform on the convolved first encoded properties to obtain inversely transformed first encoded properties;
perform a local convolution on the second encoded properties to obtain convolved second encoded properties;
combine the convolved second encoded properties and the inversely transformed first encoded properties; and
decode predicted results related to a potential well location based at least in part on the combination of the convolved second encoded properties and the inversely transformed first encoded properties.
18. The system of claim 17, wherein the neural network is implemented at least in part using the processor.
19. The system of claim 17, comprising one or more fully connected neural networks that are configured to:
encode the first and second properties corresponding to the underground reservoir into the latent space; and
decode the predicted results related to the potential well location based at least in part on the combination of the convolved second encoded properties and the inversely transformed first encoded properties.
20. The system of claim 17, wherein the processor is configured to execute the instructions to cause the system to perform iterative Fourier-based convolutions on the combination of the convolved second encoded properties and the inversely transformed first encoded properties, wherein the predicted results are based at least in part on the iterative Fourier-based convolutions.
US18/412,031 2023-01-13 2024-01-12 Fourier transform-based machine learning for well placement Pending US20240240546A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/412,031 US20240240546A1 (en) 2023-01-13 2024-01-12 Fourier transform-based machine learning for well placement

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363479922P 2023-01-13 2023-01-13
US18/412,031 US20240240546A1 (en) 2023-01-13 2024-01-12 Fourier transform-based machine learning for well placement

Publications (1)

Publication Number Publication Date
US20240240546A1 true US20240240546A1 (en) 2024-07-18

Family

ID=91855314

Family Applications (1)

Application Number Title Priority Date Filing Date
US18/412,031 Pending US20240240546A1 (en) 2023-01-13 2024-01-12 Fourier transform-based machine learning for well placement

Country Status (1)

Country Link
US (1) US20240240546A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119474694A (en) * 2024-11-07 2025-02-18 广东海洋大学 A method and system for predicting ocean three-dimensional temperature-salinity current based on Fourier neural operator
CN120124530A (en) * 2025-05-08 2025-06-10 中国石油大学(华东) Physical Attention Enhanced Fourier Neural Operator for Three-Dimensional Turbulence Prediction
US12499196B1 (en) * 2023-04-02 2025-12-16 Unwind, Inc. Method and system for verifying the identity of a user


Similar Documents

Publication Publication Date Title
US12050981B2 (en) Petroleum reservoir behavior prediction using a proxy flow model
US10563493B2 (en) System and method for performing downhole stimulation operations
US8762118B2 (en) Modeling hydraulic fractures
US10895131B2 (en) Probabilistic area of interest identification for well placement planning under uncertainty
US8229880B2 (en) Evaluation of acid fracturing treatments in an oilfield
EP3488073B1 (en) Modeling of oil and gas fields for appraisal and early development
US10866340B2 (en) Integrated oilfield asset modeling using multiple resolutions of reservoir detail
US20170002630A1 (en) Method of performing additional oilfield operations on existing wells
US20240240546A1 (en) Fourier transform-based machine learning for well placement
CA2733841C (en) System and method for simulating oilfield operations
CA3106971C (en) Automated production history matching using bayesian optimization
US9482088B2 (en) Mean regression function for permeability
Mata et al. Hydraulic fracture treatment, optimization, and production modeling
US20250124525A1 (en) Field survey system

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: SCHLUMBERGER TECHNOLOGY CORPORATION, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GODLEWSKI, JOHN;REEL/FRAME:067266/0521

Effective date: 20240426