ORCID Profile
0000-0002-4414-372X
Current Organisation
University of Adelaide
Publisher: SPE
Date: 04-06-2012
DOI: 10.2118/154544-MS
Abstract: The history matching procedure can be divided into three sections: decision variables definition, objective function formulation and optimization. The most widespread approach to objective function formulation is the Bayesian framework, which allows the incorporation of prior knowledge into the objective function and acts as a regularization method. In this approach, the objective function consists of two terms: the likelihood and prior knowledge functions. To maximize the posterior probability function, a summation of the prior and likelihood functions is usually minimized, in which the prior and observed-data covariance matrices relate these two functions. Inappropriate covariance matrices can lead to an incorrect domination of one function over the other and accordingly result in a false optimum point. In this study, to decrease the chance of convergence to a false optimum point due to inaccurate covariance matrices, an application of multi-objective optimization in history matching is introduced in which the likelihood and prior functions are the two objective functions. By making use of Pareto (multi-objective) optimization, a set of solutions named the Pareto front is provided, which consists of nondominated solutions. Hence, an inaccuracy in the covariance matrices cannot allow one objective function to dominate the other. After the set of solutions is obtained, a number of solutions are usually taken from the set, based on post-optimization trade-offs, for uncertainty analysis purposes. For this study, a synthetic case is constructed and history matching is carried out with two different approaches: the conventional and the proposed approach. In order to compare the approaches, it is assumed that the covariance matrix of the observed data is not exactly known.
Then, history matching is carried out using a single-objective genetic algorithm with different covariance matrices, and also using a multi-objective genetic algorithm. A comparison between the outcomes of the conventional approach and the proposed approach demonstrates that decisions can be made with more confidence using the proposed approach.
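The nondominated-solution idea at the heart of this abstract can be illustrated with a short sketch (not code from the paper; the mismatch values below are hypothetical):

```python
def pareto_front(solutions):
    """Return the nondominated subset for bi-objective minimization.

    Each solution is a (likelihood_mismatch, prior_mismatch) pair; a
    solution is dominated if another is no worse in both objectives
    and strictly better in at least one.
    """
    front = []
    for i, a in enumerate(solutions):
        dominated = any(
            b[0] <= a[0] and b[1] <= a[1] and b != a
            for j, b in enumerate(solutions) if j != i
        )
        if not dominated:
            front.append(a)
    return front

# Hypothetical (likelihood, prior) mismatches for four candidate models;
# (2.5, 2.5) is dominated by (2.0, 2.0) and drops out of the front.
candidates = [(3.0, 1.0), (1.0, 3.0), (2.0, 2.0), (2.5, 2.5)]
print(pareto_front(candidates))  # → [(3.0, 1.0), (1.0, 3.0), (2.0, 2.0)]
```

Because no front member is better than another in both objectives at once, a biased covariance matrix cannot silently let one objective dominate, which is the point the abstract makes.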
Publisher: Elsevier BV
Date: 09-2014
Publisher: Elsevier BV
Date: 07-2020
Publisher: Elsevier BV
Date: 09-2022
Publisher: Elsevier BV
Date: 2021
Publisher: Elsevier BV
Date: 09-2013
Publisher: Elsevier BV
Date: 04-2020
Publisher: Elsevier BV
Date: 07-2016
Publisher: Elsevier BV
Date: 03-2017
Publisher: EAGE Publications BV
Date: 03-09-2018
Publisher: Elsevier BV
Date: 05-2018
Publisher: Elsevier BV
Date: 2019
Publisher: SPE
Date: 13-02-2011
DOI: 10.2118/141379-MS
Abstract: Water flooding is one of the most economical methods to increase oil recovery. To maximize oil recovery during water flooding, it is essential to forecast reservoir performance; hence, various methods are used to simulate reservoirs. Although grid-based simulation is the most common and accurate method, time-consuming computation and the demand for large quantities of data restrict its use. This study presents the development of a new method to predict the performance of water injection based on transfer functions (TFs). This method is faster since it requires less data; the only requirements are injection and production rates. In this method, it is assumed that a reservoir consists of a combination of black boxes (TFs). The order and arrangement of the TFs are chosen based on the physical condition of the reservoir, which is ascertained by checking several cases. The injection and production rates act as input and output signals to these black boxes, respectively. After analyzing the input and output signals, the unknown parameters of the TFs are calculated; it is then possible to predict reservoir performance. Different cases are employed to validate the derived model. The simulation results show good agreement with those obtained from common grid-based simulators. In addition, we found that the TF parameters depend on the characteristics and the pattern of different sections of the reservoir. This method is a rapid way to simulate water flooding and could open a new window to the future of fast simulators. It enables prediction of the performance of water flooding and optimization of oil production by testing different injection scenarios. The method also provides key parameters such as well connectivity.
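As a rough illustration of the transfer-function idea (a minimal sketch assuming a single first-order black box; the signals and parameter values are hypothetical, not from the paper):

```python
import numpy as np

def fit_first_order_tf(inj, prod):
    """Fit a first-order discrete transfer function
        prod[t] ≈ a * prod[t-1] + b * inj[t]
    by ordinary least squares; (a, b) characterize the 'black box'
    connecting the injection (input) and production (output) signals."""
    X = np.column_stack([prod[:-1], inj[1:]])
    y = prod[1:]
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b

# Hypothetical signals: constant injection of 100 units, with production
# following an exact first-order response (a = 0.8, b = 0.2).
t = np.arange(50)
inj = np.full(50, 100.0)
prod = 100.0 * (1 - 0.8 ** (t + 1))
a, b = fit_first_order_tf(inj, prod)
print(round(a, 3), round(b, 3))  # → 0.8 0.2
```

Once (a, b) are identified from historical rates, future production can be forecast for any candidate injection scenario without a grid-based simulation, which is the speed advantage the abstract claims.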
Publisher: Elsevier BV
Date: 05-2020
Publisher: Elsevier BV
Date: 09-2022
Publisher: Elsevier BV
Date: 03-2017
Publisher: Elsevier BV
Date: 10-2022
Publisher: CSIRO Publishing
Date: 2012
DOI: 10.1071/AJ11009
Abstract: To obtain an accurate estimation of reservoir performance, the reservoir should be properly characterised. One of the main stages of reservoir characterisation is the calibration of rock property distributions with flow performance observations, known as history matching. The history matching procedure consists of three distinct steps: parameterisation, regularisation and optimisation. In this study, a Bayesian framework and a pilot-point approach are used for regularisation and parameterisation. The major focus of this paper is optimisation, which plays a crucial role in the reliability and quality of history matching. Several optimisation methods have been studied for history matching, including the genetic algorithm (GA), ant colony, particle swarm, Gauss-Newton, Levenberg-Marquardt and limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithms. One of the most recent optimisation algorithms used in different fields is the artificial bee colony (ABC). In this study, the application of ABC to history matching is investigated for the first time. ABC is derived from the intelligent foraging behaviour of honey bees. A colony of honey bees comprises employed bees, onlookers and scouts. Employed bees look for food sources based on their knowledge, onlookers make foraging decisions using the employed bees' observations, and scouts search for food randomly. To investigate the application of ABC to history matching, its results for two different synthetic cases are compared with the outcomes of three other optimisation methods: real-valued GA, simulated annealing (SA), and pre-conditioned steepest descent (PSD). In the first case, history matching using ABC afforded a better result than GA and SA. ABC reached a lower fitness value in a reasonable number of evaluations, which indicates the performance and execution-time capability of the method, although it did not appear as efficient as PSD. In the second case, SA and PSD did not perform acceptably.
GA achieved a better result than SA and PSD, but its results were not as good as ABC's. ABC is not sensitive to the shape of the fitness landscape, that is, whether it is smooth or rugged. Since there is no precise information about the landscape shape of the history matching function, it can be concluded that by using ABC there is a high chance of achieving high-quality history matching and reservoir characterisation.
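A minimal sketch of the ABC algorithm described above, applied to a generic minimization problem (all parameter values are illustrative defaults, not the paper's settings):

```python
import random

def abc_minimize(f, bounds, n_food=10, limit=20, cycles=200, seed=0):
    """Minimal artificial bee colony (ABC) sketch for minimizing a
    nonnegative objective f over box bounds.

    Employed bees perturb their own food source toward a random partner,
    onlookers revisit sources with probability proportional to fitness,
    and scouts replace sources abandoned after `limit` failed tries.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    rand_pos = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    foods = [rand_pos() for _ in range(n_food)]
    vals = [f(x) for x in foods]
    trials = [0] * n_food
    best_x, best_v = None, float("inf")

    def try_improve(i):
        k = rng.randrange(n_food - 1)
        k += k >= i                                # random partner k != i
        j = rng.randrange(dim)
        cand = foods[i][:]
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        lo, hi = bounds[j]
        cand[j] = min(max(cand[j], lo), hi)
        v = f(cand)
        if v < vals[i]:                            # greedy selection
            foods[i], vals[i], trials[i] = cand, v, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_food):                    # employed bee phase
            try_improve(i)
        fitness = [1.0 / (1.0 + v) for v in vals]  # assumes f >= 0
        for _ in range(n_food):                    # onlooker phase
            try_improve(rng.choices(range(n_food), weights=fitness)[0])
        for i in range(n_food):                    # scout phase
            if trials[i] > limit:
                foods[i] = rand_pos()
                vals[i] = f(foods[i])
                trials[i] = 0
        i = min(range(n_food), key=vals.__getitem__)
        if vals[i] < best_v:                       # remember global best
            best_x, best_v = foods[i][:], vals[i]
    return best_x, best_v

# Toy usage: a 2-D sphere function stands in for a history matching misfit.
x, v = abc_minimize(lambda p: sum(t * t for t in p), [(-5, 5)] * 2)
print(v < 0.05)  # → True
```

The derivative-free structure is why, as the abstract notes, ABC does not depend on whether the landscape is smooth or rugged.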
Publisher: European Association of Geoscientists & Engineers
Date: 2022
Publisher: Society of Petroleum Engineers (SPE)
Date: 31-08-2020
DOI: 10.2118/186306-PA
Abstract: Presented here is an analytical framework to assess the impact of transient-temperature changes in the wellbore on the pressure-transient response of cold-water injection wells. We focus attention on both drawdown and falloff periods in a well after injection. Historically, these pressure data have been used to calculate reservoir properties concerning flood-efficiency and completion properties (formation permeability/thickness, mechanical skin, and fluid-bank mobilities). One key question addressed in this paper is whether the effects of thermal heating of wellbore fluids during a falloff survey can mask the pressure signature of a two-region composite reservoir. The pressure deflections required to detect mobility changes can be relatively small compared with pressure changes induced by temperature effects in the well. The framework proposed in this paper allows for the numerical evaluation of the contribution of each. Previously, researchers have studied multiple bank-transient-injection problems extensively for the case of reservoir flow and pressure drop, even for nonisothermal problems. The effect of temperature changes in the wellbore and overburden are seldom discussed, however. It is demonstrated in this paper that these effects can, in some cases, be substantial, and it is worthwhile to incorporate them into an interpretation model. The results of this paper are useful for planning and designing a pressure-falloff survey to minimize the adverse effect that heating of wellbore fluid by overburden rock can have on the pressure-transient signature. The theory can also be used to analyze existing data affected by the phenomenon. A real-field case study is shown for a cold-water injector where pressure-falloff data have been affected by temperature changes. The analytical model fits the field data closely when parameters are adjusted within reservoir-property-uncertainty ranges.
Publisher: SPE
Date: 10-06-2013
DOI: 10.2118/164861-MS
Abstract: History matching is an inseparable part of reservoir modelling, in which flow performance data are incorporated into the geological model to reduce uncertainty in decision making. Although numerous studies have been published to improve the quality of history matching, a comprehensive approach has not yet been taken, and developments in the different steps of history matching, including reparameterization and optimization, are still sought. Reparameterization is used to reduce the number of decision variables, and subsequently the degrees of calibration freedom, in order to expedite the history matching procedure; but it always introduces reparameterization error into the problem. In this study, a novel algorithm is introduced in which reparameterization is unnecessary and the corresponding error is accordingly eliminated. In this algorithm, all uncertain parameters, including porosity and permeability distributions, can be adjusted directly. Image fusion is a technique, widely used in image processing, for combining a number of images into a single, more informative image. In this study, the image fusion technique is applied to history matching problems. In the designed algorithm, different models are intelligently and stochastically merged, based on their corresponding fitness values, to produce a fitting model. The process repeats until stopping criteria are met; hence, it performs similarly to evolutionary algorithms, with image fusion acting as a mating operator. In order to assess the proposed algorithm, the outcomes of history matching using this algorithm on a synthetic model and on the PUNQ-S3 reservoir model are compared with a number of history matching approaches. The outcomes demonstrate an improvement over the compared approaches in terms of the quality of the history matched models.
The proposed approach copes with a significantly larger number of variables, and accordingly high-resolution history matched models are achieved. Also, the algorithm has an acceptable speed of convergence, even in comparison with variable metric algorithms.
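One possible form of fitness-weighted model merging can be sketched as follows (an illustrative guess at a fusion operator, not the paper's actual algorithm; the grids and mismatch values are hypothetical):

```python
import numpy as np

def fuse_models(grid_a, mis_a, grid_b, mis_b, rng):
    """Stochastically fuse two property grids cell by cell.

    Each cell is inherited from parent a with probability proportional
    to fitness 1 / (1 + mismatch), so the better-matched model
    contributes more cells; this plays the role of a mating operator
    inside an evolutionary loop.
    """
    w_a = 1.0 / (1.0 + mis_a)
    w_b = 1.0 / (1.0 + mis_b)
    p_a = w_a / (w_a + w_b)
    mask = rng.random(grid_a.shape) < p_a
    return np.where(mask, grid_a, grid_b)

# Hypothetical permeability fields (mD); model a matches history better
# (mismatch 2 vs 6), so it supplies about 70% of the child's cells here.
rng = np.random.default_rng(0)
a = np.full((4, 4), 50.0)
b = np.full((4, 4), 200.0)
child = fuse_models(a, 2.0, b, 6.0, rng)
```

Because the operator acts directly on full-resolution grids, no reparameterization of the porosity or permeability fields is needed, which is the advantage the abstract emphasises.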
Publisher: CSIRO Publishing
Date: 2018
DOI: 10.1071/AJ17087
Abstract: Rock typing, or the subdivision of a reservoir either vertically or laterally, is an important task in reservoir characterisation and production prediction. Different depositional environments and diagenetic effects create rocks with different grain size distributions and grain sorting. Rock typing and zonation are usually performed by analysing log data and core data (mercury injection capillary pressure and permeability measurements). In this paper, we introduce a new rock typing technique based on fractal theory, in which resistivity logs are the only required data. Since resistivity logs are sensitive to rock texture, deep conventional resistivity logs from eight different wells are used in this study. Fractal theory is applied to the log data to seek any meaningful relationship between the variability of the resistivity logs and the complexity of the rock fabric. Fractal theory has previously been used in many stochastic processes that have common features on multiple scales. The fractal property of a system is usually characterised by a fractal dimension; therefore, the fractal dimension of each resistivity log is obtained. The results of our case studies in the Cooper Basin of Australia show that the fractal dimension of resistivity logs increases from 1.14 for clean sand to 1.29 for shaly sand, indicating that the fractal dimension increases with the complexity of rock texture. The fractal dimension of resistivity logs is indicative of the complexity of the pore fabric and can therefore be used to define rock types.
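One common estimator of the fractal dimension of a 1-D signal is Higuchi's method; the abstract does not state which estimator was used, so the sketch below is purely illustrative:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Estimate the fractal dimension of a 1-D signal (e.g. a resistivity
    log) with Higuchi's method: the mean curve length L(k) at lag k
    scales as k**(-D), so D is the slope of log L versus log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks, Ls = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                          # k interleaved subsamples
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)   # Higuchi normalisation
            lengths.append(dist * norm / k)
        ks.append(k)
        Ls.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(ks)), np.log(Ls), 1)
    return slope

# Sanity check on synthetic signals: a smooth sine sits near D = 1 and
# white noise near D = 2 (the clean-to-shaly trend in the paper is subtler).
fd_sine = higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 2000)))
fd_noise = higuchi_fd(np.random.default_rng(0).standard_normal(2000))
print(fd_sine < fd_noise)  # → True
```

Rougher, more textured signals yield higher D, mirroring the reported rise from 1.14 (clean sand) to 1.29 (shaly sand).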
Publisher: CSIRO Publishing
Date: 2013
DOI: 10.1071/AJ12035
Abstract: History matching is a computationally expensive inverse problem whose computation costs are dominantly associated with the optimisation step. Fitness approximation (proxy-modelling) approaches are common methods for reducing computational costs, in which the time-consuming original fitness function is substituted with an undemanding function known as the approximation function (proxy). Almost all fitness approximation methods applied to history-matching problems use a similar approach, called uncontrolled fitness approximation, which has been shown to potentially mislead the optimisation toward a wrong optimum point. To prevent this error, the original function should be utilised along with the approximation function during the optimisation process. To make use of the original function efficiently, a model-management (evolution-control) technique should be applied. There are three different techniques: individual-based, population-based, and adaptive. Each of these techniques yields a controlled fitness approximation approach, which benefits from online learning. In the first two techniques, the number of original function evaluations in each evolution-control cycle is fixed and predefined, which may result in inefficient model management. In the adaptive technique, the number is altered based on the fidelity of the approximation function. In this study, a specific adaptive technique based on heuristic fuzzy rules is designed; then, for the first time, the applications of all three techniques to history matching are investigated.
To deliver an assessment of the four approaches (the uncontrolled approach and three controlled approaches), a framework is developed in which ECLIPSE-E100 is coupled with MATLAB, and an artificial neural network, a genetic algorithm with a customised crossover, and a Latin hypercube sampling strategy are used as the proxy model, optimiser, and experimental design method, respectively. History matching is carried out using each of the four approaches for the PUNQ-S3 reservoir model, with the same amount of computation time allowed for each approach. The outcomes demonstrate that the uncontrolled approach cannot deliver reliable results in comparison with the controlled approaches, and that among the controlled approaches, the developed adaptive technique is the most efficient.
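The individual-based evolution-control idea (spend expensive simulations only on the proxy's most promising candidates, and collect the results for online retraining) can be sketched generically (an illustration, not the paper's implementation; the toy functions are hypothetical):

```python
def controlled_generation(pop, proxy_f, true_f, n_exact):
    """One generation of individual-based evolution control.

    Rank candidates with the cheap proxy, spend expensive simulator
    calls only on the n_exact most promising ones, and return those
    exact evaluations so the proxy can be retrained online.
    """
    ranked = sorted(pop, key=proxy_f)
    scores, new_data = {}, []
    for rank, x in enumerate(ranked):
        if rank < n_exact:
            y = true_f(x)            # expensive 'simulator' call
            new_data.append((x, y))  # training pair for the proxy refit
            scores[x] = y
        else:
            scores[x] = proxy_f(x)   # cheap proxy estimate
    return scores, new_data

# Toy stand-ins: the 'simulator' has its optimum at 2.0, while the proxy
# is slightly biased toward 1.8; only 2 of 6 candidates are simulated.
true_f = lambda x: (x - 2.0) ** 2
proxy_f = lambda x: (x - 1.8) ** 2
scores, new_data = controlled_generation(
    [0.0, 1.0, 1.9, 2.5, 3.0, 4.0], proxy_f, true_f, n_exact=2)
print([x for x, _ in new_data])  # → [1.9, 2.5]
```

In the adaptive technique the abstract describes, `n_exact` would not be fixed but adjusted each cycle from the proxy's measured fidelity.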
Publisher: SPE
Date: 09-11-2015
DOI: 10.2118/177020-MS
Abstract: Multiphase flow simulation of hydraulically fractured wells with pre-existing natural fractures is a challenging task. The discrete fracture network (DFN) method is an effective way to simulate a system of anisotropic fracture networks. In the DFN method, unlike the double-porosity approach, fractures can be defined at different scales and with different apertures and directions, so that all details of the fractures are included in the simulation. All the interactions and fluid flow in and between the fractures and within the matrix are also modelled in a unified manner, using the same computational grid. We developed software based on the DFN model in which both induced fractures and pre-existing natural fractures can be placed at any location and direction in the geological model. Quadrilateral unstructured grids are then generated using the paving method. Quadrilateral grids are more efficient and more flexible than triangular grids, roughly halving the number of cells. A drawback of this mesh generation process is that very small cells are generated at fracture intersections and at the ends of the fracture cells, due to the size difference between the fracture and the matrix. In order to avoid the severe time-step restrictions associated with small cells, an endpoint transformation method is used to replace the quadrilateral cells with triangular cells at both ends of the fracture. As a result, the number of very small cells at the ends of fractures is notably decreased. In this paper, the results of two-phase flow simulation in a hydraulically fractured well with pre-existing natural fractures are presented.
To solve the pressure and saturation equations in each cell, the interaction region of the quadrilateral mesh is taken as a control volume, and the governing equations are solved using the multi-point flux approximation (MPFA) technique, which is designed to give a correct discretization of the flow equations for general non-orthogonal grids. Finally, the software is tested to verify its theoretical validity, algorithmic correctness, and computational efficiency.
Publisher: EAGE Publications BV
Date: 07-09-2018
Publisher: Elsevier BV
Date: 04-2021
Publisher: CSIRO Publishing
Date: 2013
DOI: 10.1071/AJ12034
Abstract: Water production in the early life of coal seam gas (CSG) recovery makes these reservoirs different from conventional gas reservoirs. Normally, a large amount of water is produced during the early production period, while the gas rate is negligible. It is essential to drill infill wells in optimum locations to reduce water production and increase gas recovery. To optimise infill locations in a CSG reservoir, an integrated framework is developed to couple the reservoir flow simulator (ECLIPSE) with the genetic algorithm (GA) optimisation toolbox of MATLAB. In this study, the desired objective function is the NPV of the infill drilling. To capture the economics of the infill drilling project, the objective function is split into two objectives: the first is the gas income; the second is the cost associated with water production. The optimisation problem is then solved using the multi-objective solver. The economics of the infill drilling program is investigated, with variable gas price and water treatment cost, for a case study constructed from the available data for the Tiffany unit in the San Juan basin. Optimal locations for 20 new wells in the reservoir are attained using this optimisation framework to maximise the profit of the project. The results indicate that when the gas price is less than $2/Mscf, the infill plan, regardless of the cost of water treatment, is not economical and drilling additional wells cannot be economically justified. When the cost of water treatment and disposal increases from $0.01/STB to $4/STB, the optimisation framework intelligently distributes the infill wells across the reservoir in a way that reduces the total water production of the infill wells by 26%. Simulation results also indicate that when water treatment is an expensive operation, lower water production is attained by placing the infill wells in depleted sections of the coal bed, close to the existing wells.
When water treatment cost is low, however, infill wells are freely allocated in virgin sections of the coal bed, where both coal gas content and reservoir pressure are high.
Publisher: Elsevier BV
Date: 11-2015
Publisher: CSIRO Publishing
Date: 2016
DOI: 10.1071/AJ15027
Abstract: After fluid injection (slickwater) during hydraulic fracturing, the flow-back of fracture fluid is necessary before gas production starts. A review of fracture treatments indicates that the incomplete return of treating fluids is a reason for the failure of hydraulic fracturing and is associated with poor gas production. The aim of this study is to investigate the parameters that limit flow-back in low permeability gas wells in the Cooper Basin. The authors used numerical simulation to find the critical controlling parameters to introduce the best practice for maximising the flow-back in the Cooper Basin. Several 3D and multiphase flow simulation models were constructed for three wells in the Patchawarra Formation during fracture fluid injection, soaking time and during flow-back. All models were validated using history matching with the production data. The results show that the drainage pattern is distinctly different in the following directions: vertically upward, vertically downward, and horizontal along the fracture half-length and along the matrix. The lowest recovery is observed during the upward vertical displacements due to poor sweep efficiency. Furthermore, it is observed that drawdown does not influence the recovery significantly for upward displacements. Surface tension reduction, however, can improve sweep efficiency and improve recovery considerably. Also, the wettability of the rocks has a significant impact on ultimate recovery when the effect of gravity is dominant. The authors conclude that a significant amount of injected fluid is trapped in the formation because of poor sweep efficiency and formation of gas fingers, which results from low mobility ratio and gravity segregation.
Publisher: No publisher found
Date: 2018
Publisher: European Association of Geoscientists & Engineers
Date: 2020
Publisher: SPE
Date: 14-09-2015
DOI: 10.2118/175618-MS
Abstract: The high computational costs associated with reservoir simulation create a major barrier in history matching problems. Within the past decades, several efforts have been made to reduce CPU time via metamodeling (proxy-modeling) techniques, in which the original fitness function (OF) is partially or completely substituted by an approximation function (AF), often called a metamodel or surrogate. In order to build a high-fidelity surrogate, a large number of training samples is usually required, which can make traditional surrogate-assisted algorithms inefficient. In recent years, it has been shown that to approximate the global optimum using a surrogate-assisted algorithm, the AF does not necessarily need to be of high fidelity, provided it is applied in conjunction with the OF and retrained online during the optimization process. In order to use the OF effectively and minimally, a model-management strategy is required. In this study, a new adaptive model-management strategy is proposed in which a second surrogate is required: the first surrogate approximates the original fitness function landscape, and the second estimates the fidelity of the first surrogate over the search space. On the basis of the estimated fidelity, a probability of employing the OF is calculated, and according to this probability, the algorithm stochastically decides whether to use the OF or the AF. A heuristic fuzzy rule is also applied to adjust the possible range of probability values in each evolution cycle, based on the average fidelity of the second surrogate. The robustness of the proposed algorithm is analyzed using different analytical benchmarking functions and a semi-synthetic history matching problem, the IC-fault model. To deliver a comparative study, the optimization is also carried out using three other algorithms: without metamodeling (OF-only), offline learning (AF-only), and online metamodeling with a predefined, constant probability.
The outcomes of all four algorithms are compared with each other. The assessment shows that the proposed method can reduce the computational costs significantly.
Publisher: Society of Petroleum Engineers (SPE)
Date: 07-03-2022
DOI: 10.2118/209584-PA
Abstract: Uncertainties are present in many decision-making processes. In field development planning, these uncertainties, typically represented by a set of geological realizations, need to be propagated in response to any proposed alternative (solution). Incorporating the full set of realizations makes the oil and gas field development optimization problem (in which either an algorithm iteratively searches for the best solution among all possible alternatives, or the best solution must be selected from a set of predefined, engineering-judgment-driven development scenarios, i.e., sets of well control or well placement settings) computationally demanding. As such, realization subset selection techniques are required to reduce the computational overhead. We first introduce a reformulation of the subset selection problem as one that aims to ensure consistent ranking of alternatives between the full set and the selected subset; we argue that this should be the ultimate goal of any subset selection technique in such problems. In addition, we propose a technique that selects a subset minimizing the difference between the rankings obtained by the full set and the subset for a small batch of alternatives. The key idea, which we investigate thoroughly, is that there is a positive association between the goodness (in terms of ranking alternatives) of a subset selected using a small batch of alternatives and its fidelity in ranking other alternatives. Unlike previous methods, this technique does not depend on selecting subjective (static) properties to perform the subset selection, nor does it rely only on flow-response vectors of a base-case scenario. In this work, the proposed technique is assessed using well placement and well control development alternatives to determine its applicability within field development planning.
Additionally, the proposed subset selection technique is implemented in an adaptive scheme to solve a well placement optimization problem. The results are promising as the proposed technique consistently selects subsets that are able to rank development alternatives in a similar manner to the full set regardless of the type of development strategy (well control settings or well placement). Furthermore, the implementation of the proposed technique in an adaptive scheme is able to reduce the computational costs, on average, by a factor close to 9 without compromising the solution found for well placement optimization.
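The ranking-consistency criterion can be sketched with a brute-force subset search (illustrative only; the paper's actual selection procedure and fidelity measure may differ, and the NPV matrix below is hypothetical):

```python
import itertools
import numpy as np

def select_subset(responses, subset_size):
    """Pick the realization subset whose mean response ranks a batch of
    development alternatives most like the full ensemble does.

    responses[i, j] is the objective (e.g. NPV) of alternative i under
    realization j; ranking fidelity is the Spearman correlation between
    subset-mean and full-mean alternative rankings.
    """
    ranks = lambda x: np.argsort(np.argsort(x))
    full_rank = ranks(responses.mean(axis=1))
    best, best_rho = None, -np.inf
    for subset in itertools.combinations(range(responses.shape[1]), subset_size):
        sub_rank = ranks(responses[:, subset].mean(axis=1))
        rho = np.corrcoef(full_rank, sub_rank)[0, 1]  # Spearman via ranks
        if rho > best_rho:
            best, best_rho = subset, rho
    return best, best_rho

# Hypothetical NPV matrix: 8 alternatives under 6 geological realizations.
responses = np.random.default_rng(1).random((8, 6)) * 100
subset, rho = select_subset(responses, 2)
print(len(subset), -1.0 <= rho <= 1.0)  # → 2 True
```

Optimizing then runs on the two selected realizations instead of six, which is the source of the computational saving; the exhaustive search here would be replaced by something cheaper at field scale.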
Publisher: Informa UK Limited
Date: 04-08-2014
Publisher: EAGE Publications BV
Date: 12-06-2017
Publisher: Elsevier BV
Date: 03-2023
Publisher: Elsevier BV
Date: 07-2011
Publisher: Springer Science and Business Media LLC
Date: 19-02-2022
DOI: 10.1007/S10596-022-10135-9
Abstract: In this study, we propose the use of a first-order gradient framework, adaptive moment estimation (Adam), in conjunction with a stochastic gradient approximation, for well location and trajectory optimization problems. The Adam framework incorporates information from previous gradients to calculate variable-specific progression steps. As a result, the search progression is adjusted individually for each variable, which allows a convergence speed-up in problems where the gradients must be approximated. We argue that under computational budget constraints, local optimization algorithms provide suitable solutions from a heuristic initial guess. Nonlinear constraints are taken into account to ensure the proposed solutions do not violate practical field considerations. The performance of the proposed algorithm is compared against steepest descent and generalized pattern search using two case studies: the placement of four vertical wells and the placement of 20 nonconventional (deviated, horizontal and/or slanted) wells. The results indicate that the proposed algorithm consistently outperforms the tested methods in terms of computational efficiency and final optimum value. Additional discussions regarding nonconventional parameterization provide insights into simultaneous perturbation gradient approximations.
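A compact sketch of Adam driven by simultaneous perturbation (SPSA) gradient estimates, in their standard textbook forms (the objective below is a toy quadratic, not a well placement problem):

```python
import numpy as np

def spsa_gradient(f, x, rng, c=0.1):
    """SPSA: two evaluations of f along a random ±1 perturbation give a
    stochastic estimate of the full gradient vector."""
    delta = rng.choice([-1.0, 1.0], size=x.size)
    return (f(x + c * delta) - f(x - c * delta)) / (2.0 * c) * (1.0 / delta)

def adam_spsa_minimize(f, x0, steps=300, lr=0.1,
                       b1=0.9, b2=0.999, eps=1e-8, seed=0):
    """Adam driven by SPSA gradients: per-variable first and second
    moment estimates (m, v) scale the step for each variable, which
    damps the noise of the approximated gradients."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = spsa_gradient(f, x, rng)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)            # bias-corrected moments
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy objective with its optimum at (3, 3, 3, 3), standing in for an
# (expensive, simulator-based) well placement objective.
x = adam_spsa_minimize(lambda p: ((p - 3.0) ** 2).sum(), np.zeros(4))
print(np.allclose(x, 3.0, atol=0.5))  # → True
```

Each SPSA gradient costs only two objective evaluations regardless of dimension, which is what makes the combination attractive when every evaluation is a reservoir simulation.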
Publisher: Society of Petroleum Engineers (SPE)
Date: 05-2022
DOI: 10.2118/210562-PA
Abstract: Well control and well placement optimization have typically been considered as separate problems. More recently, a number of works have shown improved results when these two problems are considered jointly. However, this joint optimization problem, whether solved sequentially or simultaneously, is more computationally demanding. In light of this, we propose the use of capacitance-resistance models (CRMs) to reduce the computational demand of the joint optimization of well controls and well placement. Specifically, we use a bilevel (or nested) approach, where the outer loop is the well placement problem and the inner loop is the well control problem assisted by CRMs. The well placement problem is solved using particle swarm optimization (PSO), and the well control problem is solved using Adam with simultaneous perturbation stochastic approximation (SPSA) gradients. The proposed approach is compared with the conventional implementation, which uses only high-fidelity full-physics simulations, on two reservoir models of varying complexity. We also investigate the accuracy of the CRMs during the optimization procedure. The proposed approach resulted in solutions to the joint optimization problems with objective function values up to 21.8% higher than the conventional approach, and up to a 99.6% decrease in the number of required reservoir simulations.
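The capacitance-resistance model used as the inner-loop proxy can be sketched in its simplest single-producer form (a standard CRM recursion; the tau, f and rate values below are hypothetical):

```python
import numpy as np

def crm_rates(inj, q0, tau, f, dt=1.0):
    """Single-producer capacitance-resistance model (CRM):

        q[t] = q[t-1] * exp(-dt/tau) + f * (1 - exp(-dt/tau)) * inj[t]

    tau is the drainage-volume time constant and f the injector-producer
    connectivity (fraction of injected fluid supporting the producer).
    """
    decay = np.exp(-dt / tau)
    q = np.empty(len(inj))
    q[0] = q0
    for t in range(1, len(inj)):
        q[t] = q[t - 1] * decay + f * (1 - decay) * inj[t]
    return q

# Hypothetical waterflood: constant 500 bbl/d injection, 60% connectivity;
# production relaxes from 100 toward f * inj = 300 over a few tau.
q = crm_rates(np.full(200, 500.0), q0=100.0, tau=10.0, f=0.6)
print(round(q[-1], 1))  # → 300.0
```

Because evaluating this recursion is essentially free compared with a full-physics run, the inner well control loop can afford many more iterations, which is where the reported reduction in simulation count comes from.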
Publisher: Elsevier BV
Date: 03-2016
Publisher: Society of Petroleum Engineers (SPE)
Date: 09-04-2021
DOI: 10.2118/205371-PA
Abstract: An approach to the analysis of production data from waterflooded oil fields is proposed in this paper. The method builds on the established techniques of rate-transient analysis (RTA) and extends the analysis period to include the transient and steady-state effects caused by a water-injection well. This includes the initial rate transient during primary production, the depletion period of boundary-dominated flow (BDF), a transient period after injection starts and diffuses across the reservoir, and the steady-state production that follows. RTA is applied to immiscible displacement using a graph that can be used to ascertain reservoir properties and evaluate performance aspects of the waterflood. The developed solutions can also be used for accurate and rapid forecasting of all production transience and boundary-dominated behavior at all stages of field life. Rigorous solutions are derived for the transient unit-mobility displacement of a reservoir fluid, and for both constant-rate and constant-pressure injection after a period of reservoir depletion. A simple treatment of two-phase flow is given to extend this to the water/oil displacement problem. The solutions are analytical and are validated using reservoir simulation and applied to field cases. Individual wells or total fields can be studied with this technique; several examples of both are given. Practical cases are given for use of the new theory. The equations can be applied to production-data interpretation, production forecasting, injection-water allocation, and the diagnosis of waterflood performance problems.
Publisher: Elsevier
Date: 2018
Publisher: EAGE Publications BV
Date: 29-08-2016
Publisher: Springer Science and Business Media LLC
Date: 21-12-2020
Publisher: Elsevier BV
Date: 06-2019
Publisher: CSIRO Publishing
Date: 2018
DOI: 10.1071/AJ17197
Abstract: Coal seam gas (CSG) usually contains high levels of methane, which is mostly in the adsorbed state in micropores. For coal that is not highly permeable, stimulation may be required to enhance productivity. In this study, we propose a new technique to increase near-wellbore productivity in tight CSG. This technique comprises three stages: injection, soaking and production. First, nitrogen is introduced into the target formation while maintaining high reservoir pressure. Next, the well is shut in for a period of time before the gases are flowed back to the surface. The technique is based on competitive adsorption of methane and nitrogen during the shut-in period, which yields pressure build-up. As methane desorbs, the coal matrix shrinks and permeability eventually increases. The proposed technique was tested by adsorption simulation at core scale. The model was constructed for crushed samples, and the extended Langmuir isotherm and micro–macro kinetics models were applied in ASPEN adsorption software. Tight coal was then simulated with different porosity and sorption characteristics. Finally, we used the Palmer–Mansoori stress-sensitive permeability model to predict permeability changes. The results show that permeability is improved based on pressure variations. We observed 10% pressure increments with greater than 150% permeability enhancement. The model indicates the feasibility of the newly proposed technique to produce the ‘unproducible’. However, more experimental and simulation studies at a reservoir scale are needed to fully confirm the technique.
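The extended Langmuir isotherm used in the abstract's core-scale model can be sketched as follows. It gives the adsorbed amount of each component of a gas mixture from its partial pressure, with all components competing for the same surface sites; the capacities and affinity constants in the usage example are hypothetical, not the study's fitted values.

```python
def extended_langmuir(p, q_max, b):
    """Extended Langmuir isotherm for a gas mixture.

    p     -- partial pressures of the components
    q_max -- monolayer (Langmuir) capacities per component
    b     -- affinity constants per component
    Returns the adsorbed amount of each component."""
    denom = 1.0 + sum(bi * pi for bi, pi in zip(b, p))
    return [qm * bi * pi / denom for qm, bi, pi in zip(q_max, b, p)]

# Hypothetical CH4/N2 mixture: injecting N2 lowers the CH4 loading.
ch4_alone = extended_langmuir([2.0], [30.0], [0.5])[0]
ch4_mixed = extended_langmuir([2.0, 3.0], [30.0, 20.0], [0.5, 0.1])[0]
```

The competition term in the shared denominator is what drives methane desorption during the soaking stage: adding nitrogen partial pressure lowers the equilibrium methane loading even when total pressure rises.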
Publisher: CSIRO Publishing
Date: 2018
DOI: 10.1071/AJ17134
Abstract: The fracture surface roughness is an essential characteristic of the hydraulic fracturing process that has not been fully explored. The surface asperities play a significant role in proppant flow and settlement, fluid leak-off and fracture-tip movement. In this study, we performed an experimental investigation to evaluate the surface roughness of hydraulic fractures generated in lab tests. The experiments were conducted using a polyaxial cell and a cuboid siltstone block with dimensions of 20 cm × 20 cm × 16.5 cm. The fracturing fluid was injected into a hole drilled in the specimen to initiate and propagate the hydraulic fracture in nearly homogeneous siltstone material. Low injection rates were applied in all lab tests to maintain slow and stable fracture propagation, as in the field. The fracture surfaces were digitised using a high-resolution laser-scanning surface-mapping technique and analysed with standard statistical methods. Our results showed that the surface topography and roughness parameters differ across the selected segments. However, they follow an increasing trend in radial directions from the initiation point at the wellbore wall toward the specimen borders.
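Two of the standard statistical roughness parameters such an analysis typically reports are the arithmetic-mean roughness Ra and the root-mean-square roughness Rq of a height profile. A minimal sketch (the paper does not specify which parameters it used, so this is an assumed illustration):

```python
import math

def roughness_parameters(z):
    """Ra (mean absolute deviation from the mean plane) and
    Rq (root-mean-square deviation) of a 1-D height profile z."""
    zbar = sum(z) / len(z)
    ra = sum(abs(zi - zbar) for zi in z) / len(z)
    rq = math.sqrt(sum((zi - zbar) ** 2 for zi in z) / len(z))
    return ra, rq
```

Computing these per segment of the scanned surface is one way to quantify the radial roughness trend the abstract describes.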
Publisher: SPE
Date: 20-10-2015
DOI: 10.2118/176468-MS
Abstract: Finding an optimal well placement scenario is a complex and computationally intensive problem. The major computational cost is associated with the numerical reservoir simulation, which must be executed for the fitness evaluation of each scenario. For such problems, a robust optimization algorithm is required. Population-based algorithms are powerful, but computationally demanding, optimizers. Surrogate-assisted algorithms have been developed to reduce the CPU time of these optimizers by substituting the original fitness function (OF), partially or completely, with an approximation function (AF), often known as a proxy or surrogate. In these algorithms, a model management strategy should be implemented to use the OF effectively and minimally during the optimization process. In this study, a self-adaptive model management strategy is developed, in which two surrogates are required. The first surrogate approximates the fitness-function landscape, and the second estimates the fidelity of the first surrogate over the search space (uncertainty map). According to the estimated uncertainty, the probability of using the OF is calculated for each individual, and the algorithm then stochastically decides to use the OF or AF. A heuristic fuzzy rule is applied to define the range of probability values in each evolution cycle, based on the average fidelity of the second surrogate. The proposed technique is implemented within a genetic algorithm, and two artificial neural networks are used as the proxies, which are trained online. The robustness of the proposed algorithm is analyzed using a semi-synthetic reservoir model, PUNQ-S3. The outcomes are compared with the results achieved by two common algorithms: a typical (unassisted) genetic algorithm, and an offline-learning surrogate-assisted genetic algorithm. The comparison indicates that the proposed method can reduce the computational costs by up to 48%.
This surrogate-assisted algorithm may enhance the applicability of population-based algorithms in well placement optimization problems, by reducing the associated computational costs.
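The stochastic OF-versus-AF decision described in this abstract can be sketched as below. The per-individual probability of calling the expensive simulator grows with the surrogate's estimated uncertainty and is clipped to a per-cycle range; the clipping bounds and function names here are simplified illustrative assumptions, standing in for the paper's heuristic fuzzy rule.

```python
import random

def of_probability(uncertainty, p_min=0.1, p_max=0.9):
    """Probability of using the original fitness function (OF) for one
    individual: proportional to the estimated surrogate uncertainty,
    clipped to the range [p_min, p_max] set for the current cycle."""
    return min(max(uncertainty, p_min), p_max)

def choose_evaluator(uncertainty, p_min=0.1, p_max=0.9, rng=random):
    """Stochastically pick the true simulator (OF) or the cheap
    surrogate (AF) for this individual's fitness evaluation."""
    p_of = of_probability(uncertainty, p_min, p_max)
    return "OF" if rng.random() < p_of else "AF"
```

Individuals in well-explored regions of the search space (low uncertainty) are then mostly scored by the proxy, while individuals in poorly modelled regions trigger true simulations, which is the mechanism behind the reported reduction in simulator calls.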
Location: Iran (Islamic Republic of)
No related grants have been discovered for Mohammad Sayyafzadeh.