By Sumon Battacharyya

This article was first published on LinkedIn, January 6th, 2017

Happy New Year!!!! Let me begin by apologizing for the hiatus in writing about my understanding of geological models. So eventually, after ignoring my survival instincts, I finally managed to get my act together and write something on a topic which all of us have gracefully parked aside and have not even bothered to question. Like many other things in the geological modelling world, this is taken for granted and has fallen into the great chasm of the automated world that does not value experience (sorry for the moan, but you know I am right!).

This extremely important parameter, namely Net-to-Gross (NTG), has a significant impact on reservoir volumes and can make or break the in-place estimates. With more and more reservoir volumes (OIIP) being reported from three-dimensional reservoir models, non-standardized definitions and modelling methodologies can lead to staggeringly different in-place volumes. As we get dazzled by the beautiful, gorgeous display of 3D images and views (exciting times, isn't it!!! rotating, moving 3D cubes, wow!!! better graphics cards, crystal-clear displays and what not!!), we have completely forgotten to ask the question "Are the results consistent or not?" (Here I go again!!!)

Let me begin by quoting a section from the PRMS Guidelines for Evaluation of Reserves and Resources (2001), where a comparison is given for the volumetric overestimation of pre- versus post-drill resources on the Norwegian Continental Shelf. To quote: "Overall experience shows that the prediction of gross rock volumes (GRVs) is the primary contributing factor leading to this overestimation while the prediction of net-to-gross values is the second most important factor". This, for me, neatly summarizes how NTG is a key factor in volume calculations and has to be understood and modelled correctly, irrespective of whether you are comparing pre- and post-drill estimates or just examining how it has been distributed in the model. The important and relevant question is: if NTG is so important, why is it not being debated or argued over in the reservoir-modelling or technical world? Why are there virtually no publications on NTG? (Go and search OnePetro and check the number of papers.) Is this purely a case of an unimportant topic, or is it more a case of not wanting to open the can of worms!! To my surprise, NTG has been so painfully ignored that we have even forgotten to define properly what it means. Maybe it is not as important as I think it is! Maybe I am living in my bubble which someone has to pop!! So as a warning, and since this is my blog: if you feel that NTG is important then keep reading, or else this is the right time to press the back button on your browser, or better still close the browser.

Well, if you are still here, just to be clear: specifically for this article we are discussing only the impact of NTG on the modelling and estimation of OIIP, and not the reservoir engineering/simulation issues.

Let us start with a very basic observation (which you already know). If you look at the standard volumetric equation (http://wiki.aapg.org/Reserves_estimation), there is a term "h" in the equation. This "h" represents the thickness of the pay zone from log and core data; in other words, it is the total (gross) thickness times the Net-to-Gross, giving the net pay thickness. So from an OIIP perspective, the pay thickness should be sufficient to define the Net-to-Gross. In other words, Net-to-Gross, or Net/Gross, or NTG represents the productive hydrocarbon zones in the reservoir available for further exploitation. However, this is not always the case, as you will see in the subsequent sections.
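To make the role of "h" concrete, here is a minimal sketch of the standard volumetric equation in field units. All the input values and variable names below are my own illustrative placeholders, not numbers from this article:

```python
# A minimal sketch of the volumetric equation in field units.
# 7758 converts acre-ft to barrels; all inputs here are invented examples.

def oiip_stb(area_acres, gross_thickness_ft, ntg, porosity, sw, bo):
    """Oil initially in place, stock-tank barrels."""
    net_pay_ft = gross_thickness_ft * ntg    # "h" = gross thickness x NTG
    return 7758 * area_acres * net_pay_ft * porosity * (1 - sw) / bo

# Example: 640 acres, 100 ft gross, NTG 0.6, 20% porosity, 30% Sw, Bo 1.2
print(f"OIIP = {oiip_stb(640, 100, 0.6, 0.20, 0.30, 1.2):,.0f} STB")
```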

So how do you define the NTG? Traditionally, NTG has been defined using petrophysical cut-offs, or just "cut-offs". However, there have always been issues regarding the definition and derivation of NTG using petrophysical cut-offs. There is a range of questions, starting from the definitions and relationships of "Net Sand" to "Net Reservoir" to "Net Pay". The question has always been how to converge the varying definitions used by the petrophysicist, the geologist and the reservoir engineer. There is no doubt that these cut-offs can significantly affect the reservoir volumes, so it is ironic that no standard has been agreed for the definition of these parameters (please note, what I mean here is purely consistency of definition, not the method). However, since the topic of discussion is 3D geological modelling of NTG, let us not go into the issues of definition and imagine, for simplicity's sake, that a set of cut-offs is required to define NTG (whether this is Net Sand, Net Reservoir or Net Pay is a question better parked for the time being; as a side note, you are already aware that in models which are going to be used for simulation, it is important to keep the aquifer in mind, so having only pay may not be a good choice!!). So, indirectly speaking, the NTG is just a flag of 0 (Non-Reservoir/Non-Pay) and 1 (Reservoir/Pay), which is then summed over a zone by the petrophysicist and hence reported as a fraction.
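As a hedged sketch of that flag-and-average logic, here is how a net flag might be derived from log curves. The cut-off values (Vsh < 0.5, porosity > 0.08, Sw < 0.65) and the five-sample "log" are purely illustrative placeholders, not recommended standards:

```python
import numpy as np

# Toy log curves, one value per depth sample (all fractions, all invented).
vsh = np.array([0.20, 0.70, 0.40, 0.90, 0.10])  # shale volume
phi = np.array([0.18, 0.05, 0.12, 0.03, 0.22])  # porosity
sw  = np.array([0.30, 0.90, 0.55, 0.95, 0.25])  # water saturation

# 0/1 flag per sample from the (hypothetical) cut-offs...
net_flag = ((vsh < 0.5) & (phi > 0.08) & (sw < 0.65)).astype(int)

# ...which only becomes a fraction once summed over the zone.
print(net_flag, f"NTG = {net_flag.mean():.2f}")   # [1 0 1 0 1] NTG = 0.60
```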

Now comes the most relevant part of this discussion: how do we model NTG in 3D? In fact, let me ask an even more basic question: how does someone distribute a flag marked as either 0 or 1?? What is the relation of this flag to geology, to depositional facies/architecture, or for that matter to diagenesis or compaction?? And even if we somehow do know this correctly, do we know how the flag is distributed laterally within these depositional architectural elements? We will get back to this later.

In order to understand and appreciate the problem, let us tackle it from two angles – before and after the popularization of geological modelling as a tool for estimating reservoir volumes – or, in other words, compare map-based volumetric calculation to volumes from modern geological modelling techniques.

In the mapping world, the NTG map – believe it or not – is fairly straightforward; other than the definition issue (which is a debate in its own right and not discussed here), making the map is not that difficult. The reason is simply the way NTG is estimated in maps. In the mapping world, NTG is averaged over a zone (remember "h" in the volumetric equation) and hence is mostly a fraction, apart from the area above the contact, and is seldom 1 (or 100% net) across a zone. This makes life easier, as you can assume a normal or log-normal distribution of NTG across the mapped area; in other words, there are no extremes, i.e. no 0s and 1s, in the distribution. If there are 0s, or no-pay areas, in the maps, they can be conveniently edited taking the geology into account. No issue, right!! This, in essence, gives a volumetric estimate that is fairly robust. By the way, all the Crystal Ball fans out there… you can safely assume a normal distribution of NTG to calculate the volume spread; all you need to worry about is the definition 🙂
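For the Crystal Ball fans, a toy Monte Carlo along exactly those lines. The mean, standard deviation and every other input below are invented for illustration; the only point is that a zone-averaged NTG is a smooth fraction you can reasonably sample from a normal distribution:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Zone-averaged NTG: a well-behaved fraction, no spikes at 0 or 1.
ntg = np.clip(rng.normal(loc=0.55, scale=0.08, size=n), 0.0, 1.0)

grv_acre_ft = 64_000            # gross rock volume, held fixed for simplicity
phi, sw, bo = 0.20, 0.30, 1.2   # invented single values
oiip = 7758 * grv_acre_ft * ntg * phi * (1 - sw) / bo

p90, p50, p10 = np.percentile(oiip, [10, 50, 90])   # P90 = exceeded 90% of the time
print(f"P90 {p90:,.0f}  P50 {p50:,.0f}  P10 {p10:,.0f} STB")
```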

On the other hand, within the reservoir model the problem has now multiplied a million times, since the user has to assign NTG to each cell of the million-cell grid. He or she also now has a flag of 0 and 1 at log resolution. So the million-dollar question is: how do you actually distribute a flag (a set of 0s and 1s) in the model? Or, more appropriately, do you actually need to distribute the 0s and 1s at all, or can you assume a simple constant NTG?

Some of my intelligent colleagues might argue that when you upscale, the NTG will become fractional. Simple!! No, my friends, it is not as simple as that!! In my experiments with truth (;)), no matter how much you upscale, there will always be a significant number of cells that are exactly 0 or 1. The only exception is if you build a single-layer-per-zone model, which none of us is going to do (as this defeats the very purpose of modelling and capturing the heterogeneity of the reservoir). This means that we have to somehow find a robust way of distributing these 0s and 1s in the model.
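A toy illustration of that claim, assuming a synthetic log of vertically persistent net/non-net packages (all numbers invented; 10 fine samples per grid cell). Because net and non-net come in packages rather than as random noise, most cells fall entirely inside one package and average to exactly 0 or 1:

```python
import numpy as np

rng = np.random.default_rng(1)
package = rng.random(800) < 0.6             # 800 geological packages, ~60% net
log = np.repeat(package, 25).astype(float)  # each package = 25 fine log samples

cells = log.reshape(-1, 10).mean(axis=1)    # arithmetic average into cells of 10
frac_extreme = np.mean((cells == 0.0) | (cells == 1.0))
print(f"{frac_extreme:.0%} of upscaled cells are still exactly 0 or 1")
```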

In general, there are several different methods for modelling Net-to-Gross in the industry, depending on which company you are talking to, and almost all the big, brilliant multinational companies have their own method, which even the Creator (Almighty GOD) can't question. The most commonly practiced methods are:

a)   Assigning a value of 0 to non-reservoir rocks and a value of 1 to reservoir rocks. This is principally based on the depositional facies model, so the big assumption here is that the depositional facies/architectural elements in the model are directly related to pay.

b)  Treating NTG as a facies and distributing the non-reservoir and reservoir rocks using a facies modelling algorithm. This can be done with or without a depositional facies framework.

c)   Simply modelling the NTG stochastically (it could be any "net", but in most of the cases I have seen so far it is the Net Reservoir). The net and non-net reservoir flags are treated as a continuous property and modelled accordingly after upscaling. This can be done with or without a facies model. The first part of the workflow is to distribute the net porosity in the model; the second step is to distribute the flag as a continuous property using collocated co-kriging (with the porosity model implicitly acting as the trend). It could be done the other way round, if you know how to distribute NTG first. The rationale for this methodology is to prevent double-dipping on the reservoir volumes (the reduction in volume comes from the NTG distribution and not from porosity).

d)  Deriving the NTG model by applying cut-offs to the final porosity, Vsh, permeability or saturation models.

There are other methods as well, e.g. using a cross-plot between upscaled NTG and porosity to distribute the NTG (which is also flawed, like the rest), and many others. But for simplicity's sake, let us concentrate on the methods above; a bare-bones sketch of method a) follows.
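To make method a) concrete, here is the facies-to-NTG lookup it boils down to. The facies codes and the lookup table are hypothetical; the point is that the entire NTG model is only as good as the assumption baked into that table:

```python
import numpy as np

# Hypothetical facies model: 0 = shale, 1 = channel sand, 2 = overbank.
facies = np.array([0, 1, 1, 2, 0, 1, 2, 2])

# Method a)'s big assumption, written out: every reservoir facies is
# 100% net and every non-reservoir facies is 0% net.
ntg_by_facies = {0: 0.0, 1: 1.0, 2: 1.0}

ntg = np.vectorize(ntg_by_facies.get)(facies).astype(float)
print(ntg)   # [0. 1. 1. 1. 0. 1. 1. 1.]
```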

Now, each of the methods described above has some inherent flaws. I will leave it to the reader to judge the seriousness of the issue whilst I highlight the problems with the assumptions.

Method a) assumes that there is no non-net within a depositional facies, which is definitely not correct, as each depositional facies will have some non-net within it (due to the sedimentological character of the rock from deposition, compaction or diagenesis), and hence the exercise defeats its own purpose.

Method b) has several issues. Firstly, from the algorithmic perspective it is not advisable to model just two facies (it becomes a binary either/or case, which is definitely not suited to capturing the variability, and of course both facies end up sharing a similar variogram, etc.); secondly, there is the impossible task of relating these flags to depositional architecture. Remember, reservoir/pay is only a flag with no geological significance, so it becomes rather challenging to relate it directly to the depositional architecture of the reservoir.

Method c) distributes the NTG stochastically. This indeed is a good one: how do you stochastically distribute a property in which 90% of the samples sit at the values 0 and 1? (I know how it is done, but I doubt the correctness of the method.) Having spikes in the distribution (in this case one at 0 and another at 1) renders the stochastic distribution questionable, and forcing a Normal Score Transform onto such a distribution is definitely asking for trouble (a sketch of the problem follows after these method notes). Also, the lateral trends would be difficult to quantify, as it is hard to relate a lateral trend to a flag (I am aware of different variants of this method, but that is beside the point I am trying to make). Also remember that the porosity in this workflow is net porosity, and so is the permeability; this might create exaggerated connectivity in the reservoir model.

Method d), applying cut-offs to the final model, could create connectivity issues that are controlled purely by stochastic imprint. It also requires the user to have distributed all the porosity, net and non-net, correctly in the model (again, non-trivial). And, as I said earlier, the question of which cut-off to use is again academic. Moreover, the saturation cut-off might wreak havoc, as it could completely wipe out the volumes within the transition zone and in cells with poorer rock quality (and obviously you would struggle to define a height function for poorer-quality rocks). The possible issues here can be quite significant, and each cut-off could have grievous consequences for the volumes.
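Here is the sketch promised under method c): a rank-based normal-score transform applied to a spike-dominated distribution. The sample proportions (45% zeros, 45% ones, 10% fractional) are invented for illustration. All the tied samples at 0 collapse onto a single normal score, as do all the ties at 1, so the "Gaussian" variable the geostatistics then works with is anything but Gaussian:

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(7)
# Invented spike-heavy sample: 450 zeros, 450 ones, 100 fractional values.
data = np.concatenate([np.zeros(450), np.ones(450), rng.random(100)])

# Rank-based normal-score transform; ties receive the same average rank.
ranks = rankdata(data, method="average")
nscores = norm.ppf(ranks / (len(data) + 1))

print(np.unique(nscores[data == 0]))   # one single score for all 450 zeros
print(np.unique(nscores[data == 1]))   # one single score for all 450 ones
```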

So what is the solution?? What is the right way of modelling NTG?? Is there a method??? How do we ensure that the estimated volumes are correct???

Right now, I cannot answer all these questions. I guess I have managed to highlight the problems of modelling NTG in this article, and someone, somewhere, will take it seriously and give it the serious nudge it deserves… The issue is definitely non-trivial and hence needs significant understanding and further work.

Now come on, all is not lost… For one aspect that we started discussing earlier, the OIIP, we definitely can find the answer without over- or under-estimating: there is definitely a way to cross-check the OIIP. The rest of the issues, including the best solutions and the impact on overall connectivity, are a different ball game that needs further work.

I think it is appropriate to mention here that I did start working on the issue, but unfortunately, due to the unavailability of resources to work on it independently, I have put my curiosity to rest. This is the only time I have wished I had millions sitting in my bank!!! BTW, mind you!! the quest for knowledge is resting, not killed!!!!

As a footnote, I have seen models built using all of the above methods (it pays to have seen, QC'ed and reviewed so many models) which have been history-matched (a history match is no criterion for classing a model as predictive – you can force any model to match history) – so please don't go there!!!

For the time being, let me conclude by warning you to be careful and to cross-check the OIIP through alternative methods before reporting it directly from a 3D geological model.

Feel free to share!! and thanks for reading.

 
