
Extracting features from seismic

So far we have used our TERR engine to calculate on existing subsurface geometries; in this post, we will use it to generate new geometries for the scene. In particular, we are going to extract ‘features’ from seismic data and return them as a point set data table. This point set data table can be visualized alongside seismic, well paths, and horizons in the 3D Subsurface Visualization.

In some previous posts we have talked about the data format for seismic in Spotfire, extracted seismic values, and even run some basic seismic interpretation. In this post, we are going to read in seismic and return a data table of features – basically X, Y, Z and size information. These will be displayed as spheres in the subsurface. Here’s what we are going to produce in the end.

To get started we need to create a data function that reads in seismic trace data. In this workflow, we will use 2D surveys, but this would work equally well on 3D data. For the purpose of demonstration, we will leverage the high-pass filter functionality that we wrote before as our feature function. With this we will return a point set that identifies seismic values above the provided threshold. The size of each point will be set to the seismic value.

R is a scripting language, which means it is interpreted at runtime. It doesn’t get compiled or optimized. So, for good performance, you really need to lean on the optimized, compiled functions that are provided as libraries. In this case, we will use the ‘merge’ function to join the X, Y, and trace number columns from the seismic data table onto the point set data.
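As a quick illustration of the idea (with made-up data frames, not the actual Spotfire tables), merge joins two data frames on their shared column in a single vectorized call:

# Hypothetical example -- 'headers' and 'markers' stand in for the real tables
headers <- data.frame(Row = 1:3, SOU_X = c(100, 110, 120), SOU_Y = c(200, 210, 220))
markers <- data.frame(Row = c(1, 1, 3), Z = c(0.25, 0.60, 0.40))

# merge() joins on the shared 'Row' column in compiled code -- no explicit loop needed
joined <- merge(headers, markers)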

We’ll start by creating the arrays that hold the information. Since we don’t know how many features we’ll find, we need to grow the result arrays dynamically. If you’re looking for an optimization opportunity, here’s a great one: repeated memory allocation in a scripting language can really get out of hand. As we loop over the seismic traces, we’ll also need to calculate depths in order to position these points in the Z dimension. If your seismic is in time, then Z can also be interpreted as time; Spotfire is pretty much blind to the difference.
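One way to dodge that repeated reallocation, sketched here using the same Seismic and Threshold inputs and the read.floats helper from the full script at the end of the post, is to collect per-trace results in a pre-sized list and combine them once at the end:

# Sketch only -- mirrors the loop in the full script, but avoids growing vectors with c()
chunks <- vector("list", nrow(Seismic))            # pre-sized list, one slot per trace
for (i in 1:nrow(Seismic)) {
  values <- read.floats(as.raw(Seismic[["VALUES"]][[i]]))
  idx    <- which(values >= Threshold & !is.na(values))
  if (length(idx) > 0) {
    chunks[[i]] <- data.frame(Z = idx / length(values), Size = values[idx], Row = i)
  }
}
df <- do.call(rbind, chunks)                       # one final bind instead of many appends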

To calculate depths for each seismic value, we need to know the top and bottom Z (min/max) and the number of samples in the trace. Seismic stores samples at uniform steps, so we can easily map the index of a seismic value to a depth coordinate using a standard affine transformation:

Index*(maxZ-minZ)/numSamples + minZ
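Because this is just arithmetic, it can be applied to a whole vector of sample indices at once. Here is a minimal sketch with placeholder values for minZ, maxZ, and the index vector:

# Placeholder values -- in practice these come from the trace headers
minZ <- 0; maxZ <- 2000; numSamples <- 500
idx  <- c(10, 120, 480)                            # sample indices of detected features

depth <- idx * (maxZ - minZ) / numSamples + minZ   # affine index-to-depth mapping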

An expression like “vals > threshold” returns a Boolean (logical) array in R. We can convert this to an array of matching indices using the ‘which’ function.

which(vals > threshold)

This gives you an array of indices that can be transformed all at once using the affine depth transformation. Because we already have the header data in table format, we just need to join the trace header table with the marker Z array.
We can then send this result table back to Spotfire and add a Point Set layer in the 3D subsurface visualization, which will display each of these X, Y, Z points as spheres in the scene. You could use a fixed radius and color, or you could color by the seismic amplitude. You could even calculate some inferred points by clustering nearby points together.
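If you wanted to experiment with that clustering idea, base R’s kmeans is one option. A rough sketch, where Result is the table produced by the script below and the choice of three clusters is arbitrary:

# Sketch only -- reduce the point cloud to a few representative centers
pts      <- Result[, c("SOU_X", "SOU_Y", "Z")]
clusters <- kmeans(pts, centers = 3)               # arbitrary cluster count for illustration
inferred <- as.data.frame(clusters$centers)        # one inferred point per cluster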

These points identify valuable areas in the subsurface or even hazardous ones. They can be filtered and further manipulated in Spotfire.

You’ll need to add the point set data table to the subsurface scene as a new layer. I didn’t walk through that in this post, but you basically need to set the X, Y, Z and size variables that make sense for your scene. For instance, in my code I normalized the Z value (between 0 and 1). This means I’ll need to add a scaling factor to the Z to get it to show up on the seismic data. This could be handled in the R code, but I cut that out for brevity.
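If you’d rather handle that scaling in the R code, one simple option is to stretch the normalized Z back out over the seismic depth (or time) range before returning the table; minZ and maxZ here are placeholders for whatever your survey actually uses:

# Placeholder extents -- rescale the 0-1 normalized Z to real depth/time units
minZ <- 0; maxZ <- 2000
Result$Z <- minZ + Result$Z * (maxZ - minZ)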

Now that we’ve extracted features from seismic and represented them as point sets, next we’ll dive into what we can do with wellbore trajectories and seismic.

Here’s the full code for what we did in this post.

# Convert a raw (binary) blob of 4-byte floats into a numeric vector
read.floats <- function(data) {
  conn = rawConnection( data )
  num = length(data)/4                 # number of 4-byte samples in the blob
  values = readBin( conn, "double", size=4, n=num )
  close(conn)
  return(values)
}

# Convert a numeric vector back into a raw blob of 4-byte floats
write.floats <- function(data) {
  zz <- rawConnection(raw(0), "r+")
  writeBin( data, zz, size=4 )
  seek( zz, 0 )
  raw.data = rawConnectionValue(zz)
  close(zz)
  return(raw.data)
}

col.values = "VALUES"                  # column holding the binary trace samples

Result = NULL
Seismic$Row = 1:nrow(Seismic)          # row index, used later as the join key

z = c()                                # normalized depth (0-1) of each feature
s = c()                                # size = seismic value at the feature
r = c()                                # row (trace) the feature came from
for( i in 1:nrow(Seismic) ) {
   data = as.raw(Seismic[[col.values]][[i]])
   values = read.floats( data )

   # indices of samples at or above the threshold
   idx = which(values >= Threshold & !is.na(values))
   if(length(idx) > 0) {
      sizes = values[idx]
      z = c(z, idx/ (1.0*length(values)) )   # normalize the sample index to 0-1
      s = c(s, sizes)
      r = c(r, rep(i, length(idx)))
   }
}
df = data.frame(Z=z, Size=s, Row=r)

# Join trace header coordinates onto the feature points via the shared 'Row' column
Result = merge(Seismic[,c('FFID','TRACENO','SOU_X', 'SOU_Y', 'Row')], df)
