
Seismic interpretation in Spotfire: a high pass filter

Being able to bring seismic data into Spotfire and visualize it is powerful from a co-visualization standpoint, but now we are going to take it a step further. In this post, we will actually run a simple interpretation on the seismic by building a high-pass filter. Because R supports many different types of image and time-series analyses and transformations, there are numerous interpretations you’ll be able to implement in Spotfire. Further, because Spotfire supports a flexible data model, these calculations on seismic can draw on other ad-hoc data like wellbore trajectories, drilling data, and productivity statistics to enrich the analyses.

In a previous post, we walked through loading and extracting seismic values from the binary data. Now, we will extract, run a calculation, and save the updated values to the seismic data table.

We want to write a simple high-pass filter that takes a threshold as input and returns new seismic values: everything below the threshold is zeroed out, and everything above it retains its value. We only want “high” values to pass through. The user enters a threshold, and everything above that value is visualized.
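To make the behavior concrete, here is a minimal sketch on a toy vector (the values are made up purely for illustration, not real seismic amplitudes):

vals = c(0.1, 0.5, -0.3, 0.8)    # toy amplitudes
threshold = 0.4                  # user-supplied cutoff
vals[ vals < threshold ] = 0     # zero out everything below the threshold
vals                             # 0.0 0.5 0.0 0.8 – only "high" values remain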

It’s important to note that this could be done with just a color map; however, what we are trying to demonstrate is the capability to unpack and process the seismic. We are just using a toy example to better illustrate the mechanics and code necessary to accomplish it. Much more sophisticated calculations can be built in place of a high-pass filter.

We have several options for how we could bring the results back to Spotfire. For simplicity, we will just create a new data table that holds the results, but we could also bring back only a new column containing the interpreted seismic. The latter is more memory efficient and would be better if you wanted to run the seismic through multiple filters or calculations. It’s also possible to daisy-chain the computations so the seismic values are interpreted one after the other – this can be useful for usability or debugging.
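A daisy-chained version might look like the sketch below, where each stage consumes the previous stage’s output (read.floats and write.floats are the helpers defined later in this post; the two filter stages are hypothetical placeholders):

vals = read.floats(data)
vals = high_pass(vals, Threshold)   # hypothetical stage 1
vals = smooth_traces(vals)          # hypothetical stage 2
raw.data = write.floats(vals)       # serialize for the next step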

For this example, we have decided to use the 2D seismic surveys. You could send in the 3D seismic, but it will take longer to crunch. From a debugging perspective, it’s a good idea to start with less data, then open up the gates once you have production-ready code.

In a previous post we talked about how we can unpack seismic values using the ‘readBin’ R function. This function reads binary data and converts it into an R datatype. In this case we want to read the 4-byte binary float values into a numeric vector.

conn = rawConnection(data)
vals = readBin(conn, "double", size=4, n=length(data)/4)   # read 4-byte floats
close(conn)

Now, let’s add our high-pass filter.

vals[ vals < threshold ] = 0   # or set them to the threshold value instead of zero

Then assign it back to the data table. Here is where we need R’s ‘writeBin’ function, which writes the float array to a raw connection; ‘rawConnectionValue’ then returns the serialized bytes. This packs the array back into a format which is memory efficient and can be rendered by the 3D Subsurface Visualization.

zz <- rawConnection(raw(0), "r+")
writeBin(vals, zz, size=4)   # write the values back out as 4-byte floats
seek(zz, 0)
raw.data = rawConnectionValue(zz)
close(zz)

It’s important to note that our high-pass filter is implemented in a single line. You could replace it with another function that implements more sophisticated behavior, so your code could look like this:

vals = low_pass_filter( vals, 0.2 )
vals = predict_lithology( vals, model )
vals = contrast_filter(vals, model)

When we run this in Spotfire, we get a new data table with the updated seismic values. We can set the data function to run automatically when its inputs change – which in this case is the threshold value. Now, as the user changes the parameter, the script runs in the background and then the 3D visualization updates. I’m assuming you’ve already configured the resulting data table as a seismic layer in Spotfire.

Here’s the whole R script.

# Unpack a raw byte vector into a numeric vector of 4-byte floats
read.floats <- function(data) {
  conn = rawConnection(data)
  num = length(data)/4                 # 4 bytes per float
  values = readBin(conn, "double", size=4, n=num)
  close(conn)
  return(values)
}

# Serialize a numeric vector back into raw bytes as 4-byte floats
write.floats <- function(data) {
  zz <- rawConnection(raw(0), "r+")
  writeBin(data, zz, size=4)
  seek(zz, 0)                          # rewind before reading the bytes back
  raw.data = rawConnectionValue(zz)
  close(zz)
  return(raw.data)
}
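As a quick sanity check (not part of the original workflow), the two helpers should round-trip any values that are exactly representable as 4-byte floats:

orig = c(0.25, -1.5, 3.0)   # exactly representable in float32
all.equal(read.floats(write.floats(orig)), orig)   # TRUE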

col.values = "VALUES"

Result = NULL
Seismic$Row = 1:nrow(Seismic)   # row index used later as the merge key

z = c()   # normalized sample position along the trace (0-1)
s = c()   # amplitude of each sample that passes the filter
r = c()   # row index of the trace the sample came from
for( i in 1:nrow(Seismic) ) {
   data = as.raw(Seismic[[col.values]][[i]])
   values = read.floats( data )

   # Keep only the samples at or above the threshold
   idx = which(values >= Threshold & !is.na(values))
   if(length(idx) > 0) {
      sizes = values[idx]
      z = c(z, idx/ (1.0*length(values)) )
      s = c(s, sizes)
      r = c(r, rep(i, length(idx)))
   }
}
df = data.frame(Z=z, Size=s, Row=r)

# Join the passing samples back to the trace geometry by Row
Result = merge(Seismic[,c('FFID','TRACENO','SOU_X', 'SOU_Y', 'Row')], df)
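One design note: growing z, s, and r with c() inside the loop reallocates the vectors on every iteration, which gets slow on large surveys. Here is a sketch of an alternative that collects one small data frame per trace and binds them once – same inputs and the same Result as above, assuming the Seismic table and Threshold parameter are registered with the data function:

rows = lapply(1:nrow(Seismic), function(i) {
   values = read.floats(as.raw(Seismic[[col.values]][[i]]))
   idx = which(values >= Threshold & !is.na(values))
   if(length(idx) == 0) return(NULL)
   data.frame(Z = idx/length(values), Size = values[idx], Row = i)
})
df = do.call(rbind, rows)   # NULL entries from empty traces are dropped

The merge with the trace geometry then proceeds exactly as before.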

You can grab this workflow off of Exchange.ai here. You’ll need the 3D Subsurface Visualization to see it in Spotfire.
