Out of the box, Spotfire does not natively load or represent seismic data. In this post, we will cover how we have extended the Spotfire platform with the Segy data source (available as part of the 3D Subsurface Visualization), which allows seismic data to be loaded, represented, and visualized in Spotfire.
A program like Spotfire runs on data tables. This is convenient because it gives us a common structure for ad-hoc data; however, it can be problematic for data types that are not inherently tabular. A case in point is gridded data like seismic. We have chosen to represent seismic data in Spotfire as a data table with one row per trace.
A trace has header information (position, top and bottom depths, sample interval, and trace number, or inline and crossline numbers) plus an array of values. These values are samples in either depth or time below the surface, and they are flexible in meaning: they could be seismic amplitudes, interpreted facies, Boolean flags, or other meta-information about the subsurface.
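To make that layout concrete, here is a minimal sketch of one trace as a record. The field names are assumptions for illustration only, not actual Segy header mnemonics or Spotfire column names:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Trace:
    """One seismic trace: header fields plus its array of values."""
    trace_number: int         # or use inline/crossline for 3D surveys
    inline: int
    crossline: int
    x: float                  # position
    y: float
    top_depth: float
    bottom_depth: float
    sample_interval_ms: float
    samples: List[float]      # amplitudes, facies codes, flags, ...

# A toy trace with three samples (all values made up).
trace = Trace(1, 120, 340, 500000.0, 4200000.0, 0.0, 8.0, 4.0,
              [0.5, -1.2, 0.8])
print(trace.inline, len(trace.samples))
```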
Inside Spotfire, trace header information becomes columns of the data table, and the array of values is collapsed into a binary float array. Seismic samples can be shorts, ints, bytes, or floats, but for simplicity we represent them with floats, the most expressive (and least efficient) type. By condensing the array of values into a binary object, we can load seismic data into Spotfire efficiently. If we had a row per sample, the ability to load 3D seismic or large 2D surveys would be seriously limited; it is the difference between 600,000 rows of data and 600 million! We have been able to load the Teapot Dome 3D seismic into Spotfire running on a Surface Pro 3 with 8 GB of memory and analyze, visualize, and filter it without any performance issues.
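The collapse into a binary float array can be sketched with Python's standard struct module. The sample values and trace counts below are made up for illustration:

```python
import struct

# One trace's value array packed into a single binary blob -- the
# per-row representation described above (samples are made up).
samples = [0.1 * i for i in range(1000)]
blob = struct.pack(f"<{len(samples)}f", *samples)

# 32-bit floats are 4 bytes each: 1,000 samples collapse into one
# 4,000-byte cell instead of 1,000 extra rows.
print(len(blob))  # 4000

# The round trip recovers the samples (at float32 precision).
restored = struct.unpack(f"<{len(samples)}f", blob)

# The row-count arithmetic from the text: 600,000 traces with
# ~1,000 samples each would otherwise be 600 million rows.
print(600_000 * 1000)  # 600000000
```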
It’s important to note that Segy is an abused data format. What I mean is that people make a mess of where they store certain trace attributes, even though there is a standard for it. So sometimes INLINENO is in the CDP_X location. A good rule of thumb is to poke around the loaded data to see what is what. The nice thing is that inside Spotfire you just tell it which columns to use; it won’t hiccup if CDP_Y is actually XLINENO.
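As a sketch of that "poking around," the snippet below builds a fake 240-byte trace header and dumps the values at the SEG-Y rev 1 byte positions for CDP_X, CDP_Y, inline, and crossline. In a real file you would read these bytes from disk; the values here are made up, and the point is to eyeball whether each slot's value is plausible for its name:

```python
import struct

# Build a fake 240-byte SEG-Y trace header for illustration.
header = bytearray(240)

# SEG-Y rev 1 slots (1-based byte positions): CDP_X at 181-184,
# CDP_Y at 185-188, inline at 189-192, crossline at 193-196.
# Trace header integers are conventionally big-endian.
struct.pack_into(">i", header, 180, 500_000)    # CDP_X
struct.pack_into(">i", header, 184, 4_200_000)  # CDP_Y
struct.pack_into(">i", header, 188, 120)        # inline
struct.pack_into(">i", header, 192, 340)        # crossline

# Dump the "standard" slots: a seven-digit "inline number" would be
# a red flag that the writer stashed a coordinate there instead.
for name, offset in [("CDP_X", 180), ("CDP_Y", 184),
                     ("INLINENO", 188), ("XLINENO", 192)]:
    value, = struct.unpack_from(">i", header, offset)
    print(name, value)
```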
Here’s a short video showing Segy data being loaded into Spotfire.
Now that we have seismic data in Spotfire, we can visualize it with the 3D subsurface visualization, for either 2D surveys or 3D volumes. To do so, just add a layer for seismic and point it at the right table and columns. You can even write custom expressions for the columns in case you want to do a crude time-to-depth conversion.
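For instance, a crude constant-velocity conversion is just d = v * t / 2 applied to two-way travel time, the kind of arithmetic a custom expression can carry. The velocity below is an assumed value for illustration:

```python
# Assumed average velocity for the crude conversion, in m/s.
V = 2500.0

def twt_to_depth(twt_ms: float) -> float:
    """Convert two-way travel time (ms) to depth (m): d = v * t / 2.

    The factor of 2 accounts for the wave traveling down and back up.
    """
    return V * (twt_ms / 1000.0) / 2.0

print(twt_to_depth(2000.0))  # 2500.0 m for 2 s of two-way time
```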
The color property pane is a powerful tool that lets you specify scientific color maps in HSV space, with support for alpha blending. Adding alpha is nice because you can see through portions of the seismic, making it easier to see how stratigraphy or subsurface patterns correlate with well placement. You can think of hue in HSV space as a circle (ref HSV). The property dialog expects values between 0 and 1.
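As an illustration of HSV-plus-alpha stops in the 0-to-1 range, here is a toy red-white-blue seismic palette using Python's standard colorsys module. The stop values are made up, and this is a sketch of the idea, not the Spotfire dialog's actual input format:

```python
import colorsys

# Color stops as (hue, saturation, value, alpha), all in [0, 1].
# Alpha 0.0 makes near-zero amplitudes fully transparent, so you can
# see through the quiet parts of the volume.
stops = [
    (0.66, 1.0, 1.0, 1.0),  # blue: strong negative amplitude, opaque
    (0.00, 0.0, 1.0, 0.0),  # white: near-zero amplitude, transparent
    (0.00, 1.0, 1.0, 1.0),  # red: strong positive amplitude, opaque
]

for h, s, v, a in stops:
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    print(f"rgba({r:.2f}, {g:.2f}, {b:.2f}, {a:.2f})")
```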
Once we have the seismic visualized inside Spotfire, we can use filtering to hide or show certain traces. This can be useful to focus on a certain area of the seismic volume or 2D survey.
You can improve rendering performance by disconnecting the seismic layer from filtering and marking. The visualization can then simply cache the seismic, since it knows the underlying data won’t change.
I know what you’re thinking: visualizing seismic isn’t new! That’s true, but now that it is inside Spotfire, we can leverage it in some novel workflows. We’ll start that process by pulling it into TERR, TIBCO’s R-based compute engine.
So, in the next post we will take a look at how you can bring this seismic data into TERR and unpack the values.
You can access the DXP for this post on Exchange.ai here. You’ll need Spotfire, of course. In case you don’t have it, here’s where you can get a trial license.