Platforms in Mining - Eclipse Mining Technologies
Whitepapers, 2020/02/18

Platforms in Mining

“This article expands on the possibilities of how true open data in an agile enterprise platform can push the envelope of how we work with data in the mining industry.”

Erik Johnson has over 20 years of experience leading highly skilled teams in the creation of high-performance engineering and scientific software for the mining industry. His industry knowledge and background in mining engineering, geology, geophysics, and geomodeling provide insight into how we view and use the invaluable asset of DATA in mining.

This is the second in a series of articles that discuss the realities of open data in the mining industry. The first article outlined the current state of the industry, highlighting the strengths, weaknesses, and misconceptions about how we work with open data in mining. This article will address the need for a platform to hold and curate open data. The final article ties the two together in a real-world way and outlines what the future of mining data will be.

What Should A Mined Data Platform Be?

Platform, ecosystem, hub – these are all terms we have heard applied in different ways to mining technologies through the years. These poorly and loosely defined terms allow vendors to offer solutions that may or may not meet our expectations. Regardless of what we hope for, these “platforms” are significantly divergent in capabilities, the scope of what they can solve, and ultimately, value.

Let’s take a step back and agnostically examine what a mining data platform should be, based on the needs of mining. The strength of modern data analytic techniques is the ability to ‘close the loop’ on disparate but associated processes that affect one another. For example, rock hardness affects blasting and blast patterns, which affects the need for secondary crushing, which impacts the budget, and so on. These relationships are ubiquitous in mining, but the data for each lives on its own “data island”, even when it comes from the same provider. Couple this with the fact that most operations use multiple providers; more than likely, they have dozens of providers and technologies.

A mining platform should not ‘own’ all of its data; instead, it should be able to contextualize it with the rest of the mine’s data and provide a centralized method to discover and access key elements for use in more whole-mine, systemic methods of data analytics. Moreover, this solution should be open, allowing all other solutions to query, retrieve, and update the data in a platform-agnostic way. And it needs to extend to modern interface paradigms, such as the ability to quickly and easily query and retrieve data using methods such as a simple JSON service.
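To make the “simple JSON service” idea concrete, here is a minimal sketch of what consuming such a service might look like. The payload shape, endpoint semantics, and field names (`query`, `results`, `type`, `updated`) are purely illustrative assumptions, not a real API:

```python
import json

# Hypothetical JSON payload, as a platform-agnostic query service might
# return it for "all data related to North Pit B". The structure and field
# names here are illustrative assumptions, not a real vendor API.
response_body = """
{
  "query": {"area": "North Pit B", "types": ["drillhole", "blast_pattern"]},
  "results": [
    {"id": "dh-001", "type": "drillhole", "updated": "2020-01-15T09:30:00Z"},
    {"id": "bp-114", "type": "blast_pattern", "updated": "2020-02-01T14:05:00Z"}
  ]
}
"""

payload = json.loads(response_body)

# Because every result carries its type and timestamp, a consumer can filter
# and correlate without knowing anything about the producing application.
drillholes = [r for r in payload["results"] if r["type"] == "drillhole"]
print(len(drillholes))  # 1
```

The point is not the specific schema but that any tool able to parse JSON can participate, with no vendor SDK required.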

We need accessible, searchable, contextual, fully described, vendor agnostic, correlate-able open data in a unifying platform to make these methods truly useful in the commercial space.

Data Platform Characteristics:

Let’s break down this newly defined mining platform into various key characteristics.

Centralize The Data

Querying and correlating across multiple platforms is nearly impossible. Say you want all the data related to “North Pit B”: blast patterns, drillholes, pit designs, design documents, and so on. This would require queries against multiple data sources on multiple systems, and often on multiple individuals’ computers.
A successful data platform would centralize all of these data types for easy correlation and querying, and would be able to report and aggregate data across multiple mines.

This aggregation across mines significantly affects how a platform needs to be defined and designed from the beginning. This is especially of concern when it comes to ‘keying’ values. A data key for one site may be duplicated at another site, requiring an error-prone and resource-intensive ETL process to refactor the data, which also usually breaks the link back to the source data (making it nearly impossible to trace an issue back to before the ETL).
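One common way to avoid the key-collision problem described above is to qualify every record key with its site from the start, so local identifiers never need to be rewritten. This is a minimal sketch of that idea; the site and record names are invented for illustration:

```python
# Sketch of site-qualified keys: by making the platform-wide key a
# (site_id, local_id) pair, two sites can reuse the same local key without
# any collision, and no ETL re-keying pass is ever needed. All names here
# are illustrative.
records = {}

def put(site_id: str, local_id: str, value: dict) -> None:
    # The original local_id survives unchanged inside the composite key,
    # preserving the traceable link back to the source data.
    records[(site_id, local_id)] = value

put("mine_a", "BM-100", {"type": "block_model"})
put("mine_b", "BM-100", {"type": "block_model"})  # same local key, no clash

print(len(records))  # 2
```

Designing keys this way from the beginning is far cheaper than refactoring them later, which is exactly the point made above.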

Attribute / Correlate / Contextualize

All mining software solutions have some level of data context. In many, it is just a filename and directory structure. For more meaningful use, we need more context than this. We need to know when data or processes were updated, what was updated, by whom, and what process updated the data, as well as links to the preserved setup used to run that process and all associated data. All of this context should be captured automatically by the platform.

Data should also be able to be attributed with robust custom/ user-defined information to allow for better organization, tracking, and use in various mining workflows and applications.

Track History / Audit

Related to the context of the data, all of the context and history of the data should be preserved and managed by the platform in a way that makes it easy for a user to understand the history of any piece of data. This tracking should happen automatically at the moment the data or metadata is changed. More than just the end result should be tracked; all of the data used in an operation should also be tracked and linked into the history and audit trail. Users should be able to easily ask questions such as: When did this happen (or did it happen at all)? What did the data look like at a specific point in time? Am I using the latest information? They should also be able to easily see the data and/or metadata in that original state.

Vendor Agnostic (Exposed Open)

Mining companies live in the reality of multiple mining technology solutions from many different companies. Mining technology companies are loath to allow other vendors access to their own data backends and only address data types for which they have a monetized solution (for example they will not make a data store for FMS data if they are not also selling an FMS solution).

The only hope for any meaningful mining data platform is for it to be truly open and vendor agnostic.

Communication / Messaging System

The expected behavior of a modern platform is for it to communicate effectively with the outside world of data and controls. The platform must be able to listen for data changes in the greater ecosystem as well as tell others (tools, systems, dashboards) about changes that are happening within the data itself. For example, a platform should detect when changes to data have occurred (such as a block model has been updated), and perform actions as required (notify relevant stakeholders with messages, automatically update the model for a report, and have a web dashboard update).
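The listen-and-notify behavior described above is essentially a publish/subscribe pattern. Here is a minimal sketch of that pattern; the topic name, payload fields, and handler are illustrative assumptions, not any particular platform's API:

```python
from collections import defaultdict
from typing import Callable

# Minimal publish/subscribe sketch of the messaging behavior described
# above. Topic names and event payloads are illustrative.
subscribers: defaultdict = defaultdict(list)

def subscribe(topic: str, handler: Callable[[dict], None]) -> None:
    subscribers[topic].append(handler)

def publish(topic: str, event: dict) -> None:
    # Every registered listener (a dashboard, a report generator, a
    # stakeholder alert) is notified in one publish call.
    for handler in subscribers[topic]:
        handler(event)

received = []
subscribe("block_model.updated", received.append)

# When the platform detects that a block model has changed, it publishes
# the change once and all interested consumers react.
publish("block_model.updated", {"model": "north_pit_b", "rev": 42})
print(received[0]["rev"])  # 42
```

Decoupling producers from consumers this way is what lets a report, a web dashboard, and a human notification all react to the same change without any of them knowing about the others.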

Mining Specific

General platform solutions do not scale well to the specifics of mining data. Mining data has special and diverse characteristics like time-series data, block models, constantly changing terrain models, and drill hole data. Solutions specific to different fields can be excellent in some areas and just not work at all in others.

Visualization / Spatial

A mining platform requires a visual component. By its very nature, mining is a geospatially 3D process (more precisely 4D, as it changes radically through time, past and future), not the 2/2.5D seen in popular GIS packages.

Ideally, this visualization component would not be built on a single-precision floating-point backbone. While this seems like an oddly specific requirement, single-precision floats are the basis of most gaming engines and many commercial CAD packages, and they have significant repercussions when used in mining software: single-precision coordinates over kilometer distances can result in significant positional error.
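The precision claim is easy to demonstrate. At a typical UTM-scale easting of 500,000 m, the gap between adjacent representable 32-bit float values is about 3 cm, so a single-precision coordinate simply cannot resolve positions more finely than that, while a 64-bit double at the same coordinate resolves to well under a nanometer:

```python
import numpy as np

# np.spacing gives the gap to the next representable value (the ULP) at a
# given number, i.e. the best positional resolution achievable there.
easting32 = np.float32(500_000.0)   # kilometre-scale coordinate, 32-bit
ulp32 = np.spacing(easting32)       # resolution limit in single precision
ulp64 = np.spacing(500_000.0)       # same coordinate in double precision

print(float(ulp32))  # 0.03125  -> ~3 cm quantization step
print(float(ulp64))  # ~5.8e-11 -> sub-nanometre step
```

This is why engines that store world coordinates as 32-bit floats can show survey points visibly jittering or snapping at mine-site scales, and why a mining visualization backbone needs double precision (or local coordinate offsets).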

Network Connection Tolerance

An operating mine is a never-ending construction zone. Network connectivity can be unavailable for a whole host of reasons, or one could simply be trying to get some work done at the coffee shop with no VPN access. This should not stop a user from getting work done. The platform should be able to capture changes to data, setup, and metadata even when disconnected from the server, and these changes should be seamlessly reconciled with the main server when the connection is restored.
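One common way to implement this tolerance is a local change journal that is replayed on reconnect. The sketch below illustrates the idea only; the "server" is a plain dict standing in for a real backend, and all names are invented:

```python
# Sketch of disconnect-tolerant editing: while offline, changes are captured
# in a local journal; on reconnect, the journal is replayed in order against
# the server. Real systems also need conflict resolution, which is omitted
# here for brevity.
server = {"surface_model": "rev-7"}
journal = []        # local, ordered queue of pending changes
connected = False

def record_change(key: str, value: str) -> None:
    if connected:
        server[key] = value            # online: write through immediately
    else:
        journal.append((key, value))   # offline: capture for later replay

record_change("surface_model", "rev-8")   # edit made at the coffee shop
record_change("pit_design", "rev-1")      # another offline edit

connected = True
while journal:                             # connection restored: reconcile
    key, value = journal.pop(0)
    server[key] = value

print(server["surface_model"])  # rev-8
```

Because the journal preserves ordering, the server ends up in the same state it would have reached had the user been online the whole time.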

So What Platforms Exist Now?

The most common solutions are a collection of legacy individual applications storing to a proprietary set of binary files on local machines and shared drives. Nothing is tracked, attribution is by filename, and there is no way to run a simple query across data from multiple individual applications.

Some have tried to insert a check-in/check-out system on top of the above. This allows users to add some context to the files being changed. While this should be a step forward, it rarely is, as it requires users to be very disciplined in their workflow. Moreover, the data stored is rarely anything beyond the end-result data; neither the workflows used to create it nor the additional external information used in its creation is captured, making it only moderately useful at best. Theoretically, it ‘should’ at least be easy to find the latest version of a file you want.

We have recently seen a move to GIS-type platforms. This move can be quite useful for day-to-day operational visualization. However, it misses the entire point of a platform: this is primarily a data management problem, not a visualization problem. And even in the area of visualization, it fails to grasp that mining is a truly 4D problem.

What Does This Mean For You?

Think about the current state of your mining data management, and ask yourself these questions:
• Where is your data? Can you access it all from a single place?
• How is your data attributed? Does it require a highly trained and disciplined workforce to comply with a set of fixed rules?
• How does your data express dependencies on other data? Does the block model know which version of the drill hole set was used to interpolate it? Or which workflow was used, including a link to that workflow in the state it was in when it was run?
• Does the system allow easy access to data, both in reading and writing? Not just to other mining vendors but also things like general data analytic packages?
• Does the system message you when data you are concerned about (or actively using) was changed?
• Does your mining organization have your own internal database created and managed by your team or outside consultant? Would you rather have a trusted partner take care of the updates, bugs, enhancements, and more?

Current providers of mining platforms should be met with a bit of scrutiny. How do they define platform? And how well do they serve the needs of the mine with regard to data access openness, application interoperability, and data accountability in terms of history and auditability? The mining industry needs something better. My final article brings together how we work with open data in the mining industry, and what we need from a data platform.

Through products that revolutionize data connectivity and data management, decades of experience in the industry, and freedom from restrictive legacy technology, the Eclipse team is uniquely equipped to bring a much-demanded sea change to the industry.
