Data ontology and knowledge graphs have incredible potential for the mining industry. Here’s why
Building a Multidimensional View to Harness the Power of Your Data

Not All Mining Data Integration is Created Equal – Part 2

Published on: July 18, 2023

In part 2 of this article series, Sean Hunter, Director of Product Development at Eclipse, provides technical insight into achieving true mining data integration

In the first installment of this two-part article series, we learned about the different types of data integration and why achieving true integration is so important in the mining industry today – the ‘what’ and the ‘why’ pieces of the puzzle.

In this second part, we’re going to discuss the technical challenges and opportunities – ‘how’ to integrate data in a way that preserves its attributes and allows seamless use across different applications and functions within a mining organization.

Sean Hunter, Director of Product Development at Eclipse Mining Technologies, was instrumental in building the company’s SourceOne® Enterprise Knowledge Performance System (EKPS). Having applied his computing and coding expertise to software architecture roles at Mintec, he joined Eclipse in 2018 and now leads the company’s product development team.

Thanks to his time spent developing general mine planning (GMP) software, Hunter knows more than a thing or two about successful data integration, and this is evident in the unbridled capabilities of SourceOne as well as its user-friendly interface.

“Mining is a large field,” he explained. “It has a rich set of data that has some unique quirks. For example, large block models can stress systems that aren’t designed to handle them. This understanding helped us a lot when it came to developing SourceOne. To properly integrate mine planning data, it’s crucial to first understand it. And that’s hard to do with off-the-shelf tools.”

First, know your data

Hunter’s experience helped the team determine not only what they wanted to achieve with SourceOne – a flexible, open data platform – but, just as importantly, what they didn’t.

“Point solution packages like GMPs usually come with their own file formats, which can lock mines into certain ways of working,” said Hunter. “When creating SourceOne, the goal was to go in the opposite direction, giving users freedom rather than locking them into yet another standard.”

Significant advancements in technology over the past five years mean that this feature is more important than ever. Part of the challenge of adopting new technologies that support cleaner, greener and more efficient mining practices is stitching them into the wider technology landscape at each operation and organization. For many mines, this is the deciding factor in whether they can harness capabilities like autonomous haulage or predictive maintenance.

“There have been lots of changes to both software and hardware,” said Hunter. “Artificial intelligence (AI) is much more doable these days, because computers have become a lot more powerful. Storage is also faster and cheaper, and this opens a lot of options.”

These shifts challenge industry orthodoxies and, according to Hunter, legacy systems can make it hard to take advantage of new processes and techniques.

“If a piece of software provides one way of doing something, it can be very hard to change, because people rely on that behavior,” he explained. “Sometimes it might work out, but if systems are inflexible and mines can’t adjust their sails, then their performance is limited.”

Achieving common data format integration

All of this knowledge informed SourceOne’s design, which avoids the risk of breaking data compatibility and lets new features be added seamlessly on top of existing data.

The system takes a two-pronged approach to common data format integration: first, users are advised to save data in its original format when it’s ingested into SourceOne. This avoids data being lost during conversion.

“Some display options might not be available inside the SourceOne system itself,” Hunter explained. “But that’s not as dire as data being lost, and users can always get it back out. It also means that SourceOne updates can add new support for existing data, helping with versioning issues. The system isn’t pushing its own formats; instead, it’s providing rich views into data. Application programming interfaces (APIs) are sometimes used for this, but in a very different way.”
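To make this first prong more concrete, the sketch below shows one way an ingest step could preserve a file’s original bytes and record metadata beside it, leaving any parsing to later, view-style reads. The function names and storage layout are assumptions made for illustration, not SourceOne’s actual API.

```python
# Hypothetical sketch only: ingest a file by keeping its original bytes verbatim
# and recording metadata alongside, rather than converting it on the way in.
import hashlib
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

STORE = Path("store")  # assumed local storage root for this example

def ingest(source_path: str) -> Path:
    """Copy the file into the store unchanged and record metadata beside it."""
    src = Path(source_path)
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    dest_dir = STORE / digest[:12]
    dest_dir.mkdir(parents=True, exist_ok=True)

    # The original bytes are kept as-is; any parsing happens later, as a view.
    shutil.copy2(src, dest_dir / src.name)

    metadata = {
        "original_name": src.name,
        "sha256": digest,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "format": src.suffix.lstrip("."),  # e.g. "csv", "omf", a vendor format
    }
    (dest_dir / "metadata.json").write_text(json.dumps(metadata, indent=2))
    return dest_dir
```

Because the original file is stored untouched, a later software update can add richer views of the same data without any re-import or conversion step.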

Second, when data needs to be read or exported, SourceOne focuses on well-understood formats. While CSV files are often used for interchange, Parquet files provide an open, fast alternative.

“Geometry and other data can be provided in common formats, including the Open Mining Format (OMF) from the Global Mining Guidelines Group,” Hunter expanded. “There are also multiple ways to read one piece of data depending on what users require, helping to eliminate the issue of data being lost in conversions.”
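As a rough illustration of the CSV-versus-Parquet point, the snippet below writes the same small, hypothetical block table to both formats using pandas (with pyarrow installed for Parquet support); the column names and values are invented for the example.

```python
# Illustrative sketch (not SourceOne code): export the same table to CSV for
# broad interchange and to Parquet as a faster, typed, open alternative.
# Requires: pip install pandas pyarrow
import pandas as pd

blocks = pd.DataFrame({
    "x": [10.0, 20.0, 30.0],
    "y": [5.0, 5.0, 15.0],
    "z": [100.0, 100.0, 110.0],
    "cu_pct": [0.42, 0.55, 0.31],  # hypothetical copper grade column
})

blocks.to_csv("blocks.csv", index=False)          # plain-text interchange
blocks.to_parquet("blocks.parquet", index=False)  # columnar, compressed, typed

# Reading Parquet back preserves column types and is much faster on large models.
roundtrip = pd.read_parquet("blocks.parquet")
print(roundtrip.dtypes)
```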

Providing access to these common formats is important, especially for data that’s derived or edited. To make sure that SourceOne works in these cases, the system stores data as changes layered on the original, meaning that the original data, in its original format, is never lost, even when it’s used for further downstream work.
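A minimal sketch of this “changes layered on the original” idea might look like the following; the class and field names are hypothetical and are not the SourceOne data model, but they show how edits can be replayed over immutable source data to produce a derived view while preserving lineage.

```python
# Hypothetical sketch of non-destructive, layered edits over original data.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ChangeLayer:
    author: str
    description: str
    updates: dict          # block id -> new attribute values

@dataclass
class Dataset:
    original: dict                          # immutable source data, never edited
    layers: list = field(default_factory=list)

    def apply_edit(self, layer: ChangeLayer) -> None:
        self.layers.append(layer)           # the original stays untouched

    def current_view(self) -> dict:
        view = dict(self.original)
        for layer in self.layers:            # replaying edits in order = lineage
            view.update(layer.updates)
        return view

model = Dataset(original={"b1": {"cu_pct": 0.42}, "b2": {"cu_pct": 0.55}})
model.apply_edit(ChangeLayer("geologist", "regrade b2", {"b2": {"cu_pct": 0.58}}))
print(model.current_view())  # derived view; model.original is still intact
```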

The problem with APIs

Going back to the APIs that Hunter mentioned earlier… Why can’t those just be bundled together to allow data integration between legacy software and vendor-neutral solutions?

“From a technical standpoint, overcoming this challenge requires planning and design right from the beginning,” said Hunter. “Doing it afterwards usually results in a number of common roadblocks.

“For instance, when making a tool to unify several disparate data formats, developers tend to lean towards creating yet another new standard or file format. Instead of fixing the issue, this adds more standards and further fractures the existing landscape.”

He added that, even in a perfect situation where every vendor has shared APIs, this approach can still result in data loss; the implementation of features varies between different systems, and sometimes the data simply can’t be imported cleanly between them.

“When taking data to another product, working on it, and bringing it back, you will likely have lost data, especially for complex data types,” said Hunter. “Worse still, it might be fine today, but broken tomorrow. APIs and versions change constantly, so their management is an ongoing battle.”

Many software vendors claim they have evolved their legacy products to be ‘vendor neutral’, but this raises questions (aside from the prohibitive expense involved) as to whether the exercise results in an optimally performing solution.

Hunter commented: “Legacy solutions manage data in a completely different way, and so it would be very hard to pivot them. Some tools provide the ability to view certain other data formats directly, but they aren’t necessarily built to enable that from the base up. Often, that capability will only work in limited circumstances. The other features beyond that – having derived data with data lineage and so on – would be extremely difficult to pivot to, since they need to work at a foundational level.”

Flexibility for the future

AI is often extolled as the future of industry, underpinning movements like Industry 4.0. The speed at which AI and its capabilities are evolving is a prime example of why mines need a platform that can be expanded to manage datasets of any size and type at will.

“Beyond SourceOne’s flexibility in working with many types of data, it’s also aligned with some of the broader trends in technology; one notable example being generative AI,” said Hunter. “Generative AI is powerful, but it doesn’t deal with facts. However, the SourceOne EKPS contains all the facts and knowledge associated with each operation; the two are complementary.”

Another unique feature of SourceOne is its use of data ontology and knowledge graphs. There are various definitions of these, but all center on organizing data and its relationships in a way that both humans and machines can easily understand.

Hunter explained: “Many tools require data to fit into certain predefined concepts, and it might not always be a great fit. Mines must either work the way they’re told, rely on workarounds, or accept that there may be no way to do what they want. With the knowledge graphs in the SourceOne system, concepts themselves can be configured inside the software, along with how they relate to each other.”

Beyond flexibility, this allows data modelling and analytics that would otherwise be extremely difficult, especially across relationships between data. Data can also be validated more readily, since specific rules can be put in place in each knowledge graph to check that it makes sense. This helps prevent errors in addition to realizing new efficiencies.
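To illustrate what configurable concepts and validation rules could look like in practice, here is a small, hypothetical Python sketch of a knowledge graph whose allowed relationships are defined as data; the concept names, relation names, and rule format are invented for this example and do not reflect SourceOne’s internal schema.

```python
# Hedged sketch: concepts and their allowed relationships are configuration,
# and a simple rule check validates the graph against that configuration.
ALLOWED_RELATIONS = {
    # (subject concept, relation, object concept) triples a site has configured
    ("DrillHole", "located_in", "Pit"),
    ("Assay", "sampled_from", "DrillHole"),
    ("BlockModel", "estimated_from", "Assay"),
}

nodes = {
    "DH-001": "DrillHole",
    "North Pit": "Pit",
    "ASSAY-17": "Assay",
}

edges = [
    ("DH-001", "located_in", "North Pit"),
    ("ASSAY-17", "sampled_from", "DH-001"),
]

def validate(edges, nodes, allowed):
    """Flag any relationship that does not match the configured concept rules."""
    errors = []
    for subj, rel, obj in edges:
        triple = (nodes.get(subj), rel, nodes.get(obj))
        if triple not in allowed:
            errors.append(f"{subj} -{rel}-> {obj} violates the configured ontology")
    return errors

print(validate(edges, nodes, ALLOWED_RELATIONS) or "graph is consistent")
```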

“I’m not aware of any other mining-specific software systems that use knowledge graphs,” Hunter concluded. “Using a more general product, mines can run into trouble integrating their data and software with the special requirements of mining.

“SourceOne’s flexible, open approach can help meet people where they need to be.”

To learn more about SourceOne contact info@eclipsemining.com or visit https://eclipsemining.com/
