Let A.I. Help Target Your Infill Drilling

From time to time I come across interesting new tech that I like to share with colleagues.  The topic of this blog is the problem of defining an optimal infill drill program.
In the past I have worked on some PEAs whose economics were largely based on Inferred resources.  The company wanted to advance to the Pre-Feasibility (PFS) stage. However, before the PFS could start, additional drilling would be needed to convert much of the Inferred resource into Measured and Indicated resources.
I’ve seen similar situations with projects advancing from PFS to FS, where management required that the ore mined during the payback period be of Measured classification.

The Problem

In both cases described above, someone must outline an infill drill program to upgrade the resource classification while also meeting other project priorities.  The goal is to design an infill drill program that minimizes time and cost while maximizing resource conversion.  Some resource expansion drilling, metallurgical sampling, and geotechnical investigations may be required at the same time.
I’m not certain how various resource geologists go about designing an infill drill plan.  However, I have seen instances where dummy holes were inserted into the block model and the classification algorithm was re-run to determine the new block model tonnage classification.   If it didn’t meet the corporate objectives, the dummy holes might be moved or new ones added, and the process repeated.
One would not consider such a trial-and-error solution optimal. It may meet the resource conversion goals, but not necessarily the cost and time objectives.

The Solution

The DRX Drill Hole and Reporting algorithm developed by Objectivity.ca uses artificial intelligence to optimize the infill drilling layout.  It aims to match the QP/CP classification constraints with corporate/project objectives.
For example, does company management require 70% of the resource in M&I classifications or do they require 90% in M&I?  Each goal can be achieved with a different drill plan.
The following description of DRX is based on discussions with the Objectivity staff as well as a review of some case studies.  The company is willing to share these studies if you contact them.
The DRX algorithm relies on the resource classification criteria specified by the company QP.  For example, the criteria could be something like “For a block to qualify as Measured, the average distance from the block centroid to the nearest three drill holes must be 30 m or less. For a block to qualify as Indicated, the average distance from the block centroid to the nearest three holes must be 50 m or less. For a block to qualify as Inferred, it will generally be within 100 m laterally and 50 m vertically of a single drill hole.”
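To make the quoted criteria concrete, here is a minimal sketch of how such distance-based rules might be coded. The collar coordinates are invented, holes are treated as 2-D points, and the Inferred test is simplified to a single distance check (the real criteria distinguish lateral and vertical distance, and a real model would classify against 3-D drill hole composites):

```python
from math import dist

# Hypothetical drill hole collar locations (x, y) in metres, illustrative only
holes = [(0, 0), (40, 0), (0, 40), (40, 40), (120, 120)]

def classify_block(centroid, holes):
    """Classify a block using the distance-based criteria quoted above.
    Holes are treated as points; a real model uses 3-D composites."""
    d = sorted(dist(centroid, h) for h in holes)
    avg3 = sum(d[:3]) / 3          # average distance to nearest three holes
    if avg3 <= 30:
        return "Measured"
    if avg3 <= 50:
        return "Indicated"
    if d[0] <= 100:                # simplified single-hole proximity test
        return "Inferred"
    return "Unclassified"

print(classify_block((20, 20), holes))   # block centred among the close holes
```

An optimizer like DRX is, in effect, searching for the cheapest set of new hole locations that flips as many blocks as possible into the desired class under rules like these.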
The DRX algorithm uses these criteria to optimize drill hole placement in three dimensions to deliver the biggest bang for the buck.   Whatever the corporate objective, DRX will attempt to find an optimal layout to achieve it.  The idea is that fewer, well-targeted holes may deliver better value than a large, manually developed drill program.
The DRX outcome will prioritize the hole drilling sequence in case the drill program gets cut short due to poor weather, lack of funding, or the arrival of the PDAC news cycle.
The DRX approach can also be used to optimally site metallurgical holes and/or geotechnical holes in combination with resource drilling if there are defined criteria that must be met (by location, ore type, rock type, etc.).   The algorithm will rely on rules and search criteria developed by experts in those disciplines.  It does not develop the rules, it only applies them.
DRX can also help optimize step-out drilling, such that the step-out distance will not be beyond the range that negates the use of the hole in a resource estimate.  It can also consider geological structure in defining drill targets.

By optimizing the number of drill holes and their orientation, the company may see savings in drill pad prep, drilling costs, field support costs, and sample assaying.
One can even request drilling multiple holes from the same drill pad to minimize drill relocation costs and safety issues in difficult terrain.
A large benefit of DRX is to be able to examine what-ifs.  For example, one may desire 85% of the resource to be M&I.   However, if one is willing to accept 80%, then one may be able to save multiple holes and associated costs.   Perhaps with the addition of just a few extra holes one could get to 90% M&I.   These are optimizations that can be evaluated with DRX.

An Example

In the one case study provided to me, a $758,000 manually developed drill program would convert 96.6% of the Inferred resource to Indicated.  DRX suggested that it could achieve 96.7% for $465,000. Alternatively, it could achieve 94% conversion for $210,000.  These are large reductions in drilling cost for small reductions in conversion rate, which may allow the drill-metres saved to be used for other purposes.
For that same project, a subsequent study was done to convert Indicated to Measured in a starter pit area. DRX concluded that a 5,000-metre program could convert 62% of the Indicated into Measured, a 12,000-metre program would convert 86%, and a 16,000-metre program would achieve 92%.
So now company management can make an informed decision on either how much money they wish to spend or how much Measured resource they want to have.

Conclusion

Although I have not yet worked with DRX, I can see the value in it.   I look forward to one day applying it on a project I’m involved with to develop a better understanding of what goes in and what comes out.   DRX hopes to become to resource drilling what Whittle has become to pit design – an industry standard.
The use of the DRX algorithm may help mitigate situations where, moving from a PEA to PFS, one finds that the infill program did not deliver as hoped on the resource conversion.  Unfortunately, this leaves the PFS with less mineable ore than anticipated and sub-optimal economics.
New tech is continually being developed in the mining industry, and hopefully this is one area where we continue to see advancement. It makes sense to me, and DRX could be another tool in the geologist’s toolbox.  Check out their website at objectivity.ca
Note: You can sign up for the KJK mailing list to get notified when new blogs are posted.   Follow me on Twitter at @KJKLtd for updates.

Mining Financial Modeling – Make it Better!

In my view one thing lacking in the mining industry today is a consistent approach to quantifying and presenting the risks associated with mining projects. In a blog written in 2015, I discussed the limitations of the standard “spider graph” sensitivity analysis (blog link here) often seen in Section 22 of 43-101 reports. This new blog expands on that discussion by describing a preferred approach. A six-year time gap between the two articles – no need to rush I guess.
This blog summarizes excerpts from an article written by a colleague who specializes in probabilistic financial analysis. The article is a result of conversations we had about the current methods of addressing risk in mining. The full article can be found at this link; however, selected excerpts and graphs have been reprinted here with permission from the author.
The author is Lachlan Hughson, the Founder of 4-D Resources Advisory LLC. He has a 30-year career in the mining/metals and oil & gas industries as an investment banker and a corporate executive. His website is here: 4-D Resources Advisory LLC.

Excerpts from the article

Mining can be risky

“The natural resources industry, especially the finance function, tends to use a static, or single data estimate, approach to its planning, valuation and M&A models. This often fails to capture the dynamic interrelationships between the strategic, operational and financial variables of the business, especially commodity price volatility, over time.”
“A comprehensive financial model should correctly reflect the dynamic interplay of these fundamental variables over the company life and commodity price cycles. This requires enhancing the quality of key input variables and quantitatively defining how they interrelate and change depending on the strategy, operational focus and capital structure utilized by the company.”
“Given these critical limitations, a static modeling approach fundamentally reduces the decision making power of the results generated leading to unbalanced views as to the actual probabilities associated with expected outcomes. Equally, it creates an over-confident belief as to outcomes and eliminates the potential optionality of different courses of action as real options cannot be fully evaluated.”

Monte Carlo can be risky

“Fortunately, there is another financial modeling method – using Monte Carlo simulation – which generates more meaningful output data to enhance the company’s decision making process.”
Monte Carlo simulation is not new.  For example, @RISK has been available as an easy-to-use Excel add-in for decades; Crystal Ball does much the same thing.
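The core mechanics don't require an add-in. Below is a minimal sketch of the idea in Python: a toy single-metal NPV function fed with randomly sampled price, opex, and capex, then summarized as percentiles. Every number (tonnage, grade, distributions) is invented for illustration and is not from the article:

```python
import random

def project_npv(price, opex, capex, tonnes=10_000_000, grade=2.0, rec=0.90,
                years=10, rate=0.05):
    """Toy NPV: identical annual cashflow discounted over the mine life.
    All defaults are illustrative, not from the article."""
    annual_oz = tonnes * grade / 31.1035 * rec / years   # g/t -> troy ounces
    cf = annual_oz * price - tonnes / years * opex
    return sum(cf / (1 + rate) ** t for t in range(1, years + 1)) - capex

random.seed(1)
trials = [project_npv(price=random.gauss(1600, 250),     # price volatility
                      opex=random.gauss(40, 5),          # $/t processed
                      capex=random.gauss(150e6, 15e6))
          for _ in range(10_000)]

trials.sort()
print(f"P10 {trials[1000]/1e6:,.0f} M   P50 {trials[5000]/1e6:,.0f} M   "
      f"P90 {trials[9000]/1e6:,.0f} M")
```

A real implementation would correlate the inputs (as the quoted article stresses) rather than sample them independently, but even this crude version produces an output range instead of a single point estimate.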
“Dynamic, or probabilistic, modeling allows for far greater flexibility of input variables and their correlation, so they better reflect the operating reality, while generating an output which provides more insight than single data estimates of the output variable.”
“The dynamic approach gives the user an understanding of the likely output range (presented as a normal distribution here) and the probabilities associated with a particular output value. The static approach is relatively “random” as it is based on input assumptions that are often subject to biases and a poor understanding of their potential range vs. reality (i.e. +/- 10%, 20% vs. historical or projected data range).”
“In the case of a dynamic model, there is less scope for the biases (compensation, optionality, historic perspective, desire for optimal transaction outcome) that often impact the static, single data estimates modeling process. Additionally, it imposes a fiscal discipline on management as there is less scope to manipulate input data for desired outcomes (i.e. strategic misrepresentation), especially where strong correlations to historical data exist.”
“It encourages management to consider the likely range of outcomes, and probabilities and options, rather than being bound to/driven by achieving a specific outcome with no known probability. Equally, it introduces an “option” mindset to recognize and value real options as a key way to maintain/enhance company momentum over time.”

Image from the 4-D Resources article

“In the simple example (to the right), the financial model was more real-world through using input variables and correlation assumptions that reflect historical and projected reality rather than single data estimates that tend towards the most expected value.”
“Additionally, the output data provide greater insight into the variability of outcomes than the static model Downside, Base and Upside cases’ single data estimates did.”
The tornado diagram, shown below the histogram, is essentially another representation of the spider diagram information, i.e. which factors have the biggest impact.
“The dynamic data also facilitated the real option value of the asset in a manner a static model cannot. And the model took less time to build, with less internal relationships to create to make the output trustworthy, given input variables and correlation were set using the @RISK software options. This dynamic modeling approach can be used for all types of financial models.”
To read the full article, follow this link.

Conclusion

Image from the 4-D Resources article

Improvements are needed in the way risks are evaluated and explained to mining stakeholders, particularly given the increasing complexity of the risks impacting decision making.
The probabilistic risk evaluation approach described above isn’t new and isn’t that complicated. In fact, it can be very intuitive when undertaken properly.
Probabilistic risk analysis isn’t something that should only be done within the inner sanctums of large mining companies. The approach should filter down to all mining studies and 43-101 reports. It should ultimately become a best practice or standard part of all mining project economic analyses. The more often the approach is applied, the sooner people will become familiar (and comfortable) with it.
Mining projects can be risky, as demonstrated by the numerous ventures that have derailed. Yet recognition of this risk never seems to be brought to light beforehand. Essentially all mining projects look the same to outsiders from a risk perspective, when in reality they are not. The mining industry should try to get better in explaining this.
UPDATE:  For those interested in this subject, there is a follow-up article by the same author, published in January 2022, titled “Using Dynamic Financial Modeling to Enhance Insights from Financial Reports!”.

Pit Optimization – More Than Just a “NPV vs RF” Graph

In this blog I wish to discuss some personal approaches used for interpreting pit optimization data. I’m not going to detail the basics of pit optimization, instead assuming the reader is familiar with it.
Often in 43-101 technical reports, when it comes to pit optimization, one is presented with the basic “NPV vs Revenue Factor (RF)” curve.  That’s it.
Revenue Factor represents the percentage of the base case metal price(s) used to optimize the pit. For example, if the base case gold price is $1600/oz (100% RF), then the 80% RF price is $1280/oz.
The pit shell used for pit design is often selected based on the NPV vs RF curve, with a brief explanation of why the specific shell was selected. Typically it’s the 100% RF shell or something near the top of the curve.
However, the pit optimization algorithm generates more data than shown in the NPV graph (see table below). For each Revenue Factor increment, the data typically provided include ore and waste tonnes, strip ratio, NPV, profit, mining cost, processing cost, and total cost at a minimum. It is quick and easy to examine more of the data than just the NPV.
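As a sketch of that kind of deeper look, the incremental values between shells can be computed directly from the optimization output. All figures below are invented for illustration; the point is the incremental calculation, i.e. what the *next* shell adds, not the pit averages:

```python
# Illustrative pit optimization output per revenue factor shell (made-up numbers)
shells = [
    # (RF %, ore Mt, waste Mt, NPV $M)
    (70,  10.0, 18.0, 210.0),
    (80,  12.5, 26.0, 245.0),
    (90,  14.5, 36.0, 262.0),
    (100, 16.0, 48.0, 268.0),
]

print("RF    incr. strip   incr. NPV/t ore")
for prev, cur in zip(shells, shells[1:]):
    d_ore = cur[1] - prev[1]          # extra ore captured by this shell
    d_waste = cur[2] - prev[2]        # extra waste stripped for it
    d_npv = cur[3] - prev[3]          # extra value it contributes
    print(f"{cur[0]:>3}%   {d_waste/d_ore:9.1f}   ${d_npv/d_ore:6.2f}/t")
```

In this made-up example the incremental strip ratio climbs steeply toward the 100% RF shell while the incremental NPV per tonne of ore collapses, which is exactly the kind of marginal-economics signal a plain NPV-vs-RF curve hides.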

In many 43-101 reports, limited optimization analysis is presented.  Perhaps the engineers did drill down deeper into the data and merely included the NPV graph for simplicity purposes. I have sometimes done this to avoid creating five pages of text on pit optimization alone. However, in due diligence data rooms I have also seen many optimization summary files with very limited interpretation of the optimization data.
Pit optimization is an approximation process, as I outlined in a prior post titled “Pit Optimization–How I View It”. It is just a guide for pit design. One must not view it as a final and definitive answer to what is the best pit over the life of mine, since optimization looks far into the future based on current information.
The pit optimization analysis does yield a fair bit of information about the ore body configuration, the vertical grade distribution, and addresses how all of that impacts on the pit size. Therefore I normally examine a few other plots that help shed light on the economics of the orebody. Each orebody is different and can behave differently in optimization. While pit averages are useful, I also prefer to examine the incremental economic impacts between the Revenue Factors.

What Else Can We Look At?

The following charts illustrate the types of information that can be examined with the optimization data. Some of these relate to ore and waste tonnage. Some relate to mining costs. Incremental strip ratios, especially in high grade deposits, can be such that open pit mining costs (per tonne of ore) approach or exceed the costs of underground mining. Other charts relate to incremental NPV or Profit per tonne per Revenue Factor.  (Apologies if the chart layout below appears odd…responsive web pages can behave oddly on different devices).

Conclusion

It’s always a good idea to drill down deeper into the optimization data, even if you don’t intend to present that analysis in a final report. It will help develop an understanding of the nature of the orebody. It shows how changes in certain parameters can impact on a pit size and whether those impacts are significant or insignificant. It shows if economics are becoming very marginal at depth. You have the data, so use it.
This discussion presents my views about optimization and what things I tend to look at.   I’m always learning so feel free to share ways that you use your optimization analysis to help in your pit design decision making process.

 


O/P to U/G Cross-Over – Two Projects into One

Over the years I have been involved in numerous mining tradeoff studies. These could involve throughput rate selection, site selection, processing options, tailings disposal methods, and equipment sizing. These are all relatively straightforward analyses. However, in my view, one of the more technically interesting tradeoffs is the optimization of the open pit to underground crossover point.
The majority of mining projects tend to consist of either open pit only or underground only operations. However there are instances where the orebody is such that eventually the mine must transition from open pit to underground. Open pit stripping ratios can reach uneconomic levels hence the need for the change in direction.
The evaluation of the cross-over point is interesting because one is essentially trying to fit two different mining projects together.

Transitioning isn’t easy

There are several reasons why open pit and underground can be considered as two different projects within the same project.
There is a tug of war between conflicting factors that can pull the cross-over point in one direction or the other. The following discussion will describe some of these factors.
The operating cut-off grade in an open pit mine (e.g. ~0.5 g/t Au) will be lower than that for the underground mine (~2-3 g/t Au). Hence the mineable ore zone configuration and continuity can be different for each. The mined head grades will be different, as well as the dilution and ore loss assumptions. The ore that the process plant will see can differ significantly between the two.
When ore tonnes are reallocated from open pit to underground, one will normally see an increased head grade, increased mining cost, and possibly a reduction in total metal recovered. How much these factors change for the reallocated ore will impact on the economics of the overall project and the decision being made.
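A back-of-envelope sketch of that reallocation trade-off follows, with entirely hypothetical grades, recoveries, and costs (none of these figures come from an actual project). It values the same ore under each mining method, reflecting that underground mining tends to deliver a higher head grade at a much higher mining cost:

```python
def margin_per_tonne(grade_gpt, recovery, mining_cost, process_cost,
                     price_per_oz=1600.0):
    """Operating margin for one tonne of gold ore, hypothetical figures."""
    revenue = grade_gpt / 31.1035 * recovery * price_per_oz   # g/t -> oz -> $
    return revenue - mining_cost - process_cost

# The same zone valued under each mining method (illustrative numbers only):
op = margin_per_tonne(grade_gpt=1.8, recovery=0.92, mining_cost=25, process_cost=18)
ug = margin_per_tonne(grade_gpt=2.4, recovery=0.90, mining_cost=75, process_cost=18)
print(f"open pit margin    ${op:6.2f}/t")
print(f"underground margin ${ug:6.2f}/t")
```

With these assumed inputs the underground tonne carries a higher head grade yet a lower margin, illustrating the tug of war described above; swap in different cost or grade assumptions and the pull can reverse.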
A process plant designed for an open pit project may be too large for the subsequent underground project. For example a “small” 5,000 tpd open pit mill may have difficulty being kept at capacity by an underground mine. Ideally one would like to have some satellite open pits to help keep the plant at capacity. If these satellite deposits don’t exist, then a restricted plant throughput can occur. Perhaps there is a large ore stockpile created during the open pit phase that can be used to supplement underground ore feed. When in a restricted ore situation, it is possible to reduce plant operating hours or campaign the underground ore but that normally doesn’t help the overall economics.
Some investors (and companies) will view underground mines as having riskier tonnes from the perspective of defining mineable zones, dilution control, operating cost, and potential ore abandonment due to ground control issues. These risks must be considered when deciding whether to shift ore tonnes from the open pit to underground.
An underground mine that uses a backfilling method will be able to dispose of some tailings underground. Conversely moving towards a larger open pit will require a larger tailings pond, larger waste dumps and overall larger footprint. This helps make the case for underground mining, particularly where surface area is restricted or local communities are anti-open pit.
Another issue is whether the open pit and underground mines should operate sequentially or concurrently. There will need to be some degree of production overlap during the underground ramp up period. However the duration of this overlap is a subject of discussion. There are some safety issues in trying to mine beneath an operating open pit. Underground mine access could either be part way down the open pit or require an entirely separate access away from the pit.
Concurrent open pit and underground operations may impact upon the ability to backfill the open pit with either waste rock or tailings. Underground mining operations beneath a backfilled open pit may be a concern with respect to safety of the workers and ore lost in crown pillars used to separate the workings.
Open pit and underground operations will require different skill sets from the perspective of supervision, technical, and operations. Underground mining can be a highly specialized skill while open pit mining is similar to earthworks construction where skilled labour is more readily available globally. Do local people want to learn underground mining skills? Do management teams have the capability and desire to manage both these mining approaches at the same time?
In some instances if the open pit is pushed deep, the amount of underground resource remaining beneath the pit is limited. This could make the economics of the capital investment for underground development look unfavorable, resulting in the possible loss of that ore. Perhaps had the open pit been kept shallower, the investment in underground infrastructure may have been justifiable, leading to more total life-of-mine ore recovery.
The timing of the cross-over will also create another significant capital investment period. By selecting a smaller open pit, this underground investment is seen earlier in the project life, which would recreate some of the financing and execution risks the project just went through. Conversely, increasing the open pit size would delay the underground mine and defer this investment and its mining risk.

Conclusion

As you can see from the foregoing discussion, there are a multitude of factors playing off one another when examining the open pit to underground cross-over point. It can be like trying to mesh two different projects together.
The general consensus seems to be to push the underground mine as far into the future as possible: maximize initial production from the low-risk open pit before transitioning.
One way some groups will simplify the transition is to declare that the underground operation will be a block cave. That way they can maintain an open pit style low cutoff grade and high production rate. Unfortunately not many deposits are amenable to block caving.  Extensive geotechnical investigations are required to determine if block caving is even applicable.
Optimization studies in general are often not well documented in 43-101 Technical Reports. In most mining studies some tradeoffs will have been done (or should have been done), yet there may be only brief mention of them in the 43-101 report. I don’t see a real problem with this, since a Technical Report is meant to describe a project study, not provide all the technical data that went into it. The downside of not presenting these tradeoffs is that they cannot be scrutinized (without having data room access).
One of the features of any optimization study is that one never really knows if it got it wrong. Once the decision is made and the project moves forward, rarely will anyone remember or question basic design decisions made years earlier. The project is now what it is.

 


Pre-Concentration – Maybe Good, Maybe Not

A while back I wrote a blog titled “Pre-Concentration – Savior or Not?”. That blog was touting the benefits of pre-concentration. More recently I attended a webinar where the presenter stated that the economics of pre-concentration may not necessarily be as good as we think they are.
My first thought was “this is blasphemy”. However, upon further reflection I wondered if it’s true. To answer that question, I modified one of my old cashflow models from a Zn-Pb project that used pre-concentration. I adjusted the model to enable running a trade-off, with and without pre-con, by varying cost and recovery parameters.

Main input parameters

The trade-off model and some of the parameters are shown in the graphic below. The numbers used in the example are illustrative only, since I am mainly interested in seeing what factors have the greatest influence on the outcome.

The term “mass pull” is used to define the quantity of material that the pre-con plant pulls and sends to the grinding circuit. Unfortunately, some metal may be lost with the pre-con rejects.  The main benefit of a pre-con plant is that it allows the use of a smaller grinding/flotation circuit by scalping away waste. This will lower the grinding circuit capital cost, albeit at a slightly higher unit operating cost.
Concentrate handling systems may not differ much between model options since roughly the same amount of final concentrate is (hopefully) generated.
Another of the cost differences is tailings handling. The pre-con rejects likely must be trucked to a final disposal location, while flotation tails can be pumped.  I assumed a low pumping cost, i.e. to a nearby pit.
The pre-con plant doesn’t eliminate a tailings pond, but may make it smaller based on the mass pull factor. The most efficient pre-concentration plant from a tailings handling perspective is shown on the right.

The outcome

The findings of the trade-off surprised me a little bit.  There is an obvious link between pre-con mass pull and overall metal recovery. A high mass pull will increase metal recovery but also results in more tonnage sent to grinding. At some point a high mass pull will cause one to ask what’s the point of pre-con if you are still sending a high percentage of material to the grinding circuit.
The table below presents the NPV for different mass pull and recovery combinations. The column on the far right represents the NPV for the base case without any pre-con plant. The lower left corner of the table shows the recovery and mass pull combinations where the NPV exceeds the base case. The upper right are the combinations with a reduction in NPV value.
The width of this range surprised me showing that the value generated by pre-con isn’t automatic.  The NPV table shown is unique to the input assumptions I used and will be different for every project.
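The structure of that table can be sketched with a simple per-tonne value proxy rather than a full cashflow model. All unit values below are hypothetical, "recovery" here means the pre-con stage recovery (downstream flotation recovery is ignored, being common to both options), and the base case simply mills everything:

```python
# Illustrative pre-con trade-off (all unit values hypothetical)
ORE_VALUE = 60.0        # recoverable metal value, $/t of feed at 100% stage recovery
GRIND_COST = 18.0       # grinding/flotation cost, $/t milled
PRECON_COST = 4.0       # pre-con plant cost, $/t of feed

def value_with_precon(mass_pull, recovery):
    """Value per tonne of run-of-mine feed after pre-concentration."""
    return ORE_VALUE * recovery - PRECON_COST - GRIND_COST * mass_pull

base_case = ORE_VALUE - GRIND_COST      # no pre-con: mill everything

print("mass pull ->", "   ".join(f"{mp:>5.0%}" for mp in (0.4, 0.6, 0.8)))
for rec in (0.85, 0.90, 0.95):
    row = [value_with_precon(mp, rec) - base_case for mp in (0.4, 0.6, 0.8)]
    print(f"rec {rec:.0%}:    ", "   ".join(f"{v:+6.1f}" for v in row))
```

Even this toy version reproduces the pattern described above: the low mass pull / high recovery corner beats the base case, while high mass pull combined with material metal loss destroys value.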

The economic analysis of pre-concentration does not include the possible benefits related to reduced water and energy consumption. These may be important factors for social license and permitting purposes, even if unsupported by the economics.  Here’s an article from ThermoFisher on this “How Bulk Ore Sorting Can Reduce Water and Energy Consumption in Mining Operations“.

Conclusion

The objective of this analysis isn’t to demonstrate the NPV of pre-concentration. The objective is to show that pre-concentration might or might not make sense depending on a project’s unique parameters. The following are some suggestions:
1. Every project should at least take a cursory look at pre-concentration to see if it is viable, even if only at the level of a mineralogical assessment.
2. Make certain to verify that all ore types in the deposit are amenable to the same pre-concentration circuit. This means one needs to have a good understanding of the ore types that will be encountered.
3. Anytime one is doing a study using pre-concentration, one should also examine the economics without it. This helps one understand the economic drivers and the risks. You can then decide whether it is worth adding another operating circuit to the process flowsheet, one that has its own cost and performance risk. The more processing components added to a flowsheet, the more overall plant availability may be affected.
4. The head grade of the deposit also determines how economically risky pre-concentration might be. In higher grade ore bodies, the negative impact of any metal loss in pre-concentration may be offset by accepting higher cost for grinding (see chart on the right).
5. In my opinion, the best time to decide on pre-con would be at the PEA stage. Although the amount of testing data available may be limited, it may be sufficient to assess whether pre-con warrants further study.
6. Don’t fall in love with or over promote pre-concentration until you have run the economics. It can make it harder to retract the concept if the economics aren’t there.

 


Climbing the Hill of Value With 1D Modelling

Recently I read some articles about the Hill of Value.  I’m not going into detail about it, but the Hill of Value is a mine optimization approach that’s been around for a while.  Here is a link to an AusIMM article that describes it: “The role of mine planning in high performance”.  For those interested, here is another post about this subject: “About the Hill of Value. Learning from Mistakes (II)“.

(From AusIMM)

The basic premise is that an optimal mining project is based on a relationship between cut-off grade and production rate.  The standard breakeven or incremental cutoff grade we normally use may not be optimal for a project.
The image to the right (from the aforementioned AusIMM article) illustrates the peak in the NPV (i.e. the hill of value) on a vertical axis.
A project requires a considerable technical effort to properly evaluate the hill of value. Each iteration of a cutoff grade results in a new mine plan, new production schedule, and a new mining capex and opex estimate.
Each iteration of the plant throughput requires a different mine plan and plant size and the associated project capex and opex.   All of these iterations will generate a new cashflow model.
The effort to do that level of study thoroughly is quite significant.  Perhaps one day artificial intelligence will be able to generate these iterations quickly, but we are not at that stage yet.

Can we simplify it?

In previous blogs (here and here) I described a 1D cashflow model that I use to quickly evaluate projects.  The 1D approach does not rely on a production schedule, instead using life-of-mine quantities and costs.  Given its simplicity, I was curious whether the 1D model could be used to evaluate the hill of value.
I compiled some data to run several iterations for a hypothetical project, loosely based on a mining study I had on hand.  The critical inputs for such an analysis are the operating and capital cost ranges for different plant throughputs.
I had a grade tonnage curve, including the tonnes of ore and waste, for a designed pit.  This data is shown graphically on the right.   Essentially the mineable reserve is 62 Mt @ 0.94 g/t Pd with a strip ratio of 0.6 at a breakeven cutoff grade of 0.35 g/t.   It’s a large tonnage, low strip ratio, low grade deposit.  The total pit tonnage is 100 Mt of combined ore and waste.
I estimated capital and operating costs for different production rates using escalation factors such as the rule of 0.6 and a 20% fixed / 80% variable cost split.   It would be better to complete proper cost estimates, but that is beyond the scope of this analysis; factoring is the practical option when detailed estimates are not available.
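For readers unfamiliar with these factoring methods, here is a small sketch of both. The base-case figures ($400 M capex, $20/t opex at 12,000 tpd) are placeholders, not the study's numbers:

```python
def scale_capex(base_capex, base_tpd, new_tpd, exponent=0.6):
    """Rule-of-0.6 capacity escalation: capex scales with the capacity
    ratio raised to a fractional exponent (a factoring approximation)."""
    return base_capex * (new_tpd / base_tpd) ** exponent

def scale_opex(base_opex, base_tpd, new_tpd, fixed_frac=0.20):
    """Unit opex with a 20% fixed / 80% variable split: the fixed portion
    is spread over more (or fewer) tonnes at the new throughput."""
    return base_opex * (fixed_frac * base_tpd / new_tpd + (1 - fixed_frac))

# e.g. scaling a 12,000 tpd base case up to 19,000 tpd (illustrative numbers)
print(f"capex: ${scale_capex(400e6, 12_000, 19_000)/1e6:.0f} M")
print(f"opex:  ${scale_opex(20.0, 12_000, 19_000):.2f}/t")
```

Both functions capture the economies of scale driving the hill of value: a larger plant costs proportionally less per tonne of capacity, and its unit operating cost falls as fixed costs are spread over more throughput.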
The charts below show the cost inputs used in the model.   Obviously each project would have its own set of unique cost curves.
The 1D cashflow model was used to evaluate economics for a range of cutoff grades (from 0.20 g/t to 1.70 g/t) and production rates (12,000 tpd to 19,000 tpd).  The NPV sensitivity analysis was done using the Excel data table function.  This is one of my favorite and most useful Excel features.
A total of 225 cases were run (15 cutoff grades x 15 throughputs) for this example.
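The data-table exercise can be mimicked in code. The sketch below shows only the structure of the grid evaluation; the npv() body is a made-up placeholder, not the blog's cashflow model:

```python
# Evaluate a toy NPV surface over a grid of cutoff grades and throughputs,
# then locate the "hot spot" (the hill of value peak).
import numpy as np

cutoffs = np.linspace(0.20, 1.70, 15)        # g/t
throughputs = np.linspace(12000, 19000, 15)  # tpd

def npv(cog, tpd):
    # Placeholder economics: peak arbitrarily placed near a 0.45 g/t cutoff
    # and 18,000 tpd to illustrate the shape of the surface.
    return -(cog - 0.45) ** 2 * 800 - ((tpd - 18000) / 1000) ** 2 * 5 + 500

grid = np.array([[npv(c, t) for t in throughputs] for c in cutoffs])
i, j = np.unravel_index(grid.argmax(), grid.shape)
best = (cutoffs[i], throughputs[j])  # the optimum cell of the 15 x 15 table
```

Swapping the placeholder for a real cashflow function turns this into the same analysis the Excel data table performs.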

What are the results?

The results are shown below. Interestingly, the optimal plant size and cutoff grade vary depending on the economic objective selected.
The discounted NPV 5% analysis indicates an optimal plant with a high throughput (19,000 tpd) using a low cutoff grade (0.40 g/t). This would be expected given the low grade nature of the orebody: economies of scale, low operating costs, and high revenues are desired. Discounted models reward revenue as early as possible; hence the high throughput rate.
The undiscounted NPV 0% analysis gave a different result.  Since the timing of revenue is less important, a smaller plant was optimal (12,000 tpd) albeit using a similar low cutoff grade near the breakeven cutoff.
If one targets a low cash cost as an economic objective, one gets a different optimal project.  This time a large plant with an elevated cutoff of 0.80 g/t was deemed optimal.
The Excel data table matrices for the three economic objectives are shown below.  The “hot spots” in each case are evident.


Conclusion

The Hill of Value is an interesting optimization concept to apply to a project. In the example I have provided, the optimal project varies depending on the financial objective. I don't know whether this would be the case for all projects, but I suspect so.
In this example, if one wants to be a low cash cost producer, one may have to sacrifice some NPV to do this.
If one wants to maximize discounted NPV, then a large plant with low opex would be the best alternative.
If one prefers a long mine life, say to take advantage of forecasted upticks in metal prices, then an undiscounted scenario might win out.
I would recommend that every project undergo some sort of hill of value test, preferably with more engineering rigor. It helps you understand a project's strengths and weaknesses. The simple 1D analysis can be used as a guide to help select which cases to examine more closely; nobody wants to assess 225 alternatives in engineering detail.
In reality, I don't recall ever seeing a 43-101 report that describes a hill of value test for a project. Let me know if you are aware of any; I'd be interested in sharing them. Alternatively, if you have a project and would like me to run it through my simple hill of value model, let me know.
Note: You can sign up for the KJK mailing list to get notified when new blogs are posted.

Simple Financial Models Can Really Help

A few years ago I posted an article about how I use a simple (one-dimensional) financial model to help me take a very quick look at mining projects. The link to that blog is here. I use this simple 1D model with clients that are looking at potential acquisitions or joint venture opportunities at early stages. In many instances the problem is that there is only a resource estimate but no engineering study or production schedule available.

By referring to my model as a 1D model, I mean that I don't use a mine production schedule across the page like a conventional cashflow model would.
The 1D model simply uses life-of-mine reserves, life-of-mine revenues, operating costs, and capital costs. It’s essentially all done in a single column.  The 1D model also incorporates a very rudimentary tax calculation to ballpark an after-tax NPV.
The 1D model does not calculate payback period or IRR but focuses solely on NPV. NPV, for me, is the driver of the enterprise value of a project or a company. A project with a $100M NPV has that value regardless of whether the IRR is 15% or 30%.
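A minimal version of such a 1D model might look like the following. All inputs are hypothetical, the calculation is pre-tax, and capex is assumed to be spent at time zero, so this is only a sketch of the idea, not the author's actual spreadsheet:

```python
def one_d_npv(reserve_t, grade_gpt, recovery, price_per_g,
              opex_per_t, capex, annual_t, rate=0.05):
    """Spread life-of-mine totals over a uniform annual rate and discount.

    reserve_t: life-of-mine ore tonnes; annual_t: plant throughput per year;
    opex_per_t covers all site costs per tonne processed.
    """
    years = reserve_t / annual_t
    annual_revenue = annual_t * grade_gpt * recovery * price_per_g
    annual_cash = annual_revenue - annual_t * opex_per_t

    npv = -capex  # capex assumed entirely at year 0 for simplicity
    n_full = int(years)
    for y in range(1, n_full + 1):
        npv += annual_cash / (1 + rate) ** y
    # partial final year, if the reserve doesn't divide evenly
    npv += annual_cash * (years - n_full) / (1 + rate) ** (n_full + 1)
    return npv
```

Setting rate=0 gives the undiscounted case; a rudimentary tax line could be bolted onto annual_cash in the same single-column spirit.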

How accurate is a 1D model?

One of the questions I have been asked is how valid the 1D approach is compared to a standard 2D cashflow model. To examine that, I randomly selected several recent 43-101 studies and plugged their reserve and cost parameters into the 1D model.
It takes about 10 minutes to find the relevant data in a technical report and insert the numbers. Interestingly, the data is typically easy to find in reports authored by certain consultants. In other reports one must dig deeper, and sometimes the data can't be found at all.
The results of the comparison are shown in the scatter plots. The x-axis is the 43-101 report NPV and the y-axis is the 1D model result. The 1:1 correlation line is shown on the plots.
There is surprisingly good agreement on both the discounted and undiscounted cases. Even the before and after tax cases look reasonably close.
Where the 1D model can run into difficulty is when a project has a production expansion after a few years. The 1D model logic assumes a uniform annual production rate for the life of mine reserve.
Another thing that hampers the 1D model is when a project uses low grade stockpiling to boost head grades early in the mine life. The 1D model assumes a uniform life-of-mine production reserve grade profile.
Nevertheless even with these limitations, the NPV results are reasonably representative. Staged plant expansions and high grading are usually modifications to an NPV and generally do not make or break a project.

Conclusion

My view is that the 1D cashflow model is an indicative tool only. It is quick and simple to use. It allows me to evaluate projects and test the NPV sensitivity to metal prices, head grades, process recovery, operating costs, etc. These are sensitivities that might not be described in the financial section of the 43-101 report.
This exercise involved comparing data from existing 43-101 reports. Obviously, if you are taking a look at an early stage opportunity, you will need to define your own capital and operating cost inputs.
I prefer using a conventional (i.e. 2D) cashflow model when I can. However, when working with limited technical data, it's likely not worth the effort to create a complex cashflow model. For me, the 1D model works just fine. Build one for yourself if you need convincing.
In an upcoming blog I will examine the hill of value optimization approach with respect to the 1D model.
Note: You can sign up for the KJK mailing list to get notified when new blogs are posted.

Ore Dilution – An Underground Perspective

A few months ago I wrote a blog about different approaches that mining engineers are using to predict dilution in an open pit setting. You can read the blog at this link. Since that time I have been in touch with the author of a technical paper on dilution specifically related to underground operations. Given that my previous blog was from an open pit perspective, an underground discussion might be of interest and educational.
The underground paper is titled “Mining Dilution and Mineral Losses – An Underground Operator’s Perspective” by Paul Tim Whillans. You can download the paper at this link.

Here is the abstract

For the underground operator, dilution is often synonymous with over-break, which mining operations struggle to control. However, there are many additional factors impacting dilution which may surpass the importance of overbreak, and these also need to be considered when assessing a project. Among these, ore contour variability is an important component of both dilution and mineral losses which is often overlooked.  Mineral losses are often considered to be less important because it is considered that they will only have a small impact on net present value. This is not necessarily the case and in fact mineral losses may be much higher than indicated in mining studies due to aggregate factors and may have an important impact on shorter term economics.

My key takeaways

I am not going into detail on Paul’s paper, however some of my key takeaways are as follows. Download the paper to read the rationale behind these ideas.
  • Over-break is a component of dilution but may not be the major cause of it. Other aspects are in play.
  • While dilution may be calculated on a volumetric basis, the application of correct ore and waste densities is important. This applies less to gold deposits than base metal deposits, where ore and waste density differences can be greater.
  • Benchmarking dilution at your mine site with published data may not be useful. Nobody likes to report excessively high dilution for various reasons, hence the published dilution numbers may not be entirely truthful.
  • Ore loss factors are important but can be difficult to estimate. In open pit mining, ore losses are not typically given much consideration. However in underground mining they can have a great impact on the project life and economics.
  • Mining method sketches can play a key role in understanding underground dilution and ore losses, even in today’s software driven mining world.
  • It's possible that many mine operators are using cutoff grades that are too low in some situations.
  • High grading, an unacceptable practice in the past, is now viewed differently due to its positive impact on NPV. (It seems Mark Bristow at Barrick may be putting a stop to this approach).
  • Inferred resources used in a PEA can often decrease significantly when upgraded to the measured and indicated classifications. If there is a likelihood of this happening, it should be factored into the PEA production tonnage.
  • CIM Best Practice Guidelines do not require underground ore exposure for feasibility studies. However exposing the ore faces can have a significant impact on one’s understanding of the variability of the ore contacts and the properties of minor faults.

Conclusion

The bottom line is that not everyone will necessarily agree with all the conclusions of Paul’s paper on underground dilution. However it does raise many issues for technical consideration on your project.
All of us in the industry want to avoid some of the well publicized disappointments seen on recent underground projects. Several have experienced difficulty in delivering the ore tonnes and grades that were predicted in the feasibility studies. No doubt it can be an anxious time for management when commissioning a new underground mine.
Note: previously I had shared another one of Paul’s technical papers in a blog called “Underground Feasibility Forecasts vs Actuals”. It also provides some interesting insights about underground mining projects.
If you need more information, Paul Whillans website is at http://www.whillansminestudies.com/.
Note: If you would like to get notified when new blogs are posted, then sign up on the KJK mailing list on the website.  Otherwise I post notices on LinkedIn, so follow me at: https://www.linkedin.com/in/kenkuchling/.

Hydrogeology At Diavik – Its Complicated

About 20 years ago I was involved in the feasibility study and initial engineering for the Diavik open pit mine in the Northwest Territories. As you can see from the current photo, groundwater inflows were going to be a potential issue.
Predictions of mine inflow quantity and quality were required as part of the project design. Also integral to the operating plan were geotechnical issues, wall freezing issues, and methods for handling the seepage water.
This mine was going to be a unique situation. The open pit is located both within Lac de Gras and partly on exposed land (i.e. islands). The exposed land is underlain by permafrost of varying depth, while the rock mass under the lake is unfrozen. The sub-zero climate meant that pit wall seepage would turn into mega-icicles, and phreatic pressures could build up behind frozen pit walls. Many different factors were going to come into play in this mining operation, so comprehensive field investigations would be required.

A good thing Rio Tinto was a 60% owner and the operator

At no time did the engineering team feel that field budgets were restricted and that technical investigations were going to be limited. Unfortunately in my subsequent career working on other projects I have seen cases where lack of funds does impact the quantity (and quality) of technical data.
The Golder Associates Vancouver hydrogeological team was brought on board to help out. Hydrogeological field investigations consisted of packer testing, borehole flowmeter testing, borehole temperature logging, and borehole camera imaging. Most of this work was done from ice level during the winter.
A Calgary based consultant undertook permafrost prediction modelling, which I didn’t even know was a thing.
All of this information was used in developing a three-dimensional groundwater model. MODFLOW and MT3DMS were used to predict groundwater inflow volumes and water quality. The modelling results indicated that open pit inflows were expected to range up to 9,600 m3/day with TDS concentrations gradually increasing in time to maximum levels of about 440 mg/ℓ.
The groundwater modelling also showed that lake water re-circulating through the rock mass would eventually comprise more than 80% of the mine water handled.

Modelling fractured rock masses is not simple

Groundwater modelling of a fractured rock mass is different than modelling a homogeneous aquifer. Discrete structures will have a great impact on seepage rates yet they can be difficult to detect beforehand.
As an example, when Diavik excavated the original bulk sample decline under the lake, water inflows were encountered associated with open joints. However a single open joint was by far the most significant water bearing structure intercepted over the 600-metre decline length.  It resulted in temporary flooding of the decline.

Before (2000) and After (2006) Technical Papers

Interestingly at least two technical papers have been written on Diavik by the project hydrogeologists. They describe the original inflow predictions in one paper and the actual situation in the second.
The 2000 paper describes the field investigations, the 1999 modeling assumptions, and results. You can download that paper here.
The subsequent paper (2006) describes the situation after a few years of mining, describing what was accurate, what was incorrect, and why. This paper can be downloaded here.
In essence, the volume of groundwater inflow was underestimated in the original model. The hydraulic conductivity of the majority of the rock mass was found to be broadly as predicted. However, a 30 m wide broken zone, representing less than 10% of the pit wall, resulted in nearly twice as much inflow as was predicted.
The broken zone did not have a uniform permeability but consisted of sparsely spaced vertical fractures. This characteristic made it difficult to detect the zone using only core logging and packer tests in individual boreholes.

Groundwater Models Should Not be Static

The original intent was that the Diavik groundwater model would not be static, and it continued to evolve over the life of the mine.
Now that Diavik has entered its underground mining stage, it would be interesting to see more updates on its hydrogeological performance. If anyone is aware of subsequent papers on the project, please share.
Note: If you would like to get notified when new blogs are posted, then sign up on the KJK mailing list on the website.  Otherwise I post notices on LinkedIn, so follow me at: https://www.linkedin.com/in/kenkuchling/.

Ore Dilution Prediction – Its Always an Issue

Over my years of preparing and reviewing mining studies, ore dilution often seems to be a contentious issue.  It is deemed either too low or too high, too optimistic or too pessimistic.  Everyone realizes that project studies can see significant economic impacts depending on what dilution factor is applied.  Hence we need to take the time to think about what dilution is being used and why.

Everyone has a preferred dilution method.

I have seen several different approaches for modelling and applying dilution.   Typically engineers and geologists seem to have their own personal favorites and tend to stick with them.   Here are some common dilution approaches.
1. Pick a Number:
This approach is quite simple.  Just pick a number that sounds appropriate for the orebody and the mining method.  There might not be any solid technical basis for the dilution value, but as long as it seems reasonable, it might go unchallenged.
2. SMU Compositing:
This approach takes each percent block (e.g.  a block is 20% waste and 80% ore) and mathematically composites it into a single Selective Mining Unit (“SMU”) block with an overall weighted average grade.  The SMU compositing process will incorporate some waste dilution into the block.  Possibly that could convert some ore blocks to waste once a cutoff grade is applied.   Some engineers may apply additional dilution beyond SMU compositing while others will consider the blocks fully diluted at the end of this step.
3. Diluting Envelope:
This approach assumes that a waste envelope surrounds the ore zone.  One estimates the volume of this waste envelope on different benches, assuming that it is mined with the ore.  The width of the waste envelope may be correlated to the blast hole spacing being used to define the ore and waste mining contacts.  The diluting grade within the waste envelope can be estimated or one may simply assume a more conservative zero-diluting grade.   In this approach, the average dilution factor can be applied to the final production schedule to arrive at the diluted tonnages and grades.  Alternatively, the individual diluted bench tonnes can be used for scheduling purposes.
4. Diluted Block Model:
This dilution approach uses complex logic to look at individual blocks in the block model, determine how many waste contact sides each block has, and then mathematically applies dilution based on the number of contacts.  Usually this approach relies on a direct swap of ore with waste.  If a block gains 100 m3 of waste, it must then lose 100 m3 of ore to maintain the volume balance.   The production schedule derived from the “diluted” block model usually requires no subsequent dilution factor.
5. Using UG Stope Modelling:
I have also heard about, but not yet used, a method of applying open pit dilution by adapting an underground stope modelling tool. By considering an SMU as a stope, automatic stope shape creators such as Datamine's Mineable Shape Optimiser (MSO) can be used to create wireframes for each mining unit over the entire deposit. Using these wireframes, the model can be sub-blocked and assigned as either 'ore' (inside the wireframe) or 'waste' (outside the wireframe) prior to optimization. It is not entirely clear to me whether this approach creates a diluted block model or generates a dilution factor to be applied afterwards.
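As a small illustration of the SMU compositing arithmetic described in approach #2, here is a hedged sketch with made-up grades; the 0.35 g/t cutoff and diluting grade are assumed values, not from any particular model:

```python
def composite_smu(ore_fraction, ore_grade, waste_grade=0.0):
    """Weighted-average grade of an SMU block that is part ore, part waste."""
    return ore_fraction * ore_grade + (1 - ore_fraction) * waste_grade

# A block that is 80% ore at 1.00 g/t and 20% waste at 0.05 g/t
smu_grade = composite_smu(0.80, 1.00, 0.05)  # diluted to 0.81 g/t
is_ore = smu_grade >= 0.35                   # cutoff applied to the diluted grade

# A mostly-waste block can convert to waste once the cutoff is applied:
marginal = composite_smu(0.30, 1.00)         # composites below the 0.35 cutoff
```

Note that the cutoff test happens after compositing, which is exactly the before/after dilution timing question discussed in the next section.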


When is the Cutoff Grade Applied?

Depending on which dilution approach is used, the cutoff grade will be applied either before or after dilution. When dilution is added to the final production schedule (#1 and #3), the cutoff grade will have been applied to the undiluted material.
When dilution is incorporated into the block model itself (#2 and #4), the cutoff grade is applied to the diluted blocks. The timing of when the cutoff grade is applied will have an impact on the ore tonnes and head grade being reported.

Does one apply dilution in pit optimization?

Another occasion when dilution may be used is during pit optimization.  There are normally input fields for both a dilution factor and an ore loss factor.   Some engineers will apply dilution at this step while others will leave the factors at zero.  There are valid reasons for either approach.
My preference is to use a zero dilution factor for optimization, since the nature of the ore zones will differ at different revenue factors and hence the dilution would be unique to each. It would be good to verify the impact that the dilution factor has on your own pit optimization; otherwise it is simply being applied as a contingency factor.

Conclusion

My personal experience is that, from a third party review perspective, reviewers tend to focus on the final dilution number used and whether it makes sense to them.   The actual approach used to arrive at that number tends to get less focus.
Regardless of which approach is being used, ensure that you can ultimately determine and quantify the percent dilution being applied.  This can be a bit more difficult with the mathematical block approaches.
Readers may have other dilution methods in their toolboxes, and it would be interesting to share them.
Note: If you would like to get notified when new blogs are posted, then sign up on the KJK mailing list on the website.  Otherwise I post notices on LinkedIn, so follow me at: https://www.linkedin.com/in/kenkuchling/.

Ore Stockpiling – Why are we doing this again?

In many of the past mining studies that I have worked on, stockpiling strategies were discussed and usually implemented. However, sometimes team members were surprised at the size of the stockpiles generated by the production plan. In some cases it was apparent that not all team members were clear on the purpose of stockpiling or had preconceived ideas about the rationale behind it. To many, stockpiling may seem like a good idea until they see it in action.
In this blog I won’t go into all the costs and environmental issues associated with stockpile operation.  The discussion focuses on the reasons for stockpiling and why stockpiles can get large in size or numerous in quantity.
In my experience there are four main reasons why ore stockpiling might be done. They are:
1. Campaigning: For metallurgical reasons, some ore types can cause process difficulties if mixed with other ores. The problematic ore might be stockpiled until sufficient inventory allows one to process that ore (i.e. campaign it) through the mill. Such stockpiles will only grow as large as the operator allows them to grow. At any time the operator can process the material and deplete the stockpile. Be aware that the mine might still be producing other ore types during a campaign, and those ores may then need to be stockpiled as well. That means even more ore stockpiles at site.
2. Grade Optimization: This stockpiling approach is used in situations where the mine delivers more ore than is required by the plant, thereby allowing the best grades to be processed directly while lower grades are stockpiled for a future date. Possibly one or more grade stockpiles may be used, for example a low grade and a medium-low grade stockpile. Such stockpiles may not get processed for years, possibly until the mine is depleted or until the mined grades are lower than those in the stockpile. Such stockpiles can grow to enormous size if accumulated over many years.  Oxidation and processability may be a concern with long term stockpiles.
3. Surge Control: Surge piles may be used in cases where the mine may have a fluctuating ore delivery rate and on some days excess ore is produced while other days there is underproduction. The stockpile is simply used to make up the difference to the plant to provide a steady feed rate. These stockpiles are also available as short term emergency supply if for some reason the mine is shut down (e.g. extreme weather). In general such stockpiles may be relatively small in size since they are simply used for surge control.
4. Blending: Blending stockpiles may be used where a processing plant needs a certain quality of feed material with respect to head grade or contaminant ratios (silica, iron, etc.). Blending stockpiles enable the operator to keep the plant feed quality within a consistent range. Such stockpiles may not be large individually; however there could be several of them depending on the nature of the orebody.
There may be other stockpiling strategies beyond the four listed above but those are the most common.

Test Stockpiling Strategies

Using today's production scheduling software, one can test multiple stockpiling strategies by applying different cutoff grades or using multiple grade stockpiles. The scheduling software algorithms determine whether one should be adding to a stockpile or reclaiming from it. The software will track grades in the stockpile and can sometimes model stockpile balances assuming reclaim by average grade, first in-first out (FIFO), or last in-first out (LIFO).
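To illustrate the reclaim accounting, here is a hedged sketch of average-grade and FIFO stockpile bookkeeping. It is a toy model, not how any particular scheduling package implements it:

```python
from collections import deque

class AverageGradeStockpile:
    """One blended grade for the whole pile: track tonnes and contained metal."""
    def __init__(self):
        self.tonnes, self.metal = 0.0, 0.0
    def add(self, tonnes, grade):
        self.tonnes += tonnes
        self.metal += tonnes * grade
    def reclaim(self, tonnes):
        grade = self.metal / self.tonnes  # pile-average grade
        self.tonnes -= tonnes
        self.metal -= tonnes * grade
        return tonnes, grade

class FifoStockpile:
    """Oldest parcels are reclaimed first; each parcel keeps its own grade."""
    def __init__(self):
        self.parcels = deque()  # [tonnes, grade], oldest at the left
    def add(self, tonnes, grade):
        self.parcels.append([tonnes, grade])
    def reclaim(self, tonnes):
        metal, need = 0.0, tonnes
        while need > 1e-9:
            parcel = self.parcels[0]
            take = min(parcel[0], need)
            metal += take * parcel[1]
            parcel[0] -= take
            need -= take
            if parcel[0] <= 1e-9:
                self.parcels.popleft()
        return tonnes, metal / tonnes
```

The two methods return different reclaim grades from identical additions, which is why the choice of accounting assumption matters to the schedule.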
Stockpiling in most cases provides potential benefits to an operation and the project economics. Even if metallurgical blending or ore campaigning is not required, one should always test the project economics with a few grade stockpiling scenarios.
Unfortunately, such tests are not simple to undertake with a manual scheduling approach, which is one more reason to move towards automated scheduling software.
Make sure everyone on the team understands the rationale for the stockpiling strategy and what the stockpiles might ultimately look like. They might be surprised.
Note: If you would like to get notified when new blogs are posted, then sign up on the KJK mailing list on the website.   Follow us on Twitter at @KJKLtd for updates and insights.

Resource Estimates – Are Independent Audits A Good Idea?

Question: How important is the integrity of a tailings dam to the successful operation of a mine?
Answer: Very important.
Tailings dam stability is so important that in some jurisdictions regulators may be requiring that mining companies have third party independent review boards or third party audits done on their tailings dams.  The feeling is that, although a reputable consultant may be doing the dam design, there is still a need for some outside oversight.
Differences in interpretation, experience, or errors of omission are a possibility regardless of who does the design.  Hence a second set of eyes can be beneficial.

Is the resource estimate important?

Next question is how important is the integrity of the resource and reserve estimate to the successful operation of a mine?
Answer: Very important.  The mine life, project economics, and shareholder value all rely on it.     So why aren’t a second set of eyes or third party audits very common?

NI 43-101 was the first step

In the years prior to 43-101, junior mining companies could produce their own resource estimates and disclose the results publicly. With the advent of NI 43-101, a second set of eyes was introduced, whereby an independent QP could review the company's internal resource estimate and/or prepare their own. Now the QP ultimately takes legal responsibility for the estimate.
Nowadays most small companies do not develop their own in-house resource estimates.  The task is generally awarded to an independent QP.

Resource estimation is a special skill

Possibly companies don't prepare their own resource estimates due to the specialization needed in modelling and geostatistics. Maybe it's due to the skills needed to operate block modelling software. Maybe companies feel that doing their own internal resource estimate is a waste of time since an independent QP will be doing the work anyway.

The QP is the final answer..or is it?

Currently it seems the project resource estimate is prepared solely by the QP or a team of QP’s.   In most cases this resource gets published without any other oversight. In other words no second set of eyes has taken a look at it.  We assume the QP is a qualified expert, their judgement is without question, and their work is error free.

As we have seen, some resource estimates have been mishandled and disciplinary actions have been taken against QPs. The conclusion is that not all QPs are perfect.
Just because someone meets the requirements to be a Competent Person or a Qualified Person does not automatically mean they are competent or qualified. Geological modelling is not an exact science and will be based on personal experience and judgement.

What is good practice?

The question being asked is whether it would be good practice for companies to have a second set of eyes look at resource estimates developed by independent QPs.
Where I have been involved in due diligence for acquisitions or mergers, it is not uncommon for one side to rebuild the resource model with their own technical team. They don't have 100% confidence in the original resource estimate handed over to them. The first thing they ask for is the drill hole database.
One downside to a third party review is the added cost to the owner.
Another downside is that when one consultant reviews another consultant’s work there is a tendency to have a list of concerns. Some of these may not be material, which then muddles the conclusion of the review.
On the positive side, a third party review may identify serious interpretation issues or judgement decisions that could be fatal to the resource.
If tailings dams are so important that they require a second set of eyes, why not the resource estimate?  After all, it is the foundation of it all.
Note: If you would like to get notified when new blogs are posted, then sign up on the KJK mailing list on the website.  Otherwise I post notices on LinkedIn, so follow me at: https://www.linkedin.com/in/kenkuchling/.