Pre-Concentration – Maybe Good, Maybe Not

A while back I wrote a blog titled “Pre-Concentration – Savior or Not?”. That blog was touting the benefits of pre-concentration. More recently I attended a webinar where the presenter stated that the economics of pre-concentration may not necessarily be as good as we think they are.
My first thought was “this is blasphemy”. However, upon further reflection, I wondered if it was true. To answer that question, I modified one of my old cashflow models from a Zn-Pb project that used pre-concentration. I adjusted the model to run a trade-off, with and without pre-con, by varying cost and recovery parameters.

Main input parameters

The trade-off model and some of the parameters are shown in the graphic below. The numbers used in the example are illustrative only, since I am mainly interested in seeing what factors have the greatest influence on the outcome.

The term “mass pull” is used to define the quantity of material that the pre-con plant pulls and sends to the grinding circuit. Unfortunately, some metal may be lost with the pre-con rejects. The main benefit of a pre-con plant is to allow the use of a smaller grinding/flotation circuit by scalping away waste. This lowers the grinding circuit capital cost, albeit at a slightly higher unit operating cost.
Concentrate handling systems may not differ much between model options since roughly the same amount of final concentrate is (hopefully) generated.
Another cost difference is tailings handling. The pre-con rejects likely must be trucked to a final disposal location while flotation tails can be pumped. I assumed a low pumping cost, i.e. to a nearby pit.
The pre-con plant doesn’t eliminate a tailings pond, but may make it smaller based on the mass pull factor. The most efficient pre-concentration plant from a tailings handling perspective is shown on the right.

The outcome

The findings of the trade-off surprised me a little bit.  There is an obvious link between pre-con mass pull and overall metal recovery. A high mass pull will increase metal recovery but also results in more tonnage sent to grinding. At some point a high mass pull will cause one to ask what’s the point of pre-con if you are still sending a high percentage of material to the grinding circuit.
The table below presents the NPV for different mass pull and recovery combinations. The column on the far right represents the NPV for the base case without any pre-con plant. The lower left corner of the table shows the recovery and mass pull combinations where the NPV exceeds the base case. The upper right are the combinations with a reduction in NPV value.
The width of this range surprised me, showing that the value generated by pre-con isn’t automatic. The NPV table shown is unique to the input assumptions I used and will be different for every project.
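To make the mechanics of the trade-off concrete, here is a minimal Python sketch of the comparison. Every input (grades, prices, costs, the capex scaling exponent) is invented for illustration and is not from my actual cashflow model.

```python
# Illustrative pre-con trade-off (every input here is hypothetical).
# Compares a crude NPV with and without a pre-con plant across a grid
# of mass pull and pre-con recovery combinations.

RATE, YEARS = 0.08, 10
ORE_MT, GRADE_PCT = 20.0, 6.0        # Mt of ore mined, % Zn-equivalent
PRICE_T, FLOT_REC = 2500.0, 0.90     # $/t payable metal, flotation recovery

def npv(mill_feed_mt, feed_grade_pct, mill_opex_t, capex):
    """Uniform annual cashflow built from life-of-mine quantities."""
    annual_t = mill_feed_mt * 1e6 / YEARS
    cf = (annual_t * feed_grade_pct / 100 * FLOT_REC * PRICE_T
          - annual_t * mill_opex_t)
    return sum(cf / (1 + RATE) ** y for y in range(1, YEARS + 1)) - capex

base = npv(ORE_MT, GRADE_PCT, mill_opex_t=25.0, capex=250e6)

# The whole ore stream pays the pre-con plant's opex and capex; only the
# pulled mass (carrying precon_rec of the metal) reports to grinding.
precon_pv_cost = 40e6 + sum(ORE_MT * 1e6 / YEARS * 5.0 / (1 + RATE) ** y
                            for y in range(1, YEARS + 1))

for mass_pull in (0.4, 0.5, 0.6, 0.7):
    for precon_rec in (0.85, 0.90, 0.95):
        feed_mt = ORE_MT * mass_pull
        feed_grade = GRADE_PCT * precon_rec / mass_pull    # upgraded feed
        case = npv(feed_mt, feed_grade, mill_opex_t=28.0,  # smaller mill,
                   capex=250e6 * mass_pull ** 0.6)         # higher unit opex
        case -= precon_pv_cost
        print(f"pull {mass_pull:.0%}, pre-con rec {precon_rec:.0%}: "
              f"NPV ${case / 1e6:,.0f}M vs base ${base / 1e6:,.0f}M")
```

Even in this toy version, the low mass pull / high recovery corner beats the base case while the opposite corner falls short, mirroring the band structure in the NPV table described above.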

The economic analysis of pre-concentration does not include the possible benefits related to reduced water and energy consumption. These may be important factors for social license and permitting purposes, even if unsupported by the economics.  Here’s an article from ThermoFisher on this “How Bulk Ore Sorting Can Reduce Water and Energy Consumption in Mining Operations“.

Conclusion

The objective of this analysis isn’t to demonstrate the NPV of pre-concentration. The objective is to show that pre-concentration might or might not make sense depending on a project’s unique parameters. The following are some suggestions:
1. Every project should at least take a cursory look at pre-concentration to see if it is viable, even if it’s only at a mineralogical assessment level.
2. Make certain to verify that all ore types in the deposit are amenable to the same pre-concentration circuit. This means one needs to have a good understanding of the ore types that will be encountered.
3. Anytime one is doing a study using pre-concentration, one should also examine the economics without it. This helps to understand the economic drivers and the risks. You can then decide whether it is worth adding another operating circuit to the process flowsheet, one that has its own cost and performance risk. The more processing components added to a flowsheet, the more overall plant availability may be affected.
4. The head grade of the deposit also determines how economically risky pre-concentration might be. In higher grade ore bodies, the negative impact of any metal loss in pre-concentration may be offset by accepting higher cost for grinding (see chart on the right).
5. In my opinion, the best time to decide on pre-con would be at the PEA stage. Although the amount of testing data available may be limited, it may be sufficient to assess whether pre-con warrants further study.
6. Don’t fall in love with or over promote pre-concentration until you have run the economics. It can make it harder to retract the concept if the economics aren’t there.


Note: You can sign up for the KJK mailing list to get notified when new blogs are posted.
Follow us on Twitter at @KJKLtd for updates and insights.

Climbing the Hill of Value With 1D Modelling

Recently I read some articles about the Hill of Value.  I’m not going into detail about it but the Hill of Value is a mine optimization approach that’s been around for a while.  Here is a link to an AusIMM article that describes it “The role of mine planning in high performance”.  For those interested, here is another post about this subject “About the Hill of Value. Learning from Mistakes (II)“.
(Figure: the hill of value, from AusIMM)

The basic premise is that an optimal mining project is based on a relationship between cut-off grade and production rate.  The standard breakeven or incremental cutoff grade we normally use may not be optimal for a project.
The image to the right (from the aforementioned AusIMM article) illustrates the peak in the NPV (i.e. the hill of value) on a vertical axis.
A project requires a considerable technical effort to properly evaluate the hill of value. Each iteration of a cutoff grade results in a new mine plan, new production schedule, and a new mining capex and opex estimate.
Each iteration of the plant throughput requires a different mine plan and plant size and the associated project capex and opex.   All of these iterations will generate a new cashflow model.
The effort to do that level of study thoroughly is quite significant.  Perhaps one day artificial intelligence will be able to generate these iterations quickly, but we are not at that stage yet.

Can we simplify it?

In previous blogs (here and here) I described a 1D cashflow model that I use to quickly evaluate projects.  The 1D approach does not rely on a production schedule; instead it uses life-of-mine quantities and costs.  Given its simplicity, I was curious if the 1D model could be used to evaluate the hill of value.
I compiled some data to run several iterations for a hypothetical project, loosely based on a mining study I had on hand.  The critical inputs for such an analysis are the operating and capital cost ranges for different plant throughputs.
I had a grade tonnage curve, including the tonnes of ore and waste, for a designed pit.  This data is shown graphically on the right.   Essentially the mineable reserve is 62 Mt @ 0.94 g/t Pd with a strip ratio of 0.6 at a breakeven cutoff grade of 0.35 g/t.   It’s a large tonnage, low strip ratio, and low grade deposit.  The total pit tonnage is 100 Mt of combined ore and waste.
I estimated capital costs and operating costs for different production rates using escalation factors such as the rule of 0.6 and a 20% fixed – 80% variable cost split.   It would be best to complete proper cost estimations but that is beyond the scope of this analysis. When no detailed estimates exist, factoring is really the only option.
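For those curious, here is what those two factoring rules look like in code. The base throughput and costs are placeholders I have assumed, not values from the study.

```python
# Factored cost scaling (base values below are my assumptions).

BASE_TPD, BASE_CAPEX, BASE_OPEX = 15_000, 400e6, 18.00   # tpd, $, $/t

def scaled_capex(tpd, exponent=0.6):
    """Rule of 0.6: capex2 = capex1 * (capacity2 / capacity1) ** 0.6."""
    return BASE_CAPEX * (tpd / BASE_TPD) ** exponent

def scaled_opex(tpd, fixed_frac=0.20):
    """20% of the base $/t is fixed (total $ constant, so $/t rises as
    throughput falls); the remaining 80% is variable and stays flat."""
    return BASE_OPEX * fixed_frac * (BASE_TPD / tpd) + BASE_OPEX * (1 - fixed_frac)

for tpd in (12_000, 15_000, 19_000):
    print(f"{tpd:,} tpd: capex ${scaled_capex(tpd) / 1e6:.0f}M, "
          f"opex ${scaled_opex(tpd):.2f}/t")
```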
The charts below show the cost inputs used in the model.   Obviously each project would have its own set of unique cost curves.
The 1D cashflow model was used to evaluate economics for a range of cutoff grades (from 0.20 g/t to 1.70 g/t) and production rates (12,000 tpd to 19,000 tpd).  The NPV sensitivity analysis was done using the Excel data table function.  This is one of my favorite and most useful Excel features.
A total of 225 cases were run (15 cutoff grades × 15 throughputs) for this example.
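The same grid can be reproduced outside of Excel. Below is a rough Python stand-in for the data-table run; the grade-tonnage function and the costs inside npv_1d() are toy assumptions I invented so the loop is self-contained, not the actual project curves.

```python
# Rough stand-in for the Excel data table: 15 cutoff grades x 15
# throughputs = 225 NPV cases. All curves and costs are placeholders.

def tonnes_and_grade(cog):
    """Toy grade-tonnage curve anchored at 62 Mt @ 0.94 g/t (0.35 cog)."""
    tonnes_mt = max(0.0, 62.0 * (1.0 - (cog - 0.35) / 1.5))
    grade_gpt = 0.94 + 0.55 * (cog - 0.35)
    return tonnes_mt, grade_gpt

def npv_1d(tonnes_mt, grade_gpt, tpd, price_oz=1500.0, rec=0.80, rate=0.05):
    """Toy 1D model: the LOM reserve is produced at a uniform rate."""
    if tonnes_mt <= 0:
        return float("-inf")
    years = max(1, round(tonnes_mt * 1e6 / (tpd * 365)))
    annual_t = tonnes_mt * 1e6 / years
    revenue = annual_t * grade_gpt * rec / 31.1035 * price_oz
    opex = annual_t * 18.0 * (0.2 * 15_000 / tpd + 0.8)   # 20/80 split
    capex = 400e6 * (tpd / 15_000) ** 0.6                 # rule of 0.6
    cf = revenue - opex
    return sum(cf / (1 + rate) ** y for y in range(1, years + 1)) - capex

cogs = [0.20 + 1.50 / 14 * i for i in range(15)]     # 0.20 .. 1.70 g/t
rates = [12_000 + 500 * i for i in range(15)]        # 12,000 .. 19,000 tpd
grid = {(c, r): npv_1d(*tonnes_and_grade(c), r) for c in cogs for r in rates}
best_cog, best_tpd = max(grid, key=grid.get)
print(f"best case: {best_cog:.2f} g/t at {best_tpd:,} tpd, "
      f"NPV ${grid[(best_cog, best_tpd)] / 1e6:,.0f}M")
```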

What are the results?

The results are shown below.  Interestingly, the optimal plant size and cutoff grade vary depending on the economic objective selected.
The discounted NPV 5% analysis indicates an optimal plant with a high throughput (19,000 tpd) using a low cutoff grade (0.40 g/t).  This would be expected due to the low grade nature of the orebody.  Economies of scale, low operating costs, and high revenues are desired.   Discounted models like revenue as quickly as possible; hence the high throughput rate.
The undiscounted NPV 0% analysis gave a different result.  Since the timing of revenue is less important, a smaller plant was optimal (12,000 tpd) albeit using a similar low cutoff grade near the breakeven cutoff.
If one targets a low cash cost as an economic objective, one gets a different optimal project.  This time a large plant with an elevated cutoff of 0.80 g/t was deemed optimal.
The Excel data table matrices for the three economic objectives are shown below.  The “hot spots” in each case are evident.


Conclusion

The Hill of Value is an interesting optimization concept to apply to a project.  In the example I have provided, the optimal project varies depending on what the financial objective is.  I don’t know if this would be the case with all projects, however I suspect so.
In this example, if one wants to be a low cash cost producer, one may have to sacrifice some NPV to do this.
If one wants to maximize discounted NPV, then a large plant with low opex would be the best alternative.
If one prefers a long mine life, say to take advantage of forecasted upticks in metal prices, then an undiscounted scenario might win out.
I would recommend that every project undergoes some sort of hill of value test, preferably with more engineering rigor. It helps you to understand a project’s strengths and weaknesses.  The simple 1D analysis can be used as a guide to help select what cases to look at more closely. Nobody wants to assess 225 alternatives in engineering detail.
In reality, I don’t recall ever seeing a 43-101 report describing a project with a hill of value test. Let me know if you are aware of any; I’d be interested in sharing them.  Alternatively, if you have a project and would like me to run it through my simple hill of value model, let me know.
Note: You can sign up for the KJK mailing list to get notified when new blogs are posted.

Simple Financial Models Can Really Help

A few years ago I posted an article about how I use a simple (one-dimensional) financial model to help me take a very quick look at mining projects. The link to that blog is here. I use this simple 1D model with clients that are looking at potential acquisitions or joint venture opportunities at early stages. In many instances the problem is that there is only a resource estimate but no engineering study or production schedule available.

By referring to my model as a 1D model, I mean that I don’t use a mine production schedule across the page the way a conventional cashflow model does.
The 1D model simply uses life-of-mine reserves, life-of-mine revenues, operating costs, and capital costs. It’s essentially all done in a single column.  The 1D model also incorporates a very rudimentary tax calculation to ballpark an after-tax NPV.
The 1D model does not calculate payback period or IRR but focuses solely on NPV. NPV, for me, is the driver of the enterprise value of a project or a company. A project with a $100M NPV has that value regardless of whether the IRR is 15% or 30%.
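For readers who want to see the mechanics, here is a bare-bones sketch of the 1D idea in Python. Every number is a placeholder, and the flat tax rate below is even cruder than the rudimentary tax calculation in my actual model.

```python
# Bare-bones 1D model: life-of-mine totals in a single "column".
# All inputs are placeholders; the flat tax is deliberately crude.

LOM_TONNES = 30e6      # t milled over the life of mine
HEAD_GRADE = 1.8       # g/t Au
RECOVERY   = 0.92
PRICE_OZ   = 1900.0    # $/oz
OPEX_PER_T = 32.0      # $/t milled, all-in site opex
CAPEX      = 350e6
YEARS      = 12
TAX_RATE   = 0.30      # very rudimentary flat rate
DISC       = 0.05

revenue  = LOM_TONNES * HEAD_GRADE / 31.1035 * RECOVERY * PRICE_OZ
op_cost  = LOM_TONNES * OPEX_PER_T
pre_tax  = revenue - op_cost
tax      = max(0.0, (pre_tax - CAPEX) * TAX_RATE)   # capex as a crude shield
post_tax = pre_tax - tax

annual = post_tax / YEARS                # spread uniformly to discount it
npv = sum(annual / (1 + DISC) ** y for y in range(1, YEARS + 1)) - CAPEX
print(f"After-tax NPV at {DISC:.0%}: ${npv / 1e6:,.0f}M")
```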

How accurate is a 1D model?

One of the questions I have been asked is how valid is the 1D approach compared to the standard 2D cashflow model. In order to examine that, I have randomly selected several recent 43-101 studies and plugged their reserve and cost parameters into the 1D model.
It takes about 10 minutes to find the relevant data in the technical report and insert the numbers. Interestingly, it is typically easy to find the data in reports authored by certain consultants. In other reports one must dig deeper to get the data, and sometimes cannot find it at all.
The results of the comparison are shown in the scatter plots. The bottom x-axis is the 43-101 report NPV and the y-axis is the 1D model result. The 1:1 correlation line is shown on the plots.
There is surprisingly good agreement on both the discounted and undiscounted cases. Even the before and after tax cases look reasonably close.
Where the 1D model can run into difficulty is when a project has a production expansion after a few years. The 1D model logic assumes a uniform annual production rate for the life of mine reserve.
Another thing that hampers the 1D model is when a project uses low grade stockpiling to boost head grades early in the mine life. The 1D model assumes a uniform life-of-mine production reserve grade profile.
Nevertheless even with these limitations, the NPV results are reasonably representative. Staged plant expansions and high grading are usually modifications to an NPV and generally do not make or break a project.

Conclusion

My view is that the 1D cashflow model is an indicative tool only. It is quick and simple to use. It allows me to evaluate projects and test the NPV sensitivity to metal prices, head grades, process recovery, operating costs, etc. These are sensitivities that might not be described in the financial section of the 43-101 report.
This exercise involved comparing data from existing 43-101 reports. Obviously if you are taking a look at an early stage opportunity, you will need to define your own capital and operating cost inputs.
I prefer using a conventional cashflow model approach (i.e. 2D) when I can. However when working with limited technical data, it’s likely not worth the effort to create a complex cashflow model. For me, the 1D model can work just fine. Build one for yourself, if you need convincing.
In an upcoming blog I will examine the hill of value optimization approach with respect to the 1D model.
Note: You can sign up for the KJK mailing list to get notified when new blogs are posted.

Ore Dilution – An Underground Perspective

A few months ago I wrote a blog about different approaches that mining engineers are using to predict dilution in an open pit setting. You can read the blog at this link. Since that time I have been in touch with the author of a technical paper on dilution specifically related to underground operations. Given that my previous blog was from an open pit perspective, an underground discussion might be of interest.
The underground paper is titled “Mining Dilution and Mineral Losses – An Underground Operator’s Perspective” by Paul Tim Whillans. You can download the paper at this link.

Here is the abstract

For the underground operator, dilution is often synonymous with over-break, which mining operations struggle to control. However, there are many additional factors impacting dilution which may surpass the importance of overbreak, and these also need to be considered when assessing a project. Among these, ore contour variability is an important component of both dilution and mineral losses which is often overlooked.  Mineral losses are often considered to be less important because it is considered that they will only have a small impact on net present value. This is not necessarily the case and in fact mineral losses may be much higher than indicated in mining studies due to aggregate factors and may have an important impact on shorter term economics.

My key takeaways

I am not going into detail on Paul’s paper, however some of my key takeaways are as follows. Download the paper to read the rationale behind these ideas.
  • Over-break is a component of dilution but may not be the major cause of it. Other aspects are in play.
  • While dilution may be calculated on a volumetric basis, the application of correct ore and waste densities is important. This applies less to gold deposits than base metal deposits, where ore and waste density differences can be greater.
  • Benchmarking dilution at your mine site with published data may not be useful. Nobody likes to report excessively high dilution for various reasons, hence the published dilution numbers may not be entirely truthful.
  • Ore loss factors are important but can be difficult to estimate. In open pit mining, ore losses are not typically given much consideration. However in underground mining they can have a great impact on the project life and economics.
  • Mining method sketches can play a key role in understanding underground dilution and ore losses, even in today’s software driven mining world.
  • It’s possible that many mine operators are using cut-off grades that are too low in some situations.
  • High grading, an unacceptable practice in the past, is now viewed differently due to its positive impact on NPV. (It seems Mark Bristow at Barrick may be putting a stop to this approach).
  • Inferred resources used in a PEA can often decrease significantly when upgraded to the measured and indicated classifications. If there is a likelihood of this happening, it should be factored into the PEA production tonnage.
  • CIM Best Practice Guidelines do not require underground ore exposure for feasibility studies. However exposing the ore faces can have a significant impact on one’s understanding of the variability of the ore contacts and the properties of minor faults.

Conclusion

The bottom line is that not everyone will necessarily agree with all the conclusions of Paul’s paper on underground dilution. However it does raise many issues for technical consideration on your project.
All of us in the industry want to avoid some of the well publicized disappointments seen on recent underground projects. Several have experienced difficulty in delivering the ore tonnes and grades that were predicted in the feasibility studies. No doubt it can be an anxious time for management when commissioning a new underground mine.
Note: previously I had shared another one of Paul’s technical papers in a blog called “Underground Feasibility Forecasts vs Actuals”. It also provides some interesting insights about underground mining projects.
If you need more information, Paul Whillans website is at http://www.whillansminestudies.com/.
The entire blog post library can be found at this LINK with topics ranging from geotechnical, financial modelling, and junior mining investing.
Note: If you would like to get notified when new blogs are posted, then sign up on the KJK mailing list on the website.  

Hydrogeology At Diavik – It’s Complicated

From 1997 to 2000 I was involved in the feasibility study and initial engineering for the Diavik open pit mine in the Northwest Territories. As you can see from the photo on the right, groundwater inflows were going to be a potential mining issue.
Predictions of mine inflow quantity and quality were required as part of the project design and permitting. Also integral to the mine operating plan were geotechnical issues, wall freezing issues, and methods for handling the seepage water.
This mine was going to be a unique situation. The open pit is located both within Lac de Gras and partly on exposed land (i.e. islands). The exposed land is underlain by permafrost of varying depth while the rock mass under the lake was unfrozen. The sub-zero climate meant that pit wall seepage would turn into mega-icicles.
Phreatic pressures could build up behind frozen pit walls. Many different factors were going to come into play in this mining operation, so comprehensive field investigations would be required.

A good thing Rio Tinto was a 60% owner and the operator

(Photo: open pit slope)

At no time did the engineering team feel that field budgets were restricted or that technical investigations would be limited. Unfortunately, in my subsequent career working on other projects, I have seen cases where lack of funds does impact the quantity (and quality) of technical field data.
The Golder Associates Vancouver hydrogeological team was brought on board to help out. Hydrogeological field investigations consisted of packer testing, borehole flowmeter testing, borehole temperature logging, and borehole camera imaging. Most of this work was done from ice level during the winter.
A Calgary based consultant undertook permafrost prediction modelling, which I didn’t even know was a thing at the time.
All of this information was used in developing a three-dimensional groundwater model. MODFLOW and MT3DMS were used to predict groundwater inflow volumes and water quality. The modelling results indicated that open pit inflows were expected to range up to 9,600 m3/day with TDS concentrations gradually increasing in time to maximum levels of about 440 mg/ℓ.
The groundwater modelling also showed that lake water re-circulating through the rock mass would eventually comprise more than 80% of the mine water handled.

Modelling fractured rock masses is not simple

Groundwater modelling of a fractured rock mass is different than modelling a homogeneous aquifer, like sand or gravel. Discrete structures in the rock mass will be the controlling factor on seepage rates yet such structures can be difficult to detect beforehand.
As an example, when Diavik excavated the original bulk sample decline under the lake, water inflows associated with open joints were encountered. However, a single open joint was by far the most significant water-bearing structure intercepted over the 600-metre decline length.
It resulted in temporary flooding of the decline, but was something that would be nearly impossible to find beforehand.

Before (2000) and After (2006) Technical Papers

Interestingly at least two technical papers have been written on Diavik by the project hydrogeologists. They describe the original inflow predictions in one paper and the actual situation in the second.
The 2000 paper describes the field investigations, the 1999 modeling assumptions, and results. You can download that paper here.
The subsequent paper (2006) describes the situation after a few years of mining, describing what was accurate, what was incorrect, and why. This paper can be downloaded here.
In essence, the volume of groundwater inflow was underestimated in the original model.  The hydraulic conductivity of the majority of the rock mass was found to be similar to what had been predicted.  However a 30 metre wide broken zone, representing less than 10% of the pit wall, resulted in nearly twice as much inflow as was predicted.
The broken zone did not have a uniform permeability but consisted of sparsely spaced vertical fractures. This characteristic made it difficult to detect the zone using only core logging and packer tests in individual boreholes.

Groundwater Models Should Not be Static

The original intent during initial design was that the Diavik groundwater model would not be static.  It would continue to evolve over the life of the mine as more knowledge was acquired.
Now that Diavik has entered their underground mining stage, it would be interesting to see further updates on their hydrogeological performance. If anyone is aware of any subsequent papers on the project, please share.
One way to address excess amounts of pit wall seepage is through the use of pit perimeter depressurization wells.   In another blog post I discussed a new approach that allows directional drilling of wells to be done.  You can read that article at this link “Directional Drilling Open Pit Dewatering Wells – Great Idea“.
The entire blog post library can be found at this LINK with topics ranging from geotechnical, financial modelling, and junior mining investing.
Note: If you would like to get notified when new blogs are posted, then sign up on the KJK mailing list on the website.  Otherwise I post notices on LinkedIn, so follow me at: https://www.linkedin.com/in/kenkuchling/.

Mining Dilution Prediction – It’s Not That Simple

Over my years of working on and reviewing mining studies, ore dilution often seems to be one of the much discussed issues.  It is deemed either too low or too high, too optimistic or too pessimistic.  Project economics can see significant impacts depending on what dilution factor is applied.  There are numerous instances where mines have been put into production and excess dilution has subsequently led to their downfall.  Hence we need to take the time to think about what dilution is being applied and the basis for it.

Everyone has a preferred dilution method.

I have seen several different approaches for modelling and applying dilution.   It seems that engineers and geologists have their own personal favorites and tend to stick with them.   Here are some common dilution approaches that I have seen (and used myself).
1. Pick a Number:
This approach is quite simple.  Just pick a number that sounds appropriate for the orebody and the mining method.  There might not be any solid technical basis for the dilution value, but as long as it seems reasonable, it might go unchallenged.  Possibly it’s a dilution value commonly seen in numerous other studies.
2. SMU Compositing:
This approach takes each percent block (e.g. a block is 20% waste and 80% ore) and mathematically composites it into a single Selective Mining Unit (“SMU”) block with an overall weighted average grade.  The SMU compositing approach will dilute the ore in the block with the contained waste.  Ultimately that might convert some highly diluted ore blocks to waste once a cutoff grade is applied (a small sketch of this calculation appears after this list).   Some engineers may apply an additional dilution percentage beyond the SMU compositing, while others will consider the blocks fully diluted at this step.
3. Diluting Envelope:
This approach assumes that a waste envelope surrounds the ore zone.  One estimates the volume of this envelope on different benches, assuming that it is mined with the ore.  The width of the waste envelope may be linked with the blast hole spacing used to define the ore and waste contacts for mining.  The diluting grade within the waste envelope can be estimated or one may simply assume a more conservative zero-diluting grade.   In this approach, the average dilution factor can be applied to the final production schedule to arrive at the diluted tonnages and grades sent to the process plant.
4. Diluted Block Model:
This dilution approach uses complex logic to look at individual blocks in the block model, determine how many waste contact sides each block has, and then mathematically applies dilution based on the number of contacts.  A block with waste on three sides would be more heavily diluted than a block with waste only on one side.   Usually this approach relies on a direct swap of ore with waste.  If a block gains 100 m3 of waste, it must then lose 100 m3 of ore to maintain the volume balance.   The production schedule derived from a “diluted” block model usually requires no subsequent dilution factor.
5. Using UG Stope Modelling
I have also heard about, but not yet used, a method of applying open pit dilution by adapting an underground stope modelling tool.  By considering an SMU as a stope, automatic stope shape creators such as Datamine’s Mineable Shape Optimiser (MSO) can be used to create wireframes for each mining unit over the entire deposit.  Using these wireframes, the model can be sub-blocked and assigned as either ‘ore’ (inside the wireframe) or ‘waste’ (outside the wireframe) prior to optimization.   It is not entirely clear to me if this approach creates a diluted block model or generates a dilution factor to be applied afterwards.
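To make approach #2 concrete, here is a minimal sketch of the SMU compositing arithmetic. The block fractions, grades, and cutoff are invented for illustration.

```python
# SMU compositing for a single block (fractions and grades invented).
# The ore and waste portions are weight-averaged into one SMU grade,
# then the cutoff is applied to the *diluted* grade.

blocks = [  # (ore_fraction, ore_grade g/t, waste_grade g/t)
    (0.80, 2.40, 0.05),
    (0.55, 1.10, 0.10),   # heavily diluted: ends up below cutoff
    (0.95, 3.20, 0.00),
]
CUTOFF = 0.90  # g/t, applied after dilution

for ore_frac, ore_g, waste_g in blocks:
    smu_grade = ore_frac * ore_g + (1 - ore_frac) * waste_g
    status = "ore" if smu_grade >= CUTOFF else "waste"
    print(f"{ore_frac:.0%} ore @ {ore_g} g/t -> SMU {smu_grade:.2f} g/t ({status})")
```

The middle block illustrates the conversion effect: 55% ore at 1.10 g/t composites down to 0.65 g/t, below the cutoff, so the whole block reports as waste.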


When is the Cutoff Grade Applied?

Depending on which dilution approach is used, the cutoff grade will be applied either before or after dilution.   When the dilution approach requires adding dilution to the final production schedule (#1 and #3), the cutoff grade will have been applied to the undiluted material.
When dilution is incorporated into the block model itself (#2 and #4), the cutoff grade is likely applied to the diluted blocks.
The timing of when the cutoff grade is applied to the ore blocks will have an impact on the ore tonnes and head grade being reported.

Does one apply dilution in pit optimization?

Another occasion when dilution may be used is during pit optimization.  In the software, there are normally input fields for both a dilution factor and an ore loss factor.   Some engineers will apply dilution at this step while others will leave the factors at zero.  There are valid reasons for either approach.
My preference is to use a zero dilution factor for optimization, since the nature of the ore zones will be different at different revenue factors and hence dilution would be unique to each.   It would be good to verify the impact that the dilution factor has on your own pit optimization by re-running it with a factor to see the result.

Conclusion

My personal experience is that, from a third party review perspective, reviewers tend to focus on the value of the dilution percentage used and whether it seems reasonable.   The actual dilution approach tends to get less scrutiny.
Regardless of which approach is being used, ensure that you can ultimately determine and quantify the percent dilution being applied.  This can be a bit more difficult with the mathematical block diluting approaches.
Readers may have other dilution methods in their toolbox, and it would be interesting to share them.
There is another blog post that discussed dilution from an underground mining perspective.  This discussion was written by another engineer who permitted me to share their paper.    You can read that at “Ore Dilution – An Underground Perspective“.
The entire blog post library can be found at this LINK with topics ranging from geotechnical, financial modelling, and junior mining investing.
Note: If you would like to get notified when new blogs are posted, then sign up on the KJK mailing list on the website.  

Ore Stockpiling – Why are we doing this again?

In many of the past mining studies that I have worked on, stockpiling strategies were discussed and usually implemented. However sometimes team members were surprised at the size of the stockpiles that were generated by the production plan. In some cases it was apparent that not all team members were clear on the purpose of stockpiling or had preconceived ideas on the rationale behind it. To many, stockpiling may seem like a good idea until they see it in action.
In this blog I won’t go into all the costs and environmental issues associated with stockpile operation.  The discussion focuses on the reasons for stockpiling and why stockpiles can get large in size or numerous in quantity.
In my experience there are four main reasons why ore stockpiling might be done. They are:
1. Campaigning: Used for metallurgical reasons when there are some ore types that can cause process difficulties if mixed with other ores. The problematic ore might be stockpiled until sufficient inventory allows one to process that ore (i.e. campaign) through the mill. Such stockpiles will only grow as large as the operator allows them to grow. At any time the operator can process the material and deplete the stockpile. Be aware that the mine might still be producing other ore types, and those ores may need to be stockpiled during the campaigning.  That means even more ore stockpiles at site.
2. Grade Optimization: This stockpiling approach is used in situations where the mine delivers more ore than is required by the plant, thereby allowing the best grades to be processed directly while lower grades are stockpiled for a future date. Possibly one or more grade stockpiles may be used, for example a low grade and a medium-low grade stockpile. Such stockpiles may not get processed for years, possibly until the mine is depleted or until the mined grades are lower than those in the stockpile. Such stockpiles can grow to enormous size if accumulated over many years.  Oxidation and processability may be a concern with long term stockpiles.
3. Surge Control: Surge piles may be used in cases where the mine may have a fluctuating ore delivery rate and on some days excess ore is produced while other days there is underproduction. The stockpile is simply used to make up the difference to the plant to provide a steady feed rate. These stockpiles are also available as short term emergency supply if for some reason the mine is shut down (e.g. extreme weather). In general such stockpiles may be relatively small in size since they are simply used for surge control.
4. Blending: Blending stockpiles may be used where a processing plant needs a certain quality of feed material with respect to head grade or contaminant ratios (silica, iron, etc.). Blending stockpiles enable the operator to keep the plant feed quality within a consistent range. Such stockpiles may not be large individually; however there could be several of them depending on the nature of the orebody.
There may be other stockpiling strategies beyond the four listed above but those are the most common.

Test Stockpiling Strategies

Using today’s production scheduling software, one can test multiple stockpiling strategies by applying different cutoff grades or using multiple grade stockpiles. The scheduling software algorithms determine whether one should be adding to the stockpile or reclaiming from it. The software will track grades in the stockpile and can sometimes model stockpile balances assuming reclaim by average grade, first in-first out (FIFO), or last in-first out (LIFO).
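As an illustration of the bookkeeping involved, here is a toy stockpile ledger contrasting reclaim by weighted-average grade with reclaim by FIFO parcels. All tonnages and grades are invented, and commercial schedulers handle this with far more sophistication.

```python
# Toy stockpile ledger (all inputs invented) showing two reclaim
# conventions a scheduler might model: weighted-average grade vs FIFO.

from collections import deque

class Stockpile:
    def __init__(self):
        self.parcels = deque()          # (tonnes, grade), oldest first

    def add(self, tonnes, grade):
        self.parcels.append((tonnes, grade))

    def avg_grade(self):
        t = sum(p[0] for p in self.parcels)
        return sum(p[0] * p[1] for p in self.parcels) / t if t else 0.0

    def reclaim_fifo(self, tonnes):
        """Take tonnes from the oldest parcels; return the blended grade."""
        taken, metal = 0.0, 0.0
        while tonnes > 1e-9 and self.parcels:
            pt, pg = self.parcels.popleft()
            take = min(pt, tonnes)
            taken, metal, tonnes = taken + take, metal + take * pg, tonnes - take
            if pt > take:                       # put the remainder back
                self.parcels.appendleft((pt - take, pg))
        return metal / taken if taken else 0.0

sp = Stockpile()
sp.add(100_000, 0.45)   # early, lower grade parcel
sp.add(100_000, 0.60)   # later, slightly better parcel
print(f"average-grade reclaim: {sp.avg_grade():.2f} g/t")
print(f"FIFO reclaim of 120 kt: {sp.reclaim_fifo(120_000):.2f} g/t")
```

The two conventions report different reclaim grades (0.53 g/t versus 0.48 g/t here), which is exactly why the assumption matters when stockpiles get large.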
Stockpiling in most cases provides potential benefits to an operation and the project economics. Even if metallurgical blending or ore campaigning is not required, one should always test the project economics with a few grade stockpiling scenarios.
Unfortunately such tests are not simple to undertake with a manual scheduling approach, which is one more reason to move towards automated scheduling software.
Make sure everyone on the team understands the rationale for the stockpiling strategy and what the stockpiles might ultimately look like. They might be surprised.
Note: If you would like to get notified when new blogs are posted, then sign up on the KJK mailing list on the website.   Follow us on Twitter at @KJKLtd for updates and insights.

Resource Estimates – Are Independent Audits A Good Idea?

Question: How important is the integrity of a tailings dam to the successful operation of a mine?
Answer: Very important.
Tailings dam stability is so important that in some jurisdictions regulators may be requiring that mining companies have third party independent review boards or third party audits done on their tailings dams.  The feeling is that, although a reputable consultant may be doing the dam design, there is still a need for some outside oversight.
Differences in interpretation, experience, or errors of omission are a possibility regardless of who does the design.  Hence a second set of eyes can be beneficial.

Is the resource estimate important?

Next question is how important is the integrity of the resource and reserve estimate to the successful operation of a mine?
Answer: Very important.  The mine life, project economics, and shareholder value all rely on it.  So why isn’t a second set of eyes or a third party audit more common?

NI 43-101 was the first step

In the years prior to 43-101, junior mining companies could produce their own resource estimates and disclose the results publicly.  With the advent of NI 43-101, a second set of eyes was introduced whereby an independent QP could review the company’s internal resource and/or prepare their own estimate.  Now the QP ultimately takes legal responsibility for the estimate.
Nowadays most small companies do not develop their own in-house resource estimates.  The task is generally awarded to an independent QP.

Resource estimation is a special skill

Possibly companies don’t prepare their own resource estimates due to the specialization needed in modelling and geostatistics. Maybe it’s due to the skills needed to operate block modeling software.   Maybe the companies feel that doing their own internal resource estimate is a waste of time since an independent QP will be doing the work anyway.

The QP is the final answer… or is it?

Currently it seems the project resource estimate is prepared solely by the QP or a team of QPs.   In most cases this resource gets published without any other oversight. In other words, no second set of eyes has taken a look at it.  We assume the QP is a qualified expert, their judgement is without question, and their work is error free.


As we have seen, some resource estimates have been mishandled and disciplinary actions have been taken against QPs.   The conclusion is that not all QPs are perfect.
Just because someone meets the requirements to be a Competent Person or a Qualified Person does not automatically mean they are competent or qualified. Geological modeling is not an exact science and will be based on their personal experience.

What is good practice?

The question being asked is whether it would be good practice for companies to have a second set of eyes take a look at the resource estimates developed by their independent QPs.
Where I have been involved in due diligence for acquisitions or mergers, it is not uncommon for one side to rebuild the resource model with their own technical team.  They don’t have 100% confidence in the original resource handed over to them.   The first thing asked for is the drill hole database.
One downside to a third party review is the added cost to the owner.
Another downside is that when one consultant reviews another consultant’s work there is a tendency to have a list of concerns. Some of these may not be material, which then muddles the conclusion of the review.
On the positive side, a third party review may identify serious interpretation issues or judgement decisions that could be fatal to the resource.
If tailings dams are so important that they require a second set of eyes, why not the resource estimate?  After all, it is the foundation of it all.
Note: If you would like to get notified when new blogs are posted, then sign up on the KJK mailing list on the website.  Otherwise I post notices on LinkedIn, so follow me at: https://www.linkedin.com/in/kenkuchling/.

Disrupt Mining Challenge – Watch for it at PDAC

Update:  This blog was originally written in January 2016, and has been updated for Jan 2018.

Gold Rush Challenge

In 2016 at PDAC, Integra Gold held the first Gold Rush Challenge.  It was an innovative event for the mining industry, following in the footsteps of the Goldcorp Challenge held way back in 2001.
The Integra Gold Rush Challenge was a contest whereby entrants were given access to a geological database and asked to prepare submissions presenting the best prospects for the next gold discovery on the Lamaque property.  Winners would get a share of the C$1 million prize.
Integra Gold hoped that the contest would expand their access to quality people outside their company enabling their own in-house geological team to focus on other exploration projects.   In total 1,342 entrants from over 83 countries registered to compete in the challenge.  A team from SGS Canada won the prize.

Then Disrupt Mining came along

In 2017, it seems the next step in the innovation process was the creation of Disrupt Mining, sponsored by Goldcorp.  Companies and teams developing new technologies would compete to win a $1 million prize.
In 2017, the co-winning teams were from Cementation Canada (new hoisting technology) and Kore Geosystems (data analytics for decision making).
In 2018, the winning team was from Acoustic Zoom, a new way to undertake seismic surveys.

The 2019 winners will be announced at PDAC.  The entry deadline has passed so you’re out of luck for this year.

Conclusion

At PDAC there are always a lot of things to do: networking, visiting booths, presentations, trade shows, gala dinners, and hospitality suites.
Now Disrupt Mining brings another event for your PDAC agenda.
Note: You can sign up for the KJK mailing list to get notified when new blogs are posted.

Measured vs. Indicated Resources – Do We Treat Them the Same?

One of the first things we normally look at when examining a resource estimate is how much of the resource is classified as Measured or Indicated (“M+I”) compared to the Inferred tonnage.  It is important to understand the uncertainty in the estimate and how much the Inferred proportion contributes.   Having said that, I think we tend to focus less on the split between the Measured and Indicated tonnages.

Inferred resources have a role

We are all aware of the regulatory limitations imposed by Inferred resources in mining studies.  They are speculative in nature and hence cannot be used in the economic models for pre-feasibility and feasibility studies. However Inferred resources can be used for production planning in a Preliminary Economic Assessment (“PEA”).
Inferred resources are so speculative that one cannot legally add them to the Measured and Indicated tonnages in a resource statement (although that is what everyone does).   I don’t really understand the concern with a mineral resource statement if it includes a row that adds M+I tonnage with Inferred tonnes, as long as everything is transparent.
When a PEA mining schedule is developed, the three resource classifications can be combined into a single tonnage value.  However in the resource statement the M+I+I cannot be totaled.  A bit contradictory.

Are Measured resources important?

It appears to me that companies are more interested in what resource tonnage meets the M+I threshold but are not as concerned about the tonnage split between Measured and Indicated.  It seems that M+I are largely being viewed the same.  Since both Measured and Indicated resources can be used in a feasibility economic analysis, does it matter if the tonnage is 100% Measured (Proven) or 100% Indicated (Probable)?
The NI 43-101 and CIM guidelines provide definitions for Measured and Indicated resources but do not specify any different treatment like they do for the Inferred resources.
(Figure: Relationship between Mineral Reserves and Mineral Resources, from the CIM Definition Standards.)

Payback Period and Measured Resource

In my past experience with feasibility studies, some people applied a rule-of-thumb that the majority of the tonnage mined during the payback period must consist of Measured resource (i.e. Proven reserve).
The goal was to reduce project risk by ensuring the production tonnage providing the capital recovery is based on the resource with the highest certainty.
Generally I do not see this requirement used often, although I am not aware of what everyone is doing in every study.   I realize there is a cost, and possibly a significant cost, to convert Indicated resource to Measured so there may be some hesitation in this approach. Hence it seems to be simpler for everyone to view the Measured and Indicated tonnages the same way.

Conclusion

NI 43-101 specifies how the Inferred resource can and cannot be utilized.  Is it a matter of time before the regulators start specifying how Measured and Indicated resources must be used?  There is some potential merit to this idea, however adding more regulation (and cost) to an already burdened industry would not be helpful.
Perhaps in the interest of transparency, feasibility studies should add two new rows to the bottom of the production schedule. These rows would show how the annual processing tonnages are split between Proven and Probable reserves. This enables one to get a sense of the resource risk in the early years of the project.  Given the mining software available today, it isn’t hard to provide this additional detail.
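As a sketch of what those two extra rows might look like, here is a trivial example with an invented schedule; the Proven share in the final column is the risk signal I have in mind.

```python
# Sketch of the suggested transparency rows (schedule numbers invented):
# annual mill feed split into Proven and Probable, plus the Proven share.

schedule = {  # year: (proven_kt, probable_kt) -- hypothetical
    1: (3_400, 300), 2: (3_000, 700), 3: (2_200, 1_500),
    4: (1_100, 2_600), 5: (400, 3_300),
}
print("Year  Proven kt  Probable kt  Proven %")
for yr, (pv, pb) in schedule.items():
    print(f"{yr:>4}  {pv:>9,}  {pb:>11,}  {pv / (pv + pb):>7.0%}")
```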
Note: If you would like to get notified when new blogs are posted, then sign up on the KJK mailing list on the website.  Otherwise I post notices on LinkedIn, so follow me at: https://www.linkedin.com/in/kenkuchling/.