
6.5 Creedmoor Ammo Test Part 5: Live-Fire Group Sizes & Precision

Welcome to Part 5 of the results from my massive 6.5 Creedmoor ammo field test!

This article will cover the results from over 150 five-shot groups that I fired. There were 760 shots for record, which I carefully collected over several tedious days at the range.

When it comes to long-range precision rifles, many of us can fixate on trying to coax tiny groups out of our rifles. After all, if you can’t hit a 1” target at 100 yards – your odds of hitting a target at 1,000 yards aren’t great. But, what is factory ammo truly capable of? We may all know die-hard reloaders who claim, “you just can’t get the precision you need from factory ammo.” Factory ammo has improved dramatically in the past 10 years, so is that still true? What can you expect from today’s factory ammo that is marketed as “match” worthy? This article will provide objective insight into those questions.

If you’d like to see an objective comparison of how much group size and precision matter at long range, how much it’d take to miss, or how it impacts your hit probability, I’d encourage you to check out this article: How Much Does Group Size Matter?

Quantifying Precision & Group Size

When most shooters discuss precision, we virtually always talk about group size. More specifically, we talk about the extreme spread (ES) of either 3-shot or 5-shot groups. But, just because that is the most common way to quantify precision doesn’t mean it is the most effective way.

Trying to quantify the dispersion of a group with a single number is more complex than it sounds. A group of bullet holes on a target represents a rich and complex data set, yet we want to boil all of that down to one number that conveys “precision.” Having a single number as a summary is powerful because it makes it easier to make comparisons. However, any time we simplify a bunch of data points into a single number, that implies some loss of detail or nuance, which can sometimes lead to misleading conclusions. (Learn more here)

For example, the diagram below shows two groups that have the same exact extreme spread but very different dispersion:

The Problem with Extreme Spread For Rifle Shooting

Here is what expert ballistician Bryan Litz has to say about extreme spread:

“When you look at the extreme spread of a 5-shot group, that measurement is determined by only 2 out of the 5 shots. In other words, only 40% of the shots are considered in the measurement. Even worse, for a 10-shot group, a center-to-center measurement is only using information from 20% of the total shots. Since the extreme spread, center-to-center measurement, is determined by only a small portion of the total shots available, it’s just sort of an indicator of precision.”

Extreme spread ignores a large portion of the shots we fired. I paid over $1 per shot for every round fired in this test, so I wanted to make sure we got the maximum benefit from every one of them! I’d imagine you might want to get the full benefit from every round you fire, so this is relevant for all of us – not just us research nerds. 😉

So what is a better method for quantifying precision? There are a few to choose from (and we could really nerd out here), but the one I’ll focus on is called mean radius. Mean radius is the average distance from each bullet hole to the center of the group. That means if you fired 10 shots, the resulting average is based on 10 measurements (i.e., the distance of each shot to the center of the group) – instead of just the one measurement between the two furthest shots, as with ES. We effectively get more data points for free! That is a big deal because it helps us more confidently characterize and quantify the precision of a weapon system.
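If you like to see the math, here is a minimal Python sketch of how both statistics fall out of the same set of bullet-hole coordinates. The coordinates are made-up example data, and this is just an illustration – for the actual test I used dedicated software, as described later:

```python
import math

# (x, y) bullet-hole coordinates in inches, measured from the point of aim.
# These five shots are made-up example data, not from the test.
shots = [(0.10, 0.25), (-0.20, 0.05), (0.15, -0.10), (-0.05, -0.20), (0.00, 0.30)]

# Group center: the average x and average y of all shots.
cx = sum(x for x, _ in shots) / len(shots)
cy = sum(y for _, y in shots) / len(shots)

# Mean radius: average distance from each shot to the group center.
# Every shot contributes to this number.
mean_radius = sum(math.dist((x, y), (cx, cy)) for x, y in shots) / len(shots)

# Extreme spread: the largest center-to-center distance between any two
# shots. Only the two widest shots determine this number.
extreme_spread = max(math.dist(a, b) for a in shots for b in shots)

print(f"Group center:   ({cx:.3f}, {cy:.3f})")
print(f"Mean radius:    {mean_radius:.3f} in")
print(f"Extreme spread: {extreme_spread:.3f} in")
```

Notice that the extreme spread line only ever “sees” the two widest shots, while the mean radius line touches every hole in the group.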

In December, I wrote an article that dives into the best way to quantify precision and group dispersion from a statistical perspective – but I intentionally wrote it for those who aren’t math nerds. I spent a TON of time on that article because I want all my readers to get the most benefit from the results I’m sharing in this post. I’d highly recommend you read that article if you are one of the millions of shooters who have only ever used ES as their measurement of precision, or if this is the first time you’ve heard “mean radius.”

Precision & Group Size

Here is a summary of some highlights from that article:

  • Extreme Spread (ES) is not a very good measure of dispersion. “Range statistics” like ES are statistically far weaker because they virtually ignore inner data points. They are the least efficient statistics but are also the most commonly used because they are so easy to measure in the field and so familiar to shooters.
  • Mean radius, also known as average to center, is the average distance from each shot to the center of the group.
  • Mean radius uses information from every shot in a group, not just the two most extreme points. Because of this, mean radius can provide a higher confidence measure of precision than ES. However, mean radius is harder to measure than ES.

Mean radius allows us to accurately resolve smaller differences in precision with fewer shots. If you are comparing types of ammo or different rifles to decide which is superior in terms of precision, comparing the mean radius of the groups fired will lead to more reliable conclusions than comparing ES.

If you are not convinced that mean radius is a more reliable statistic than ES for comparisons like this, read the comprehensive article on the topic here: Statistics for Shooters: Precision & Group Size.

So What Is A “Good” Mean Radius?

One benefit of ES is that most shooters are familiar with the numbers that kind of measurement produces. If I told you I have a rifle that holds 0.3 MOA or 1.0 MOA, you’d have a good idea of the kind of precision I’m talking about. But mean radius may be new to many of us, so here is the question: What would the mean radius measure for a “good” group?

During this test, I measured the ES and the mean radius for over 150 five-shot groups fired. After analyzing that massive sample size, I found that the ES averaged 2.75 times the mean radius. It did vary some, but the measured ES for 80% of the ammo tested fell between 2.6 and 2.9 times the measured mean radius. So roughly speaking, if you measured the ES of a group to be 0.5 MOA, the mean radius might be around 0.18 MOA (0.5 ÷ 2.75 = 0.18).

Now, you can’t convert exactly from ES to mean radius simply by multiplying or dividing by 2.75. ES and mean radius are based on different aspects of a group’s dispersion, so the ratio varies from group to group. But, since we’re all used to talking in ES, here is a “translation table” to help us become more familiar with what the rough equivalent for mean radius might be compared to a 5-shot extreme spread (ES):

5-Shot Extreme Spread (ES, MOA) | Mean Radius (MOA)
0.1 | 0.036
0.2 | 0.073
0.3 | 0.109
0.4 | 0.145
0.5 | 0.182
0.6 | 0.218
0.7 | 0.255
0.8 | 0.291
0.9 | 0.327
1.0 | 0.364

Note: I said “5-shot ES” in the table above, but remember that ES will always grow with the number of shots fired. Litz reminds us, “Mean radius will also grow with increasing number of shots, but not as much as center-to-center [i.e. ES].” So this translation table likely wouldn’t be accurate if you were comparing a 3-shot or 10-shot ES.
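If you want to run your own 5-shot ES numbers through the same rough translation, the table above is just a division by the empirical 2.75 ratio. Here is a tiny sketch that reproduces it. Remember the ratio varied from roughly 2.6 to 2.9 in my data, so treat the output as a ballpark, not an exact conversion:

```python
# Rough translation from a 5-shot ES to an approximate mean radius,
# using the empirical ~2.75 ratio observed in this test. The true ratio
# varied (roughly 2.6 to 2.9), so this is a ballpark estimate only.
ES_TO_MR_RATIO = 2.75

def approx_mean_radius(es_moa: float) -> float:
    return es_moa / ES_TO_MR_RATIO

for i in range(1, 11):
    es = 0.1 * i
    print(f"5-shot ES {es:.1f} MOA  ->  mean radius ~{approx_mean_radius(es):.3f} MOA")
```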

How I Gathered The Group Size Data

Where did the ammo come from?

I tested 40 rounds of each type of ammo and bought it all out-of-pocket from online retailers (well over $1,000 total). While I could have reached out to companies like Hornady, Berger, PRIME, Federal, and others, and they would have happily sent me discounted or even free ammo for this test – I decided to buy it all at full retail price to ensure none of it was cherry-picked or loaded “special” for this test. I made sure no manufacturers knew I was conducting it; it was all just random ammo off a shelf somewhere. I ordered one box of each in December 2019 from popular online distributors, then waited 6 months and ordered another box of each from a completely different list of distributors. (PRIME and Copper Creek only sell direct to customers, so I had to order those from the manufacturers.) Getting ammo from different retailers and waiting 6 months between orders should ensure the boxes came from two different/random lots, so the results are more representative of what you might experience.

What rifles did I use?

When it came to the rifle and firing process, I used two different rifles. I do personally own a couple of high-end 6.5 Creedmoor rifles, but I know most of my readers aren’t using an $8,000 custom rifle setup. Also, a particular kind of ammo might perform better out of one rifle than another because of differences in the chamber, barrel, and other mechanical nuances. Since more people use factory rifles than custom rifles, I decided to buy a stock Ruger Precision Rifle (RPR) to use in this test. Again, I bet if I had reached out to Ruger, they would have gladly loaned me a rifle for this test, but I decided to simply buy a brand new one from GunBroker.com, just like my readers would, to try to ensure it was representative and not a rifle that potentially had been cherry-picked off the line.

6.5 Creedmoor Ammo Review Test Rifles

The custom Surgeon rifle featured a 22-inch barrel, and I decided to shoot the test without the suppressor attached (i.e. bare muzzle). I didn’t want to risk it affecting the results if it somehow came loose or the heat caused mirage issues that skewed the group sizes. The Ruger Precision Rifle came with its stock 24-inch barrel, and I didn’t change a thing about the whole rifle. Both barrels had a 1:8 twist rate.

Altogether, I invested a few thousand dollars and 100+ hours of time into this 6.5 Creedmoor ammo research project. I hope all of this shows how serious I am about objective testing. I am wholeheartedly in search of the real, unbiased truth to help my fellow shooters!

I realize this research could potentially impact sales for some of these ammo manufacturers, so I tried to go above and beyond to ensure the data I collected was trustworthy. I’m certainly not “out to get” any manufacturers or to promote any of them either. None of them sponsor the website or have ever given me anything for free or even discounted. There aren’t hidden relationships or agendas here. I’m a 100% independent shooter who is simply in search of the truth to help my readers.

So, as always, I tried to identify any significant factors that could skew the results and then created testing methods that attempted to eliminate those or minimize them to an acceptable level. I’ll try to summarize how I fired the groups and collected the data:

  • Fired 40 rounds of each type of ammo from two different lots to have a good sample size. That was two boxes of 20 rounds each from different ammo lots purchased six months apart from different retailers.
  • Fired 5-Shot Groups: To avoid “bugholes,” I fired 5-shot groups exclusively, so I could confidently differentiate the exact location of each bullet hole. I also thought 5-shot groups would be more comparable to what most shooters do.
  • Target at 100 yards: I know many long-range shooters fire groups at distance, but when you are trying to quantify true mechanical precision, firing at longer distances can allow environmental conditions, like the wind, to skew your groups. Firing groups at long range does let you see the cumulative effect of both group size and muzzle velocity consistency, but in this test I chose to analyze those separately (view velocity results). I will put it all together for a cumulative view in the next post, which shows overall hit probability at various long-range distances based on all the data collected in this test. I’ll use the same kind of software analysis the military performs to calculate hit probability before launching a $1 million missile.
  • Only fired groups in calm conditions: I live in west Texas where the wind always blows, but I would only fire groups for record in extremely calm conditions. This mostly meant I had some very early mornings. I hung streamers on the target, and if the wind even started to blow mildly I packed everything up and was done for the day. That was a big part of why it took so long to gather the data, but I didn’t want to risk environmentals skewing my results.
  • Used professional optics and adjusted parallax appropriately: I used a Schmidt & Bender PMII 5-25×56 DT on the custom rifle and a Leica PRS 5-30x56i on the RPR. I carefully adjusted the parallax during setup each time and checked it periodically through the tests. Many shooters don’t realize how much parallax can impact group size at short range, but I do. I was careful to not allow it to skew the results.
  • Fired all groups from a rock-solid position on a concrete bench using a Phoenix bipod and Edgewood rear bag. The setup, my body position, and the target location were identical for every round fired.
Phoenix Bipod & Edgewood Rear Bag

Another factor I did my best to think through and mitigate was the barrel condition. Here are some of the things I did to try to avoid potential issues related to that:

  • First, I “broke in” the new barrel on the RPR by firing 150 rounds down it. In my experience, the muzzle velocity has always stabilized by that point (with mid-size cartridges like the 6.5CM), and that was true for this Ruger Precision Rifle as well. The custom 6.5 Creedmoor had just over 2,600 rounds on it when the testing started (yes, I document every round). It was outfitted with a custom Bartlein barrel with a StraightJacket and was chambered by Surgeon Rifles. If you aren’t familiar with the StraightJacket, you should read my massive barrel test research published in Modern Advancements for Long Range Shooting, Vol 2 by Bryan Litz. (If you enjoy reading about tests like this one, I guarantee you will LOVE that book. I find myself referencing it more than any other book – and I’ve read virtually all of them.)
  • I ran a cleaning regimen during all testing: I would start with a clean barrel and fire 4 shots that weren’t for record to foul the barrel and ensure all the cleaning solvent was out. Then I would fire no more than 30 shots for record before I cleaned the barrel and repeated the process. That was always 10 rounds from each of 3 different types of ammo. The cleaning process and method were consistent throughout testing. I randomized the order in which I shot the first lot of each ammo, then re-randomized the order for the second lot so the sequence was different both times through.
  • The barrels were always allowed to completely cool to ambient temperature between each 10-shot string for record.

While that is a lot of detail, my goal is to be transparent about how I gathered the data.

Custom Target from PRB

I used custom targets that I designed a few years ago, which helped make it very obvious if the reticle wasn’t perfectly centered on the bullseye when I broke a shot. The custom target also has a printed scale on it, making it easy to accurately calibrate a scanned image or photo of a target for software analysis. The targets fit on standard letter-size paper, which makes them easy to print and scan. (You can download my custom target here: PDF | Full-Res Image. Be sure to double-check the scale on your printed targets against a ruler to ensure the program or printer you used didn’t skew the size of the image.) I used OnTarget to analyze the targets and calculate the group stats.

Called Fliers & Outliers

Of the 760 shots fired for record, I only had 3 called fliers. 3 out of 760 isn’t bad! But, remember I was on a concrete bench using a Phoenix bipod and Edgewood rear bag, so I was about as stable as you could get with a human still pulling the trigger. However, if a human is involved, you can’t expect perfection over such a large sample – so I admit I had a few called fliers.

What do I mean by a called flier? That means right as I broke the shot, I noticed my crosshairs weren’t perfectly centered on the bullseye. I would “call” where the crosshairs were before I looked downrange to see where the bullet landed in the group. For example, let’s say that as I broke the shot, I noticed my crosshairs slipped to the left edge of the yellow box, meaning the bullet impacted 1/4” further to the left than it would have if I had broken the shot precisely on the bullseye. The 3 times I had a called flier, I made a note of the exact aiming error, and then when I did the analysis on that group, I’d shift the related bullet hole so the stats were correct. None of those adjustments were more than 1/4 inch, and they were on 3 different types of ammo, so they couldn’t have meaningfully skewed the results.
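For anyone curious what that correction looks like on paper, here is a minimal sketch with hypothetical numbers (not my actual data):

```python
# Correcting a called flier: if the crosshairs were off the aiming point
# when the shot broke, shift the recorded impact by that aiming error so
# the group stats reflect the ammo/rifle, not the shooter.
# All numbers below are hypothetical, for illustration only.

impact = (0.45, -0.10)     # where the bullet actually landed (inches)
aim_error = (-0.25, 0.00)  # called error: crosshairs 1/4" left of the bull

# Subtract the aiming error to get where the bullet would have landed
# had the shot broken exactly on the bullseye.
corrected = (impact[0] - aim_error[0], impact[1] - aim_error[1])

print(corrected)  # (0.70, -0.10) - the hole moves 1/4" back to the right
```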

I didn’t exclude any “outliers.” I know some shooters throw out shots that land outside of their group as “fliers” or “outliers” – but that simply isn’t good science. Rigorous research always collects and reports on all data, which is what I will do here.

“To exclude a shot from a group just because it appears to be a ‘flyer’ is bad measurement technique and would lead one to underestimate long-range dispersion.” – A User’s Guide to Dispersion Analysis by Bruce Winker

The Results

Alright! Let’s dive into the results. All results are shown in MOA, not inches – although those are similar at 100 yards. Remember, the data below is based on over 150 five-shot groups and 760 individual data points!

6.5 Creedmoor Ammo Comparison - Mean Radius

Top 3

Berger Match 130 Hybrid Factory Ammo

The Berger Match 6.5 Creedmoor 130 gr. Hybrid Tactical OTM ammo was the winner by a reasonably clear margin. The Berger 130 Hybrid was 9% better than the next best ammo in terms of precision. With an average mean radius of 0.208 MOA, it was 23% better than the overall average of 0.270 MOA for all 19 types of ammo tested.

#2 and #3 were a virtual tie between the Barnes 6.5 Creedmoor Precision Match 140 gr. OTM ammo and the Federal Premium Gold Medal 6.5 Creedmoor 130 gr. Berger Hybrid ammo. Both had an average mean radius of 0.228 MOA, which was 16% better than average, and 5% better than the next best performer on the list.

Middle Pack

12 different types of ammo landed in what I’ll call the “middle pack,” which ranged from 0.239 to 0.261 MOA. That is over 60% of the ammo tested! There is less than a 10% difference across that narrow window, which is likely “in the noise” for this test. What I mean by “in the noise” is that the numbers are so close that I would expect the exact order of that middle pack to vary based on the boxes of ammo that happened to be tested or other factors that couldn’t be controlled perfectly in this test. If we ran this test 3 or 4 more times, the order of those in the middle pack would likely shift around. So I’d say all of these had similar precision performance, at least on average over both rifles.

The next article in this series will be the conclusion. In it, we will put this group data together with all the other performance factors, like muzzle velocity and BC, to determine the overall hit probability at long range, and that will help us better differentiate between the ammo that landed in this middle pack.

Bottom 4

The most apparent pattern in this data might be that most ammo tested had relatively good performance when it came to precision – except for the 4 types of ammo at the bottom of the pack. While there may not be a significant difference within the top 3 or the “middle pack,” the four types of ammo at the bottom of the list performed measurably worse than the rest.

Worst 6.5 Creedmoor Ammo

Unfortunately, the groups for these 4 types of ammo were poor – so poor that it might be a stretch to consider them “match” grade ammo:

  • Black Hills 6.5 Creedmoor 147 gr. ELD-M
  • Copper Creek 6.5 Creedmoor Ammo with the Berger 144 gr. LR Hybrid
  • Nosler Match 6.5 Creedmoor 140 gr. Custom Competition
  • Nosler Match 6.5 Creedmoor 140 gr. RDF
Black Hills 6.5 Creedmoor Hornady 147 gr. ELD-M

Black Hills 6.5 Creedmoor ammo being on that list of poor performers may surprise people because many see it as a top ammo company. It actually surprised me, too, but I’m just trying to report on the data, and the fact is it didn’t group well out of either of the rifles I tested it in.

I tested two types of Copper Creek ammo – 40 rounds loaded with the Berger 140 gr. Hybrid bullet and another 40 loaded with the newer Berger 144 gr. LR Hybrid bullet. Only one of those ended up in the bottom 4 (the 144), although the other was also in the bottom half. Copper Creek says they make “handmade custom precision rifle ammunition,” so I would have hoped for better performance than what these results show. While I might be tempted to chalk this up to a couple of bad boxes, remember that this result is an average of 8 five-shot groups from 2 different boxes of ammo that were bought 6 months apart. So it seems likely that this is representative of what you can expect from their loading process.

Copper Creek 6.5 Creedmoor Ammo
Nosler 6.5 Creedmoor Ammo

Both types of Nosler 6.5 Creedmoor ammo that I tested ended up on the very bottom of the list. I’m sure Nosler won’t like to see that, but I’m confident in the data. The truth is, I was shocked by how poor the performance was. Nosler has a fairly good reputation in the shooting community, but the precision of the ammo I tested could only be given a grade of “F.”

Average ES Results

I was hesitant to publish the extreme spread (ES) data because I sincerely believe that mean radius is a more accurate way to analyze precision – but I know many of you would like to see the extreme spread, too. So I’ll share it all.

Remember, ES is based on only the two most extreme shots of each 5-shot group; it completely ignores the other 3 shots, which is 60% of the shots fired. Because of that, if one or two of the bullets fell unusually far from the center, it has a disproportionately pronounced effect on ES.

Precision Rifle Ammo Extreme Spread ES

3 of the top 4 on this list were also on top when we looked at the mean radius. One surprise is that the Hornady Match 120 gr. ELD-M edged those out and ended up with the smallest average for extreme spread, although it wasn’t by a significant margin. Honestly, I see the mean radius data as more reliable, so I won’t even guess what caused that.

The bottom 4 from the mean radius results were still in the bottom 4 here. Both types of Nosler ammo that I tested ended up with the worst extreme spreads, and they were the only two of the 19 types of ammo tested that didn’t average sub-MOA groups. Clearly, both of the test rifles were capable of sub-MOA groups with all the other ammo, but this Nosler ammo just doesn’t appear to be capable of precision.

Note: A reader asked my opinion on why the Nosler ammo didn’t perform better, and you can read my reply to that in the comments here.

Precision Differences Between Rifles

You might wonder if there was a significant difference in precision between the two rifles: the custom Surgeon rifle and the 100% stock Ruger Precision Rifle. I looked into that myself, and the data is VERY interesting!

The chart below shows what the overall mean radius was for each rifle and ammo type. The individual rifle data is based on 4 five-shot groups each, and the overall average is all 8 five-shot groups:

6.5 Creedmoor Group Size

Over all the 150+ groups fired, the custom rifle only had a 2% better mean radius than the 100% stock Ruger Precision Rifle! What?! If you aren’t shocked by that, consider that the custom Surgeon Scalpel rifle has a price tag that is around $5,000, compared to the $1,300 I bought the Ruger Precision for off GunBroker.com. That shows the kind of impressive performance you can now get from factory rifles! Ruger, I tip my cap to you! The hard data I collected over 760 rounds shows the RPR delivers a ton of precision for the price.

Here are the overall stats for each rifle based on all the groups from the 19 different types of ammo tested:

Rifle | Avg. Mean Radius (MOA) | Avg. ES (MOA)
Custom Surgeon Rifle | 0.267 | 0.737
100% Stock Ruger Precision Rifle | 0.273 | 0.741
6.5 Creedmoor Ammo Review Test Rifles

Why does the ammo perform better in some rifles than others?

Did you notice that some ammo grouped better out of the custom rifle, and some grouped better out of the RPR? The mean radius for many types of ammo varied by less than 20% between the rifles, but there were a few types of ammo where the groups varied by over 30% between the two rifles tested. In every case where the precision varied by 30% or more between the two rifles, the custom rifle was the better performer. However, there were 3 types of ammo where the mean radius for the RPR was 24-27% better than the custom rifle, and those were:

  • Winchester Match 6.5 Creedmoor Sierra 140gr MatchKing (27% smaller mean radius in the RPR)
  • Nosler Match Grade 6.5 Creedmoor 140gr Custom Competition HPBT (25% smaller mean radius in the RPR)
  • Sig Sauer Elite Performance Match 6.5 Creedmoor 140gr OTM (24% smaller mean radius in the RPR)

To be clear, the custom rifle did have better precision overall – but the few types of ammo above seemed to perform better in the factory RPR by a considerable margin.

The reason one type of ammo performs better in one rifle than another could stem from a multitude of factors. I had a conversation with an experienced precision rifle gunsmith about this research project, and he asked me specifically if I found that some types of ammo preferred one rifle or the other. I shared some of these findings, and he suspected it might be related to the throat or chamber dimensions of a particular rifle, or perhaps even the exact diameter of the grooves and lands of the barrels. I won’t even guess what caused it, but it was an interesting nuance I noticed in the data. I believe most serious research creates more questions than it answers, and it’d likely take another massive research project to get any definitive insight into what might have caused this.

Honestly, this is a big reason why I don’t see the goal of this research as crowning one type of ammo the definitive “best.” In fact, this type of rifle-to-rifle variance is why I tested all of this out of two different rifles. The “best” for one rifle may not be the “best” for another rifle. But, as I said right from the start, I realize that most shooters can’t spend over $1,000 to test all this ammo in their rifle. So my goal is to help them narrow down their search. Hopefully, you’ll be able to find 3 or 4 types of ammo that showed good performance in my research and try them in your rifle. It seems very likely that at least one of those will perform well in your rifle too.

Coming Up Next: Putting It All Together For Overall Performance!

While I always like to present the detailed results to my readers, I also know it is easy for us to put too much emphasis on one aspect or the other. What if one type of ammo had tiny groups, but the muzzle velocity wasn’t as consistent as another brand? What if one of the brands didn’t do great in groups or consistent MV, but the bullet has a ridiculously high ballistic coefficient (BC: essentially a measure of how aerodynamic a bullet is, or how easily it cuts through the air) and an extremely high velocity – does that make up for it? Which gives you the best overall performance for long-range shots?

Well, there is a great analysis tool designed to help us answer those kinds of questions! I fed it all the measurements from my experiments, like group size, average muzzle velocity, the standard deviation in MV, BC of the bullet being fired, and a bunch of other ballistic data. Then I specified distances and target sizes and used that software to calculate the overall hit probability for each type of ammunition. If you want to take an objective, data-driven approach to decide which ammo provides the best performance, this is the ultimate solution! That next post will help us take all of those variables into consideration and see how each type of ammo ranks in long-range hit probability.
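I’ll share the real numbers in that post, but to give a feel for how those inputs combine, here is a toy Monte Carlo sketch of the basic idea. Every value in it is an illustrative assumption – especially the drop-sensitivity number, which real ballistic software computes from BC, muzzle velocity, atmosphere, and distance rather than taking as a constant:

```python
import math
import random

random.seed(42)

# --- Illustrative inputs (assumptions, not measured results) ---
mean_radius_moa = 0.25   # group dispersion, mean radius in MOA
mv_sd_fps = 10.0         # muzzle velocity standard deviation, fps
drop_per_fps_in = 0.10   # hypothetical: inches of vertical error per 1 fps
                         # of MV deviation at this range (a real ballistic
                         # solver derives this from BC, MV, distance, etc.)
distance_yd = 800
target_size_in = 20.0    # square target, 20" x 20"

# For a circular bivariate normal group, mean radius = sigma * sqrt(pi/2),
# so sigma = mean_radius / 1.2533.
moa_to_in = 1.047 * distance_yd / 100  # 1 MOA is ~1.047" per 100 yd
sigma_in = (mean_radius_moa / math.sqrt(math.pi / 2)) * moa_to_in

hits = 0
trials = 100_000
for _ in range(trials):
    # Random dispersion of this shot (horizontal and vertical, inches).
    x = random.gauss(0, sigma_in)
    y = random.gauss(0, sigma_in)
    # Extra vertical error from this shot's muzzle velocity deviation.
    y += random.gauss(0, mv_sd_fps) * drop_per_fps_in
    if abs(x) <= target_size_in / 2 and abs(y) <= target_size_in / 2:
        hits += 1

print(f"Estimated hit probability: {hits / trials:.1%}")
```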

The next article has been published and can be read here: Summary & Long-Range Hit Probability


6.5 Creedmoor Match Ammo Field Test Series

Here is the outline of all the articles in this series covering my 6.5 Creedmoor Match-Grade Ammo Field Test:

  1. Intro & Reader Poll
  2. Round-To-Round Consistency For Physical Measurements
  3. Live-Fire Muzzle Velocity & Consistency Summary
  4. Live-Fire Muzzle Velocity Details By Ammo Type
  5. Live-Fire Group Sizes & Precision
  6. Summary & Long-Range Hit Probability
  7. Best Rifle Ammo for the Money!

Also, if you want to get the most out of this series, I’d HIGHLY recommend that you read what I published right before this research, which was the “Statistics for Shooters” series. I actually wrote that 3-part series so that my readers would better understand a lot of this research that I’m presenting, and get more value from it. Here are those 3 articles:

  1. How To Predict The Future: Fundamentals of statistics for shooters
  2. Quantifying Muzzle Velocity Consistency: Gaining insight to minimize our shot-to-shot variation in velocity
  3. Quantifying Group Dispersion: Making better decisions when it comes to precision and how small our groups are

About Cal

Cal Zant is the shooter/author behind PrecisionRifleBlog.com. Cal is a life-long learner, and loves to help others get into this sport he's so passionate about. Cal has an engineering background, unique data-driven approach, and the ability to present technical information in an unbiased and straight-forward fashion. For more info, check out PrecisionRifleBlog.com/About.


45 comments

  1. Thank you, thank you, thank you! I have been waiting patiently for this for what seems like a very long time. You do the precision rifle sports such an amazing service that I take my hat off to you. There were some surprises there, but a lot of expected results too. Was really surprised by the RPR performance. You probably just created a shortage in those rifles worldwide. I hope you don’t mind me sharing this with the various groups I belong to, but I think it’s just so well done that it needs to be seen. Nosler aye?? Maybe it will give them a bit of a kick to the derrière? Thanks again Cal, always a great read.

    • Thanks, Stephen. Yep, there were a few surprises in there. Honestly, I was pretty shocked by the Nosler data. Before I started any of this, I would have bet the Nosler 140 RDF finished in the top half, but honestly, I didn’t have experience with Nosler’s loaded ammo … so I guess that was just my perception of the brand. That’s why we have to test this stuff!

      The performance of the Ruger Precision Rifle was also staggering. To perform within 2% of one of the most expensive rifles I own is a complete surprise. I really believe there isn’t a better value out there than the Ruger Precision Rifle. When a friend asks me for a rifle recommendation it isn’t always what I say … but I sure find myself saying it a lot. It has all the must-have features, and really good performance out of the box. Honestly, I even like the USMC red color of the one I picked up off GunBroker. I have a lot of cool rifles, but I don’t see myself selling that one. It’ll probably become my standard test rifle for a lot of things, because so many people have a rifle in that class – and it’s such a good shooter!

      Feel free to share this post or any of my articles wherever you’d like. I’m just here to help shooters, so the more people who read it and benefit from it, the better.

      Thanks,
      Cal

  2. Great job telling the truth about ammo makers. I suspect you will get a lot of backlash from ammo makers and people shooting certain types of ammo. Keep up the great work you have done for us everyday shooters in all your present and past testing. Thanks Tony👍

    • Thanks, Tony. I figured I’d get some people that push back or throw stones. That’s part of what motivated me to be thorough and paranoid in how I conducted all the tests, collected and analyzed the data, and then presented it all in exhaustive detail. I’ve done a few field tests like this, which were head-to-head comparisons … and there are always a few of them that don’t perform as well as the rest. That’s why magazines often don’t do tests like this, because there has to be a “loser” or “lowest ranked” … and you better hope they aren’t an advertiser. I’m fortunate to be 100% independent and can just say it like it is.

      I do try to be OCD in all my testing so that I can have full confidence in what I publish, and I have that confidence in all of this. Honestly, my bet is most manufacturers already knew how this was going to pan out. It doesn’t take a ton of time to test your own product, at least compared to the time investment to test 19 products.

      Thanks for taking the time to share your thoughts.

      Thanks,
      Cal

  3. Best report I’ve ever read. Bar none. Utterly impressed. Well done.

    • Thanks, Troy. Wow! That means a lot.

      The idea for this test had been rolling around in my head for years! It was quite the undertaking, but I’m glad to have done it. Honestly, writing the “Statistics for Shooters” series that came before this was the most difficult thing I’ve ever written. But I felt like people wouldn’t get the full benefit from this test if I didn’t first explain things like “mean radius” and “standard deviation.” Explaining all that stuff in a way that just about anyone could understand was immensely challenging, but I hoped it would help people get more benefit from the investment I made in this particular research project – and get more benefit out of their tests.

      I see this project as being super-helpful for all those people who aren’t reloading, especially all the new people who are trying to get into this. The ammo can make a huge difference in how successful you are in this game. I see a lot of my friends buy a nice rifle, but then buy super-cheap ammo … and it doesn’t make any sense to me. So I wanted to try to put all of that in perspective in an objective, data-driven way. I hope a ton of people read it and it helps them put more rounds on target!

      Thanks,
      Cal

      • You accomplished your goals, my friend. 100%.

        This was my first exposure to ES vs MR and I love it. I’ve never liked how “groups” were calculated. Never made sense to me. “Circular Error Probable”, the military’s measure of ballistic accuracy, made more sense, but for that to work you need a significant number of shots, far more than the typical 3 to 5 shot groups for PR.

        I’ve also never really liked how POA vs POI isn’t really accounted for in our industry. It’s great to shoot a 0.5 MOA group, but if it’s 2″ off target at 200 yards, then … hmmm … can that really be considered “accurate”? And, how far off can the POI be from POA to be “accurate”?

        The idea of MR seems to take BOTH into account; both the POI vs POA issue as well as the group size itself. I’m going to start tracking my shots this way. Well done.

        On a side note, I shoot Hornady’s stock 108 gr ELD out of my 26″ Bergara in 6mm Creedmoor, and over a 200-shot weekend PR course I averaged a 0.66 MOA group size … measured with ES. Next time, I’ll be using the MR method. 🙂

      • Thanks, Troy! I actually just replied to another reader’s comments and I mentioned CEP. You might be interested in reading that here: https://precisionrifleblog.com/2021/10/27/6-5-creedmoor-ammo-test-part-5-live-fire-group-sizes-precision/#comment-74073. There is a lot of technical merit to that measure.

        And I totally agree with you on the POA vs POI conversation. Oftentimes we completely focus on tiny groups (i.e. precision) rather than hitting our mark (i.e. accuracy). People often use accuracy and precision interchangeably, but they are very different. It’s funny how we as people can overly fixate on one aspect over another equally important aspect in the grand scheme of things, but then again – who doesn’t like shooting a tiny group?! There is something intrinsically satisfying about it, isn’t there? One ragged hole is almost like a work of art!

        I’m glad this was helpful and convincing for you. I do think if we all used mean radius more often it’d lead us to better decisions. In fact, you can prove that from a statistical perspective, so it’s not really just my opinion or Bryan Litz’s or all the other ballistic or math experts out there that believe the same thing.

        Hey, and averaging 0.66″ group over that many rounds is good no matter how you measure it. If you have a sample size that large, then ES can be a reliable statistic. Either way, good shooting! Isn’t it crazy what factory ammo from a factory rifle can do these days with a good shooter behind it?!

        Thanks,
        Cal

  4. Oh, it’s really been a longggggg time. How are you, Cal? Almost half a year has passed since the last post, but late is better than never. Hope everything is fine with you.

    thanks for sharing
    respect

    • Ha! Yes, sir. It’s been a while. I’m doing well, but have just been busy. Some of it is related to a massive project that I’ve been working on related to this blog and my new house, but some of it was just life. This article certainly took me a long time to publish. I had a lot of ideas for things I was hoping to do to display all the groups, but eventually I decided those were too time-consuming and I should just publish the data I included here so people could start getting benefit from it. Sometimes my OCD can get me high-centered on projects like this, but I’m glad that it’s finally out. I probably spent 5-6 days over the past couple of months just writing this post before it got to a place I was comfortable publishing it. It’s a complex topic and I know there will be a lot of scrutiny over it, so I just wanted to be careful and thoughtful in how I presented the data. I also committed to write the rest of the series before I published this article, because I didn’t want there to be a long pause between parts if something came up. I got the final part written today, so I hit publish on this one.

      Thanks for your patience! 😉

      Thanks,
      Cal

  5. Cal this was an incredibly helpful article with so much data presented in such an easy to understand format……just what I need! Thank you so much. Can’t wait for the next post.

  6. Humberto Claudino

    Cal Zant
    I’ve checked my email inbox countless times waiting for the notice of a new article, and the wait was worth it because your article is again full of important information.
    Greetings from Brazil.

    • Hey, Humberto! Greetings from Texas! 😉 Glad you found it informational. I promise to not keep you waiting long on the next few!

      Thanks,
      Cal

  7. Outstanding Cal. Just truly outstanding. Your objectivity in such a subjective field, dedication, and very needed “OCD” just truly shows your commitment to your craft.
    Thank you so very much!

  8. Very interesting, many thanks.

    What’s a bit odd is I use Nosler 140 RDFs in my handloads (Peterson Brass & RS 62 powder) and find them great. Any idea why the factory stuff is so bad?

    • Hey, Martin. I remember writing recently (can’t remember if it was an article or in the comments) that I was also surprised by the Nosler 140 RDF, and if I’d have placed a bet before I started this test I would have said it would definitely finish in the top half because that seems to be a pretty incredible bullet. But, good ammo requires more than just a good bullet.

      In response to your question, I went back and looked at Part 2: Physical Round-To-Round Consistency. It looks like the Nosler Match 140 RDF ammo had one of the most inconsistent ammo lengths (in terms of CBTO, cartridge base to ogive measurement). It was 2-3 times worse than most of the ammo tested.

      Standard Deviation in CBTO Length

      Also, I deconstructed a few types of ammo to actually measure the powder charge weight to see how much variation they had, and the Nosler Match 140 RDF happened to be one of the 7 types of ammo I did that for. It showed the most variation in powder charge weight – by a pretty huge margin. It had a standard deviation of 0.28 grains, and we know on a normal distribution that 95% of the rounds would fall within +/- 2 SD of the average. That means the powder variance would run from 0.56 grains below the average to 0.56 grains above the average, which is a total variance of 1.12 grains in that range. Remember, I’m not saying 1.12 kernels of powder – that’s 1.12 GRAINS of powder. I believe a single kernel of H4350 weighs around 0.02 grains, which means that would be a variation of 56 kernels of powder! (I sanity-check that math in the quick snippet below.) Think about that. That is a small pile of powder! I actually bought a Sartorius Entris II BCE64-1S Analytical Balance just to be able to measure those powder charge weights accurately for this test, so I couldn’t be more confident in that data.

      Standard Deviation in Powder Charge Weight
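      If you want to double-check that math yourself, here is the quick sanity check – the only assumption is the ~0.02 gr-per-kernel estimate mentioned above:

```python
# Sanity-checking the powder-charge variation math from above.
sd_grains = 0.28      # measured SD of powder charge weight (grains)
kernel_grains = 0.02  # approx. weight of one kernel of H4350 (my estimate)

# ~95% of charges fall within +/- 2 SD of the average,
# so the total width of that range is 4 SD.
spread_grains = 4 * sd_grains
spread_kernels = spread_grains / kernel_grains

print(f"95% spread: {spread_grains:.2f} gr (~{spread_kernels:.0f} kernels)")
# -> 95% spread: 1.12 gr (~56 kernels)
```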

      I’d suspect there are many other factors that play into this, but I’d expect those two things are part of why the Nosler loaded ammo didn’t perform better than it did. You need much more consistent reloading methods to produce good ammo. The powder charge weight should be more consistent and the bullet seating depth should be more consistent. I’m not sure what equipment or processes Nosler is using to load their ammo, but I’d suspect that it is very different than what most of these companies are using – and it doesn’t seem to be working well.

      So I’m afraid I can’t explain all of it, but I am confident in the results. I’m not sure if Nosler brass is as consistent as others either, after seeing this. In my experience, brass consistency plays a huge role in ammo consistency – maybe even one of the biggest roles. The fact that you are using Peterson brass might also explain the difference in performance. I haven’t personally reloaded with Nosler brass, so I really can’t speak to that – but that is just something else I’m wondering about in the back of my mind.

      Sorry, I couldn’t be more help, but I’d say if it is working for you – don’t fix happy! Keep chugging away at it. I know Scott Satterlee uses Nosler 140 RDF bullets in his handloads, and he is one of the top shooters in the country and one of the guys I respect the most in this field. Obviously, the bullets are capable of good performance, it’s just the loaded ammo doesn’t seem to be.

      Thanks,
      Cal

  9. Nick James Dadamo

    Great article as always, I eagerly await your emails.
    I wonder, do you have any past posts regarding the 6BR or 6.5×284 (necked up to 7mm)?
    Thanks.

    • Thanks, Nick.

      I don’t have a lot of content on the 6BR, but you might find what I do have to be super-helpful. 2 years ago I polled all the top-ranked shooters in the PRS and NRL and asked them about what gear they were running, but I also asked them about their load data. I was a little surprised, but the overwhelming majority of them shared their exact load data with me. As you probably know, there are a ton of guys running a 6BR, 6BRA, or 6BRX, so I grouped all that load data into a single article, and you can find that here: 6BRA, 6BRX & 6BR Load Data – What The Pros Use.

      I can’t recall much that I’ve written on the 6.5×284 or a straight 284, although those are very capable cartridges. Here is what I could think of:

      • NF ELR Calibers & Cartridges – What The Pros Use: This post includes a little on the 6.5×284 and the straight 284 Win, but not a lot.
      • I did use a 6.5×284 when I went through Gunwerks Long Range University a few years ago, and reached out to a mile with it pretty easily … in the rain! I mentioned that in this post.

      Thanks,
      Cal

  10. Holy Cal the golden info.

  11. Cal, really like the way you lay out the ES info. As the ‘King of four shot groups’ I know how one round leaking out can skew perspective if you only look at the data one way. As an RPR owner, I agree with you about the value and accuracy this factory rifle is capable of, and it is a joy to shoot.

    • Thanks, Steve. I know what you’re saying! Sometimes when we have a good group going, it’s easy to stop at 3 or 4 shots and not keep going, because there is something in our mind that tells us this group is “special.” But “special” usually means unlikely to repeat, or not representative. It’s always fun to have a tiny bughole in paper. Who doesn’t love that?! And it’s fun to brag about it to your friends. But, if we really want to know the truth and be able to forecast what might happen in the future, we need to fire larger sample sizes and stop throwing out so-called “fliers.” It takes a humble person to do that, but it’s the way to the truth and real insight.

      And yes, the RPR is very impressive. Ruger is turning out an incredible value, and it is fun to shoot. People might nitpick one aspect or another, but it’s definitely one of the best values on the market and an extremely capable system.

      Thanks,
      Cal

  12. Outstanding!…..Science in action!…..

  13. Holy cow, this is absolutely awesome! I’ve been eagerly awaiting each and every update on this 6.5 survey, you really are doing us a humongous service!

    And what an interesting alternate discovery with the RPR. I’ve been a huge fan of the RPR and the steady improvements that Ruger has made to it (I think the Gen 3s have a smoother action and a nicer trigger, since those were the main complaints with the previous Gens).

    But to see the data say it’s only 2% (!!!) behind a custom precision rifle is mind boggling!

    • Thanks, Erik. There was a lot of time and effort put into this, so I appreciate you saying that.

      The Ruger Precision Rifle performance was a surprise and kind of a bonus to this research. I wouldn’t have guessed that. I knew it shot well, but to be within 2% of a super-expensive, custom rifle is a little nuts. But, that’s what the data said. When you do objective, data-driven research like this there are always a few surprises that you find along the way. That discovery process and how it challenges my perception is a fun part of things like this.

      Thanks,
      Cal

  14. I love the way you test.
    People can always wish for more, though. Given the quality of the test, and having the RPR and the custom (bolt) rifle, I would really have loved to see how the JP LRP007 would have performed vs those in this test.

    As always, thank you, Cal.

    • Thanks, Steve. I almost tested that rifle. I can’t remember if we’ve talked about it, but you probably know I personally own one. It’s a fine weapon for sure! It’s one of those guns that is just a pleasure to shoot. You can feel the quality of it. I think if I pulled a random person off the street and let them shoot that rifle, they’d know it was special – even if they hadn’t ever heard of JP Rifles. I just figured it wasn’t as representative as the Ruger Precision Rifle or a custom bolt-action when it comes to what most people are using. Plus, the budget and time for this test were already really stretching me, so adding a 3rd rifle was a non-starter. But I’m with you. I would have loved to see those results myself! … but I wouldn’t pay another $1,000 in ammo and another 20 hours at the range for them! 😉

      Thanks,
      Cal

  15. Cal, Thank you so much for all your research. I really appreciate your data gathering and analysis.
    I’m a re-loader, shooting a 6.5 Creedmoor RPR. The Factory ammo review helps me pick and choose components and set goals for my own evaluation of performance. I have been using Mean Radius for group evaluation and really see the benefits of this calculation. Thanks for all of the education.

    • That’s awesome, Scott! It sounds like you are exactly the kind of guy I was trying to help with this research. Thanks for letting me know it hit the spot!

      Thanks,
      Cal

  16. Long time fan.. first comment.

    Thank You for your dogged research.

    You might not hear it enough, but your articles and data have certainly had a positive impact on my shooting knowledge and “wisdom”.
    This article is a great example. The attention to detail, combined with very thorough, illuminating explanations, is very much appreciated.
    Again… Thank You from all of us.

    (PS … How can we donate / help fund your efforts? No strings attached, just money well spent on a “teacher”!)

    • You bet! I appreciate that.

      You can make a donation using the link below. I appreciate that, too. I’m estimating that I spent around $4,000 out-of-pocket on this test, and a big part of that was because I wanted it to be completely independent and based on ammo and rifles that came right off the shelf somewhere (instead of reaching out to manufacturers to send me product). I want my readers to be able to trust my results, and honestly, I want to have confidence that the results are as representative as practically possible. Any help would be appreciated.

      Donate
      To show your appreciation and help PRB cover its costs.

      Thanks,
      Cal

  17. Cal, outstanding work! Did you notice any quantifiable trends related to the order or sequence of the rounds fired within a group, or across the board in general? For example, was the last shot in a five-round group displaced farther from the mean than, say, the first shot? If so, were there any measured observables, like an attributable velocity outlier, associated with that observation or trend?
    Thanks again, I truly appreciate your data driven approach and scientific methodology.
    Randy

    • That’s a great question, Randy, but I didn’t actually analyze the data that way. Based on my experience and what I can recall, I’d be surprised if there was a pattern that showed the 5th shot fell further from the center than the prior shots. But, I don’t want to say that definitively, because as this research showed – sometimes the data can surprise you! Unfortunately, I didn’t record the order of the shots on the targets, so I’m afraid that I can’t go back and analyze that now. I do hope to invest in an electronic target at some point and then I might be able to do more thorough research into that area. That would certainly be interesting to do some research around, so I appreciate you asking about it.

      Thanks,
      Cal

  18. Cal,
    As usual, you have formulated an excellent report on precision comparisons. I got very interested in precision using mean radius in the ’80s while I was active in High Power Rifle competition. I forget where I picked up the idea (probably from Handloader magazine) that mean radius could give you more reliable information than extreme spread. I also picked up on the concept of Radial Standard Deviation as a measure of the consistency of the rifle and shooter… Using your methodology, I can see that using a steady rest such as yours, and eliminating some of the variables such as wind, a person could use the Radial Standard Deviation (RSD) throughout the life of the rifle to identify problems that might slip up on someone who only uses ES for measuring precision. You’ve probably already addressed this concept, so I may have missed it. Thanks for your great work and reports.

    Jim Crownover

    • Jim, that is a great point! Honestly, I’m impressed with your knowledge. I haven’t talked about Radial Standard Deviation much, although I might have mentioned it in the “Statistics for Shooters” article that was focused on quantifying precision and groups. Standard deviation is a great metric to describe how spread out data is from the average, so it obviously could appropriately be applied to group dispersion. I did choose to exclusively focus on mean radius in this article, but there are several other statistics that provide insight into dispersion, and Radial Standard Deviation is one of those. Circular Error Probable (CEP) is another valuable metric used to quantify dispersion, and it’s basically the size of a circle in which 50% of the shots will land. I think of it as a median radius that divides the shots in half, with half inside of it and half outside of it. In my “Statistics For Shooters Part 1” article I talked a lot about “What is the middle?” Is average or median a better stat for a particular application? I know that can turn into an academic argument, but I might lean towards the median being more representative if we’re trying to predict what will happen in the future, so CEP might be slightly better than mean radius. Median just isn’t as skewed by outliers, which is the main reason. But, Bryan Litz even says in Modern Advancements for Long-Range Shooting Vol II: “CEP and mean radius are very similar, within 5% or 6%, but are technically not the same thing.” So I am obviously splitting hairs, and in the real world, either is a valid way to analyze group data and would likely lead to the same conclusions. If they didn’t lead to the same conclusion, you’d likely have to fire a ridiculous number of shots to walk away with any conclusive evidence that the difference between them was real and not “in the noise” of the error you could/should expect in the experiment.

      Ultimately, there are several stats that are DRAMATICALLY more effective than extreme spread, which is unfortunately what 99% of shooters are using. I hope these articles help some shooters (at least the ones who are serious about precision) become educated about the difference. I thought mean radius was certainly easier to understand in terms of what the number physically represents and how it’s calculated, compared to CEP or RSD. Average is something that almost everyone understands intuitively. And as Litz says in that book, “You could get really carried away with statistical methods of characterizing precision.” I had to really restrain myself as I wrote this post to not completely nerd-out on all the different statistics to quantify precision. I figured if I did that, fewer people would read it or understand it … and therefore fewer people would get value from it.

      Hey, I do enjoy having a conversation with someone who understands and values some of those other statistical methods to quantify this kind of stuff. It seems like the deeper you go down the rabbit hole on some of this stuff, the fewer people you can have deep conversations with. So I appreciate your thoughtful and educated comments more than you know!

      Thanks,
      Cal

  19. Outstanding article. I have been following since the massive scope test, and these massive tests are so dense I find myself going back to reread them multiple times to try to absorb it all. Thanks for all the hard work.

    • Thanks, Brian. Believe it or not, I actually find myself going back to them multiple times as well. This stuff is helpful for me, too. Thanks for taking the time to share that.

      Thanks,
      Cal

  20. Thanks Cal. I have one fundamental question. How did you determine the center of the group? I would suggest something like a function that minimizes the squares of the distance between a given point and the “theoretical” center, like OLS does for fitting a line.

    • Great question, Omar. The software does it automatically, but it is basically the “theoretical” center point of the 5 shots – the point the POA could be adjusted to so that it aligns with the center of the POIs. Here is a page I found that explains how to manually calculate the center of a group: https://riflestocks.tripod.com/calculate.html.
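      By the way, your OLS intuition is spot-on: the plain average of the x and y coordinates is exactly the point that minimizes the sum of squared distances to the shots. Here is a tiny sketch (made-up coordinates) that checks that numerically:

```python
# The centroid (average x, average y) is the least-squares center Omar
# describes: it minimizes the sum of squared distances to the shots.
# Coordinates below are made up, for illustration only.
shots = [(0.1, 0.3), (-0.2, 0.0), (0.2, -0.1), (0.0, -0.3), (-0.1, 0.2)]

def sse(cx, cy):
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in shots)

cx = sum(x for x, _ in shots) / len(shots)
cy = sum(y for _, y in shots) / len(shots)

# Nudging the center in any direction only increases the squared error.
for dx, dy in [(0.01, 0), (-0.01, 0), (0, 0.01), (0, -0.01)]:
    assert sse(cx + dx, cy + dy) > sse(cx, cy)

print(f"Least-squares group center: ({cx:.3f}, {cy:.3f})")
```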

      I don’t love giving “the software does it automatically” as the answer (because I was a software developer for several years and was the guy programming things like that), but in reality, almost everyone who uses mean radius scans in the target and does the analysis using software. There are apps these days where you can snap a photo of the target, quickly locate each bullet hole, and it spits out all the statistics for the group.

      In fact, I did some free consulting for the guy who created Ballistic X and helped him add a lot of advanced statistical methods to the results from his app. I’m not sure if he’s released the version with all the updates that we worked on together, but below is a screenshot of the beta version that I was helping him test out. You can see it has mean radius, CEP, Radial SD, Vertical SD, and Horizontal SD. Those are all of the major ways that I’ve ever heard of professionals quantifying shot dispersion, so my role was simply suggesting what advanced statistics he should add to help shooters make more informed decisions … and we brainstormed some other killer features that I hope get released one day, related to how to aggregate that data from multiple targets or even users and group it by rifle, caliber, cartridge, bullet, etc.

      Ballistic X Advanced Statistics

      All you do is snap a photo of the target, confirm where the individual bullet holes are on the target, and then it spits out what is shown above.

      But for all of this analysis, I used the OnTarget desktop software. I wasn’t comfortable using a beta version of an app to calculate stats that could potentially have a dramatic influence on what people buy. If there were bugs in the formulas or the app, it’d undermine all of the effort I put into this. So I used a tried-and-true software solution instead.

      Hope this answers your question.

      Thanks,
      Cal

  21. A very satisfactory answer, thank you, and thank you for all the useful research you do and findings published on this site.
