
How To Predict The Future – Statistics For Shooters Part 1

I actually believe the average shooter might get more value from this Statistics for Shooters series of articles than anything I’ve published in a long time. I promise it’s worth your time to read!

I realize most shooters aren’t engineers or math nerds. Many people have an uncomfortable relationship with math and aren’t impressed with fancy formulas. However, statistics and probability are insanely applicable when it comes to long-range shooting in particular. Understanding just a few basics can help you gain actionable insight and put more rounds on target. Venturing beyond “average” and “extreme spread” will lead to better decisions.

I have spent an absurd amount of time arduously crafting this article with the math-averse shooter in mind. I pulled from a dozen books, white papers, magazine articles, and other sources on the subject (see works cited) to deliver a comprehensive, but approachable, overview of the most relevant aspects for fellow shooters. I literally spent months trying to make this content simple and balanced because I firmly believe this can help a lot of shooters once they wrap their heads around a few basics.

Predicting the Future

“Each time you pull the trigger, the bullet chooses a single outcome from infinite possibilities based on countless random factors. Without a time machine, you can never know exactly where the next bullet will go. However, you can predict the most likely outcome, and precisely describe the chances of it being high or low, left or right, fast or slow. Many people shy away from statistics because, well, math. It seems complicated and unnecessary. On the contrary. It is a way of thinking that hones your intuition and helps you make better decisions. … Just by understanding the relationship between a sample and a population, you can learn how to predict the future.” – Adam MacDonald, Statistics for Shooters

Often as shooters, we use stats to make some kind of comparison or decision. Here are two simple examples:

  1. Fred has a big hunt coming up, so he bought 3 different kinds of factory ammo to see which groups best from his rifle.
  2. Sam is a competitive long-range shooter and reloader, and he tried several different powder weights to find the one that produces the most consistent velocity.

In both of those examples, the shooter is trying to make an informed, data-driven decision. Fred and Sam will both fire a bunch of rounds during a practice session at the range and use the results to decide what they’ll use in the future. But what Fred and Sam measure in the practice session isn’t what they actually care about. What really matters to them is what happens in the big hunt or the upcoming competition. Whether they realize it or not, both are collecting a sample of data and using that to predict the future performance of their rifle/ammo. Let’s say Fred fires a 3-shot group from each box of ammo, and the extreme spreads of those groups measured 0.54, 0.57, and 0.94 inches. He should just go with the smallest, right? Does he need to fire more shots? How can he know he’s making the right choice? Those are questions statistics can help answer.

The Plan for This “Statistics For Shooters” Series

I plan to publish 3 articles focused on how stats can help us as shooters in a way that is practical and applicable. After this one, there will be an article focused on helping us make better decisions in each of these common applications:

  1. Quantifying muzzle velocity consistency: Gaining insight to minimize our shot-to-shot variation in velocity
  2. Quantifying group dispersion: Making better decisions when it comes to precision and how small our groups are

This article will lay a foundation that we’ll use in the others. So let’s dive into some important basics.

Descriptive Statistics: The Good & The Bad

When we talk about our average muzzle velocity or the extreme spread of a group, both of those are descriptive statistics. So is a baseball player’s batting average or an NFL passer rating. Sports fans use descriptive statistics in everyday conversation. How good of a baseball player was Mickey Mantle? He was a career .298 hitter. To a baseball fan, that is a meaningful statement, and it is remarkable because that tiny statistic encapsulates an 18-season career with over 8,000 at-bats.

Descriptive statistics are very good at summing up a jumble of data, like 18 seasons of baseball or a 10-shot group, and boiling it down to a single number. They give us a manageable and meaningful summary of some underlying phenomenon. The bad news is any simplification invites abuse. Descriptive statistics can be like online dating profiles: technically accurate and yet pretty darn misleading! Descriptive statistics exist to simplify, which always implies some loss of detail or nuance. So here is a very important point: An over-reliance on any descriptive statistic can lead to misleading conclusions.

Even under the best circumstances, statistical analysis rarely unveils “the truth.” Statistics can help us make more informed decisions, but I’ll caution that some professional skepticism is appropriate when it comes to statistics. That is why smart and honest people will often disagree about what the data is trying to tell us. “Lies, damned lies, and statistics” is a common phrase describing the persuasive power of numbers. The reality is you can lie with statistics – or you can make inadvertent errors. In either case, the mathematical precision attached to statistical analysis can dress up some serious nonsense.

What Is the “Middle”?

We use averages all the time, right? The average is one of the most common descriptive statistics; it is easy to understand and helpful – but sometimes it can be deceptive. Here is a great story from Naked Statistics by Charles Wheelan that illustrates the point:

10 guys are sitting in a middle-class bar in Seattle, and each of them earns $35,000 a year. That means the average annual income for the group is $35,000. Bill Gates then walks into the bar, and let’s say his annual income is $1 billion. When Bill sits down on the 11th stool, the average income rises to around $91 million. Obviously, the original 10 drinkers aren’t any richer. If we described the patrons of this bar as having an average annual income of $91 million, that statement would be both statistically correct and grossly misleading. This isn’t a bar where multimillionaires hang out; it’s a bar where a bunch of guys with relatively low incomes happen to be sitting next to Bill Gates.

We often think about the average as being the “middle” of a set of numbers – but it turns out that the average is prone to distortion by outliers. That is why there is another statistic that is often used to signal the “middle”, albeit differently: the median. Okay, don’t check out or let your eyes glaze over! I promise this is applicable and easy to understand. The median is simply the point that divides a set of numbers in half, meaning half of the data points are above the median and half are below it.

If we return to the barstool example, the median annual income for the 10 guys originally sitting at the bar is $35,000. When Bill Gates walks in and perches on a stool, the median income for the group is still $35,000. Think about lining up all 11 of them on stools in order of their incomes, as shown in the diagram below. The income of the guy sitting on the 6th stool (bright yellow shirt) represents the median income for the group because half of the values are above him and half are below him. In fact, if Warren Buffett came in and sat next to Bill, the median would still be $35,000!

Average vs. Median - Seattle Bar Barstool Example from Naked Statistics

If you had to bet $100 on what the income was of the very next guy who walked in the door, would $35,000 or $91 million be more likely? When we’re talking about what is most likely to happen in the future, the median can often be a better choice than the average.

When there aren’t major outliers, average and median will be similar – so it doesn’t matter which we use. It’s when there are major outliers that it does matter. Neither is “wrong!” The key is determining which measure of the “middle” is more accurate for a particular situation: median (less sensitive to outliers) or average (more affected by outliers)?
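
If you want to see how a single outlier pulls those two measures apart, here is a minimal Python sketch of the barstool example (the $1 billion figure is simply the hypothetical income used in the story above):

```python
from statistics import mean, median

# Ten bar patrons, each earning $35,000 a year
incomes = [35_000] * 10
print(f"Average: ${mean(incomes):,.0f}   Median: ${median(incomes):,.0f}")
# Average: $35,000   Median: $35,000

# Bill Gates sits down on the 11th stool (hypothetical $1 billion annual income)
incomes.append(1_000_000_000)
print(f"Average: ${mean(incomes):,.0f}   Median: ${median(incomes):,.0f}")
# Average: $90,940,909   Median: $35,000  <- the outlier drags the average, not the median
```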

How Spread Out Is the Data?

Often as shooters, we want to understand how spread out a group of shots is, so we measure the extreme spread (ES). That is easy to measure by hand, and it is useful. However, like any descriptive statistic, we are simplifying multiple data points into a single number – so we lose some level of detail. There is another statistic we can use to describe how spread out data points are. To understand it, let’s go to another example from Naked Statistics by Charles Wheelan:

Let’s say we collected the weights for two sets of people:

  1. 250 people who qualified for the Boston Marathon
  2. 250 people on an airplane flying to Boston

Standard Deviation Weight Comparison Boston Marathon vs Plane To Boston from Naked Statistics

Let’s assume the average weight for both groups was 155 pounds. If you’ve ever been squeezed into the middle seat on a flight, you know many American adults are larger than 155 pounds. However, if you’ve flown much, you also know there are crying babies and poorly behaved children on flights, all of whom have huge lung capacity but not much weight. When it comes to calculating the average weight, the 320-pound football players on either side of your middle seat are likely to offset the six-year-old kicking the back of your seat from the row behind.

In terms of average and median weights, the airline passengers and marathon runners are nearly identical. But the two groups clearly are not! Yes, the weights have roughly the same “middle,” but the airline passengers have far more dispersion, meaning their weights are spread farther from the midpoint. The marathon runners look like they all weigh about the same, while the airline passengers include some tiny people and some bizarrely large people.

Standard deviation (SD) is the descriptive statistic that allows us to communicate, with a single number, how spread out values are from the average. Calculating SD by hand isn’t straightforward, but virtually nobody does it by hand – so I won’t bore you with the formula. Typically, a chronograph or app calculates it for us, or we can use a formula in a spreadsheet.

Some sets of numbers are more spread out than others, and that is what SD will provide insight into. The SD of the weights for our 250 airline passengers will be much higher than the SD of the weights for our 250 marathon runners because the weights of the marathon runners are much less spread out.
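
For anyone curious what that looks like in practice, here is a minimal Python sketch of how a spreadsheet or chronograph app arrives at the average, SD, and ES for a string of shots (the velocities below are made-up numbers purely for illustration):

```python
from statistics import mean, stdev

# Hypothetical 10-shot string of muzzle velocities in fps
velocities = [2992, 3004, 2998, 3011, 2987, 3002, 2996, 3008, 2990, 3005]

avg = mean(velocities)                   # the "middle" of the string
sd = stdev(velocities)                   # sample standard deviation: how spread out the shots are
es = max(velocities) - min(velocities)   # extreme spread: only uses the 2 most extreme shots

print(f"Average: {avg:.1f} fps   SD: {sd:.1f} fps   ES: {es} fps")
```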

The Normal Distribution

Not only does SD describe how spread out data is, but it also helps introduce one of the most important, helpful, and common distributions in statistics: the normal distribution. Data that is distributed “normally” is symmetric and forms a bell shape that will look familiar.

A normal distribution can be used to describe many natural phenomena. Wheelan points out a few practical examples:

  • Think of a distribution that describes how popcorn pops in your microwave. Some kernels start to pop early, maybe one or two pops per second; after a little time kernels start popping frenetically. Then gradually the number of kernels popping per second fades away at roughly the same rate as the popping began.
  • The height of American men is distributed normally, meaning heights are roughly symmetrical around the average of 5 feet 10 inches.
  • According to the Wall Street Journal, people even tend to park in a normal distribution at shopping malls, with most cars parked right in front of the entrance – the “peak” of the normal curve – and “tails” of cars going off to the right and left of the entrance.

“The beauty of the normal distribution – its Michael Jordan power, finesse, and elegance – comes from the fact that we know how the data will be spread out by only having to know one stat: standard deviation. In a normal distribution, we know precisely what proportion of the observations will lie within one standard deviation of the average (68%), within two standard deviations (95%), and within three standard deviations (99.7%). While those exact percentages may sound like worthless trivia, they are the foundation on which much of statistics is built.” – Charles Wheelan

Normal Distribution & Standard Deviation

By simply knowing the average and standard deviation of weights from our two collections in the example above, we could come up with a very good estimate of how many people on the plane weighed between 130-155 pounds, or what the odds were of a Boston marathon runner weighing over 200 pounds.
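
To give a feel for that kind of estimate, here is a minimal Python sketch that uses the normal curve (via the built-in error function). The 155-pound average comes straight from the example, but the two SD values are hypothetical numbers I picked so that the runners are much less spread out than the passengers:

```python
from math import erf, sqrt

def normal_cdf(x, avg, sd):
    """Fraction of a normal distribution that falls below x."""
    return 0.5 * (1 + erf((x - avg) / (sd * sqrt(2))))

avg = 155.0           # average weight (lbs) for both groups
sd_passengers = 35.0  # hypothetical SD for airline passengers (very spread out)
sd_runners = 12.0     # hypothetical SD for marathon runners (tightly clustered)

# Share of airline passengers between 130 and 155 lbs
p_130_155 = normal_cdf(155, avg, sd_passengers) - normal_cdf(130, avg, sd_passengers)

# Odds of a marathon runner weighing over 200 lbs
p_over_200 = 1 - normal_cdf(200, avg, sd_runners)

print(f"Passengers between 130-155 lbs: {p_130_155:.0%}")
print(f"Runners over 200 lbs: {p_over_200:.4%}")
```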

The more random/independent factors that play into an outcome, the more normal a distribution usually becomes. It’s no coincidence that almost every random process in nature works like this. Because there are so many factors that play into how a rifle and ammo perform, it is an ideal application for the normal distribution.

So, what does all this mean to me as a shooter? Great question! If we fire a bunch of shots over our LabRadar, MagnetoSpeed, or other chronograph, those devices will calculate what our average muzzle velocity and SD were for those shots. Let’s say the average was exactly 3,000 fps and the SD was 9.0 fps. Because we expect this to form a normal distribution, we can come up with the following chart with real numbers based on that average and SD from our sample:

Muzzle Velocity SD Example

If the average is 3,000 fps and the SD is 9.0 fps, we can reasonably expect 68% of our bullets to leave the barrel between 2991-3009 fps (represented by combining both dark blue areas), and 95% of our bullets will leave the barrel between 2982-3018 fps (the dark blue areas combined with the two medium blue areas).
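
Here is the same arithmetic as a tiny Python sketch, so you can plug in whatever average and SD your own chronograph reports (the 3,000 fps and 9.0 fps values are just the example numbers above):

```python
avg, sd = 3000.0, 9.0   # example average and SD from a chronograph string

for k, pct in [(1, "68%"), (2, "95%"), (3, "99.7%")]:
    low, high = avg - k * sd, avg + k * sd
    print(f"~{pct} of shots expected between {low:.0f} and {high:.0f} fps (+/- {k} SD)")

# ~68% of shots expected between 2991 and 3009 fps (+/- 1 SD)
# ~95% of shots expected between 2982 and 3018 fps (+/- 2 SD)
# ~99.7% of shots expected between 2973 and 3027 fps (+/- 3 SD)
```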

But, standard deviation and normal distributions have more applications for us as shooters than just muzzle velocity, and we’ll tap into other powerful applications in subsequent articles.

Sample Size & Confidence Levels: How Many Shots Do We Need To Fire?

Have you ever been reading through a forum and seen some nerd complain about a sample size being “too small to draw meaningful conclusions”? So, how many shots do we need to fire to have a “good” sample size? The answer depends on how minor the differences are that we’re trying to detect and how much confidence we want to have in the results being predictive of the future. So the short answer is, “It depends.” I realize that may not be very helpful, so I’ll try to provide a more helpful answer and share a useful tool that can give us a straight answer based on the specific data you collect from your rifle.

The first step is to understand that no test result is definite, and we’d often be more accurate speaking in ranges and probabilities than in absolute values. That may sound cynical, but it’s an important concept. Let’s run through an example.

Let’s say I loaded up 250 rounds of ammo for a match, and I fired 10 of those rounds (our sample) and recorded the muzzle velocity for each shot on my LabRadar. After 10 shots, the LabRadar reported the average was 2,768.8 fps and the SD was 7.66 fps. Great! Now we know how the other 240 rounds will perform, right? Not really. The only thing we know with 100% certainty is the average and SD of the 10 rounds I just fired, and anything we say about the remaining 240 rounds (our population) should be stated in terms of ranges and probabilities. We can only talk about data collected in the past with absolute precision. Predicting the future is all about ranges and probabilities.

The problem is our LabRadar or MagnetoSpeed gives us very precise statistics for our sample (e.g. SD of 7.66 fps), but it doesn’t tell us how to use those to make estimates for our population – which is what we actually care about! That is also true for how we measure groups. We might measure a 5-shot group to have an extreme spread of 0.26 MOA, but that doesn’t tell us the odds of how small the next group will be. Here is a key concept: Just because we can measure or calculate something to the 2nd decimal place doesn’t mean we have that level of accuracy or insight into the future!

“With a rifle, we have no choice but to guess what the population is from the samples it provides. The larger the sample, the more likely we are to have correctly measured the population. This is called ‘confidence,’” explains MacDonald. So if we’re asking how much confidence we can have in the results, statistics can give us a straight answer!

Let’s say I recorded the following velocities over 10 shots: 2777, 2763, 2767, 2774, 2754, 2777, 2773, 2766, 2762, and 2775. Based on those 10 shots, here are the predicted ranges for various confidence levels:

| Confidence Level | Range of Likely Averages for Remaining Rounds (fps) | Range of Likely SDs for Remaining Rounds (fps) |
|---|---|---|
| 99% | 2,761 – 2,777 | 4.7 – 17.4 |
| 95% | 2,763 – 2,774 | 5.3 – 14.0 |
| 90% | 2,764 – 2,773 | 5.6 – 12.6 |
| 85% | 2,765 – 2,773 | 5.8 – 11.8 |
| 75% | 2,766 – 2,772 | 6.2 – 10.8 |
| 50% | 2,767 – 2,771 | 6.8 – 9.5 |

What the table above is telling us is that after firing those 10 shots, which had an average of 2,768.8 fps and an SD of 7.66 fps, we can say with 99% confidence that the SD of our remaining 240 rounds will fall between 4.7 and 17.4 fps. There is only a 1 in 100 chance that it would fall outside of that range. But that is a huge range! Maybe going for 99% confidence is too strict, so let’s drop to 75% confidence and we can see the SD is predicted to be 6.2 – 10.8 fps. A 75% confidence interval means we can expect that 1 time out of 4 (25% of the time) the real value for the population would fall outside of that range.

One way to get more confidence in the results is to increase the sample size, so let’s say that we fired another 10 rounds for a total string of 20 shots. If the measured SD stayed about the same, the range for the SD at a 95% confidence level would shrink from roughly 5.3 – 14.0 fps to roughly 5.8 – 11.2 fps. The first range is a window of about 8.7 fps, and the second is about 5.4 fps – a real improvement, but not a dramatic one for doubling the sample size. The confidence level you are comfortable with is a personal trade-off between accepting some risk that your results are not accurate vs. investing more time and money to keep testing.
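
If you are curious where ranges like the ones in the table come from, here is a hedged Python sketch of one standard way to produce them from the 10 recorded velocities: a t-interval for the average and a chi-square interval for the SD. This is a textbook approach that assumes normally distributed velocities, not necessarily the exact method any particular tool uses, and it assumes NumPy and SciPy are installed:

```python
import numpy as np
from scipy import stats

# The 10 recorded muzzle velocities (fps) from the example above
shots = np.array([2777, 2763, 2767, 2774, 2754, 2777, 2773, 2766, 2762, 2775])

n = len(shots)
avg = shots.mean()
sd = shots.std(ddof=1)   # sample SD (n-1), the same statistic a chronograph reports

for conf in [0.99, 0.95, 0.90, 0.85, 0.75, 0.50]:
    alpha = 1 - conf

    # Range of likely averages for the population (t-distribution)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    margin = t_crit * sd / np.sqrt(n)

    # Range of likely SDs for the population (chi-square distribution)
    chi2_hi = stats.chi2.ppf(1 - alpha / 2, df=n - 1)
    chi2_lo = stats.chi2.ppf(alpha / 2, df=n - 1)
    sd_low = np.sqrt((n - 1) * sd**2 / chi2_hi)
    sd_high = np.sqrt((n - 1) * sd**2 / chi2_lo)

    print(f"{conf:.0%}: avg {avg - margin:,.0f} - {avg + margin:,.0f} fps, "
          f"SD {sd_low:.1f} - {sd_high:.1f} fps")
```

Run against those 10 shots, this reproduces the table row for row – for example, the 99% line comes out to roughly 2,761 – 2,777 fps for the average and 4.7 – 17.4 fps for the SD.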

Stats Calculator for Shooters

Adam MacDonald is a Canadian FTR shooter, and he wrote two outstanding articles related to statistics for shooters, which I highly recommend (click here for Part 1 and Part 2). Adam also created an insanely useful tool to help us calculate the ranges and probabilities for a certain set of data and the desired level of confidence – without needing a math degree. Here is a link to Adam’s Stats Calculator for Shooters (alternate link, if the first one doesn’t work for you).

The screenshot below shows two examples of what Adam’s calculator can do. In these examples, I simply put in three numbers:

  • # of Shots
  • Average
  • SD

I also selected the desired confidence level, and voilà – the tool tells me the ranges I can expect for the population based on the sample data provided.

Stats Calculator from Adam MacDonald at AutoTrickler

I entered the same average and SD for both samples above, and only changed the desired confidence level and sample size between the two examples. We can see the range for the SD on the left is 5.5-26.4 fps, and on the right it is 7.3-12.6 fps. The wider range on the left is the result of a sample size of just 5 shots and a desired confidence level of 95%. The narrower range on the right was based on 20 shots, and I lowered the confidence level to 90%. Adding more data points and compromising to a lower confidence level in the results will both effectively narrow the range. Hopefully, this illustrates some of the basics of how Adam’s tool can help.

Calculated Values For Our Sample vs. Predicted Ranges for Our Population

The key point here is that we can know the SD of both samples we fired is precisely 9.2 fps. Our chronograph tells us that number with absolute certainty. But, let’s say those samples came from a batch of 200 rounds of ammo we loaded, and we want to predict what the SD of the remaining rounds will be. In that case, we must switch to speaking in terms of ranges and confidence levels. We can’t say for certain that the entire population has an SD of 9.2 fps, only our sample. What this calculator tells us is that based on the 20-shot sample we collected, 9 times out of 10 the SD of the remaining 180 rounds would land between 7.3 and 12.6 fps. If we want more certainty or a more precise range, we have to fire more rounds. There is no free lunch!
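
If you ever want to double-check numbers like these yourself, the chi-square interval for the SD only needs the shot count, the measured SD, and the confidence level – the same inputs the calculator asks for. Here is a minimal SciPy sketch (again, a sketch of the standard textbook interval, not Adam’s actual code) that reproduces both screenshot examples:

```python
from scipy import stats

def sd_range(n, sd, conf):
    """Confidence interval for the population SD, given n shots with sample SD `sd`."""
    alpha = 1 - conf
    lo = ((n - 1) * sd**2 / stats.chi2.ppf(1 - alpha / 2, df=n - 1)) ** 0.5
    hi = ((n - 1) * sd**2 / stats.chi2.ppf(alpha / 2, df=n - 1)) ** 0.5
    return lo, hi

for n, sd, conf in [(5, 9.2, 0.95), (20, 9.2, 0.90)]:
    lo, hi = sd_range(n, sd, conf)
    print(f"{n} shots, SD {sd}, {conf:.0%} confidence -> population SD likely {lo:.1f} - {hi:.1f} fps")

# 5 shots, SD 9.2, 95% confidence -> population SD likely 5.5 - 26.4 fps
# 20 shots, SD 9.2, 90% confidence -> population SD likely 7.3 - 12.6 fps
```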

Summary & Key Points

We’ve covered a lot of ground, so let’s recap some key points from this article:

  • Often the ammo performance we measure at the range isn’t what we actually care about. Whether we realize it or not, we’re often collecting a sample of data and using that to predict the future performance of our rifle or ammo. That’s when statistics can help!
  • Descriptive statistics (like average, median, extreme spread, standard deviation) are very good at summing up a bunch of data points into a single number. They provide a manageable and meaningful summary, but because they exist to simplify that implies some loss of detail or nuance. An over-reliance on any descriptive statistic can lead to misleading conclusions.
  • Average and median are both measures of the “middle” of a set of numbers. Neither is wrong. The key is determining what is more accurate for a particular situation: Average is more affected by outliers. Median is less sensitive to outliers.
  • Standard deviation (SD) is a number that communicates how spread out values are from the average.
  • Because there are so many independent factors that play into how a rifle and ammo perform, it is an ideal application for a normal distribution. The power of a normal distribution comes from the fact that we know how the data will be spread out by only having to know one stat: standard deviation. In a normal distribution, we know precisely what proportion of the observations will lie within one standard deviation of the average (68%), within two standard deviations (95%), and within three standard deviations (99.7%).
  • How many shots do we need to fire to have a “good” sample size? The answer depends on how minor the differences are that we’re trying to detect and how much confidence we want to have in the results being predictive of the future.
  • The more samples used in a calculation, the more confidence we can have in the results.
  • A very important key is to understand that a measured average and SD are just estimates from a sample, and they have confidence levels associated with them. Just because we fire 10 shots and measure an SD doesn’t mean we will get the same SD next time. In fact, it’s unlikely we’d get the same number. Just because we can measure or calculate something to the 2nd decimal place doesn’t mean we have that level of accuracy or insight into the future! We can only speak in terms of absolute, precise values about shots fired in the past. When we’re trying to predict the future, we can only speak in terms of ranges and probabilities.
  • The confidence level you are comfortable with is a personal trade-off between accepting some risk that your results are not accurate vs. investing more time and money to keep testing.

“The rifle talks to us by generating samples at $1 a pop,” MacDonald explains. “If we want to know how it truly works, we need to play its game. With enough samples, we can try to measure the population, but it can be expensive.” Engleman, another author on the subject of statistics and shooting, joked that if he measured 1,000 shots he’d be able to have a ton of confidence in the results, but “I would also burn out my barrel, do a lot of reloading and never make it to a match!” 😉

I’m a practical guy. I realize we can’t shoot a sample size of 100 or even 30 shots for every powder charge and seating depth we try in our load development. I’m not suggesting that. In fact, over the next two articles, I specifically want to share how we can get the most out of the shots we do fire – and how to leverage those to make more informed decisions.

Other Articles In This Series

Stay tuned for the next two articles in this series, which will dive into how we can get the most out of the shots we fire, and make more informed decisions.

  1. How To Predict The Future: Fundamentals of statistics for shooters (this article)
  2. Quantifying Muzzle Velocity Consistency: Gaining insight to minimize our shot-to-shot variation in velocity
  3. Quantifying Group Dispersion: Making better decisions when it comes to precision and how small our groups are
  4. Executive Summary: This article recaps the key points from all 3 articles in a brief bullet-point list

You can also view my full list of works cited if you’re interested in diving deeper into this topic.

About Cal

Cal Zant is the shooter/author behind PrecisionRifleBlog.com. Cal is a life-long learner, and loves to help others get into this sport he's so passionate about. Cal has an engineering background, unique data-driven approach, and the ability to present technical information in an unbiased and straight-forward fashion. For more info, check out PrecisionRifleBlog.com/About.


59 comments

  1. Instead of starting with “what will get my readers to click on my articles?”, your first question is “what do my readers and I need to be better shooters?” Sure, there’s always someone willing to write a caliber selection article because they know plenty of people want to argue about why the 30-06 is all they’ll ever need and anything else is pointless. Your readers are better shooters because of you. Thank you for all you do.

    • Thanks, Nathan. That is absolutely the question I try to start with. It seems like I’ve been in hundreds of conversations with shooters at SHOT Show, at a rifle match, or just out at the range … and in the back of my mind I was thinking, “Man, if I could just help them understand a little statistics – they’d be dangerous.” I’ve even tried to explain this stuff in a few of those conversations, but the truth is it is VERY hard to explain, especially without visuals, good illustrations, and some concrete examples. So I finally broke down and spent the time writing this. I literally have spent months on this 3-part series. It’s actually what has been holding me up from publishing the results of the match ammo test for the past 4-6 months. But, I really believe this will help a ton of people. It could be one of the biggest knowledge gaps in the shooting community.

      I also know a lot of engineers in the industry who are frustrated when they hear guys make claims about the SD’s they achieve with their ammo or the precision their rifle is capable of “all day” or “if they do their part.” While they are frustrated, they haven’t invested the time to try to close the gap by explaining some of this stuff. I know a few of those guys were excited that I was working on this series of posts so that they could point their customers and others that need a better understanding of these topics straight to this content on PRB – instead of trying to explain it themselves.

      I did find a few great articles related to statistics and shooting, but the problem was the overwhelming majority got very technical. My hope is this series equips shooters with the practical stuff they need to know and gives them the confidence to apply it out at the range.

      Nathan, I do appreciate you noticing that difference. I do spend a lot of time on this, and it’s not to increase traffic or get more clicks … but simply to help fellow shooters. Thanks for taking the time to let me know you noticed and appreciate that approach.

      Thanks,
      Cal

      • This is why I use 30 examples or items of interest as the standard sample of an unknown population. From there I start to build my population and must not deviate. Anyway, this is all hybrid random sampling and not real statistical sampling. In statistical sampling we have a known population pared down to a sample frame from which a sample is statistically selected. From there we have to stratify and use estimators such as combined regression to determine if we have a good sample and whether we can use an upper or lower confidence. Basically, for shooting we create a sample frame in place of a population. We build up to an acceptable confidence interval with between 20 and 30 items of interest. Not rocket science but gives a snapshot of performance only for that session. For reloading if you want to back into the population you cannot deviate, but once you reuse that brass again you have just done that.

      • Hey, Joe. I was expecting someone to chime in with something like that. I appreciate that and I totally see your point – and it’s a good one. Thanks for adding your thoughts.

        The hardest part of writing this series (and the part that made it take months to write) was trying to figure out how far to go and what to try to explain … and what to leave out. This topic can be a rabbit hole. Honestly, I was originally hoping to make this one post … but it ballooned to 3 parts, because it’s HARD to keep any discussion like this brief. But, I also really tried hard to keep it practical. I’m not claiming what you are pointing out isn’t important from an academic or even serious research perspective – but the guys in the academic and research fields are already aware of those things. I am trying my best to focus the content in this series in particular with 80% of hobbyist shooters in mind – and even particularly the ones that might be uncomfortable with math or some of the more complex parts of statistics. What concepts can I explain to them that would help them “move the needle” on making decisions when doing load development? How can I help them start thinking about how we quantify precision and muzzle velocity consistency in a way that is more accurate? And most importantly, how can I do that without alienating the shooter who isn’t comfortable with math – or making them feel stupid or lost? How can I avoid confusing jargon, edge cases, or even the more advanced concepts when possible? I wanted to do all that without abandoning technical accuracy, but it is a TOUGH balancing act.

        I don’t claim I got it “right” when it comes to that balance – but I bet you’d laugh at me if you knew how much time I spent thinking about what to include or debating with myself over what was “over the line” or was “getting into the weeds.” I constantly thought about this quote attributed to Einstein: “Everything should be made as simple as possible, but no simpler.” Simple to say, but hard to practice!

        Okay, having said all that … I do think you will like where I’m going with this series. You’re exactly right about the reuse of brass changing one of the inputs to a very complex system with a ton of random/independent variables. You are clearly more familiar with these topics than 97% of shooters, so my bet is that you’re one of those guys that can get frustrated when people make big claims about the tiny groups their rifle “shoots all day if they do their part.” Or maybe reading about load development techniques that draw conclusions from slight patterns in very small sample sizes raises your blood pressure a little. But, communicating/teaching those things without coming off as a pretentious ass can be very tough.

        I realize this is an ambitious goal, but I’m hoping that this series is something that will help further the understanding of the precision rifle community as a whole, because I do believe there is a significant knowledge gap here for the average shooter. But, the key to it making any difference at all is to explain the basics and application without losing people. I’m sure at times I’ll go too far on one side or the other, but then again … that’s why I enable comments on my posts. I know there are professional statisticians that read my articles – likely hundreds of them. My content tends to attract an educated, critical reader. So I appreciate them correcting me or adding their own thoughts as they see fit. That only makes all of this content better – and I learn in the process, too. I certainly don’t want to make anything simpler than it should be. 😉

        Thanks for the comments, Joe!

        Thanks,
        Cal

  2. Mahmoud Shmaitelly

    As usual, that is a superbly written article.
    I cannot wait for the next 2 parts.

    • Thanks! I promise you won’t have to wait long. I wrote all 3 articles together, so the 2nd and 3rd installments will be published over the next week or two. Stay tuned!

      Thanks,
      Cal

  3. Excellent article and perhaps one of the most useful ever published on PRB. Thanks Cal.

    A follow up on “BALLISTICS VARIABLES” would help. For example, temperature-stable vs. non-temperature-stable powders would be one of these variables.

    • Thanks, Eric! That makes me feel a lot better, honestly. I’m pretty sure I spent more time working on these 3 articles than anything else I’ve published! I almost gave up on it at least 3 or 4 times, but every time I would come back to it because I thought it’d help a lot of shooters.

      That’s an interesting idea for a future post. I think if I did that before publishing the results of the 6.5 Creedmoor Match Ammo Field Test, I might get strung up! But I’ll definitely consider that for a future post. There are certainly a ton of those if you really tried to include all of them. I don’t know if I’ve ever seen anyone try to itemize all of them, so that would be an interesting exercise and honestly something that my readers could probably help complete. It’s one of those ideal applications of “crowd-sourcing”, because it’d be hard for any single person to think of all of them (although some experts might be able to get the majority) … but a community of people could easily come up with a comprehensive list. Of course, all of those random/independent variables are a big part of what makes rifle shooting such an ideal application for statistics.

      Thanks for sharing the idea!

      Thanks,
      Cal

  4. Cal,

    Thanks for another wonderful article. We would never string you up! Who else would produce this stuff?!

    A thought I have been tooling around with a very very little bit: how much is velocity, sd, etc. affected by the support to the rifle? I.e. if the rifle is allowed to accelerate an extra 1/2 inch before hitting something solid like a shoulder or a fixed support, will that rob velocity from the bullet?

    My mental force diagram may be all out of whack, so feel free to say so!

    • Thanks, Samuel! That’s a great question. In short, I’m not sure. I can see how that might be plausible, but I wonder if it’d be measurable – or simply “in the noise” of velocity variation.

      That might be one of those questions that I keep thinking about until I eventually break down and go out to the range to test it for myself. I suspect it wouldn’t be significant, because I know some top-ranked PRS shooters that sometimes free recoil their rifle off certain barricades, but don’t do that all the time. While it may change your POI (which they account for), I bet they don’t have to account for how it would skew their muzzle velocity. But, I’ve been surprised by stuff like that before, so I really don’t know. Great question!

      Thanks,
      Cal

  5. I love all of your articles. They have taught me so much and I appreciate them all! I actually bought a 29 inch hawk hill 338 lapua barrel to go on my axmc just because I read how low the SDs were that you got with yours. Can you give me some insight on what to try to get those low numbers? Let me give a little background… I have my own mile range that I can shoot every day if I want. My target is a 24 inch plate. So I’ve done the numbers and the extreme spreads need to be pretty close to single digits to hit that target taking out all other variables of course. Here’s my reloading process…I anneal with an AMP annealer. I full length size and bump the shoulder back 2 thousandths. I trim all my brass the same. I uniform my primer pockets. I measure all of my powder on a high end scale. I use a mandrel to finish neck tension. I am using h1000, lapua brass, 215m primers, and 300 Berger hybrids. I’ll shoot 5 shots and my extreme spread will be around 10. Next I’ll load up 5 more and go through all the same process and my extreme spread will be 30. My groups are under half moa at 500 yards. What am I doing wrong to not keep extreme spreads low? I know you know as well as I do that an extreme spread of 30 is not going to hit the 24 inch target at 1 mile. What else can I try? I had the same kind of frustration with the 300norma. Any insight would be appreciated. I’m just looking for the best consistent caliber to shoot at 1 mile. I thought about a 300wsm but I don’t want to give up on the 338lapua yet. Thank you.

    • Jay, I’m glad you’ve learned a lot from my articles … and you couldn’t have led into my next post any better! It’s all about how to quantify muzzle velocity variation in a way that leads to better decisions. I can tell from how you explained your process that you are a detailed guy, so I think that the next article will be right up your alley and really applicable for you in particular.

      I’ll say that it seems like you’re on the right track with all of your components and process. It sounds like we have similar loading processes and use almost identical components for a 338 Lapua (i.e. Lapua Brass, H1000, Berger 300 gr. Hybrids). The only difference I noticed was that I use CCI 250 primers instead of Fed 215M. I have used Fed 215M in some of my other loads, so I’m not saying they’re not good. Which is more consistent might come down to the lot # you have and possibly even the exact powder charge, bullet tension, etc. for your ammo. When I’m looking for super-low SD’s, I do try a couple of different primers and for magnums those are always Fed 215M and CCI 250. So you might try CCI 250 primers and see if that improves anything.

      If you want to see the EXACT load specs of my 338 Lapua Mag ammo (or any of the other cartridges that I handload), you can find those details here: Rifle Reloading Data: My Pet Loads for Target & Hunting. That is literally where I save all my load info and what I pull up when I’m at the loading bench to remember what it was that I found to work the best for my rifles.

      The other practical tip I can offer is that I have started running 0.003″ of bullet tension, which is a little heavier than what most handloaders use. Bryan Litz actually found a measurable improvement in velocity consistency with tighter bullet tension, which is a little counterintuitive. But, in my anecdotal research (developing 2-3 loads for various cartridges), I can tell you that seems to have held true for me too. I haven’t tried heavier than 0.003″, but that’s what I’m using for a few of my loads now. I use one of the K&M Precision custom diameter neck expander mandrels so that I can get the exact neck tension I want. I have plans to do a more in-depth test of that at some point in the future, but I thought I’d at least mention it here. While I think a lot of what we fixate on when handloading might not make a measurable difference, I’m at least under the impression at this point that bullet tension does – and more might be better when it comes to velocity consistency.

      Beyond that, I would say it comes down to how you test loads and make decisions about what is “better” … which is precisely what I will explain in detail in the next post.

      I certainly wouldn’t give up on the 338 Lapua. It is extremely capable. Honestly, if I am really concerned about uber-consistent muzzle velocities, I’m of the opinion that Lapua brass is the surest way to get there. If you swapped to a 300 WSM (which is a capable round), I don’t think you’d have the option to use Lapua brass. ADG and Alpha brass are both good, along with a few others – but virtually everyone agrees Lapua is a gold standard. When it comes to low muzzle velocity variation, brass consistency seems to be one of the primary drivers (in my experience). When you’re talking about magnums, I’ve heard the 338 Lapua was one of the easier cartridges to load for (I suspect at least partly because of Lapua’s brass) – but I’ve also heard some of the shooters that I respect the most say the 338 Norma Mag is even easier. I’m not saying to switch. My 338 Lapua literally has the lowest SD’s of any rifle in my safe. But, if you do consider other cartridges for that 1 mile range, a 338 Norma Mag has Lapua brass available and is supposedly easy to find a consistent load for.

      I’d suggest you read through the next article (will be published within the next 7 days), and then see if you need to change your approach to load development a little to follow the recommendations it mentions. Just based on how you described your current performance, I suspect there will be some actionable nuggets in there for you that might help you zero in on a more consistent load.

      I do wonder if sometimes the crazy low SD’s aren’t at least partially a result of a really consistent barrel. I don’t know that, but I am just thinking about it in the back of my mind. I’ll talk more about that in the 6.5 Creedmoor Match Ammo Test I’m about to publish, because I think I may have seen that in the data I collected. I do think Hawk Hill makes some of the premiere barrels, and an industry insider told me that because they’re a smaller shop you likely have the same guy lapping all of the barrels … which helps even more with consistency.

      This is all probably more than you wanted to know, but I was just sitting here trying to rack my brain on any tips I could provide. I know how frustrating it can be to not find a load as good as you were hoping for. It can drive me crazy! So I just wanted to provide anything I could that might help. I do REALLY think you might benefit from that next article as much as anyone, so stay tuned!

      Thanks,
      Cal

      • Thank you for the reply! Yes I am very particular down to the finest details. When I was trying to find that perfect load for the 300norma I did try 215m and cci250 but they both were about the same. But I will give the cci250s a try in the 338lapua. I will try more neck tension to see if that makes a difference. As you said, it can get so frustrating trying to find that perfect load. I just wish there was a mile caliber that is as easy to tune as the dasher. I never have any problems with it at all and sometimes I just shoot it at a mile and get hits as much as the bigger calibers. But we both know that it’s not the best option at that distance. I even thought about a 284 or 284 shehane. But I really want to make the 338lapua work since I’ve got time and money invested in it. I have read your pet loads and studied them a lot. Lol. I forgot to mention that I also weight sort brass and bullets. I use dry lube inside the necks when using the expander mandrel. I probably need to know more about the process of finding a load. I usually load up about 10 rounds in .5 increments. I measure those rounds on my magnetospeed. I look for close numbers. Then I’ll load up 5 rounds to see if there is a node of a particular powder charge. Sometimes it works ok but with the 300norma and the 338lapua it’s not worked out so far.

  6. Ah yes, statistics for sampling. I was trained in college for that, in a totally different area: forestry. More data is always the cry of those who have been involved in things like guns, reloading, etc., and have a background that includes sampling statistics.

    I have used 3-shot sample groups for shooting, cut back from 5, but I am going to go back to 5 samples for each load (shooting session) and keep much better records. Not scientifically valid, but within tolerance of the shooter in my case. Your article has caused me to think this through more. I guess my “need” for a better chronograph setup than I have now just came to the fore.

    Getting more shooting time just came to the top. Thanks.

    • Oh man, John, you couldn’t be leading into my next article any better – and honestly the 3rd one in this series, too! I bet you’re going to like these articles! 😉 I’ve changed a few things I do since I started writing this (almost 12 months ago!). And I hear what you’re saying about the cry for more data, but I try to take a very balanced, but practical approach. I know that virtually none of us are going to fire 30 shots for every permutation of a load that we try (people only do that when it’s their full-time job and they have a government-sized budget), but there are some practical steps we can take to make better decisions based on the shots we do fire. Stay tuned!

      Thanks,
      Cal

    • Cal, I just want to say thanks for doing this. You are killing me though making us wait for the Creedmoor results. Honestly though you are a fine American and I can’t say thanks enough.

      • Thanks, Frank! And I promise I am just as anxious as you to get those results out. I just thought they might be misunderstood or misused if I didn’t first lay a little groundwork for why I went about the test the way that I did and why I’m going to weight the results the way that I plan to. Of course, I’m still going to provide all the data I collected, but I just think sometimes we (as in all of the shooting community) can focus on the wrong things. I think it’ll make more sense after the next 2 posts. I’m pretty sure you’ll see what I mean. I promise it’s worth it!

        Thanks,
        Cal

  7. Cal Zant
    Congratulations on another article, which improves not only our practice but also our understanding of shooting.

    I don’t know if it’s still possible to comment on your article (Wyoming ELR Scopes & Mounts – What The Pros Use – September 5, 2020) where we talked about the value of milRad and MOA. You presented that 1 MilRad = 3.375 MOA.
    Because in the article MIL vs MOA: An Objective Comparison – July 20, 2013 I found this data:
    “Conversion Formula: 1 mil = 3.438 MOA”
    “Calculation: 1.5 mil = 3.438 x 1.5 = 5.157 MOA”

    Greetings from Brazil

    • Thanks, Humberto! I appreciate that. Glad you found this helpful.

      As far as the conversion goes, honestly, I don’t find myself converting very often (if ever) … so I may have just misspoken there. Sorry for any confusion!

      Thanks,
      Cal

  8. I may have missed it, but do you mention the inherent error in the chronograph? For example, MagnetoSpeed reports 99.5-99.9% accuracy. With the above example of a round going 3,000 fps, the inherent error in the chronograph is anywhere from 3-15 fps. So statistically speaking, an actual SD of 0-3 cannot be confidently reported. Is this right?

    • That is a Level 5 question. I will address that specifically in the very next article, which is 100% focused on quantifying muzzle velocity consistency. Thanks for leading into my very next post! 😉

      Thanks,
      Cal

  9. Cal, I’m a firm believer in case weight management. In your very valuable response to Jay, you said: “When it comes to low muzzle velocity variation, brass consistency seems to be one of the primary drivers (in my experience)”, indicating that it’s why Lapua brass is the gold standard. As far as I know, all cartridge brass is just “cartridge brass” (70% copper, 30% zinc) I’m fairly new to your website, and a rank amateur. I wonder about how much of brass consistency is consistent weight. Cartridge brass density is between 9x and 10x relative to powders, depending on powder “fluffiness”. If you size and trim all of your same-headstamp cases the same, their outside dimensions are exactly the same, so case weight variation is all on the inside, which means powder capacity. Whether 223 or 338, 1 gn of sized/trimmed case weight variation means roughly 0.1 gn of powder capacity variation. I’ve weighed headstamp-sorted range brass. Cheaper brands vary much more than semi-premium brands (I’m cheap!), but if (after sizing/trimming) you sort it into groups that are within half a grain of weight, then they’re within half a tenth of powder capacity…regardless of cost. They HAVE to be, right? That’s gotta help.

    • Scott, I can certainly see how you can logic your way there. I used to think the same thing when I first started off, so I will try my best to explain what I had to learn the hard way, but wish someone would have told me when I started. In my experience, no, what you’re saying is not true. I’ll try to objectively help you understand the potential disconnect.

      Part of the issue is likely your idea that all cartridge brass is just “cartridge brass.” I think you underestimate how complex metallurgy is. You can read this page about Norma brass to understand some of it. Lots of the specifics are trade secrets, so it’s not like they’re all publicly available – but there is a difference. I have heard of some of the extra steps that companies like Lapua, Norma, ADG, and Alpha take in their brass manufacturing, and there are proprietary processes in every one of them. There is a lot of technology and science there – more than you might think.

      The second issue with your argument is the thought that if you weight sort and get the external case dimensions the same that means the cases will be virtually identical. That’s not the case either (pun intended 😉 ). Thickness can vary in the neck, body, and/or head of the case. Just because they weigh the same doesn’t mean the weight is distributed in the same way. It actually doesn’t mean that the case capacity is the same either.

      I wish there was a way to sort brass cases to a point where they are identical, but there isn’t. I am as OCD as anyone, and I have tried! Sorting helps, but if you want the most consistent brass, I believe you have to start with consistent brass – which means Lapua, ADG, Alpha Munitions, Peterson, Norma, etc. And not only the brand name, but for the ultimate consistency all your brass should be from the same lot. The idea that if it has the same headstamp that it is all the same is something most precision shooters would argue isn’t true. Look at what the top shooters in Benchrest and ELR use, disciplines where extreme consistency is critical. You’ll see Lapua is virtually always the most popular, usually by a wide margin – and for good reason. It certainly isn’t because it’s the cheapest! I’ve also heard some of the manufacturers I mentioned claim that their brass was “in the same class as Lapua” – but none of them ever claimed it was better. They all see them as the gold standard, which is strange … because you don’t see one company held in that kind of regard by virtually the entire shooting community for any other type of product (e.g. bullets, stocks, barrels, actions, triggers, etc.) – but Lapua dominates the market when it comes to brass. There might be others that are close or potentially even as good, but it seems safe to say that none are better. (To be clear, I’m not a fan-boy of any brand. I’m not sponsored. I buy this stuff out-of-pocket. I just bought some ADG brass this week for my 7mm Rem Mag hunting rifle, and I’ve used Hornady, Norma, Peterson, and others. But nothing can help your ammo be as consistent as Lapua, in my experience. If Lapua makes brass for a cartridge I shoot, I buy it. Honestly, if Lapua or ADG doesn’t make brass for a cartridge, I might not chamber one of my rifles for it.)

      Now, “is it worth it?” That is a loaded question, and there is no one-size-fits-all “right” answer to it. My standard answer to that question is something like this: “Something is worth it if the benefit exceeds the cost – to you. Whether ‘it’s worth it’ largely depends on what money means to you. We all come from different circumstances.” In fact, I’ve had so many conversations lead to this question that I wrote a fairly concise, but balanced, way to think about that: Is It Worth It?

      I hope this is helpful!

      Thanks,
      Cal

  10. Excellent Article ! Looking forward to next 2…….

  11. Wow, it seems that my book list must also include some books on mathematics, such as probability theory, statistics and so on.

    There is no end to learning. Like Steve Jobs said, stay hungry, stay foolish. LOL

    • Hey, if you’re even mildly interested in learning more, I know the absolute best book: Naked Statistics by Charles Wheelan. That is literally one of the best books I’ve ever read. Who says that about a stats book?! I actually read it for the first time about 6-7 years ago, just for fun. That’s how much of a nerd I am! And I’ve read parts of it 2-3 times over the years, and recommended it to people I work with, and they found it very helpful, too. It is really, really good. The author has very engaging examples, and it’s written in a very entertaining fashion. I loved it and would highly recommend it.

      Thanks,
      Cal

  12. WOW!!!! That’s good stuff.

  13. Dear Cal: Very good job so far. You might want to recommend the Statistics For Dummies series. It got me through Stat 101. lol.

    • Thanks, Anthony. I haven’t read that one, but I’ve read a couple of the other “For Dummies” series and they were well written. Thanks for the tip.

      Thanks,
      Cal

  14. From one who is an engineer but not a math nerd, let me thank you for sharing your thoughts freely. I realize how much work this was and it’s very generous of you to give it away. I hope your generosity is rewarded. As always, I will try to pay it forward.

  15. John Campbell-Smith

    Cal
    Another very interesting article. Looking at statistics from another angle, what would be really helpful for me is to see one or two worked examples on how a man with a rifle and a chronograph can methodically work over time to improve his shooting?
    A second point would be to comment explicitly on extreme spread: it’s not a quality measure, but some shooters set great store by it because it’s easy to capture. That ease of capture doesn’t make ES very useful.
    Thanks
    JCS

    • Thanks, John. Both are good points, but you’re getting ahead of me! I believe the next article will address some of the exact things you’re referencing. Stay tuned!

      Thanks,
      Cal

  16. Thanks for all your contributions, I’ve learned a lot from your site. If you have a chance, please take a look at violin plots (also known as bean plots), which conceptually combine a box plot with a histogram. Also, are we sure that ballistics follow a normal distribution? I expect they would, but it should be verified given the other distributions in the universe. Finally, I didn’t see sample size mentioned in your article, and outliers should be trimmed for a more robust analysis of the data.

    Best,

    Omar

    • Thanks, Omar. I’ll check out the violin plots. I’m actually really into data visualization, so you piqued my interest. Most of your comments will be addressed in the next two posts. I will definitely talk about sample size and outliers when I dive into the specific applications of muzzle velocity analysis and precision/group size analysis. So stay tuned for those. I do think simply trimming outliers might not be the way to go, but I’ll dive way into that in the 3rd post and if you think I’m off base there or have a different perspective to share, I’d love you to chime in on the comments on that post after you read what I present.

      The question about the normal distribution is a deep rabbit hole, and I’m sure there are lots of opinions on that. It is one of those nuances that I think can get you hung up and cause you to venture out beyond the basics, which is what I’m trying very hard to focus on. You can go to the works cited that I listed, and I think at least one or two of those papers speak to that.

      Thanks,
      Cal

  17. This series is needed in the shooting community. There are many of us who are in the beginning stages of learning a very demanding sport. Even if we are semi-informed about the technical terms bouncing around the firing line, we need information that helps us effectively apply it to our own requirements. This first article in the series accomplishes that, and I am sure the remaining articles will as well. From the comments above, the wheels are already turning for shooters other than myself as we attempt to shrink our group sizes and get the consistency required from our loads and equipment to improve our range day and match day performance. Keep up the good work Cal, all you do is greatly appreciated by this shooter; you have helped me improve in many ways.

    • Thanks, Steve! I really appreciate your kind words. I agree that this seems to fill a void in the shooting community. The biggest problem is almost nobody has tried to explain these concepts without going way into the technical side and losing most shooters along the way. The next two articles will be specifically focused on application and concepts directly related to muzzle velocity and group size, which are the primary ways most of us compare loads and make decisions based on samples. I think they’ll really help clear up a lot of things – and likely challenge some “conventional wisdom” too. So we’ll have to wait and see how well received those are! 😉 Ultimately I’m just in search of the truth and not running a popularity contest. I want to help people, which sometimes means debunking some myths or misconceptions. I didn’t have to do much of that in this article, but there are definitely some challenging aspects in the next two. Can’t wait to see what people think of them.

      Thanks,
      Cal

  18. Cal,

    As usual you’ve written an insightful and useful article that helps clarify how best to understand and interpret data collected from chronographs. Thank you for all of your articles! They represent one of the very best and most thorough examinations of precision rifle shooting and competition. There is, however, a minor chink in the armor of this particular article that has nothing to do with your writing – the link “Stats Calculator for Shooters” (https://www.autotrickler.com/stats-calculator.html) leads to a page on which there is no way to get the calculator. Moreover, on that site I cannot find where or how to purchase the calculator. Minor issue, and perhaps I’m just not seeing it. Nonetheless, it would be nice to have the calculator for future chronograph/data collection sessions.
    Thanks for all you do for the Precision Rifle community!
    John

    • Hey, John. Thanks for the kind words, and I’m sorry you are having trouble getting to the calculator. The trick is to click on the image of the calculator on the page I linked to. I know that isn’t the most intuitive thing. As a guy who has done lots of professional work on the web, I realize why it’s set up like that … but I agree it’s a bit confusing. Good news though: there’s nothing to buy – it’s all free! Here is a direct link to the calculator: http://172.104.26.4:4321/

      I’ll go back and add that next to the original link in the article.

      Thanks,
      Cal

  19. For the hunters out there that shoot 3-shot groups (maybe only one group per ammo type/brand), this will be an eye opener. I often suggest that these people find a range that has 200 and 300 yard targets. If you don’t have a chronograph, this can be a big help, because the smallest-grouping ammo at 100 yards is often different at 300 yards, where the variance (ES and SD) shows up more.

    Keep up the good work

    • Thanks, Bruce. I agree that if you don’t have access to a quality chronograph, shooting groups at distance can help. The only downside is it can potentially introduce some noise in the data from wind or other environmentals. Of course, another upside of shooting at longer distance is that parallax isn’t as critical. I bet most people don’t realize how much parallax can screw with groups at 100 yards if you don’t have it set properly. It’s less of an issue at longer distances.

      And I’ll have a whole article focused on analyzing groups, and talk about sample size specifically … and maybe exhaustively! 😉 Stay tuned!

      Thanks,
      Cal

      • Great primer Cal. Engineer and math geek here, and I’d use this as training class pre-work for my techs. Quick question, well 2 actually. With several data types bounded by zero, have you modeled a Weibull distribution? Second, have you ever modeled the impact of randomization on a limited dataset? My datasets tend to be large, but my gut tells me that 3×10 shot groups randomized may turn out different than a 10xa 10xb 10xc experiment. Cheers.

      • Thanks, Paul. Glad you found it helpful. I haven’t modeled with a Weibull distribution, but you might be on to something. That reminds me of a really interesting conversation I had a couple of years ago with Nick Vitalbo, an insanely smart engineer who is one of the foremost experts on lasers and a “Principal Engineer” at Applied Ballistics, and he was talking about how a Weibull distribution was more accurate when it comes to modeling wind speeds. I guess that may be a well-known fact in the wind science world, but it was a new thought for me … and it makes a lot of sense. Ultimately, here I wanted to just stick to the basics and not go off into the weeds of other types of distributions, even though there is likely technical merit to doing that.

        My next post will actually get into the impact of randomization on a limited dataset, and it’ll include a great study I came across that I think will really help us wrap our minds around that – or at least it did for me! 😉 That is an interesting idea about the 3×10 shot groups being different when randomized. Are you referring to the firing order? Like you’d fire randomized order of shots compared to all of one batch then all of another? I just want to make sure I’m understanding you correctly.

        Thanks,
        Cal

  20. You did a fine job here!

    I wrote something like this 10 years ago for farmers wanting to test pesticides/fertilizers and understand what our academia and industry results meant. It wasn’t easy, and it took three other guys and me about 3-4 weeks to come up with it. Swap 3- and 5-shot groups for split farm fields vs. randomized and replicated plots set up in blocks across the field, and the statistics and analysis are the same.

    You could do least significant difference (LSD, which is what we use in agriculture) at p = 0.05 or p = 0.01 and that could help guys pick out loads and brands.
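    For the curious, here is a minimal sketch of what that LSD calculation might look like (assuming SciPy, equal shot counts per load, and made-up group-size numbers, just to illustrate the idea):

    ```python
    # Minimal sketch (hypothetical data): Fisher's least significant difference (LSD)
    # for comparing mean group sizes of three loads, with equal counts per load.
    from math import sqrt
    from statistics import mean
    from scipy import stats

    groups = {
        "Load A": [0.54, 0.61, 0.58, 0.66, 0.52],  # group sizes in inches (made up)
        "Load B": [0.72, 0.69, 0.80, 0.75, 0.71],
        "Load C": [0.57, 0.55, 0.63, 0.60, 0.59],
    }

    n = len(next(iter(groups.values())))  # groups fired per load
    k = len(groups)                       # number of loads
    df_error = k * (n - 1)                # error degrees of freedom

    # Pooled within-load variance (the ANOVA mean square error)
    mse = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups.values()) / df_error

    alpha = 0.05
    lsd = stats.t.ppf(1 - alpha / 2, df_error) * sqrt(2 * mse / n)

    print(f"LSD at p = {alpha}: {lsd:.3f} in")
    for a in groups:
        for b in groups:
            if a < b:
                diff = abs(mean(groups[a]) - mean(groups[b]))
                verdict = "different" if diff > lsd else "not distinguishable"
                print(f"{a} vs {b}: diff = {diff:.3f} in -> {verdict}")
    ```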

    • That’s interesting, Keith. I actually grew up a farmer (an 8th generation farmer, in fact), so I’m totally with you. It is very similar. And I can see how farmers as a group might be even more resistant to applied math than some in the shooting community. Both are places where old wives’ tales, almanacs, and some form of palm reading are all more accepted as guidance than rigorous statistical analysis. So you probably really appreciate the work that went on behind the scenes to make this approachable and not overly technical!

      I’m actually going to try to get through this whole series without mentioning p-values, t-tests, or F-tests. I talked around the concepts, and guys like you will recognize what’s behind some recommendations … I’m just trying hard not to talk over people’s heads. It’s certainly tough to know where to draw the line. This should probably be a 10-part series, but I know most people wouldn’t have the attention span for that … so I’m just trying to teach as much application as I can in 3 posts.

      Since you’ve done something similar, please chime in if you notice I miss a point that you think is important or if you have a simpler way to explain any of these concepts.

      Thanks,
      Cal

      • Hey Cal, you wrote:
        That is an interesting idea about the 3×10 shot groups being different when randomized. Are you referring to the firing order? Like you’d fire randomized order of shots compared to all of one batch then all of another? I just want to make sure I’m understanding you correctly.

        Correct. I would imagine taking the 3 groups of 10 and creating A, B, and C groups, then creating a random firing order (a spreadsheet will do it), and then firing on a set cadence per the random order at targets A, B, and C. We can eliminate a potential cause of variation by having a buddy keep track of A/B/C so the shooter is blind to it all. Easy. We could get fancier with tracking temp and wind, but just a simple random order may be of use.
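        Something like this minimal sketch (a hypothetical 10-shot-per-load setup, just to illustrate the idea) would spit out the blind firing order:

        ```python
        # Minimal sketch: build a blind, randomized firing order for three 10-shot loads.
        # The shooter only sees shot numbers and target letters; the buddy keeps the key.
        import random

        shots_per_load = 10
        loads = {"A": "Load 1", "B": "Load 2", "C": "Load 3"}  # hypothetical labels

        # One entry per shot (10 x A, 10 x B, 10 x C), then shuffled
        order = [letter for letter in loads for _ in range(shots_per_load)]
        random.shuffle(order)

        # Firing sheet for the shooter (which target to fire at, in what order)
        for i, letter in enumerate(order, start=1):
            print(f"Shot {i:2d}: fire at target {letter}")

        # Key the buddy keeps so results can be decoded afterward
        print("\nKey (buddy only):", loads)
        ```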

      • Yep. That is a very interesting thing. I love a double-blind test like that. If you want to really know the answer, a double-blind test will usually help remove any underlying bias.

        Thanks,
        Cal

  21. Can you clarify what you mean when using the term “Extreme Spread”? Is it group size on a target (distance between the furthest-apart holes) or the muzzle velocity high-low values from a chronograph?

    My opinion is that the former data is not that useful to collect, because the biggest variable is shooter skill. The latter I value even more highly than the SD of a sample (though of course when ES is low, SD is low as well).

    • Hey, Steve. Extreme Spread (ES) can represent both things you mentioned. If you are measuring velocity, then ES is the difference between your minimum and maximum velocities recorded. If you are measuring group size, then ES is typically expressed as the center-to-center distance between the two shots that are furthest apart. Math people sometimes refer to that same thing as “range,” but I use “Extreme Spread” because that is the common way that shooters refer to it.
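      If seeing it in code helps, here is a tiny sketch of both calculations with made-up numbers:

      ```python
      # Tiny sketch (made-up numbers): Extreme Spread for velocities and for a group on target.
      from itertools import combinations
      from math import dist

      # Velocity ES: difference between the fastest and slowest recorded shots
      velocities_fps = [2798, 2805, 2812, 2801, 2809]
      velocity_es = max(velocities_fps) - min(velocities_fps)  # 14 fps here

      # Group ES: center-to-center distance between the two shots farthest apart.
      # (x, y) impact coordinates in inches, relative to the point of aim.
      shots = [(0.10, -0.05), (0.32, 0.21), (-0.14, 0.08), (0.05, 0.30)]
      group_es = max(dist(a, b) for a, b in combinations(shots, 2))

      print(f"Velocity ES: {velocity_es} fps")
      print(f"Group ES:    {group_es:.2f} in")
      ```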

      I saw that you read Part 2 and commented on that post after this one. I dive way deep into whether ES or SD is the more effective statistic when it comes to muzzle velocity in Part 2, so I won’t repeat that here.

      And I’ll dive more into a better way to quantify dispersion in Part 3, which will come out in about a week. I don’t agree that the biggest variable in group size is always the shooter. That may be true for many shooters, and maybe even most shooters – but it isn’t true for those of us who fire thousands of rounds each year from a rifle. While you can’t completely eliminate shooter error, you can certainly minimize it to the point that it isn’t the largest variable. It takes a lot of practice, but it is definitely possible to reduce that factor so it isn’t the weakest link in the chain. As I said, we’ll talk more about that in Part 3, so stay tuned!

      Thanks,
      Cal

  22. Cal thank you for writing this article. I’m relatively new to shooting sports but I think this is the most important article of the year. The whole concept of measuring the accuracy of a load or rifle by the extreme spread of a group has always felt incomplete to me. Bridging the basic concepts from a Stats 101 class is a big leap forward in my view. Finally! In an age where all the major sports are driven by analytics, this is a perfect fit. Skeptics might argue that none of this matters in that moment when you are trying to steady the cross-hairs on target under stress. That might be true but this type of analysis certainly helps us make more informed decisions about our equipment, areas of focus for training, and in assessment of performance.
    Thank you and I look forward to reading the full series!

    Ryan

    • You bet, Ryan! Glad you felt like it was helpful. I do think these concepts can help a lot of shooters – new ones and veterans alike. The shooting industry is often dominated by gut decisions, old wives’ tales, and load development methods that aren’t based on solid statistical methods. That doesn’t mean we should all apply government-grade research methods to our hobbyist load development, but simply that we should be aware of the limits of what the data we collect is able to tell us. I also hope that it helps us understand how to get the most out of the shots we do fire, so that it leads to better decisions and ultimately helps us put more rounds on target.

      Your comment that “in an age where all the major sports are driven by analytics, this is a perfect fit” is a very interesting view. I hadn’t thought about it like that. I agree that this is basically applying some of those same concepts to shooting sports. (For those that haven’t watched the movie Moneyball, I’d highly recommend it!) There was a lot of resistance to applying statistics to make decisions in baseball and other sports, and shooting sports will likely be no different. Putting on a shift in baseball was something so many teams laughed at when the Rays started doing it, but EVERY team in baseball now does it because it’s effective. The math said it was a good idea, but it took a while for everyone to come around to it. There are stubbornly ignorant people everywhere! 😉

      Ultimately, I’m a very pragmatic guy. I feel like you should understand what a doctor’s recommendations are and why they are saying that, but that doesn’t mean you can never go against a doctor’s orders (if that makes sense). I heard a quote once that said you should understand why the fence is there before you take it down. I don’t personally fire 200+ rounds when doing load development. I used to, but today I see that as a waste of time for 99% of my applications. I primarily compete in field conditions under time constraints, and I’m firing either at targets that are really small or really far away from a prone position, or at mid-range targets from improvised positions off barricades. Shrinking a group from 0.5 MOA to 0.3 MOA just doesn’t typically have a measurable difference in hit probability in real-world field conditions.

      We all like to fire a tiny bughole group or see an SD of 5 fps. There is something deeply satisfying about that, and we all want to reduce the error of our ammo/rifle system to give us a larger margin of error on the target – because our shooting isn’t perfect. But ultimately, if you are shooting in field conditions under stress, we have to be practical about how much effort we should exert to eke out that last bit of precision or consistency. If you have limited time and resources (which we all do), you should make good decisions about where you invest them. If you haven’t read my “How Much Does It Matter?” series, I feel like those articles hit that concept head-on and in a way that is as practical and science-based as possible.

      I will hit on that last paragraph more in Part 3, so stay tuned for that. I feel like all of these articles are simply trying to help people understand what the “doctor’s orders” are (from a science/math-based perspective, or what experts in these areas would say), and then we can each decide how to apply those in our own personal situations. They’re fundamentally intended to dispel ignorance and support more educated decisions. I hope they embody these two quotes: “A wise man makes his own decisions, an ignorant man follows the public opinion.” and “In the age of information, ignorance is a choice.” And I’m certainly not using ignorant as a derogatory term or insult. I’m ignorant in a lot of areas! Ignorance is simply lacking knowledge in a particular area, and it’s true that there is a knowledge gap for most shooters when it comes to applying statistics and the scientific method to shooting and load development. I’m just trying to help close that gap as graciously and objectively as I can.

      Not sure why I got on that soapbox, but I hope the rambling made sense! 😉

      I do really appreciate your thoughtful comments and balanced perspective.

      Thanks,
      Cal