“Ah, man, that’s just a bunch of broscience.”
“This email is stupid. It’s all broscience.”
“Yeah, he’s just a broscientist.”
– Bro brosef, 2020
I have a big problem with the word broscience.
Namely, I have no fucking idea what it means, even though I get called out for my “broscience” on a semi-regular basis.
Is it an insult? A compliment? Just some cliché masculine bullshit throwaway term?
Does anybody actually know what they’re talking about when they say it? Or is that just the term fitness jabronis use to dismiss viewpoints they don’t agree with?
I can’t answer these questions, because broscience has become industry jargon. And the worst kind of jargon really, because even people in the industry can’t define it.
The Urban Dictionary (very reputable source, I know) defines broscience as, “Word of mouth knowledge passed off as fact, primarily among bodybuilders and weightlifters.”
But over the years, the word broscience has taken quite a beating since BroScienceLife brought it into our industry’s lexicon as the term for advice given by your typical gym meathead, regardless of their actual knowledge or expertise.
Recently, I’ve seen it used as a way to dismiss differing viewpoints more than anything else, an excuse to exit a difficult discussion.
Today, I want to take back the term broscience and reestablish a meaning that has become indiscernible.
In order to understand what broscience means, let’s break it down into its two root words: bro and science. Let’s start with science.
In recent years (and months), science has developed unintended and unproductive connotations.
How often do politicians say “science says,” or “science proves” as if science is a mythical God making the final call on all that is?
Fuck, how often do people say that in debate and discourse?
“Well, if you look at the science…” … all it proves is you’re annoying to argue with.
Science can never prove, by its definition. But we’ll get to that.
The word science comes from Old French, meaning “what is known, knowledge of something acquired by study” (1). This French word can be traced back to the Latin scientia which simply meant, “knowledge.”
The key part of that French definition is the acquired by study portion. After all, knowledge doesn’t appear out of nowhere.
Rather, knowledge comes from careful study.
In particular, careful study of the world around us where we ask questions, form hypotheses, design experiments to test hypotheses, and draw conclusions.
Surely, you learned this in middle school, and this process of acquiring knowledge is what we call The Scientific Method.
The Scientific Method is how we do our best to observe the world around us and make conclusions through experiments.
If I want to find out if squats improve speed, first I need a question: do squats improve speed? Then, I’ll need to form a hypothesis.
I would hypothesize that squats DO improve speed because they target the same muscles as sprinting, and will, therefore, make those muscles stronger which will allow them to put more force in the ground during a coordinated running movement and lead to an increase in speed.
There’s my hypothesis, now I need to do the experiment.
I would take two groups of athletes, test their speed, and have them do identical workout programs, except one group will have squats as the main exercise. Then, I’ll retest after a few months. Now, I have some data to either support or reject my hypothesis. In my imaginary experiment, let’s say the squatters run faster than the non-squatters.
I just proved squats improve speed. Right? Right?
There’s so much we don’t know about the experiment. What were the demographics of the group? Did they have training experience or not? What was the protocol for squats? Did both groups live the same lifestyles? How did I control the rest of their workouts?
There are so many variables that aren’t accounted for, so this is NOT proof that squats improve speed.
But, it is evidence. And maybe, after this, I’ll do more experiments with different demographics, rep ranges, and time lengths.
Those experiments provide more evidence that supports my original hypothesis, rejects it, or lands somewhere in between.
Perhaps, my hypothesis will adjust, become more specific as my understanding grows.
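To make the imaginary squat experiment concrete, the before-and-after comparison boils down to a couple of averages. Here’s a minimal sketch in Python; every number (and the 40-yard dash test itself) is invented purely for illustration:

```python
from statistics import mean

# Hypothetical 40-yard dash times in seconds, before and after the program.
# All numbers here are made up purely for illustration.
squat_before = [5.2, 5.0, 5.4, 5.1, 5.3]
squat_after = [5.0, 4.9, 5.2, 5.0, 5.1]
control_before = [5.1, 5.3, 5.0, 5.2, 5.4]
control_after = [5.1, 5.2, 5.0, 5.2, 5.3]

def avg_change(before, after):
    """Average change in sprint time; negative means the group got faster."""
    return mean(a - b for a, b in zip(after, before))

squat_change = avg_change(squat_before, squat_after)        # roughly -0.16 s
control_change = avg_change(control_before, control_after)  # roughly -0.04 s
```

With made-up numbers like these, the squat group improved more than the control group. That’s evidence for the hypothesis, not proof: all the uncontrolled variables above still apply.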
With well-formed scientific experiments, our understanding of the world grows and grows as evidence mounts. But we never learn the absolute truth; we only gather stronger evidence (or conflicting evidence that requires more experimentation).
At its heart, science is taking a question, designing the best experiment you can to answer it, and drawing conclusions from the results.
But it never proves. As the cool kids say, it’s falsifiable.
In fact, stronger evidence usually only brings up further questions. We start to wonder about mechanisms and specific parameters and design more experiments to answer those questions.
How many sets of squats turn out to be most effective for speed? At what weight? How frequently? How about split squats versus front squats?
These are more specific questions that require more experiments.
It’s these types of questions and experiments that have allowed the field of performance and health and fitness to continue to grow.
And, I, as a strength coach, use these experiments to help guide my decision making.
These experiments might get peer-reviewed by other scientists for validity, and if they pass, that strengthens the credibility of the evidence. Often, the evidence becomes so strong that we treat it as proof.
Gravity, for example, has been shown time and time again through experiments to be, in fact, real. The experiments all seem to support the theory of gravity. In other words, we get closer and closer to the truth, but we never FULLY understand.
Evolution might be a better example because, unlike gravity, it still has detractors.
But, if you look at all the evidence compiled on evolution, from Darwin and beyond, it strengthens and strengthens. Meanwhile, other possibilities fall flat not because they don’t make sense, but because there isn’t much evidence to support them.
Understanding this is one of the keys to science: that you’re confident in your findings but constantly skeptical of your beliefs.
And that, to me, is one of the beauties of the world: we get closer and closer to understanding…
But it only raises more questions — more experiments to do, more science.
Can you imagine where society would be if we stopped experimenting? If we ceased to ask questions?
Whether we’re curious about the best rep scheme to get beautiful pecs or how to get to Mars, the answers (or at least getting closer to the answers) lie in the same method: The Scientific Method.
Today, we have lots of amazing professionals in the world who dedicate their careers to performing experiments on certain subjects. In the fitness world, there are exercise physiologists and other researchers whose JOB is to create these experiments, help find answers to questions, and provide them to practitioners like you and me.
Because normal trainers don’t have large sample sizes or ways to isolate variables, which are key components of uncovering useful evidence.
There are lots of experiments that are poorly designed or funded with perverse incentives (like reports that pro-sugar groups funded studies that showed fat was evil — which of course was the prevailing narrative for decades and we now know isn’t true).
Then, there are the experiments that at the time are groundbreaking, but later on turn out to still be very far from the truth, like JJ Thomson’s “plum pudding model” of the atom that omitted a nucleus of positive particles — which we now know exist.
Thomson got us closer to the full understanding of subatomic particles. Without his discovery, Ernest Rutherford may not have connected the relationship between positive and negative particles when he discovered the nucleus of the atom. (wow my high school chemistry teacher would be so proud. I pulled that example out of my ass.)
We only get closer to the truth.
Science is not gospel, it’s a collection of experiments. And even groundbreaking, well-designed experiments have flaws.
This is important to remember whenever someone in a discussion uses a single research study to hold their viewpoint hostage. One study never explains it all; it’s only one piece of evidence with its own inevitable flaws or shortcomings.
Which leads us to how scientific research is often used to support unclear, often untruthful, and in some cases even sinister claims.
Because science is not perfect, and because we’re doing so many new experiments, there are almost always research studies that support whatever you want to believe.
It’s easy to extract truth from lies, to paraphrase an A Day to Remember lyric.
There’s evidence that squats are beneficial and detrimental to performance. That doesn’t mean there isn’t an answer, it means we need to go take a closer look at the studies, and maybe perform more.
But often, that’s not what people do.
If there’s one answer we want to be true, it’s common to gravitate towards that evidence and ignore what goes against it.
If you want sugar to be healthy, you can find research to say so (probably done on athletes who’ll use it for performance) and believe it, while ignoring the mountains of evidence that in most cases sugar leads to fat gain and other health complications.
If somebody is selling an ebook about why carbs are bad, they’re going to omit research that shows carbs can be beneficial.
If a politician wants to get elected, they will ignore research that goes against their case.
Scientific research has recently been abused by everyone from trainers to politicians, who handpick the research that supports what they’re incentivized to support rather than looking at all of the evidence and basing an opinion on that.
Let’s look at masks.
Masks in the United States became a partisan issue, with Republicans rejecting mask mandates and Democrats supporting them, generally.
If you look at all the research, the evidence supports masks’ effectiveness at reducing the spread of the coronavirus.
But of course, there are research studies that exist that show masks don’t help, or that they limit breathing, or don’t limit the spread of infection.
So I’m still getting emails like this from people in the fitness industry.
The problem here is not that the 50 studies they look at are bad (some of them are)… they just don’t look at the full picture. Most of these studies look at the effects of masks on people without an infectious disease (like COVID-19) and report some minor negative effects. (Oh, the irony of calling the scientific approach “unscientific.”)
Meanwhile, this curator of research conveniently IGNORES all the research that shows masks DO help slow the spread of COVID-19. A stance that nearly all infectious disease experts have adopted. Who are, you know, the scientists who study this for a living.
This is not a fault of science, or the scientific method; it’s the fault of this person, who only selects data that supports what they believe and ignores all the rest.
This leniency towards confirming what we want to believe spreads misinformation, and it’s one reason science, in general, can get a bad reputation: it doesn’t give clear answers.
I also got an email from this person (with a large audience in the fitness space) promoting a conspiracy theory that is beyond ridiculous.
Like… More ridiculous than faking the moon landing.
I want to make this point because it’s not that this person is stupid — it can be tempting to rush to that conclusion. It’s that, instead of using the scientific method as a framework to push against and question what they believe, they decided what they believed and then sought out any thread of evidence they could find.
Confirmation bias at its finest.
Then they present a crumb of evidence off the floor like a steak dinner of fact.
While it’s of course more complex than this, I believe that in many rhetorical situations, both sides succumbing to confirmation bias creates increasingly polarized situations — like how the Facebook fitness zealots will argue to no end about steady-state versus high intensity (HIIT) cardio.
And this kind of thinking is how you get bad broscience, because good broscience, in my view, actively pushes AGAINST confirmation bias and abides by the framework of the scientific method.
“Bro,” of course, comes from the shorthand for brother — but in our modern context it can mean anything from someone you’re very close to, to just, you know, some bro. In the context of broscience, you know the stereotype we’re thinking of.
Some guy with a tank top.
A blender bottle (with just water in it, always questionable).
Huge biceps and tiny calves.
These are all part of the stereotypical gym “bro.”
And thus, the prefix “bro” was attached to science, to refer to the type of advice this type of person would give you.
This, I imagine, is where the Urban Dictionary definition comes from. One of these stereotypical “bros” putting forth their opinion on training to whoever’s willing to listen.
And to this bro — I don’t invalidate your experience. Just because your thoughts on biceps training weren’t subject to a double-blind placebo study, doesn’t mean what comes out of your mouth is bullshit.
As John wrote about in 2015, the methods used by Arnold and other bodybuilders of his time have been called out as BS “broscience” … even though they, you know, worked. (It turns out, newer research is validating what Arnold knew from experience, but you can read more about that here).
However, your data points have likely not been subject to rigorous controls and the isolation of one main variable, which of course will leave questions about the validity of why you think preacher curls are the best biceps exercise.
Did you compare preacher curls to hammer curls? How did you define “best” and how did you measure outcomes? Is this just your experience, or have you seen similar results from a large sample size?
These are all important questions, not just for science, but for broscience.
Broscience is performing experiments while taking as many practical steps as possible to adhere to the scientific method.
It’s experimentation not unlike science, but it will have more flaws than normal because it’s less controlled and likely uses a smaller sample size, leaving more room for error. It’s not the type of experiment that deserves to be published in a scientific journal.
When I interned in college hockey, they had the whole team constantly hooked up to heart rate monitors during all training sessions. We also tested vertical jumps daily and recorded the velocity of their lifts with GymAwares.
With all of this data, we had lots of opportunities to perform experiments with the team.
In one simple experiment, players on the ice leaned into a typical “recovery position” during practices versus standing straight up.
They split the team into two groups, and measured their heart rate recovery. They found that those who hunched over with their hands on their knees had their heart rate recover quicker than those who put their sticks over their heads.
A few things the staff did well: They had randomized groups, evenly splitting up forwards and defensemen.
Then, after several weeks, the players switched positions and tested again, and it still showed they recovered better in the hands-on-knees position. That eliminated the possibility that one random group just luckily had players who recovered better.
They all did the same drill and generally had similar heart rates, but we were comparing each player’s recovery to himself, anyway.
And, we had no “conflicts of interest” as the cool scientists say, going into the experiment: We just wanted our players to recover better. And the data we collected supported that hands-on-knees is better for short-term recovery from aerobic activity.
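The staff did the real analysis off heart-rate-monitor exports, but the core comparison could be sketched like this (all player labels and numbers below are invented for illustration):

```python
from statistics import mean

# Hypothetical heart-rate drop (bpm) in the minute after a drill.
# A bigger drop means better recovery. All numbers invented for illustration.
hands_on_knees = {"P1": 28, "P2": 31, "P3": 27, "P4": 30}
stick_overhead = {"P1": 23, "P2": 25, "P3": 20, "P4": 26}

# Because the groups later swapped positions, each player can be
# compared to himself instead of trusting one lucky group split.
per_player_diff = {
    p: hands_on_knees[p] - stick_overhead[p] for p in hands_on_knees
}
avg_advantage = mean(per_player_diff.values())  # bpm in favor of hands-on-knees
```

With these invented numbers, every player recovers better hands-on-knees, which is the kind of within-player consistency that makes the crossover design convincing.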
The reason we talked about this during my tenure in 2019 was that a study came out that supported our findings (2).
This is as close to science as you’ll see in the strength and conditioning world, and it’s part of why (among many other phenomenal traits) their head strength coach got promoted to the NHL — for his ability to collect data, use it to perform experiments, and draw conclusions to ultimately make players — and the team — better.
Earlier this year, I had been hearing over and over from nutrition experts like Jim Lavalle and others in the sports performance world about the importance of supplementing with magnesium, because it’s so important for recovery and sleep, yet the vast majority of Americans, especially those who are active, are drastically deficient. An NHL strength coach even told me that in a few years he thinks we’ll be talking about magnesium the same way we talk about vitamins: as a no-brainer supplement.
So, I did my research on the best magnesium supplements and decided to try BiOptimizers Magnesium.
When I started using it, how would I gauge its effectiveness? By now you should know that the answer is the scientific method.
In particular, I thought tracking my sleep would be the best barometer for its effects. For the weeks before starting, I looked closely at my sleep stats from my activity tracker. My total hours of sleep probably weren’t the best indicator, because that depends on when I decide to go to bed and wake up more than anything. So I looked at my sleep latency (how long it took me to fall asleep) and sleep efficiency (how much of my time in bed I actually spent asleep).
The two weeks before taking magnesium, my sleep latency averaged 9 minutes, and my sleep efficiency averaged 86%. Now I had some control data, a key part of the scientific method.
Next, I decided on the dose: I would take the recommended serving (3 capsules) within an hour of bedtime.
Over the next three weeks, I measured my sleep stats with magnesium, and my sleep latency dropped to an average of 6 minutes, while my sleep efficiency improved to 92%.
So, it appears the magnesium did improve my sleep.
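If you track your own stats, the before/after comparison is nothing fancy. Here’s a minimal sketch; the nightly numbers below are invented, and only the averages mirror the ones I reported:

```python
from statistics import mean

# Hypothetical nightly sleep stats pulled from an activity tracker.
# latency = minutes to fall asleep; efficiency = % of time in bed asleep.
# Nightly values are invented; only the averages mirror my real data.
before = {"latency": [10, 8, 9, 11, 8, 9, 8],
          "efficiency": [85, 87, 86, 84, 88, 86, 86]}
after = {"latency": [6, 5, 7, 6, 5, 7, 6],
         "efficiency": [91, 93, 92, 90, 93, 92, 93]}

baseline_latency = mean(before["latency"])     # 9 minutes
magnesium_latency = mean(after["latency"])     # 6 minutes
baseline_eff = mean(before["efficiency"])      # 86%
magnesium_eff = mean(after["efficiency"])      # 92%
```

Writing the nightly numbers down (rather than eyeballing a weekly summary) is what lets you compute a real baseline to compare against.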
This is broscience, not science. I tried my best to control variables, but there were still extraneous factors that come up in life. And the effect could be a placebo (which is why double-blind scientific studies are so highly touted; they control for the placebo effect). Regardless, I used the scientific method as best I could to test one variable: magnesium supplementation. In particular, BiOptimizers Magnesium, which I highly recommend you pick up.
Anytime you try a supplement, the Scientific Method is the only way to measure what it’s actually doing for you. And for gym rats out there, it’s a great way to learn the art of self-experimentation.
All right, all right. While example #1 was as close as you can get to science without truly isolated data, and example #2 moved a bit further away but still stuck to the method, this example flings itself as far away from science as you can get while still using the Scientific Method. I’m using it for two reasons:
You don’t need a whole hockey team with heart rate monitors to use the scientific method.
The disclaimer here that I want to repeat is: because this takes place in such an uncontrolled, small-sample-size setting, it’s very, very far from any real data or evidence. Rather, it illustrates that even when you’re seemingly as far from science and a lab as you can possibly get, the scientific method can help you learn more about any situation.
Also: don’t do drugs. Stay in school. Informational purposes only. Some other thing I have to say so we don’t get sued.
One summer weekend in New York City, I took adderall for the first time (don’t tell my mother, even though I know she’s reading this).
Now, I didn’t have my first drink until I was 18, so this was kind of a big deal, and also my first foray into substances beyond alcohol. In this text sequence, you’ll see principles of good experimentation in action.
First of all, I knew the dose. So it wasn’t a random amount that I wouldn’t be able to gauge the effectiveness of.
Secondly, I had no alcohol in my system, to isolate variables. Then, I was also offered Molly on the same night but declined it because it would make the “findings” of my “experiment” unreliable. Here’s the text sequence from John and me (with some extra commentary).
I also didn’t have any reason to want the adderall to do anything, other than for, you know, fun (no conflict of interest). But there could have been a placebo effect, which is important to consider when I think about the effectiveness of the drug.
With this, I got a decent first understanding of what adderall feels like to me, untainted by other substances and in a dose I know. Next time I take adderall, it will just be more data to add to my experience of this one. If I hadn’t cultivated the principles of the scientific method, I would’ve had no way to even know what it did.
Once again, I caveat that THIS IS NOT SCIENCE. It’s very far from true isolated evidence. But using the concepts of the scientific method gave me a better understanding of the world, a mini-experiment which I can add to my bank of knowledge.
I’ll let you in on a secret: the best trainers all track data, test new variables, and are keenly aware of the scientific method, even when they’re just doing drugs at parties (okay John and I might be the only ones who do that last one).
In the hockey world, we test our sprint times and vertical jump multiple times per week. They’re worked into our training both as a testing tool and just as part of our training. If a player is stuck at a 27” vertical jump and he’s been doing the same program, then we can switch one variable we think could be the culprit and test what effect that variable has on his vertical jump.
And as a trainee, you should cultivate a similar mindset in your workouts, nutrition, and supplementation.
If you take a new supplement, you shouldn’t change anything else for the duration of the experiment. Don’t try magnesium and a greens supplement at the same time; you won’t know which one is helping with what.
Use all these little opportunities in your training and life to gain evidence to see a clearer picture of the world.
And then, write it all down.
Is it perfect? No.
No science is; even the best research labs in the world aren’t perfect. But it’s about adopting the approach of the scientific method: being open-minded, testing ideas, and getting ever so slowly closer to the truth.
This is how we as trainers, and as people who work out, should think about our training.
Do We Need a New Term?
Honestly, yes. We do. We need a gender-neutral term, for starters. Broscience is tainted by poor definitions anyway, despite my best efforts. And it’s filled with connotations of a gym rat sticking a needle of testosterone in his butt.
For now, anytime you hear someone say “broscience” push for clarification, and make sure you’re on the same page.
And, let’s all brainstorm what the new, gender-neutral term for unofficial science should be.
Until then, think about how you can use the principles of the scientific method to question your beliefs, and use the world as your platform for experimentation.