
What Makes Something Scientific?


A vendor tells you their new chip “uses AI for predictive maintenance.” A colleague claims their debugging technique “always works.” A research paper reports a breakthrough in room-temperature superconductors. How do you know which of these claims to take seriously? The answer has been debated for centuries, and it starts with a deceptively simple question: what makes something scientific?

The Demarcation Problem

The demarcation problem is philosophy’s name for a practical question: where do you draw the line between science and non-science? This is not an abstract concern. Engineers encounter the boundary every day.

Consider three claims:

Astronomy

“Mars will be at opposition on January 16, 2025, appearing at magnitude -1.4 in the constellation Gemini.” This is specific, testable, and could be wrong. It is a scientific claim.

Astrology

“Mars in Gemini means communication will be challenging this week.” This is vague, unfalsifiable, and reinterpreted after the fact to fit any outcome. It is not a scientific claim.

The difference is not about the subject matter. Both involve Mars. The difference is about the structure of the claim itself.

The Demarcation Spectrum
Pseudoscience             Gray Area                  Science
|------------|------------|------------|------------|
^             ^             ^            ^            ^
Astrology     "Quantum      Nutrition    Antenna      Orbital
              healing"      claims       gain specs   mechanics

Less falsifiable ------------------->  More falsifiable
Vague claims ----------------------->  Specific, testable
Reinterpreted after the fact ------->  Predictions stated in advance

Why Engineers Should Care

You might think this is a problem for philosophers, not engineers. But consider these engineering-adjacent claims:

| Claim | Scientific? | Why? |
|---|---|---|
| “This antenna design achieves 3 dB gain at 2.4 GHz” | Yes | Measurable, specific, could be wrong |
| “This is an elegant design” | No | Subjective, no test can refute it |
| “Our product uses AI” | Depends | If they mean a specific ML model with measurable accuracy, maybe. If they mean “we have some if-else logic,” no. |
| “This capacitor improves audio quality” | Depends | Measurable with distortion tests, or is it audiophile mysticism? |

The demarcation problem is not academic. It determines whether you can trust a datasheet, evaluate a research paper, or assess a technology claim.

Popper’s Criterion: Falsifiability



Karl Popper (1902-1994) proposed the most influential answer to the demarcation problem. His criterion is elegant and practical: a claim is scientific if and only if it is falsifiable.

Falsifiable does not mean false. It means there exists some possible observation or experiment that could prove the claim wrong. If no conceivable evidence could refute a claim, it is not a scientific claim.

How Falsifiability Works

  1. Start with a claim. “This voltage regulator maintains 3.3V output within 1% tolerance for input voltages between 4.5V and 12V.”

  2. Ask: what observation would prove this wrong? Measuring an output of 3.4V at an input of 6V would falsify it. The claim is specific enough that we know exactly what would count as a failure.

  3. If no observation could prove it wrong, the claim is not scientific. “This is a good design” cannot be falsified because “good” has no agreed-upon measurement. It may be a valuable aesthetic judgment, but it is not a scientific one.
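The three steps above can be sketched as an automated falsification test. This is a minimal illustration, not a real bench procedure; the measurement data below is hypothetical.

```python
# Falsification as a test: the regulator claim ("3.3 V within 1% for
# inputs of 4.5-12 V") is specific enough to automate. The bench data
# below is hypothetical, for illustration only.

NOMINAL_V = 3.3
TOLERANCE = 0.01  # 1%

def check_regulator(measurements):
    """Return the first measurement that falsifies the claim, or None.

    measurements: list of (input_voltage, output_voltage) pairs.
    """
    lo = NOMINAL_V * (1 - TOLERANCE)
    hi = NOMINAL_V * (1 + TOLERANCE)
    for vin, vout in measurements:
        # Only inputs inside the claimed 4.5-12 V range can refute the claim.
        if 4.5 <= vin <= 12.0 and not (lo <= vout <= hi):
            return (vin, vout)  # one counterexample refutes the claim
    return None  # claim survives this test: corroborated, not proven

# Hypothetical data: the 6.0 V point falsifies the claim.
data = [(4.5, 3.31), (6.0, 3.40), (12.0, 3.29)]
print(check_regulator(data))  # -> (6.0, 3.4)
```

Note the asymmetry, which is Popper's central point: a single out-of-tolerance measurement refutes the claim, but no number of in-tolerance measurements can prove it.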

Falsifiability in Practice

Consider how this plays out in engineering:

Falsifiable Claims (Scientific):
"The system responds within 10 ms to any input"
"Mean time between failures exceeds 50,000 hours"
"Power consumption is under 100 mW in active mode"
"The error rate is below 1 in 10^6 bits transmitted"
Unfalsifiable Claims (Not Scientific):
"The code is clean and maintainable"
"This architecture is future-proof"
"Our platform is enterprise-grade"
"This solution is best-in-class"

Notice that unfalsifiable claims are not necessarily wrong or useless. “The code is clean” might be a valuable professional judgment. But it is not a testable, scientific statement. The problem arises when unfalsifiable claims are dressed up as if they were testable.
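Each falsifiable claim in the list doubles as a test specification. As a minimal Python sketch, with `handle_input` as a hypothetical stand-in for the real system under test, the 10 ms latency claim becomes:

```python
import time

# Sketch: "the system responds within 10 ms to any input" names its own
# refutation condition, so it can be encoded directly as a test.
# handle_input is a hypothetical placeholder for the system under test.

LIMIT_S = 0.010  # 10 ms

def handle_input(x):
    return x * 2  # placeholder for the real system's work

def worst_case_latency(inputs):
    """Measure the worst observed response time over a set of inputs."""
    worst = 0.0
    for x in inputs:
        t0 = time.perf_counter()
        handle_input(x)
        worst = max(worst, time.perf_counter() - t0)
    return worst

latency = worst_case_latency(range(1000))
# A passing run does not prove the claim ("any input" is untestable in
# full); a single measurement over LIMIT_S would refute it.
print(latency < LIMIT_S)
```

Contrast this with “the code is clean”: there is no measurement you could put in place of `worst_case_latency` that would settle it.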

Pseudoscience: When Non-Science Pretends to Be Science



Pseudoscience is not simply “wrong science.” Wrong science can be corrected. Pseudoscience has a deeper problem: it is structured so that it cannot be corrected, because it is not falsifiable in practice.

Red Flags for Pseudoscientific Claims

The Pseudoscience Checklist

These patterns appear in pseudoscience, cargo cult engineering, and vendor hype alike. Any single red flag warrants caution. Multiple red flags together are a strong warning signal.

1. Unfalsifiable core claims. The central claim is stated in a way that no evidence could possibly refute it. “Our product improves wellness” cannot be tested because “wellness” is undefined.

2. Moving goalposts. When evidence contradicts the claim, the claim is reinterpreted rather than abandoned. “The test failed because the environment was not properly controlled.” “The prototype did not work because the components were not high enough quality.”

3. Appeal to testimonials instead of data. “Five customers love it” replaces controlled testing. Testimonials are selection-biased by definition: you never hear from the people for whom it did not work.

4. Conspiracy against the establishment. “Mainstream science rejects this because they are protecting their funding.” Sometimes mainstream science is wrong. But the appropriate response is better evidence, not conspiracy theories.

5. Excessive precision without calibration. “Our sensor achieves 0.001 degree accuracy” but no calibration certificate, no measurement uncertainty analysis, no traceability to a reference standard.

6. Irrelevant jargon. Using technical-sounding language that does not actually mean anything in context. “Quantum-enhanced blockchain AI synergy” is six buzzwords and zero content.

Engineering Pseudoscience Examples

| Domain | Pseudoscientific Claim | What Real Science Looks Like |
|---|---|---|
| Audio | “Oxygen-free copper cables produce warmer sound” | Blind A/B tests show no audible difference above a basic quality threshold |
| Networking | “Our proprietary protocol is unhackable” | No system is provably secure; responsible claims specify threat models and known limitations |
| IoT | “Our platform uses AI” (meaning: a lookup table with thresholds) | Specify the model architecture, training data, and accuracy metrics on a test set |
| Power | “Green energy harvesting powers the device indefinitely” | Specify the energy budget, harvesting conditions, duty cycle, and worst-case scenario |

Case Study: Cold Fusion (1989)



On March 23, 1989, Martin Fleischmann and Stanley Pons held a press conference at the University of Utah. They claimed to have achieved nuclear fusion at room temperature in a tabletop electrochemistry experiment. If true, it would have solved the world’s energy problems.

What Happened

  1. The claim. Fleischmann and Pons reported excess heat from a palladium electrode immersed in heavy water (deuterium oxide). They attributed this excess heat to nuclear fusion occurring at room temperature inside the palladium lattice.

  2. The announcement. Instead of publishing in a peer-reviewed journal and waiting for replication, they held a press conference. The University of Utah pressured them to announce quickly, partly to secure patents and funding priority.

  3. The replication attempts. Laboratories around the world tried to reproduce the results. Most failed. A few reported positive results, but these were inconsistent and could not be reliably repeated.

  4. The verdict. Within months, major laboratories (MIT, Caltech, Harwell) reported negative results. A Department of Energy panel found no convincing evidence for cold fusion. Fleischmann and Pons’s calorimetry was found to have significant errors.

  5. The aftermath. Fleischmann and Pons left the United States. A small community of researchers continued cold fusion work under the name “Low Energy Nuclear Reactions” (LENR), but the field has never produced reproducible, independently verified results.

Why Cold Fusion Matters for Engineers

The cold fusion episode is instructive not because Fleischmann and Pons were frauds (most historians believe they were sincere but mistaken). It matters because it illustrates several critical principles:

Reproducibility is non-negotiable. A result that cannot be independently reproduced is not established science. In engineering terms: if only one team can make the prototype work, you do not have a product.

Press conferences are not peer review. Announcing results before they have been independently verified creates pressure to defend the claim rather than test it. In engineering: shipping before testing is the same mistake.

Extraordinary claims require extraordinary evidence. Room-temperature fusion would violate well-established physics. That does not make it impossible, but it does mean the evidence must be proportionally strong. A single experiment with possible calorimetry errors is not sufficient.

The Engineering Parallel

Every time a vendor claims performance numbers that seem too good to be true, apply the cold fusion test: Can the results be independently reproduced? Have they been reviewed by people with no financial stake in the outcome? Are the measurement methods sound?

Evaluating Engineering Claims



Armed with Popper’s criterion and the pseudoscience red flags, you can build a practical framework for evaluating the claims you encounter in engineering work.

The Claim Evaluation Framework

  1. Is the claim falsifiable? Can you identify a specific observation or measurement that would prove it wrong? If not, it is not a scientific or engineering claim, regardless of how technical it sounds.

  2. Is the evidence appropriate? Testimonials, case studies, and demos are not sufficient for strong claims. Look for controlled tests, independent replication, and measurement uncertainty.

  3. Are the conditions specified? “Works in the lab” and “works in production” are different claims. What are the operating conditions, tolerances, and edge cases? A datasheet that only specifies typical values without min/max ranges is hiding information.

  4. Who benefits? This is not about cynicism; it is about understanding incentives. A vendor’s benchmark of their own product is less trustworthy than an independent review. An internal test report is less trustworthy than an external audit.

  5. What would change your mind? Ask the claimant what evidence would cause them to abandon their position. If nothing would change their mind, they are not doing science; they are doing advocacy.

Reading Datasheets Critically

Datasheets are engineering’s most common source of technical claims. Here is how to read them with a philosopher’s eye:

| Datasheet Section | What to Look For | Red Flag |
|---|---|---|
| Absolute Maximum Ratings | These are destruction limits, not operating conditions | Confusing absolute max with recommended operating range |
| Electrical Characteristics | Conditions column (temperature, voltage, load) | Values given without specifying conditions |
| Typical vs Min/Max | “Typical” means “what we measured on a few samples” | Only typical values, no guaranteed min/max |
| Application Circuit | Reference design, not a guaranteed working circuit | “Guaranteed to work” claims for reference designs |
| Test Conditions | Temperature, humidity, load, measurement setup | Unspecified or unrealistic test conditions |
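The distinction in the first row, absolute maximum ratings versus the recommended operating range, is worth encoding explicitly in design reviews. A minimal sketch, with hypothetical voltage limits loosely modeled on a 3.3 V logic part:

```python
# Sketch of the "absolute max vs recommended" distinction as a check.
# The limits below are hypothetical, for illustration only.

ABS_MAX_VIN = 6.0             # beyond this: risk of destruction
RECOMMENDED_VIN = (3.0, 3.6)  # range where behavior is guaranteed

def classify_operating_point(vin):
    """Classify a supply voltage against the (hypothetical) datasheet limits."""
    lo, hi = RECOMMENDED_VIN
    if vin > ABS_MAX_VIN:
        return "destructive: exceeds absolute maximum rating"
    if lo <= vin <= hi:
        return "ok: inside recommended operating range"
    # The part survives here, but the datasheet guarantees nothing.
    return "undefined: below absolute max, outside recommended range"

for v in (3.3, 5.0, 7.0):
    print(v, classify_operating_point(v))
```

The middle category is the one that causes field failures: a 5.0 V input on this hypothetical part does not destroy it, but every guaranteed parameter in the datasheet is void there.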

Reading Research Papers Critically

The Five-Question Paper Review

For any research paper or technical report, ask these five questions before accepting the conclusions:

  1. What is the hypothesis, and is it falsifiable? Can you state in one sentence what the paper claims, and what evidence would refute it?

  2. What are the controls? Did they compare against a baseline? Against existing solutions? Against doing nothing?

  3. Sample size and statistical significance. “We tested three samples” is not enough for a strong conclusion. What is the confidence interval?

  4. Conflicts of interest. Is the study funded by the company whose product is being tested? This does not invalidate the results, but it changes how much independent verification you should require.

  5. Replication. Have other groups reproduced the results? If the paper is new, has the methodology been described in enough detail for replication to be possible?
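Question 3 can be made quantitative. If a test produces zero failures in n trials, the exact one-sided 95% upper confidence bound on the failure probability is 1 − 0.05^(1/n), often approximated as 3/n (the “rule of three”). A short sketch shows why three samples prove very little:

```python
# Why "we tested three samples" is weak evidence: with zero failures in
# n trials, the exact one-sided 95% upper bound on the failure
# probability p comes from solving (1 - p)**n = 0.05.

def upper_bound_95(n_trials, n_failures=0):
    """Exact 95% upper confidence bound on failure probability,
    valid only for the zero-failures case."""
    if n_failures != 0:
        raise ValueError("this closed form only covers zero failures")
    return 1 - 0.05 ** (1 / n_trials)

for n in (3, 30, 300):
    print(n, round(upper_bound_95(n), 4))
```

Zero failures in three samples is still consistent with a failure rate above 60%; it takes roughly 300 clean trials before the bound approaches 1%.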

From Philosophy to Practice



The demarcation problem is not just a philosophical puzzle. It is a daily engineering skill. Every time you write a specification, you are making a falsifiable claim (or you should be). Every time you run a test, you are attempting falsification. Every time you read a datasheet or a research paper, you are evaluating someone else’s claims against the evidence.

Popper did not invent testing. Engineers were testing things long before Popper was born. But Popper articulated why testing works and what makes it scientific, providing a framework that makes you a more rigorous thinker.

Key Takeaways

Falsifiability Is the Criterion

A claim is scientific if it can be proven wrong by some observation. Claims that cannot be tested are not scientific, no matter how technical they sound.

Watch for Red Flags

Unfalsifiable claims, moving goalposts, testimonials instead of data, and irrelevant jargon are warning signs. They appear in pseudoscience, vendor marketing, and sloppy engineering alike.

Reproducibility Is Non-Negotiable

If only one person can get the result, it is not established. This applies equally to scientific experiments and engineering prototypes.

Ask What Would Change Your Mind

If no evidence would cause someone to abandon a claim, they are not doing science. Apply this test to others and to yourself.

Looking Ahead

In the next lesson, we will look at the scientific method itself, not the textbook version, but the messy, iterative process that engineers actually use. You will see that debugging is hypothesis testing, design reviews are peer review, and every test plan is an experiment, whether you think of it that way or not.

Exercises



  1. Classify claims. Take three claims from a recent datasheet or product page you have encountered. For each one, determine whether it is falsifiable. If it is, describe the experiment that would refute it. If it is not, rewrite it so that it becomes falsifiable.

  2. Red flag audit. Find a product marketing page (any product in your domain). Count the number of pseudoscience red flags from the checklist in this lesson. How many did you find?

  3. The cold fusion test. Pick a recent “breakthrough” announcement in your field. Apply the cold fusion analysis: Was it peer-reviewed before announcement? Has it been independently replicated? Are the measurement methods sound?

  4. Personal falsification. Think of a technical opinion you hold strongly (“language X is better than language Y,” “framework A is superior to framework B”). State it as a falsifiable claim. What evidence would cause you to change your mind?



© 2021-2026 SiliconWit®. All rights reserved.