One of my favorite problems in philosophy has always been the classical problem of induction. (I will explain what it is in a moment!) Admittedly, this is partly because at barbecues and other social gatherings, the problem of induction really brings to the fore two things I passionately want to share with others: (1) what philosophy is and what philosophers today even think about; and (2) how mind-blowing—and fun—some of these philosophical problems really are. Philosophy produces a lot of food for thought, and I guess I’m just a big Hungry Hungry Hippo®.
WHAT IS THE PROBLEM OF INDUCTION?
What is the problem of induction? To answer that, it serves us well to consider first what “induction” even is. Induction is a form of reasoning. What makes inductive reasoning unique is how it moves from a set of past, particular cases to a kind of general law or claim. Let’s take the most common example presented in philosophy classrooms:
1. The sun has always risen.
2. Therefore, the sun will rise tomorrow.
Let’s break it down a little further to really highlight what is meant by “past, particular cases to general claims”:
1. Yesterday, I saw the sun rise.
2. The day before yesterday, I also saw the sun rise.
3. The day before that I saw the sun rise, too.
4. Therefore, the sun will rise tomorrow.
A more charming example of inductive reasoning would be the following:
1. That bunny is gray.
2. Hey, that bunny is gray, too!
3. All bunnies I’ve observed are gray.
4. Therefore, all bunnies are gray! (If I ever see another bunny, it’ll surely be gray!)
Hmm, you might feel a little suspicious. You might think to yourself, “Wait, just because all the bunnies you’ve seen are gray doesn’t necessarily mean that all bunnies are gray… You can’t say that!” Now, if you had this thought, you’d be right; inductive reasoning is not “logically valid.” Indeed, what makes an argument logically valid is that its conclusion necessarily follows from its premises. For instance, we noticed that the conclusion “All bunnies are gray!” didn’t necessarily follow from our observations about individual bunnies. We can easily imagine, say, peering into the next room or flying to Rabbit Island in Japan and seeing an orange bunny.
Okay, Chris, so inductive reasoning is stupid and wrong. We already know that people who make sweeping generalizations are silly and usually racist. So why does this matter? Well, it matters a lot because inductive reasoning is the primary mode of reasoning used in science. Does this mean, then, that science is logically bankrupt, stupid and wrong?! Does that mean we should throw it out? No, surely it needn’t be that way.
We have a deep trust in science. Consider that if we didn’t, we probably wouldn’t get into our cars, for fear of them suddenly exploding or doing silly backflips. Indeed, there is very little we feel more certain about, or more justified in, than our scientific knowledge, and as a result, we should feel strongly inclined to defend science against this awful charge of logical invalidity! With that in mind, I will now go over what seem to be the two most common defenses of scientific induction that I’ve had the genuine pleasure of hearing (especially in bars), and then show why they are unsuccessful.
Before I do, though, let me give an example to show that science really does, at least in large part, depend on induction.
1. Every piece of iron we’ve observed or tested is magnetic.
2. Therefore, iron is magnetic.
Surely we haven’t observed and tested every piece of iron on Earth, let alone all of the iron in our galaxy or the universe. Nevertheless, we are confident in asserting that iron is magnetic, and we expect it to be so whether we dig some up in France or find it in a faraway galaxy like Andromeda.
What we want to defend, then, is the claim that our scientific knowledge can be rationally justified even though it relies on a logically invalid process: induction. That is, we are seeking a justification for scientific induction.
THE FIRST DEFENSE—IT WORKS
The first defense of scientific induction is to say, “Fine! You can call it logically invalid all you want, but at the end of the day, it works. And that’s what matters.”
It is certainly true that it works, but is this a good justification for induction? Let’s think about what it means to say that induction works and is, on that score, justified. To say that induction works is to say that every time we have employed inductive reasoning, it has worked or has given us good answers. Moreover, because it’s worked essentially every time we’ve used it, we’re justified in believing in it, using it, and so on. But isn’t that just what induction itself is: trusting that future cases will be like past ones? What reason do we have to think that it will continue to work, apart from the fact that it has worked for us in the past?
We’ve just tried to justify induction using induction. That is circular, and so will not do.
THE SECOND DEFENSE—PROBABILITY
The second defense is to argue that the objection is, in some sense, a kind of straw man. I’ve heard, for example, that “scientific conclusions that depend on induction don’t say that something is 100% for sure certainly true. They just say that we’re 99.9% sure it’s true, or that it’s just very probably true.” A discernible virtue of this defense is that it enjoys a veneer of humility. It is admirable not to tread so boldly as to say that you’re 100% super certain about something.
Unfortunately, we’re still left in a bind. Our justification now depends on the notion of probability. And though this suggestion appears entirely transparent at first, it isn’t. What does probability … mean? Let us take the standard interpretation of probability taught in week one of statistics courses: the “frequentist interpretation of probability.” (There are others!) Wikipedia will be helpful, as always:
Frequentist probability is a standard interpretation of probability; it defines an event’s probability as the limit of its relative frequency in a large number of trials.
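To put the same idea in symbols (just a rough sketch, where $n_A$ is my notation for the number of trials in which an event $A$ occurs out of $n$ trials total), the frequentist interpretation defines

$$P(A) = \lim_{n \to \infty} \frac{n_A}{n},$$

that is, the probability of $A$ is whatever value the relative frequency settles toward as the number of trials grows without bound.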
Let’s take a simple example. We all agree that heads and tails each have a 50% chance of coming up (a 0.5 probability). What this means is that if we took a coin and flipped it 100 times, we should pretty much expect to see 50 of the flips be tails and 50 of them be heads. Of course, sometimes when we flip a coin, say, four times, we get four heads in a row. But we don’t think that the chance of getting heads is actually 100%. This is because when we say that something has so-and-so probability, we are saying that if we kept flipping the coin many, many times, it would actually even out at 50%.
In real life, we don’t have an infinite number of tosses, even if we could gather a bunch of very strange people in a (padded) room to flip coins for a long time. Let’s say these strange people flipped a coin 100 times and observed that 48 tosses were heads and 52 were tails. We would still say that the probability of getting heads or tails is 0.5, because if we flipped the coin a thousand times, or a billion times, and so on, we’d see this gap between 48 and 52 narrow, evening out to 50%. By our definition, then, the probability is 0.5—a 50% chance.
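If you’re curious what this “evening out” looks like in practice, here is a minimal sketch in Python (the seed and the checkpoints are arbitrary choices of mine): it simulates a fair coin and prints the running relative frequency of heads, which drifts toward 0.5 as the number of tosses grows.

```python
import random

random.seed(42)  # arbitrary seed, just so the run is repeatable

heads = 0
checkpoints = {10, 100, 1_000, 10_000, 100_000, 1_000_000}

for n in range(1, 1_000_001):
    # One simulated toss of a fair coin; count it if it comes up heads.
    if random.random() < 0.5:
        heads += 1
    # Report the running relative frequency of heads at a few checkpoints.
    if n in checkpoints:
        print(f"After {n:>9,} tosses: relative frequency of heads = {heads / n:.4f}")
```

Of course, the simulation simply builds in the assumption that each new toss behaves like the past ones, which is exactly the assumption we are about to question.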
What does this mean for our justification of scientific induction? We said, as our defense, that scientific induction only gives us probabilistic answers. The problem, however, is that our very account of probability involves taking a bunch of particular past cases and generalizing to future ones. Why should we expect that, as we keep flipping the coin, the relative frequency will tend toward this limit of 0.5? Why couldn’t it be that at toss one billion, the coin goes haywire and lands heads forever from that point onward? How do we know? What we assume is that the behavior of the individual cases we’ve observed (whether they’re individual coin tosses, individual sunrises or individual bunnies) will extend into the future. The worst part is that this is how we have literally defined probability!
Even if we say that induction is justified probabilistically, what we are still actually saying is that induction is justified on the basis of induction, since our very concept of probability depends on inductive reasoning. We have, again, provided a circular justification that just won’t do.
WHY I STUDY PHILOSOPHY
I have always found this problem mind-blowing. For me, the most striking part was realizing that our defense of induction through probability (which at first glance looks promising) also depends on induction, leading us in a circle. Examining things closely and discovering surprising results like this is the fun of it, and it is what makes doing this kind of thinking so exciting and, really, kind of beautiful.
I remind myself that the bigger-picture problem is that inductive reasoning plays a primary role in science, an enterprise we love and trust. This presents us with a kind of paradox—desperately in want of a solution—that grips and arrests me deep inside. If, at this point, you are thinking, “Okay, but no. That’s wrong. But wait. But because …”, then you’re doing philosophy too. It’s almost like we, as thinking beings, can’t help it.
Rest assured that there are more sophisticated and ingenious defenses of scientific reasoning and practice out there. But all of them require serious thinking, work and research. That is what philosophers do, and it stirs my heart and mind enough to be the reason why I study philosophy.
—Chris Nguyen