Monday, February 26, 2007

What scientists believe and what they can prove (with a flowchart for Sir Karl Popper)

Posted on: February 26, 2007 1:12 PM, by Janet D. Stemwedel

On the post in which I resorted to flowcharts to try to unpack people's claims about the process involved in building scientific knowledge, Torbjörn Larsson raised a number of concerns:

The first problem I have is with "belief". I have seen, and forgotten, that it is used in two senses in English - for trust, and for conviction. Rather like with "theory", the weaker term isn't appropriate here. I would say that theories give us trust in the repeatability of predicted observations, and that kind of trust counts as knowledge. In fact, the trust that repeated observations give already counts as knowledge.

The second problem I have is with "the problem of induction". Science has a set of procedures that observably generates robust knowledge, and the alleged problem is seldom seen. When the terrain and the map don't agree, junk the map.

The third problem I have is with the specific diagrams. Real scientific knowledge production will not yield to any one diagram. So for the philosopher who raises a hypothetical "problem of induction" we could turn the question around and ask why the obvious "problem of description" (which ironically is a real problem of induction :-) isn't bothersome. The scientist's answer would probably be as above: "eppur si muove".

... Without feeling like testability is the end-all of science, the diagram is slanted away from testing towards a weaker and, in the end, nonfunctional descriptive science. Whether we call tested knowledge "a conclusion" or "a tentative conclusion" is irrelevant IMHO; it is a conclusion we will (have to) trust in.

The fourth (oy!) problem I have is with the conflated description the diagram alludes to. In the text there is a distinction between individual scientists and the scientific enterprise. Different entities will obviously use different approaches to knowledge, and if the individual doesn't need to trust her findings, the enterprise relies on such trust.

These are reasonable concerns, so let me say a few words to address them.

I'll start with the fourth concern, the relation between what's going on with the individual scientist and the larger community of scientists working together to build knowledge. While some thinkers have framed the problem of building objective knowledge as one that depends on each individual scientist being highly objective and switching off his or her own biases, others (including Frederick Grinnell and Helen E. Longino) have put the burden of objectivity primarily on the community -- bias is stripped out of what ends up being identified as scientific knowledge when the community "checks the work" of individual scientists within it.

Myself, I'm inclined to think that the community has an easier time being more objective when each of the individuals within that community is doing his or her best to be aware of, and unmoved by, his or her own biases. This may be psychologically challenging, but it's not impossible. Certainly, the process of trying to persuade other scientists that you've found something interesting puts you in touch with the idea that others in your community may not share your hunches.

So, it might be advisable to have different flowcharts for what the individual scientist is doing and what the scientific community is doing. But given that each individual scientist is (or might be) striving to be as hardheadedly objective as the community of individual scientists working in concert, we might be able to get away with using the community-level process as an idealized model of the individual-level process.

Torbjörn Larsson's third concern is also related to the worry that a lot is being idealized in these charts -- that the actual process of building scientific knowledge "on the ground" is messy and can't be properly captured in a single road map. I agree. It's best to think of the flowcharts as trying to capture the process of justifying scientific claims, not the process by which you come up with them or get the experiment to work or what have you.

And justification is the issue at the heart of the problem of induction, which seems to be the sticking point in Torbjörn Larsson's second and first concerns. What is it that makes a claim count as scientific knowledge? There is an operational kind of answer to this question: here are the steps you need to take to support your claim in order for scientists to regard it as playing a particular role in the scientific discourse (whether you want to identify the claim thus supported as "credible" or "convincing" or "the best available explanation of the phenomena" or something else).

But there's also a bare-knuckles logical warrant sort of answer to this question, and this is where the problem of induction comes in. The problem of induction is a worry if you think knowledge ought to come down to claims about which you need entertain no doubts. If you want your claim to be unsinkable before you call it knowledge, then you can't laugh the problem of induction off as a mere philosophical trifle.

The observations we've gathered so far don't provide empirical evidence about the observations we haven't yet made. As regular as the phenomena in the universe seem to be, we've only observed a fraction of all the things we could observe, and no one set of inferences we could draw from the data now in evidence is the only set of inferences that fit these data.

Sir Karl Popper didn't see the problem of induction -- that inductive inferences drawn from limited data could go wrong -- as something that could be "solved". However, he thought that the methodology of science avoided the problem by not identifying conclusions arrived at through inductive inference as "knowledge" in the strong sense of "there is no way this could fail to be true". Here's Popper's picture of the process of building scientific knowledge:

Notice that Popper doesn't think it matters all that much where your hypothesis P comes from. Maybe it comes from lots of poking around and observing your phenomena. Maybe it comes from that recurring nightmare of the snake biting its own tail. It's not important. The thing that can make P a respectable scientific claim is that it is tested in the right kind of way.

How it is tested, for Popper, comes down to working out the observable consequences that would follow if P were true, and especially the things we should not be able to observe if P were false (the risky predictions that only P would lead us to expect). With these predictions in hand, you make your observations. If your observations don't match your predictions from P, they let you deduce that P cannot be true, and you achieve as much certainty as you can hope for. Since your conclusion that not-P is the conclusion of a deductive argument, you can bet the farm on it.
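In schematic form (with P the hypothesis and O an observation deduced from it), the asymmetry being relied on here is just the difference between a valid and an invalid argument form:

\[ \big[(P \rightarrow O) \wedge \neg O\big] \;\Rightarrow\; \neg P \qquad \text{(modus tollens: deductively valid)} \]
\[ \big[(P \rightarrow O) \wedge O\big] \;\not\Rightarrow\; P \qquad \text{(affirming the consequent: not valid)} \]

A failed prediction settles the matter deductively; a successful one does not.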

If, on the other hand, the observations match your predictions from P, Popper says that you haven't established P with certainty (since you come to P at the conclusion of an inductive argument, and new evidence might undermine that conclusion). So, you go through the whole process again. You can't, as far as Popper is concerned, conclude on the basis of all manner of successful observations (and an utter lack of observations that contradict P) that P is true -- just that it has (so far) survived all attempts at falsifying it.
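To make that loop concrete, here's a minimal sketch in Python of the test-and-retest cycle described above; the names (predict, observe) and the idea of iterating over a list of tests are illustrative assumptions of mine, not anything drawn from Popper or from the flowchart itself.

    # Illustrative sketch of the Popperian test loop.
    # predict(P, test) stands in for the outcome deduced from hypothesis P;
    # observe(test) stands in for the outcome we actually get.
    def popperian_status(P, tests, predict, observe):
        for test in tests:
            expected = predict(P, test)   # consequence deduced from P
            actual = observe(test)        # what the world actually shows
            if actual != expected:
                # Modus tollens: the prediction failed, so P cannot be true.
                return "falsified"
        # No contradiction found. This does NOT establish P; it only records
        # that P has survived every attempt at falsification so far.
        return "corroborated so far"

The asymmetry is the point: a single mismatch ends the loop with a definite verdict (not-P), while any number of matches only earns the weaker "so far".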

Does taking Popper seriously mean that scientists can't ever draw positive conclusions? I don't think so. That scientists acknowledge their conclusions are tentative, and could be updated in the face of future data, strikes me as a recognition that inductive inference doesn't come with a guarantee. This recognition doesn't mean you're not allowed to use inductive inference, but rather that you have to be at least a little cautious about the weight you place on the conclusions drawn with it.

To the extent that using induction has generated pictures of the world that hold up to scientific scrutiny, inductive inference is a useful tool. Success to date is not, of course, a guarantee that inductive inference will always work, any more than the fact that the phenomena in our world seem reassuringly regular is a guarantee that they will remain so.

"Conviction" for a scientist, then, is not: "From this day forward, I am committed to P and nothing you could show me will ever shake my commitment to P." Instead, we have something like: "Given the data amassed, and the stringent tests which P has passed, and the current lack of other claims that fit the phenomena as well and have held up as well to our testing, I'm committed to P. I'd be surprised if the situation were to change, but it could, in which case, I may update my view."

For a belief to become a scientific conviction seems to require certain kinds of justification (from empirical data, theories, etc.). A belief without that kind of justification behind it is just a belief -- nothing wrong with that, but it has no special status in scientific discussions. The problem of induction is concerned with what we can prove. It's a matter of logic. To the extent that scientists find it fruitful to draw inductive inferences, they can, so long as they recognize (as they generally do) that the careful justifications that they offer don't quite meet the level of deductive proof. Still, they are good justifications, and a claim backed by these will be on better scientific footing than a claim without such justifications.


reposted from: http://scienceblogs.com/ethicsandscience
