Monday, September 29, 2008

Can virtue be taught?

Reading through the Meno recently, which begins with the question of whether virtue can be taught. Socrates, in his conversation with Meno, initially comes to the conclusion that virtue is a sort of knowledge and that, as knowledge, it can be taught. But then he rejects the view that virtue can be taught, because there are no teachers of virtue (93a-94e). Presumably everyone can be a teacher of virtue, if we take seriously Protagoras' Great Speech in Plato's Protagoras. But Socrates runs into the great problem that great virtuous exemplars don't seem to invariably (or even frequently) have great virtuous exemplars for children. I don't think we can argue with the empirical facts. I have noticed too that great artists don't tend to produce great artist children. One logical explanation is that one simply cannot teach another how to be a great artist. It would seem that one's own children would be the first people one would try to teach if one could, but if greatness is unteachable, then it makes sense that they wouldn't be able to match the standards set by their parents.

But there might be other reasons beyond the simple explanation that virtue isn't teachable. For one, being a great virtuous exemplar or a great artist is a full-time job (in fact, more than a full-time job, an over-full-time obsession even), and teaching someone how to be great in one's image is also a full-time job. Simply put, these great virtuous men cannot teach greatness because they don't have time. Socrates tries to dismiss this argument by saying that these great men would find teachers of virtue for their children in their stead, but this assumes that one could easily find such a teacher. And it entirely ignores the time issue: why would there be anyone who's not out doing great things and who has time to idle away teaching other people's kids how to be great? If such teachers existed they would be in hot demand, but if the best potential teachers are those who are doing the most great things, then they certainly have little time for teaching and would certainly prefer to focus their efforts on their own children.

Another factor is that teaching virtue may be a totally different thing from being virtuous. Teaching ain't as easy as it looks. Teaching someone something and having that lesson stick is not as simple as simply saying it. In the case of virtue, punishment is an important part of the teaching process, and it's easy to imagine that a person who's a good example of virtue may not know how to mete out judicious and effective punishment.

In addition, we might simply have a sampling bias of sorts. In other words, saying that great individuals can't quite bring their children up to their own exceptionally high standards doesn't exactly prove that virtue can't be taught, if their children are still better than average, even though they fall short of their parents' standard of greatness. Great artists may not produce great artist children, but they frequently do produce talented and intelligent artists. They're just not quite as good as their parents, though still better than most of the rest of us. We might still be able to say that one can't teach one's children how to be a great artist or virtuous exemplar, because what really defines that last extra quality that sets one apart as great is precisely an originality that one must discover on one's own. But still, perhaps virtue can be taught.

Thus, if we're to take Socrates' original arguments without this unconvincing problematization, it would seem virtue can be taught. Or can it?

Thursday, September 25, 2008

Healer's Fallacy

I want to look at a particular type of causal fallacy, which we might lump under "magical thinking," and which I have dubbed the "healer's fallacy." I really can't think of a better name, though I'm open to suggestions. Let me explain it first. Since the body tends to recover on its own from most minor illnesses, it becomes easy to confuse things that actually cure the disease with things that do nothing or even harm you slightly. For example, if you get a cold, then most likely you'll recover in a week or so. So, if you decided during that week to consume twice a day a shot of vodka mixed with mustard and Worcestershire sauce, you might be led to believe that it did some good. If you took a daily dose of homeopathic medicine, you might be led to the same conclusion. Maybe it would do some good as a placebo, but even if there is no placebo effect, it can still appear to have cured your ailment. This can lead to confusion about which treatments are effective in the absence of carefully constructed experiments, and it propagates the myth of apparently effective home remedies. Thus, I would define the "healer's fallacy" as "a causal fallacy directly resulting from apparently affecting something that would happen anyway on its own." I use "on its own" very informally and broadly. Another good example would be an orgonite cloudbuster (there are many videos of these things on YouTube). The cloudbuster is basically a solid block of resin, containing crystals and metal shavings, with some long copper tubes sticking out, that can supposedly break up clouds when placed on the ground beneath them. Of course, the thing about clouds is that they are very fleeting and unstable, and given time will always dissipate "on their own" (informally speaking). Thus, with a little patience, the cloudbuster will always seem to work, apparently confirming its effectiveness.
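To make the fallacy concrete, here's a minimal simulation (my own toy numbers, not real medical data): if colds clear up on their own in about a week, then a do-nothing "remedy" will appear to work every single time.

```python
import random

random.seed(0)

def days_to_recover():
    """A cold resolves on its own in roughly 5 to 9 days."""
    return random.randint(5, 9)

def apparent_cure_rate(n=1000, take_remedy=True):
    """Fraction of colds that clear up within two weeks of starting
    the 'remedy'. Note that take_remedy is never actually used:
    recovery depends only on the natural course of the illness."""
    return sum(1 for _ in range(n) if days_to_recover() <= 14) / n

# The remedy 'works' 100% of the time -- and so does doing nothing.
print(apparent_cure_rate(take_remedy=True))   # 1.0
print(apparent_cure_rate(take_remedy=False))  # 1.0
```

Only a controlled comparison against the untreated baseline (which here is identical) exposes the remedy as doing nothing, which is exactly why the carefully constructed experiments mentioned above matter.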

Economics is also prone to this healer's fallacy, because economies tend to grow and improve on their own as well. There are two reasons for this. First, two people will usually only engage in voluntary exchange if they both believe they are benefiting. The cumulative effect of lots of voluntary exchange is greater benefit for everybody. Even if people are sometimes mistaken about what will benefit them, or sometimes willingly make sacrifices for the benefit of others, the economy will still grow, because a) the cumulative effect of many exchanges is toward overall greater benefit and b) those who most often make exchanges that benefit themselves will tend to grow richer and thereby become a larger part of the overall economy. The second reason economies grow is that people tend to try to improve their situation. When they do this in the context of voluntary exchange, the general result is people symbiotically figuring out how to produce more with their finite allotment of time. They thereby earn more money with which to purchase more goods and services, and also produce those goods and services in ways that require less money. Even those who don't try to improve their situation benefit as prices go down, since they can afford to buy more with the same amount of money, and thus can increase their real wealth.

This is important because it can seem like policies are improving the economy when they have no effect or even a negative effect. The economy is like a boulder rolling down a hill. Attempts to speed it up are difficult and have little effect, whereas most policies end up simply getting in the way and slowing it down. The metaphors usually used are that the economy is like a car that needs to be fueled, that needs to be "jump-started" when it slows lest it stop, or that can "run out of gas." But the economy never stops: people will exchange so long as they are able. The great impetus of the Industrial Revolution was not a positive push, but the removal of barriers. It is no coincidence that the Industrial Revolution began in a country with a very recent history of philosophers advocating political philosophies of freedom.

This tendency of economies leads to certain epistemic problems when trying to understand them, since it can be hard to isolate when something is helping an economy and when it is preventing it from growing as fast as it could. This is why economists usually look at how things change: what conditions were like before a policy went into place compared to what they were afterward. But this still limits the empirical breadth of economics and makes empirical evidence difficult to come by.

Hume observed long ago that we can't really observe causality, only constant correlation. What causality implies is universal correlation: that a will always lead to b. But we can never observe "always," only "all times in my experience." Thus, causal fallacies are always a threat when we attempt to understand the world, and the type of magical thinking that leads to the healer's fallacy is an understandable one, especially concerning certain phenomena.

Sunday, September 21, 2008

Lewis' Trilemma and the Reductio ad Absurdum

C. S. Lewis made a proof of Christ's divinity using a reductio ad absurdum, which appears in his book Mere Christianity (though he first used the basic argument in a BBC radio broadcast in 1943). According to Wikipedia, Lewis was not the first to formulate this proof, but since he's its most famous exponent, we'll call it his.

Lewis sets up what he calls a "trilemma," a decision between three options, to prove that Jesus is divine. Jesus claimed that he was God, which means that he was speaking either truthfully or untruthfully, and if untruthfully, he either realized he was speaking untruthfully or didn't realize it. This gives us three possibilities: Jesus was either a liar, a madman, or God. Lewis originally formulated this argument to oppose those who think that Jesus was a good moral teacher but not divine, since his status as a good moral teacher would automatically eliminate liar and madman. Nonetheless, to strengthen the argument, Lewis also argued directly that Jesus couldn't have been a liar or a madman, mostly resting on the assumption that he wouldn't have had so many followers willing to follow him around and sacrifice their lives for him if he were a liar or a madman.

The first problem with this argument is the claim that Jesus actually did say he was God or the son of God. Christian apologists who use this argument go to some length to establish this first point, since 1) Jesus never says directly that he is God, though he does seem to say it indirectly in a few places, and 2) the sources for Jesus' words were written well after the events of Jesus' life, in a language Jesus didn't speak, by people who may never have met him. If Jesus never did say that he was God or the son of God, then it leaves open the possibility that he was none of the three.

But even if we accept this, the argument still has difficulties. A good way to test an argument is to see if we can use it to prove things we don't want it to, as Gaunilo did with Anselm's ontological proof. Here, we notice that we can use the argument to prove that anyone making fabulous claims who can persuade people to follow them and sacrifice their lives for them is neither a liar nor a madman. Therefore, Mohammed was the last prophet of God, Joseph Smith was told by God to found a new religion and divinely guided in his translation of the Book of Mormon, and L. Ron Hubbard really was able to see into our planet's ancient history and the story of the great massacre by Xenu. In fact, Jim Jones probably sets the record for persuading people to sacrifice themselves, persuading over 900 followers to give up their lives in one day. Clearly, these various beliefs are irreconcilable, so Lewis' argument proves too much.

The fundamental problem is that the argument gives too much credit to Jesus' followers. These individuals were self-selecting and followed Jesus around because they genuinely believed his message (which seems to have been primarily focused on preparing ourselves for the imminence of the second coming). As an itinerant preacher, Jesus had contact with probably thousands of people, and yet we have no indication that he had more than a handful of followers. In fact, he seems to suggest that sometimes whole towns would completely reject his preaching. On top of that, one of his followers betrayed him, perhaps for the very reason that he began to realize Jesus was either a liar or a madman. Jesus' movement wasn't significant enough to be noticed by any historian until Josephus made two brief mentions of Jesus in 94 AD, when the Christian movement had already had multiple generations to grow, and that growth was due to the persuasive ability of the subsequent followers, not of Jesus himself. In short, Jesus probably persuaded few people to follow him, and if we were to put the question to all those who had contact with him, most of them probably thought he was neither a God nor even someone with the spiritual authority to be followed around. If we simply say, "those who followed him around, because they were willing to believe he was neither a liar nor a madman, were also willing to sacrifice their lives for him; therefore, they must have believed he was neither a liar nor a madman," then we have only made a circular argument.

We might even attack Lewis' first argument. There's no way we could automatically say that a liar or a madman couldn't be a great moral teacher. If a liar, he may have simply used this one noble lie in order to give greater authority to his teachings. And if his madness was limited merely to the status of his own divinity, he may still have been lucid on other critical moral matters, especially if he was simply repeating moral maxims directly from the Tanakh (Old Testament). In short, Lewis' argument fails in several different ways, and certainly cannot replace faith.

Wednesday, September 17, 2008

Kant's Antinomies

In Kant's 1st Critique, the Critique of Pure Reason, he employs four so-called "antinomies" to show the futility of using reason to answer certain unanswerable metaphysical questions: namely, is the universe limited or limitless, does time have a beginning, is space infinitely divisible, are we free, and is there extrinsic intelligence in the universe (e.g., God)? What Kant does is that with each one of these debates he actually proves both sides. In other words, he starts by proving both that the universe is limited and that the universe is limitless, showing that both sides can be logically proven. Interestingly, Kant proves both sides of the antinomies with apagogic arguments, namely indirect arguments using reductio ad absurdum. For example, to prove that time has no beginning, he says: if time had a beginning, it must have been preceded by an "empty time" (a non-time before time). But how could time have emerged out of this "empty time"? Thus, time must be without beginning. But then he says: if we assume that time has no beginning, then we're led to the assumption that it retreats into the past infinitely, which means it has taken infinite time to get to the present. But how could we possibly pass through all this infinite time to get to the present? (Personally I'm not convinced by this latter argument, but let's overlook it for now.)

But the really odd one of the four antinomies is the third, about freedom. On the one hand, if there is no freedom, then all events are part of a causal chain of cause and effect, whereby each event is caused by a previous one, which is caused by a previous one, ad infinitum. The explicit problem he presents with this is that it prevents us from fully determining the cause of any event, because we can't determine every antecedent cause, and the cause of that cause, and the cause of that cause, etc. On the other hand, for the other side of the antinomy, he refutes the possibility of freedom. He says that if there are free acts, then these acts must occur outside the causal chain, which means that we can't grasp them with our understanding, which works by means of uniform laws. Thus, we really can't have experience of free acts.

In both cases, the argument against relies on this idea of our being able to have experience. It is somewhat sensible because the larger goal of the antinomies is the refutation of "transcendental realism," the idea that the world is as it appears. He wants to say that if the world is as it appears, then we should be able to have complete experience of the world. But of course this doesn't follow, because we don't see the whole world, only small pieces of it, meaning it would take infinite time to experience the whole of it. This means that an infinite regress of causes is compatible with transcendental realism. And of course, the other side of the argument doesn't work that well either. I can, of course, have experience of a free act; I just may not immediately know that it is free, since I can assume that it's part of a causal chain. The problem with proving freedom through reason is not that both sides of the argument lead to absurdity, but that neither side leads to absurdity, making the issue irresolvable.

Ultimately, Kant will want to say that freedom is provable by means of practical reason. The argument that he presents in his 2nd Critique, the Critique of Practical Reason, is basically that I have awareness of moral choice via practical reason (thereby assuming I have practical reason to begin with). For me to have moral choice requires freedom. Therefore I have freedom. Of course, to say that I have accurate awareness of moral choice is a big assumption. Might as well just assume you have freedom and avoid the smoke and mirrors of trying to create the appearance of a legitimate argument.

Saturday, September 13, 2008

What's Aristotle to do without inertia?

Let's start looking at the reductio ad absurdum (which I explained in my last post) with Aristotle, since he uses it liberally as a logical technique, and not always most carefully. Aristotle is a careful thinker, but given the sheer quantity of arguments in his dense body of surviving works, there are bound to be more than a few stinkers. Now that the Large Hadron Collider has recently been started up, in commemoration of the fact that the earth has not been swallowed into nonexistence, let's talk about some of Aristotle's physics. This is a particularly good topic as well, since it is one place where Aristotle reaches some of his notoriously odd conclusions.

One problem that Aristotle deals with in his physics is the conservation of motion. Undoubtedly it happened - all types of projectiles were thrown in warfare, in the Olympics, and just in the everyday fun of throwing stones at your friends to harass them - but how did a projectile retain its motion after it left the hand or bowstring or whatever? Without the concept of inertia this could be a puzzling problem. Aristotle brings up this problem in his discussion of the void, which he rejects for a number of reasons. In this discussion, in Book IV.8 (215a14-19), he thinks that there can be only two possibilities: either 1) the thrown object causes some sort of cyclonic motion, whereby it displaces the air in front of it, which circles behind it and pushes it forward, or 2) the thrown object pushes forward a column of air in front of it at a greater speed, and the column of air drags the object behind it. The second seems completely implausible, since it still demands an explanation of how the column of air continues in motion, which is good reason to reject it and accept (1). But ultimately, it's silly to imagine that these are the only two possibilities.

In fact, his explanation of how a thing keeps moving seems to contradict his argument that a void is impossible because objects would move within it at infinite speed (215b1-216a8). He basically says that the speed of an object is equal to the force divided by the resistance of the medium (f/r). This first of all doesn't make sense, since if this cyclonic motion were pushing the object, the increased resistance in front would be more than made up for by increased push from behind, meaning objects would move equally fast through all mediums (which contradicts experience). But Aristotle argues that a void is impossible because it would offer no resistance, making r=0, producing f/0, which equals infinity. This doesn't follow, because one would only have to make a small adjustment to the formula (e.g., f/(r+c), where c is a positive constant) to avoid infinities and still agree with empirical observation (and yes, I know the Greeks didn't use algebra, but it's so much easier for explaining to contemporary audiences; one could easily express it geometrically too).
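To spell out the algebraic point (a modern gloss, obviously not Aristotle's own mathematics, and c is just a hypothetical constant): the adjustment keeps speed finite even at zero resistance, while preserving the ordinary observation that denser mediums slow things down.

```python
def speed_aristotle(f, r):
    """Aristotle's implied rule: speed = force / resistance.
    Blows up as resistance goes to zero (i.e., in the void)."""
    return f / r

def speed_adjusted(f, r, c=1.0):
    """The small adjustment: speed = force / (resistance + c),
    where c is a positive constant. Finite even when r == 0."""
    return f / (r + c)

force = 10.0
for r in [4.0, 1.0, 0.0]:
    # Guard the original formula against the r = 0 (void) case.
    a = speed_aristotle(force, r) if r > 0 else float('inf')
    print(f"r={r}: aristotle={a}, adjusted={speed_adjusted(force, r)}")
# At r=0.0, f/r gives infinity but f/(r+c) gives a finite 10.0,
# and both formulas still say thicker mediums mean slower motion.
```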

In short, without inertia, this was the best account of the nature of motion that Aristotle could make.

Tuesday, September 9, 2008

Reductio ad absurdum

The "reductio ad absurdum" is a type of apagogic (indirect) proof, which proves something true by eliminating all other possibilities. The phrase "reductio ad absurdum" means "reduction to absurdity," and refers to the process of reducing all other possibilities to absurdity until there is only one left. The reductio is used quite frequently by Euclid in his Elements, and in geometry in general. It makes good sense as a tool in geometric proof, but many a philosopher has tried to co-opt it into philosophical proofs, where it is not quite as airtight.

So, basically, to show how a reductio works: if you want to prove that point A falls on the circumference of circle Y, then you can use a reductio and say that point A is either inside circle Y, outside circle Y, or right on circle Y. If you demonstrate that the possibilities that point A is outside or inside circle Y both lead to absurdity - namely some sort of paradox - then it follows, by process of elimination, that point A must fall on circle Y. Two critical assumptions are always being employed when one uses a reductio: 1) that the correct solution is among the possibilities listed and 2) that all possibilities are included in the possibilities listed. (We should note that if condition 2 is met, then condition 1 necessarily follows; nonetheless, I think it is important to note them as two separate conditions, since if condition 1 is met then one can come up with the correct answer even if condition 2 is not met.) In the case of the geometrical proof we can be confident that we have all the possibilities. Within our simplified space, occupied only by circle Y and point A, there can be only three possibilities. Geometry, and in particular Euclidean geometry, involves a very simplified space, built from the ground up from a finite set of definitions and axioms. The geometrical space in which geometrical proofs occur is finite and circumscribed. Thus, because we are perfectly aware of all the limits and rules of this space, we can say what all the possibilities within a reductio proof are.
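As a toy illustration of why the trichotomy is exhaustive (my own sketch, not anything from Euclid): relative to a circle, every point is inside, on, or outside it, so eliminating two cases really does leave the third.

```python
import math

def classify(point, center, radius, eps=1e-9):
    """Classify a point relative to a circle as 'inside', 'on', or
    'outside'. The three cases are exhaustive and mutually exclusive,
    which is what licenses the reductio's proof by elimination."""
    d = math.dist(point, center)
    if abs(d - radius) < eps:
        return 'on'
    return 'inside' if d < radius else 'outside'

# Rule out 'inside' and 'outside', and 'on' is all that remains.
print(classify((3, 4), (0, 0), 5))  # 'on' (the 3-4-5 triangle)
print(classify((1, 1), (0, 0), 5))  # 'inside'
print(classify((6, 0), (0, 0), 5))  # 'outside'
```

The point of the exercise: the elimination step is only valid because the case split covers every possibility, which is exactly the condition that gets shaky outside geometry.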

But what happens when we are philosophizing on topics that concern the wider world? We can neither create a finite set of all definitions nor a finite set of all fundamental axioms for the wider world, and thus for us to confidently assert that the possibilities listed are all possibilities is questionable. It's not like reductios aren't at all possible outside the well circumscribed confines of geometric space, but it certainly is more uncertain, more difficult, and more open to skepticism.

So, I make this entry as a preface to some discussions of some reductio arguments in philosophy.

Next post, Aristotle

Friday, September 5, 2008

Nasty, Brutish & Short ain't so bad

Thomas Hobbes, in his Leviathan, made the case for strong central government on the grounds that it is necessary to evade the dangers of the hypothetical state of nature, in which there is no government. In this state of nature, life is "solitary, poor, nasty, brutish and short" because it's a war of all against all, with no one to prevent people from using force to get their way. The lesson I was taught when young and impressionable was that in a state without government, the strong would simply dominate the weak. Sounds reasonable, but unfortunately Hobbes is wrong. Peter T. Leeson writes an article for cato-unbound.org about how anarchy isn't as bad as you think, and there are a number of historical examples to back it up. I think the general error that Hobbes makes is basically this: we can see from our perspective that a life that is nasty, brutish and short is undesirable (in fact this is the rhetorical thrust of his argument), and yet we assume that the people living in this situation can't see it, or are incapable of doing anything about it. But the fact is they could both recognize how undesirable it is and figure out ways to address the problem. One error many a political scientist and lawmaker has made is underestimating how surprisingly creative people are - give them a law they don't like and they'll find a way around it, and likewise give them a state of nature and they'll figure out a creative way to make the best of it. The Leeson article is good at showing some of these creative solutions.

The error in Hobbes is not so much that he misunderstands human nature and thinks that people are more vile in nature than they really are. In theory, a few vile people could really undermine attempts to live in peaceful harmony, since they would take advantage of the whole system. So, whether you only had a few vile people or most people were vile, the results would probably be similar. His error is assuming that government is the only way to curb the behavior of these vile people. Government is one solution, but there can be many others.

In addition, we can admit that power can corrupt, and that if any of us were thrown into a situation without laws, temptation might lead us to do things we otherwise wouldn't. For example, if I were given complete legal immunity, I might well be tempted to steal, in order to save money and to get stuff that I usually couldn't afford. Plato gives a similar example with Gyges' ring in the Republic. Gyges finds a ring that makes him invisible, and suddenly he uses it to seduce the queen, kill the king, and take over the kingdom (359d-360d). We might think of the movie Jumper as a recent fictional example, wherein the ability to teleport anywhere instantly leads the character to live lawlessly. But the confusion here is speaking of individuals getting unique powers, whereas the state of nature applies equally to everyone. If everyone received a ring making them invisible, or everyone could teleport, then it might undermine the fabric of society, but only temporarily, since people would figure out ways to cope. People would come up with surprising ways to protect their person and property and recreate the guarantees that make trade and community possible, and thus allow people to attain prosperity.

I couldn't say how, but that comes back to the surprising creativity that emerges when you ask a whole lot of people to try and solve a problem. They'll try things out and some will work, and word of those successful ones will get around, and very quickly institutions will emerge that would astound even the most brilliant minds like Hobbes and Plato.

Monday, September 1, 2008

Parmenides' refutation of change

Parmenides was a pre-Socratic philosopher from Elea. He is notorious for denying that there can be any change. He believed that everything is part of a single unified and unchanging whole, and that all apparent change is merely illusion. His follower Zeno extended this idea by providing further logical paradoxes, which attempted to show that motion leads to essential contradictions that are logically irreconcilable. For example, he argued that motion isn't possible because in order to travel from A to B we have to travel half the distance, and then half that distance, and then half that distance, and so on through an infinite number of halves. But if we have to cross an infinite number of halves, how can we even get started? Aristotle simply rejected this argument on the grounds that we can observe things in motion, but this isn't very effective, because Parmenides had already argued that motion is an illusion. The difficulty in this paradox is that of the infinite, which Greek mathematics couldn't handle, and it would require a far more sophisticated relationship with infinity in mathematics for this problem ultimately to be solved.
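That more sophisticated relationship, for what it's worth, is that the infinitely many halves have a finite sum. A quick check with exact fractions (mine, and certainly nothing the Greeks could have written):

```python
from fractions import Fraction

def zeno_partial_sum(n):
    """Distance covered after the first n halvings of a unit
    journey: 1/2 + 1/4 + ... + 1/2^n."""
    return sum(Fraction(1, 2**k) for k in range(1, n + 1))

print(zeno_partial_sum(4))   # 15/16
print(zeno_partial_sum(10))  # 1023/1024
# The remaining gap after n steps is exactly 1/2^n, which shrinks
# toward zero: the whole infinite series sums to exactly 1.
print(1 - zeno_partial_sum(20) == Fraction(1, 2**20))  # True
```

An infinite number of ever-smaller steps can add up to a finite distance, which is the modern dissolution of the paradox.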

Parmenides' argument for the lack of change was twofold. First, he argued that for change to occur it must progress from non-being to being, since something which was not before now is. For example, if I grow tall, I have to start from not-tall and then change to tall. But how could something possibly come from nothing? How could being come from nothing, since nothing is completely nothing? After Parmenides, thinkers would recognize that this absolute change (something from nothing) is indeed not possible, but that change is still possible, because things don't need to change completely. There is something that persists through the change. For example, if I grow tall, it is I who persists through the change. Not-tall to tall is not absolute change, because the I is the unchanging ground upon which the ball of change can roll.

Parmenides' other argument is about the incomprehensibility of non-being. A world in which there is change requires a combination of being and non-being, but we can't possibly comprehend non-being, since it is absolutely nothing. Thus, the comprehensibility of the world would be undermined by change. Parmenides was again mistaken here, since the presence of change only undermines the complete comprehensibility of the world - an unfortunate fact which all of us who would like to know more about the world have to deal with.

I think the essential lesson to learn from Parmenides is the danger of thinking only in absolutes. Parmenides assumed that all change must be absolute change, and so rejected change altogether. He assumed that for the world to be comprehensible it must be completely comprehensible. He will not be the last philosopher to make this error - think of anyone who reasons that, since we can't have complete access to truth, we can't know anything at all.