The Logic Café
Reference for Logic
Review
The ideas described here
presuppose that you've already read the tutorials. If not, it's probably
best to go back and work your way through those. Then it is recommended
that you print this reference and use it for review.
Contents: Section 1:
Logic Concepts; Section 2: The Language of Symbolic Logic;
Section 3: Quantifiers; Section 4: Basic
Symbolization; Section 5: Categorical Logic and Beyond!
Section 6: Probability
1. Logic Concepts
Arguments
Think about the following simple example of reasoning.
All living beings deserve respect because life is sacred
and the sacred deserves the greatest respect.
What's going on in this sentence? It's not just a claim. Instead, the
author is giving reasons for a conclusion (that all living beings deserve
respect). This may well be part of very controversial thinking. But we
can better understand the thinking, to reasonably agree or disagree, if
we can analyze its non-controversial aspects.
First, we distinguish the reasons (we'll call these "premises")
from the conclusion. For clarity, we will sometimes rewrite reasoning
in "standard form": writing the premises
out first, drawing a line, then writing the conclusion. For the above
reasoning about life, the standard form is the following.
Life is sacred.
The sacred deserves the greatest respect.
----------
All living beings deserve respect.
Notice that in the original form the conclusion came before the premises.
But standard form reverses this order.
We need a term to apply to reasoning from premises to a conclusion. We
will use the word "argument" even though reasoning need not
be particularly disputatious:
Definition: An argument is a collection of statements
some of which (the premises) are given as reasons for another member
of the collection (the conclusion).
Part of this definition involves the notion of a "statement".
Statements are true or false:
by definition a statement is a sentence which has a "truth
value". So, each statement makes a claim, true or false. On
the other hand, there are a number of ways in natural language to utter
a sentence but not make a statement: one can ask a question, make a request
or demand, or utter an exclamation like "Ugh!". But any premise
or conclusion of an argument has to have a truth value, so must be a statement.
Distinguishing and Judging Arguments: Validity and Soundness
One of the main points of logic is to be able to distinguish good reasoning
from bad. There are two main parts to this process:
1. the judgment of the force or support of premises for conclusion, and
2. the judgment of the correctness of the premises.
The strongest sort of force or support is associated with valid
arguments. The idea is that so long as the premises are assumed to be
true, the conclusion is inescapable. We make this a bit more precise
in the following terms:
An argument is valid just in case
it is not possible that its conclusion be false while its premises are
all true.
An argument is invalid
if and only if it is not valid.
So the definition of validity (the property of being valid) has to do
with judgment 1. Our second definition combines judgments 1 and 2:
An argument is sound if and only
if it is both (a) valid and (b) has only true premises.
An argument is unsound
if and only if it is not sound.
Think about the following argument. It's very uncontroversial and really
rather uninteresting. But that makes it easier to judge.
All whales are mammals.
The animal that played Free Willy is a whale.
----------
The animal that played Free Willy is a mammal.
Notice first that this argument is valid. Even if you don't know anything
about whales or Free Willy, it's clear that the conclusion is inescapable
given that the two premises (the statements above the line)
are true. Second, the premises are true. So, the argument meets
the two conditions required for it to be sound.
Now, consider another argument.
All whales live in the Southern Hemisphere.
Shamu (of San Diego, CA) is a whale.
----------
Shamu lives in the Southern Hemisphere.
This argument too is valid. How can you tell? A test is to imagine the
premises being true. Here you might have to imagine herding all the whales
south of the equator! But imagine it anyway. Then notice that you are
automatically imagining the conclusion being true as well. It's impossible
for the conclusion to be false while the premises too are true. So, the
argument is valid. But, of course, it's not sound. It has a false premise
-- imagining that all whales live south of the equator does not make it
so.
Now, not all arguments are meant to be valid or sound. We can only give
valid and sound arguments when we have the most forceful evidence. When
we do argue in this way, the reasoning is deductive;
we'll say the study of such reasoning is "deductive logic".
An argument is deductive if and
only if its premises are intended to lead to the conclusion in
a valid way.
Note the word "intended" that is part of this definition. Whether
or not an argument is deductive depends on how it is meant. Often we intend
to give a valid argument but fail. (Didn't you ever give a "proof"
in geometry class that was meant to validly imply some theorem, only to
find you were wrong?) In any case, an argument may count as deductive
even when it is not valid; judging an argument as deductive is a matter
of interpretation not just logic.
Distinguishing and Judging Arguments: Inductive Reasoning
Frequently we need to give arguments even when our evidence only makes
a conclusion likely, but not inescapable. Then our thinking is
often called "inductive". For example,
I have surveyed hundreds of students here at ITU and found
that less than 10% say they are happy with the new course fees. My sample
was selected at random. So, I conclude with confidence that the vast majority
of ITU students do not find the course fees acceptable.
Here, the argument's author is clearly claiming that the evidence cited
makes the conclusion likely to be true but not a certainty (surveys sometimes
do go badly awry, for instance when the participants have some reason
to lie.) So, this argument is a clear case of an inductive argument.
An argument is inductive if and
only if its premises are intended to lead to its conclusion with
high probability.
We do not use the word "valid" for inductive arguments.
Rather, an inductive argument whose premises do support its conclusion
as intended (i.e., they make the conclusion likely) is called "inductively
strong":
An argument is inductively strong if and only if its conclusion is highly probable to be true given its premises.
Inductive strength is a counterpart to validity: by definition, deductive
arguments are intended to be valid; inductive arguments are intended to
be inductively strong. Of course, people often give arguments falling
short of what was intended. That's why we have logic classes! But the
point is that "valid" and "inductively strong" play
similar roles for deductive and inductive arguments respectively: valid
and inductively strong arguments have premises that support their conclusions
as intended.
There is an interesting way to state the difference between
valid deductive arguments and strong inductive ones. The conclusion of
a valid argument is inescapable given its premises. So, the content of
the conclusion is implicit already in the premises. Not so with inductive
arguments: their conclusions go beyond the content of their premises.
So, inductive reasoning is sometimes called "ampliative" because
it amplifies, or adds to, the information given in the premises.
Finally, we need to define a counterpart to "sound" for inductive
arguments. Remember that an argument is sound if and only if it is both
valid and has only true premises. For an inductive argument, we
just substitute "inductively strong" for "valid" to
get the notion of cogency:
An argument is cogent if and only
if it is both inductively strong and all its premises are true.
So, if one gives an inductive argument, one hopes that it is cogent.
Further Deductive Concepts
There are a couple more logic concepts worth knowing. All these involve
possibility and for this reason are associated with deductive logic.
Quick Overview of the concept of "possibility":
The type of possibility in question in our study of logic is sometimes
called logical possibility. Logical possibility
is about what might have happened in some possible world, about
how things could have been (even if actual matters that have become settled
now preclude it).
So, for example, it is logically possible that George W. Bush never became
US president. Even though we know that he did, our language allows us
to consider a possible, but counterfactual situation in which Gore won
in the Supreme Court, votes were recounted and Bush was declared the loser.
There are lots of other uses of the word "possible". You might
say: "I'm sure that G.W.Bush didn't lose; it's just not possible
that I'm mistaken." This is an epistemic sense of possibility. But
our logical possibility is different; it's a semantic conception.
Perhaps the most important deductive concept after validity is that of
a logical truth:
A sentence is logically true just
in case it is not possible for that sentence to be false.
So, for example, "All Irish males are male" is a logical truth.
So is "Each triangle has three sides".
Sometimes these logical truths are called "analytic"
or "necessary truths". But such labels have slightly different
definitions, and identifying them with logical truth is controversial.
(Only as a first approximation should you identify these notions; sorting
out the concepts is a very good philosophical exercise.)
Sometimes the notion of necessary truth is given a symbolization:
'□S' (read "box-S") symbolizes "it is necessary that S". However, we won't
get to this "modal logic" in what follows.
Also, we'll have reason to use:
A sentence is logically false just
in case it is not possible for that sentence to be true.
For example, "Agnes will attend law school and it's not the case
that she will (ever) attend law school."
Another definition worth keeping in mind is:
The members of a pair of sentences are logically
equivalent just in case it is not possible for one to be true while
the other is false.
And finally:
One sentence logically implies
(or logically entails) a second if and only
if it is not possible for the first to be true while the second is false.
(In this case we may also say that the second is logically
deducible from the first.)
You should think about examples meeting each of these definitions. If
you need help, check out the tutorials.
Fallacies
A fallacy is an argument that misleads. It's a "trick"
of reasoning. There are two main types of fallacious reasoning: Formal
and informal.
Formal Fallacies
These are arguments which are fallacious because of bad form. We've already
seen an example of this: the Sanchez
case.
Sanchez stays at her banking job only if she gets a raise.
So, if she gets a raise, she'll continue at the bank.
This reasoning may at first seem OK. But it's not. To see the problem,
notice that the same form of reasoning is obviously wrong in a
different context:
There is fire only if there's oxygen. So, if we add oxygen
to an area, there will be fire.
Notice that the last two arguments are both of this form:
_____ only if _ _ _ _ _. So, if _ _ _ _ _ , then ______.
But this is wrong, as the fire-oxygen example shows. When we do symbolic logic, we'll be able
to say just what's wrong with these arguments and this form.
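As a preview of how symbolic logic exposes the problem, here is a minimal sketch in Python (our own illustration, not part of the original text) that searches every truth-value assignment and finds the one on which the premise of this form is true while the conclusion is false:

```python
# A minimal sketch (not from the text): brute-force search for a
# counterexample to the form "P only if Q; so, if Q then P".
# "P only if Q" is symbolized as the conditional P > Q.

def implies(a, b):
    """Material conditional: false only when a is True and b is False."""
    return (not a) or b

for P in (True, False):
    for Q in (True, False):
        premise = implies(P, Q)      # P only if Q
        conclusion = implies(Q, P)   # so, if Q then P
        if premise and not conclusion:
            print("Counterexample: P =", P, "; Q =", Q)
# Prints: Counterexample: P = False ; Q = True
# (No fire, but oxygen present: the premise is true, the conclusion false.)
```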
Informal Fallacies
These are arguments which are fallacious because of problems with their
content (i.e., with what they say). Here we consider just a very few of the many types of informal
fallacy.
1. Ad Hominem Arguments: an argument against the person
These fallacies occur when one arguer attempts to discredit another
person rather than his or her argument or position.
Example: If you make an argument claiming to show
that God does not exist, and I reply that you're a damned atheist, then
my reply is an ad hominem fallacy. It does not show that your argument
is unsound or uncogent.
Ad hominem replies usually work by trying to
- attack the character of the arguer
- attack the circumstances of the arguer as indicating a bias
- claim that the arguer is a hypocrite
2. Straw Man: argument by misrepresentation
The arguer attempts to refute another arguer but only by misdescribing
his argument or position to make it appear bad, silly, or just weak. The
fallacious reasoning misrepresents, making a "straw man" of
the opponent.
Example: George W. Bush is an idiot. He thinks he can run
a huge government and finance countless wars while cutting taxes to zero
for the wealthy.
This is extreme hyperbole. It makes a straw man of Bush's governance
... whatever else you may think of it. If you disagree with the man, good
reasoning in support of your position would argue against his actual
policies. (Unfortunately, this sort of fallacious argumentation by misrepresentation
is some of the most common reasoning in all political discourse. Better
to fault Bush for running up the deficit and risking the future tenability
of the federal bureaucracy. Perhaps the Bush position is unjust but it's
more plausibly crafty than idiotic!)
3. Red Herring: argument by irrelevance
The hunt is on...the fox is released and the dogs are in pursuit; but to make the chase more sporting,
a dead and smelly herring is dragged across his path to obscure the scent.
Similarly in argument: the red herring fallacy is committed when one arguer
tries to obscure the obvious by bringing claims to bear which are only
apparently relevant to the issue at hand.
Example: You shouldn't even think about becoming a Catholic
priest. Just think about the scandal caused by sexual abuse. We should
be very angry.
4. Begging the Question: assuming what needs to be proved
It's all too common for an arguer to slip in, as an unnoticed presupposition,
a premise that is central to his or her conclusion, in effect presuming what
is just the point of contention!
There are fairly obvious cases of circular reasoning that fit
this mold:
Example A: God must exist for the bible tells us so.
And we can know the bible is 100% literal truth for it is the result of
divine inspiration.
That is: God's existence is "proven" by presupposing that the
bible is divine (Godly) inspiration. That's argument in a circle: one
type of question begging.
But there are also less obvious cases of begging the question.
Example B: As a good human being you should never
eat another mammal. For mammals are all sentient and we should never eat
what is sentient.
Here the problem is that the main point of contention, whether or not
we should eat sentient creatures, is merely left as a presupposition
and never defended.
(Aside: "begging the question" is now commonly used to mean
something different, to mean "the question needs or begs to be asked".
We'll ignore this usage here.)
5. Suppressed Evidence: leaving out the important part
Example: So, you need to convince a friend to attend
DSU with you. You tell her about the great times, the easy grading policies,
and the camaraderie the two of you would share. But you conveniently forget
to mention the huge cost of tuition and housing.
You've suppressed some of the most relevant information. The conclusion,
that she should attend with you, is vastly undermined by the missing information.
So, your attempt to convince is a kind of trickery.
2. The Language of Symbolic Logic
Begin by thinking about a simple compound sentence:
(*) Both Jeremy and Karla passed the bar exam, although Jeremy
did so before Karla.
In the simplest "sentence" logic, we would represent (*) with
something like '(J&K)&B'.
Here the '&' is a connective standing for "and". 'J' stands
for "Jeremy passed the bar exam", 'K' for "Karla passed
the bar exam", and 'B' for "Jeremy passed before Karla".
(NOTICE: We also use '&' for "although". The idea is that
words like "although" mean roughly "and (in contrast)".
Words like "although", "but", and "however"
mean about the same as "and" but also mark a divergence or distinction.)
But we can do better at the symbolization. Use 'j' and 'k' as
names for Jeremy and Karla; then use 'P' to symbolize the predicate
"passed the bar exam" and 'B' to symbolize the relationship
"passing the bar exam before". We will write 'Pj' for "Jeremy
passed the bar exam, 'Pk' for "Karla passed the bar exam" and
'Bjk' for "Jeremy passed before Karla". So, (*) can be symbolized
as:
(Pj&Pk)&Bjk
Here we use parentheses to group.
A relationship, like that expressed by 'B' in 'Bjk', is sometimes called
a "two place predicate" because it's a predicate relating two
things. Can you think of an example of a three place predicate? (More
on these in a moment.)
Connectives
English uses many words other than "and" to connect simple
sentences to make compound ones.
For example, English uses the word "or" to connect sentences.
Our symbolic language uses 'v'
instead to express the idea of "either...or...".
So,
Either Karla or Bob passed the bar exam
may be symbolized as
Pk v Pb
English has the words "If...then..." which together connect
a pair of sentences. Our symbolic language uses the horseshoe, '>',
to express this "conditional". And while the English expresses
negation in a number of ways, for example with "it's not the case",
our symbolic language will use just '~'.
Then, for a slightly more complicated example,
If Karla didn't pass, then Bob did.
would be symbolized as
~Pk>Pb
And now we come to a case for which parentheses are important:
It's not true that if Karla didn't pass, then Bob did.
This one needs to be symbolized like so:
~(Pk>Pb)
Again, the parentheses group so that the negation negates the
whole "Pk>Pb". This is similar
to the English which negates the whole conditional.
Because '>' (like '&', 'v',
and '=') connects a pair of sentences,
we call it a binary connective. All connectives
of SL except one are binary. The one exception is the tilde, '~'. It attaches
to a single sentence. For example, '~A'. We call '~' a unary
connective.
A synopsis of the third tutorial is presented in the following table.
The connectives are listed in the column on the left. Symbols used by
different authors can vary as noted. The "component(s)" of a
sentence built with a connective is/are just the simpler sentence(s) connected
by the connective in question.
| Symbol | Connective Name | Resulting Sentence Type | Component Names | Typical English Versions | English Statement | Symbolization |
|---|---|---|---|---|---|---|
| & | Ampersand (sometimes a dot or upside down 'v' is used instead) | Conjunction | Conjuncts | "and", "both ... and ..." | Karla and Bob passed the bar exam. | Pk&Pb |
| > | Horseshoe (sometimes an arrow is used instead) | Conditional | Antecedent, Consequent | "if ... then ..." | If Karla passed, then Bob did. | Pk>Pb |
| ~ | Tilde (sometimes a '¬' is used instead) | Negation | Negate | "it's not the case that", "not" | Bob did not pass the bar exam. | ~Pb |
| v | Wedge | Disjunction | Disjuncts | "or", "either ... or ..." | Either Karla or Bob passed the bar exam. | Pk v Pb |
| = | Triple Bar (sometimes a double arrow is used instead) | Biconditional | Bicomponents | "if and only if", "just in case" | Karla passed the bar exam if and only if Bob did. | Pk=Pb |
We are interested in sentences. The atomic
ones are those like 'Bab' or 'Jd' or 'Rmno' or just 'K'. Compound sentences,
often called molecular sentences, are formed
by using connectives. One can use as many connectives as one wishes
to "build" a grammatically correct sentence, but the connectives
must be added one at a time.
For example, one can use an ampersand, '&', to build 'Bab&Jd'
out of the atomic ones. Then go on to make an even longer compound sentence:
(Bab&Jd)>K
Note the parentheses. We are asked always to group when we use binary
connectives, except that we are allowed to drop the outside parentheses.
And we can keep on building; for example, we could negate the whole sentence
just produced, but to do so we need to restore the "dropped"
outside parentheses (or use brackets instead):
~[(Bab&Jd)>K]
We can also spell out what sentences formed with a connective mean. For
example, any sentence formed with an ampersand, X&Y,
is true if and only if both its conjuncts are true. Otherwise it is false.
The 'X&Y' column in the table below says the same thing. The columns under
the other connectives spell out their meaning.
|   | X | Y | X&Y | XvY | X>Y | X=Y | ~X |
|---|---|---|---|---|---|---|---|
| possibility one: | T | T | T | T | T | T | F |
| possibility two: | T | F | F | T | F | F | F |
| possibility three: | F | T | F | T | T | F | T |
| possibility four: | F | F | F | F | T | T | T |
So, we can tell whether a sentence formed from our connectives is true
or not just by knowing the truth values of its parts. That is to say that
its truth value is a function of the truth values of its parts.
The usual lingo here is that each of the connectives is a "truth
function" and that we are developing "truth
functional logic".
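Since each connective is a truth function, the table can be computed mechanically. Here is a minimal sketch (Python; the function names are ours, not the text's) that recomputes the four possibilities:

```python
# A minimal sketch of the five connectives of SL as Python truth functions.
def tilde(x):        return not x           # ~X
def amp(x, y):       return x and y         # X&Y
def wedge(x, y):     return x or y          # XvY
def horseshoe(x, y): return (not x) or y    # X>Y: false only for T, F
def triplebar(x, y): return x == y          # X=Y: true when values match

# Recompute the four possibilities of the table above.
for X in (True, False):
    for Y in (True, False):
        print(X, Y, amp(X, Y), wedge(X, Y),
              horseshoe(X, Y), triplebar(X, Y), tilde(X))
```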
Summary
We have a language which includes:
names which are lower case letters. So, we can symbolize
"Agnes" as 'a' (or any other lower case letter from 'a'
through 'u'; 'v' - 'z' are reserved for other uses).
atomic sentences and predicates which are upper case letters.
How will we tell the predicates apart from the atomic sentences?
- If an upper case letter is immediately followed by no
lower case letters, then it's a sentence letter.
- If an upper case letter is immediately followed by one
lower case letter, then it's a one-place predicate.
- If an upper case letter is immediately followed by two
lower case letters, then it's a two-place predicate.
- And so on: If an upper case letter is immediately followed by n
lower case letters, then it's an n-place predicate.
compound sentences which are constructed out of atomic sentences
and connectives. We have to be careful to group with parentheses or brackets
whenever we construct a compound sentence with a binary connective. Though
we may drop outside parentheses or brackets when we have finished constructing.
A logic with names, predicates and connectives is sometimes called "0th
order logic". What's missing in zeroth order
logic, what separates it from logic of the first order, is quantification
over the objects that are named. In English, words like "all"
and "some" serve to quantify. So, we need to add quantifiers
to our symbolic language. (A logic with quantification over properties
or sets of objects is called second order. We won't get into that for
now.)
3. Quantifiers
Thinking about numbers will help us see how to quantify.
For example, the English
There is an even number less than three
means that there is at least one thing x, which is even and less than
three. In other words:
(*) There is an x, x is even and x is less than three.
Bear with me! There's a reason to go through this example involving the
variable 'x' (which you remember using in high school, yes?). Let's begin
translating into PL; the above comes to:
(**) There is an x such that: Ex & Lxc
The phrase "there is" indicates a quantifier. It specifies
that there is something having certain properties. We will write this
with a new symbol, the backward-E: '%'.
(**), then, will be symbolized as follows:
(%x)(Ex&Lxc)
The backward-E is called the "existential" quantifier because
it says that something exists.
There is one more quantifier used in our symbolism: the universal quantifier
upside-down A: '^'. This quantifier means "all"
or "every". We can use it to symbolize the following.
Everyone will attend law school and need a loan
would be:
(^x)(Wx&Nx)
This should be understood to mean:
Everything x is such that it, x, is both W and N.
The use of upside-down A for the universal quantifier is not quite universal!
Some write '(x)' instead of '(^x)'.
One last point is in order. When we talk about "something"
or "everything" in English, we usually have some particular
group of things in mind. For instance, if we say that everyone will attend
law school, we don't mean literally everyone in the world. Instead, we
may have some circle of friends in mind. Similarly, all quantification
assumes a "universe of discourse": the collection
of all objects under discussion.
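Over a finite universe of discourse, '%' behaves like an "at least one" check and '^' like a "for all" check. Here is a minimal sketch (Python; the toy universe is our own) using the earlier number example:

```python
# A minimal sketch: quantifiers over a finite universe of discourse.
universe = [0, 1, 2, 3, 4]

E = lambda x: x % 2 == 0     # Ex: x is even
L = lambda x, y: x < y       # Lxy: x is less than y
c = 3                        # c: three

# (%x)(Ex & Lxc): something is even and less than three.
print(any(E(x) and L(x, c) for x in universe))   # True: 0 and 2 qualify

# (^x)Ex: everything in the universe is even.
print(all(E(x) for x in universe))               # False: 1 and 3 are odd
```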
4. Basic Symbolization
Now that we have a sure handle on the syntax (or "grammar")
of our new language, we can press forward with semantical issues. The
easiest way to do this is to relate PL to English.
Some Symbolization Basics
English has many ways to name an object. The easiest way is the
proper name. But there are other types of English expressions used to refer
to a unique individual. The following English expressions are typically
used to signify a specific individual and so can be symbolized with
names.
Names
Proper Nouns like "Paris", "Earth", "Mary",
"Oakland University", "Waiting for Godot", "tomorrow",
etc.
Kind Names like "oxygen", "Homo Sapiens"
(the species), "logic", etc.
Pronouns like: "this", "that", "he",
"she", "it", "who", "what", "there",
etc.
Definite Descriptions like: "the boy in the field",
"Smith's murderer", "the square root of 4", "my
son", etc.
Other tags like numerals or symbols, e.g., '(*)' as used
in this reference manual.
Natural language has many ways to specify predication. No list is particularly
helpful for these. But lists of the English means of quantifying are useful.
Begin with the existential quantifier.
Words
often symbolized with '%':
"some", "something", "someone", "somewhere",
"at least one", "there is", "a", "an",
"one"
(Warning: The last three of these fairly often mean something different
and are not to be symbolized with '%'. For example,
"a whale is a mammal" probably means that any whale is
a mammal and needs to be symbolized with an '^'.)
It's good to keep some very basic examples in mind:
| English | Symbols | Symbolization Key |
|---|---|---|
| Jason knows someone. | (%y)Kjy | j: Jason, Kxy: x knows y |
| I did something. | (%x)Dix | i: me, Dxy: x did y |
| I see a person in my office. | (%x)Sixo | o: my office, Sxyz: x sees y in z |
We may do much the same thing with the universal quantifier.
Words
often symbolized with '^':
"all", "every", "each", "whatever",
"whenever", "always", "any", "anyone"
(Warning: The last two of these fairly often mean something
different and are not to be symbolized with '^'.
For example, when I say "if anyone can do it, I can", this may
be symbolized as '(%x)Dx>Di'.)
Here are some examples.
| English | Symbols | Symbolization Key |
|---|---|---|
| Jason knows everyone. | (^y)Kjy | j: Jason, Kxy: x knows y |
| I can do anything. | (^x)Dix | i: me, Dxy: x can do y |
| I need to see all students in my office. | (^x)Sixo | o: my office, Sxyz: x needs to see y in z |
5. Categorical Logic and Beyond!
It may be best to see languages (like English) as having two basic quantificational
forms: the existential and the universal.
Existential Form
The first basic form of English is the following.
existential form: Some
S are P.
where 'S' (the subject)
and 'P' (the predicate
of the expression) name groups or classes of individuals. (We will call
these the subject class and the predicate class, respectively.)
So, for example, "Some students are freshmen" is of existential
form. And it's pretty easy to see how it might be symbolized. Given a
natural symbolization key, it could well be rendered as '(%x)(Sx&Fx)'.
For such an easy example, we don't need to think of forms. But for more
complicated cases it's best to fit the "mold".
Take this example,
(*) There are female logic students who are juniors set
to graduate next year.
Ugh! But we can fit this messy example sentence into the existential
form and then symbolize. The following steps will help as you consider
such a sentence.
First, here's the mold we need to fit:
(Step I) Some S are
P.
Begin by noting that (*) is about "female logic students".
So, this is the subject class. And the predicate class, which (*) attributes
to its subject, is "juniors who will graduate next year".
Now, we need to provide a hybrid English, PL symbolization of the form:
(Step II) (%x)(x
is an S & x
is a P)
For (*) this should be "(%x)(x is a female
logic student & x is a junior set to graduate next year)".
Finally, we take this hybrid and restate it in pure PL, something of
this form:
(Step III) (%x)(Sx
& Px)
For (*) this means rewriting the subject phrase "x is a female logic
student" and the predicate phrase "x is a junior set to graduate
next year" into PL. Take this key:
universe of discourse: People
Fx: x is female
Jx: x is a junior
Sxy: x is a student of subject y
Gxy: x will graduate in year y
l: logic
n: next year
Then the subject phrase becomes: 'Fx&Sxl' and the predicate phrase
becomes 'Jx&Gxn'. So, finally we have:
(*)'s Symbolization: (%x)[ (Fx&Sxl) & (Jx&Gxn) ]
Many different English sentences can likewise be seen to fit this form.
You may want to review the tutorial for details. In all cases, you move
from seeing the English as about a subject and predicate class to a PL
symbolization of form (%x)(Sx
& Px).
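The result of the three steps can even be tested mechanically. Here is a minimal sketch (Python; the toy roster and attribute names are invented for illustration) that evaluates (*)'s symbolization over a small universe of people:

```python
# A minimal sketch (the roster is invented): evaluating (*)'s
# symbolization (%x)[ (Fx&Sxl) & (Jx&Gxn) ] over a finite universe.
people = [
    {"name": "Ana", "female": True, "subjects": {"logic"},
     "junior": True, "grad": "next year"},
    {"name": "Bo", "female": False, "subjects": {"logic"},
     "junior": True, "grad": "next year"},
]

F = lambda x: x["female"]               # Fx: x is female
S = lambda x, y: y in x["subjects"]     # Sxy: x is a student of subject y
J = lambda x: x["junior"]               # Jx: x is a junior
G = lambda x, y: x["grad"] == y         # Gxy: x will graduate in year y
l, n = "logic", "next year"

# (%x)[ (Fx&Sxl) & (Jx&Gxn) ]
print(any((F(x) and S(x, l)) and (J(x) and G(x, n))
          for x in people))             # True: Ana satisfies all four
```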
Universal Form
The second form is for sentences saying that all such-and-such
are so-and-so. For example, "All Swedes are Europeans". Again
we have a subject class and predicate class:
universal form: All
S are P.
Such a universal statement means that anything is such that if
it's in the subject class, then it's also in the predicate class.
So, our example might be translated as '(^x)(Sx>Ex)'.
In general, we have the same three step process as for existential
form. First we need to see that the English sentence is of a form relating
a subject to a predicate in the appropriate way:
(Step I) All S
are P
Next, we move to the hybrid form:
(Step II) (^x)(
x is an S
> x is a P
)
Finally we give the symbolization.
(Step III) (^x)(Sx>Px)
For another example of universal form, think about
(**) All female juniors will graduate next year.
This means:
(Step I) All female juniors are students who will
graduate next year.
Notice that the subject is a conjunction. So, we have the hybrid form:
(Step II) (^x)(
x is a female and a junior >
x is a student who will graduate next year
)
and finally the symbolization:
(Step III) (^x)( (Fx&Jx)
> Gxn )
Categorical Logic
Categorical logic treats logical relationships between the types of things
(categories) which satisfy one-place predicates. We can use PL
to quickly get at the heart of this logic because categorical forms are
built from existential and universal form sentences.
Categorical logic recognizes four main types of statement:
| Type | English Form | PL Form |
|---|---|---|
| A-form: | All S are P | (^x)(Sx>Px) |
| E-form: | No S are P | (^x)(Sx>~Px) or ~(%x)(Sx&Px) |
| I-form: | Some S are P | (%x)(Sx&Px) |
| O-form: | Some S are not-P | (%x)(Sx&~Px) |
Notice from this table that A-form and I-form are (respectively) just
what we call "universal" and "existential" forms.
The E-form is either universal with negated consequent or negated existential.
And the O-form is existential with negated second conjunct.
Now notice that A and O form sentences are "opposites": if
one is true, then the other is false, and vice versa. The same relation of
opposition holds between E and I forms. We call such pairs contradictories.
This fact is represented in the following figure:

[Figure: The Modern "Square of Opposition". Pairs of sentences connected
by diagonal lines (A with O, E with I) are contradictory.]
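The contradictory pairs can be checked by brute force: on a small finite universe, try every interpretation of S and P and confirm that the A-form and O-form sentences never agree. A minimal sketch (Python, our own illustration):

```python
# A minimal sketch: on any finite universe, an A-form sentence and the
# matching O-form sentence always take opposite truth values.
import itertools

universe = range(4)
for S_bits in itertools.product([False, True], repeat=4):
    for P_bits in itertools.product([False, True], repeat=4):
        S = lambda x: S_bits[x]                              # Sx
        P = lambda x: P_bits[x]                              # Px
        a_form = all((not S(x)) or P(x) for x in universe)   # (^x)(Sx>Px)
        o_form = any(S(x) and not P(x) for x in universe)    # (%x)(Sx&~Px)
        assert a_form != o_form    # exactly one of the pair is true
print("A and O forms disagreed on all 256 interpretations: contradictories.")
```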
Complications...
We should see an example of a more sophisticated use of our "1st
order logic". Categorical logic is very useful but is nonetheless
limited: It's restricted to logical relationships between one-place predicates.
We can look at one example that goes beyond categorical logic. Remember:
(*) Both Jeremy and Karla passed the bar exam, but Jeremy
did so before Karla.
We last symbolized this as
(Pj&Pk)&Bjk
But we may do better with quantifiers. The idea is that there is an exam,
the bar exam, passed first by Jeremy then later by Karla.
(%x)(%y)(%z)[(Ex&Byz)&(Pjxy&Pkxz)]
Or in a logician's English: There is a bar exam x and times y and z with
y coming before z such that Jeremy passed bar exam x at time y and Karla
passed this exam x at later time z.
Notice that we used this interpretation:
Ex: x is the bar exam; Bxy: time x comes before time y;
Pwxy: w passed x at time y. The universe of discourse includes times,
types of test (including the bar exam), and people.
Mathematical Logic
We can move on from PL to a logic sufficient for mathematics with a couple of additions. First, we need to add an identity relation for '='; it's natural to pick 'I'. The only difference between 'I' and all other relations is that we also give rules of inference for how 'I' is used. Also, we need to add functions! Ugh? Well, just think of functions as complex names. In English we might say "the youngest brother of person p"; this is a function from people to other people. For the details, see chapter nine of the Logic Café.
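To make the "functions as complex names" idea concrete, here is a minimal sketch (Python; the family data and the helper name youngest_brother are invented for illustration, not part of the Logic Café's formal language):

```python
# A minimal sketch (the family data is invented): a function symbol
# works like a "complex name" built from a simpler name.
brothers = {"pat": ["sam", "lee"]}
ages = {"pat": 40, "sam": 35, "lee": 30}

def youngest_brother(p):
    # f(p): the youngest brother of person p
    return min(brothers[p], key=lambda b: ages[b])

# The term youngest_brother("pat") names an individual just as 'lee' does.
print(youngest_brother("pat"))   # lee
```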
6. Probability
Probability plays a role in inductive logic that is analogous to the
role played by possibility in deductive logic. For example, a valid deductive
argument has premises which, granted as true, make it impossible
for the conclusion to be false. Similarly, a strong inductive argument
has premises which, granted as true, make it improbable that the
conclusion is false.
The tutorials contain the briefest of introductions to the interpretation
of a theory of probability. Here we only give the axiomatic theory.
We'll just take probability as applying to sentences of our symbolic
language. For example, we'll write 'P[Wa]'
to stand for "the probability that Agnes will attend law school".
Or, 'P[(%x)Wx]'
for the probability that someone will attend law school. For our purposes,
for the probability that someone will attend law school. For our purposes,
we'll restrict our new formal language to include PL and any PL sentence
surrounded by 'P[...]'.
We will need 5 basic "axioms" of probability:
1. 0 ≤ P[X] ≤ 1
2. If X is a logical truth, then P[X] = 1.
3. If X and Y are logically equivalent, then P[X] = P[Y].
4. P[~X] = 1 - P[X]
5. P[XvY] = P[X] + P[Y] - P[X&Y]
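These axioms are easy to illustrate in a toy model where probability is a proportion of equally likely outcomes. The following minimal sketch (Python; the fair-die model and the sentences X and Y are our own, not the text's) checks axioms 4 and 5 numerically:

```python
# A minimal sketch: checking axioms 4 and 5 in a fair-die model,
# where probability is a proportion of six equally likely outcomes.
from fractions import Fraction

outcomes = range(1, 7)
def P(event):
    return Fraction(sum(1 for o in outcomes if event(o)), 6)

X = lambda o: o % 2 == 0      # X: the roll is even
Y = lambda o: o >= 4          # Y: the roll is at least four

# Axiom 4: P[~X] = 1 - P[X]
assert P(lambda o: not X(o)) == 1 - P(X)

# Axiom 5: P[XvY] = P[X] + P[Y] - P[X&Y]
assert P(lambda o: X(o) or Y(o)) == P(X) + P(Y) - P(lambda o: X(o) and Y(o))
print("Axioms 4 and 5 hold in the die model.")
```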
Conditional Probability and Independence
We often describe probabilities in less absolute terms. Instead of saying
that your probability of passing this class is high, I say something like
"you have a very high probability of passing given that you continue
your good work".
That is, we put a condition on the probability assignment. We'll write
the probability of X given Y as 'P[X|Y]' and define it this way:
Definition 1: P[X|Y] = P[X&Y] / P[Y]   (defined only when P[Y] > 0)
Finally, consider:
Definition 2: X and Y are independent if P[X|Y] = P[X].
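Continuing the same toy die model (again our own illustration, not the text's), both definitions can be computed directly:

```python
# A minimal sketch continuing the die model: Definitions 1 and 2.
from fractions import Fraction

outcomes = range(1, 7)
def P(event):
    return Fraction(sum(1 for o in outcomes if event(o)), 6)

X = lambda o: o % 2 == 0      # X: the roll is even
Y = lambda o: o >= 4          # Y: the roll is at least four

# Definition 1: P[X|Y] = P[X&Y] / P[Y]
p_X_given_Y = P(lambda o: X(o) and Y(o)) / P(Y)
print(p_X_given_Y)            # 2/3: of the rolls 4, 5, 6, two are even

# Definition 2: independent just in case P[X|Y] = P[X]
print(p_X_given_Y == P(X))    # False: 2/3 != 1/2, so X and Y are dependent
```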
Bayes' Theorem
P[X|Y] = ( P[X] x P[Y|X] ) / ( P[X] x P[Y|X] + P[~X] x P[Y|~X] )
Ugh? But this one takes just a little work to prove. And it's worth it.
Think about X as a hypothesis and Y
as the evidence. Then the left hand side of Bayes' theorem gives the probability
of the hypothesis given the evidence. Just what we'd like to be able to
know! And the right hand side provides the answer partly in terms of how
hypotheses provide probabilities for experimental results (evidence).
Something we might know. Here's a simplified version of the theorem (with 'H' for the hypothesis and 'E' for the evidence):
P[H|E] = ( P[H] x P[E|H] ) / P[E]
Thus we have the basis for an epistemology of science: Bayesian Epistemology.
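Here is a worked illustration with invented numbers (they are not from the text): a hypothesis H with a low prior gets a substantial but still modest boost from evidence E that H predicts well:

```python
# A minimal sketch with invented numbers: Bayes' theorem in action.
p_H = 0.01             # prior probability of hypothesis H
p_E_given_H = 0.90     # probability of evidence E if H is true
p_E_given_notH = 0.05  # probability of E if H is false

# Long-form denominator: P[H]P[E|H] + P[~H]P[E|~H]
p_E = p_H * p_E_given_H + (1 - p_H) * p_E_given_notH

p_H_given_E = p_H * p_E_given_H / p_E
print(round(p_H_given_E, 3))   # 0.154
```

Even though H makes E eighteen times more likely than ~H does, the posterior stays below one in six because the prior was only one in a hundred.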