The Logic Café

Connective  Name  Resulting Sentence Type  Component Names  Typical English Versions  English Statement  Symbolization

&  Ampersand (sometimes a dot or upside-down 'v' is used instead)  Conjunction  Conjuncts  "and", "both ... and ..."  Karla and Bob passed the bar exam.  Pk&Pb
>  Horseshoe (sometimes an arrow is used instead)  Conditional  Antecedent, Consequent  "if ... then ..."  If Karla passed, then Bob did.  Pk>Pb
~  Tilde (sometimes a '¬' is used instead)  Negation  Negate  "it's not the case that", "not"  Bob did not pass the bar exam.  ~Pb
v  Wedge  Disjunction  Disjuncts  "or", "either ... or ..."  Either Karla or Bob passed the bar exam.  Pk v Pb
=  Triple Bar (sometimes a double arrow is used instead)  Biconditional  Bicomponents  "if and only if", "just in case"  Karla passed the bar exam if and only if Bob did.  Pk=Pb
We are interested in sentences. The atomic ones are those like Bab or Jd or Rmno or just 'K'. Compound sentences, often called molecular sentences, are formed by using connectives. One can use as many connectives as one wishes to "build" a grammatically correct sentence, but the connectives must be added one at a time.
For example, one can use an ampersand, '&', to build 'Bab&Jd' out of the atomic ones. Then go on to make an even longer compound sentence:
(Bab&Jd)>K
Note the parentheses. We are asked always to group when we use binary connectives. Except we are allowed to drop outside parentheses. And we can keep on building, for example we could negate the whole sentence just produced, but to do so we need to return the "dropped" outside parentheses (or use brackets instead):
~[(Bab&Jd)>K]
We can also spell out what sentences formed with a connective mean. For example, any sentence formed with an ampersand, X&Y, is true if and only if both its conjuncts are true. Otherwise it is false. The 'X&Y' column below says the same thing. The columns under the other connectives spell out their meaning.
X  Y  X&Y  XvY  X>Y  X=Y  ~X

possibility one:    T  T  T  T  T  T  F
possibility two:    T  F  F  T  F  F  F
possibility three:  F  T  F  T  T  F  T
possibility four:   F  F  F  F  T  T  T
So, we can tell whether a sentence formed from our connectives is true or not just by knowing the truth values of its parts. That is to say, its truth value is a function of the truth values of its parts. The usual lingo here is that each of the connectives is a "truth function" and that we are developing "truth functional logic".
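The five truth functions in the table can be sketched directly in Python (a sketch of my own, not part of the Café's materials), using True and False for 'T' and 'F':

```python
# Each connective as a Python function of truth values.
def conj(x, y): return x and y        # X&Y: true only when both conjuncts are true
def disj(x, y): return x or y         # XvY: true when at least one disjunct is true
def cond(x, y): return (not x) or y   # X>Y: false only when X is true and Y is false
def bicond(x, y): return x == y       # X=Y: true when the bicomponents match
def neg(x): return not x              # ~X: flips the truth value

# Reproduce the four possibilities of the table above.
for x in (True, False):
    for y in (True, False):
        print(x, y, conj(x, y), disj(x, y), cond(x, y), bicond(x, y), neg(x))
```

Each row of the printout corresponds to one "possibility" row of the table.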
Summary
We have a language which includes:
names, which are lower case letters. So, we can symbolize "Agnes" as 'a' (or any other lower case letter from 'a' through 'u'; 'v' through 'z' are reserved for other uses).
atomic sentences and predicates, which are upper case letters. How will we tell the predicates apart from the atomic sentences? A predicate in use is followed by one or more names (like 'Bab'), while an atomic sentence stands alone (like 'K').
compound sentences which are constructed out of atomic sentences and connectives. We have to be careful to group with parentheses or brackets whenever we construct a compound sentence with a binary connective. Though we may drop outside parentheses or brackets when we have finished constructing.
A logic with names, predicates and connectives is sometimes called "0th order logic". What's missing in zeroth order logic, what separates it from logic of the first order, is quantification over the objects that are named. In English, words like "all" and "some" serve to quantify. So, we need to add quantifiers to our symbolic language. (A logic with quantification over properties or sets of objects is called second order. We won't get into that for now.)
Thinking about numbers will help us see how to quantify. For example, the English
There is an even number less than three
means that there is at least one thing x, which is even and less than three. In other words:
(*) There is an x, x is even and x is less than three.
Bear with me! There's a reason to go through this example involving the variable 'x' (which you remember using in high school, yes?). Let's begin translating into PL; the above comes to:
(**) There is an x such that: Ex & Lxc
The phrase "there is" indicates a quantifier. It specifies that there is something having certain properties. We will write this with a new symbol, the backward-E: '%'.
(**), then, will be symbolized as follows:
(%x)(Ex&Lxc)
The backward-E is called the "existential" quantifier because it says that something exists.
There is one more quantifier used in our symbolism: the universal quantifier, the upside-down A: '^'. This quantifier means "all" or "every". We can use it to symbolize the following.
Everyone will attend law school and need a loan
would be:
(^x)(Wx&Nx)
This should be understood to mean:
Everything x is such that it, x, is both W and N.
The use of the upside-down A for the universal quantifier is not quite universal! Some write '(x)' instead of '(^x)'.
One last point is in order. When we talk about "something" or "everything" in English, we usually have some particular group of things in mind. For instance, if we say that everyone will attend law school, we don't mean literally everyone in the world. Instead, we may have some circle of friends in mind. Similarly, all quantification assumes a "universe of discourse": the collection of all objects under discussion.
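Over a finite universe of discourse, quantified sentences can be checked by exhaustive search. Here is a sketch of my own (the universe of small numbers and the predicates E and L are illustrative assumptions) evaluating the earlier example '(%x)(Ex&Lxc)':

```python
# A finite universe of discourse: the numbers 0 through 4.
universe = [0, 1, 2, 3, 4]

def E(x): return x % 2 == 0   # Ex: x is even
def L(x, y): return x < y     # Lxy: x is less than y

c = 3  # the name 'c' denotes the number three

# (%x)(Ex & Lxc): something in the universe is even and less than c.
exists = any(E(x) and L(x, c) for x in universe)

# (^x)(Ex > Lxc): everything even is less than c (false here, since 4 is even).
every = all((not E(x)) or L(x, c) for x in universe)

print(exists, every)  # prints: True False
```

The existential quantifier behaves like `any` over the universe, and the universal quantifier like `all`.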
We're already familiar with one step "deductions". For example, when we go from
1. Chris will get an 'A' or a 'B'
and
2. Chris won't get an 'A'
to
3. Chris will get a 'B'
we've applied the valid principle "Disjunctive Syllogism" (as we called it in Tutorial 1). From now on this is DS, a "rule" of inference.
Now, think about symbolizing this argument:
c: Chris, Ax: x gets an 'A', Bx: x gets a 'B', Hx: x is happy.
We have
Ac v Bc
~Ac
Bc
This can be rewritten to show how we'll set out derivations:
Premise  1  Ac v Bc
Premise  2  ~Ac
1,2 DS  3  Bc
Arguments with more than one step
More interesting derivations would involve a few steps (and rules beside DS). Think about this reasoning:
1. Chris will get an 'A' or a 'B'
and
2. If Chris gets a 'B' then he'll be unhappy.
and
3. Chris won't get an 'A'
to the conclusion:
4. So, Chris will be unhappy.
We can represent this reasoning in step-by-step form like so:
Premise  1  Ac v Bc  
Premise  2  Bc>~Hc  
Premise  3  ~Ac  
1,3 DS  4  Bc  
2,4 >E  5  ~Hc  
4,5 &I  6  Bc&~Hc 
Here you should see if you can figure out the two new "rules of inference" after DS. If you think about it, you'll see how they both are like DS in that all three represent little valid arguments. If this isn't obvious, then go back to the tutorial and rework it.
Exercise: make sure you can write down each of the three rules, DS, >E, and &I. Hint: here's one of them:
>E (or MP)

input 1:  P>Q
input 2:  P
output:   Q
In fact, arguments in SL require many more rules of inference. Can you think of some? It may be best to review the list of these back in the tutorials. Or just do some exercises. Good luck!
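The three rules used so far can be sketched as small Python functions (my own illustration; the tuple representation of compound sentences, like ('v', 'Ac', 'Bc') for 'Ac v Bc', is an invented convenience, not the Café's notation):

```python
def DS(disjunction, negation):
    """Disjunctive Syllogism: from XvY and ~X, infer Y."""
    op, left, right = disjunction
    assert op == 'v' and negation == ('~', left)
    return right

def arrow_E(conditional, antecedent):
    """>E (Modus Ponens): from X>Y and X, infer Y."""
    op, ante, cons = conditional
    assert op == '>' and antecedent == ante
    return cons

def and_I(x, y):
    """&I: from X and Y, infer X&Y."""
    return ('&', x, y)

# The derivation from the text: Ac v Bc, Bc>~Hc, ~Ac, therefore Bc&~Hc.
step4 = DS(('v', 'Ac', 'Bc'), ('~', 'Ac'))        # Bc
step5 = arrow_E(('>', 'Bc', ('~', 'Hc')), step4)  # ~Hc
step6 = and_I(step4, step5)                       # Bc&~Hc
print(step4, step5, step6)
```

Each function mirrors one "little valid argument": it checks that its inputs have the right shape and produces the licensed conclusion.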
Deduction in PL
We can also do deduction in predicate logic. Think about this example of an argument where the universe of discourse may be a particular logic class having Chris as a member.
1. Everyone gets an 'A' or a 'B'
and
2. Chris doesn't get an 'A'
to
3. Chris gets a 'B'
Here we need to make an inference from the general statement of line 1 to the particular case of Chris. That is, we need a rule that allows us to move from a claim about everyone to a claim about Chris: that he gets an 'A' or a 'B'. We'll call this rule "^E". Here it is in action:
Premise  1  (^x)(Ax v Bx)  
Premise  2  ~Ac  
1 ^E  3  Ac v Bc  
2,3 DS  4  Bc 
If this makes no sense, it would be best to look at the tutorial! In any case, the idea of deduction is to make a number of little valid steps that may, when taken together, amount to a more serious inference. And this inference is much more like real life reasoning than truth tables.
In case it doesn't go without saying, there are a number of rules of inference that we need to add for quantifiers: '^I', '%E', and '%I' are three obvious additions. The details, of course, are in the tutorials. If you think you know them by now, then do a check by doing some exercises.
Now that we have a sure handle on the syntax (or "grammar") of our new language, we can press forward with semantical issues. The easiest way to do this is to relate PL to English.
Some Symbolization Basics
English has many ways to name an object. The easiest way is the proper name. But there are other types of English expressions used to refer to a unique individual. The following English expressions are typically used to signify a specific individual and so can be symbolized with names.
Names
Proper Nouns like "Paris", "Earth", "Mary",
"Oakland University", "Waiting for Godot", "tomorrow",
etc.
Kind Names like "oxygen", "Homo Sapiens"
(the species), "logic", etc.
Pronouns like: "this", "that", "he",
"she", "it", "who", "what", "there",
etc.
Definite Descriptions like: "the boy in the field",
"Smith's murderer", "the square root of 4", "my
son", etc.
Other tags like numerals or symbols, e.g., '(*)' as used
in this reference manual.
Words often symbolized with '%':
"some", "something", "someone", "somewhere",
"at least one", "there is", "a", "an",
"one"
(Warning: The last three of these fairly often mean something different and are not to be symbolized with '%'. For example, "a whale is a mammal" probably means that any whale is a mammal and needs to be symbolized with an '^'.)
It's good to keep some very basic examples in mind:
English  Symbols  Symbolization Key 

Jason knows someone.  (%y)Kjy  j: Jason, Kxy: x knows y 
I did something.  (%x)Dix  i: me, Dxy: x did y 
I see a person in my office.  (%x)Sixo  o: my office, Sxyz: x sees y in z 
We may do much the same thing with the universal quantifier.
Words often symbolized with '^':
"all", "every", "each", "whatever",
"whenever", "always", "any", "anyone"
(Warning: The last two of these fairly often mean something different and are not to be symbolized with '^'. For example, when I say "if anyone can do it, I can" this may be symbolized as '(%x)Dx>Di'.)
Here are some examples.
English  Symbols  Symbolization Key 

Jason knows everyone.  (^y)Kjy  j: Jason, Kxy: x knows y 
I can do anything.  (^x)Dix  i: me, Dxy: x can do y 
I need to see all students in my office.  (^x)Sixo  o: my office, Sxyz: x needs to see y in z 
It may be best to see languages (like English) as having two basic quantificational forms: the existential and the universal.
Existential Form
The first basic form of English is the following.
existential form: Some S are P.
where 'S' (the subject) and 'P' (the predicate of the expression) name groups or classes of individuals. (We will call these the subject class and the predicate class, respectively.)
So, for example, "Some students are freshmen" is of existential form. And it's pretty easy to see how it might be symbolized. Given a natural symbolization key, it could well be rendered as '(%x)(Sx&Fx)'. For such an easy example, we don't need to think of forms. But for more complicated cases it's best to fit the "mold".
Take this example,
(*) There are female logic students who are juniors set to graduate next year.
Ugh! But we can fit this messy example sentence into the existential form and then symbolize. The following steps will help as you consider such a sentence.
First, here's the mold we need to fit:
(Step I) Some S are P.
Begin by noting that (*) is about "female logic students". So, this is the subject class. And the predicate class, which (*) attributes to its subject, is "juniors who will graduate next year".
Now, we need to provide a hybrid English/PL symbolization of the form:
(Step II) (%x)(x is an S & x is a P)
For (*) this should be "(%x)(x is a female logic student & x is a junior set to graduate next year)".
Finally, we take this hybrid and restate it in pure PL, something of this form:
(Step III) (%x)(Sx & Px)
For (*) this means rewriting the subject phrase "x is a female logic student" and the predicate phrase "x is a junior set to graduate next year" into PL. Take this key:
universe of discourse:  People 
Fx:  x is female 
Jx:  x is a junior 
Sxy:  x is a student of subject y 
Gxy:  x will graduate in year y 
l:  logic 
n:  next year 
Then the subject phrase becomes: 'Fx&Sxl' and the predicate phrase becomes 'Jx&Gxn'. So, finally we have:
(*)'s Symbolization: (%x)[ (Fx&Sxl) & (Jx&Gxn) ]
Many different English sentences can likewise be seen to fit this form. You may want to review the tutorial for details. In all cases, you move from seeing the English as about a subject and predicate class to a PL symbolization of form (%x)(Sx & Px).
Universal Form
The second form is for sentences saying that all such-and-such are so-and-so. For example, "All Swedes are Europeans". Again we have a subject class and a predicate class:
universal form: All S are P.
Such a universal statement means that anything is such that if it's in the subject class, then it's also in the predicate class. So, our example might be translated as '(^x)(Sx>Ex)'.
In general, we have the same three step process as for existential form. First we need to see that the English sentence is of a form relating a subject to a predicate in the appropriate way:
(Step I) All S are P
Next, we move to the hybrid form:
(Step II) (^x)( x is an S > x is a P )
Finally we give the symbolization.
(Step III) (^x)(Sx>Px)
For another example of universal form, think about
(**) All female juniors will graduate next year.
This means:
(Step I) All female juniors are students who will graduate next year.
Notice that the subject is a conjunction. So, we have the hybrid form:
(Step II) (^x)( x is a female and a junior > x is a student who will graduate next year )
and finally the symbolization:
(Step III) (^x)( (Fx&Jx) > Gxn )
Categorical Logic
Categorical logic treats logical relationships between the types of things (categories) which satisfy one-place predicates. We can use PL to quickly get at the heart of this logic because categorical forms are built from existential and universal form sentences.
Categorical logic recognizes four main types of statement:
Type  English Form  PL Form

A-form:  All S are P  (^x)(Sx>Px)
E-form:  No S are P  (^x)(Sx>~Px) or ~(%x)(Sx&Px)
I-form:  Some S are P  (%x)(Sx&Px)
O-form:  Some S are not-P  (%x)(Sx&~Px)
Notice from this table that A-form and I-form are (respectively) just what we call "universal" and "existential" forms. The E-form is either universal with a negated consequent or a negated existential. And the O-form is existential with a negated second conjunct.
Now notice that A- and O-form sentences are "opposites": if one is true, then the other is false. The same relation of opposition holds between E- and I-forms. We call such pairs contradictories. This fact is represented in the traditional square of opposition:

[Square of opposition: the A-, E-, I-, and O-forms at the corners; pairs of sentences connected by diagonal lines are contradictory.]
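Both claims just made can be checked by brute force over a small universe. Here is a sketch of my own (the three-element universe is an arbitrary choice) verifying, for every interpretation of one-place predicates S and P, that the two E-form symbolizations agree and that A-form and O-form sentences have opposite truth values:

```python
from itertools import product

universe = range(3)
# All assignments of True/False to S (and to P) over a 3-element universe.
truth_assignments = list(product([True, False], repeat=3))

for S_vals, P_vals in product(truth_assignments, repeat=2):
    S = lambda x: S_vals[x]
    P = lambda x: P_vals[x]
    # E-form two ways: (^x)(Sx>~Px) and ~(%x)(Sx&Px).
    e_universal = all((not S(x)) or (not P(x)) for x in universe)
    e_neg_exist = not any(S(x) and P(x) for x in universe)
    assert e_universal == e_neg_exist
    # A-form (^x)(Sx>Px) and O-form (%x)(Sx&~Px) are contradictories.
    a_form = all((not S(x)) or P(x) for x in universe)
    o_form = any(S(x) and (not P(x)) for x in universe)
    assert a_form != o_form

print("checked all", len(truth_assignments) ** 2, "interpretations")
```

Of course, a finite check like this is no substitute for a proof, but it makes the table's claims concrete.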
Complications...
We should see an example of a more sophisticated use of our "1st order logic". Categorical logic is very useful but is nonetheless limited: it's restricted to logical relationships between one-place predicates. We can look at one example that goes beyond categorical logic. Remember:
(*) Both Jeremy and Karla passed the bar exam, but Jeremy did so before Karla.
We last symbolized this as
(Pj&Pk)&Bjk
But we may do better with quantifiers. The idea is that there is an exam, the bar exam, passed first by Jeremy then later by Karla.
(%x)(%y)(%z)[(Ex&Byz)&(Pjxy&Pkxz)]
Or in a logician's English: There is a bar exam x and times y and z with y coming before z such that Jeremy passed bar exam x at time y and Karla passed this exam x at later time z.
Notice that we used this interpretation:
Ex: x is the bar exam; Bxy: time x comes before time y; Pwxy: w passed x at time y. universe of discourse includes times, types of test (including the bar exam) and people.
Mathematical Logic
We can move on from PL to give a logic sufficient for mathematics with a couple of additions. First we need to add an identity relation to express '='. It's natural to pick 'I'. The only difference between 'I' and all other relations is that we also give rules of inference for how 'I' is used. Also we need to add functions! Ugh? Well, just think of functions as complex names. In English we might say "the youngest brother of person p". This is a function from people to other people. For the details, see chapter nine of the Logic Café.
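The idea that a function is a "complex name" can be sketched as follows (my own illustration; the youngest_brother table is invented purely for the example):

```python
def I(x, y):
    """Ixy: x and y are the very same object (the identity relation)."""
    return x == y

def youngest_brother(x):
    """A one-place function symbol: given a person, it names another person."""
    table = {"chris": "pat", "pat": "sam"}  # invented interpretation
    return table[x]

# 'youngest_brother("chris")' acts like a name, so it can flank 'I':
print(I(youngest_brother("chris"), "pat"))  # prints: True
```

The function term denotes an object just as a simple name does, which is why it can appear anywhere a name can.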
Inductive Logic and Probability
Probability plays a role in inductive logic that is analogous to the role played by possibility in deductive logic. For example, a valid deductive argument has premises which, granted as true, make it impossible for the conclusion to be false. Similarly, a strong inductive argument has premises which, granted as true, make it improbable that the conclusion is false.
The tutorials contain the briefest of introductions to the interpretation of a theory of probability. Here we only give the axiomatic theory.
We'll just take probability as applying to sentences of our symbolic language. For example, we'll write 'P[Wa]' to stand for "the probability that Agnes will attend law school". Or 'P[(%x)Wx]' for the probability that someone will attend law school. For our purposes, we'll restrict our new formal language to include PL and any PL sentence surrounded by 'P[...]'.
We will need 5 basic "axioms" of probability:
1. 0 ≤ P[X] ≤ 1
2. If X is a logical truth, then P[X] = 1.
3. If X and Y are logically equivalent, then P[X] = P[Y].
4. P[~X] = 1 - P[X]
5. P[XvY] = P[X] + P[Y] - P[X&Y]
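One way to picture the axioms (an illustration of my own, going beyond the purely axiomatic treatment here) is a finite model where each "possibility", a row of a truth table, gets a weight, and the probability of a sentence is the total weight of the possibilities making it true. Axioms 4 and 5 then check out numerically:

```python
# Weights over the four possibilities for two sentences X, Y; invented numbers.
weights = {('T', 'T'): 0.4, ('T', 'F'): 0.2, ('F', 'T'): 0.1, ('F', 'F'): 0.3}

def P(sentence):
    """sentence: a function from a possibility (x_val, y_val) to True/False."""
    return sum(w for poss, w in weights.items() if sentence(poss))

X = lambda p: p[0] == 'T'
Y = lambda p: p[1] == 'T'
notX = lambda p: not X(p)
XorY = lambda p: X(p) or Y(p)
XandY = lambda p: X(p) and Y(p)

print(round(P(X), 2))                                       # prints: 0.6
print(abs(P(notX) - (1 - P(X))) < 1e-9)                     # axiom 4: True
print(abs(P(XorY) - (P(X) + P(Y) - P(XandY))) < 1e-9)       # axiom 5: True
```

Since the weights are nonnegative and sum to 1, axiom 1 holds automatically in such a model.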
Conditional Probability and Independence
We often describe probabilities in less absolute terms. Instead of saying that your probability of passing this class is high, I say something like "you have a very high probability of passing given that you continue your good work".
That is, we put a condition on the probability assignment. We'll write the probability of X given Y as 'P[X|Y]' and define it this way:
Definition 1: P[X|Y] = P[X&Y] / P[Y] (provided P[Y] > 0)
Finally, consider:
Definition 2: X and Y are independent if P[X|Y] = P[X]
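A quick numeric sketch of the two definitions (the probabilities are invented for illustration):

```python
# Invented probabilities for two sentences X and Y.
P_X_and_Y = 0.12   # P[X&Y]
P_Y = 0.4          # P[Y]
P_X = 0.3          # P[X]

# Definition 1: the probability of X given Y.
P_X_given_Y = P_X_and_Y / P_Y

# Definition 2: X and Y are independent when conditioning on Y
# leaves the probability of X unchanged.
independent = abs(P_X_given_Y - P_X) < 1e-9

print(independent)  # prints: True
```

Here learning Y tells us nothing about X: P[X|Y] works out to the same value as P[X], so the two sentences are independent.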
Bayes' Theorem
Bayes' Theorem: P[X|Y] = (P[Y|X] · P[X]) / (P[Y|X] · P[X] + P[Y|~X] · P[~X])
Ugh? But this one takes just a little work to prove. And it's worth it. Think about X as a hypothesis and Y as the evidence. Then the left hand side of Bayes' theorem gives the probability of the hypothesis given the evidence. Just what we'd like to be able to know! And the right hand side provides the answer partly in terms of how hypotheses provide probabilities for experimental results (evidence). Something we might know. Here's a simplified version of the theorem (with 'H' for the hypothesis and 'E' for the evidence):
P[H|E] = (P[E|H] · P[H]) / P[E]
Thus we have the basis for an epistemology of science: Bayesian Epistemology.
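To see Bayes' theorem in action, here is a sketch with invented numbers (a rare hypothesis and a fairly reliable test for it), expanding P[E] by the denominator of the full theorem:

```python
# Invented probabilities for a hypothesis H and evidence E.
P_H = 0.01              # prior probability of the hypothesis
P_E_given_H = 0.9       # probability of the evidence if H is true
P_E_given_notH = 0.05   # probability of the evidence if H is false

# Total probability of the evidence: P[E|H]P[H] + P[E|~H]P[~H].
P_E = P_E_given_H * P_H + P_E_given_notH * (1 - P_H)

# Bayes' theorem: P[H|E] = P[E|H]P[H] / P[E].
P_H_given_E = P_E_given_H * P_H / P_E

print(round(P_H_given_E, 4))
```

Notice that even strong evidence leaves the hypothesis improbable here, because its prior probability was so low. That is exactly the kind of result Bayesian epistemology trades in.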